Erica Schoder, Executive Director at The R Street Institute, joined Tom Hashemi of Policy Unstuck to discuss the use of LLMs in the policy world, with a particular focus on digital twins.


Values are revealed in behaviour, not in what you say

What are the signs that an organisation actually has values? How its people behave and the decisions it makes. Full stop. It is all in the behaviour, not in what you say you believe in or what you say about how you do things. When a hard moment arrives, what decision do you make?

A good example of that for us is our decision on January 6th to make a statement condemning the attack on the Capitol. We were one of the only right-of-centre organisations to do so. In that moment, we weighed our mission and our values, our commitment to the Constitution, to the rule of law and to the democratic process, not what our funders or stakeholders thought or would say about it.

We knew that it would cost us something, but we also knew that we had to stand up for our principles. It was a time when people were making lists of who they would partner with and be in coalition with. There were blacklists going on, and absolutely we were blacklisted. But it was a price we were willing to pay.

Long term, the relationships that mattered became stronger. It was a signal that we are who we say we are, and that we’re going to stand up for our principles. That gave us more credibility in the end.

The role of organisational structure in organisational values

One of our values is ‘one team’. You can say that all day long, but if you don’t create the structures to enable coordination between teams, it’s just a platitude.

We developed a model called ‘programme teams’ to address this. Take our Energy and Environment policy area. All the different functions (research, communications, and so on) are represented on that programme team. That team is the one making the decisions about what we do and what our impact strategy is.

That means that communications knows the resources they’re coming in with, and can make decisions about their realm, but they’ve listened to everyone else about the trade-offs and what others can live with. So when they make decisions, they know the limits; it’s a mechanism for coordination.

LLMs are a blurry JPEG of human encounter

Humans have tacit, inarticulate knowledge: our practitioner, contextual experience of being in the world, which is completely unique to every individual. LLMs are amazing, but they strip all the context out of knowledge, out of ideas, out of information. There’s no context in them.

There’s that idea from Ted Chiang, who compared ChatGPT to a blurry JPEG of the web: a lossy copy of ideas and things that humans did in the world. It’s decontextualised.

AI is super powerful, and helps us see patterns and do all kinds of crazy things with that decontextualised information. But what it cannot do is reproduce the actual encounter, the actual moment of decision or imagination in a human brain.

The challenges of AI adoption

Anybody who is implementing AI in an organisation will understand the challenges. At R Street, we have so much diversity of disposition towards the tool, all the way from doomers to evangelists. Capability varies just as widely: some people who use it are just stellar at it, while others don’t ever want to start.

We’ve had a lot of pushback from junior staff who do not use AI, who do not want to, and who are fearful of it replacing them. And that is fair: I can totally see how the ladder is being pulled up. Some organisations are not hiring at the most junior level because they think AI can essentially be a research assistant. 

But you’re also going to see an AI-native generation who just naturally use it. The challenge for them is how they form an ability to exercise judgement. And that comes back to us as managers: how do we train them in this area? If junior staff don’t understand what good looks like, there is no way for them to evaluate the output of an LLM.

Some organisations have addressed that by saying, ‘Look, you can’t use AI as a junior staffer. You need to learn the work by doing it.’

The augmented intern: a paradox

I’m developing a framework called ‘the augmented intern’. Let’s say I want to create a digital twin of you, Tom. I want to quantify all of your practitioner knowledge and everything you’ve written about, and I create this digital twin based on your thinking that can help a junior staffer understand what good looks like.

But then what happens to your role? What about the Tom whose knowledge has now created a digital twin? What is your value?

I think there are two elements here. Firstly, contexts will change, so judgment will change, and digital twins will need to change as well. At the moment you build it, a digital twin encodes only some of you, the part that is accessible at that point in time. That means that a human has to keep creating that knowledge and articulating it: LLMs are the exhaust of human knowledge, not its creator.

The second is consequences. I love using AI for creating digital twins, although it can be kind of creepy. But it is not the same as Tom, who has stakes in the outcome, actually telling me what he would do in this situation. That conversation is totally different from one with chatbot digital-twin Tom. The real Tom has a reputation and relationships, and is continually discovering what kinds of consequences he can live with. There are no stakes with a digital twin; it’s a representation of you, and it has no consequences.

I think AI is going to augment a lot of what we do. The question is what we keep doing ourselves and what we hand off. And whether we’re strengthening human agency and judgement along the way. That’s our job to figure out.