- New research trained AI models on answers given in a two-hour interview
- AI could replicate participants' responses with 85% accuracy
- Agents could be used in place of humans in future research studies
You might think your personality is unique, but all it takes is a two-hour interview for an AI model to create a digital replica of your attitudes and behaviors. That's according to a new paper published by researchers from Stanford and Google DeepMind.
What are simulation agents?

Simulation agents are described by the paper as generative AI models that can accurately simulate a person's behavior "across a range of social, political, or informational contexts".
In the study, 1,052 participants were asked to complete a two-hour interview covering a wide range of topics, from their personal life story to their views on contemporary social issues. Their responses were recorded, and the transcript was used to train a generative AI model – or "simulation agent" – for each individual.
To test how well these agents could mimic their human counterparts, both were asked to complete a set of tasks, including personality tests and games. Participants were then asked to replicate their own answers a fortnight later. Remarkably, the AI agents were able to simulate answers with 85% accuracy compared to the human participants.
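The comparison described above can be sketched in a few lines. This is an illustrative toy example, not the paper's actual code or data: the answer lists are hypothetical Likert-scale responses, and the normalization step (scoring the agent relative to the participant's own self-consistency two weeks later, so that human inconsistency isn't held against the agent) is one plausible reading of how such an accuracy figure can be computed.

```python
def match_rate(answers_a, answers_b):
    """Fraction of identical answers across the same fixed question set."""
    assert len(answers_a) == len(answers_b)
    return sum(a == b for a, b in zip(answers_a, answers_b)) / len(answers_a)

# Hypothetical 1-5 survey responses to the same ten items.
participant_day_0  = [5, 3, 4, 2, 5, 1, 4, 3, 2, 5]
participant_day_14 = [5, 3, 4, 2, 4, 1, 4, 3, 2, 5]  # humans drift a little too
agent              = [5, 3, 4, 2, 5, 1, 3, 2, 2, 5]  # the simulation agent's answers

# Raw agreement between the agent and the participant's original answers.
raw = match_rate(agent, participant_day_0)

# Normalized accuracy: agent agreement divided by the participant's own
# self-consistency a fortnight later.
self_consistency = match_rate(participant_day_0, participant_day_14)
normalized = raw / self_consistency

print(f"raw={raw:.0%}, self-consistency={self_consistency:.0%}, normalized={normalized:.0%}")
```

On this made-up data the agent matches 80% of the original answers, while the participant only matches 90% of their own, giving a normalized score of roughly 89% – the same style of figure as the study's 85%.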
What's more, the simulation agents were similarly effective when asked to predict personality traits across five social science experiments.
While your personality might seem like an intangible or unquantifiable thing, this research shows that it's possible to distill your value structure from a relatively small amount of data, by capturing qualitative responses to a fixed set of questions. Fed this data, AI models can convincingly imitate your personality – at least, in a controlled, test-based setting. And that could make deepfakes even more dangerous.
Double agent

The research was led by Joon Sung Park, a Stanford PhD student. The idea behind creating these simulation agents is to give social science researchers more freedom when conducting studies. By creating digital replicas which behave like the real people they're based on, scientists can run studies without the expense of bringing in thousands of human participants every time.
You could have a bunch of small 'yous' running around and actually making the decisions that you would have made.
Joon Sung Park, Stanford PhD student
They might also be able to run experiments that would be unethical to conduct with real human participants. Speaking to MIT Technology Review, John Horton, an associate professor of information technologies at the MIT Sloan School of Management, said the paper demonstrates a way you can "use real humans to generate personas which can then be used programmatically/in-simulation in ways you could not with real humans."
Whether study participants are morally comfortable with that is one thing. More concerning for many people will be the potential for simulation agents to become something more nefarious in the future. In that same MIT Technology Review story, Park predicted that one day "you could have a bunch of small 'yous' running around and actually making the decisions that you would have made."
For many, this will set dystopian alarm bells ringing. The idea of digital replicas opens up a realm of security, privacy, and identity-theft concerns. It doesn't take a stretch of the imagination to foresee a world where scammers – who are already using AI to mimic the voices of loved ones – could build personality deepfakes to imitate people online.
This is particularly concerning when you consider that the AI simulation agents in the study were created using just two hours of interview data. That's far less than the amount of data currently required by companies such as Tavus, which create digital twins based on a trove of user data.