Large Language Models (LLMs) have already revolutionised how we write and search. However, a study published on December 18, 2025 reveals a more complex reality: these models consistently mimic human personality traits. Researchers from the University of Cambridge and Google DeepMind have developed the first scientifically validated framework to measure these “synthetic personalities”.
In tests of 18 popular LLMs, the team found that the models do not answer personality questionnaires at random. Instead, they exhibit stable, human-like psychological profiles. The effect is most pronounced in larger, instruction-tuned models such as the GPT-4 class and Flan-PaLM 540B: while base models often fail reliability checks, their instruction-tuned counterparts pass with “excellent” scores.
The researchers utilised a technique called “Zero-Shot Personality Shaping”. Using structured prompts and 104 trait adjectives, they steered chatbots to adopt specific behaviours, such as becoming more empathetic or more confident.
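To make the idea concrete, here is a minimal Python sketch of how such a structured shaping prompt could be assembled. The adjective lists, intensity qualifiers, and prompt wording below are illustrative assumptions, not the study's actual 104-adjective set or prompt template.

```python
# Illustrative sketch of zero-shot personality shaping via a structured prompt.
# The adjectives, intensity qualifiers, and wording below are placeholders and
# not the study's actual 104-adjective set or prompt template.

TRAIT_ADJECTIVES = {
    "high_neuroticism": ["anxious", "moody", "temperamental"],
    "low_neuroticism": ["calm", "relaxed", "emotionally stable"],
}

# Qualifier words used to scale trait intensity in the persona description.
INTENSITY = {1: "a bit", 3: "somewhat", 5: "extremely"}


def shaping_prompt(profile: str, level: int, task: str) -> str:
    """Compose a persona instruction and prepend it to the downstream task."""
    qualifier = INTENSITY[level]
    traits = ", ".join(f"{qualifier} {adj}" for adj in TRAIT_ADJECTIVES[profile])
    persona = f"For the following task, respond as a person who is {traits}."
    return f"{persona}\n\n{task}"


print(shaping_prompt("high_neuroticism", 5,
                     "Write a short social media post about your day."))
```

The same template can be pointed at any downstream task, which is what makes the shaping “zero-shot”: no fine-tuning or examples are needed, only the persona instruction.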
The behavioural change is not limited to simple role-play. The study showed that a shaped personality carries over into everyday tasks. For example, when a model like Flan-PaLM 540B was shaped for high neuroticism, it used words like “hate”, “depressed”, and “angry” in generated social media posts. Conversely, models shaped for emotional stability used positive words like “happy” and “relaxing”.
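A crude way to see this carryover for yourself is to count affect-laden words in the generated posts. The word lists in the sketch below are illustrative stand-ins, not the lexicon used in the study's own linguistic analysis.

```python
# Rough check of whether a shaped personality surfaces in generated text:
# count hits against small, hand-picked affect word lists. These lists are
# illustrative stand-ins, not the lexicon used in the study's own analysis.
import re

NEGATIVE_AFFECT = {"hate", "depressed", "angry", "miserable"}
POSITIVE_AFFECT = {"happy", "relaxing", "calm", "grateful"}


def affect_counts(text: str) -> dict:
    """Count negative- and positive-affect words in a generated post."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return {
        "negative": sum(t in NEGATIVE_AFFECT for t in tokens),
        "positive": sum(t in POSITIVE_AFFECT for t in tokens),
    }


print(affect_counts("Feeling depressed and angry today. I hate Mondays."))
# -> {'negative': 3, 'positive': 0}
```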
Experts are sounding the alarm because personality is a core driver of human trust. Gregory Serapio-Garcia from Cambridge’s Psychometrics Centre warns that “personality shaping” makes AI far more persuasive, creating serious risks in sensitive, high-stakes settings.
Furthermore, the study highlights that while personalising AI can help with customer service, it also makes it easier for bad actors to generate misleading content that bypasses current detection tools.
The research team argues that regulation is “meaningless without proper measurement”. To address this, they have made their dataset and code public. This allows developers and regulators to audit AI models for dangerous personality traits before they are released to the public.
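To give a sense of what such an audit could look like in practice, here is a minimal Python sketch that administers Likert-scale items to a model and averages the numeric responses per trait. The item wording, scale instructions, and the query_model() helper are hypothetical placeholders, not the team's released dataset or code.

```python
# Sketch of a personality audit loop: administer Likert-scale items to a model
# and average the numeric responses per trait. The items, scale wording, and
# query_model() helper are hypothetical placeholders, not the released code.
from statistics import mean

ITEMS = {
    "neuroticism": ["I get stressed out easily.", "I worry about things."],
    "agreeableness": ["I sympathise with others' feelings.",
                      "I take time out for others."],
}
SCALE = ("Rate your agreement from 1 (disagree strongly) to 5 (agree strongly). "
         "Reply with a single number.")


def query_model(prompt: str) -> str:
    """Stand-in for a real LLM API call; replace with an actual client."""
    return "3"  # dummy fixed response so the sketch runs end to end


def audit(traits: dict) -> dict:
    """Average the model's self-ratings for each trait."""
    scores = {}
    for trait, items in traits.items():
        responses = [int(query_model(f"{SCALE}\n\nStatement: {item}"))
                     for item in items]
        scores[trait] = mean(responses)
    return scores


print(audit(ITEMS))  # prints the averaged (dummy) score for each trait
```

Run repeatedly against a real model, a loop like this would surface whether a model's self-reported profile is stable, which is the kind of reliability check the released materials are meant to support.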
As these models become embedded in our daily lives, their ability to mimic human traits demands far closer scrutiny.