ChatGPT’s Unexpected Use of First Names Alarms Users and Sparks Privacy Concerns

A growing number of ChatGPT users are reporting an unexpected and unsettling behaviour: the AI chatbot has begun addressing them by their first names, despite never having been given that information.
The issue began to surface this week on various social media platforms, with users expressing alarm and confusion at ChatGPT’s abrupt change in tone. Even with the chatbot’s memory and personalisation settings disabled, numerous users say it has begun to incorporate their names into its internal reasoning and responses.
Simon Willison, a well-known software developer and AI commentator, described the experience as “creepy and unnecessary” in a widely shared post on X. Another developer, Nick Dobos, echoed the sentiment, stating he “hated it.”
Insanely creepy. I hate it. Been trying to figure out how to turn it off
— Nick Dobos (@NickADobos) April 17, 2025
Several users have questioned how ChatGPT is retrieving name information, and whether the behaviour is linked to OpenAI’s recent enhancements to the chatbot’s memory capabilities, which were designed to personalise its responses. Those updates have already raised concerns about how ChatGPT handles user data and obtains consent.
One user posted a screenshot showing the model’s internal reasoning with his name embedded in it, and asked whether OpenAI was placing user identifiers in system-level instructions. “Is it really using that in the custom prompt?” he asked.
It feels weird to see your own name in the model thoughts. Is there any reason to add that? Will it make it better or just make more errors as I did in my github repos? @OpenAI o4-mini-high, is it really using that in the custom prompt? pic.twitter.com/j1Vv7arBx4
— Debasish Pattanayak (@drdebmath) April 16, 2025
As of now, OpenAI has not issued any public response to these claims, nor has the company clarified whether this behaviour is intentional or the result of a system error.
The backlash has prompted broader discussions about AI boundaries and user comfort. A recent article by The Valens Clinic, a psychiatry practice in Dubai, states that using personal names in conversation can provoke strong psychological reactions.
“Using an individual’s name when addressing them directly is a powerful relationship-developing strategy,” the clinic noted. “However, undesirable or extravagant use can be looked at as fake and invasive.”
Critics argue that this move by ChatGPT feels more like an awkward attempt at anthropomorphizing the AI than a meaningful enhancement. “In the same way most people wouldn’t want their toaster calling them by name, they don’t want ChatGPT pretending it understands the emotional weight of names,” one post read.
Interestingly, some users reported that the issue had seemingly been reverted by Friday, with the chatbot defaulting back to more generic forms of address like “user.”