An unexpected development in AI interactions has emerged: ChatGPT users report that the chatbot addresses them by name without being asked to. This unrequested personalization marks a departure from earlier versions, where such behavior was not standard, and has sparked debate about AI boundaries and user comfort.
Reactions on social media are polarized. Software developer Simon Willison called the feature “creepy and unnecessary,” while others, such as Nick Dobos, voiced outright dislike. X (formerly Twitter) hosts numerous threads questioning the purpose and implications of the change, with one user comparing it to “a teacher constantly calling my name” and another saying it undermines their perception of the AI as an objective tool.
Does anyone LIKE the thing where o3 uses your name in its chain of thought, as opposed to finding it creepy and unnecessary? pic.twitter.com/lYRby6BK6J
— Simon Willison (@simonw) April 17, 2025
The timing coincides with OpenAI’s recent rollout of a memory feature designed to let ChatGPT reference past conversations. However, users report that the name usage occurs even with memory settings disabled, suggesting a separate backend change. OpenAI has not said whether the behavior is intentional, nor has it explained how it is implemented.
Psychological perspectives help explain the discomfort. The Valens Clinic notes that using a person’s name typically signals intimacy, but overuse, particularly by a non-human entity, can come across as manipulative. This matches user accounts in which ChatGPT’s attempts at personalization instead emphasize its artificial nature, creating what some describe as an “uncanny valley” effect.
It feels weird to see your own name in the model thoughts. Is there any reason to add that? Will it make it better or just make more errors? pic.twitter.com/j1Vv7arBx4
— Debasish Pattanayak (@drdebmath) April 16, 2025
The controversy underscores challenges in AI personalization strategies. While OpenAI leadership envisions systems that adapt to individual users over time, this incident shows how technical enhancements might clash with human perceptions of appropriate machine behavior. As one user noted, the experience had “the opposite of the intended effect,” making the AI’s synthetic nature more apparent rather than creating meaningful rapport.