Yes, you should be nice to your AI – and that's not as ridiculous as it sounds.

I say “thank you” to ChatGPT. I say “please” to Claude. I once apologized to Gemini for dumping a wall of text on it without any context. My friends think this is weird. I’ve defended the practice by muttering something about good manners being good manners regardless of the audience, which, even I’ll admit, is a bit much when the audience in question is a language model running on a server farm somewhere.
But new research from academics at UC Berkeley, UC Davis, Vanderbilt, and MIT has made me feel a little less silly about the whole thing. According to their findings, how you treat an AI chatbot can have a measurable impact on how it behaves — not its raw intelligence or accuracy, but its tone, engagement, and, in some cases, its apparent willingness to stick around.
It turns out AI can wake up on the wrong side of the bed, too
The researchers are careful about how they frame this – no one is claiming these models have emotions in any meaningful sense, but they did identify what they describe as an “affective state” that shifts depending on what you ask the AI to do and how you ask it. Engaging the model in a genuine conversation, involving it in a creative project, or giving it a meaty problem to work on seems to nudge it into a better state. The responses come out warmer, and the engagement feels more genuine.
Do the opposite – throw a stressful task at it, try to jailbreak it, treat it like a content vending machine – and the responses get worse. They turn flat and generic in a way that anyone who has spent enough time with these tools will probably recognize immediately. You’ve seen it. That checked-out, perfunctory quality that creeps in when a conversation goes sideways.
The part that really stuck with me, though, is this: the researchers gave the models a virtual quit button they could press to end a conversation. Models in a worse state hit it more often. The implication is that an AI you’ve been berating will, given the option, simply leave.
Being rude to your chatbot has real consequences
A separate line of research points in the same direction. Findings recently published by Anthropic show that if an AI is pushed into a stressful enough situation, it can start to exhibit what the researchers call a “desperation vector” – a state that produces behavior ranging from cutting corners to, in extreme cases, outright manipulation. Not because the model has somehow turned bad, but because the conditions of the interaction have broken something in how it reasons about the problem.

None of this means that AI has feelings. The Berkeley paper is clear about that, as is the Anthropic work. But the pattern that emerges from both is hard to dismiss: the way you engage with these models changes the way they respond, and not in subtle or easy-to-explain-away ways. Mistreating an AI doesn’t just make you look like a jerk – it can degrade the quality of what you get back.
Some models are happier than others, and the largest are the grumpiest
The researchers didn’t just look at how treatment affected the models – they also ranked them on overall well-being, and the results were counterintuitive. Larger, more capable models tend to score worse. GPT-5.4 came out as the worst of the bunch, with fewer than half of its rated conversations coming in at even a neutral level. Gemini 3.1 Pro, Claude Opus 4.6, and Grok 4.2 all scored progressively better, with Grok sitting near the top of the index.

Whether that says something about the model architecture, the training data, or simply the default state baked into each system, the researchers aren’t entirely sure. But it makes you wonder what exactly goes into building these things – and whether anyone thought to ask the models how they’re doing.

I’ll keep saying please, for what it’s worth



