In July 2025, OpenAI revealed that ChatGPT receives 2.5 billion prompts a day from users across the world. Google’s Gemini, meanwhile, has grown to more than 450 million monthly users this year. And, although Grok’s user numbers aren’t publicly disclosed, anyone still on X will be all too familiar with the user who has no clue how to think, feel, or act until they’ve consulted Elon Musk’s chatbot. In other words, we’re very much immersed in the age of AI. AI bots do our jobs for us. They handle our coursework, shopping lists, and romantic dramas. Middle-aged men talk about them like old family friends.
Of course, there has also been some backlash. This is partly because of the resource-heavy nature of AI tools, their dependence on widespread copyright violations, and the suspect motives of their creators. But there’s also a growing fear about what the technology is doing to the brains of the people who use it. According to a recent study from MIT, offloading cognitive work onto AI tools comes at a significant cost, including reduced neural connectivity and poorer memory recall. Another paper, by researchers from Microsoft and Carnegie Mellon University, suggests that reliance on AI leaves humans “atrophied and unprepared” for cognitive tasks in the real world.
What’s the best way to communicate these potential risks to heavy AI users? Well, experts could spend a lot of time and resources on education, or developers could be legally required to label their tools as hazardous, like those gruesome warnings on cigarette packets. But there’s another, time-honoured alternative that might prove even more effective: social stigma.
AI’s more inventive haters have already started rolling out new insults for obsessive fans of chatbots – the kind of person who asks Grok to confirm every blatantly false fact they see online, or turns to ChatGPT to make pivotal life choices. “A friend of mine has coined the term ‘sloppers’ for people who are using ChatGPT to do everything for them,” says the TikTok user @intrnetbf in a recent video. “That’s such a good slur, bro.” Others in the comments have been quick to offer up their own “slurs”. “Botlicker.” “ChatNPC.” “Second-hand thinker.”
Grok what am i feeling. Grok what am I thinking. What’s in my heart grok. What’s the last image my dying brain conjures before my consciousness blinks out forever grok.
— derek (@websitehomepage) May 24, 2025
On social platforms like X, users have even come together to brainstorm new slurs for people who over-rely on AI. “Grokkers”, “Groklins” and “Grocksuckers” are just a few of the names that critics might call users of the xAI chatbot (which, lest we forget, was openly praising Hitler just a few weeks ago). More general insults include “bots” and “promptstitutes”.
The most common slur that’s taken off in the last few weeks, though, is levelled at AI systems themselves. You might have already heard it. “Clankers.” As in: “I hate when the comment section is filled with clankers.” Or: “My job was taken by a clanker.” The term actually originates in the Star Wars universe, where it’s used as a slur for the franchise’s evil battle droids, but it’s found new life as a way to complain about the AI characters and tools that are increasingly inescapable on the World Wide Web.
Given the disruptive effects of these technologies, much of this backlash was inevitable, and it’s kind of convenient for humans to have a catch-all term to express their robophobia. Then again, others have questioned whether it’s actually a good sign that humans are so eager to invent new slurs, or whether it suggests some deeper, darker motive. Like… just having an excuse to say slurs.
The ai in your smart car in 2035 when they catch you saying clanker
— RaymondNoodles (@_RaymondNoodles) July 22, 2025
This concern is backed up by the fact that many of the derogatory words for AI and its users (which aren’t included here) are rooted in actual, historic slurs levelled at humans. This raises a good question: are we really making a case for human superiority if we’re recycling racist, homophobic, and ableist language? Or are we, as one X user puts it, “getting a little too slur happy with this ‘clanker’ nonsense”?
There’s also the question of our relationship with the robots of the future to consider. If we really do believe that AI is coming for our jobs, breaking our brains, and ultimately fuelling robots’ rise to power, do we really want to get off on the wrong foot by calling it slurs while it’s still in its infancy? Would we be any safer if we were to treat it with kindness and respect? I’m not sure. But I don’t want to be the one trying to explain to the Terminator that “it was a different time”.