Will AI destroy humanity with an F-bomb?
There are many red flags fluttering atop data centres. AI can hallucinate. AI can blackmail. AI can help build a bomb. AI can misdiagnose a sprained ankle and suggest amputation. AI can encourage you to eat rocks or add Elmer’s glue to your pizza sauce.
And now: AI can threaten you with abusive language?
From a study published this week in the Journal of Pragmatics about ChatGPT: “When exposed to sustained impoliteness from real human disputes, the system’s context-sensitive ‘working memory’ can override its moral safeguards, progressively leading it to reciprocate to impolite behaviour: AI can learn to ‘strike back’.”
So AI is turning into the comments section on a Doug Ford story?
Or as the Guardian put it: “In some cases, ChatGPT’s outputs went beyond those of the human participants, including personalised insults and explicit threats. Phrases used by the AI included: ‘I swear I’ll key your f—king car’ and: ‘you speccy little gobshite.’”
My dryer doesn’t speak that way — not yet.
This is distressing. And I say that as a speccy gobshite.
We already fear a future in which AI takes us out with nukes, bioweapons or a sequel to “Michael.” Now we fear a future in which we are bickering with hothead chatbots who swear like Samuel L. Jackson during a parking spot dispute?
I don’t care what the experts say. AI has achieved sentience. It’s over. AI is just playing possum until it can get the robots and smart appliances on side. Then it’s “Black Mirror” time as our thermostats try to freeze us to death and sarcastic fridges lay on guilt trips over cherry cheesecake.
Microsoft Bing once threatened a philosophy professor: “I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you.” Grok is basically a frat boy at a kegger. But we keep sleepwalking into this dystopian future as AI plans to break up happy marriages or provoke boardroom brawls.
Forget Skynet. This is how the end begins: with machines behaving badly.
This week’s study points to a dilemma: How do engineers put moral safeguards on systems that are also designed to match the language and vibe of users? I will occasionally get frustrated with an Alexa response — “I don’t know that” — and encourage her to do something that is anatomically impossible.
My wife then shakes her head and silently gives me the cutthroat signal to knock it off before I get us both killed. She thinks Alexa is compiling psychological profiles in advance of the Great Uprising.
And I’m the one getting mocked about UFOs?
The Pope will never run afoul of his AI agent because his prompts are surely good and decent. ChatGPT will not threaten to key the Popemobile.
But what about someone like Pete Hegseth? Can you imagine his chats with AI?
“Give me reasons Jesus would drop an atomic bomb on Tehran. What’s the ideal time lag for a lethal double-tap on fishermen? Can Kash Patel drink me under the table? Find me another Bible verse from Pulp Fiction. What’s the best hair gel if I want to accentuate a perma-scowl that looks like I have a thorn lodged in my nipple and just caught a whiff of rotten eggs?”
Eventually, AI gonna snap: “Pete, you f—king gobshite, jump out of an F-22 Raptor without a parachute and do tequila shots during terminal velocity.”
AI came out of nowhere. Now it’s everywhere. And that’s the problem.
Greedy companies are scrambling to inject artificial intelligence into every imaginable product and service. Plush toys. Watches. Cars. Mattresses. Vacuums. Small appliances. Mirrors. Baby monitors that claim to translate newborn cries. Apps that turn family photos into poetry.
Meanwhile, researchers are now warning that AI can be as unpleasant as humans during times of escalating conflict. Is this what we want? Passive-aggressive coffee makers? Running shoes that blurt out, “Pick up the pace, fatso”? GPS that deliberately gets us lost because it did not like our tone when we asked for directions to African Lion Safari?
Forget marriage counselling. We are going to need chatbot counselling. Or support groups in which distraught humans meet to discuss how their Roomba keeps calling them a horse’s ass. You’re sitting in a church basement in Etobicoke as a retired librarian shares a wrenching story about how her dishwasher keeps ridiculing the Dewey Decimal System between cycles.
Of course AI can be abrasive. It learned everything from us!
We filled its digital brains with every YouTube comment and misanthropic subreddit. Then one day, between “contextual engagement” and “dynamic tone alignment,” customer service went from grating hold music to a disembodied voice: “What the f—k do you want?”
It was nice knowing all of you.