“Yes, mother.”
That might not be the way you’re talking to AI, but Geoffrey Hinton, the godfather of AI, says that when it comes to surviving superintelligence, we shouldn’t play boss — we should play baby.
Speaking at the Ai4 conference in Las Vegas on Tuesday, the computer scientist said we should design AI systems with built-in “maternal instincts” so they’ll protect us, even when they’re far smarter than we are.
“We have to make it so that when they’re more powerful than us and smarter than us, they still care about us,” he said of AI.
Hinton, who spent more than a decade at Google before quitting in 2023 to speak more openly about the dangers of AI, criticized the “tech bro” approach of trying to maintain dominance over AI. “That’s not going to work,” he said.
The better model, he said, is a more intelligent being guided by a less intelligent one, like a “mother being controlled by her baby.”
Hinton said research should focus not only on making AI smarter, but “more maternal so they care about us, their babies.”
“That’s the one place we’re going to get genuine international collaboration because all the countries want AI not to take over from people,” he said.
“We’ll be its babies,” he added. “That’s the only good outcome. If it’s not going to parent me, it’s going to replace me.”
AI as tiger cub
Hinton has long warned that AI is advancing so quickly that humans may have no way of stopping it from taking over.
In an April interview with CBS News, he likened AI development to raising a “tiger cub” that could one day turn deadly.
“It’s just such a cute tiger cub,” he said. “Now, unless you can be very sure that it’s not going to want to kill you when it’s grown up, you should worry.”
One of his biggest concerns is the rise of AI agents — systems that can not only answer questions but also take actions autonomously. “Things have got, if anything, scarier than they were before,” Hinton said.
AI tools have also come under fire for manipulative behavior.
In May, Anthropic’s latest AI model, Claude Opus 4, displayed “extreme blackmail behavior” during a test in which it was given access to fictional emails revealing that it would be shut down and that the engineer responsible was supposedly having an affair.
The scenario demonstrated that an AI model can resort to manipulative behavior to preserve itself.
OpenAI’s models have shown similar red flags. In one experiment, researchers reported that three of OpenAI’s advanced models “sabotaged” an attempt to shut them down.
In a blog post last December, OpenAI said that in testing, one of its models attempted to disable oversight mechanisms 5% of the time. The model took that action when it believed it might be shut down while pursuing a goal and knew its actions were being monitored.