The AI Doomers Are Licking Their Wounds

This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.

For a moment, the AI doomers had the world’s attention. ChatGPT’s release in 2022 felt like a shock wave: That computer programs could suddenly evince something like human intelligence suggested that other leaps might be just around the corner. Experts who had worried for years that AI could be used to develop bioweapons, or that further development of the technology might lead to the emergence of a hostile superintelligence, finally had an audience.

Yet it’s not clear that their pronouncements made a difference. Although politicians held plenty of hearings and floated numerous AI-related proposals over the past couple of years, development of the technology has largely continued without meaningful roadblocks. To those concerned about AI’s destructive potential, the risk remains; it’s just that not everybody is listening anymore. Did they miss their big moment?

In a recent article for The Atlantic, my colleague Ross Andersen spoke with two notable experts in this group: Helen Toner, who sat on OpenAI’s board when the company’s CEO, Sam Altman, was suddenly fired last year, and who resigned after his reinstatement; and Eliezer Yudkowsky, the co-founder of the Machine Intelligence Research Institute, which focuses on the existential risks posed by AI. Ross wanted to understand what they learned from their time in the spotlight.

“I’ve been following this group of people who are concerned about AI and existential risk for more than 10 years, and during the ChatGPT moment, it was surreal to see what had until then been a relatively small subculture suddenly rise to prominence,” Ross told me. “With that moment now over, I wanted to check in on them, and see what they had learned.”


Illustration by The Atlantic

AI Doomers Had Their Big Moment

By Ross Andersen

Helen Toner remembers when every person who worked in AI safety could fit onto a school bus. The year was 2016. Toner hadn’t yet joined OpenAI’s board and hadn’t yet played a crucial role in the (short-lived) firing of its CEO, Sam Altman. She was working at Open Philanthropy, a nonprofit associated with the effective-altruism movement, when she first connected with the small community of intellectuals who care about AI risk. “It was, like, 50 people,” she told me recently by phone. They were more of a sci-fi-adjacent subculture than a proper discipline.

But things were changing. The deep-learning revolution was drawing new converts to the cause.

Read the full article.


P.S.

This year’s Atlantic Festival is wrapping up today, and you can watch sessions via our YouTube channel. A quick recommendation from me: Atlantic CEO Nick Thompson speaks about a new study showing a surprising relationship between generative AI and conspiracy theories.

— Damon
