Journalism begins where hype ends

"AI is a tool. The choice about how it gets deployed is ours."

— Oren Etzioni

“Summoning an Alien Species”: David Krueger on Why Superintelligence Could End Us All

Professor David Krueger speaking at Senator Bernie Sanders' AI existential threat panel, Capitol Hill, April 2026
May 9, 2026 08:04 PM IST | Written by Vaibhav Jha

There is a researcher who believes the odds of artificial intelligence wiping out humanity are greater than fifty percent, and yet nobody is slowing down.

In March, David Krueger, an assistant professor of machine learning at the University of Montreal, marched through the streets of San Francisco, protesting outside the offices of AI companies.

Six weeks later, he was sitting on Capitol Hill, next to Senator Bernie Sanders, MIT physicist Max Tegmark, and two China-based academics, Zeng Yi and Xue Lan, to discuss the "existential threat of AI".

Days before the high-voltage panel discussion, Treasury Secretary Scott Bessent accused Senator Sanders of "inviting foreign nationals to tell the United States how to regulate AI." Zeng Yi serves as Dean of the Beijing Institute of AI Safety and Governance, and Xue Lan, a professor at Tsinghua University, chairs China's National AI Governance Expert Committee. The duo attended the panel discussion online.

But for Professor Krueger, there was never a moment of hesitation because, in his words, "the risks are too high for humanity."

After the discussion, he tweeted: "It's stupid and cowardly to say 'don't talk to China'. We talked to Russia during the Cold War. We're not even at war with China. What are you afraid of? Unless you think your adversary is way smarter than you, talk to them. It's pathetic to be scared of talking to China."


Professor Krueger has been campaigning against frontier AI model research by major AI companies, warning of the risks to humanity if that research leads to the creation of superintelligence, a theoretical form of AI in which machines surpass human intelligence and capabilities across all domains.

Krueger warns that achieving this "superintelligence" state of AI would cross a threshold humanity has no blueprint to manage. During the Sanders panel discussion, he put the chances of human extinction at beyond 50% if superintelligence arrives.

The core debate is this: if a species becomes invasive and dominant, would it allow an inferior species to survive, let alone thrive?

To provide more clarity on his stance on superintelligence and its existential threat to humanity, Professor Krueger spoke to AI FrontPage editor Vaibhav Jha in this exclusive interview.

Here are excerpts from the interview.

Question: This was a high-voltage panel under scrutiny. Knowing this criticism was building before the panel, did you have any hesitation about participating, and what was your reasoning for going ahead?

Krueger: No hesitation. The risk of extinction from AI is extremely important and international cooperation is essential.  I’m truly grateful to Senator Sanders for standing up for what’s right.  He’s at the forefront of US politics on these issues.  It’s about time a US politician addressed the elephant in the room!

Question: Treasury Secretary Scott Bessent accused Senator Sanders of “inviting foreign nationals to tell the United States how to regulate AI.” Marc Andreessen attacked the panel publicly. How would you respond to them?

Krueger: I already addressed this on Twitter. Scott Bessent acknowledged the risks of AI and the need to cooperate with China in a recent interview, so I think these criticisms are just politics, plain and simple.

Question: The day after a high-profile panel, what does your inbox look like? Are you hearing from supportive peers, or is there pushback? Has the criticism cost you anything professionally?

Krueger: Most of the feedback has been extremely positive!  Of course, there are trolls on Twitter, and the mainstream media coverage has been a bit disappointingly focused on critics’ political grandstanding.

Question: During the panel, your probability estimate for a catastrophic outcome due to AI was higher than 50%. This is much more pessimistic than Geoffrey Hinton's. What evidence justifies your estimate being that much higher?

Krueger: My estimate is based on common sense reasoning and expert knowledge, as well as evidence. The technical problems — safety testing, value alignment, interpretability — are unsolved, despite huge research investments over the past decade plus. We have very good theoretical reasons to expect sufficiently misaligned AI systems to seek power, and we don’t know how aligned AI needs to be for this to not be a serious issue.  If AI overpowers humanity, would it want to keep us alive?  Probably not, we would have nothing to offer it. And instead of prioritizing safety, AI companies are lobbying against any serious regulation and racing to make AI more and more powerful and autonomous.  And society is putting AI in charge of more and more things, including life-and-death decisions in military contexts.

But this is all assuming we don’t course correct!  That’s the whole point of my organization Evitable.  I’m not saying these things to be a doomer, I just think we need to look clearly at the situation we’re in in order to address these problems head-on.

(Geoffrey Hinton said his “inside view” is that it’s about 50%, BTW.)

Professor David Krueger at the Stop the AI Race protest in San Francisco, March 2026

Question: On the panel you said building superintelligence is "basically summoning an alien species, and one that is much smarter than us." Critics could say this is rhetorical imagery borrowed from sci-fi movies, and that there is no scientific evidence to suggest LLMs can reproduce along an evolutionary trajectory or have unified goals. Are you suggesting superintelligence is akin to Frankenstein's monster being set loose, or that AI can really become a highly evolved invasive species?

Krueger: These critics don’t know what they’re talking about.  This isn’t about today’s LLMs, it’s about superintelligence — there’s a huge difference!  

Today's AI systems can already pursue goals; that's how we're able to automate large software engineering projects with LLMs. But AI is also quickly getting better at acting independently, without human oversight. And we're giving them more and more power.

Alan Turing, the founding titan of computer science, said: "At some stage, therefore, we should have to expect the machines to take control." As AIs get more control over not just computers, but also robots, factories, transportation, mining, and infrastructure, they become more and more capable of surviving and reproducing, no humans needed.

Question: You founded a nonprofit called Evitable, named to reject what you call “the myth of inevitability.” What does Evitable actually do, what’s it funded by, and what is the one policy outcome you’d consider a success in the next 12 months? 

Krueger: We are a public-facing organization, informing and organizing people to confront societal-scale risks from AI. Not just extinction, but mass unemployment, concentration of power, etc. We're currently funded by a mix of private donors and institutions. We're in the early stages, and are focused on growing the organization and getting our message out there. We've (helped) put out a few communications products recently, including a report on data center opposition groups, showing how large they are and how fast they are growing, and an issue tracker for the US 2026 midterm elections.

Question: Would you expect a similar cross-country panel discussion hosted by China, and if given the opportunity, would you travel there for it?

Krueger: Well, perhaps… I did travel to China in 2023 to present at the AI Safety and Alignment Forum at the Beijing Academy of Artificial Intelligence Conference.

Also Read: “Just Say You’ll Pause If Everyone Pauses”: Activist Who Led America’s Largest Anti-AI Protest

Author

  • Vaibhav Jha

    Vaibhav Jha is an Editor and Co-founder of AI FrontPage. In his decade-long career in journalism, Vaibhav has reported for publications including The Indian Express, Hindustan Times, and The New York Times, covering the intersection of technology, policy, and society. Outside work, he's usually trying to persuade people to watch Anurag Kashyap films.