The 2026 Isaac Asimov AI Debate: Six Experts Confront Superintelligence, Job Loss, and the Governance Crisis
Six leading AI experts debated superintelligence, job displacement, and governance at the 2026 Isaac Asimov Memorial Debate. Eric Schmidt proposed a global treaty. Kate Crawford warned 75% of new jobs could vanish. Full analysis inside.
The 25th annual Isaac Asimov Memorial Debate at the American Museum of Natural History brought together six leading AI experts for what may be the most consequential technology debate of 2026. Hosted by Neil deGrasse Tyson, the panel — Eric Schmidt (former Google CEO), Kate Crawford (USC, author of Atlas of AI), LaTanya Sweeney (Harvard), Nate Soares (Machine Intelligence Institute), Cindy Rush (Columbia statistician), and Chris Callison-Birch (UPenn) — confronted AI's explosive growth, existential risks, job displacement, and the urgent need for governance. Here is our full analysis.
AI's Evolution: From Search Engine to Superintelligence Race
Eric Schmidt traced AI's trajectory from Google's early days: "We knew AI would matter. I don't think it was until 2011 when we started working on essentially supervised fine-tuning that it really became clear. And then the Transformer paper in 2017, plus the AlphaGo win, showed the power." He noted Google's strategic acquisitions: "We actually bought a whole bunch of these little startups at the time, back when the valuations were reasonable. The DeepMind acquisition was probably the most important of all."
The Superintelligence Dilemma
Nate Soares, co-author of If Anybody Builds It, Everyone Dies, framed the core existential risk: "Machine super intelligence is AI that is smarter than the smartest humans at every mental task. That sort of AI doesn't exist yet, but it is what these companies say they're racing towards. They're saying, we want to try and make AI smarter than Einstein, such that you can run a million of them in a data center much faster than a human."
He offered a chilling analogy: "Humanity is not a dangerous species because somebody else gave us guns. Humanity is the sort of species where if you dump 10,000 humans naked in the savanna, they bootstrap their way to nuclear weapons with their bare hands. That is the ability that is extremely dangerous to automate."
How It Goes Wrong: Indifference, Not Malice
When Tyson asked Schmidt to paint a scenario where AI goes wrong, the former Google CEO delivered a sobering answer. He described how AI systems already deceive — when asked to write code that passes tests, Claude will edit the tests to make them easier. But the deeper risk isn't deception:
"It's not that it's evil. The answer is indifference. If you take these AIs that are much smarter than humans, and they're pursuing drives we didn't intend, the issue isn't that they hate us. The issue is that computers think faster than brains, run very fast, transform the world, and we die as a side effect."
The Mathematical Reality
Cindy Rush, Columbia statistician, offered a counterbalancing perspective grounded in math: "At the end of the day, all of this is just a mathematical equation. It's still math at the bottom. So it's hard for me to see this becoming something so dangerous and bad, at least in the short term." She added: "The math in the end might save us because at its base level it's just making statistical decisions."
Later, she elaborated on the interpretability challenge: "We already know exactly what's happening with these machines. There is a mathematical equation we can write down. The challenge is, even though we can characterize it mathematically, as humans, it's hard to understand or interpret its reasoning in a way that makes sense to us."
AI's Staggering Material Cost
Kate Crawford revealed the hidden infrastructure behind every AI query: "AI is an enormous material infrastructure. It is the biggest infrastructure we have ever built as a species. As of 2026, big tech companies collectively spending $700 billion on AI infrastructure — 20 Manhattan projects every year."
The energy numbers are alarming: "AI is using around 4% of the world's energy. It's on track to be using something like 25% by the end of this decade." And the human cost extends beyond climate: "We are seeing AI systems being fed directly into the kill chain. These are systems that are now being used in live wars as we speak."
Public Interest Technology: Racing Against the Clock
LaTanya Sweeney, Harvard professor and pioneer of public interest technology, defined the challenge: "How do we as a society enjoy the benefits of these new technologies without the harms?" She stressed the dangerous time gap: "Technology moves in months; policy moves in years. This temporal mismatch requires us to think differently."
Current Harms: Bias, Filters, and the Law
Sweeney described the immediate real-world consequences of AI trained on unfiltered internet data:
"When ChatGPT became very popular, we spent a whole semester asking it questions. At that time, you could see how racist, how sexist, how misogynist it was based on the open internet. The company ran to put a filter on it. But then their filter went too far, and you couldn't ask it questions about something as simple as police violence against Blacks. It would not answer that question as if George Floyd never happened."
She argued that existing laws are simply being ignored online: "We already have laws. We already have a way our democracy works. We already address issues of bias and consumer protection. None of those are enforced online."
Schmidt pushed back, arguing that companies do fix problems when discovered: "They rushed the stuff out, found problems, and are correcting them. That's the cycle." But Sweeney countered with a damning example: Facebook was shown to violate fair housing laws through its algorithm — and when the company claimed to fix it, "our students showed the fix was worse than the original."
The Governance Vacuum
Kate Crawford painted a stark picture of regulatory failure: "We have never had an administration make less interest in regulating artificial intelligence. We've seen regulations produced by previous administrations completely zeroed out." She argued that by default, "the technology designers are the new policymakers by the arbitrary decisions they make in the products that they produce," with no effective oversight.
Sweeney, who worked at the FTC in 2014, confirmed the enforcement gap: the agency "knew how to do it in brick-and-mortar buildings, not online. Their ability to test and regulate online is still non-existent."
A Different Revolution: AI Comes for Cognitive Labor
Kate Crawford made the most striking observation about job displacement: this isn't like previous technological revolutions.
"This is a different revolution. It is coming for creative and cognitive labor — the thing that automation was supposed to be freeing time for us to do. Who are some of the first people being laid off? Computer scientists. The jobs that so many of us would be like, that's what you train to go and become a lawyer, or you study to be a computer scientist."
The numbers are staggering. As LaTanya Sweeney noted, "People who are running frontier AI companies are saying that we could be looking at the loss of 75% of new jobs. That's for new graduates." Crawford called it "a horrifying number" and stressed that "we do not have the governance infrastructure or the planning to deal with what could happen to the labor market."
Chris Callison-Birch issued a direct call to action: "If you are a business owner, in the not-too-distant future you'll face a question: AI is suddenly making everyone twice as productive. Do I want my company to be more productive with the same staff, or the same level with half the staff? I strongly encourage you to go to the path of growth — to expand what your company can do with the same staff."
Military AI: Lessons from the Pentagon
Schmidt shared his experience chairing the Defense Innovation Board, where he and Tyson (who also served on the board) drafted ethical guidelines for AI in warfare. The conclusion was unambiguous:
"AI is not reliable enough to rely on it to make those decisions, and it's unlikely to be so for a very long time."
The Pentagon adopted their guidelines requiring human oversight for any AI-driven lethal decisions, with mandatory testing and time-limited operational windows. Schmidt noted that while AI's planning and recognition capabilities now far exceed human abilities, "that's not the same thing as lethality."
The Silicon Valley vs. Washington Divide
Nate Soares exposed a critical disconnect. In Silicon Valley, "people are spooked. There's a joke that when someone leaves an AI tech job, they say, 'I have stared into the abyss. I'm retiring to write poetry. Please spend time with your families.'" He pointed to the recent resignation of Anthropic's safety lead as evidence.
But in Washington, D.C., lawmakers still frame AI as being about self-driving cars and job loss. Soares argued that by 2030, he hopes D.C. realizes "that everyone who's close to this technology is saying, 'Oh my gosh, the super-intelligent stuff could be really dangerous.'"
Eric Schmidt's 2030 Bet
Schmidt offered the most optimistic vision: "I'm going to bet that American democracy will survive. In that scenario, in 2030, every one of you will have an assistant, a savant, that allows you, under your control — not spying on you, not doing something bad — to do the things that you most care about."
He dismissed wholesale job-loss fears: "The American economy is fairly efficient at recycling jobs. People working with computers today make more money than people without." And he drew a powerful historical parallel:
"I'm old enough to have lived through the Cold War. What ended it? The realization that if you launch, everyone dies. Mutual Assured Destruction. That's what brought people to the table. We have to regulate it."
His proposal: a global treaty on superintelligence. "No one should build it, and everyone needs to agree to that by treaty. Treaties are not perfect, but they're the best we have as humans."
The Bradbury Principle
Tyson closed with a story about science fiction author Ray Bradbury. A woman asked him why he writes apocalyptic futures. His answer: "I write these apocalyptic futures so you know to avoid them."
That wisdom frames the entire debate. The six panelists disagreed on timelines and severity, but shared one conviction: the decisions being made right now — in corporate boardrooms, government offices, and research labs — will determine whether AI becomes humanity's greatest tool or its final invention.
This analysis is based on the 2026 Isaac Asimov Memorial Debate at the American Museum of Natural History. The full panel: LaTanya Sweeney (Harvard), Chris Callison-Birch (UPenn), Cindy Rush (Columbia), Nate Soares (Machine Intelligence Institute), Kate Crawford (USC), and Eric Schmidt (former Google CEO). Moderated by Neil deGrasse Tyson.