OpenAI Chose Defense Contracts Over Consumer Trust, and Users Noticed: Uninstalls Spiked 295%
OpenAI's Pentagon partnership caused ChatGPT uninstalls to surge 295% while rival Claude hit #1 on the App Store. Now NATO contract rumors threaten to deepen the trust crisis.
ChatGPT Uninstalls Spike as Sam Altman's Military Pivot Backfires
On February 28, 2026, something unprecedented happened in the artificial intelligence industry. Not a breakthrough. Not a product launch. Something far more revealing about where AI is actually headed.
ChatGPT mobile app uninstalls surged 295% day-over-day as users responded to OpenAI's new partnership with the Department of Defense. Meanwhile, rival Anthropic's Claude app climbed to #1 on the App Store for the first time ever.

The Pentagon Deal That Shattered Consumer Confidence
The backlash was swift and severe. According to market intelligence firm Sensor Tower, ChatGPT's day-over-day uninstall growth, which typically hovers around 9%, exploded to 295% on Saturday, February 28. Downloads dropped 13% day-over-day and continued falling.
But the damage went deeper than metrics. One-star reviews for ChatGPT surged 775% on Saturday, then grew another 100% on Sunday. Five-star reviews declined by 50% during the same period.
Users weren't just uninstalling an app. They were making a statement about trust — or the loss of it.
The $200 Million Question: What Did OpenAI Agree To?
While OpenAI hasn't disclosed full contract details, the deal reportedly involves classified military operations. The initial agreement sparked immediate controversy when OpenAI claimed it had "more guardrails than any previous agreement for classified AI deployments."
That claim didn't age well.
By Monday, March 2, OpenAI CEO Sam Altman was backtracking publicly. In a post on X, he admitted the company had made a mistake by rushing "to get this out on Friday." He described the rollout as "opportunistic and sloppy" — an extraordinary admission from a CEO whose company is valued at $157 billion.
"The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."
— Sam Altman, OpenAI CEO
Altman announced new amendments explicitly prohibiting the use of OpenAI systems to spy on Americans. Intelligence agencies including the NSA would now require "follow-on modifications" to access the technology.
But for many users, the damage was already done.
The Anthropic Alternative: Ethics as Competitive Advantage
While OpenAI stumbled, Anthropic seized the moment. The company announced it would not partner with the Department of Defense, citing concerns that AI could be used to surveil Americans and be deployed in fully autonomous weapons.
The market responded immediately:
- Claude's U.S. downloads jumped 37% day-over-day on Friday, February 27
- Downloads surged 51% by Saturday, February 28
- Appfigures data showed Claude's total daily U.S. downloads surpassed ChatGPT's for the first time
- Claude reached #1 on the App Store — a jump of over 20 ranks in one week
- Claude became the #1 free iPhone app in six countries including Canada, Germany, and Norway
Similarweb reported Claude's U.S. downloads over the past week were approximately 20x what they were in January.
The message from consumers was clear: when given a choice, many prefer AI companies that refuse military contracts over those that rush to sign them.

Why the NATO Contract Makes Everything Worse
As if the Pentagon backlash wasn't enough, Reuters broke news on March 4 that OpenAI is now considering a contract to deploy its technology on NATO's "unclassified" networks.
The timing couldn't be more damaging. Just days after the 295% uninstall spike, just days after Altman's apology, OpenAI is already circling another defense contract.
The pattern is becoming impossible to ignore:
- February 28, 2026: Pentagon deal announced → 295% uninstall spike
- March 2, 2026: Altman apologizes, adds guardrails
- March 4, 2026: NATO contract rumors surface
If the Pentagon deal was "opportunistic and sloppy," what do you call pursuing NATO contracts before the dust has even settled?
The Geopolitical Reality: AI Is Becoming a Defense Asset
Let's be clear about what's happening. OpenAI isn't making a secret pivot to defense — they're just moving faster than public opinion can keep up.
The Department of Defense has been renamed the "Department of War" under the Trump administration. The U.S.-Israel conflict with Iran is ongoing. And AI is increasingly viewed as critical military infrastructure.
Palantir, which provides AI-powered defense platforms to NATO and the UK Ministry of Defence, explicitly states its technology helps make "faster, more efficient, and ultimately more lethal decisions." Unlike Anthropic, Palantir does not support a blanket ban on autonomous weapons, arguing only that there should be a "human in the loop."
But as Professor Mariarosaria Taddeo of Oxford University told the BBC, with Anthropic refusing Pentagon contracts, "the most safety-conscious actor" is now "out from the room."
"That is a real problem."
— Professor Mariarosaria Taddeo, Oxford University
What This Means for AI Users
If you're using ChatGPT, Claude, Gemini, or any other AI system, this controversy matters — even if you're not interested in defense policy.
Here's why:
1. Your Data May Not Be As Private As You Think
Military contracts require different security standards than consumer products. When OpenAI pivots to defense, they're building infrastructure that serves government surveillance as much as your grocery list.
2. "Guardrails" Are Negotiable
OpenAI initially claimed their Pentagon deal had "more guardrails than any previous agreement." Within 72 hours, they were adding more restrictions after public backlash. If those guardrails were so robust, why did they need emergency amendments?
3. Consumer Trust Is Expendable
295% uninstall spike. 775% surge in one-star reviews. And OpenAI is already pursuing NATO contracts. The lesson? Defense revenue matters more than consumer sentiment.
4. Competition Is Creating Ethical Options
The Anthropic surge proves something important: users will vote with their downloads when given ethical alternatives. The AI market is big enough to support companies with different values.
The Bigger Picture: AI's Military Moment
This isn't just about OpenAI or Anthropic. It's about the trajectory of artificial intelligence as a technology.
For years, AI companies emphasized democratization, accessibility, and consumer empowerment. The narrative was about giving individuals superpowers — writing assistants, coding help, creative tools.
But 2026 is revealing the industry's other path: military contracts, surveillance infrastructure, and defense applications worth billions.
OpenAI is valued at $157 billion. Consumer subscription revenue is nice. Defense contracts are transformative.
The question isn't whether AI will be used by militaries — it will. The question is whether the companies building consumer AI can maintain trust while pursuing defense revenue. OpenAI's 295% uninstall spike suggests the answer is no.
What Comes Next
Three scenarios seem likely:
Scenario 1: OpenAI Doubles Down
Sam Altman accepts that consumer backlash is the cost of defense contracts. The company pivots toward government revenue, accepting that ChatGPT becomes a secondary product. User trust never fully recovers.
Scenario 2: The Ethics Premium
Anthropic and similar companies capture the consumer market by refusing defense contracts. A two-tier AI ecosystem emerges: consumer tools from ethical providers, military tools from defense contractors.
Scenario 3: Regulatory Intervention
Governments mandate transparency about military contracts, requiring AI companies to disclose defense relationships. Users can make informed choices, but the defense integration continues.
Which scenario plays out depends on whether users continue voting with their uninstalls — and whether competitors like Anthropic can scale fast enough to meet migrating demand.
The Bottom Line
OpenAI had a choice: slow military integration to maintain consumer trust, or accelerate defense contracts and accept the backlash.
They chose speed. The 295% uninstall spike was the market's response.
Now they're pursuing NATO contracts before the Pentagon controversy has cooled. Sam Altman called the first rollout "opportunistic and sloppy." The second attempt, if it happens, will be harder to explain away.
For AI users, the lesson is clear: the company that built ChatGPT is prioritizing defense contracts over the trust that made it a household name. Whether that tradeoff is worth it depends on whether you believe AI should serve consumers or militaries first.
A 295% spike in uninstalls shows that many ChatGPT users already made their choice. They walked away.
Sources: TechCrunch, BBC News, Reuters, eWeek