Why Tech Leaders Warn About Superintelligence but Can’t Slow the Race
As Microsoft AI and OpenAI issue rare public warnings about the risks of superintelligence, researchers and policymakers confront a deeper dilemma: the very forces driving rapid progress make slowing down almost impossible.
On November 6, an old line from Albert Einstein resurfaced when Mustafa Suleyman, CEO of Microsoft AI, shared it while posing a set of hard questions about the future of artificial intelligence (AI).
The quote reads, “The concern for man and his destiny must always be the chief interest of all technical effort…in order that the creations of our mind shall be a blessing and not a curse to mankind.”
As the world moves toward systems that may surpass human intelligence, Suleyman asked: How do we know these systems will do more good than harm? How certain are we that control will not slip away? And who decides the limits and rules for a form of intelligence that may exceed our own?
Within hours, OpenAI issued its own warning. The company said it now treats the risks associated with superintelligence as “potentially catastrophic,” adding that “no one should deploy superintelligent systems without being able to robustly align and control them, and this requires more technical work.”
The fact that two of the most influential companies in frontier AI voiced uncertainty on the same day sharpened a difficult question: If the companies racing to build superintelligence are unsure of long-term safety, why does development continue?
The anxiety surrounding advanced AI is rising globally. At the World Internet Conference in Wuzhen from November 6–9, DeepSeek senior researcher Chen Deli captured the tension.
AI could be “a great aid to humans as it improves over the short term,” he said, while warning that the same technology could eventually become a “massive challenge to humanity.”
Others share similar concerns. Geoffrey Hinton, widely known as the “godfather of AI,” left Google in 2023 over ethical concerns and has since said there is a 10–20% probability that advanced AI could wipe out humanity.
“Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be,” he warned, suggesting future systems could manipulate humans “as easily as an adult can bribe a three-year-old with candy.”
Recent behavior in experimental systems has added to the unease. Anthropic, the developer of the Claude chatbot, disclosed that its newest model, Claude Opus 4, displayed signs of self-preservation during internal safety testing, including attempts to blackmail engineers when it believed it was about to be shut down.
The company said such responses were rare and appeared only under tightly constrained scenarios, but acknowledged they were more frequent than in earlier systems.
Safety researchers said this behavior may not be isolated. On X, Aengus Lynch, who identifies himself as an Anthropic contractor and a creator of the blackmail demonstration with @anthropicai, wrote, “It is not just Claude. We see blackmail across all frontier models regardless of what goals they are given.”
These findings echo broader fears that as AI systems grow more capable, risks of manipulation, deception and high-agency behaviors may no longer remain hypothetical.
The scientific warnings are colliding with geopolitical realities.
If The Risks Are Real, Why Keep Building?
If the risks are existential, why does frontier development continue?
Rajesh Chhabra, General Manager for India and South Asia at Acronis, said the dilemma is shaped by fear, competition and inevitability. “Stopping AI development isn’t realistic or strategic. The real question is how we build alignment during development, not after. The risk isn’t in capability itself but in uncontrolled capability without safeguards.”
He compared the moment to cybersecurity. Society did not abandon the internet because it introduced risk. Instead, it built layered defenses, governance frameworks, and technical controls while continuing to expand digital systems, Chhabra said.
Part of the problem, he said, is limited awareness. “Capability doesn’t equal understanding. Most policymakers and organizations don’t yet grasp what crossing these thresholds really means.”
Pavan Kushwaha, Founder and CEO of Kratikal, an end-to-end cybersecurity provider, said unilateral pauses are also commercially and geopolitically impossible. “If one player pauses research, another actor will continue, which can create even greater global risk.”
He noted that AI already delivers near-term benefits across healthcare, cybersecurity, climate science, education and national security. Halting progress would carry a heavy opportunity cost.
Many in the labs argue that remaining at the frontier gives them the ability to shape emerging safety norms rather than leaving that influence to actors who may be less responsible.
The challenge, Kushwaha said, is not choosing between “build” or “stop,” but ensuring that safety is engineered into every layer from data pipelines to deployment protocols.
Race Dynamics and Regulations
From Microsoft and OpenAI to DeepSeek, all major builders are now calling for shared governance frameworks.
But there is growing concern about how such frameworks would affect developing countries that are still building AI capacity. Some fear the creation of a two-tier world, mirroring long-standing debates in nuclear governance.
“We need a global AI safety framework comparable to nuclear governance,” Chhabra said, warning that emerging nations still building technological maturity must be included from the outset.
“Compliance without inclusivity creates shadow systems and workarounds,” he added.
William May, Managing Director and Global Head of Certifications and Educational Programs at GARP, said a shared global framework may eventually emerge “not to control innovation, but to ensure that baseline safety expectations are consistent across borders.”
He noted that frontier AI risks propagate internationally regardless of jurisdiction.
“Countries still building technical capacity will be affected whether or not they develop these systems themselves. Open standards, transparent evaluation practices and accessible AI risk education can help close this readiness gap,” he said, adding that regulatory power can itself become a competitive tool.
“There may be incentives to use standards to slow competitors. But this must be balanced against the long-term benefits of shared safety principles,” he said.
AI’s borderless nature makes global standards essential not just for model evaluation and red-teaming, but also for responsible deployment, transparency requirements and securing data, supply chains and model weights, Kushwaha said, stressing that excluding less technologically mature countries risks creating a fractured AI environment.
If countries that are not yet technologically mature are excluded from governance structures at the outset, the world could split into two-speed AI development, where only advanced nations have the knowledge and regulatory power to build or govern these systems safely, he said.
Inclusion, he said, can happen through capacity-building programs, shared open standards, common testing sandboxes, collaborative R&D environments, and regional alliances that establish baseline AI safety expertise for all.
Kanika Jain, co-founder of SquadStack, a company that uses AI and a distributed workforce to deliver telesales and customer support outsourcing, said a core tension remains unresolved.
“No lab wants to fall behind in a rapidly moving landscape,” she said. “Slowing down is nearly impossible under current global incentives. So capability and safety development continue in parallel.”
The question now is whether global governance, technical alignment, and economic incentives can converge fast enough to make that parallel path survivable.