Altman Sees Superintelligence Arriving in a Couple of Years
At the AI summit, Altman positioned superintelligence as a near-term reality and global regulation as an urgent priority.
OpenAI Chief Executive Officer Sam Altman said early forms of superintelligence could emerge within a couple of years and called for a global body to coordinate artificial intelligence governance, as the technology’s capabilities accelerate.
“By the end of 2028, more of the world’s intellectual capacity could reside inside of data centres than outside of them,” Altman said in a keynote address at the India AI Impact Summit 2026.
Altman compared the need for global oversight of AI to the International Atomic Energy Agency’s role in nuclear power, arguing that rapid advances will require coordinated global supervision and the ability to respond quickly to changing risks.
The remarks come as OpenAI advances a funding round that values the company at more than $100 billion and explores a potential public listing by late 2026.
Altman said decentralization and broad access to AI systems are critical to ensuring the technology benefits society rather than concentrating power.
“Democratization of AI is the only fair and safe path forward,” he said.
Centralizing data and compute in the hands of a few companies or governments, he suggested, would heighten risks. “I don’t think we should accept that trade-off, nor do I think we need to.”
AI Safety
He added that AI should “extend individual human will” and that new governance mechanisms may be required to prevent extreme imbalances in access to computing power.
Altman also urged a broader definition of AI safety that includes societal resilience alongside technical safeguards.
“We will continue to build safe systems and solve difficult technical alignment challenges,” he said. “But increasingly, we need to start broadening how we think about safety to include societal resilience.”
He warned that advances in AI could be misused, including in sensitive areas such as biotechnology, and said no single lab or system can guarantee a positive outcome.
“No AI lab, no AI system can deliver a good future on its own,” he said.
On the pace of development, Altman downplayed concerns about a race between the US and China, emphasizing what he described as OpenAI’s strategy of iterative deployment.
“Society needs to contend with and use each successive new level of AI capability,” he said, “to have time to integrate it, understand it, and decide how to move forward.”
Addressing fears of job losses, Altman said technological disruption is historically followed by new forms of work. “Technology always disrupts jobs. We always find new and better things to do,” he said.
Concentration of Power
Citing World Bank estimates that 7% of jobs could be at risk from generative AI while 17% may be complemented, Altman said the coming years will test whether AI empowers individuals or concentrates power.
“As this technology continues to improve at a rapid pace, we can choose to either empower people or concentrate power,” he said.
The summit also featured executives from Google DeepMind and Anthropic, underscoring intensifying global competition over the direction of advanced AI systems.
Altman said the development of advanced AI systems has already produced unexpected outcomes and warned against overconfidence in forecasting their trajectory.
“The development of AI has already held many surprises, and I assume there are bigger ones to come,” he said. “It’s important to be humble about what we don’t know, and always remember that sometimes our best guesses are wrong.”
He said superintelligence could reshape geopolitics in unpredictable ways, including how governments deploy AI in warfare or redesign social contracts.
“Sharing control means accepting that some things are going to go wrong in exchange for not having one thing go mega wrong,” he said.
Altman added that major breakthroughs often emerge where technology and society intersect, arguing that progress will depend as much on public debate and institutional design as on engineering advances.