
Why AI Still Cannot Match the Human Brain

At the India AI Impact Summit in New Delhi, Google DeepMind CEO Demis Hassabis outlined the efficiency gap between artificial and biological intelligence.


Artificial intelligence still lags far behind the human brain in efficiency, even as it borrows heavily from neuroscience, Demis Hassabis said at the India AI Impact Summit in New Delhi.

During a keynote conversation with Balaraman Ravindran, Head of the Department of Data Science and AI at IIT Madras, Hassabis, CEO and co-founder of Google DeepMind, was asked what neuroscience has taught artificial intelligence.

“Modern AI, especially deep learning from pioneers like Geoffrey Hinton, took inspiration from the brain: neural networks, reinforcement learning, episodic memory,” Hassabis said. “But the brain is far more sample-efficient. It doesn’t need the entire internet to learn.”

    That gap between biological efficiency and computational scale framed a discussion that ranged from artificial general intelligence and robotics to scientific discovery and global risk.

    A Moment for AI

    When DeepMind was founded in 2010, AI was still largely confined to academia. Industry interest was minimal. Today, the field stands on what Hassabis described as a “threshold moment,” with Artificial General Intelligence (AGI) possibly five to eight years away.

    But he is clear: we are not there yet.

Current systems are powerful but uneven. They can solve Olympiad-level mathematics problems and yet stumble over elementary ones. They can generate coherent essays but struggle with long-term planning. Hassabis calls this “jagged intelligence”: highly capable in narrow domains but lacking consistent general reasoning.

    True AGI, in his definition, would exhibit the full spectrum of human cognitive abilities: creativity, long-term strategic thinking, adaptability, and reliability across tasks.

    Hassabis outlined three major gaps between today’s AI and general intelligence:

    1. Continual learning: Current models are trained and then frozen. Humans learn continuously from new experiences.
    2. Long-term planning: AI systems can plan over short horizons, but not over years or decades.
    3. Consistency: Experts don’t fail at easier versions of problems they’ve already mastered. AI still does.

Creativity remains the ultimate frontier. Great scientists don’t merely solve equations; they ask transformative questions. Hassabis offered a provocative benchmark: if a model trained only on pre-1911 knowledge could independently derive general relativity, as Albert Einstein did in 1915, that would signal something closer to true general intelligence.

    Today’s systems would not pass that test.

    A New Golden Age of Science

    Where Hassabis sees immediate transformation is in scientific discovery.

    Systems like AlphaFold have already reshaped biology by predicting protein structures at scale, an achievement once considered one of science’s grand challenges. He believes the next decade could usher in a renaissance of discovery across medicine, materials science, and climate research.

    AI, in this vision, becomes the ultimate scientific tool: capable of identifying patterns in vast datasets that no human could parse alone.

The longer-term ambition is even more striking: AI as a “co-scientist,” collaborating with human researchers, generating hypotheses, and bridging disciplines. Cross-disciplinary innovation, Hassabis suggested, may be where the most valuable breakthroughs occur, and AI could become the connective tissue across fields.

    AGI, Robotics and the Physical World

    Hassabis’s own journey into AI was shaped by reinforcement learning. Early breakthroughs, from Atari game-playing agents to AlphaGo defeating Lee Sedol in 2016, demonstrated that learning systems could outperform humans in complex environments.

    While large foundation models now dominate headlines, Hassabis does not see reinforcement learning as secondary. Instead, he envisions a synthesis: foundation models such as Gemini combined with advanced planning techniques pioneered in game-playing systems.

    Foundation models, he argued, will almost certainly form a critical component of the first AGI systems. Whether they are sufficient on their own remains uncertain.

    Another frontier gaining momentum is robotics. A decade ago, Hassabis believed algorithms, not hardware, were the bottleneck. Today, with multimodal AI capable of understanding vision and physical context, embodied AI appears closer to practical breakthroughs.

    He anticipates significant progress in robotics, both humanoid and non-humanoid, within the next few years. But with capability comes caution. Autonomous physical systems introduce new safety considerations, making guardrails essential before widespread deployment.

    Risks and Responsibility

    Hassabis divides AI risks into two broad categories: misuse by bad actors and technical alignment challenges as systems become more autonomous.

    Cybersecurity and biosecurity stand out as near-term concerns. As AI strengthens offensive capabilities, defensive systems must advance even faster.

Yet he expressed confidence in humanity’s technical ingenuity. The harder challenge may not be engineering but coordination. AI crosses borders. Its governance will require international cooperation and shared standards.

    One of the most optimistic notes of the conversation centered on young people, particularly in emerging economies.

    AI tools are now globally accessible within months of their development. That democratization, Hassabis suggested, is unprecedented. Students and entrepreneurs who master these tools today could become “super-powered” over the next decade, much like digital natives reshaped industries during the internet era.

    The opportunity is not merely to consume AI, but to build with it.

If modern AI began by borrowing principles from the brain, it has since evolved into something architecturally distinct: immensely powerful, yet fundamentally different from human cognition.

Hassabis’s message was one of cautious optimism: the coming transformation could revolutionize medicine, science, and industry, but realizing that promise will require not just technical brilliance but global dialogue and responsible stewardship.
