Google AI Cracks Gold at Maths Olympiad with Gemini Deep Think
Google DeepMind’s Gemini Deep Think becomes the first AI to earn a gold-medal score at the International Mathematical Olympiad

Google DeepMind’s latest artificial intelligence model has done what was once the exclusive preserve of teenage prodigies: it clinched a gold-medal score at the International Mathematical Olympiad (IMO), widely seen as the Mount Everest of school-level math contests.
In a first for AI, an advanced version of Gemini Deep Think scored 35 out of 42 points, correctly solving five of the six competition problems under official time and grading conditions.
That performance not only clears the gold threshold, typically awarded to roughly the top 8% of human participants, but also sets a new benchmark for mathematical reasoning by machines.
IMO president Prof. Gregor Dolinar confirmed the model’s feat, calling the solutions “astonishing” and “clear, precise and most of them easy to follow.”
The achievement comes a year after DeepMind’s AlphaProof and AlphaGeometry systems together achieved a silver-medal score at the same competition.
Unlike last year’s models, which needed problem statements translated into formal proof languages like Lean and took days to compute answers, Gemini Deep Think worked end-to-end in natural language within the standard 4.5-hour competition window, just like a human participant.
“We’re getting closer to building AI that can solve advanced mathematics in ways that are useful and intuitive to humans,” said Thang Luong, one of the project leads at DeepMind.
The system’s success hinged on two key advances: a new reasoning setup called “Deep Think mode,” which explores multiple solution paths in parallel before settling on an answer, and fresh reinforcement learning techniques fine-tuned for complex proofs.
The model was also trained on a curated trove of high-quality math solutions and guided with IMO-style problem-solving strategies.
Google said it plans to share the Deep Think version of Gemini with select mathematicians before a broader release to Google AI Ultra subscribers.
The breakthrough is the latest in DeepMind’s decade-long push to marry AI and pure mathematics.
While Gemini operates in natural language, work continues in parallel on formal systems like AlphaGeometry and AlphaProof that aim for machine-verifiable rigor.
The ultimate goal, said the company, is to create AI agents that can fluently toggle between intuitive reasoning and formal precision, serving as powerful tools for mathematicians, scientists and researchers.
Although the IMO board validated the correctness of the submitted solutions, it did not independently audit the underlying model or methods.