Autonomy in Orbit: How AI Is Becoming the Brain of Deep-Space Missions
As missions push deeper into space, AI-driven autonomy is becoming essential, raising new questions about reliability and control.
At the India AI Impact Summit 2026, engineers and researchers signalled that the next leap in space exploration will depend less on rockets and more on algorithms that can think for themselves. As missions move farther from Earth, communication delays grow from seconds to minutes, and in those gaps a spacecraft must be able to decide for itself.
Consider India’s Gaganyaan program. ISRO plans to send Vyomitra, a humanoid robot, on its first uncrewed mission. The robot is designed to mimic astronaut activity inside the crew module.
Durairaj Ranganathan, who leads space robotics at ISRO, described Vyomitra as “a 35-degree-of-freedom system” equipped with speech and visual cognitive engines. During ascent, it will read display panels, evaluate system performance, and report by voice. In orbit, it will perform microgravity manipulation while staying in contact with mission control.
But physics limits what’s possible. “We don’t have any kind of luxury of having a data center or cloud or internet connectivity,” Ranganathan said. Everything must run on board. That means using edge computing, compact models, and new architectures such as spiking neural networks on specialized hardware.
The shift from ground control to onboard intelligence is most obvious in landing systems.
Dr. Dipti Patil from MKSSS Cummins College of Engineering for Women, Pune, who works on autonomous lunar landing, described the challenge as “making the real-time decision in an uncertain environment”. Lunar terrain is unstructured and uneven, filled with craters and boulders under extreme lighting conditions.
Training AI models requires data, but “with the limited data availability,” researchers have had to simulate entire lunar environments, training spacecraft in virtual Apollo landing sites before validating them for real missions. Real-time decision making is not just about identifying hazards; it also means adjusting thrust and trajectory on the fly. “There is no kind of human control or human intervention,” she said. The system must act autonomously and accurately, even with limited computing power.
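The kind of hazard screening Patil describes can be sketched in a few lines. This is a hypothetical illustration on a handcrafted elevation grid, not ISRO’s actual pipeline: a lander checks the local slope around each candidate cell and rejects anything too steep to touch down on.

```python
# Hypothetical sketch of autonomous hazard screening. The elevation grid
# (in metres) is handcrafted here; a real lander would build it from
# camera or lidar data. Values and threshold are illustrative only.

SIZE = 5
# Gently undulating terrain (0.0 / 0.1 m checkerboard) ...
TERRAIN = [[0.1 * ((r + c) % 2) for c in range(SIZE)] for r in range(SIZE)]
TERRAIN[2][2] = 2.0  # ... with a 2 m boulder in the middle

def max_local_slope(grid, r, c):
    """Largest elevation difference to any neighbouring cell of (r, c)."""
    return max(
        abs(grid[r + dr][c + dc] - grid[r][c])
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
        and 0 <= r + dr < len(grid) and 0 <= c + dc < len(grid[0])
    )

def safe_cells(grid, threshold=0.5):
    """Cells whose local slope stays under the hazard threshold."""
    return [(r, c) for r in range(len(grid)) for c in range(len(grid[0]))
            if max_local_slope(grid, r, c) < threshold]

# The boulder cell and its eight neighbours are rejected; the rest pass.
print(len(safe_cells(TERRAIN)), "of", SIZE * SIZE, "cells are safe")  # 16 of 25
```

A flight system would fuse this kind of screening with thrust and trajectory control; the sketch only shows the decision step happening without any human in the loop.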
Autonomy is even more critical in deep space. Ishita Ganjoo of ISRO’s U R Rao Satellite Centre (URSC) pointed out that for Mars missions, one-way communication delays can reach 20 minutes. “By the time a command from the Earth reaches the spacecraft, it’s already outdated.” Full autonomy may be possible, she said, but reliability remains a challenge.
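The delay Ganjoo cites follows directly from light-travel time. Using published approximate Earth-Mars distances (assumed figures; the article itself gives only the 20-minute number), the one-way delay works out as:

```python
# One-way light-time from Earth to Mars. Distances are approximate
# published figures for closest and near-maximum separation, used here
# purely to illustrate the delays the article describes.
SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_delay_minutes(distance_km: float) -> float:
    """Signal travel time at the speed of light, in minutes."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

closest_km = 54.6e6    # Mars at closest recorded approach
farthest_km = 401e6    # Mars near maximum Earth-Mars separation

print(f"closest:  {one_way_delay_minutes(closest_km):.1f} min")   # ~3.0 min
print(f"farthest: {one_way_delay_minutes(farthest_km):.1f} min")  # ~22.3 min
```

A round-trip command-and-confirm cycle doubles those figures, which is why a command sent from Earth can be obsolete before it arrives.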
Engineers want systems that are explainable and reliable. “What we’re concerned about is guarantees in the system,” she said. Stability and repeatability still matter, and this tension between autonomous speed and provable certainty runs through the whole space-AI ecosystem.
Vinay Simha, founder and CEO of SkyServe, described compressing heavy geospatial models so they can run on satellites: a 300-megabyte model can be cut to 30 megabytes and deployed on board, with updates sent from the ground in batches. “It works like a yin and yang,” he said. But every update requires thorough validation, sometimes backed by “hundreds of pages of warranties,” to meet mission standards.
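The article does not say which compression technique SkyServe uses; one common approach for on-board deployment is post-training quantization, sketched below. Storing each weight as an 8-bit integer plus one shared floating-point scale cuts memory four-fold versus 32-bit floats; a ten-fold cut like the 300 MB to 30 MB figure typically also involves pruning or distillation.

```python
import array

# Hypothetical sketch of symmetric int8 post-training quantization, one
# common way to shrink a model for on-board use. Not SkyServe's actual
# method; weights below are toy values.

def quantize_int8(weights):
    """Map float weights to int8 values plus one float scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid scale 0
    q = array.array('b', (round(w / scale) for w in weights))
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [x * scale for x in q]

weights = array.array('f', [0.8, -1.27, 0.05, 0.0])
q, scale = quantize_int8(weights)

# int8 storage is 1 byte per weight vs 4 bytes for float32: 4x smaller.
print(q.itemsize, "byte vs", weights.itemsize, "bytes per weight")
```

Because quantization changes a model’s numerical behaviour, every such compressed update would still need the kind of ground-side validation Simha describes before being uplinked.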
That caution extends beyond satellites. In agriculture, panelists said farmers don’t need to understand orbital mechanics to benefit from AI-driven advisories. Dr. Jagriti Dabas, founder of ARMS 4 AI, described tools that can turn space data into easy-to-understand insights and even carbon credit calculations.
“Every landholder in India can generate an extra income just by doing the same thing that they have been doing for the past two, three decades,” she said, referring to carbon markets. The promise is clear. So is the risk of over-reliance if models fail.
Throughout the session, one theme recurred. AI is lowering barriers to entry, but it is raising the premium on fundamentals. Ganjoo warned that engineers must still understand the systems they design. “If you give a tough problem even to one of these AI models, they will give you a solution. But to understand whether that solution is right for your application, you need to have very strong fundamentals.”
She suggested the gap between those who grasp the basics and those who do not might grow wider.

