
Agentic AI Push Spurs Calls for Human Oversight, Clear Rules

As AI agents begin to transact, engineer and defend systems on their own, the race is no longer just to build smarter machines but to define the rules that keep them accountable, experts said at the India Impact Summit 2025.

    Artificial intelligence is starting to act, not just advise, and that shift is forcing companies and regulators to draw clear lines on oversight, data use and accountability before autonomous agents are rolled out at scale.

    The debate unfolded during a session titled “Agentic AI Governance and Interoperability” at the India Impact Summit 2025, where officials and executives outlined the guardrails they see as prerequisites for that deployment.

    The discussion brought together Austin Mayron of the US Center for AI Standards and Innovation (CASI), Prith Banerjee of Synopsys, Caroline Louveaux of Mastercard and Syam Nair of NetApp.

    Mayron said CASI is drafting security frameworks for AI agents and has issued a request for information to map industry risks. The agency plans sector-specific listening sessions across healthcare, finance and education to identify technical gaps and barriers to adoption.

    The goal, he said, is to understand “what challenges the industry faces when adopting AI agents” and to unlock deployment through standards and best practices.

    Banerjee said rising system complexity is pushing companies toward what he called “agentic engineers” that handle lower level reasoning tasks while humans retain final authority.

    “That’s where agent AI comes in,” he said, adding that transistor density and system scale have grown exponentially. When AI interacts with the physical world, he warned, “we have to be extra careful to avoid things going wrong. That’s the challenge.”

    Louveaux said financial services are already operating at machine speed, but agentic systems introduce a sharper accountability test.

    “With agentic AI, we are moving from AI systems that recommend to AI systems that act,” she said. If agents are to block fraud in real time, “decisions have to be made in milliseconds at scale and with accountability.”

    She outlined Mastercard’s four guardrails for agentic payments: “know your agent,” security by design, explicit consumer intent and full traceability.

    “We want these agentic payments to be safe, secure and trusted,” she said, adding that autonomy must remain bounded by clear permissions and human oversight.

    Nair said the durability of agentic AI will hinge on data governance and infrastructure level controls.

    With cyber-threat breakout times measured in seconds, he said, “you need agents at the very layer where data sits.” He called for closer public-private coordination to define guardrails that match real-world use cases.

    Mayron added that CASI is taking a bottom-up approach to safety standards, asking deployers and security practitioners where agents fail in practice and how guardrails should evolve across sectors.
