
AI Governance Debate Sharpens at Impact Summit

Speakers warn that as AI agents enter finance and public services, voluntary safeguards may not be enough.


    [Image source: Chetan Jha/MITSMR India]

    At a time when generative AI is accelerating faster than most regulatory systems can adapt, policymakers, multilateral institutions and industry leaders gathered to debate a central tension: how to balance innovation with accountability. 

    The session, titled “Trustworthy AI: Balancing Innovation and Regulation,” at the IndiaAI Impact Summit, brought together representatives from the Reserve Bank of India (RBI), UNESCO, the Australian government and industry to examine what responsible AI governance should look like in practice.

    Moderated by Syed Ahmed, Global Head of the Responsible AI Office at Infosys Ltd, the discussion moved beyond abstract principles toward institutional realities.

    Ankur Singh from RBI’s fintech department framed trust as foundational to the financial system. “The entire financial sector is based on this trust,” he said, noting that AI systems deployed in banking must ultimately remain accountable to human decision-makers. Even as autonomous agents become more embedded in workflows, Singh emphasized that “the human has to come into the picture somewhere,” particularly in high-stakes decisions such as lending.

    He also cautioned against “AI washing,” where firms exaggerate technological capabilities. Proper disclosure, model repositories and supervisory oversight, potentially even “using AI itself to supervise AI,” will be essential to protect consumers.

    From a global governance perspective, Mariagrazia Squicciarini, Chief of Executive Office and Director of AI at the Social and Human Sciences Sector of UNESCO, underscored that trustworthiness is not a single attribute but a system of safeguards rooted in human rights.

    Referencing UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence, adopted by 193 countries, she argued that AI must remain anchored in “human dignity and fundamental freedoms.” Importantly, she stressed that regulation should not be framed as innovation’s adversary.

    “We don’t have to decide what we want the technology to do,” she said. “We nevertheless need to agree on what we surely don’t want the technology to do.”

    Squicciarini also warned that AI failures often occur post-deployment. While systems may be carefully designed, insufficient monitoring in real-world environments can produce harm. “By the time systems go in the wild, that’s when things go wrong,” she noted, calling for stronger oversight mechanisms and ethical impact assessments.

    On the multilateral front, Caitlyn Searle, representing the Australian High Commission, outlined Australia’s evolving AI governance approach. The country recently released guidance simplifying its earlier voluntary AI guardrails and is establishing a national AI Safety Institute. The goal, she said, is to create a governance framework that can respond “nimbl[y]” to technological risks without stifling innovation.

    Transparency emerged as a recurring theme. “How do we make sure people understand the risks so that governments and others are prepared to mitigate them?” she asked, highlighting the need for clear communication between regulators, industry and citizens.

    The conversation also addressed inclusivity, particularly in the context of language, access and financial inclusion.

    Singh pointed to India’s rapid digitization and digital public infrastructure as a positive step forward, noting that AI-powered credit models using alternative data could help bring underserved populations into the financial mainstream.

    Squicciarini added that inclusiveness improves not only equity but technical performance: “Inclusiveness does not only benefit the included… That enables better AI to be developed.”

    In closing reflections, panelists identified awareness and readiness as the defining shifts of the past two years. Public engagement around AI risks and opportunities has expanded significantly, yet implementation remains the harder task ahead. As Searle observed, with national AI plans now in place, “now starts the hard bit… translate principles to practice.”

    If there was consensus, it was this: trustworthy AI cannot rely on regulation alone, nor on innovation alone. It requires continuous dialogue, institutional capacity, transparency and, above all, sustained human oversight.
