
AI Policy Debate Turns to Power and Enforcement

As AI systems spread across public and private sectors, panelists warn that soft law may not be enough.

Who gets to shape artificial intelligence before it shapes society?

    That question dominated a governance session at the India AI Impact Summit 2026, where policymakers, academics and civil society leaders debated whether multistakeholder participation in AI remains largely symbolic while deployment accelerates.

    The discussion, moderated by Aliya Bhatia of the Center for Democracy and Technology, examined how power is distributed across the AI lifecycle, from model design to post-deployment accountability. 

    Panelists included Jhalak Kakkar, executive director of the Centre for Communication Governance at National Law University Delhi, and Dhanaraj Thakur of George Washington University’s Multiracial Democracy Project.

    Large language models are already being procured and deployed across public institutions and private companies. Yet governance forums remain concentrated in industry-led processes and intergovernmental discussions, often centered in advanced economies. Speakers argued that this structure sidelines communities most directly affected by AI systems, particularly in the Global South.

    Kakkar drew parallels with the early years of social media regulation, when companies operated largely under voluntary commitments before enforceable standards emerged. The risk with AI, she suggested, is that policymakers remain too long in the norm-building phase.

    Rather than prescribing specific technical outcomes, she argued for process-based obligations, including risk assessments, documentation of design decisions, impact reporting and structured mechanisms to identify harm early. Without such guardrails, regulatory responses may again lag behind technological deployment.

    Participation Problem in AI

Bhatia returned to that imbalance in her framing: even as deployment accelerates, conversations about governance remain largely confined to industry-led forums and government-to-government dialogues in the Global North.

    That structure, she argued, sidelines critical voices: civil society groups, policy experts from the Global South, and perhaps most importantly, communities most directly affected by AI deployment.

    “Participation,” the panel emphasized, cannot be reduced to consultation theater or box-checking exercises. The question is not simply who is invited into the room, but who has decision-making authority, and at what stage of the AI lifecycle.

    Participation is About Power

If Kakkar focused on regulatory scaffolding, Dhanaraj Thakur of George Washington University shifted the discussion to something more foundational: power.

    “Participation is ultimately about the distribution of power,” he said.

    The idea of community engagement in technological design is not new. International development, public health, and infrastructure planning have long grappled with similar questions: Who defines the problem? Who determines the solution? Who evaluates success?

Thakur referenced long-standing development scholarship emphasizing the principle of “putting the last first”: beginning with the needs of communities rather than imposing solutions from above.

    In AI, however, the model often works in reverse. Frontier companies build general-purpose systems first, release them widely, and then observe what use cases emerge. Evaluation is reactive. Community input arrives after deployment, sometimes after harm.

    Meaningful participation would invert that sequence. It would mean:

    • Allowing communities to help define relevant use cases
    • Recognizing local linguistic and cultural expertise in model design
    • Embedding contextual knowledge into development pipelines
    • Shifting from post-hoc auditing to pre-deployment co-creation

    Crucially, Thakur highlighted that communities are not merely “beneficiaries” of technology; they are knowledge holders with domain-specific expertise.

    AI policy discussions often privilege technical fluency, but technical expertise alone cannot substitute for contextual intelligence.

    ‘Technical Exceptionalism’

    A recurring theme in the session was what Bhatia described as the tendency to treat AI governance as historically unprecedented, as if only highly specialized engineers can meaningfully contribute.

Speakers pushed back against the idea that AI governance is uniquely inscrutable. Societies have navigated technically complex shifts before: cryptography controls, data protection frameworks and digital rights advocacy all required bridging technical complexity with democratic oversight, and the resulting frameworks balanced expertise with accountability.

    Artificial intelligence, they argued, should not be treated as an exception.

    Instead of relying on voluntary pledges and broad principles, panelists called for earlier and more structured participation. That would mean involving affected communities at the design stage rather than after deployment, embedding risk assessment and documentation into development processes, and moving from consultation exercises to enforceable obligations.

    They also argued that governance discussions must widen beyond North American and European forums, as AI systems are increasingly deployed across emerging markets with different linguistic, cultural and institutional realities.

    The central point was that participation cannot remain symbolic. If communities are invited into the room but lack decision-making authority, the power structure remains unchanged.

    A Critical Moment for AI Governance

    The debate unfolded as India seeks to play a larger role in global AI policy discussions, positioning itself as both a major engineering base and one of the world’s largest digital markets.

    That dual role adds weight to the governance question. Decisions taken in India will affect not only domestic deployment across public services and enterprise systems, but could also influence how other emerging economies approach oversight.

    The panelists did not argue against AI development or enterprise adoption, but their warning was narrower. If regulatory structures lag behind deployment, governments risk repeating the reactive cycle seen in social media, where safeguards followed harm rather than preventing it.
