Enterprise AI Is Outpacing the Governance Built to Control It

AI systems are now embedded across enterprise operations, but governance, visibility, and accountability frameworks are lagging, creating risks that many organizations still cannot fully measure or control.


Key Takeaways

1. Organizations cannot reliably stop AI systems during incidents. A majority lack defined response timelines, and only 21% can halt systems within 30 minutes.

2. 78% of organizations are transforming with AI, but only 6% have established secure governance frameworks. 81% are deploying agents with access to sensitive systems, often without central oversight or control.

3. Accountability and explainability gaps are becoming regulatory risks. Only 42% of organizations are confident they can explain AI incidents, while many lack clearly defined ownership.

Picture a scenario playing out right now across enterprises that consider themselves well-run. An AI system embedded in a core workflow behaves unexpectedly. Someone needs to stop it. The security team reaches for a kill switch and discovers there isn’t one. Partway through the incident, nobody can confidently say what the system accessed, what decisions it influenced, or who is accountable for the damage.

This is not a thought experiment. According to new research from ISACA, the global IT governance association, 59% of organizations say they cannot determine how quickly they could halt an AI system in the event of a security incident. Only 21% say they could do so within 30 minutes. Fewer than half are confident they could explain what went wrong after an AI failure. And roughly one in five do not know who would be responsible if an AI system caused harm.
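What a kill switch actually requires is mundane: a shared flag that every AI system checks before each step, and a procedure for flipping it. The sketch below is a minimal illustration of that pattern, not any vendor’s implementation; the flag store, function names, and tasks are all invented.

```python
import time

KILL_SWITCH = {"halted": False}  # stand-in for a shared flag/config service

def halt_all_agents(reason: str) -> None:
    """Incident responders flip one flag; every agent loop below stops."""
    KILL_SWITCH["halted"] = True
    print(f"KILL SWITCH ENGAGED: {reason}")

def run_agent(task_queue: list[str]) -> None:
    """Check the flag before every step so a halt takes effect mid-run."""
    for task in task_queue:
        if KILL_SWITCH["halted"]:
            print(f"Halted before task: {task!r}")
            return
        print(f"Executing: {task!r}")
        time.sleep(0.1)  # placeholder for real work

run_agent(["summarize contracts"])                 # runs normally
halt_all_agents("anomalous behaviour detected")    # incident response
run_agent(["email vendor", "update CRM records"])  # refuses to start
```

The point of the pattern is latency: because the flag is checked before every step, the worst-case halt time is a single step, which is what a 30-minute response target ultimately depends on.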

    The numbers describe a gap that most enterprises have been slow to name: AI has become operational infrastructure, but the governance frameworks that should surround it are still catching up. In many organizations, they have barely started.

    “AI deployment is currently outpacing AI governance,” says Kunal Ruvala, Senior Vice President and General Manager, India at Palo Alto Networks.

    Adoption Has Outpaced the Controls Built to Manage It

    AI has entered enterprises through use rather than through structured rollouts. Business teams adopted tools to improve speed, automate processes, and reduce manual effort. These deployments often begin as pilots but quickly become embedded in production workflows. By the time governance teams engage, the systems are already in place.

    “AI adoption moved faster than enterprise controls,” says Amarbir Singh, Senior Director at AHEAD. “By the time leadership started asking how AI was being used, it was already embedded across functions.”

    This pattern mirrors earlier technology cycles, such as cloud and SaaS adoption, in which decentralized use preceded formal oversight. But AI introduces a different level of complexity. Unlike traditional software, AI systems are dynamic. They evolve based on data inputs, interact across multiple systems, and influence decisions rather than simply executing predefined logic.

    Nilesh Bhojani, Chief Product and Technology Officer at Seclore, argues that governance frameworks have not adapted to this shift. “Most frameworks still treat AI as a deployment event rather than an ongoing data interaction,” he says. That distinction is critical. Approving a tool at onboarding provides limited visibility into how it behaves over time, what data it accesses, or how its outputs are used.
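One way to see the difference Bhojani describes is to compare a one-time onboarding approval with a wrapper that records every production interaction. The sketch below is a hypothetical illustration, assuming an in-memory audit store and a stub model call; none of the names refer to a real product.

```python
import hashlib
import json
import time

AUDIT_LOG: list[dict] = []  # stand-in for an append-only audit store

def governed_call(model_call, prompt: str, data_sources: list[str], user: str) -> str:
    """Wrap every production AI call so each interaction leaves a record."""
    output = model_call(prompt)
    AUDIT_LOG.append({
        "ts": time.time(),
        "user": user,
        "data_sources": data_sources,  # what this specific call touched
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    })
    return output

# Usage: the call succeeds, and reviewers can later query what it accessed.
governed_call(lambda p: "stub model response",
              "Summarize Q3 churn drivers",
              data_sources=["crm.accounts", "support.tickets"],
              user="analyst@example.com")
print(json.dumps(AUDIT_LOG[-1], indent=2))
```

Approval answers whether a tool may be used; the wrapper answers what it actually touched on each call, which is the visibility most frameworks currently lack.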

    Biswajeet Mahapatra, Principal Analyst at Forrester, points to a structural reason for the lag. “Organizations are pursuing disconnected deployments and lack a unifying narrative that connects AI to enterprise value,” he says, “which results in fragmented ownership and weak control structures.”

    The consequence is a growing visibility gap. Around 33% of organizations surveyed by ISACA do not require employees to disclose AI use in their work, leaving governance teams without a clear view of where AI operates across the enterprise.


    Shadow AI Is Creating a New Internal Attack Surface

    As adoption expands, a second layer of risk is emerging. Shadow AI refers to tools and systems deployed without central oversight, including publicly accessible AI tools, internally built agents, and automation systems operating outside formal governance structures. Unlike traditional cybersecurity threats, it does not originate from outside the organization. It emerges from within.

    “The risk profile is different,” Bhojani says. “Shadow AI operates with legitimate credentials, on real data, inside normal workflows.”

    This makes it harder to detect and control. Traditional security systems are designed to identify anomalies. Shadow AI activity often appears indistinguishable from routine business operations.
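Discovery therefore tends to start at the network edge rather than with the tools themselves. One common starting point is scanning egress or proxy logs for traffic to known AI API endpoints from identities outside sanctioned programs. The sketch below is illustrative only; the domain list, log format, and user names are assumptions.

```python
KNOWN_AI_DOMAINS = {  # illustrative; a real list would be curated and updated
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

proxy_log = [  # invented records in a simplified egress-log format
    {"user": "j.doe", "host": "api.openai.com", "bytes_out": 48_213},
    {"user": "a.rao", "host": "intranet.example.com", "bytes_out": 1_024},
]

def find_shadow_ai(log: list[dict], sanctioned: set[str]) -> list[dict]:
    """Traffic to AI endpoints from identities outside approved programs."""
    return [e for e in log
            if e["host"] in KNOWN_AI_DOMAINS and e["user"] not in sanctioned]

for event in find_shadow_ai(proxy_log, sanctioned={"ml-platform-svc"}):
    print(f"Possible shadow AI use: {event['user']} -> {event['host']} "
          f"({event['bytes_out']} bytes out)")
```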

    Industry data cited by Ruvala illustrates the scale of the issue:

• 78% of organizations are undergoing AI-driven transformation.
• Only 6% have mature governance guardrails.
• 81% are deploying AI agents capable of executing tasks autonomously.

    These systems are not passive tools. They act on behalf of users, access enterprise data, and interact with critical infrastructure. In many cases, they are granted broad permissions to reduce friction during deployment. That creates a new form of exposure.
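The counter-pattern to broad permissions is deny-by-default authorization, in which an agent can invoke only the actions it has been explicitly granted. A minimal sketch, using invented agent and action names:

```python
# Deny-by-default: an agent may invoke only actions it was explicitly granted.
AGENT_PERMISSIONS = {  # illustrative agent and action names
    "invoice-bot": {"read:erp.invoices", "write:erp.payments"},
}

def authorize(agent_id: str, action: str) -> None:
    """Raise unless the action is on the agent's explicit grant list."""
    if action not in AGENT_PERMISSIONS.get(agent_id, set()):
        raise PermissionError(f"{agent_id} is not granted {action!r}")

authorize("invoice-bot", "read:erp.invoices")      # explicitly granted
try:
    authorize("invoice-bot", "read:hr.salaries")   # denied by default
except PermissionError as e:
    print(f"Blocked: {e}")
```

The inconvenience is deliberate: every new grant becomes a recorded governance decision rather than a silent default.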

    Mandar Patil, Executive Vice President at Cyble, notes that employees may unknowingly expose sensitive information through these tools. “Confidential data, including customer information and internal strategy, can be shared with external AI systems without governance controls,” he says.

    The nature of risk is shifting. “Ungoverned AI introduces new vectors such as unapproved data sharing and model misuse,” Mahapatra says. “The risk is not necessarily larger than traditional threats, but it is more diffuse and harder to detect because it originates from internal user behaviour rather than external attackers.” This creates internal pathways through which sensitive data can move without clear oversight.

    Research Context

    This article is based on interviews with executives from Palo Alto Networks, EPAM Systems, Seclore, Cyble, AHEAD, and Forrester. It also draws on findings from ISACA’s 2026 report, Adopted, not governed: new ISACA research reveals AI blind spot at the heart of enterprise risk, and enterprise AI security assessments published by Palo Alto Networks on Prisma AIRS and agentic AI security.

     

    Autonomous Systems Are Introducing Accountability Gaps

    The next phase of enterprise AI risk is tied to autonomy. AI systems are no longer limited to generating content or assisting workflows. Increasingly, they are making decisions, triggering actions, and interacting with enterprise systems in real time. This introduces governance challenges that organizations are only beginning to confront.

    “One of the biggest concerns is the emergence of autonomous systems operating with excessive privileges and limited accountability,” Ruvala says.

    As agents gain access to enterprise infrastructure, they can execute actions faster than organizations can verify or audit them. This creates scenarios where decisions are made without clear oversight.

    “Once an agent is making decisions across operational workflows, responsibility becomes blurred,” says Singh. The question of ownership becomes complex. It may involve business teams, platform providers, and vendors simultaneously, with no clear lines of accountability.
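One structural response is to make named ownership a precondition of deployment: no agent reaches production without a business owner and a technical owner on record. The registry below is an illustrative sketch under that assumption, not a prescribed design.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentRecord:
    agent_id: str
    business_owner: str   # accountable for the decisions the agent makes
    technical_owner: str  # accountable for behaviour, monitoring, rollback
    vendor: Optional[str] = None

REGISTRY: dict[str, AgentRecord] = {}

def register(record: AgentRecord) -> None:
    """Refuse deployment when accountability is undefined."""
    if not record.business_owner or not record.technical_owner:
        raise ValueError("deployment blocked: no accountable owner on record")
    REGISTRY[record.agent_id] = record

register(AgentRecord("pricing-agent", "vp.pricing@example.com",
                     "platform-team@example.com", vendor="ModelCo"))
print(f"{len(REGISTRY)} agent(s) registered with named owners")
```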

    Mahapatra identifies where this trajectory leads. “The most critical governance failures will emerge from lack of system observability, inadequate monitoring of model behaviour, and weak accountability structures for AI-driven decisions,” he says.

    Another concern is decision drift. AI systems may perform as expected at deployment, but evolve as inputs and context change. Without continuous monitoring, organizations may not detect deviations until they have a material impact. This is particularly significant in regulated industries, where explainability and auditability are becoming mandatory requirements.
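Detecting decision drift does not require elaborate tooling to start. If a system’s decisions fall into a few categories, their mix can be compared against a deployment-time baseline with a standard measure such as the population stability index. The figures below are invented for illustration.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population stability index over pre-binned decision shares;
    values above roughly 0.25 are conventionally treated as significant."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

baseline = [0.70, 0.20, 0.10]  # approve / review / deny shares at deployment
today    = [0.45, 0.25, 0.30]  # the same shares over the last 24 hours

score = psi(baseline, today)
if score > 0.25:
    print(f"Decision drift detected (PSI={score:.2f}); escalate for review")
```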

    What Leaders Cannot See Is Already Creating Exposure

    Across all use cases, one constraint appears consistently: enterprises cannot see what their AI systems are doing in real time. AI systems continuously interact with data, APIs, and workflows. They generate outputs, trigger actions, and influence decisions across distributed environments. Traditional monitoring tools are not designed to track this level of activity.

    “Most organizations cannot fully see where sensitive information is flowing or how AI-generated outputs are influencing decisions,” Ruvala says.

    Mahapatra offers a structural explanation for why this persists. “Governance efforts are often front-loaded at approval stage rather than embedded into runtime monitoring and lifecycle management,” he says. “Organizations know what should not be done but lack mechanisms to observe what is actually happening in production.”
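The runtime counterpart Mahapatra describes is a policy evaluated on every call rather than once at approval. The sketch below illustrates the idea with an invented rule that blocks one class of PII from reaching external models; a real policy engine would cover far more data classes and destinations.

```python
import re

# Illustrative pattern: US Social Security numbers only; real policies
# would cover many data classes, destinations, and contexts.
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def runtime_policy_check(prompt: str, destination: str) -> None:
    """Evaluated on every call in production, not once at approval."""
    if destination == "external" and PII_PATTERN.search(prompt):
        raise RuntimeError("policy violation: PII bound for an external model")

runtime_policy_check("Summarize the ticket backlog", destination="external")
try:
    runtime_policy_check("Customer SSN 123-45-6789 disputes a charge",
                         destination="external")
except RuntimeError as e:
    print(f"Blocked at runtime: {e}")
```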

    This lack of visibility affects multiple layers of risk. It limits the ability to detect anomalies, investigate incidents, and enforce governance policies. It also undermines compliance efforts, particularly as regulations begin to require traceability and accountability. Around 20% of organizations do not know who would be accountable if an AI system caused harm.

Role: C-Suite

Required action: AI systems are being deployed without the ability to control them during failure scenarios. ISACA’s 2026 data shows that 59% of organizations cannot determine how quickly they can halt an AI system, while only 21% can act within 30 minutes. At the same time, adoption is outpacing governance: with 78% of organizations undergoing AI transformation and only 6% operating with adequate guardrails, enterprises are expanding operational risk faster than they are containing it. AI risk is now directly tied to business continuity, regulatory exposure, and service reliability.

Role: Technology and Security Leaders

Required action: Accountability for AI systems remains undefined across many organizations. ISACA data show that 20% do not know who is responsible if an AI system causes harm, while only 38% assign accountability to the board or executive level. Fewer than half are confident they can explain AI-related incidents after the fact. This creates exposure under emerging regulatory frameworks that require explainability, auditability, and executive accountability. AI governance is moving from a technical oversight issue to a board-level liability and compliance concern.

      

Editor’s Note: MIT Sloan Management Review’s AI Research Forum will make its India debut in Bengaluru on 23 July, bringing together enterprise leaders, researchers, and practitioners to examine how autonomous AI is moving from experimentation to governed deployment at scale.
