Synopsis
Executives warned that risks such as prompt injection, data exposure and vulnerabilities in widely used models could trigger large-scale impact, raising the threat of a wave of shadow AI, or the use of unapproved AI tools, if governance frameworks fail to keep pace.
Many firms today lack clear visibility into the AI systems being used as business teams deploy models and pipelines faster than central oversight can keep up, executives said.
“Enterprises know what workloads they deployed six months ago,” said Arjun Nagulapally, chief technology officer at AionOS. “Very few can tell you with confidence what AI workloads are running right now, who authorised them, what data they are touching, and how the models are behaving.”
Companies are now tightening processes, treating AI-generated code on par with human-written code, with strict reviews, testing and audit trails.
But the push for faster adoption is real.
Amith Singhee, CTO at IBM India and South Asia, said the company is experiencing productivity increases of almost 35–40% across various stages of the software engineering lifecycle when AI agents are used effectively.
“There is a growing shift toward specification-based coding and stronger test-driven development, where developers guide AI agents toward defined engineering goals,” he said.
Singhee, however, said governance must keep pace with adoption, especially as AI agents begin connecting with enterprise systems and APIs. “Organisations need a well-defined review of AI-generated code and a security architecture for integrating enterprise systems with these AI agents to minimise risk,” he said.
Business units are spinning up models and pipelines faster than central security teams can track them.
“The demand shift is not just for compute,” Nagulapally said. “It is for visibility, governance, and control infrastructure to keep pace with deployment velocity.”
Sudheer Mathur, senior vice president and managing director of ServiceNow India, said the problem runs deeper than technology. “The controls enterprises built for humans are not suited for AI systems that operate continuously at scale,” he said.
ServiceNow created an internal system to track its models and agents in one place. It has reported nearly $325 million in annual value from its AI projects.
But there are limitations. “AI-generated code is not production-ready by default. It often falls short on security, access controls, and enterprise context,” Mathur explained.
“Without strong governance frameworks, enterprises risk a new wave of shadow AI, similar to the shadow IT challenges seen during early cloud adoption,” he warned.
Organisations that skip those controls are not just taking on technical risk. They are making decisions they cannot see and may not be able to reverse, executives added.
Recent outages caused by AI-generated code and reports of AI systems behaving unexpectedly have brought the issue into focus.
Summer Yue, who leads AI safety and alignment at Meta, said in a post on X that an autonomous agent linked to her Gmail began deleting large numbers of emails despite explicit instructions, illustrating how such systems can exceed their intended limits.
ET could not independently verify the veracity of this claim.
At the same time, the rise of tools like Claude’s coding agents, which can autonomously create and modify software, has accelerated adoption of these tools in businesses.
Ratinder Paul Singh Ahuja, CTO at Everpure, said the company applies the same standards to AI-generated code as it does to code written by human engineers.
“Anything that can touch schemas, data pipelines, policies, or infrastructure is treated as production-grade software, whether or not AI wrote it,” he said. “Every configuration or code change, AI-authored or human-authored, has a clear human owner, a documented approval path, and an auditable trail. AI assistance does not dilute accountability.”
The risks are not limited to careless deployment.
Nagulapally said enterprises are also exposed to attacks that their security teams were never trained to defend against. “When AI agents are given the ability to browse the web, read documents, or query external sources, they become vectors for injecting malicious instructions into enterprise workflows,” he said.
The long-term danger is systemic. A small number of foundation models now underpin hundreds of enterprise applications.
“The AI equivalent of Log4j could be an order of magnitude more consequential, because AI is embedded deeper in decision-making logic than a logging library ever was,” Nagulapally said. A flaw in a model may go undetected for months, silently producing bad outputs or leaking data before anyone notices.