March 20, 2026

Enterprise organisations are accelerating AI adoption despite lacking clear architectures, metrics or validated use cases, according to industry advisers. Experts warn that gaps in evaluation and oversight are already leading to quality, performance and liability risks in both software development and business workflows.
“No one knows right now what the right reference architectures or use cases are for their institution,” said Dorian Smiley, co-founder and CTO of advisory firm Codestrap.
“A lot of people are pretending that they know. But there’s no playbook to pull from,” he added while speaking to The Register.
The concern centres on how AI output is being measured and trusted. In software engineering, AI-generated code can pass unit tests and appear correct while still introducing critical performance or logic issues. Smiley cited an AI-assisted rewrite of SQLite that passed tests but produced code 3.7 times larger and roughly 2,000 times slower, rendering it unusable.
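The failure mode described above can be made concrete with a toy sketch: two implementations that produce identical outputs and pass the same unit test, while one is asymptotically slower. The function names and the timing check are illustrative assumptions, not part of the SQLite case Smiley cited.

```python
import time

def fib_fast(n):
    """Linear-time Fibonacci."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_slow(n):
    """Exponential-time rewrite with identical outputs."""
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

# Both implementations pass the same correctness test...
assert fib_fast(20) == fib_slow(20) == 6765

# ...but a simple timing check exposes the performance regression
# that the unit test alone never sees.
start = time.perf_counter(); fib_fast(28); fast_t = time.perf_counter() - start
start = time.perf_counter(); fib_slow(28); slow_t = time.perf_counter() - start
print(slow_t > fast_t)
```

The point is not the arithmetic but the evaluation gap: a test suite that checks only outputs will accept both versions, which is why performance-aware checks matter when reviewing AI-generated code.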
Current engineering metrics may be contributing to the problem. Measures such as lines of code or pull requests can show improvement with AI tools, but do not capture reliability or system performance. Smiley argues organisations should instead track indicators such as deployment frequency, change failure rates and incident severity, alongside new metrics specific to AI usage, such as token consumption per approved change.
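The metrics Smiley describes can be sketched in a few lines. The data model, field names and sample figures below are hypothetical assumptions for illustration; the two functions compute change failure rate and token consumption per approved change as those terms are commonly understood.

```python
from dataclasses import dataclass

@dataclass
class Change:
    approved: bool        # change was approved and deployed
    caused_incident: bool  # change later triggered an incident
    tokens_used: int       # LLM tokens consumed producing the change

def tokens_per_approved_change(changes: list[Change]) -> float:
    """Average token spend across approved changes only."""
    approved = [c for c in changes if c.approved]
    if not approved:
        return 0.0
    return sum(c.tokens_used for c in approved) / len(approved)

def change_failure_rate(changes: list[Change]) -> float:
    """Fraction of deployed changes that caused an incident."""
    deployed = [c for c in changes if c.approved]
    if not deployed:
        return 0.0
    return sum(1 for c in deployed if c.caused_incident) / len(deployed)

# Illustrative sample data
changes = [
    Change(approved=True, caused_incident=False, tokens_used=12_000),
    Change(approved=True, caused_incident=True, tokens_used=30_000),
    Change(approved=False, caused_incident=False, tokens_used=8_000),
]
print(tokens_per_approved_change(changes))  # 21000.0
print(change_failure_rate(changes))         # 0.5
```

Unlike lines of code or pull-request counts, both measures tie AI usage directly to what ships and what breaks.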
Beyond engineering, similar risks are emerging in business applications. AI-generated reports and content are being used at scale without consistent validation processes. Connor Deeks, Codestrap’s co-founder and CEO, pointed to cases where consulting work had to be refunded due to AI-related errors, highlighting potential exposure to financial and legal consequences.
The issue is compounded by incentive structures. Organisations seeking efficiency gains may reduce human oversight while increasing reliance on AI outputs, creating a gap between productivity gains and quality assurance. “That does not lend itself well to saying all the humans on the team will use AI but review every output,” Smiley said.
Insurers are also responding to the uncertainty. According to the advisers, some underwriters are exploring ways to exclude AI-related risks from coverage where accountability is unclear, raising potential challenges for organisations deploying AI in production systems.
At the same time, pricing pressure is beginning to surface. Clients aware of AI-assisted workflows are negotiating lower fees, particularly in consulting and professional services, where automation reduces perceived labour value.
