Here’s Why 89% Are Trying GenAI But Only 15% Are Scaling It

Rob Acker
CEO, Hubbl Technologies
November 19, 2025

The numbers from the World Quality Report 2025 tell a story that anyone working in enterprise AI already knows. Enthusiasm doesn't equal readiness.

Nearly 90% of organizations are now pursuing generative AI in their quality engineering practices. That's pretty incredible adoption. But the part that makes me pause is that only 15% have achieved enterprise-scale deployment.

That 74-point gap? That's what I call the Agentic Divide: the growing chasm between organizations that are experimenting with AI and those that are actually transforming with it. It's being discussed in every boardroom, and for good reason.

The Shift from Strategic to Technical Barriers

What's particularly revealing is how the challenges have changed over just one year. In 2024, the top obstacles were strategic: lack of a validation strategy (50%), insufficient AI skills (42%), and an undefined organizational structure (41%).

Fast forward to 2025, and the barriers have turned technical: data privacy risks (67%), integration complexity (64%), and AI hallucination concerns (60%).

Think about what that shift means. Organizations have moved past the "should we do this?" phase. They've invested. They've committed. They've built teams. Now they're hitting a different wall entirely: the messy reality of making AI work in their actual business environment.

This isn't a strategy problem anymore. It's a context problem.

AI Without Context Is Just Expensive Noise

Foundation models are incredibly powerful. They can generate content, analyze patterns, and automate tasks at a scale we couldn't have imagined five years ago. But they're essentially blind to the specific business processes, governance requirements, and operational constraints that define each enterprise.

When the report shows that 67% of data and analytics leaders feel pressure to implement AI quickly while 42% lack confidence in the accuracy of AI outputs, that's not a trust issue with the technology. That's a context gap.

Your AI doesn't know that Field A always needs to be populated before Field B can be trusted. It doesn't understand that Process X has compliance implications that Process Y doesn't. It can't see that your team has been working around a technical debt issue for three years that's now blocking your automation strategy.

Without that context, even the most sophisticated AI is just making educated guesses based on incomplete information.
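
To make that concrete, here's a minimal sketch of what a context layer can look like, assuming Python and entirely hypothetical names (FieldRule, BusinessContext, the Field A/Field B dependency). The idea is simply that business rules become data the system checks an AI's output against before anyone trusts it:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: these names are invented for illustration.

@dataclass
class FieldRule:
    name: str         # field whose value depends on another
    depends_on: str   # field that must be populated first
    reason: str       # why the dependency exists

@dataclass
class BusinessContext:
    rules: list[FieldRule] = field(default_factory=list)

    def violations(self, record: dict) -> list[str]:
        """Explain every way a proposed record breaks a known business rule."""
        problems = []
        for rule in self.rules:
            if record.get(rule.name) is not None and not record.get(rule.depends_on):
                problems.append(
                    f"{rule.name} set before {rule.depends_on}: {rule.reason}"
                )
        return problems

context = BusinessContext(rules=[
    FieldRule(name="field_b", depends_on="field_a",
              reason="field_b is only trustworthy once field_a is populated"),
])

# An agent proposes an update; the context layer vets it before anything runs.
proposed = {"field_b": "AI-generated value"}  # field_a was never populated
for problem in context.violations(proposed):
    print("Blocked:", problem)
```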

The Real Cost of the Divide

The report reveals that 89% of data and analytics leaders with AI in production have experienced inaccurate or misleading outputs. Among those training their own models, 55% report wasting significant resources due to bad data.

But the cost isn't just financial—it's organizational. When AI outputs can't be trusted, teams stop relying on them. When automation breaks because it doesn't understand business logic, people build workarounds. When agents make decisions without context, humans have to double-check everything, eliminating the efficiency gains you deployed AI to achieve.

The Agentic Divide isn't just about who has AI and who doesn't. It's about who can actually rely on it to drive business outcomes versus who's stuck in perpetual pilot mode.

What Scaling Actually Requires

The shift to enterprise-scale deployment isn't about buying more compute power or hiring more data scientists. The organizations crossing the divide are addressing three fundamental requirements:

  1. Integration Intelligence: With 42% of enterprises needing access to eight or more data sources just to deploy AI agents successfully, integration isn't a nice-to-have—it's the foundation. But connection without comprehension creates more problems than it solves. Your AI needs to understand not just where data lives, but what it means, how it's used, and what business rules govern it.
  2. Contextual Awareness: When 86% of enterprises require tech stack upgrades to deploy agents, that's not about swapping out old servers. It's about creating the intelligence layer that gives AI the business context it needs to make informed decisions. Think of it like GPS—the satellite signal is impressive, but without knowing where you actually are, it's just noise.
  3. Governance at Scale: Security concerns emerged as the top challenge for both leadership (53%) and practitioners (62%). That's not surprising when you're giving AI agents access to critical business systems. But locking everything down defeats the purpose of automation. The answer isn't more control—it's smarter governance that travels with the context (a rough sketch of the idea follows this list).
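
To illustrate what "governance that travels with the context" could mean in practice, here's a hypothetical sketch; GovernedSource and the catalog are invented names, not a real API. The assumption is that a data source's meaning and its policy live in one record, so any agent that looks up the source inherits its rules:

```python
from dataclasses import dataclass

# Illustrative only: GovernedSource and the catalog entries are invented.

@dataclass(frozen=True)
class GovernedSource:
    name: str
    meaning: str                     # what the data means in business terms
    contains_pii: bool               # drives handling rules downstream
    allowed_actions: frozenset[str]  # the policy rides along with the source

CATALOG = {
    "customer_contacts": GovernedSource(
        name="customer_contacts",
        meaning="Primary contact records; the source of truth for outreach",
        contains_pii=True,
        allowed_actions=frozenset({"read", "summarize"}),  # agents never write here
    ),
}

def agent_can(source_name: str, action: str) -> bool:
    """Check a proposed agent action against the policy attached to the source."""
    return action in CATALOG[source_name].allowed_actions

print(agent_can("customer_contacts", "summarize"))  # True
print(agent_can("customer_contacts", "delete"))     # False
```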

Moving from Pilot to Production

So how do organizations cross the divide? Based on what we're seeing across thousands of deployments, the ones succeeding share a common approach:

  • Start with the intelligence layer, not the agent layer. Before you deploy another AI agent, ask whether it has the context it needs to operate safely and effectively. Can it see your technical debt? Does it understand your business processes? Does it know which changes will impact compliance? (A simple readiness gate along these lines is sketched after this list.)
  • Treat integration complexity as a context problem, not a plumbing problem. Connecting systems is table stakes. The value comes from helping AI understand what those systems mean to your business.
  • Build trust through transparency, not just testing. When AI can explain why it's making a recommendation—grounded in your specific business context—adoption accelerates. When it can't, even perfect accuracy won't overcome skepticism.
  • Democratize the intelligence, not just the tools. Give everyone in your organization access to the context AI needs to be useful. When admins, developers, architects, and business users all operate from the same source of truth, agents become enablers instead of experiments.
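
As a rough illustration of that first point, a context-readiness gate doesn't have to be sophisticated; it can be a checklist the deployment pipeline enforces. The checks below are hypothetical examples, not a standard:

```python
# Hypothetical deployment gate; the check names are examples, not a standard.

READINESS_CHECKS = {
    "sees_technical_debt": "known workarounds and debt in the agent's scope are documented",
    "understands_processes": "the business processes the agent touches are mapped",
    "knows_compliance_impact": "the changes it can make are tagged with compliance implications",
}

def context_gaps(answers: dict[str, bool]) -> list[str]:
    """Return every readiness check that should block deployment."""
    return [desc for key, desc in READINESS_CHECKS.items() if not answers.get(key, False)]

gaps = context_gaps({"sees_technical_debt": True, "understands_processes": False})
if gaps:
    print("Not ready to deploy. Missing context:")
    for gap in gaps:
        print(" -", gap)
```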

The Path Forward

The World Quality Report confirms what many of us have been experiencing firsthand: the journey from AI experimentation to enterprise-scale deployment is more complex than the initial hype suggested. But complexity doesn't mean impossible.

The organizations that will lead in the agentic era won't be the ones with the most AI agents. They'll be the ones that gave those agents the context to actually do their jobs.

The divide exists. But it's not permanent. It's a choice—between continuing to experiment with AI that lacks the intelligence to be useful, or investing in the context layer that makes AI actually work.

The 15% who've crossed it aren't smarter or luckier. They just understood that AI without context is blind, and they built the intelligence layer to help it see.