Shadow AI is a fast-developing offshoot of shadow IT, the longstanding challenge where employees deploy unsanctioned software, cloud services, or systems within the enterprise.
This hidden usage makes it nearly impossible for leaders to accurately assess the true state of AI adoption within their companies. According to Cisco’s 2025 Cybersecurity Readiness Index, 60% of organizations lack confidence in their ability to detect unregulated AI deployments, which pose significant cybersecurity and data privacy risks.
Countless shadow AI scenarios are playing out every day across industries in Asia Pacific. Shadow AI isn’t just a risk—it’s a clear indicator of where official AI projects and governance aren’t meeting user needs.
In this interview, Ed Keisling, Chief AI Officer, Progress Software, shares his perspective on how shadow AI is quietly shaping—and sometimes derailing—enterprise AI adoption, and discusses how we can address this growing and often unseen challenge lurking in the shadows of our organizations.
What challenges do organizations face when trying to measure ROI for real AI adoption?
Keisling: One of the biggest challenges is that many organizations still lack a clear AI strategy that connects initiatives to business outcomes. Without that alignment, it’s easy to fall into the trap of tracking vanity metrics—usage stats, prompt counts, or meeting summaries—that don’t reflect actual impact.
Just because a team is using AI doesn’t mean they’re using it well or driving meaningful results.
At Progress, we’ve seen firsthand that adoption alone is a poor proxy for success. You need to measure proficiency, effectiveness, and ultimately whether AI is helping teams move the needle on core business KPIs like conversion rates, retention, or cycle times.
Speed without direction is dangerous—true ROI comes from velocity, which requires both momentum and a clear vector.
Gartner predicts that by 2027, 75% of employees will acquire, modify or create technology outside IT’s visibility, up from 41% in 2022. What is driving the surge in shadow AI?
Keisling: Shadow AI is a symptom of unclear policies, insufficient enablement, and a lack of trust in sanctioned tools. Employees will always gravitate toward what’s easy, accessible, and exciting.
If they’re bypassing enterprise solutions, it’s worth asking: Are the tools we’ve provided intuitive and powerful enough? Do our policies empower experimentation or stifle it? And what can we learn about why our teams are making that choice?
The explosion of AI vendors and the rapid pace of innovation make it tempting to chase the “shiny object,” but that leads to fragmentation and a lot of false starts. At Progress, we’ve found that picking a leading platform and driving deep proficiency yields better outcomes than constantly switching.
In most cases, we’ve found it’s a lack of awareness of what’s available or possible with the approved tools that drives shadow AI use. You need to close that gap with strong governance and enablement; otherwise, shadow AI becomes inevitable.
What are the security implications of employees trusting personal AI tools over sanctioned enterprise solutions?
Keisling: The risks are significant. When employees use unsanctioned AI tools, they may unknowingly expose sensitive data to external systems that lack proper security controls. This opens the door to data breaches, compliance violations, and reputational damage.
Beyond that, shadow AI undermines governance—there’s no way to ensure responsible or ethical use, and no visibility into how decisions are being made. Unvetted models can produce inaccurate or fabricated output, which can lead to poor decision-making.
We regularly review our policies to ensure teams have clear guidance and safe environments to experiment in, but always within guardrails that protect the business.
How can organizations strike the right balance with governance, education and the right AI infrastructure?
Keisling: It starts with clarity—clear policies that define what’s allowed, what’s expected, and how to stay compliant. But governance alone isn’t enough.
You need to create a culture of psychological safety where teams feel empowered to experiment and fail. That means providing sandboxes, access to infrastructure, and time for hands-on learning. Hackathons, communities of practice, and structured enablement programs are key.
We’ve seen that the best way to drive adoption is to give teams a use case that delivers an “Aha!” moment. Once they see the value, the transformation becomes bottom-up and self-sustaining, and teams are better equipped to take advantage of future enablement.
Transparency is also critical—celebrating wins, sharing failures, and inspiring others to innovate fuels the kind of grassroots momentum that makes AI adoption real.