What systemic challenges stand in the way of software quality in the fast-moving digital economy? Does AI really help?
As enterprises and governments ramp up AI adoption, organizations in Asia Pacific face growing pressure to turn their AI ambitions into real-world business gains.
In a digital economy driven by cloud-first business environments and mobile-first customer needs, organizations are required to develop and deliver software applications at an ever-increasing pace to meet business demands. This compresses the entire software development lifecycle – from conceptualization to coding to testing to delivery – affecting the quality and security of the applications developed, and ultimately business performance and reputation.
What role does Agentic AI play in enhancing software quality and business resilience for the region? DigiconAsia finds out from Damien Wong, Senior Vice President, Asia Pacific, Tricentis.
How does software quality – or the lack thereof – affect businesses?
Wong: The most immediate and visible consequence of poor software quality is financial loss. Our recent 2025 Quality Transformation Report, which surveyed 500 technology leaders in Singapore, revealed that nearly three-quarters (74%) of Singaporean organizations believe poor software quality costs them at least US$500,000 annually.
Ensuring software quality requires robust testing strategies. However, with the advent of AI, code is being generated at breakneck speeds. Businesses are finding it increasingly difficult to maintain speed while thoroughly testing more frequent and complex deployments.
In fact, the same report found that nearly half (47%) of organizations are shipping code changes without fully testing them, citing pressure to expedite release cycles. This creates a dangerous cycle where the race for speed compromises quality, ultimately threatening business outcomes.
Holistic testing strategies allow businesses to achieve both speed and quality without compromise. A recent IDC Business Value study found that organizations using comprehensive testing solutions realized benefits worth an average of US$5.33 million per year, while requiring 51% less time to complete testing cycles. These improvements enhance business resilience and bottom-line outcomes, validating the critical role of software quality in driving business success.
What are the key challenges impacting software quality?
Wong: While it’s clear that organizations see the value of software quality, several systemic challenges are standing in the way of progress.
Our 2025 Quality Transformation Report found that weak feedback loops and misalignment with leadership are driving rushed releases and unclear quality standards. In Singapore, 46% of organizations cite improving software quality and speed as a top priority, significantly higher than the global average of 13%. Yet nearly half still ship untested code, driven by pressure to release quickly and accidental oversights.
Legacy infrastructure compounds the problem. Many businesses still rely on outdated systems that weren’t built for today’s fast-paced, modern development and delivery models. As teams struggle to integrate old systems with new tools, testing slows down, reliability suffers, and the risk of failure grows with every release.
In fact, more than six in 10 (61%) Singaporean organizations expect to be at risk of a software outage within the next year, and 7% of respondents have already experienced a major software outage this year.
In response, organizations are rethinking their approach to software delivery. A growing number are looking to AI, with Gartner forecasting that by 2027, 80% of enterprises will have integrated AI-augmented testing tools.
In Singapore, our own study found that 80% of respondents are excited about AI agents taking over repetitive tasks, while 94% plan to increase their use of AI in software testing. This signals a shift from basic automation to more intelligent, agent-driven quality strategies that support both speed and resilience.
What is the role of Agentic AI in the software development lifecycle, and how is it evolving?
Wong: As testing demands grow in both scale and complexity, many teams are hitting a ceiling with traditional tools. We are seeing a new generation of capabilities, driven by agentic AI, redefining what is possible. Our research shows that mature DevOps teams that use AI in testing are nearly 30% more likely to consider themselves effective.
Agentic AI takes this a step further. Unlike traditional assistive tools, it operates with autonomy, capable of planning, executing, and adapting without constant human oversight or predefined instructions. To realize its full potential, teams must first address integration challenges around security, context, and compatibility. This is where the Model Context Protocol (MCP) plays a critical role. Serving as a universal interface, MCP connects agentic AI systems with enterprise-grade testing tools to reduce agentic-AI sprawl and its complications, enabling secure, contextual integration into existing pipelines.
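The universal-interface idea can be pictured with a minimal sketch. This is not the official MCP SDK or any Tricentis API – just a hypothetical tool registry showing how an agent might discover and invoke testing tools through one uniform surface instead of bespoke glue code per tool:

```python
# Simplified sketch of the MCP idea: a server advertises "tools" with
# descriptions, and an AI agent discovers and invokes them through one
# uniform interface. All names here are hypothetical.

class ToolServer:
    """Registers callable tools and exposes them to an agent."""

    def __init__(self):
        self._tools = {}

    def tool(self, name, description):
        """Decorator that registers a function as a discoverable tool."""
        def register(fn):
            self._tools[name] = {"description": description, "fn": fn}
            return fn
        return register

    def list_tools(self):
        """What an agent sees when it asks the server for capabilities."""
        return {name: meta["description"] for name, meta in self._tools.items()}

    def call(self, name, **kwargs):
        """Uniform invocation path, regardless of the tool behind it."""
        return self._tools[name]["fn"](**kwargs)


server = ToolServer()

@server.tool("run_test_suite", "Execute a named test suite and report results")
def run_test_suite(suite):
    # Stand-in for a real test-tool integration.
    return {"suite": suite, "passed": 41, "failed": 1}

# The agent first discovers capabilities, then invokes one by name.
print(server.list_tools())
print(server.call("run_test_suite", suite="checkout-regression"))
```

The point of the pattern is that adding a new testing tool only means registering it; the agent's discovery and invocation path never changes.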
To continue optimizing workflows in the software development lifecycle, AI agents are taking on broader responsibilities like analyzing test histories, adapting to enterprise-specific context, and interpreting visual elements across platforms.
Innovations such as Tricentis Agentic Test Automation introduce AI agents capable of generating complete test cases from natural language prompts, offering customers ultimate flexibility to build, deploy, and scale AI-powered testing.
When implemented effectively, agentic AI enables faster, smarter, and more resilient software delivery. As adoption grows, organizations are well-positioned to accelerate innovation to meet local demand with confidence.
What is the “UI for AI” concept? How does it improve software testing and quality?
Wong: “UI for AI” is a concept developed by Tricentis to enable AI agents, such as those from OpenAI, to securely and efficiently interact with enterprise-grade testing tools such as the Tricentis solutions Tosca, NeoLoad, and qTest.
By providing a structured interface between AI systems and testing platforms, this approach allows AI agents to become active participants in the testing process. Instead of relying solely on human intervention, AI agents can now autonomously generate, maintain, and execute tests based on context, user prompts, or system changes.
AI agents now play a transformative role across the software testing lifecycle.
Firstly, AI agents can automatically generate full test cases from natural language prompts, using contextual understanding of the system. Early adopters of Tricentis Tosca have reported 85% time savings in test creation and a 60% gain in overall productivity.
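As a rough illustration of the prompt-to-test idea, the sketch below expands a plain-English request into ordered test steps via a keyword lookup. Real agentic tools use an LLM with contextual understanding of the system under test; the lookup table and every step name here are deliberate simplifications:

```python
# Toy illustration of natural-language test generation: map phrases in a
# request onto reusable test steps. All phrases and steps are hypothetical.

STEP_LIBRARY = {
    "log in": ["open login page", "enter credentials", "submit form"],
    "add to cart": ["search product", "open product page", "click add-to-cart"],
    "check out": ["open cart", "enter payment details", "confirm order"],
}

def generate_test_case(prompt):
    """Expand a plain-English request into an ordered list of test steps."""
    steps = []
    for phrase, actions in STEP_LIBRARY.items():
        if phrase in prompt.lower():
            steps.extend(actions)
    return steps

case = generate_test_case("Verify a user can log in and check out")
print(case)
```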
Beyond test generation, AI agents can also detect patterns in test failures and adjust test execution accordingly, reducing false positives and ensuring more reliable results. By enabling natural-language interaction, non-technical users can contribute to test creation and analysis, improving collaboration and aligning tests more closely with business goals.
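The failure-pattern idea can be sketched as a simple triage over each test's recent pass/fail history, so likely-flaky tests are retried or quarantined rather than blocking a release. The thresholds, categories, and test names below are illustrative assumptions, not Tricentis logic:

```python
# Sketch of failure-pattern analysis: classify each test from its recent
# run history so flaky tests (likely false positives) are handled
# differently from genuine regressions.

def classify(history):
    """history: list of booleans, True = pass, newest run last."""
    if all(history):
        return "stable"
    if not any(history):
        return "consistently failing"  # likely a real defect, investigate first
    # Count pass/fail flips; frequent alternation suggests flakiness.
    flips = sum(1 for a, b in zip(history, history[1:]) if a != b)
    return "flaky" if flips >= len(history) // 2 else "intermittent"

runs = {
    "test_login":   [True, True, True, True],
    "test_payment": [False, False, False, False],
    "test_search":  [True, False, True, False],
}
triage = {name: classify(h) for name, h in runs.items()}
print(triage)
```

A pipeline using this kind of triage might rerun "flaky" tests before failing a build, while escalating "consistently failing" ones immediately.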
Furthermore, AI agents can analyze test outcomes in real-time, prioritize risks, and surface insights that help teams fix critical issues faster.
“UI for AI” represents a meaningful shift in autonomous testing, transforming it from a manual, brittle process into an intelligent, adaptive system that delivers faster releases, fewer bugs, higher-quality applications, and stronger collaboration across teams.