Moving beyond traditional research to faster, richer insights that fill gaps in customer understanding: can synthetic data and AI transform how business decisions are made?
AI and synthetic data are helping to provide faster, richer insights into customer behaviour and experience. But questions remain about data quality, accuracy and trust.
What challenges and opportunities does synthetic research bring to the C-suite, and how can organizations overcome concerns about quality and trust when working with synthetic data?
DigiconAsia.net discusses some research findings from Qualtrics with Hui Ching Tan, Head of Research Insights, APJ, Qualtrics:
What are the biggest barriers preventing organizations from turning data into action?
Tan: The cost and complexity of data collection remain a significant challenge, but what I’m seeing across APAC is that the real barrier is data fragmentation. Organizations are spending substantial amounts: our research shows that 83% of Australian companies and 61% of Singapore companies already dedicate over 10% of their marketing budgets to business intelligence. Yet this data sits in silos across different platforms and departments.
The fundamental issue is ensuring that the data collected actually answers the business question at hand. When data is scattered and disconnected, it becomes expensive and time-consuming to synthesize into actionable insights. This cost barrier often leads organizations to skip research altogether, which is counterproductive.
The second major barrier is data quality and accuracy concerns. With the rise of AI-generated data and synthetic research, marketing leaders are rightfully asking: where does this data come from and what is its quality?
There’s particular skepticism in markets like Australia, where 58% of marketers express concerns about AI or synthetic data, which is significantly higher than Singapore at 44%. This quality question becomes even more critical when you consider that data may be coming from bots or AI systems rather than traditional human sources.
How are AI and synthetic research changing the way business leaders approach decision-making?
Tan: Synthetic research is fundamentally transforming the speed at which we can turn data into insights, a skill that’s key for business leaders. In APAC, we’re seeing remarkable adoption rates: Singapore leads globally with 63% of organizations already using synthetic data.
What excites me most is how synthetic data challenges us to think differently about research methodology. The question isn’t just “How do we replicate what human panels do?” but “How can we use synthetic research to uncover insights that human panels cannot?”
For instance, synthetic data allows us to simulate hard-to-reach segments or test scenarios that would be impossible or prohibitively expensive with traditional methods. We’re seeing organizations achieve up to 50% cost reductions while dramatically improving their time-to-insight from weeks to minutes.
But synthetic research is just the beginning. While it is grounded in machine learning, I see it as a stepping stone to agentic AI – systems that don’t just share insights, but actively advise on what to do next.
We’re moving beyond simply churning out data insights to building systems that augment human research capabilities. The goal is to enhance what we know about research with humans, not replace the human element entirely.
Bias and accuracy remain top concerns. How can companies address these responsibly?
Tan: This is absolutely critical, and I’m encouraged that 79% of marketing leaders in our study express concern about potential AI bias affecting insight accuracy. The key to reducing bias lies in what we call data hydration, which is essentially ensuring we gather good representation across diverse datasets.
Our approach involves training models with both operational data, such as sales data, and publicly available data sources. But it doesn’t stop there. We hydrate our data models monthly with fresh data to keep them current and comprehensive. This includes ensuring diverse representation across different regions, cultures, and demographic segments, which is particularly important for a region like APAC, with such strong diversity.
Diversity in data is crucial for ensuring minority voices are heard across different regions and cultures. When we hydrate our data models, we take responsibility for ensuring anonymized data is being used, and used appropriately.
At Qualtrics, we’ve developed a rigorous four-step validation framework that tests for generalization, data shape, diversity, and transferability. This systematic approach to validation helps ensure the synthetic data we generate is not only accurate but also representative of the populations we’re trying to understand.
We’re also ISO 42001 certified for our data handling: a third-party validation confirming that Qualtrics has the frameworks and governance in place to maintain the highest standards of security, privacy and ethical AI practices across our platform.
What steps should leaders take to integrate AI into their existing systems without losing trust?
Tan: My advice is always: take it slow and steady. Start with low-risk applications first. Begin with activities like concept testing or survey design optimization where you can really experiment and compare the differences between human and synthetic responses. This allows your team to build confidence gradually while understanding the technology’s capabilities and limitations.
Once you’ve established trust in these foundational applications, you can slowly move to more advanced stages where AI becomes integrated into your workflow. For example, you might use AI agents to help design surveys from scratch or automate parts of your analysis process.
The final and most crucial step is ensuring quality throughout the process. This means having robust systems to interpret data insights and reduce what we call “data hallucination” – outputs that aren’t relevant or logical.
Quality control mechanisms need to be built into the model to catch and correct these issues before they impact business decisions. AI ethics must be at the center of this integration. When we launch synthetic capabilities, we take validation extremely seriously. We ask critical questions: Is the data diverse? Is it transferable across different contexts? Are we maintaining the highest standards of data ethics?
This responsible approach to AI implementation is what builds and maintains trust with stakeholders.
Guesswork is one of the most expensive strategies in business. The executives who master AI-driven decision-making will be the ones driving sustainable growth in an increasingly competitive landscape.