The APAC region can benefit from the lessons about grassroots trust and consultation gleaned from a data governance mapping project
Across the Asia-Pacific region (APAC), AI is reshaping key economic sectors, driven by a young and increasingly connected population eager for digital innovation.
However, the rapid integration of AI technologies raises significant regulatory challenges. In a region characterized by a pervasive digital divide, governments and businesses recognize the urgent need to develop regulatory frameworks that support responsible AI.
In this context, APAC regulators and policymakers need full visibility into the factors that make AI environments and ecosystems conducive to responsible AI emerging and thriving.
Trust: a growing imperative in AI governance
The rapid digitalization of society and the economy poses numerous challenges, notably the lack of consultation, participation, and representation in AI policy design. This is especially urgent because users who do not understand or accept the way AI is governed may develop a sense of distrust towards AI companies and the policymakers regulating these technologies.
According to a report from the Global Data Governance Mapping Project, 43 of the 68 nations studied had an AI strategy in place, but only 18 had attempted to engage citizens when developing it. Where engagement did occur, the people commenting on the strategy were generally individuals who were already familiar with AI and could articulate their concerns. The study also found that most governments did not necessarily make changes in response to the comments they received.
The report concluded that AI governance may be “for the people”, but it was not “by the people”: that is, even the frameworks developed to enable responsible AI “may not be entirely responsible themselves”. These findings matter because they directly affect the trust that citizens place in AI governance frameworks. As AI systems become more embedded in daily life, the potential for harm grows if these systems are not designed with fairness, transparency, and accountability.
In this context, protecting citizens and consumers in digital economies is not just about safeguarding data; it is about ensuring that the regulation and deployment of AI technologies do not inadvertently lead to discriminatory outcomes or the perpetuation of existing social disparities.
Addressing the digital divide
The question of trust in both AI and those regulating it is an important one for citizens and consumers in APAC. If effectively regulated, AI can help address a number of longstanding challenges that make it difficult for people to benefit from and contribute to the digital economy.
The region faces a pervasive digital divide, which stems from significant disparities in income, education, health outcomes, and digital access. Framing and regulating AI in a way that empowers all segments of society could provide innovative solutions to complex issues, such as poverty reduction, fair elections, natural disaster mitigation, and smart urbanization.
However, if AI permeates society unchecked, it can have the opposite effect, sidelining rural populations, the unconnected, the unbanked, women, and ethnic minorities, and leaving these groups on the fringes of a burgeoning AI-enabled economy.
To ensure AI is used to foster social and economic inclusivity, APAC economies must focus on responsible AI development. This involves:
- Crafting AI technologies that minimize harms and foster economic growth and societal integration
- Establishing a robust framework for AI that emphasizes ethical guidelines and inclusive policies
By doing so, APAC economies can make AI a powerful tool for fundamental social change, ensuring that technological advancements benefit the entire region equitably.
Fostering a responsible regional AI ecosystem
As APAC nations continue to integrate AI into their infrastructures and economies, the imperative is clear: they must cultivate regulatory environments that foster responsible AI development while safeguarding consumer interests and promoting robust data governance.
The key to addressing both the pervasive digital divide and the risk of distrust in AI governance is to first assess the extent to which AI ecosystems are ready to develop and deploy AI responsibly and ethically. There is an urgent need to effectively track and improve countries’ ability to design and implement foundational regulatory and governance frameworks concerning AI.
There are numerous ways to approach this challenge, including the various “global AI readiness” indices. Whatever process is adopted, assessing a country’s capacity for responsible AI as a first step will leave APAC policymakers better equipped to understand and address the cross-cutting regulatory issues that contribute to the region’s digital divide.
It is only by accommodating the wide variety of financial, institutional, and political models that characterize APAC that policymakers can produce an equitable and sustainable strategy for advancing AI readiness.
Collaborative efforts among governments, industry stakeholders, and consumers are essential in setting a global benchmark for responsible AI practices, ensuring technological advancements contribute constructively to societal well-being and promote a digitally inclusive future.