To ensure responsible and ethical use of AI, we need to build trust in data management and governance. But is it easier said than done?
Trustworthy artificial intelligence (AI) is the cornerstone of future AI applications. We learned our lessons (or have we?) from the runaway, lackadaisical approaches to big data management, which made data-related regulation extremely difficult and painful for governments, businesses and individuals over the last decade.
Bearing in mind the complexities involved in regulations and compliance, how should we go about ensuring trustworthy data management and trustworthy AI? DigiconAsia looked for some answers from Kalliopi Spyridaki, Chief Privacy Strategist, EMEA & Asia Pacific, SAS.
What are some key challenges that organizations face when implementing privacy, AI and data regulation effectively in the APAC region?
Kalliopi Spyridaki (KS): One of the primary challenges in the APAC region is the sheer volume and diversity of data-related laws. The different governments drafting these laws understandably have varying domestic priorities and visions. In practice, however, organizations are faced with complexities in implementation.
However, if we look at the principles behind these laws, they are strikingly similar. For instance, the rapidly growing body of AI policy and regulatory requirements recently adopted across the region, and indeed globally, typically centers on transparency, explainability, fairness, accuracy and human oversight.
Thus, in practice, organizations can focus on these principles while remaining flexible enough to adapt based on the local requirements in each jurisdiction.
What are the practical implications for organizations operating in the APAC region regarding compliance?
KS: The diverse regulatory landscape is a challenge for organizations trying to implement consistent data management and governance policies. It requires organizations to be adaptable and versatile in their compliance efforts.
It might be harder for smaller organizations with fewer resources to stay up to date with local initiatives across the region and the evolving AI governance frameworks, which can influence company operations. Support in the form of advisory services from governments, especially for small and medium-sized enterprises (SMEs), helps lower the financial and compliance burdens on smaller companies.
We have also seen increased willingness for collaboration among enterprises of all sizes with a view to fostering innovation. These partnerships can be particularly beneficial for the entire AI supply chain and ecosystem.
Despite the legal compliance complexities, regulation does not need to stifle innovation. Thoughtfully crafted balanced regulations, created with input from various stakeholders, can provide the necessary clarity and certainty for developing and deploying trustworthy AI technologies.
Regulations that ensure the ethical use of AI and protect data privacy can enhance trust and drive responsible innovation. For instance, synthetic data generation and AI model cards, the equivalent of nutrition labels on food products, are innovative technological options that help address ethical and regulatory concerns. These can help meet compliance requirements while promoting ethical practices.
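To make the model-card idea concrete: a model card is typically published as structured metadata accompanying a model, documenting its intended use, training data and known limitations. The sketch below is purely illustrative, in Python; the field names, model name and values are hypothetical assumptions, not any standard schema or SAS product feature.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Illustrative model card: a structured 'nutrition label' for an AI model."""
    model_name: str
    intended_use: str
    training_data: str
    evaluation_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

# Hypothetical example values for illustration only.
card = ModelCard(
    model_name="credit-risk-scorer-v2",
    intended_use="Rank loan applications for human review; not for automated denial.",
    training_data="Anonymized 2020-2023 loan records, with synthetic augmentation.",
    evaluation_metrics={"AUC": 0.87, "demographic_parity_gap": 0.03},
    known_limitations=["Not validated for applicants under 21"],
)

# Serialize to a plain dict for publication alongside the model.
print(asdict(card)["model_name"])  # prints "credit-risk-scorer-v2"
```

Publishing such a card alongside each deployed model gives regulators and users a consistent, inspectable summary, which is one practical way the transparency and explainability principles mentioned above can be operationalized.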
What are some emerging trends in AI privacy regulations in the APAC region and how might these trends impact organizations?
KS: Privacy regulations in APAC, updated or introduced over the last few years, today provide a well-established framework. AI regulation in the region, on the other hand, is still in its infancy. Many countries, including Singapore, have produced comprehensive guidelines in various areas, for instance AI governance or generative AI use.
Countries like Japan, Australia and India are currently considering targeted AI regulation, so this is a landscape that is still being shaped and will continue to evolve over the next few years, further influenced by global AI regulatory developments.
A common regulatory thread among governments in the APAC region seems to be the importance of AI governance frameworks related to AI risk management and accountability. Businesses are expected to implement robust AI governance mechanisms to ensure responsible and trustworthy AI use and development.
We observe how the Monetary Authority of Singapore is partnering with financial institutions to improve trustworthy AI adoption in the financial sector. The Malaysian government has also doubled the budget of the National Scam Response Centre (NSRC) due to rising incidents, and created the National Fraud Portal (NFP) to gather information on fraudulent bank accounts.
Global, or at least interoperable, standards for AI development and use would be truly useful for companies. These would also help the uptake of AI technologies across borders, promoting economic growth and ultimately benefiting consumers too. In the next decade, we can certainly expect more AI regulation, but hopefully also a harmonized approach globally, similar to how data privacy regulation evolved over the last decade.
Robust data management and responsible data practices will continue to support organizations in their journey towards trustworthy AI.