Which countries are leading the charge? What jurisdictional elements are involved? Find out more and be ahead of the curve…

The advancement of AI, and warnings about its potential for abuse and overreach, have prompted lawmakers in various jurisdictions to set out approaches for regulating the technology without stifling innovation.

For instance, the UK’s pro-innovation AI regulatory framework, published in Mar 2023, proposed principles covering safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

Similarly, in Aug 2021, Hong Kong’s Privacy Commissioner published the Guidance on Ethical Development and Use of AI, setting out ethical principles for the development and use of AI, including accountability, human oversight, transparency and fairness.

While approaches to governing the use and deployment of AI vary by jurisdiction, some common regulatory themes have emerged, including:

    • Fairness
    • Explainability
    • Transparency
    • Non-discrimination
    • Accountability
    • Governance
    • Ethics and human oversight

Similarly, voluntary guidelines issued by the Monetary Authority of Singapore in Feb 2022 for the regulation of AI use in the financial sector include the principles of “Fairness, Ethics, Accountability and Transparency” (FEAT).

However, in the AI regulatory race, two jurisdictions are emerging ahead of the curve by going above and beyond these common themes: the EU and China.

EU AI regulations

The EU’s proposed Artificial Intelligence Act (EU AI Act) seeks to introduce a first-of-its-kind regulatory framework with extraterritorial reach, meaning that the Act will apply to both EU and non-EU persons, as long as the latter provide, deploy, import or distribute AI systems in the EU.

The crux of the EU AI Act is to classify AI systems into categories and then regulate them in proportion to the level of risk each category presents. The five categories are:

    • Prohibited uses: Systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behavior and socio-economic status)
    • High-risk: These include AI systems that are used for credit scoring as well as employee recruitment. Such systems will be subject to extensive regulations under the EU AI Act, including but not limited to:
      • implementing a risk management system throughout the entire system lifecycle
      • having data management and data governance policies in place
      • conducting a fundamental-rights impact assessment
      • keeping a detailed system activity log so as to ensure traceability
      • ensuring sufficiently transparent operation and providing clear user instructions
      • ensuring appropriate human oversight
      • maintaining a high level of accuracy, robustness and cybersecurity
    • Foundation models
    • Human interaction and deepfake models
    • All AI models

The remaining three categories of AI systems in the list above are also covered by the Act, albeit under less stringent obligations in some cases.

While EU lawmakers finalize the legislative text, breaches could attract fines of up to €40,000,000 or up to 7% of a firm’s global turnover for the preceding year, whichever is higher.
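To make the penalty cap concrete, here is a minimal sketch of the “whichever is higher” rule, assuming turnover is expressed in euros; the function name and example figures are illustrative only, not part of the Act:

```python
def max_ai_act_fine(global_turnover_eur: float) -> float:
    """Illustrative sketch: the draft EU AI Act caps fines at the
    higher of EUR 40,000,000 or 7% of the firm's global turnover
    for the preceding financial year."""
    return max(40_000_000.0, 0.07 * global_turnover_eur)

# A firm with EUR 1bn turnover: 7% (EUR 70m) exceeds the EUR 40m floor.
print(max_ai_act_fine(1_000_000_000))  # 70000000.0

# A firm with EUR 300m turnover: 7% (EUR 21m) is below the floor,
# so the EUR 40m figure applies.
print(max_ai_act_fine(300_000_000))    # 40000000.0
```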

China AI regulations

China has also been active in developing a regulatory framework for the AI industry, with the Cyberspace Administration of China (CAC) leading the charge. 

In May 2022, the country’s Internet Information Service Algorithm Recommendation Management Regulation came into effect, giving users the right to be informed about, and to opt out of, the recommendation algorithms that underpin many AI-driven services.

Additionally, certain algorithms — such as those with the capability to shape public opinion, or to mobilize society — are subject to stringent administration and filing requirements that include submitting extensive documentation to the CAC, including a self-assessment report. 

In Apr 2023, the draft Measures for the Management of Generative Artificial Intelligence Services signalled that, under upcoming generative AI legislation, all generative AI products and services would be subject to a security assessment by the CAC.

Further, algorithms and generated content must adhere to the “orientation of mainstream values” and promote “positive content”, obligations often described as requiring compliance with core socialist values.

What next?

As regulators worldwide grapple with the implications of fast-changing AI technologies under the high-level ethical principles of AI governance, the EU and China stand ahead: China in mandating security assessments of generative AI services, and the EU in defining extensive transparency requirements and banning AI use in certain high-risk situations.

Given the extraterritorial effect of the EU AI Act, firms and enterprises looking to market their AI products in the EU will be expected to comply with the Act.

Businesses operating in China should closely monitor the country’s evolving AI legislation as it is finalized.