It may be the first comprehensive legislation on responsible AI development, but the unpredictability and diversity of the technology raise questions…
AI has allowed organizations to explore new opportunities and to augment, scale, and develop their offerings. However, the very doors that AI has unlocked have also expanded opportunities for cybercriminals and heightened concerns around safety and compliance.
This has reached a tipping point, so much so that governments and agencies across the globe have directed significant efforts towards re-evaluating current AI regulations and establishing new directives to help keep AI risks and related threats at bay.
The newly enacted European Union AI Act, for example, is the world’s first comprehensive AI legislation. It aims to strike a delicate balance between safeguarding fundamental rights, ensuring compliance and safety, and supporting innovation. Similarly, in October 2023, the Biden Administration issued an executive order on the use and development of AI, followed in April 2024 by a pledge from the US National Institute of Standards and Technology (NIST) to help improve the safety, security, and trustworthiness of AI systems.
Can these laws work?
Editorial note: In an interesting blog piece by Deloitte, the fast-and-hard push towards AI development is likened to the existential threat posed by advanced extraterrestrial beings suddenly arriving on Earth brandishing incredible technologies that “captivate some and terrify others. Governments quickly convene to try and make rules for how these enigmatic beings will live and work among humans.”
As noted in Deloitte’s blog, the AI governance proposals being made around the world take different approaches, yet they follow analogous paths towards policy, and parallels can be drawn between them. This conclusion was reached after analyzing over 1,600 AI policies from 69 countries and the EU, spanning regulation, research grants, and national strategies.
Editorial note: Here are some key concepts proposed by the Deloitte blog:
- Amid diverging proposals for coping with AI’s dual-edged capabilities, and against expectations, most countries analyzed have so far approached the problem with a very similar set of policy responses
- The general approach has been to Understand the problem better; Grow support for AI to leverage its positive potential; and Shape the resulting development with voluntary standards and regulations as needed
- It is at the Shape stage that the countries in the analysis diverged, owing to cultural and experiential differences. The divergence suggests that governmental action is at an inflection point. Who should have jurisdiction over governing AI? How should societies balance the competing imperatives of innovation and safety? What is government’s role in responding to AI developments? The complex answers to these questions are still in the making.
- Governments may want to revisit overlooked tools such as outcome-based and risk-weighted regulations that, rather than mandating the right outcomes in all circumstances, focus on incentivizing desired outcomes regardless of how the technology develops
Making sense of AI governance
Ultimately, while regulations are being enacted in different countries and regions, the momentum in both the US and the EU reveals a common thread woven throughout the efforts: a unified vision and collective goal to minimize risk while fostering AI innovation in a way that does not impede the technology’s full, positive potential.
Accordingly, as governments worldwide continue to introduce and administer legislation, and as organizations advance and adopt AI solutions, it is imperative that these regulations stay rooted in reducing risk without curbing innovation.
Whether it is a collective approach to global regulation or rules applicable to a specific region, an encompassing strategy for establishing guardrails around the safe, compliant use and advancement of AI is paramount.