We all know the phrase, “with great power comes great responsibility”. Now apply that to global AI innovation, adoption and governance…
Drawing on qualitative insights from its global network of alliance partners, consultancy and accounting firm KPMG International Ltd has released a forecast of the top business risks for the next 12 months.
The three key risks threatening business growth over the period covered by the data are AI governance gaps, trade policy restrictions, and conflict escalations that call for strengthened resilience strategies.
The identified gap in AI governance is particularly worth highlighting. According to the forecast's authors, AI investment surged more than fivefold from 2013 to 2023, making the technology a transformative force across industries. In 2024 and beyond, businesses are expected to move swiftly from investment to implementation across their operational practices.
However, these businesses “must recognize that the technology requires increased alertness to cybersecurity threats and reputational risks. Malicious actors may exploit regulatory gaps, particularly as its use becomes more accessible, and critical threats originate less from well-financed state-sponsored groups and more from motivated individuals,” according to the report.
Furthermore, government oversight mechanisms are unlikely to keep pace with innovation. Hence, a key takeaway is for businesses to take AI strategy and policy into their own hands rather than wait for harmonized global regulatory frameworks in a multipolar world.
Taking up a heavy AI responsibility
From an internal perspective, the authors say businesses should “feel responsible for ensuring the right infrastructure and strategies are in place to embrace AI in a responsible, human-focused way.”
This means focusing on building trust and transparency among employees who are uncertain or skeptical about AI’s impact on their livelihoods, as well as among consumers who prioritize personal data security and privacy. In this way, businesses are the first line of defence in ensuring that their Responsible AI adoption benefits everyone and disadvantages no one.
Some actionable steps for business leaders to address the AI governance gap and the inextricably linked geo-economic and political factors include:
- Prioritize and quantify geopolitical/economic, internal and supply chain risks for mitigative action
- Root AI strategies firmly in ethical conduct and responsible practices at every stage of planning, integration and management
- Drive sustainable innovation to prevent disruptive regulation
As business leaders come under increasing pressure to take public positions on social and political issues of the day (such as polarizing conflicts, gender issues and net-zero targets), the report says they need a framework for deciding when and how to communicate publicly on matters that could carry heavy business risks.