AI regulation is not just a matter for governments and industries, but for every citizen on a global scale.
With regulatory approaches to AI in danger of fragmenting across borders (according to the MIT Technology Review), now is the time to seek out opportunities for collaboration.
Even in the absence of an AI regulatory framework, the rapid advancement of mainstream AI necessitates collaboration. Responsible AI use can be a reality with the right guardrails in place, guided by universally agreed-upon principles.
It is everyone’s collective responsibility to shape the future of AI through thoughtful regulations that balance innovation, societal well-being, and the preservation of individual rights, underpinned by security and trust.
As policymakers consider the requisite AI governance for the coming years, they should adopt a multifaceted approach that is shared, secure and sustainable.
Three facets of responsible AI
AI regulation is not just a matter for policymakers and industry leaders, but for nations and citizens on a global scale to ensure that AI development is:
- Shared: through an integrated, multi-sector and global approach built in alignment with existing tech policies and compliance regulations, including those governing privacy. AI policy should not be created in a vacuum: its development needs the engagement of global policy networks such as the Organisation for Economic Co-operation and Development (OECD), whose AI Community and Business Round Table can secure the multi-sector consultation needed to pave a positive way forward.
- Secure: by focusing on security and trust at every level — from infrastructure to the output of machine learning models. It is critical to ensure AI remains a force for good while protecting it from threats and treating it like an extremely high-value asset.
- Sustainable: by protecting the environment, minimizing emissions and prioritizing renewable energy sources. AI is among the most compute- and energy-intensive technologies we have ever deployed, and we must invest in making it sustainable.
To ensure a seamless and collaborative approach to AI regulation, we also need an integrated, streamlined global infrastructure that benefits the entire digital ecosystem and reduces costs. Singapore has taken the first step on this journey with the launch of the AI Verify Foundation, which aims to harness the collective power of the global open-source community to develop AI testing tools for the responsible use of AI. The foundation will also foster an open-source community that contributes to AI testing frameworks, and create a neutral platform for open collaboration and idea-sharing on testing and governing AI.
Inevitably, there will soon be a significant proliferation of open- and closed-source large language models (LLMs) for specialized and general uses across the globe. Many enterprises will independently deploy their own closed-source LLMs to harness the power of their data effectively and securely. The distinction between open and closed source is significant, so it is critical that we weigh the strengths and risks of both approaches to realize their respective potential and avoid unnecessary restrictions. The value, and the risk, of AI will necessitate the adoption of true zero-trust architectures as the path to a new IT security paradigm.
Another element of AI security is cultivating trust in the technology, which requires that appropriate disclosure and transparency approaches be developed and used globally. Given the inherent complexity of most AI systems, transparency rules should disclose what data an AI system used, who created it and what tools were used, without attempting to explain the inner workings of the technology. AI is complex: by helping users understand the ecosystem that created a system, we build community trust in it.
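To make this concrete, a disclosure of this kind could be published as a simple machine-readable record alongside a model, in the spirit of a "model card." The sketch below is illustrative only: the field names and example values are hypothetical assumptions, not an established standard or any specific regulator's schema.

```python
# Minimal sketch of a machine-readable AI transparency disclosure.
# Field names and values are illustrative, not an established standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDisclosure:
    model_name: str
    creator: str                        # who created the system
    training_data_sources: list         # what data the system used
    tools_used: list                    # frameworks and pipelines involved
    release_date: str

# Hypothetical example record published alongside a model, so users can
# inspect its provenance without understanding the model's inner workings.
disclosure = ModelDisclosure(
    model_name="example-llm",
    creator="Example Corp",
    training_data_sources=["public web crawl", "licensed news archive"],
    tools_used=["PyTorch", "internal data pipeline"],
    release_date="2024-01-01",
)

print(json.dumps(asdict(disclosure), indent=2))
```

A record like this stays deliberately shallow: it documents the ecosystem around the model (data, authorship, tooling) rather than attempting to explain the model itself.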
Advanced technologies require increasing levels of power. That is why every stakeholder in the AI industry needs to improve product energy efficiency, create sustainable data center solutions and use sustainable materials wherever possible. We must establish protocols for AI hardware infrastructure that hold industries to these high standards while still supporting innovation.
Combining these three facets across industries can ensure AI models solve more challenges than they create.