Labeling AI adoption as “responsible” is meaningless without formal accountability, compliance, and action: adjectives alone do not guarantee ethical outcomes.
Amid the race to harness AI-driven efficiencies, concerns around privacy, security, fairness, and transparency are growing.
People worry, and will likely continue to raise new issues, about their data being misused or plagiarized, their words misconstrued, and their work misrepresented. These concerns are creating an environment of fear, uncertainty, and doubt.
As AI regulations take shape across the region, businesses face a new challenge: integrating AI responsibly. How do we maintain control over AI so that it does not mislead, misinform, or harm people? How do we give AI the necessary level of self-sufficiency and autonomy while also protecting both consumers and businesses?
Responsible AI in the Asia Pacific region
While AI engineers, academics, legal experts, policymakers, and business leaders continue to refine regulatory frameworks, organizations across both the private and public sectors need to play their part by proactively embedding Responsible AI principles into their operations, balancing innovation with accountability.
The International Organization for Standardization (ISO) defines Responsible AI as an approach to developing and deploying AI from both an ethical and legal standpoint. The goal is to employ the technology in a safe, trustworthy and ethical way. Using AI responsibly should increase transparency while helping reduce issues such as AI bias and hallucinations.
As Tess Valbuena, interim CEO of Humans in the Loop, has said, the need for AI oversight, and the magnitude of that oversight, is not as objective as many would probably like it to be. However, many standards organizations, government regulatory agencies, and professional licensing boards are attempting to establish clearer guidelines, aiming to provide a structured approach that balances innovation with accountability.
For example, South Korea has taken a proactive stance on Responsible AI, enacting the AI Basic Act in December 2024, which is set to take effect in January 2026. This legislation provides a comprehensive framework for AI governance, focusing on transparency, safety, and ethical standards. It aims to balance innovation with public trust by establishing clear regulations for AI development and use. Meanwhile, Singapore has been championing Responsible AI deployment through various foundational initiatives.
Turning definitions into formalized action
What additional steps should countries in the region take to ensure they are implementing Responsible AI practices? While the approach may vary depending on the specific AI model and its application, these are some best practices for businesses:
- Confirm and vet the source of the AI model or tool: Businesses need to understand the ethical principles, policies, and practices of the AI’s provider and any other party involved in its training or ongoing oversight. Did they act responsibly during the model’s development and training? What are their current and long-term intentions with the model? A clear understanding of these factors is critical to mitigating risks.
- Understand data sourcing and handling: AI systems are only as reliable as the data they are trained on. Organizations need to establish robust data governance policies to ensure transparency, security, and compliance. This includes verifying data provenance, preventing the exposure of sensitive or proprietary information, and evaluating the trustworthiness of third-party data sources.
- Keep humans in the loop (HITL): Inserting human judgment and intervention into AI decision-making processes can be critical to enhancing safety, reliability, and ethical compliance; a minimal review-gate sketch follows this list.
- Understand the risk of hallucinations, and have guidance in place for output verification: Even well-trained, low-risk AI models can generate misleading or inaccurate outputs. Organizations should implement strict verification protocols before acting on AI-generated content. Adopting a layered approach, including cross-referencing outputs with human expertise, can help minimize misinformation risks (see the verification sketch after this list).
- Ensure only sanitized data is fed into the AI model, and set safeguards to prevent unauthorized input: AI models may process customer, partner, supplier, and operational data, as well as general market knowledge and stories, any of which can contain confidential information, copyrighted material, personal data, or other material requiring specific permissions (a simple redaction sketch appears after this list).
- Ensure proper attribution to sources used in generating AI output: Businesses should always acknowledge the origin of work when using AI to generate content, regardless of format or intended use. GenAI project leads should understand the differences and correlations between “authorship” and “ownership” of content or products created with AI. Providing proper attribution for AI-generated content reinforces transparency in Responsible AI practices.
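
To make the human-in-the-loop practice concrete, here is a minimal Python sketch of a review gate: outputs below a confidence threshold are queued for a human reviewer rather than acted on automatically. The `AIResult` and `ReviewQueue` types and the threshold value are illustrative assumptions, not any particular vendor's API.

```python
# Minimal human-in-the-loop (HITL) gate: route low-confidence AI
# outputs to a human reviewer instead of acting on them automatically.
# All names and thresholds here are illustrative placeholders.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIResult:
    output: str
    confidence: float  # hypothetical score in [0, 1] from the model

@dataclass
class ReviewQueue:
    pending: List[AIResult] = field(default_factory=list)

    def submit(self, result: AIResult) -> None:
        # In production this would notify a human reviewer.
        self.pending.append(result)

CONFIDENCE_THRESHOLD = 0.85  # assumed policy value; tune per use case

def handle(result: AIResult, queue: ReviewQueue) -> str:
    """Auto-approve confident results; escalate the rest to a human."""
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return result.output      # safe to use automatically
    queue.submit(result)          # human judgment required
    return "PENDING_HUMAN_REVIEW"

if __name__ == "__main__":
    queue = ReviewQueue()
    print(handle(AIResult("Approve the refund.", 0.95), queue))
    print(handle(AIResult("Deny the claim.", 0.40), queue))
    print(f"{len(queue.pending)} item(s) awaiting human review")
```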
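
Output verification can be sketched in a similarly simplified way. The example below splits an AI-generated draft into sentences and flags any sentence not supported by a small corpus of trusted reference texts, using naive word overlap. The corpus, the 0.6 overlap threshold, and the `supported()` heuristic are assumptions for illustration; a production pipeline would combine retrieval-based checks with human expertise.

```python
# Illustrative layered verification: flag AI-generated sentences that
# lack sufficient word overlap with any trusted reference text, so a
# human can verify them before the content is used.
import re

# Assumed trusted corpus; in practice this would be vetted documents.
TRUSTED_SOURCES = [
    "The AI Basic Act was enacted in South Korea in December 2024.",
    "The act takes effect in January 2026.",
]

def words(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def supported(sentence: str, sources: list, min_overlap: float = 0.6) -> bool:
    """True if enough of the sentence's words appear in some source."""
    sent_words = words(sentence)
    if not sent_words:
        return True
    return any(
        len(sent_words & words(src)) / len(sent_words) >= min_overlap
        for src in sources
    )

def verify(draft: str) -> list:
    """Split a draft into sentences and return the unsupported ones."""
    sentences = re.split(r"(?<=[.!?])\s+", draft.strip())
    return [s for s in sentences if not supported(s, TRUSTED_SOURCES)]

if __name__ == "__main__":
    draft = ("The AI Basic Act was enacted in December 2024. "
             "It mandates a national AI licensing exam.")  # invented claim
    for flagged in verify(draft):
        print("NEEDS HUMAN VERIFICATION:", flagged)
```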
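
Finally, a simplified guard for the sanitized-input practice: it redacts obvious personal data and rejects prompts containing blocklisted confidentiality markers before anything reaches a model. The regex patterns and `BLOCKLIST` terms are hypothetical placeholders; real deployments rely on dedicated PII-detection and data-loss-prevention tooling.

```python
# Minimal input-sanitization sketch: redact emails and phone numbers,
# and refuse prompts that contain confidentiality markers, before the
# text is sent to an AI model. Patterns here are deliberately simple.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

# Hypothetical markers of confidential material for this sketch.
BLOCKLIST = ("internal only", "trade secret", "do not distribute")

def sanitize(prompt: str) -> str:
    """Redact PII and refuse prompts that look confidential."""
    lowered = prompt.lower()
    for term in BLOCKLIST:
        if term in lowered:
            raise ValueError(f"Blocked: prompt contains '{term}'")
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    print(sanitize("Contact jane.doe@example.com or +65 6123 4567."))
    # -> Contact [EMAIL REDACTED] or [PHONE REDACTED].
```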
Responsible AI as a core business value
Implementing Responsible AI practices is not just about compliance. It is about integrity and fostering a culture that values ethical decision-making.
While additional training and supplementary procedures are necessary to address the complexities of AI, businesses throughout the region should embed strong corporate ethics as a foundational principle in their AI strategies. This will help to balance the drive for innovation with the need for ethical responsibility, ensuring AI serves society in a positive, accountable and impactful way.