Cognizant of the broad, unpredictable ways in which badly managed GenAI can lead to unmitigated disasters, the SFC is acting presciently
With the increasing adoption of AI, especially generative AI (GenAI), lawmakers and regulators in various jurisdictions have been grappling with how best to regulate, or provide clearer guidance on, adoption of the technology without stifling innovation.
Hong Kong’s Securities and Futures Commission (SFC) recently issued a circular, in line with this growing trend, to provide greater clarity and guidance to its “Licensed Corporations (LCs)” (which include private equity firms, asset managers and hedge funds) on key risks and expectations when implementing AI.
The compliance impact of this circular is expected to be widely felt across the Hong Kong financial services industry and throughout the AI third-party supply chain. Any LC wishing to adopt GenAI involving language model activities will need to follow the circular or risk opening itself up to regulatory enforcement by the SFC.
Four costs of compliance
When designing compliance programs, LCs should consider the growing focus on, and resources needed for, developing and maintaining AI technologies. These costs are expected to add up.
The four core cost areas arising from the circular are:
- Upskilling, training and monitoring costs
The SFC circular assigns oversight and risk management responsibilities to the senior management of LCs. It sets out the SFC’s expectation that senior management ensure “responsible staff from the business, risk, compliance and technology functions” are able to effectively manage the LC’s adoption and implementation of “AI LLMs by possessing the relevant competence in AI, data science, model risk management and domain expertise”.
As LC senior management strive to fulfil their managerial duties, they will be reliant on the abilities of their staff to effectively spot, escalate and mitigate AI-related risks.
Senior management are unlikely to have superior knowledge and be close to the day-to-day use processes. It will be necessary to invest in training and hiring additional staff to support and oversee the AI integration process. Legal departments will also need to invest time and resources to keep on top of evolving compliance requirements.
LCs should develop training programs by either investing in a technology department; sourcing external support for tailored training and legal advice; or combining these methods while considering cost and effectiveness.
- Potentially higher compliance costs for “high-risk” use cases
The SFC considers using GenAI for investment recommendations, advice, or research as “high-risk” due to the potential for misinformation harming clients or investors. “High-risk” use cases are subject to additional risk mitigation measures (e.g., having a human in the loop; conducting model validation and ongoing reviews).
In practice, the line between client-facing and non-client-facing products may not be drawn so clearly. Internal processes using large language models, such as summarizing large documents or informing the structuring of specific investment products, can directly and indirectly affect client-facing output.
Where there is a risk that a GenAI system may be seen as serving a high-risk use case, or simply to future-proof against that possibility, LCs will need to be prepared to implement the more stringent risk mitigation measures across the board.
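For LCs implementing the human-in-the-loop measure, the control can be as simple as a review gate between model output and anything client-facing. The Python sketch below is purely illustrative: the class, function and use-case names are hypothetical assumptions, not terms from the circular.

```python
from dataclasses import dataclass

# Hypothetical sketch of a human-in-the-loop gate for "high-risk" GenAI
# output (e.g., draft investment research). All names are illustrative
# assumptions, not taken from the SFC circular.

@dataclass
class DraftOutput:
    use_case: str   # e.g., "investment_research", "document_summary"
    text: str
    model_id: str

# Assumed classification; borderline internal use cases are treated as
# high-risk to "future proof", per the across-the-board approach above.
HIGH_RISK_USE_CASES = {
    "investment_recommendation",
    "investment_advice",
    "investment_research",
}

def requires_human_review(draft: DraftOutput) -> bool:
    return draft.use_case in HIGH_RISK_USE_CASES

def release(draft: DraftOutput, reviewer_approved: bool = False) -> str:
    # Block client-facing release of high-risk output without a
    # documented human approval.
    if requires_human_review(draft) and not reviewer_approved:
        raise PermissionError(
            f"High-risk use case '{draft.use_case}' requires human "
            "approval before client-facing release."
        )
    return draft.text
```

A real implementation would also log the reviewer’s identity and decision, which dovetails with the documentation expectations discussed under validation below.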
- Validation costs
LLM validation serves as a counterbalance to the risks of integrating AI technologies. In response to industry feedback, the SFC circular appears to have stepped back from explicit “validation” standards, instead referring to the need for “adequate validation”.
Despite the seemingly innocuous wording, the SFC mandates that validation must take place:
(i) before use
(ii) when significant changes are implemented
(iii) throughout the entire lifecycle of the AI LLMs.
The rules emphasize that validation requires comprehensive end-to-end testing, covering the process from user input to system output. Combined with the lifecycle requirement, this indicates that key aspects of “adequacy” will be coverage and frequency.
LCs will need to ensure processes are in place and properly documented throughout the AI lifecycle, with the scope of model validation potentially adjusted to risk levels under a “principles-based” approach.
Validation should be conducted independently by stakeholders who are not involved in the design, implementation, or testing process, and who do not have conflicting interests.
LCs will need to consider how these validation processes can be put in place and by which parties (if need be, external parties may need to be engaged).
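One way to operationalize the three validation triggers and the independence requirement is to record each validation event and gate deployment on a current, independent, end-to-end validation. The following is a minimal sketch under assumed structures; the field names and the 90-day review cycle are illustrative assumptions, not prescribed by the SFC.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical validation register reflecting the circular's triggers:
# (i) before use, (ii) on significant change (new model_version),
# (iii) periodically throughout the lifecycle. Names and the 90-day
# cycle are illustrative assumptions.

@dataclass
class ValidationEvent:
    model_version: str
    validated_at: datetime
    validator_team: str   # must be independent of the build team
    end_to_end: bool      # covered user input through system output?

REVIEW_CYCLE = timedelta(days=90)  # assumed ongoing-review interval

def may_deploy(model_version: str, build_team: str,
               events: list[ValidationEvent]) -> bool:
    """True only if an independent, end-to-end validation of this exact
    version exists and is still within the review cycle."""
    now = datetime.now(timezone.utc)
    return any(
        e.model_version == model_version        # re-validate on change
        and e.validator_team != build_team      # independence
        and e.end_to_end                        # input-to-output coverage
        and now - e.validated_at <= REVIEW_CYCLE  # lifecycle recency
        for e in events
    )
```

Keeping such a register also produces, as a by-product, the documentation trail that the lifecycle requirement implies.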
- Third party diligence costs
The concept of third-party management is not unfamiliar to LCs, which already face the SFC’s outsourcing guidelines. The SFC’s AI circular requires that third-party arrangements be carefully:
☑ Reviewed broadly (through model validation and risk assessment)
☑ Negotiated (for sufficient contract terms, including allocation of risk for breaches of data privacy and IP laws)
☑ Continually re-assessed and monitored for risks of over-reliance
LCs will not only need to bear these requirements in mind when onboarding new service providers, but will need to review their existing service provider arrangements to ensure that these requirements are met.
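In practice, LCs may find it helpful to track these three obligations per vendor in a structured register, so that gaps surface during periodic reviews. The sketch below shows one possible shape for such a record; all field names and the annual re-assessment cadence are assumptions, not SFC requirements.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical vendor register entry mirroring the three obligations
# above: review, contract negotiation, and ongoing re-assessment and
# monitoring. Field names are illustrative, not prescribed by the SFC.

@dataclass
class VendorRecord:
    vendor: str
    model_validated: bool             # reviewed via model validation
    risk_assessment_done: bool        # reviewed via risk assessment
    contract_covers_privacy_ip: bool  # negotiated terms for privacy/IP breaches
    last_reassessed: date             # monitored on a recurring basis
    over_reliance_flag: bool          # concentration / over-reliance indicator

def needs_attention(rec: VendorRecord, today: date,
                    max_age_days: int = 365) -> bool:
    # Assumed annual re-assessment cadence; any unmet obligation or a
    # stale review flags the vendor for follow-up.
    stale = (today - rec.last_reassessed).days > max_age_days
    return (stale or rec.over_reliance_flag
            or not (rec.model_validated
                    and rec.risk_assessment_done
                    and rec.contract_covers_privacy_ip))
```

Running such a check across both new and existing arrangements gives a simple way to evidence the re-papering exercise the circular effectively demands.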
With the growing volume of explicit regulatory guidance on AI by various regulators in Hong Kong, it is imperative for financial services organizations to closely monitor and understand AI regulatory developments.
Incumbents will need to ensure adaptive AI governance and compliance frameworks are in place, and that proper planning and resources are dedicated to AI regulatory compliance.