Publicly available generative AI tools – such as ChatGPT, Bing Chat, Claude 3, Midjourney and DALL-E 3 – are a game-changer in the workplace today. But the rules of the game need to be in place too…
While public generative AI tools are rapidly gaining traction and employees are adopting them to save time and effort, simply accepting their use without guidance is neither safe nor optimal.
Veritas Technologies’ latest research finds that confusion over generative AI in the workplace is creating a divide among employees while also increasing the risk of exposing sensitive information.
Companies need to devise effective guidelines and policies to encourage responsible and productive use of these tools. DigiconAsia discussed the issues and concerns with Andy Ng, Vice President and Managing Director, Asia South and Pacific Region, Veritas Technologies.
What are the major concerns and potential risks of unregulated use of public generative AI?
Andy Ng: The unregulated use of public generative AI technology raises significant concerns across ethical, security, and legal domains. While the productivity gains from this new smart technology are clear, checks and balances on usage policies are critical to ensure those benefits are maximized without exposing organizations to additional risk.
Ethics in public generative AI extend beyond algorithmic fairness. It is crucial for AI systems to prioritize fairness, mitigate bias, and maintain transparency throughout their lifecycle. Organizations must also consider the quality of the data they feed into any public AI engines they or their employees use.
Considering that more than a third (36%) of office workers in Singapore acknowledged inputting potentially sensitive information – such as customer details, employee information and company financials – into public generative AI tools like ChatGPT, it is critical for organizations to ask themselves how they will use, classify and store any data produced by these tools, to ensure it complies with local regulations and does not compromise the organization’s data management practices.
How should organizations start to establish clear data policies to mitigate these risks?
Ng: According to the latest Veritas research, 95% of office workers in Singapore said guidelines and policies on generative AI use are important, but only 43% of employers currently provide any mandatory usage directions to employees.
To establish clear data policies aimed at mitigating the risks associated with generative AI, organizations should first conduct a comprehensive risk assessment to identify potential vulnerabilities and privacy concerns. This involves evaluating data storage practices, access controls, and data sharing protocols to ensure compliance with relevant regulations and industry standards.
It is also necessary to educate employees on safe and secure generative AI usage, with particular emphasis on data protection and compliance requirements. That said, data policies are only truly effective if organizations implement good data governance. This includes establishing frameworks for the ongoing monitoring, auditing, and enforcement of data policies to detect and address compliance breaches proactively.
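To make the monitoring and auditing point concrete, here is a minimal Python sketch of an audit-trail wrapper around calls to a public AI tool. The `send_to_ai_tool` function, the `audited_query` helper and the log file name are hypothetical placeholders rather than any Veritas implementation; a real deployment would write to a centralized, tamper-evident store for compliance review.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical local audit log; a production system would use
# a centralized, access-controlled logging pipeline instead.
logging.basicConfig(filename="genai_audit.log", level=logging.INFO)

def send_to_ai_tool(prompt: str) -> str:
    # Placeholder for the actual call to a public generative AI service.
    return "(model response)"

def audited_query(user: str, prompt: str) -> str:
    """Record who sent what, and when, before forwarding the prompt."""
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
    }))
    return send_to_ai_tool(prompt)

print(audited_query("a.user", "Draft a summary of our public refund policy."))
```

A wrapper like this gives compliance teams a record to audit against the organization’s usage policy without blocking day-to-day work.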
To enjoy the benefits without increasing risk, it is critical for organizations to develop, implement and clearly communicate guidelines and data policies on the appropriate use of generative AI, and to put the right data compliance and governance tools in place for ongoing enforcement.
How do you see the increasing regulatory scrutiny around data privacy and responsible AI impacting the use of generative AI among organizations in Asia Pacific?
Ng: Regulatory scrutiny around data privacy and AI ethics is a double-edged sword for public generative AI use in Asia Pacific.
On the one hand, unclear requirements, coupled with data privacy concerns, create uncertainty and compliance costs for businesses. Organizations struggle to understand what it means to be compliant, leading to delays or hesitation in adopting generative AI. As generative AI systems ingest more data, security becomes integral: safeguarding against breaches and unauthorized access to AI systems is paramount. Additionally, ensuring responsible data use and mitigating bias in training data add another layer of complexity.
On the flip side, these regulations can also be a catalyst for positive change. By prioritizing responsible AI practices like data source transparency, privacy and fairness measures, organizations can build trust with users and regulators. The focus on responsible AI development can become a competitive advantage in the long run, as organizations embracing AI must consider ethics as an ongoing commitment.
Besides ethical use of AI, what are some other key considerations in developing comprehensive policies on generative AI use?
Ng: When developing comprehensive policies on generative AI use, organizations should ensure deployment aligns with ethics, as well as legal compliance and data security considerations. For a start, organizations can outline the types of data that can be used to train AI systems for specific purposes, as well as the datasets that must not be uploaded to an AI platform – such as personally identifiable information or proprietary data – to minimize risk exposure.
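As a simple illustration of how such a rule could be enforced at the point of use, the following Python sketch screens a prompt for obviously sensitive content before it is sent to a public AI tool. The patterns and the `screen_prompt` helper are hypothetical examples; an organization would substitute its own data classification rules.

```python
import re

# Hypothetical patterns for obviously sensitive content; real policies
# would reflect the organization's own classification scheme.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal marking": re.compile(r"\b(confidential|proprietary|trade secret)\b",
                                   re.IGNORECASE),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize: contact jane.doe@example.com about the confidential Q3 financials."
violations = screen_prompt(prompt)
if violations:
    print("Blocked: prompt appears to contain " + ", ".join(violations) + ".")
else:
    print("Prompt passed screening.")
```

A lightweight check like this will not catch every leak, but it turns a written policy into a guardrail employees encounter before data ever leaves the organization.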
Next, it is equally important to educate and train employees on legal compliance and data security policies, to create awareness of the risks of mishandling information. For instance, some employees have accidentally leaked confidential information and corporate trade secrets into public AI models while performing tasks. Such behavior can put your most valuable information, and your organization, at risk.
With the ongoing focus on data privacy, it is imperative for organizations to prioritize training employees on the latest compliance and cybersecurity protocols, so that they do not become the weakest link.
Despite excitement and widespread adoption, is employee access to generative AI equal? Is there a talent gap?
Ng: According to the latest Veritas research, employee access to public generative AI is not equal: 58% of office workers in Singapore said they use public generative AI tools weekly, while 20% said they do not use them at all. Notably, nearly a quarter even think that the pay of coworkers who use these tools should be docked. Moreover, nearly half (49%) think that those who use these tools should be required to teach the rest of their team how to use them, to create a level playing field. In fact, the confusion over public generative AI is causing a divide between office workers, with 56% saying that some employees using public generative AI have an unfair advantage over those who do not.
Without clear guidelines, office workers in Singapore recognize the downsides of such public AI tools and hesitate to use them. When asked about the risks of using these tools in the workplace, the key reasons given were: they could leak sensitive information (47%); there are compliance risks associated with using these tools; and these tools could generate incorrect or inaccurate information (45%).
Such a disposition towards public generative AI tools would inevitably contribute to a talent gap, especially in roles that are highly data-intensive or require stringent regulatory compliance.
What other gaps do you currently see in employee demand for training and employer offerings in navigating the AI-driven digital landscape?
Ng: With AI advancing faster than most organizations can keep up, it is critical for them to realign their business priorities and structures to empower employees to harness the benefits of AI.
According to the Veritas research, more than 80% of employees in Singapore say they want guidelines, policies, and training from their employers on using public generative AI within their organizations. The top reasons cited were: employees need to know how to use the tools in an appropriate way (70%); to mitigate risks (51%); and to create a level playing field in the workplace (30%).
To stay competitive, organizations should focus on adopting AI technologies for data management to optimize existing systems. Corporate guidelines and policies around the use of generative AI should be developed and well-communicated to all employees. To ensure appropriate usage, organizations should provide employees with training on AI-assisted tools and ensure they are comfortable with handling data across different IT environments.
In the AI-driven digital landscape, the debate on AI development and regulations will intensify, underscoring the importance for organizations to continually evaluate, adapt and engage with their employees on the deployment of AI.