The rapid adoption of GenAI tools has led to more data loss, intellectual property exposure, and regulatory risk, according to the analyzed data.
The findings, shared with the media, are based on an analysis of historical traffic logs and anonymized data loss prevention (DLP) incidents among 7,051 global enterprises using a cybersecurity firm’s products in 2024 and early 2025*, covering data trends in generative AI (GenAI) adoption and security.
First, GenAI traffic among the surveyed enterprises increased by more than 890% in 2024.
Second, the surveyed enterprises used an average of 66 GenAI applications each.
Other findings
Third, GenAI-related DLP incidents among the surveyed enterprises more than doubled in early 2025, reaching 14% of all data security incidents recorded in the survey period. Also:
- 10% of GenAI applications in the analyzed data were classified as “high risk”.
- 39% of AI coding transactions came from corporate customers in the technology and manufacturing sectors.
- The most common GenAI use cases among the surveyed enterprises were writing assistants (34.0%), conversational agents (28.9%), enterprise search (10.6%), and developer platforms (10.3%).
- Following the release of DeepSeek-R1 in January 2025, DeepSeek-related traffic among the surveyed enterprises increased by 1,800% within two months.
- GenAI transactions grew at an average monthly rate of 32% in 2024, outpacing overall cloud software growth by 12 percentage points.
According to Tom Scully, Director and Principal Architect (Government and Critical Industries, Asia Pacific & Japan) at Palo Alto Networks, the cybersecurity firm behind the analysis: “Organizations must balance innovation with strong governance, adopting security architectures that account for AI’s unique risks. From shadow AI and data leakage to the more complex threats posed by agentic AI models. Proactive oversight and adaptive security controls are essential to ensuring that the benefits of AI are fully realised without compromising national security, public trust, or operational integrity.”
*The analysis focused exclusively on third-party-provisioned generative AI (GenAI) applications accessed via the firm’s own security products. Data anonymization and strict privacy and security guidelines were applied throughout to ensure no sensitive customer information was exposed. Findings were derived from observed trends in GenAI app usage, DLP incident patterns, and risk. The analysis did not include internally developed or self-hosted GenAI solutions, nor did it capture usage outside the monitored security environments. All reported practices and trends reflect data observed within the specified timeframe and product ecosystem. No geographical details were disclosed in the methodology.