What drives AI? Data!

Data underpins every AI process, which means AI success depends heavily on your data infrastructure. Our own research suggests the following top reasons AI initiatives fail:


• Inability to access data
• Insufficient data to train models
• Privacy, compliance, and data governance concerns or requirements
• Data engineering complexity
• Untrustworthy or poor-quality data sources

AI transformation is often hindered by data scattered across siloed storage infrastructure, which makes it harder for AI engineers to train and develop models. Organizations should adopt a data infrastructure that supports the preparation, movement, analysis, and use of data across on-premises and hybrid cloud environments. Such seamless integration makes data sources for machine learning and AI readily available, regardless of where they reside.
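
As a concrete illustration, here is a minimal, vendor-neutral sketch of a single code path that reads training data from both on-premises and cloud storage; the paths, bucket name, and the pandas/fsspec/s3fs dependencies are illustrative assumptions, not a reference to any particular product.

    # Sketch: one loading routine for data in mixed on-prem/cloud locations.
    # Requires pandas, plus fsspec/s3fs for the s3:// scheme (assumed installed).
    import pandas as pd

    SOURCES = [
        "/mnt/onprem/sales/2024.parquet",          # hypothetical on-premises mount
        "s3://example-bucket/sales/2024.parquet",  # hypothetical cloud object store
    ]

    def load_training_data(paths):
        """Concatenate feature tables regardless of where each one lives."""
        frames = [pd.read_parquet(p) for p in paths]  # fsspec resolves each URL scheme
        return pd.concat(frames, ignore_index=True)

    df = load_training_data(SOURCES)
    print(df.shape)

The point is not the specific libraries but the design: when every storage location is addressable through one interface, model training no longer has to care where the data sits.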

AI is also not just about algorithms and models; it is about trust, transparency, and the ethical use of data. Responsible AI is an approach to building AI systems that minimizes sources of risk and bias throughout the AI lifecycle. To establish effective data governance and strong data practices, companies should consider four key principles: fairness (ensuring unbiased results), interpretability (ensuring data traceability), privacy (maintaining data confidentiality), and security.
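
To illustrate the fairness principle, the sketch below computes a simple demographic parity gap, the difference in positive-outcome rates between groups; the column names, sample data, and review threshold are hypothetical, and real governance programs use far richer metrics.

    # Sketch: one basic fairness check, the demographic parity gap.
    import pandas as pd

    def demographic_parity_gap(df, group_col, outcome_col):
        """Absolute difference in positive-outcome rates across groups."""
        rates = df.groupby(group_col)[outcome_col].mean()
        return float(rates.max() - rates.min())

    # Hypothetical model decisions: 1 = approved, 0 = denied.
    preds = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0],
    })
    gap = demographic_parity_gap(preds, "group", "approved")
    print(f"demographic parity gap: {gap:.2f}")  # e.g. flag for review if above 0.10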

Beyond these measures, organizations must strengthen their cyber resilience capabilities, including deploying storage technologies with embedded AI/ML to combat ever-evolving cyber threats such as ransomware.
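
One signal that AI/ML-embedded storage can watch for is a sudden rise in the entropy of written data, typical of ransomware encrypting files in bulk; the sketch below is an illustrative assumption about how such a check might look, not a description of any vendor's implementation.

    # Sketch: flag writes whose byte entropy looks like ciphertext.
    import math
    from collections import Counter

    def shannon_entropy(data):
        """Bits per byte; values near 8.0 suggest encrypted or compressed content."""
        if not data:
            return 0.0
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

    plain = b"quarterly report draft " * 100
    ciphertext_like = bytes(range(256)) * 10   # stand-in for an encrypted file
    for label, blob in [("plain", plain), ("suspicious", ciphertext_like)]:
        h = shannon_entropy(blob)
        print(f"{label}: entropy={h:.2f} bits/byte", "FLAG" if h > 7.5 else "ok")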

— Henry Kho, Area Vice President and General Manager (Greater China, ASEAN and South Korea), NetApp


Transforming cybersecurity with generative AI

When integrated carefully, generative AI (GenAI) models can be highly effective tools in proactive security defence programs. On the flip side, however, they can also be turned against an enterprise's cyber defences in ways we cannot afford to ignore.

Most recently, GenAI tools such as ChatGPT were instrumental in the surge of email phishing attacks in the lead-up to the Paris Olympics, a fertile breeding ground for threat actors seeking to infiltrate their victims' Microsoft 365 networks. Attackers are also increasingly abusing Microsoft Copilot with living-off-the-land techniques; this removes latency from attacks, giving perpetrators accelerated access to enterprise networks and critical data.

This means an attacker can launch a GenAI-driven attack, turning the power of enterprise-level AI against the enterprise itself. Speed therefore becomes crucial: without AI-driven behavioral analysis, SOC teams have little chance of discovering the breach, much less stopping it. It is important to put guardrails in place that match the speed of an AI-driven attack with the speed of AI-driven behavioral analytics, preserving the integrity of the organisation's security posture.
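
To sketch what behavioral analytics means in practice, the toy example below scores an account's current activity against its own historical baseline; the event counts, z-score model, and alert threshold are illustrative assumptions, not Vectra AI's actual detection method.

    # Sketch: flag activity that deviates sharply from an account's baseline.
    from statistics import mean, stdev

    def anomaly_score(history, current):
        """Z-score of the current event count against the account's baseline."""
        mu, sigma = mean(history), stdev(history)
        return (current - mu) / sigma if sigma else 0.0

    baseline = [12, 9, 14, 11, 10, 13, 12]  # hypothetical daily API-call counts
    today = 220                             # sudden burst, e.g. automated data access
    score = anomaly_score(baseline, today)
    if score > 3.0:                         # common rule-of-thumb cutoff
        print(f"ALERT: z-score {score:.1f}, investigate this account")

Real systems model many behaviors at once (logins, token use, data movement) and update baselines continuously, but the core idea is the same: let the model, not the analyst, decide what is normal at machine speed.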

Advanced AI delivered as an integrated attack signal can stop today's most challenging hybrid cyberattacks. It also takes the ambiguity out of security analysts' day and enables them to focus on what matters.


As AI technology matures and security specialists address specific challenges, organizations can expect even more powerful applications and a more proactive approach to security for a safer digital world. 

— Sharat Nautiyal, Director, APJ Security Engineering, Vectra AI