Here are five trends and predictions that build on the challenges and opportunities encountered this year …

Twelve months ago, “Generative Design AI” appeared on the fringes of the Gartner Hype Cycle for Emerging Technologies. Today, the consultancy believes generative AI (GenAI) is close to the Peak of Inflated Expectations.

As 2024 approaches, firms will intensify efforts to operationalize and improve GenAI, and adjust how they manage growing volumes of data across environments, especially the cloud, to drive flexibility and growth.

Here are some trends we will see next year:

    1. Strong MLOps and data integration will help operationalize GenAI
      ChatGPT and other software-as-a-service (SaaS)-based Large Language Models (LLMs) have presented significant data privacy challenges for organizations. In many cases, the questions, answers, and contextual data are sensitive, making such data unsuitable for public, multi-tenanted services that reuse it to retrain models.

      Rapid advances in open source LLMs have delivered performance comparable to ChatGPT and present a viable alternative. However, GenAI models are difficult to move from the lab into production in a scalable and reliable way. They are also typically shared between multiple applications, and therefore pose greater data integration challenges than traditional ML models.
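
      To make that concrete, here is a minimal sketch of running an open source LLM entirely in-house, so prompts and context never leave the organization. The model name and prompt are illustrative assumptions; production serving would add the MLOps controls discussed below (versioning, monitoring, and access policies).

      ```python
      # A minimal sketch of running an open source LLM in your own environment,
      # so prompts and context never leave the organization. The model name and
      # prompt are illustrative assumptions.
      from transformers import pipeline

      # The model is downloaded once, then inference runs fully locally.
      generator = pipeline("text-generation", model="facebook/opt-125m")

      prompt = "Summarize our internal incident report process in one sentence:"
      result = generator(prompt, max_new_tokens=50, do_sample=False)
      print(result[0]["generated_text"])
      ```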

      In 2024, I expect organizations to continue to focus on developing strong machine learning operations (MLOps) and data integration capabilities.

    2. Organizations will double down on RAG and fine-tuning to optimize LLMs
      There are several approaches to optimizing the performance of LLMs, including Prompt Engineering, Retrieval Augmented Generation (RAG), and fine-tuning (deep learning).

      • RAG has proven to be an effective approach to adopting LLMs because it requires no training or tuning of the model while still delivering good results. It does, however, require data engineering pipelines to maintain the knowledge base repository, and a specialized vector database to store the indexed data (see the sketch after this list).
      • One approach to fine-tuning, Parameter-Efficient Fine-Tuning (PEFT), trains a small set of additional weights on domain-specific data that sits alongside the frozen, general-purpose LLM (sketched further below). This provides most of the performance benefits of retraining the larger LLM, but at a fraction of the cost and with far less training data. Fine-tuning LLMs requires stronger ML capabilities, but can lead to greater efficiency, explainability, and more accurate results, especially when training data is limited.
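
      To illustrate RAG's retrieve-then-augment flow, here is a minimal sketch. It is a toy: a real deployment would use an embedding model and a vector database, and the TF-IDF index and sample documents here are illustrative stand-ins.

      ```python
      # A minimal sketch of the RAG pattern. A production system would use an
      # embedding model and a vector database; the TF-IDF index and the sample
      # documents are illustrative stand-ins.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      knowledge_base = [
          "Our refund policy allows returns within 30 days of purchase.",
          "Support is available Monday to Friday, 9am to 5pm CET.",
          "Enterprise customers receive a dedicated account manager.",
      ]

      # Index the knowledge base; data engineering pipelines keep this current.
      vectorizer = TfidfVectorizer()
      doc_vectors = vectorizer.fit_transform(knowledge_base)

      def retrieve(question: str, k: int = 2) -> list[str]:
          """Return the k passages most relevant to the question."""
          scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
          return [knowledge_base[i] for i in scores.argsort()[::-1][:k]]

      def build_prompt(question: str) -> str:
          """Augment the question with retrieved context before calling the LLM."""
          context = "\n".join(retrieve(question))
          return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

      # The augmented prompt is sent to an off-the-shelf LLM; no retraining needed.
      print(build_prompt("How long do I have to return a product?"))
      ```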

      In 2024, I believe that RAG and PEFT will continue to be accessible approaches to GenAI for many organizations. PEFT will be useful both for net-new projects and for replacing some of the earlier RAG architectures. I expect the uptake to be greatest within organizations with larger and more capable data science teams.
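
      For a concrete picture of what PEFT involves, the sketch below applies LoRA, one widely used PEFT technique, via the Hugging Face peft library. The base model and hyperparameters are illustrative assumptions; an actual project would follow this with a training loop over domain-specific data.

      ```python
      # A minimal sketch of PEFT using LoRA via the Hugging Face peft library.
      # The base model and hyperparameters are illustrative assumptions.
      from transformers import AutoModelForCausalLM
      from peft import LoraConfig, TaskType, get_peft_model

      base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

      # LoRA injects small trainable adapter matrices beside the frozen LLM weights.
      lora_config = LoraConfig(
          task_type=TaskType.CAUSAL_LM,
          r=8,             # rank of the adapter matrices
          lora_alpha=32,   # scaling applied to the adapter output
          lora_dropout=0.05,
      )

      model = get_peft_model(base_model, lora_config)

      # Typically well under 1% of parameters are trainable, which is where the
      # cost and training-data savings come from.
      model.print_trainable_parameters()
      ```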

    3. Shifts in cloud-centric strategies
      In 2023, some businesses shifted from a cloud-first approach to a “considered and balanced stance,” driven by several factors: the economics of the cloud for many predictable analytical workloads, data management regulations, and organizational fiscal policy in view of changing economic conditions.

      These organizations have settled on cloud-native architectures across both public and private clouds to support their data and cloud strategies. The additional architectural complexity this introduces is offset by the flexibility, scalability, and cost savings it provides.

      The resulting data fabric across public and private clouds provides the foundation for an intelligent, automated, and policy-driven approach to data management in 2024.

    4. Data management automation, data democratization, and zero trust security still matter
      Observability across infrastructure, platforms, and workloads will play an increasingly important role in 2024. It is a precursor to intelligent, automated platforms that are highly performant, reliable, and efficient.
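
      By way of example, here is a minimal sketch of instrumenting a data pipeline with OpenTelemetry metrics. The meter, metric, and attribute names are illustrative assumptions, and a real platform would export to a collector rather than the console.

      ```python
      # A minimal sketch of emitting data pipeline metrics with OpenTelemetry.
      # The meter, metric, and attribute names are illustrative assumptions.
      from opentelemetry import metrics
      from opentelemetry.sdk.metrics import MeterProvider
      from opentelemetry.sdk.metrics.export import (
          ConsoleMetricExporter,
          PeriodicExportingMetricReader,
      )

      reader = PeriodicExportingMetricReader(ConsoleMetricExporter())
      metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
      meter = metrics.get_meter("data.platform.pipelines")

      rows_processed = meter.create_counter(
          "rows_processed", unit="1", description="Rows processed per pipeline run"
      )
      run_duration = meter.create_histogram(
          "run_duration", unit="ms", description="Pipeline run duration"
      )

      # Instrumented runs produce the operational data that intelligent,
      # automated platforms are trained on.
      rows_processed.add(10_000, {"pipeline": "daily_sales", "status": "success"})
      run_duration.record(1843.0, {"pipeline": "daily_sales"})
      ```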

      At the core of the intelligent data platform will be operational data used to train ML models. Data practitioners will continue to push for greater democratization of data and more self-service options. This aligns with one of the core principles of the Data Mesh paradigm.

      Removing friction from all stages of the data lifecycle, and increasingly providing access to real-time data, will be a focus of organizations and technology providers in 2024. Hybrid cloud-native architectures, adoption of third-party SaaS and platform-as-a-service (PaaS) offerings, and a continued strengthening of cybersecurity will keep the focus on data security, zero trust, and a clear separation of responsibilities for data management. This will be a forcing function for innovation in data governance and management, while meeting the growing demand to democratize access to data.
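
      As a simplified illustration of policy-driven, zero trust access to data, the sketch below evaluates every request against an explicit policy. Real deployments would delegate this to a policy engine such as Open Policy Agent; the roles, classifications, and rules shown are assumptions for illustration.

      ```python
      # A minimal sketch of a zero trust style, policy-driven access check for
      # data assets. Real deployments would use a policy engine such as Open
      # Policy Agent; the roles, classifications, and rules are illustrative.
      from dataclasses import dataclass

      @dataclass
      class AccessRequest:
          user_role: str
          dataset_classification: str  # e.g. "public", "internal", "restricted"
          purpose: str                 # declared purpose of access

      # Nothing is trusted by default; every request is evaluated explicitly.
      POLICY: dict[tuple[str, str], set[str]] = {
          ("analyst", "internal"): {"reporting", "analytics"},
          ("data_engineer", "restricted"): {"pipeline_maintenance"},
      }

      def is_allowed(req: AccessRequest) -> bool:
          """Grant access only if an explicit policy entry permits this purpose."""
          allowed = POLICY.get((req.user_role, req.dataset_classification), set())
          return req.purpose in allowed

      print(is_allowed(AccessRequest("analyst", "internal", "analytics")))    # True
      print(is_allowed(AccessRequest("analyst", "restricted", "analytics")))  # False
      ```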

      In 2024, I expect technology to increasingly simplify the implementation and enforcement of zero trust, both within organizations and, even more so, across them, as data federation becomes a growing area of interest.

    5. A case for migration to open data lakehouses
      In 2022, we witnessed significant innovation in data lakehouse implementations, with leading data management providers settling on Apache Iceberg as the de facto open table format. This rapid adoption of a preferred open technology almost certainly influenced several data management providers to change their open source strategies and incorporate support for it into their products.
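
      As an illustration of how approachable this migration can be, here is a minimal sketch of creating and querying an Iceberg table from PySpark. It assumes the iceberg-spark-runtime package is on the classpath; the catalog name, warehouse path, and table are illustrative.

      ```python
      # A minimal sketch of creating and querying an Apache Iceberg table with
      # PySpark. Assumes the iceberg-spark-runtime package is available; the
      # catalog name, warehouse path, and table are illustrative.
      from pyspark.sql import SparkSession

      spark = (
          SparkSession.builder
          .appName("iceberg-lakehouse-sketch")
          .config("spark.sql.extensions",
                  "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
          .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
          .config("spark.sql.catalog.local.type", "hadoop")
          .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg-warehouse")
          .getOrCreate()
      )

      # Iceberg tables are plain SQL tables in an open, engine-agnostic format,
      # which is what makes migration between engines and clouds practical.
      spark.sql("""
          CREATE TABLE IF NOT EXISTS local.db.events (
              event_id BIGINT,
              event_type STRING,
              ts TIMESTAMP
          ) USING iceberg
      """)

      spark.sql("INSERT INTO local.db.events VALUES (1, 'login', current_timestamp())")
      spark.sql(
          "SELECT event_type, count(*) AS n FROM local.db.events GROUP BY event_type"
      ).show()
      ```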

      In 2024, I expect to see a steady migration of data and workloads into open data lakehouse architectures across public and private clouds.