Despite being pinned down by US tech bans, China’s cost-effective AI model has outpaced global industry giants. What can we learn?
In late 2024, DeepSeek’s cost-effective generative AI (GenAI) model demonstrated to the world that smaller, specialized models, paired with refined data management, can rival and even outperform large, resource-heavy foundational models.
This approach lowers costs, enhances efficiency, and shifts focus from building massive networks to optimizing data and infrastructure for AI innovation.
While the AI industry has long been fixated on foundational models — massive, all-knowing networks trained on anything and everything — the mixture-of-experts (MoE) approach has proven that smaller, more specialized models are both viable and, in many ways, superior.
Lessons from a surprise AI player
DeepSeek’s meteoric rise holds several lessons for enterprises considering this approach.
To implement this, use a mixture-of-experts model, in which smaller, highly trained expert models work in tandem. A gating mechanism selects the most appropriate expert for each input, optimizing for both performance and efficiency. Specifically:
- Instead of one giant model doing everything, enterprises can deploy a system of interconnected models, each specialized in a specific domain. Smaller models require significantly less compute power, but the true benefit goes beyond cost savings.
- Focused expertise makes it easier to test and verify performance in real-world applications. This approach enables the addition of more specialized model capabilities, without the complexity of building a foundation model. Small models also stand to gain reasoning capabilities more quickly, leading to better AI oversight and transparency in the long run.
- Building foundational models is a cost-prohibitive exercise for most organizations, but this new paradigm lowers the barrier to create highly capable, domain-specific models using proprietary data. Looking ahead, industries can also expect the development of tools and base models that will streamline data distillation, making it easier to create smaller and more capable models.
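The routing idea behind a mixture-of-experts system can be sketched in a few lines. The example below is a minimal, illustrative implementation of top-k gating — the gate weights, expert functions, and the choice of k are all hypothetical, not a description of any particular production model:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route(token_features, gate_weights, experts, top_k=2):
    """Send an input to its top-k experts and blend their outputs.

    gate_weights: one weight vector per expert; its dot product with
    the input features scores how relevant that expert is.
    experts: list of callables, each standing in for a small
    specialized model.
    """
    scores = [sum(w * x for w, x in zip(wv, token_features))
              for wv in gate_weights]
    # Keep only the k best-scoring experts; the rest stay idle,
    # which is where the compute savings come from.
    ranked = sorted(range(len(scores)), key=lambda i: scores[i],
                    reverse=True)
    chosen = ranked[:top_k]
    gate = softmax([scores[i] for i in chosen])
    # Weighted sum of the chosen experts' outputs.
    outputs = [experts[i](token_features) for i in chosen]
    return [sum(g * o[d] for g, o in zip(gate, outputs))
            for d in range(len(outputs[0]))]
```

Because only k experts run per input, compute grows with k rather than with the total number of experts, which is what makes adding new domain specialists cheap.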
Optimizing data and infrastructure for GenAI
For years, the AI industry has focused on hoarding data, maximizing token counts, and scaling by brute force.
With mixture-of-experts models, data management takes center stage. To maximize AI effectiveness, shift from hoarding data to selecting, organizing, and refining it. AI is only as good as the data it’s trained on, so prioritize curating high-quality data, optimizing data pipelines, and building infrastructure that supports AI. Specifically:
- Use practices like continuous data enrichment, versioning, and traceability to ensure models are trained on up-to-date, reliable data, improving performance and reducing errors.
- Enterprises also need systems that can quickly and dynamically organize and categorize data, filter out irrelevant information, and retrieve specific data at scale in real time. DeepSeek demonstrated this with a meticulously designed data selection pipeline, in which data sets were filtered and refined rather than training indiscriminately on everything available — an approach that improved efficiency and reduced costs.
- AI-driven intelligent data selection is emerging as the cornerstone of future AI training, ensuring efficiency and precision in model development.
- As AI shifts toward specialized models and data refinement, infrastructure must evolve with a multi-dimensional approach to performance: supporting thousands of smaller models working in parallel, along with key-value stores that can efficiently handle data at inference time.
- These models should be capable of processing and producing results at scale without compromising on speed or accuracy. In addition to performance, the infrastructure must also prioritize high connectivity and always-on availability. Systems need to be able to scale rapidly and manage vast quantities of data in real time.
- A critical element in achieving this is efficient storage systems that can index, retrieve, filter, and represent large datasets effectively. Storage is no longer just about holding data: it is about enabling effective data use for AI to drive real innovation and unlock opportunities at the intersection of AI, data science, and data management.
- The new paradigm requires businesses to rethink their approach to data storage, integration, and processing. Simplifying data management while ensuring performance and scalability can pave the way for a smarter AI ecosystem that can help industries drive innovation with data.
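The data selection practices above can be made concrete with a toy curation pass. The thresholds, the quality heuristics, and the record layout below are all hypothetical; the point is the shape of the pipeline — filter, deduplicate, and keep provenance so every surviving sample is traceable:

```python
import hashlib

def curate(samples, min_length=20):
    """Filter and deduplicate raw text samples before training.

    A toy stand-in for a data selection pipeline: drop near-empty
    records and exact duplicates, and attach a content hash to each
    kept sample for versioning and traceability.
    """
    seen = set()
    kept = []
    for i, text in enumerate(samples):
        cleaned = " ".join(text.split())   # normalize whitespace
        if len(cleaned) < min_length:      # too short to be useful
            continue
        digest = hashlib.sha256(cleaned.encode()).hexdigest()
        if digest in seen:                 # exact duplicate: skip
            continue
        seen.add(digest)
        kept.append({"id": i, "text": cleaned, "sha256": digest})
    return kept
```

Real pipelines add fuzzy deduplication, quality scoring, and domain tagging on top of this skeleton, but even the skeleton captures the shift from "train on everything" to "train on what survives curation."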
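The inference-time key-value stores mentioned above can likewise be sketched minimally. This in-memory version with a time-to-live is illustrative only — the class name and expiry policy are assumptions, and a production system would use a distributed store — but the interface (fast get/put, with expiry so stale context is never served) is the essential part:

```python
import time

class InferenceKV:
    """Minimal in-memory key-value store for inference-time lookups.

    Entries expire after a configurable TTL, so a model never reads
    context that has gone stale.
    """

    def __init__(self, ttl_seconds=300.0):
        self.ttl = ttl_seconds
        self._data = {}

    def put(self, key, value):
        # Record the value together with its expiry deadline.
        self._data[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._data.get(key)
        if entry is None:
            return default
        value, expires = entry
        if time.monotonic() > expires:  # expired: evict and miss
            del self._data[key]
            return default
        return value
```

The monotonic clock avoids surprises from wall-clock adjustments, which matters when expiry deadlines are measured in seconds.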
By embracing smaller, specialized models and treating data curation and infrastructure as first-class concerns, organizations can capture the efficiency gains this new paradigm has demonstrated and drive AI innovation at a fraction of the cost.