Once a cloud unbeliever, Larry Ellison has announced a slew of AI-enabled cloud services and tools spanning a wide range of use cases.

Fifteen years ago, Oracle Chairman and CTO Larry Ellison famously dissed the cloud: “What is it? It’s complete gibberish. It’s insane.” But he changed his tune a few years later at an Oracle OpenWorld keynote.

I would suggest that Ellison’s change of stance on the cloud was a blessing in disguise: moving to cloud offerings later in the game actually stood the company in good stead in the fast-evolving cloud landscape.

While other cloud providers were struggling to navigate the changing needs brought about by government regulations and business requirements, Oracle leapfrogged these hurdles.

One key area, for example, is data sovereignty. What Oracle has done with on-premises databases, which stay entirely within an organization’s walls, it now does the same way with its cloud offerings.

While others had to help customers adapt their evolving cloud strategies, from public to private to hybrid to multi-cloud, Oracle came in later, with the benefit of hindsight, building open and interoperable platforms, application development, databases, and data management that allow for scaling (even hyperscaling) across hybrid, multi-cloud environments.

Now, Ellison is forging ahead with what may be the most comprehensive portfolio of cloud services by embedding generative AI throughout it.

Building GenAI models in the cloud

During his keynote at Oracle CloudWorld 2023, Ellison outlined how the company is embedding generative AI into its vast portfolio of cloud services, with the aim of helping customers and society at large tackle their most vexing problems. He also announced a slew of new AI-enabled services, including tools for growing food indoors and out, improving healthcare, and automating application development.

He noted that last year’s release of OpenAI’s ChatGPT, built on GPT-3.5, captured the attention of government leaders and the public alike in a way technological advances rarely do, and for good reason.

“Is this the most important new computer technology ever? Probably. One thing’s for certain: We’re about to find out,” he said.

This also puts Oracle in the best position among cloud vendors to help companies develop generative AI models, Ellison said, because its Gen2 Oracle Cloud Infrastructure (OCI) uses ultrafast remote direct memory access (RDMA) networking and connects NVIDIA GPUs in huge superclusters that can efficiently train generative AI models at twice the speed of other clouds and at less than half the cost.

RDMA networking means that “one computer in the network can actually access the memory of another computer without kind of tapping that computer on the shoulder and getting it to interrupt itself,” he said. “So it has the ability to move a lot of data from one computer to another extremely fast, many times faster than conventional networks.”
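For readers unfamiliar with the idea, here is a loose, in-process analogy in Python. It uses the standard library’s shared memory rather than actual RDMA verbs or NIC hardware, and the region name and contents are invented; it only illustrates the “no tap on the shoulder” concept Ellison describes, where one side reads a buffer directly without the owner’s code servicing a request.

```python
# Toy analogy only: real RDMA offloads remote reads/writes to the NIC over
# InfiniBand or RoCE fabrics. Here, a second handle reads a shared buffer
# directly, without the owning code handling any receive call.
from multiprocessing import shared_memory

# The "remote" node exposes a registered memory region and goes about its business.
region = shared_memory.SharedMemory(create=True, size=64, name="demo_region")
region.buf[:13] = b"model weights"

# The "local" node attaches by name and reads the data directly; the owner never
# interrupts itself to serve the read.
peer_view = shared_memory.SharedMemory(name="demo_region")
print(bytes(peer_view.buf[:13]))  # b'model weights'

# Cleanup.
peer_view.close()
region.close()
region.unlink()
```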

Ellison noted that OCI’s speed and cost advantages are why vendors such as Cohere, NVIDIA, and X.AI are using it to train their large language models (LLMs). “In the cloud, time is money,” he said. “We are much faster and many times less expensive than the other clouds for training AI models.”

Harnessing the power of generative AI in appdev

MosaicML, a software development provider that offers infrastructure and tools for building large-scale machine learning models, selected OCI as its preferred cloud infrastructure to help enterprises extract more value from their data.

MosaicML helps organizations make the training and inference of AI models more efficient and accessible. To scale its business and support the growing demand for AI services, MosaicML turned to OCI.

With OCI’s high-performance AI infrastructure, MosaicML states that it has seen up to 50% faster performance and cost savings of up to 80% compared to other cloud providers.

“Hundreds of organizations rely on MosaicML’s platform to develop and train large, complex generative AI models. We provide the complex systems and hardware so our customers can focus on building and deploying their own high-performing custom models,” said Naveen Rao, CEO and co-founder, MosaicML.

“We selected OCI as we believe it is the best foundation for MosaicML. When training models with massive troves of data in the cloud, every minute counts – and with OCI, we pay less than with other cloud providers and can scale almost linearly because of the way Oracle configured its interconnects.”

With OCI, MosaicML has been able to gain access to the latest NVIDIA GPUs, a very high bandwidth interconnect between nodes, and large compute block sizes for scaling to thousands of GPUs. This has enabled MosaicML to help enterprises and startups operationalize AI models, including Twelve Labs.

Twelve Labs is an AI startup building foundation models for multimodal video understanding. By taking advantage of MosaicML’s platform running on OCI and OCI’s AI infrastructure, Twelve Labs has been able to efficiently scale and deploy its AI models to help users effortlessly search, classify, and more effectively utilize their video data for various applications. 

“The combination of MosaicML and Oracle have given us the perfect collaboration to help us handle large capacities at high speeds and to keep up with our growth long-term,” said Jae Lee, founder and CEO, Twelve Labs. “MosaicML enables us to efficiently manage our large AI clusters, while OCI’s AI infrastructure ensures we don’t have to compromise on speed, which has saved us thousands of hours and tens of thousands of dollars in efficiencies.”   

“We are seeing an influx of AI companies come to OCI to run generative AI models, because we can run them faster and more economically than other cloud providers. It is not uncommon to train a 10 billion-parameter model within a few hours on OCI versus a few days on other platforms,” said Greg Pavlik, senior vice president, Oracle.

For training large, complex models such as LLMs at scale, OCI Supercluster provides ultra-low-latency cluster networking, HPC storage, and OCI Compute bare-metal instances powered by NVIDIA GPUs. These instances are connected by a high-performance Ethernet network using RoCE v2 (RDMA over Converged Ethernet v2).

Oracle claims the bandwidth on A100 GPUs provided by OCI exceeds that of both AWS and GCP by 4X-16X, which in turn reduces the time and cost of machine learning training.
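As a rough sketch of what provisioning such a node looks like in practice, the snippet below uses the OCI Python SDK to launch a single bare-metal GPU instance. The shape name, OCIDs, and display name are placeholders; treat this as an illustrative outline under those assumptions rather than a verified recipe.

```python
# Sketch: launch one OCI bare-metal GPU instance with the OCI Python SDK
# ("pip install oci"). All OCIDs and the shape are placeholders; check the
# GPU shapes and images actually available in your region and tenancy.
import oci

config = oci.config.from_file()  # reads credentials from ~/.oci/config
compute = oci.core.ComputeClient(config)

launch_details = oci.core.models.LaunchInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",
    availability_domain="Uocm:PHX-AD-1",
    display_name="llm-training-node-0",
    shape="BM.GPU4.8",  # example bare-metal NVIDIA A100 shape; verify availability
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..example-gpu-image"
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..example"
    ),
)

instance = compute.launch_instance(launch_details).data
print(instance.id, instance.lifecycle_state)
```

Scaling from a single node to a supercluster involves grouping such instances into an OCI cluster network so the RoCE v2 fabric connects them; those steps are omitted here because they depend on region, shape, and quota.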

GenAI application begins at home

Generative AI is also changing how Oracle itself develops new products, said Ellison. While Oracle will continue supporting older applications written in Java, it will develop new applications using code generated automatically in Oracle APEX by GenAI tools working from developer prompts: “We’re not writing it anymore. We’re generating that code. It fundamentally changes how we build applications, how we run applications. It just changes everything.”

The really big deal, he noted, is that the APEX application generator allows for faster application development with smaller development teams, and because the code is generated by GenAI rather than written by hand, it helps significantly reduce security flaws.

Helping to solve pressing problems for humanity

In his keynote address, Ellison announced that Oracle intends to help governments, healthcare providers, and food growers revolutionize their respective industries and improve the human condition.

Oracle is creating an Internet of Things (IoT) platform that healthcare providers can use to keep better track of their inventories and the location of medical supplies. It will also provide IoT capabilities so that providers can use sensors to track patient symptoms and other data.
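Oracle has not published a schema for this platform, but the hypothetical sketch below shows the kind of telemetry events such a system would ingest. All field names and the serialization helper are invented for illustration and are not Oracle’s actual API.

```python
# Hypothetical telemetry shapes for a healthcare IoT platform: a tagged
# medical-supply location event and a patient vital-sign reading.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class SupplyLocationEvent:
    asset_tag: str      # RFID/BLE tag on the supply item
    location: str       # last room or storage unit that read the tag
    quantity: int
    observed_at: str

@dataclass
class PatientVitalReading:
    patient_id: str
    sensor: str         # e.g. "heart_rate", "spo2"
    value: float
    observed_at: str

def to_ingest_message(event) -> str:
    """Serialize an event to the JSON a (hypothetical) ingest endpoint would accept."""
    return json.dumps(asdict(event))

now = datetime.now(timezone.utc).isoformat()
print(to_ingest_message(SupplyLocationEvent("IV-PUMP-0042", "Ward 3 / Bay 2", 1, now)))
print(to_ingest_message(PatientVitalReading("P-1001", "heart_rate", 72.0, now)))
```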

Oracle will also help providers to better store and access electronic patient records, medical imaging, and diagnostics. Ellison said the company will make it less expensive for providers to store this data and for them to use generative AI to help with diagnoses. He predicted the impact will be “better outcomes for millions of patients.”

In partnership with Applied Inventions, Oracle is developing greenhouses that take advantage of a range of Oracle technologies, including Autonomous Database, AI, robotics, and data analytics. The greenhouses can operate anywhere in the world, irrespective of climate, to let producers grow food more efficiently closer to where it will be consumed.

For example, Oracle Cloud is collecting large amounts of training data – such as the amount of sunlight in different areas, nutritional elements in soils, and plant images – to determine the ideal time for transplanting and harvesting. The AI-powered greenhouses can help growers use 98% less water and 90% less land, which are crucial considerations at a time of significant climate disruption, Ellison said.
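Oracle’s actual greenhouse models are not public, but a toy sketch of the kind of predictor such data could feed, here a regression from synthetic sunlight and soil readings to days-until-harvest using scikit-learn, looks roughly like this:

```python
# Toy illustration only: synthetic data and an invented feature set standing in
# for the sunlight, soil-nutrient, and plant-image data described above.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
sunlight_hours = rng.uniform(4, 14, n)      # daily light exposure
soil_nitrogen = rng.uniform(10, 60, n)      # ppm
soil_moisture = rng.uniform(0.2, 0.8, n)    # fraction

# Synthetic target: more light and nitrogen shorten time to harvest.
days_to_harvest = (90 - 2.5 * sunlight_hours - 0.4 * soil_nitrogen
                   + 10 * np.abs(soil_moisture - 0.5) + rng.normal(0, 3, n))

X = np.column_stack([sunlight_hours, soil_nitrogen, soil_moisture])
X_train, X_test, y_train, y_test = train_test_split(X, days_to_harvest, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```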