Nvidia Corp. today debuted a set of new products that will enable companies to more easily build advanced natural language processing models.
The first product, BioNeMo, is a framework for developing natural language processing models that can assist scientists with biology and chemistry research. Alongside the framework, Nvidia today also debuted two cloud-based artificial intelligence services. The first service will make it easier to use AI models developed with BioNeMo, while the other focuses on speeding up the task of applying neural networks to text processing tasks such as summarizing research papers.
The manner in which an AI model processes data and makes decisions is shaped by its internal numeric settings, which are known as parameters and are adjusted during training. Generally, the more parameters a model has, the more accurately it can process data, though larger models also require more computing power to train and run.
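To make the notion of a parameter count concrete, here is a minimal sketch that tallies the weights and biases of a toy fully connected network; the layer sizes are illustrative and have no connection to Nvidia's models:

```python
# Illustrative only: counting the parameters of a small fully connected
# network. Each layer contributes a weight matrix plus a bias vector.
def count_parameters(layer_sizes):
    """Return the total number of weights and biases for a simple
    feed-forward network with the given layer widths."""
    total = 0
    for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]):
        total += fan_in * fan_out  # weight matrix entries
        total += fan_out           # bias vector entries
    return total

# A toy three-layer network; LLMs scale this same idea into the billions.
print(count_parameters([8, 16, 4]))  # → 212
```

The same arithmetic, applied to the thousands of wide layers in a transformer, is what pushes modern LLMs past the billion-parameter mark.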
Researchers have in recent years developed multiple natural language processing models that contain billions of parameters. Such neural networks are known as large language models, or LLMs. The most advanced LLMs can be applied not only to traditional text processing use cases, such as summarizing research papers, but are also capable of writing software code and performing a variety of other tasks.
Scientists have discovered that LLMs’ processing capabilities lend themselves well to biomolecular research. BioNeMo, the new framework that Nvidia debuted today, is specifically designed for training LLMs that can support research in the fields of biology and chemistry. BioNeMo also includes features that ease the task of deploying such neural networks in production.
Nvidia says that scientists can use the framework to train LLMs with billions of parameters. Moreover, BioNeMo includes four pretrained language models that can be applied to research tasks faster than neural networks that have to be developed from scratch.
The first two pretrained models, ESM-1 and OpenFold, focus on proteins: ESM-1 predicts proteins' properties from their amino acid sequences, while OpenFold predicts their three-dimensional structures. BioNeMo also ships with ProtT5, a neural network that can generate new protein sequences. The fourth model included in BioNeMo, MegaMolBART, targets chemistry and can be used for tasks such as predicting how molecules interact with one another.
New cloud services
Alongside BioNeMo, Nvidia today debuted two cloud services designed to ease the task of building AI applications. Both offerings provide access to a set of pre-packaged language models.
The first cloud service, BioNeMo Service, provides access to two language models created with Nvidia’s newly released BioNeMo framework. The two neural networks are optimized to support biology and chemistry research and can be configured with billions of parameters, according to Nvidia.
Nvidia envisions biotech and pharmaceutical companies using BioNeMo Service to accelerate drug discovery. The chipmaker says that the service can help scientists generate new biomolecules for therapeutic applications, as well as perform other tasks involved in medical research.
“Large language models hold the potential to transform every industry,” said Nvidia founder and Chief Executive Officer Jensen Huang. “The ability to tune foundation models puts the power of LLMs within reach of millions of developers who can now create language services and power scientific discoveries without needing to build a massive model from scratch.”
The second cloud service that Nvidia debuted today is called NeMo LLM Service. It provides access to a collection of pretrained language models containing between 3 billion and 50 billion parameters. The language models can be used for tasks such as generating text summaries, powering chatbots and writing software code.
The neural networks in the NeMo LLM Service have been trained in advance by Nvidia, but companies can optionally train them further on their own custom datasets. Familiarizing a neural network with a company’s data enables it to process that data more accurately.
Organizations can train the AI models in the NeMo LLM Service using a method known as prompt learning. Prompt learning involves providing a neural network with a partial sentence such as “Nvidia develops chips for” and instructing it to complete the text. By repeating this process many times, developers can teach a neural network to perform certain computing tasks.
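The completion-style workflow described above can be sketched in a few lines. The example below only assembles a few-shot prompt; the example sentences and the prompt format are illustrative assumptions, not the actual NeMo LLM Service interface:

```python
# Illustrative sketch of prompt-based learning: the model is shown a few
# completed examples of a task, then asked to finish a new, incomplete one.
# The examples and format here are hypothetical, not Nvidia's actual API.
def build_prompt(examples, query):
    """Assemble a few-shot prompt from (partial sentence, completion) pairs,
    ending with the query the model is expected to complete."""
    lines = [f"{text} {completion}" for text, completion in examples]
    lines.append(query)  # the model would complete this final line
    return "\n".join(lines)

examples = [
    ("Nvidia develops chips for", "accelerated computing."),
    ("Intel develops chips for", "personal computers."),
]
prompt = build_prompt(examples, "AMD develops chips for")
print(prompt)
```

Because only the prompt (or a small set of prompt-specific weights) changes while the pretrained model stays frozen, this kind of customization is far cheaper than retraining the full network.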
The primary benefit of prompt learning over traditional AI training methods is speed: according to Nvidia, customers can customize the neural networks provided by the NeMo LLM Service in minutes to hours, compared with the months that training a model from scratch often requires. After the training is complete, the neural networks can be deployed to a cloud environment or a company’s on-premises infrastructure.
The NeMo LLM Service and the BioNeMo Service will become available in early access next month. The BioNeMo framework is available in beta.