VMware, Nvidia team on enterprise-grade AI platform
Companies trying to deploy generative AI today face a major problem. If they use a commercial platform such as OpenAI's, they must send data to the cloud, which may run afoul of compliance requirements and can be expensive. If they download and run a model such as Llama 2 locally, they need deep expertise in fine-tuning it, setting up vector databases to feed it live data, and operationalizing it.

VMware's new partnership with Nvidia aims to solve these problems by offering a fully integrated, ready-to-run generative AI platform that companies can deploy on premises, in colocation facilities, or in private clouds. The platform will include Llama 2 or a choice of other large language models, as well as a vector database that feeds up-to-date company information to the LLM.
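The vector-database pattern described above is commonly called retrieval-augmented generation (RAG): documents relevant to a user's question are retrieved and prepended to the model's prompt, so the LLM answers from current company data rather than stale training data. A minimal sketch of the idea in Python, with a toy word-overlap score standing in for a real embedding model and vector database (all function names and data here are illustrative, not part of the VMware/Nvidia platform):

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# A production system would embed text with a model and query a
# vector database; word overlap stands in for vector similarity here.

def similarity(query: str, doc: str) -> float:
    """Toy stand-in for cosine similarity between embeddings."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    return sorted(docs, key=lambda d: similarity(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context so the LLM answers from company data."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical company documents kept on premises.
docs = [
    "Q3 revenue grew 12% year over year.",
    "The VPN policy requires hardware tokens.",
]
print(build_prompt("What was revenue growth in Q3?", docs))
```

The prompt built this way is what gets sent to the locally hosted LLM, which is why the pattern avoids shipping raw company data to a third-party cloud.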
