Dataiku, the Platform for Everyday AI, has unveiled the LLM Mesh, addressing the need for an effective, scalable, and secure platform for integrating Large Language Models (LLMs) in the enterprise.

While Generative AI presents opportunities and benefits for the enterprise, organisations face several challenges, including an absence of centralised administration, inadequate permission controls for data and models, minimal safeguards against toxic content, exposure of personally identifiable information, and a lack of cost-monitoring mechanisms.

The LLM Mesh provides the components required to efficiently build safe applications using LLMs at scale. Sitting between LLM service providers and end-user applications, the LLM Mesh lets companies choose the most cost-effective models, ensure the safety of their data and responses, and create reusable components for scalable application development.

Components of the LLM Mesh include universal AI service routing, secure access and auditing for AI services, safety provisions for private data screening and response moderation, and performance and cost tracking. The LLM Mesh also provides standard components for application development.
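In outline, a gateway layer of this kind combines those components in one request path: a routing policy, a private-data screen, and a usage log. The sketch below is illustrative only; the class names, pricing figures, and email-only redaction rule are assumptions for the example, not Dataiku's actual API or components.

```python
import re
from dataclasses import dataclass, field

@dataclass
class Provider:
    """A hypothetical LLM service endpoint with illustrative pricing."""
    name: str
    cost_per_1k_tokens: float  # made-up rates, not real provider pricing

    def complete(self, prompt: str) -> str:
        # Stand-in for a real API call to the provider.
        return f"[{self.name}] response to: {prompt}"

# Crude private-data screen: redact anything that looks like an email address.
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

@dataclass
class LLMGateway:
    """Sits between applications and providers: routes, screens, and tracks cost."""
    providers: list = field(default_factory=list)
    usage_log: list = field(default_factory=list)

    def register(self, provider: Provider) -> None:
        self.providers.append(provider)

    def _screen(self, prompt: str) -> str:
        # Safety provision: strip personally identifiable information
        # before the prompt leaves the organisation.
        return EMAIL_RE.sub("[REDACTED]", prompt)

    def complete(self, prompt: str) -> str:
        # Routing policy: pick the cheapest registered provider.
        provider = min(self.providers, key=lambda p: p.cost_per_1k_tokens)
        safe_prompt = self._screen(prompt)
        # Cost tracking: log an approximate token count and spend per call.
        tokens = len(safe_prompt.split())
        cost = tokens / 1000 * provider.cost_per_1k_tokens
        self.usage_log.append(
            {"provider": provider.name, "tokens": tokens, "cost": cost}
        )
        return provider.complete(safe_prompt)

gateway = LLMGateway()
gateway.register(Provider("model-a", cost_per_1k_tokens=0.03))
gateway.register(Provider("model-b", cost_per_1k_tokens=0.002))

reply = gateway.complete("Summarise the ticket from alice@example.com")
print(reply)                 # routed to the cheaper provider, email redacted
print(gateway.usage_log[0])  # per-call provider, token, and cost record
```

The point of the sketch is the seam itself: because every application calls the gateway rather than a provider directly, routing rules, screening, and cost accounting can change centrally without touching application code.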

Dataiku co-founder and chief technology officer Clément Stenac said, “The LLM Mesh represents a pivotal step in AI. At Dataiku, we’re bridging the gap between the promise and reality of using Generative AI in the enterprise. We believe the LLM Mesh provides the structure and control many have sought, paving the way for safer, faster GenAI deployments that deliver real value.”

Dataiku also announced Snowflake, Pinecone, and AI21 Labs as its LLM Mesh launch partners, representing several of the key components of the LLM Mesh – containerised data and compute capabilities, vector databases, and LLM builders.