AI startup Mistral unveiled Le Chat Enterprise, a unified AI assistant platform designed for enterprise-scale productivity and privacy, powered by its new Medium 3 model, which outperforms larger models at a fraction of the cost. (Here, "larger" refers to the number of parameters, or internal model settings; more parameters typically indicate greater complexity and capability, but also require more compute resources, such as GPUs, to run.)

Available on the web and via mobile apps, Le Chat Enterprise is a ChatGPT competitor built specifically for enterprises and their employees, taking into account that they'll likely be working across a suite of different applications and data sources. It's designed to consolidate AI functionality into a single, privacy-first environment that enables deep customization, cross-functional workflows, and rapid deployment.

Key features of interest to business owners and technical decision makers include:

- Enterprise search across private data sources
- Document libraries with auto-summary and citation capabilities
- Custom connectors and agent builders for no-code task automation
- Custom model integrations and memory-based personalization
- Hybrid deployment options with support for public cloud, private VPCs, and on-premises hosting

Le Chat Enterprise supports seamless integration into existing tools and workflows. Companies can build AI agents tailored to their operations and maintain full sovereignty over deployment and data, without vendor lock-in. The platform's privacy architecture enforces strict access controls and supports full audit logging, ensuring data governance for regulated industries. Enterprises also gain full control over the AI stack, from infrastructure and platform features to model-level customization and user interfaces.
Mistral's new Le Chat Enterprise offering could appeal to many enterprises with stricter security and data storage policies (especially medium-to-large and legacy businesses).

Mistral Medium 3 introduces a new performance tier in the company's model lineup, positioned between lightweight and large-scale models. Designed for enterprise use, the model delivers more than 90% of the benchmark performance of Claude 3.7 Sonnet at roughly one-eighth the cost: $0.40 per million input tokens and $2 per million output tokens, compared to Sonnet's $3/$15 for input/output.

Benchmarks show that Mistral Medium 3 is particularly strong in software development tasks. In coding tests like HumanEval and MultiPL-E, it matches or surpasses both Claude 3.7 Sonnet and OpenAI's GPT-4o. According to third-party human evaluations, it outperforms Llama 4 Maverick in 82% of coding scenarios and exceeds Command A in nearly 70% of cases.

Mistral Medium 3 is optimized for enterprise integration. It supports hybrid and on-premises deployment, offers custom post-training, and connects easily to business systems.
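The "one-eighth the cost" claim can be sanity-checked with a quick calculation from the per-million-token prices quoted above. This is a minimal sketch; the 3:1 input-to-output token mix is an illustrative assumption, not a figure from Mistral, and real workloads vary.

```python
def blended_cost(input_price: float, output_price: float,
                 input_tokens_m: float = 3.0, output_tokens_m: float = 1.0) -> float:
    """Dollar cost of a workload, with token volumes given in millions.

    The default 3:1 input/output mix is an assumption for illustration.
    """
    return input_price * input_tokens_m + output_price * output_tokens_m

# Prices per million tokens quoted in the article.
medium3 = blended_cost(0.40, 2.00)   # Mistral Medium 3
sonnet = blended_cost(3.00, 15.00)   # Claude 3.7 Sonnet

print(f"Mistral Medium 3:  ${medium3:.2f}")
print(f"Claude 3.7 Sonnet: ${sonnet:.2f}")
print(f"Cost ratio: {medium3 / sonnet:.3f}")  # ~0.133, roughly one-eighth
```

Under this mix, Medium 3 comes out at about 13% of Sonnet's cost, consistent with the "one-eighth" figure; output-heavy workloads would shift the ratio somewhat.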