LexisNexis's AI assistant, Protégé, aims to help lawyers, associates and paralegals write and proofread legal documents and ensure that anything they cite in complaints and briefs is accurate. However, LexisNexis didn't want a general-purpose legal AI assistant; it wanted to build one that learns a firm's workflow and is more customizable. LexisNexis saw an opportunity to bring in the power of large language models (LLMs) from Anthropic and Mistral and find the models that best answer user questions, said Jeff Riehl, CTO of LexisNexis Legal and Professional. The company uses different models from most of the major model providers when building its AI platforms.

For Protégé, LexisNexis wanted faster response times and models better tuned for legal use cases. So it turned to what Riehl calls "fine-tuned" versions of models: essentially smaller-weight or distilled versions of LLMs. When a user asks Protégé a question about a specific case, the first model it pings is a fine-tuned Mistral "for assessing the query, then determining what the purpose and intent of that query is" before handing off to the model best suited to complete the task. The next model could be an LLM that generates new queries for the search engine, or another model that summarizes results.
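The routing pattern described here, where a small classifier model assesses intent before dispatching to a task-specific model, can be sketched in a few lines. This is a minimal illustration only: the article does not disclose Protégé's interfaces, so every function name and intent label below is a hypothetical stand-in (the keyword classifier stands in for the fine-tuned Mistral, and the handlers stand in for downstream LLM calls).

```python
from typing import Callable, Dict

def classify_intent(query: str) -> str:
    """Stand-in for the small fine-tuned model that assesses the query's
    purpose and intent. A real system would call a distilled LLM here;
    this sketch uses simple keyword matching instead."""
    q = query.lower()
    if "summarize" in q or "summary" in q:
        return "summarize"
    if "find" in q or "search" in q:
        return "search"
    return "draft"

def expand_search_query(query: str) -> str:
    # Stand-in for the model that generates new queries for the search engine.
    return f"expanded search query for: {query}"

def summarize_results(query: str) -> str:
    # Stand-in for the model that summarizes retrieved results.
    return f"summary of results for: {query}"

def draft_text(query: str) -> str:
    # Stand-in for a general drafting model (default route).
    return f"draft responding to: {query}"

# Intent label -> task-specific model, all hypothetical.
ROUTES: Dict[str, Callable[[str], str]] = {
    "search": expand_search_query,
    "summarize": summarize_results,
    "draft": draft_text,
}

def route(query: str) -> str:
    """First ping the small classifier, then dispatch to the best-suited model."""
    intent = classify_intent(query)
    return ROUTES[intent](query)
```

The design benefit of this shape is that the cheap, fast classifier runs on every request, while the larger, slower models are invoked only for the work they are actually suited to.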