Aquant Inc., the provider of an AI platform for service professionals, introduced "retrieval-augmented conversation" (RAC), a new way for LLMs to retrieve and present information that lets them act natively as a guided domain expert rather than delivering knowledge as a single, all-in-one answer.

RAC can be thought of as an expert technician that is aware of its capacity and capabilities, said Indresh Satyanarayana, vice president of product technology and labs and the father of retrieval-augmented conversation. It helps the AI examine a user's question and ask follow-up questions to fill knowledge gaps and generate tailored solutions. Unlike RAG, RAC introduces dynamic turn-taking, much more like a human conversation with an expert in the field in question. It's designed to deliver "bite-sized actions," which he says avoids cognitive overload for the user.

Beyond that, RAC can incorporate additional data points into its conversational context, depending on the persona developers want to build into their AI app. "It retrieves not only manuals but transactional data, job history, parts catalogs, internet of things readings, and key performance indicator targets, then reasons over that richer context to recommend the action that best balances cost, risk and time," said Satyanarayana.

RAC does not fundamentally replace RAG; it still performs the retrieval-augmented portion. Documents still need to be searched and retrieved, and that retrieval guides the conversation for the user.

Developers will also be able to decide how "chatty" their app acts. It can ask questions one at a time, resolving a single ambiguity per turn and providing a final answer only after all of them have been resolved. Alternatively, they could build an app that resolves multiple questions at once, the way some people can hold several threads of conversation at the same time (like many open tabs in Chrome while researching) before resolving the problem.
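To make the flow concrete, here is a minimal Python sketch of how a RAC-style turn could be orchestrated. Aquant has not published code or an API for RAC, so every name below (RACSession, llm_find_gaps, llm_recommend_action, the batch_questions flag, the source callables) is a hypothetical illustration of the retrieve-then-ask loop and the "chattiness" setting described above, not Aquant's implementation.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the helpers here stand in for LLM calls and
# retrieval backends that a real system would provide.

def llm_find_gaps(question: str, docs: list, history: list) -> list:
    """Placeholder for an LLM call that lists facts still missing
    (machine model, error code, parts on hand, ...)."""
    return []  # stub: a real call would return open questions as strings

def llm_recommend_action(history: list, docs: list) -> str:
    """Placeholder for an LLM call that returns one bite-sized next action."""
    return "Replace the worn drive belt, then re-run the calibration cycle."  # stub

@dataclass
class RACSession:
    # Retrieval spans more than manuals: job history, parts catalogs, IoT
    # readings, KPI targets. Each source is modeled as a search callable.
    sources: dict = field(default_factory=dict)
    batch_questions: bool = False   # the "chattiness" dial: one gap per turn or all at once
    history: list = field(default_factory=list)

    def retrieve(self, query: str) -> list:
        # The RAG portion is unchanged: search every configured source.
        return [doc for src in self.sources.values() for doc in src(query)]

    def next_turn(self, user_input: str):
        self.history.append(user_input)
        docs = self.retrieve(user_input)
        gaps = llm_find_gaps(user_input, docs, self.history)
        if not gaps:
            # All ambiguities resolved: return the tailored recommendation.
            return {"action": llm_recommend_action(self.history, docs)}
        # Dynamic turn-taking: ask follow-up question(s) instead of answering.
        return {"ask": gaps if self.batch_questions else gaps[:1]}

# Example usage with empty stand-in sources:
session = RACSession(sources={"manuals": lambda q: [], "iot": lambda q: []})
print(session.next_turn("The conveyor keeps stopping mid-cycle"))
```

The single batch_questions flag is one plausible way to expose the "chattiness" choice the article describes: False yields the one-ambiguity-at-a-time style, True surfaces all open questions in a single turn.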