Red Hat Inc. and Intel Corp.’s collaboration is all about translating open-source code into efficient AI solutions, including the use of vLLM, an open-source library that functions as an inference server, forming a layer between Red Hat’s models and Intel’s accelerators.

“What we’re working with Red Hat to do is minimize that complexity, and what does the hardware architecture and what does all the infrastructure software look like, and make that kind of seamless,” said Chris Tobias, general manager of Americas technology leadership and platform ISV account team at Intel. “You can just worry about, ‘Hey, what kind of application do I want to go with, and what kind of business problem do I wanna solve?’ And then, ideally, that gets you into a cost-effective solution.”

Intel and Red Hat have worked on a number of proofs of concept together, and Intel’s hardware is fully compatible with Red Hat OpenShift AI and Red Hat Enterprise Linux AI. Their collaborations have so far seen success with customers hoping to adopt AI without breaking the bank, according to King.

“Our POC framework has different technical use cases, and now that vLLM becomes more central and on the stage for Red Hat, we’re seeing a lot of interest for vLLM-based POCs from our customers,” he said. “[It’s] really simple for a model to be able to make itself ready day zero for how it can best run on an accelerator.”
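To make that “layer between models and accelerators” idea concrete, here is a minimal sketch of vLLM’s Python interface; the model name is only an example placeholder, and the same engine can also be exposed as an OpenAI-compatible HTTP server via the `vllm serve` command.

```python
# Minimal sketch of vLLM's offline inference API.
# Assumes vLLM is installed (pip install vllm); the model name
# below is an example placeholder, not one named in the article.
from vllm import LLM, SamplingParams

# vLLM selects hardware-appropriate kernels and handles batching and
# paged KV-cache memory behind this single, model-agnostic interface.
llm = LLM(model="facebook/opt-125m")

params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(
    ["What kind of business problem do I want to solve?"], params
)

for request_output in outputs:
    print(request_output.outputs[0].text)
```

The same abstraction is what lets a model be “ready day zero” on a given accelerator: the application code above stays unchanged while vLLM adapts to the hardware underneath.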