Akamai has announced Akamai Cloud Inference, a new solution that gives developers tools to build and run AI applications at the edge. Bringing data workloads closer to end users with this tool can deliver 3x better throughput and up to 2.5x lower latency. Akamai Cloud Inference offers a variety of compute types, from classic CPUs to GPUs to tailored ASIC VPUs, and it integrates with Nvidia’s AI ecosystem, leveraging technologies such as Triton, TAO Toolkit, TensorRT, and NVFlare. Through a partnership with VAST Data, the solution also provides access to real-time data so that developers can accelerate inference-related tasks. It additionally offers highly scalable object storage and integrations with vector database vendors such as Aiven and Milvus. “With this data management stack, Akamai securely stores fine-tuned model data and training artifacts to deliver low-latency AI inference at global scale,” the company said. The platform also provides capabilities for containerizing AI workloads, which is important for enabling demand-based autoscaling, improved application resilience, and hybrid/multicloud portability. Finally, it includes WebAssembly capabilities to simplify how developers build AI applications.
OpenAI to launch its first ‘open-weights’ model: users will be able to see and alter the model’s weights, giving them a way to customize it without retraining it on new data
OpenAI is looking to experiment with a more “open” strategy, detailing its plans to release its first “open-weights” model to the developer community later this year. OpenAI Chief Executive Sam Altman revealed that the upcoming open model will come with “reasoning” capabilities, similar to the company’s existing o3-mini model, which takes time to consider its responses to users’ prompts, increasing its accuracy. By the U.S. Federal Trade Commission’s definition, an open-weights model is one that makes its weights transparent and publicly available. Users will therefore be able to see the model’s weights and alter them, giving them a way to customize the model without having to retrain it on new data. One advantage of open-weights models is that it is cheaper for developers to make these adjustments and tailor the model to different tasks. An organization could, for instance, incorporate internal data into an open-weights model, adjust the weights accordingly, and have the model draw on that information when generating responses, a far simpler process than traditional fine-tuning.
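To make the idea concrete, here is a purely illustrative sketch in plain Python (a toy two-parameter model, not any real model or API): when the weights are exposed, they are just editable numbers, so behavior can be changed directly instead of through retraining.

```python
# Toy "open-weights" model: the parameters are fully visible and editable.
# This is a hypothetical illustration, not OpenAI's model or format.
weights = {"w": [[0.5, -0.2], [0.1, 0.4]], "b": [0.0, 0.1]}

def predict(x, params):
    # A tiny linear layer: output_j = sum_i(x_i * w[i][j]) + b[j]
    w, b = params["w"], params["b"]
    return [sum(xi * w[i][j] for i, xi in enumerate(x)) + b[j]
            for j in range(len(b))]

x = [1.0, 2.0]
before = predict(x, weights)

# Because the weights are exposed, we can edit one directly --
# e.g., zero out a connection -- without retraining on new data.
weights["w"][0][1] = 0.0
after = predict(x, weights)
```

The same principle scales up: with a real open-weights model, the published parameter files can be inspected and modified (or lightly fine-tuned) in place, rather than reproduced from scratch.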
Sourcetable brings natural language instructions to spreadsheets with a “fast, accurate code-driven evaluation loop” that verifies the underlying LLM’s responses
Sourcetable Inc. is looking to eliminate the technical barrier in spreadsheets that separates so-called “power users” from those who can just about work out how to use the SUM function to add up the values in a single column. At the heart of Sourcetable is what the company describes as a “fast, accurate code-driven evaluation loop,” which verifies the underlying LLM’s responses in real time to ensure the accuracy required for complex, multistep automation tasks. As a result, users can trust that the insights it generates are accurate and grounded in the actual data within the spreadsheet. With Sourcetable, users simply tell the spreadsheet what they’re trying to achieve in natural language, via either keyboard or voice, similar to the emerging practice of “vibe coding,” in which a developer describes a problem in a few sentences to prompt a large language model to generate code. The startup describes its product as the first “self-driving” spreadsheet with autopilot capabilities: the underlying AI has full write and editing access to each file so it can perform complex, multistep tasks on behalf of users. Among other things, Sourcetable can create and edit financial models, build charts, graphs, and pivot tables from spreadsheet entries, clean data, edit formatting, enrich the data within a column, and analyze an entire workbook to summarize its contents.
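Sourcetable’s actual implementation is not public, but the general pattern of a code-driven evaluation loop can be sketched as follows. Everything here is a hypothetical stand-in: `llm_propose` substitutes a hard-coded candidate for a real LLM call, and `verify` plays the role of an independent check computed directly from the data.

```python
# Hypothetical sketch of a code-driven evaluation loop (not Sourcetable's
# implementation): an LLM proposes code for the task, the loop executes it
# against the real data, and an independent check verifies the result
# before it is ever applied to the sheet.

def llm_propose(task):
    # Stand-in for an LLM call; here we hard-code a candidate expression.
    return "sum(column)"

def evaluate(candidate, column):
    # Execute the candidate in a restricted namespace against real data.
    # (A production system would use a proper sandbox, not bare eval.)
    return eval(candidate, {"__builtins__": {}}, {"sum": sum, "column": column})

def verify(result, column):
    # Independent check computed directly from the data, not from the LLM.
    expected = 0
    for v in column:
        expected += v
    return result == expected

column = [10, 20, 30]
candidate = llm_propose("total this column")
result = evaluate(candidate, column)
assert verify(result, column)  # only verified results reach the spreadsheet
```

The key design point is that the LLM’s output is never trusted directly: it is treated as a candidate program whose result must match a check derived from the spreadsheet’s own data.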