Permutable AI has unveiled what it describes as the world’s first Gen AI-powered API dedicated to commodities trading, transforming how traders interact with market data. The system processes and analyses thousands of articles in real time, providing traders with deep insights that would be impractical to gather through traditional methods. The API is currently being trialled by several early adopters, including some of the world’s largest energy trading houses, marking a significant milestone in the evolution of commodities trading technology. It delivers comprehensive coverage across crude oil, natural gas, precious metals and agriculture markets, incorporating real-time geopolitical and macro analysis from all major news sources. Unlike general-purpose platforms such as SearchGPT and Perplexity.AI, Permutable’s solution is engineered specifically for commodities trading, offering superior real-time insight delivery. The system enables institutional traders to process and analyse vast amounts of market data instantaneously, with features including advanced TradeRank signals and risk parameters, real-time market sentiment analysis across thousands of sources, sophisticated filtering and bulk operations, and enterprise-grade security with two-factor authentication.
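The core of such cross-source sentiment analysis is aggregating per-article scores into a per-commodity signal. The sketch below illustrates that idea in plain Python; the article scores, commodity tags, and function name are invented placeholders, not Permutable’s actual data or API.

```python
from collections import defaultdict

def aggregate_sentiment(articles):
    """Average per-commodity sentiment from scored news articles.

    Each article is a dict with a 'commodity' tag and a 'sentiment'
    score in [-1, 1]; a real system would derive both with NLP models.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for a in articles:
        totals[a["commodity"]] += a["sentiment"]
        counts[a["commodity"]] += 1
    return {c: totals[c] / counts[c] for c in totals}

# Toy illustration with made-up scores:
articles = [
    {"commodity": "crude_oil", "sentiment": 0.6},
    {"commodity": "crude_oil", "sentiment": -0.2},
    {"commodity": "natural_gas", "sentiment": 0.4},
]
print(aggregate_sentiment(articles))
```

A production pipeline would stream thousands of such scored articles per hour and expose the rolling aggregates through the API.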
Hume launches Voice Control allowing users and developers to make custom AI voices through precise modulation of vocal characteristics
Hume AI, the startup specializing in emotionally intelligent voice interfaces, has launched Voice Control, an experimental feature that empowers developers and users to create custom AI voices through precise modulation of vocal characteristics, with no coding, AI prompt engineering, or sound design skills required. This no-code tool allows users to fine-tune voice attributes in real time through virtual onscreen sliders. The release addresses two key pain points in the AI industry: preset voices, which often fail to meet the specific needs of brands or applications, and the risks associated with voice cloning. The tool’s slider-based interface reflects common perceptual qualities of voice, such as buoyancy or assertiveness, without attempting to oversimplify these attributes through text-based prompts. Voice Control lets developers adjust voices along 10 distinct dimensions, including Masculine/Feminine, Assertiveness, Buoyancy, Confidence and Enthusiasm. Developers can select a base voice, adjust its characteristics, and preview the results in real time. This process ensures reproducibility and stability across sessions, key features for real-time applications like customer service bots or virtual assistants.
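Slider-based modulation of a base voice can be sketched as a small configuration builder. The dimension names below mirror those Hume lists, but the value range, function, and config shape are assumptions for illustration, not Hume’s actual interface.

```python
# Hypothetical sketch of slider-style voice settings; not Hume's API.
VOICE_DIMENSIONS = {
    "masculine_feminine", "assertiveness", "buoyancy",
    "confidence", "enthusiasm",
}

def build_voice_config(base_voice, **sliders):
    """Clamp each slider value to [-1, 1] and attach it to a base voice."""
    config = {"base_voice": base_voice, "modulations": {}}
    for name, value in sliders.items():
        if name not in VOICE_DIMENSIONS:
            raise ValueError(f"unknown dimension: {name}")
        # Clamping keeps out-of-range slider input stable and reproducible.
        config["modulations"][name] = max(-1.0, min(1.0, value))
    return config

print(build_voice_config("narrator", assertiveness=0.7, buoyancy=1.5))
```

Because the output is a plain, deterministic parameter set, the same configuration reproduces the same voice across sessions — the stability property the article highlights.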
Cohere’s Rerank 3.5 enterprise search processes queries across more than 100 languages, with particular strength in major business languages
AI company Cohere released Rerank 3.5, a powerful new search model that promises to transform how global businesses find and use their data across languages and complex systems. The new model arrives as businesses struggle with increasingly complex data environments and multilingual operations. Its most notable advancement is the ability to process queries across more than 100 languages, with particular strength in major business languages including Arabic, Japanese, and Korean. What sets Rerank 3.5 apart isn’t just its linguistic prowess: it’s the model’s ability to fundamentally reshape how global enterprises handle information retrieval. In an era where data silos and language barriers still plague multinational corporations, this advancement could level the playing field for non-English-speaking markets and dramatically accelerate global business operations. Internal testing by Cohere showed Rerank 3.5 performing 23.4% better than hybrid search systems and 30.8% better than traditional BM25 search algorithms on financial services datasets. These improvements, while impressive on paper, could translate to millions in saved costs and significantly reduced risk in regulated industries where information accuracy is paramount.
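A reranker slots in after first-stage retrieval: it rescores the candidate documents against the query and reorders them. The sketch below shows that pipeline shape using a naive token-overlap scorer as a stand-in for a learned model like Rerank 3.5; it illustrates where reranking fits, not how Cohere’s model works.

```python
def rerank(query, documents, top_n=3, score_fn=None):
    """Reorder candidate documents by relevance to the query.

    `score_fn` stands in for a learned reranker such as Rerank 3.5;
    the default here is simple token overlap, for illustration only.
    """
    if score_fn is None:
        q_tokens = set(query.lower().split())
        score_fn = lambda doc: len(q_tokens & set(doc.lower().split()))
    return sorted(documents, key=score_fn, reverse=True)[:top_n]

docs = [
    "quarterly revenue report for the bank",
    "employee cafeteria menu",
    "bank revenue fell in the second quarter",
]
print(rerank("bank quarterly revenue", docs, top_n=2))
```

In production, `score_fn` would be a call to the reranking model, and the candidates would come from a keyword (BM25) or vector first stage — the hybrid systems the benchmark above compares against.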
Redis Cloud serves as a fast and flexible vector database for RAG, efficiently storing and retrieving vector embeddings that provide LLMs with relevant and up-to-date information
Redis, the world’s fastest data platform, announced deeper integration with Amazon Bedrock to further improve the quality and reliability of generative AI apps. Building on last year’s successful integration of Redis Cloud as a knowledge base for building Retrieval-Augmented Generation (RAG) systems, Redis continues to deliver market-leading vector search performance and remains one of only three software vendors listed in the Amazon Bedrock console. Amazon Bedrock’s new RAG evaluation service provides a fast, automated, and cost-effective evaluation tool, natively integrated into the Bedrock platform. By incorporating automated evals, developers can optimize generative AI applications to meet specialized requirements across diverse use cases more effectively. Redis Cloud serves as a fast and flexible vector database for RAG, efficiently storing and retrieving vector embeddings that provide LLMs with relevant and up-to-date information. The Redis-Bedrock integration simplifies this process, enabling developers to seamlessly connect LLMs from the Bedrock console to their Redis-powered vector database, streamlining the workflow and reducing complexity.
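What a vector database does for RAG — store embeddings, return the nearest ones to a query embedding — can be sketched with a toy in-memory store. This stand-in is for illustration only: a real deployment like Redis Cloud uses indexed approximate-nearest-neighbor search, not a linear scan.

```python
import math

class InMemoryVectorStore:
    """Toy stand-in for a vector database such as Redis Cloud:
    stores (id, embedding) pairs and returns the nearest by
    cosine similarity."""

    def __init__(self):
        self.items = []  # list of (doc_id, vector) pairs

    def add(self, doc_id, vector):
        self.items.append((doc_id, vector))

    def search(self, query, k=1):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb)
        ranked = sorted(self.items, key=lambda it: cosine(query, it[1]),
                        reverse=True)
        return [doc_id for doc_id, _ in ranked[:k]]

store = InMemoryVectorStore()
store.add("pricing-faq", [0.9, 0.1])
store.add("refund-policy", [0.1, 0.9])
print(store.search([0.8, 0.2], k=1))  # nearest document to the query
```

In the RAG loop, the retrieved document IDs map back to text passages that are injected into the LLM prompt, which is the step the Bedrock integration wires up from the console.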
9fin’s platform for debt capital markets provides its subscribers with intelligence on high-yield bonds, leveraged loans, distressed debt, collateralized loan obligations (CLOs), private credit and asset-backed finance
9fin has raised $50 million in a Series B funding round to build the next generation of its AI-powered analytics platform for global debt capital markets. The firm will use the new funding to invest further in its AI technology, grow its analytics team and accelerate its expansion in the United States. 9fin provides its subscribers with intelligence on high-yield bonds, leveraged loans, distressed debt, collateralized loan obligations (CLOs), private credit and asset-backed finance. By integrating generative AI into its platform, the company also provides agentic Q&A tools, real-time market updates and advanced search capabilities. Huss El-Sheikh, co-founder and chief technology officer of 9fin, said in the release: “By investing in the best product and engineering talent, we’ve dramatically increased product velocity, delivering capabilities to give our customers the best workflows, tools and insights, and helping them navigate easily through complex financial markets.”
AWS upgrades Amazon Connect contact centers platform with segmentation tool that can scan a company’s customer base for buyers with similar interests to create automated campaigns
AWS is adding more AI features to its Amazon Connect service, which helps companies run their contact centers more efficiently. 1) An AI-powered segmentation tool that can scan a company’s customer base for buyers with similar interests. An online retailer, for example, could ask the AI to find frequent shoppers who place at least three orders per month. After generating a customer segment, marketers can create automated campaigns that activate at opportune moments: such a campaign could detect when online shoppers abandon their cart and offer them a discount to avoid lost sales, and it can activate in response to other events as well. 2) An integration with Amazon Lex, a tool for creating AI assistants. Companies can now enhance those assistants using another AWS machine learning service, Amazon Q: Lex-powered assistants can use it to incorporate data from a company’s internal applications and other sources into their output, and administrators can create guardrails to ensure that AI-generated responses are safe and accurate. 3) A Salesforce integration that will allow users of the customer relationship management platform to leverage Amazon Connect’s routing features, which automatically direct each customer request to the agent best equipped to answer it. In conjunction, Amazon Connect is receiving a WhatsApp for Business integration that will allow contact center agents to field user inquiries via the popular messaging app. 4) A set of AI-powered tools for measuring contact center performance. According to AWS, managers can usually review only 1% to 2% of customer interactions because of the large volume of tickets processed every day; the new AI features make it possible to review contact center performance data more thoroughly and identify areas for improvement.
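The retailer example — find shoppers averaging at least three orders per month — is at heart a filter over customer history. The sketch below shows that logic; the data schema and function name are invented for illustration and are not Amazon Connect’s actual data model.

```python
def segment_frequent_shoppers(customers, min_orders_per_month=3):
    """Select customers averaging at least `min_orders_per_month` orders.

    `customers` maps a customer id to a list of monthly order counts;
    this schema is a stand-in, not Amazon Connect's real one.
    """
    return [
        cid for cid, monthly_orders in customers.items()
        if sum(monthly_orders) / len(monthly_orders) >= min_orders_per_month
    ]

customers = {
    "alice": [4, 5, 3],   # averages 4 orders/month
    "bob":   [1, 0, 2],   # averages 1 order/month
}
print(segment_frequent_shoppers(customers))
```

The appeal of the AI-powered version is that marketers express this criterion in natural language instead of writing the query themselves; the resulting segment then feeds the automated campaigns described above.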
Hugging Face’s SmolVLM is a compact open multimodal model that accepts arbitrary sequences of image and text inputs to produce text outputs with unprecedented efficiency: it requires only 5.02 GB of GPU RAM
Hugging Face has just released SmolVLM, a compact vision-language AI model that could change how businesses use AI across their operations. The new model processes both images and text with remarkable efficiency while requiring just a fraction of the computing power needed by its competitors. As companies struggle with the skyrocketing costs of implementing LLMs and the computational demands of vision AI systems, SmolVLM offers a pragmatic solution that doesn’t sacrifice performance for accessibility. SmolVLM is a compact open multimodal model that accepts arbitrary sequences of image and text inputs to produce text outputs. What makes this significant is the model’s unprecedented efficiency: it requires only 5.02 GB of GPU RAM, while competing models like Qwen-VL 2B and InternVL2 2B demand 13.70 GB and 10.52 GB respectively. Rather than following the industry’s bigger-is-better approach, Hugging Face has proven that careful architecture design and innovative compression techniques can deliver enterprise-grade performance in a lightweight package. This could dramatically reduce the barrier to entry for companies looking to implement AI vision systems.
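Those footprint figures make the barrier-to-entry claim concrete: a quick check of which models fit a given GPU memory budget, using the numbers reported above.

```python
# Reported GPU RAM requirements (GB), as cited in the comparison above.
MODEL_GPU_RAM_GB = {
    "SmolVLM": 5.02,
    "Qwen-VL 2B": 13.70,
    "InternVL2 2B": 10.52,
}

def models_that_fit(budget_gb):
    """Return the models whose reported footprint fits within `budget_gb`."""
    return [m for m, gb in MODEL_GPU_RAM_GB.items() if gb <= budget_gb]

# On a common 8 GB consumer GPU, only SmolVLM fits:
print(models_that_fit(8.0))
```

Actual memory use varies with batch size, image resolution, and quantization, so these published figures are a guide rather than a guarantee.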
Kubernetes is well-suited for deploying generative AI and large language models, particularly at the edge near users
Kubernetes AI deployment is revolutionizing the way organizations integrate artificial intelligence into their operations, providing scalable, efficient solutions that enhance performance and prioritize security in cloud environments. Vultr is leading the charge in scalable cloud solutions, according to Nathan Goulding, senior vice president of engineering at Vultr. The company’s infrastructure enables delivery to 90% of the global population in under 40 milliseconds, crucial for web applications and AI model deployment. Kubernetes is well-suited for deploying generative AI and large language models, particularly at the edge near users. Organizations are increasingly focused on responsibly integrating AI into their applications, yet only 1% of corporate data is currently utilized in large language models. This highlights a significant opportunity for enterprises to unlock value by securely architecting AI-driven systems that leverage their proprietary data, Goulding pointed out. Platform engineering teams at Vultr prioritize consuming fundamental cloud infrastructure, particularly VMs and bare metal, deploying applications with Kubernetes as the standard, Goulding explained.
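Deploying an LLM inference server on Kubernetes typically comes down to a Deployment that requests GPU capacity so the scheduler places replicas on GPU-equipped (edge) nodes. A minimal sketch follows; the image name and resource figures are placeholders, not a Vultr-specific configuration.

```yaml
# Minimal sketch of a Kubernetes Deployment for an LLM inference server.
# Image name and resource values are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-inference
spec:
  replicas: 2                    # scale out across edge locations
  selector:
    matchLabels:
      app: llm-inference
  template:
    metadata:
      labels:
        app: llm-inference
    spec:
      containers:
        - name: server
          image: registry.example.com/llm-server:latest  # placeholder
          ports:
            - containerPort: 8080
          resources:
            limits:
              nvidia.com/gpu: 1  # schedule onto a GPU node
```

Exposing the pods behind a Service or Ingress, and adding autoscaling, follows standard Kubernetes practice and is omitted here for brevity.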