
Microsoft’s most capable new Phi 4 AI model rivals the performance of far larger systems, yet is small enough for low-latency environments

May 1, 2025 // by Finnovate

Microsoft launched several new “open” AI models, the most capable of which is competitive with OpenAI’s o3-mini on at least one benchmark. All of the new permissively licensed models, Phi 4 mini reasoning, Phi 4 reasoning, and Phi 4 reasoning plus, are “reasoning” models, meaning they’re able to spend more time fact-checking solutions to complex problems.

Phi 4 mini reasoning was trained on roughly 1 million synthetic math problems generated by Chinese AI startup DeepSeek’s R1 reasoning model. At around 3.8 billion parameters, it is designed for educational applications, such as “embedded tutoring” on lightweight devices. Parameters roughly correspond to a model’s problem-solving skills, and models with more parameters generally perform better than those with fewer.

Phi 4 reasoning, a 14-billion-parameter model, was trained using “high-quality” web data as well as “curated demonstrations” from OpenAI’s o3-mini. It is best suited to math, science, and coding applications.

Phi 4 reasoning plus is Microsoft’s previously released Phi-4 model adapted into a reasoning model to achieve better accuracy on particular tasks. It approaches the performance of R1, a model with significantly more parameters (671 billion), and the company’s internal benchmarking also has it matching o3-mini on OmniMath, a math skills test.

“Using distillation, reinforcement learning, and high-quality data, these [new] models balance size and performance,” Microsoft wrote in a blog post. “They are small enough for low-latency environments yet maintain strong reasoning capabilities that rival much bigger models. This blend allows even resource-limited devices to perform complex reasoning tasks efficiently.”
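Being “small enough for low-latency environments” in practice means a model like Phi 4 mini reasoning can be run locally on modest hardware. The sketch below is a rough illustration only, not taken from the article: it loads the model with the Hugging Face transformers library, and both the repository id "microsoft/Phi-4-mini-reasoning" and the example prompt are assumptions that should be checked against the model hub.

```python
# Illustrative sketch only: running a small reasoning model locally with
# Hugging Face transformers. The repo id below is an assumption based on
# Microsoft's naming; verify the exact identifier on the model hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-reasoning"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the ~3.8B-parameter model on a GPU if one is available.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Reasoning models are prompted like chat models; the tokenizer's chat
# template handles the expected formatting.
messages = [{"role": "user", "content": "Solve step by step: what is 12 * 17?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models tend to emit long chains of thought, so allow plenty of new tokens.
output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same idea applies to the larger Phi 4 reasoning and Phi 4 reasoning plus models, with correspondingly higher memory requirements.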

Read Article

Category: Members, AI & Machine Economy, Innovation Topics

