
DigiBanker

Bringing you cutting-edge new technologies and disruptive financial innovations.


OpenAI’s latest o3-pro AI model rated consistently higher for clarity, comprehensiveness, instruction-following, and accuracy in key domains like science, education, programming, business, and writing help

June 11, 2025 // by Finnovate

OpenAI has launched o3-pro, an AI model that the company claims is its most capable yet. “In expert evaluations, reviewers consistently prefer o3-pro over o3 in every tested category and especially in key domains like science, education, programming, business, and writing help,” OpenAI writes in a changelog. “Reviewers also rated o3-pro consistently higher for clarity, comprehensiveness, instruction-following, and accuracy.”

O3-pro has access to tools, according to OpenAI, allowing it to search the web, analyze files, reason about visual inputs, use Python, personalize its responses by leveraging memory, and more. As a drawback, the model’s responses typically take longer to complete than o1-pro’s, according to OpenAI. O3-pro has other limitations: temporary chats with the model in ChatGPT are disabled for now while OpenAI resolves a “technical issue,” o3-pro can’t generate images, and Canvas, OpenAI’s AI-powered workspace feature, isn’t supported.

On the plus side, o3-pro achieves impressive scores on popular AI benchmarks. On AIME 2024, which evaluates a model’s math skills, o3-pro scores better than Google’s top-performing AI model, Gemini 2.5 Pro. O3-pro also beats Anthropic’s recently released Claude 4 Opus on GPQA Diamond, a test of PhD-level science knowledge.

In the API, o3-pro is priced at $20 per million input tokens and $80 per million output tokens. Input tokens are tokens fed into the model, while output tokens are tokens the model generates in response.
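To put that per-token pricing in concrete terms, the short Python sketch below estimates the cost of a single o3-pro API call at the quoted rates. It is an illustration only: the function name and the example token counts are hypothetical, not taken from OpenAI’s documentation.

INPUT_PRICE_PER_M = 20.00   # USD per 1,000,000 input tokens (quoted o3-pro rate)
OUTPUT_PRICE_PER_M = 80.00  # USD per 1,000,000 output tokens (quoted o3-pro rate)

def estimate_o3_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one o3-pro API call from its token counts."""
    input_cost = input_tokens / 1_000_000 * INPUT_PRICE_PER_M
    output_cost = output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M
    return input_cost + output_cost

# Example: a 2,000-token prompt that yields a 1,500-token answer
# costs roughly $0.04 + $0.12 = $0.16.
print(f"${estimate_o3_pro_cost(2_000, 1_500):.2f}")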


Category: AI & Machine Economy, Innovation Topics

