
DigiBanker

Bringing you cutting-edge new technologies and disruptive financial innovations.


NeuroBlade’s Analytics Accelerator is purpose-built hardware for modern database workloads, delivering roughly 4x faster performance than leading vectorized CPU implementations

April 29, 2025 //  by Finnovate

As Elad Sity, CEO and cofounder of NeuroBlade, put it: “while the industry has long relied on CPUs for data preparation, they’ve become a bottleneck, consuming well over 30 percent of the AI pipeline.” NeuroBlade, the Israeli semiconductor startup Sity cofounded, believes the answer lies in a new category of hardware designed specifically to accelerate data analytics. Its Analytics Accelerator is not simply a faster CPU; it is a fundamentally different architecture, purpose-built for modern database workloads.

The Accelerator raises the effective compute power of each server by offloading analytics operations from the CPU to dedicated hardware, a technique known as pushdown. That lets large datasets be processed faster with smaller clusters than CPU-only deployments require, and it sidesteps the problems that come with massive clusters: network overhead, power constraints, and operational complexity.

In TPC-H benchmarks, an industry standard for evaluating decision-support systems, Sity said the Accelerator delivers about 4x the performance of leading vectorized CPU implementations such as Presto with Velox. NeuroBlade’s pitch is that by moving analytics off CPUs and onto dedicated silicon, enterprises can achieve better performance with a fraction of the infrastructure, lowering cost, energy draw, and complexity in one move.
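The pushdown idea described above can be sketched in a few lines: a query planner partitions a plan's operators between an accelerator (when present) and the CPU, and the per-server speedup falls out of how much work gets offloaded. This is a minimal illustrative sketch, not NeuroBlade's actual API; all class names, the operator model, and the uniform per-operator cost are assumptions, with the 4x factor taken from the TPC-H figure cited in the article.

```python
# Hypothetical sketch of operator pushdown. Names and cost model are
# illustrative assumptions, not NeuroBlade's real interface.
from dataclasses import dataclass

ACCELERATOR_SPEEDUP = 4.0  # ~4x vs vectorized CPU, per the cited TPC-H figure

@dataclass
class Operator:
    name: str        # e.g. "scan", "filter", "aggregate", "join"
    pushable: bool   # whether the accelerator supports this operator

def split_plan(operators, accelerator_present):
    """Partition a query plan into accelerator-side and CPU-side operators."""
    pushed, kept = [], []
    for op in operators:
        if accelerator_present and op.pushable:
            pushed.append(op)   # offloaded to dedicated silicon
        else:
            kept.append(op)     # stays on the CPU
    return pushed, kept

def estimated_runtime(operators, accelerator_present, cpu_cost_per_op=1.0):
    """Toy cost model: every operator costs one CPU unit; pushed operators
    run ACCELERATOR_SPEEDUP times faster on the accelerator."""
    pushed, kept = split_plan(operators, accelerator_present)
    return (len(kept) * cpu_cost_per_op
            + len(pushed) * cpu_cost_per_op / ACCELERATOR_SPEEDUP)

plan = [Operator("scan", True), Operator("filter", True),
        Operator("aggregate", True), Operator("join", False)]

print(estimated_runtime(plan, accelerator_present=False))  # 4.0
print(estimated_runtime(plan, accelerator_present=True))   # 1.75
```

Under this toy model, three of four operators offloaded yields a ~2.3x end-to-end speedup per server, which is the mechanism behind the article's claim that the same workload needs a smaller cluster.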


Category: Members, AI & Machine Economy, Innovation Topics


Copyright © 2025 Finnovate Research · All Rights Reserved · Privacy Policy
