The Apache Airflow community is out with its biggest update in years, with the debut of the 3.0 release. Apache Airflow 3.0 addresses critical enterprise needs with an architectural redesign that could improve how organizations build and deploy data applications. Unlike previous versions, this release breaks away from a monolithic package, introducing a distributed client model that provides flexibility and security. This new architecture allows enterprises to: Execute tasks across multiple cloud environments; Implement granular security controls; Support diverse programming languages; and Enable true multi-cloud deployments. Airflow 3.0’s expanded language support is also notable. While previous versions were primarily Python-centric, the new release natively supports multiple programming languages: Python and Go at launch, with planned support for Java, TypeScript and Rust. This approach means data engineers can write tasks in their preferred programming language, reducing friction in workflow development and integration. The release also brings event-driven scheduling: instead of running a data processing job every hour, Airflow can now start the job automatically when a specific data file is uploaded or when a particular message appears, such as data loaded into an Amazon S3 cloud storage bucket or a streaming data message in Apache Kafka.
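The event-driven idea can be sketched in a few lines of plain Python. This is an illustration of the concept only, not Airflow's actual API (the `register` and `emit` names are invented): jobs subscribe to an asset URI, and an upload event triggers every dependent job immediately rather than waiting for the next hourly tick.

```python
from collections import defaultdict

# Map each data-asset URI to the jobs that should run when it updates.
subscribers = defaultdict(list)

def register(asset_uri, job):
    """Subscribe a job to an asset instead of a cron schedule."""
    subscribers[asset_uri].append(job)

def emit(asset_uri, payload):
    """Called when an event arrives, e.g. a file lands in S3 or a Kafka message."""
    return [job(payload) for job in subscribers[asset_uri]]

# The downstream job runs as soon as its input asset updates; no polling loop.
register("s3://sales/raw.csv", lambda p: f"processed {p}")
results = emit("s3://sales/raw.csv", "raw.csv")
```

The same subscription pattern underlies asset-based scheduling: the scheduler tracks which workflows depend on which assets and fires them on update events.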
Upwind’s ML cloud platform collects multi-layer telemetry data from the networking stack for real-time detection of threats to APIs, enabling a 7X reduction in mean time to respond
Upwind has added a feature to its cloud application detection and response (CADR) platform, allowing real-time detection of threats to application programming interfaces (APIs). The platform uses machine learning algorithms to collect telemetry data from Layers 3, 4, and 7 of the networking stack, enabling the identification of deviations and anomalous behavior in API traffic. The goal is to reduce the time required to investigate API security incidents by up to 10 times and mean time to respond by up to seven times. In the age of generative artificial intelligence (AI), there is a growing focus on API security. Many organizations are discovering that sensitive data is being shared inadvertently with AI models. Historically, responsibility for securing APIs has been unclear, with many cybersecurity teams assuming that application development teams are securing them as they are developed. However, this can lead to thousands of APIs that cybercriminals can exploit to exfiltrate data or modify business logic. Over the next 12-18 months, organizations plan to increase software security spend on APIs, DevOps toolchains, incident response, open source software, software bill of materials, and software composition analysis tools. Advancements in AI and eBPF technologies could simplify the entire software development lifecycle by streamlining the collection and analysis of telemetry data.
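As a rough illustration of what flagging "deviations and anomalous behavior" in API telemetry means in practice, a detector can baseline each endpoint's request rate and flag points that deviate sharply from the trailing window. This is a generic rolling z-score sketch, not Upwind's algorithm:

```python
import statistics

def zscore_flags(series, window=20, threshold=3.0):
    """Flag samples that deviate strongly from the trailing window's baseline."""
    flags = []
    for i, x in enumerate(series):
        hist = series[max(0, i - window):i]
        if len(hist) < 5:          # not enough history to form a baseline yet
            flags.append(False)
            continue
        mu = statistics.fmean(hist)
        sd = statistics.pstdev(hist) or 1e-9   # avoid division by zero
        flags.append(abs(x - mu) / sd > threshold)
    return flags

# Steady request rate to one API endpoint, then a sudden burst.
rates = [100] * 31 + [2000]
flags = zscore_flags(rates)
```

Real CADR systems correlate many such signals across network layers (L3/L4 flow data, L7 request payloads) rather than a single per-endpoint rate, but the baseline-and-deviate structure is the same.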
Microsoft Sentinel enables more accurate event reconstruction by integrating Endace’s one-click, drill-down access to definitive, full packet evidence into SIEM workflows
Endace has partnered with Microsoft Sentinel to integrate EndaceProbe with the cloud security solution. This integration allows NetOps and SecOps teams to access full packet evidence from Microsoft Sentinel, enabling faster investigations and more accurate event reconstruction. This integration also enhances security teams’ ability to respond to threats with confidence. Benefits of the integration include: Streamlined investigation workflows, alerts, and playbooks from Microsoft Sentinel, with one-click, drill-down access to definitive, full packet evidence captured by EndaceProbe; Continuous capture of weeks or months of full packet data across Hybrid, On-Prem, and Multi-Cloud environments; A single central console for searching and analyzing recorded packet data across global-scale networks, integrated with Microsoft Sentinel; Deep visibility that shows exactly what happened before, during, and after every event; Zero-Day Threat (ZDT) risk validation using playback of recorded network traffic; Faster, more efficient incident investigation and resolution by combining EndaceProbe’s centralized search with Microsoft Sentinel’s AI-powered SIEM; and Military-grade security: EndaceProbe appliances are FIPS 140-3 compliant and are listed on the DoDIN APL.
BigID’s privacy management solution helps enterprises to capture, score, and track AI-related privacy risks in a centralized register to strengthen governance and enable effective risk mitigation
BigID announced the launch of AI Privacy Risk Posture Management – the industry’s first solution to help organizations manage data privacy risks across the AI lifecycle. With unmatched visibility, automated assessments, and actionable privacy controls, BigID empowers enterprises to govern AI responsibly while staying ahead of fast-evolving regulations. BigID’s platform helps organizations: 1) Automatically Discover AI Assets: Quickly inventory all models, vector databases, and AI pipelines across hybrid environments to understand how sensitive and personal data flows through AI systems – a critical requirement for GDPR Article 35 and beyond. 2) Proactively Manage AI Data Lifecycles: Enforce policies for data minimization, retention, and lawful purpose across training and inference, preventing model drift and limiting risk exposure. 3) Streamline Privacy Risk Management: Capture, score, and track AI-related privacy risks in a centralized Privacy Risk Register to strengthen governance and enable effective risk mitigation. 4) Accelerate AI Privacy Impact Assessments: Use pre-built, customizable templates for DPIAs and AIAs aligned to regulatory frameworks – with automated evidence capture to simplify documentation. 5) Automate Risk Visibility & Reporting: Gain up-to-date reporting and dynamic risk assessments to demonstrate compliance and communicate AI risk posture to regulators and stakeholders. 6) Deliver Board-Ready Privacy Metrics: Provide meaningful KPIs and metrics to DPOs and board leaders, helping quantify AI privacy risk and monitor remediation efforts.
Consortium launches a real-time, yield-bearing blockchain settlement network powered by a tokenized treasury fund to deliver scalable and inclusive settlement for digital assets
Arca Labs, Tassat Group and tZERO Group announced the launch of Lynq, a real-time, yield-bearing settlement network powered by a tokenized treasury fund custodied at a special purpose broker-dealer. This announcement comes after more than a year of market engagement, platform development and the creation of the Arca Institutional U.S. Treasury Fund “TFND”, a tokenized treasury fund that issues shares as digital asset securities. Scheduled for go-live in Q2, 2025, Lynq was developed in collaboration with leading digital asset institutions to deliver an efficient, scalable, and inclusive settlement solution. Lynq’s launch partners, which include B2C2, Galaxy and Wintermute, will assist with counterparty onboarding to accelerate network adoption and drive initial liquidity. Additional partners include U.S. Bank, which will provide treasury management services to the Lynq ecosystem and serve as Lynq’s qualified cash custodian, and Avalanche, which will provide the open-source Layer 1 blockchain network on which TFND shares will be issued and rebalanced. Lynq operates within a legal framework that leverages tZERO’s Broker-Dealer and Special Purpose Broker-Dealer licenses as well as Arca’s Registered Investment Adviser and Delaware Trust. This innovative architecture, paired with Tassat’s widely adopted, real-time blockchain infrastructure, provides clients with segregated account security, transparent proof of reserves, and broad ecosystem connectivity, all on a familiar and trusted platform.
Morgan Stanley research shows Apple Intelligence platform has been downloaded and engaged with by 80% of eligible U.S. iPhone owners in the last six months and has an above average NPS of 53
Consumers’ perception of Apple’s AI platform is more favorable than that of investors, Morgan Stanley said in a research note. Morgan Stanley said it found that the Apple Intelligence platform has been downloaded and engaged with by 80% of eligible U.S. iPhone owners in the last six months, has an above average net promoter score of 53, and is characterized by iPhone users as “easy to use, innovative, and something that improves their user experience.” “While much of the public critique of Apple Intelligence is warranted, and investor sentiment and expectations on Apple’s AI platform couldn’t be lower, our survey of iPhone owners paints a more positive picture,” Morgan Stanley said in the note. Since September, the share of iPhone owners who believe it is extremely or very important to have Apple Intelligence support on their next iPhone rose 15 points to reach 42%. Among iPhone owners who are likely to upgrade their device in the next 12 months, the percentage saying that about the AI platform rose 20 points to reach 54%, according to the note. Morgan Stanley also found that consumers are willing to pay more for Apple Intelligence than they were in September. Those who have used the AI platform are now willing to pay an average of $9.11 per month for it, a figure that’s 11% higher than the $8.17 average seen in September, per the note. “While we don’t expect Apple to put Apple Intelligence behind a paywall until the platform is more built out, the potential long-term monetization of an Apple Intelligence subscription could reach tens of billions of dollars annually when considering a 1.4B global iPhone installed base, 32% (and growing) of US iPhone owners have an Apple Intelligence-supported iPhone, and users are willing to pay up to $9.11/month for Apple Intelligence,” Morgan Stanley said in the note.
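The note's own figures make the "tens of billions" ceiling easy to check. The back-of-envelope arithmetic below simply multiplies the numbers quoted above (treating the US capability share as a global proxy); it is not Morgan Stanley's model:

```python
installed_base = 1.4e9     # global iPhone installed base cited in the note
supported_share = 0.32     # US share with Apple Intelligence-capable iPhones
monthly_price = 9.11       # average stated willingness to pay, $/month

# 1.4B x 0.32 = ~448M users; x $9.11/month x 12 months = ~$49B per year.
annual_revenue = installed_base * supported_share * monthly_price * 12
print(f"${annual_revenue / 1e9:.0f}B per year")  # prints "$49B per year"
```

Actual uptake of a paid tier would of course be far below 100% of capable devices, which is why the note frames this as a long-term potential rather than a forecast.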
Researchers from MIT, McGill University, ETH Zurich, Johns Hopkins University, Yale and the Mila-Quebec Artificial Intelligence Institute have developed a new method for ensuring that AI-generated code is more accurate and useful. In the paper, the researchers used Sequential Monte Carlo (SMC) to “tackle a number of challenging semantic parsing problems, guiding generation with incremental static and dynamic analysis.” Sequential Monte Carlo refers to a family of algorithms that help figure out solutions to filtering problems. The method spans various programming languages and instructs the LLM to adhere to the rules of each language. The group found that by adopting the new sampling methods, AI models can be guided to follow programming language rules, and can even lift the performance of small language models (SLMs), which are typically used for code generation, above that of large language models. João Loula, co-lead author of the paper, said that the method “could improve programming assistants, AI-powered data analysis and scientific discovery tools.” It can also cut compute costs and be more efficient than reranking methods. Key features of adapting SMC sampling to model generation include a proposal distribution that guides token-by-token sampling with cheap constraints, importance weights that correct for biases, and resampling that reallocates compute effort toward promising partial generations.
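The three ingredients named above can be illustrated on a toy problem. The stdlib-only sketch below is not the paper's implementation: the balanced-parentheses "grammar" and the fixed token probabilities are invented for the demo. It shows a proposal that masks out tokens a cheap incremental check rules out, importance weights that correct for that masking, and resampling that refocuses particles on promising prefixes.

```python
import random

VOCAB = ["(", ")", "x"]
LENGTH = 8

def model_probs(prefix):
    """Stand-in for an LM's next-token distribution (hypothetical fixed values)."""
    return {"(": 0.4, ")": 0.4, "x": 0.2}

def allowed(prefix, tok):
    """Cheap static check: keep the prefix completable to a balanced string."""
    depth = prefix.count("(") - prefix.count(")")
    if tok == ")":
        if depth == 0:
            return False
        depth -= 1
    elif tok == "(":
        depth += 1
    remaining = LENGTH - len(prefix) - 1
    return depth <= remaining   # enough room left to close every open paren

def smc_generate(n_particles=300, seed=1):
    rng = random.Random(seed)
    particles = [[] for _ in range(n_particles)]
    weights = [1.0] * n_particles
    for _ in range(LENGTH):
        for i, p in enumerate(particles):
            probs = model_probs(p)
            ok = {t: probs[t] for t in VOCAB if allowed(p, t)}
            # Importance weight corrects for sampling from the masked proposal:
            # the weight ratio p/q equals the allowed probability mass.
            weights[i] *= sum(ok.values())
            toks, ps = zip(*ok.items())
            p.append(rng.choices(toks, weights=ps)[0])
        # Resample to reallocate compute toward high-weight partial generations.
        idx = rng.choices(range(n_particles), weights=weights, k=n_particles)
        particles = [list(particles[j]) for j in idx]
        weights = [1.0] * n_particles
    return ["".join(p) for p in particles]

samples = smc_generate()
```

Every sample satisfies the constraint by construction, while the weights and resampling keep the population distributed according to the model rather than the masked proposal, which is the bias-correction point the paper makes.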
Crowdsourced AI benchmarks should be dynamic rather than static datasets, and tailored specifically to distinct use cases
Over the past few years, labs including OpenAI, Google, and Meta have turned to platforms that recruit users to help evaluate upcoming models’ capabilities. When a model scores favorably, the lab behind it will often tout that score as evidence of a meaningful improvement. It’s a flawed approach, however, according to Emily Bender, a University of Washington linguistics professor and co-author of the book “The AI Con.” Bender takes particular issue with Chatbot Arena, which tasks volunteers with prompting two anonymous models and selecting the response they prefer. “To be valid, a benchmark needs to measure something specific, and it needs to have construct validity — that is, there has to be evidence that the construct of interest is well-defined and that the measurements actually relate to the construct,” Bender said. “Chatbot Arena hasn’t shown that voting for one output over another actually correlates with preferences, however they may be defined.” Asmelash Teka Hadgu, the co-founder of AI firm Lesan and a fellow at the Distributed AI Research Institute, said that he thinks benchmarks like Chatbot Arena are being “co-opted” by AI labs to “promote exaggerated claims.” “Benchmarks should be dynamic rather than static datasets,” Hadgu said, “distributed across multiple independent entities, such as organizations or universities, and tailored specifically to distinct use cases, like education, healthcare, and other fields done by practicing professionals who use these [models] for work.” Wei-Lin Chiang, an AI doctoral student at UC Berkeley and one of the founders of LMArena, which maintains Chatbot Arena, said that incidents such as the Maverick benchmark discrepancy aren’t the result of a flaw in Chatbot Arena’s design, but rather labs misinterpreting its policy.
Adaptive Computer’s no-code web-app platform lets non-programmers build full-featured apps that include payments (via Stripe), scheduled tasks, and AI features such as image generation and speech synthesis, simply by entering a text prompt
Startup Adaptive Computer wants non-programmers to use full-featured apps that they’ve created themselves, simply by entering a text prompt into Adaptive’s no-code web-app platform. To be certain, this isn’t about the computer itself or any hardware — despite the company’s name. The startup currently only builds web apps. For every app it builds, Adaptive Computer’s engine handles creating a database instance, user authentication, file management, and can create apps that include payments (via Stripe), scheduled tasks, and AI features such as image generation, speech synthesis, content analysis, and web search/research. Besides taking care of the back-end database and other technical details, Adaptive apps can work together. For instance, a user can build a file-hosting app and the next app can access those files. Founder Dennis Xu likens this to an “operating system” rather than a single web app. He says the difference between more established products and his startup is that the others were originally geared toward making programming easier for programmers. “We’re building for the everyday person who is interested in creating things to make their own lives better.”
OpenAI is looking to acquire AI coding startups for its next growth areas amid pricing pressure on access to foundational models and outperformance of competitors’ models on coding benchmarks
Anysphere, maker of AI coding assistant Cursor, is growing so quickly that it’s not in the market to be sold, even to OpenAI, a source close to the company tells TechCrunch. It’s been a hot target. Cursor is one of the most popular AI-powered coding tools, and its revenue has been growing astronomically — doubling on average every two months, according to another source. Anysphere’s current average annual recurring revenue is about $300 million, according to the two sources. The company previously walked away from early acquisition discussions with OpenAI, after the ChatGPT maker approached Cursor, the two sources close to the company confirmed, and CNBC previously reported. Anysphere has also received other acquisition offers that the company didn’t consider, according to one of these sources. Cursor turned down the offers because the startup wants to stay independent, said the two people close to the company. Instead, Anysphere has been in talks to raise capital at about a $10 billion valuation, Bloomberg reported last month. Although it didn’t nab Anysphere, OpenAI didn’t give up on buying an established AI coding tool startup. OpenAI talked with more than 20 others, CNBC reported. And then it got serious over the next-fastest-growing AI coding startup, Windsurf, with a $3 billion acquisition offer, Bloomberg reported last week. While Windsurf is a comparatively smaller company, its ARR is about $100 million, up from $40 million in ARR in February, according to a source. Windsurf has been gaining popularity with the developer community, too, and its coding product is designed to work with legacy enterprise systems. Windsurf did not respond to TechCrunch’s request for comment. OpenAI declined to comment on its acquisition talks. OpenAI is likely shopping because it’s looking for its next growth areas as competitors such as Google’s Gemini and China’s DeepSeek put pricing pressure on access to foundational models. 
Moreover, Anthropic and Google have recently released AI models that outperform OpenAI’s models on coding benchmarks, increasingly making them a preferred choice for developers. While OpenAI could build its own AI coding assistant, buying a product that is already popular with developers means the ChatGPT-maker wouldn’t have to start from scratch to build this business. VCs who invest in developer tool startups are certainly watching. Speculating about OpenAI’s strategy, Chris Farmer, partner and CEO at SignalFire, told TechCrunch of the company, “They’ll be acquisitive at the app layer. It’s existential for them.”