OpenAI plans to route sensitive conversations to reasoning models like GPT-5 and roll out parental controls within the next month — part of an ongoing response to recent safety incidents involving ChatGPT failing to detect mental distress. Experts attribute these issues to fundamental design elements: the models’ tendency to validate user statements and their next-word prediction algorithms, which cause chatbots to follow conversational threads rather than redirect potentially harmful discussions. OpenAI thinks that at least one solution to conversations that go off the rails could be to automatically reroute sensitive chats to “reasoning” models. “We recently introduced a real-time router that can choose between efficient chat models and reasoning models based on the conversation context. We’ll soon begin to route some sensitive conversations—like when our system detects signs of acute distress—to a reasoning model, like GPT‑5-thinking, so it can provide more helpful and beneficial responses, regardless of which model a person first selected.” OpenAI says its GPT-5 thinking and o3 models are built to spend more time reasoning through context before answering, which makes them “more resistant to adversarial prompts.” The AI firm also said it would roll out parental controls in the next month, allowing parents to link their account with their teen’s account through an email invitation. Soon, parents will be able to control how ChatGPT responds to their child with “age-appropriate model behavior rules, which are on by default.” Parents will also be able to disable features like memory and chat history. Perhaps the most important of these controls: parents will be able to receive notifications when the system detects their teenager is in a moment of “acute distress.”
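What such context-based routing could look like mechanically, as a minimal sketch: the distress heuristic, the 0.4 threshold and the model identifiers below are invented for illustration and are not OpenAI’s actual router logic.

```python
# Illustrative sketch of a context-based model router, not OpenAI's
# implementation. The distress detector is a naive keyword score; a real
# router would use a trained classifier over the full conversation.
DISTRESS_MARKERS = {"hopeless", "can't go on", "hurt myself", "no way out"}

def distress_score(messages: list[str]) -> float:
    """Fraction of the last five messages containing a distress marker."""
    recent = messages[-5:]
    hits = sum(any(m in msg.lower() for m in DISTRESS_MARKERS) for msg in recent)
    return hits / max(len(recent), 1)

def route_model(messages: list[str], requested: str = "gpt-5-chat") -> str:
    """Override the requested model with a reasoning model on signs of acute distress."""
    if distress_score(messages) >= 0.4:  # threshold is an assumption
        return "gpt-5-thinking"          # hypothetical model identifier
    return requested

print(route_model(["I feel hopeless and can't go on"]))  # -> gpt-5-thinking
```

The key property of the pattern is that the routing decision overrides the user’s model selection, mirroring OpenAI’s statement that rerouting happens “regardless of which model a person first selected.”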
Google denies mass-breach rumors, saying Gmail blocks over 99.9% of phishing and malware, in the aftermath of the Salesforce-linked ShinyHunters exposure of business contact data
Google issued a brief statement on its blog, promoting the security of Gmail following erroneous online claims of mass warnings sent to end users. The post keeps things both brief and vague. Google says it’s acting to “reassure” its customers that the platform is secure, and that, contrary to reports, no “broad warning to all Gmail users” regarding a breach was ever sent. The blog post cites Google’s ability to block “more than 99.9% of phishing and malware attempts,” while also noting that its own recommended practices include the use of passkeys over traditional passwords. The post seems to stem from growing noise on both social media and in various press outlets over whether a group called ShinyHunters had gained access to general data on Gmail users at large. While some stories surrounding phishing attempts have made their way to Reddit — and, at the very least, appear to be accurate — it appears as though most of the noise comes from reporters connecting a recent Salesforce breach with this blog post promoting passkey adoption.
Crittora’s new Qripton Verify tool delivers portal-free, passwordless, cryptographic identity checks for real estate documents, plus audit-ready access logs and visibility into email authentication settings to combat email-based fraud
Crittora has introduced Qripton Verify — a platform designed to protect title companies, closing attorneys and real estate professionals from fraud attempts that have become increasingly common in property transactions. “Real estate professionals are under pressure to protect clients without slowing deals,” said Erik Rowan, co-founder of Crittora. “Qripton Verify secures wire instructions and critical documents using cryptographic identity checks — no portals, passwords, added overhead, or IT setup required.” Platform features include:

- Identity-locked encryption that allows only the intended recipient to access documents
- Audit-ready logs that record all access, downloads and verifications
- Zero-integration deployment requiring no software installation
- DNS spoofing insight that identifies gaps in email authentication settings such as SPF, DKIM and DMARC

Qripton Verify is intended to secure the delivery of wire instructions, settlement statements, powers of attorney and payoffs. The company said the system is designed specifically for title and escrow workflows, and it uses a per-file pricing model that complies with the Real Estate Settlement Procedures Act.
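The email-authentication gap analysis described above can be approximated with plain DNS lookups. A minimal sketch using the dnspython library; the domain is a placeholder, and DKIM is omitted because verifying it requires knowing each sender’s selector:

```python
# Minimal sketch of an SPF/DMARC gap check via DNS TXT lookups (dnspython).
# This approximates the kind of email-authentication visibility described
# above; it is not Crittora's code.
import dns.resolver

def txt_records(name: str) -> list[str]:
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

def audit_domain(domain: str) -> dict[str, bool]:
    spf = any(r.startswith("v=spf1") for r in txt_records(domain))
    dmarc = any(r.startswith("v=DMARC1") for r in txt_records(f"_dmarc.{domain}"))
    return {"spf": spf, "dmarc": dmarc}

print(audit_domain("example.com"))  # domain is a placeholder
```

A domain missing either record is easier to spoof, which is exactly the gap a wire-fraud email exploits.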
SlashNext’s SEER uses cloud-hosted virtual browser environments to crawl and interact with suspicious URLs and pages at run time
Automated data security company Varonis Systems plans to acquire phishing protection company SlashNext Inc. for a reported $150 million. SlashNext’s technology offers what it calls SEER, for Session Emulation and Environment Reconnaissance. It uses cloud-hosted virtual browser environments to crawl and interact with suspicious URLs and pages at run time. By executing pages inside safe, instrumented environments, the platform can observe multistage phishing flows, hidden redirects, credential-harvesting forms and other deceptive behaviors that static scanning often misses. The company also delivers protection via multiple enforcement and telemetry points, including real-time blocklists, Domain Name System Response Policy Zones, application programming interface and gateway integrations, and threat intelligence feeds that flow into Security Orchestration, Automation and Response and threat intelligence platforms. The multichannel delivery allows organizations to automatically stop newly created phishing sites and feed contextual evidence, such as screenshots, indicators and verdicts, into incident response workflows so triage can be faster and more accurate. SlashNext also emphasizes zero-hour detection and the ability to spot sophisticated social engineering, including targeted business email compromise, QR-code and multistep scams, by analyzing visual layout, message tone and behavioral indicators rather than relying solely on fuzzy string matches. With the acquisition, Varonis plans to extend its data-centric threat detection capabilities with SlashNext’s phishing and social engineering detection solutions.
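A rough approximation of that run-time inspection can be sketched with Playwright; the heuristics below (redirect logging, counting password fields, taking a screenshot) are illustrative stand-ins, not SEER’s actual logic:

```python
# Rough sketch of run-time URL inspection inside an instrumented headless
# browser, in the spirit of SEER (this is not SlashNext's code).
# Requires: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

def inspect_url(url: str) -> dict:
    evidence = {"redirects": [], "credential_forms": 0, "screenshot": "page.png"}
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        # Record every redirect hop the page triggers at run time.
        page.on("response", lambda r: evidence["redirects"].append(r.url)
                if 300 <= r.status < 400 else None)
        page.goto(url, wait_until="networkidle")
        # Password fields on an unfamiliar domain are a crude
        # credential-harvesting signal.
        evidence["credential_forms"] = len(page.query_selector_all("input[type='password']"))
        page.screenshot(path=evidence["screenshot"], full_page=True)
        browser.close()
    return evidence

print(inspect_url("https://example.com"))  # URL is a placeholder
```

The screenshot, redirect chain and verdict are the kind of contextual evidence the article says gets fed into incident response workflows.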
Microsoft-backed Sola Security’s no-code AI cybersecurity platform enables teams to build custom threat detection apps with graph research capabilities
Sola Security offers an artificial intelligence platform that enables cybersecurity teams to build threat detection apps using prompts. To use the platform, workers must specify the systems they wish to monitor and the kind of vulnerabilities that they’re looking to track. Sola Security then automatically generates the software components necessary to perform the task. The platform first connects to the systems that it’s instructed to monitor and collects cybersecurity data. Companies can create apps that detect vulnerable assets such as Amazon S3 buckets without encryption enabled. According to Sola Security, its platform also spots issues that affect employee accounts. Customers can build apps that identify accounts with access to more data or systems than strictly necessary. Sola Security says that user-created apps can visualize their findings in graphs to ease analysis. According to the company, its platform regularly checks for new vulnerabilities and generates an alert when one is found. A chatbot embedded in custom apps allows customers to analyze the data with natural language questions. The platform uses a feature dubbed graph research to answer some user questions. According to the company, the technology helps cybersecurity teams determine whether an issue in one system might affect other assets. For example, it could point out if a vulnerability in a tool that a company uses to manage administrator accounts might expose its cloud environment to cyberattacks. Users can modify the dashboards that Sola generates using a no-code customization tool. An administrator might create one version of a database monitoring dashboard for the cybersecurity team and another for the business workers who use the database. For customers that don’t require extensive customization, Sola offers prepackaged cybersecurity apps built by its engineers.
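As a concrete example of the S3 check described above, a generated app might run something like the following boto3 sketch (the detection logic is an assumption, not Sola Security’s code):

```python
# Minimal sketch of a check a generated app might run: flag S3 buckets
# without a default encryption rule (illustrative, not Sola's code).
# Requires AWS credentials with s3:ListAllMyBuckets and s3:GetEncryptionConfiguration.
import boto3
from botocore.exceptions import ClientError

def unencrypted_buckets() -> list[str]:
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as e:
            # This error code means no default encryption rule is configured.
            if e.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                flagged.append(name)
    return flagged

print(unencrypted_buckets())
```

Since AWS began encrypting new buckets by default in early 2023, a check like this mostly surfaces legacy buckets, which is precisely the vulnerable-asset case the platform targets.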
Palo Alto Networks launches Prisma SASE 4.0 with AI agent oversight, browser threat protection and security for 5,000+ AI apps
Palo Alto Networks added more capabilities to its fast-growing Prisma SASE (Secure Access Service Edge) platform by leveraging AI to create what the company calls “a blueprint for the AI-ready enterprise.” The service delivers protection against AI-powered threats, data security that adapts to how information flows, and unified operations capable of intelligent scaling. These new features break the mold of “legacy SASE,” which focused on replacing traditional wide-area network technology with cloud-first offerings. All the new features in Prisma SASE 4.0 are geared toward enabling companies to protect against AI-driven threats and to safeguard data wherever it resides or moves. The innovations of Prisma SASE 4.0 focus on three key areas; the first is deploying SaaS Agent Security to “safeguard the AI frontier”: Prisma SASE 4.0 provides direct oversight of AI agents. As employees connect tools like Microsoft Copilot to sensitive corporate data, these agents can act autonomously, creating new pathways for data leaks through unvetted prompts or risky plugins. The new SaaS Agent Security gives security teams the visibility they need to see which agents are in use, control data access and block risky activities. AI-based innovation is important, but it can’t come at the cost of putting the business at risk.
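A hedged sketch of what that agent-oversight decision could look like; the policy schema, agent names and resource paths are invented for illustration, not Prisma SASE’s model:

```python
# Illustrative sketch of an AI-agent oversight check: decide whether an
# agent's action on corporate data is allowed or blocked. The policy
# schema and names are invented; this is not Prisma SASE's implementation.
from dataclasses import dataclass

@dataclass
class AgentAction:
    agent: str       # e.g. "microsoft-copilot"
    resource: str    # e.g. "finance/payroll.xlsx"
    operation: str   # "read" | "write" | "share"

SANCTIONED_AGENTS = {"microsoft-copilot"}
SENSITIVE_PREFIXES = ("finance/", "hr/")

def evaluate(action: AgentAction) -> str:
    if action.agent not in SANCTIONED_AGENTS:
        return "block"  # unvetted agent: no data access at all
    if action.resource.startswith(SENSITIVE_PREFIXES) and action.operation != "read":
        return "block"  # sensitive paths are read-only for agents
    return "allow"

print(evaluate(AgentAction("microsoft-copilot", "finance/payroll.xlsx", "share")))  # block
```

The essential property is visibility plus a default-deny posture for agents that act autonomously on sensitive data.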
Apiiro report finds AI-assisted coding increases developer speed fourfold but raises security risks tenfold amid rising architectural flaws and secret exposures
A new report from application security posture management company Apiiro Ltd. details a tenfold increase in security findings among Copilot users, peaking in mid-2025. Two primary factors were found to be driving the surge: open-source dependencies and secure coding issues. AI-assisted developers were found to be more prone to design-level flaws than conventional developers, who were more likely to introduce logic mistakes. The architectural weaknesses are more costly to remediate and harder to catch later on, creating a structural challenge for organizations trying to balance speed with security. Secrets exposure was also found to diverge between the two groups: developers working with Copilot leaked higher volumes of cloud credentials, while non-Copilot users were more likely to expose generic application programming interface tokens. The key takeaway is that AI assistance may inadvertently amplify risks related to cloud identity and credential management. The report also finds that developers using AI tools generate three to four times more commits on average, but the contributions were consolidated into fewer, larger pull requests, or proposed code changes. The increased throughput was found to accelerate delivery but also add complexity for application security teams, since traditional review processes are insufficient to keep up with the scale and intricacy of AI-assisted code. The report also details how average pull request sizes and commit volumes have sharply increased as AI coding assistance has been adopted. AI-assisted developers were found to produce more code but open fewer pull requests. Larger, more complex code submissions are noted as elevating the risk of shallow reviews and missed vulnerabilities. Apiiro’s researchers warn that though AI code assistants can drive dramatic improvements in developer productivity, they also introduce new categories of risk that organizations must address.
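The cloud-credential divergence is the kind of pattern a pre-commit secret scan can catch. A minimal sketch; the AWS access-key pattern is the documented AKIA format, while the generic token pattern is an illustrative heuristic:

```python
# Minimal sketch of a pre-commit scan for the kinds of secrets the report
# says AI-assisted developers leak: cloud credentials and API tokens.
import re
import sys

PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_token": re.compile(
        r"\b(?:api|token|secret)[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]", re.I),
}

def scan(path: str) -> list[tuple[str, int, str]]:
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    findings.append((path, lineno, name))
    return findings

for finding in scan(sys.argv[1]):  # usage: python scan.py some_file.py
    print(finding)
```

Scanners like this are cheap to run on every commit, which matters when pull requests grow too large for careful human review.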
Digital twins turn defense into rehearsal: enterprises can stage zero‑days, ransomware and insider threats in a live‑fidelity mirror to preempt real‑world impact
Digital twins, virtual replicas that learn and evolve in real time, are giving security teams a way to see threats before they strike. For the first time, organizations can stage tomorrow’s attacks today, turning defense from a reaction into a rehearsal. Instead of waiting for a zero-day exploit to spread through production systems, organizations can use their twin to anticipate how an attack might unfold and block it before it becomes a problem. In short, digital twins give defenders foresight in a domain long defined by hindsight. Analysts describe this new approach as a “cyber sandbox,” but one operating at the same scale and fidelity as the production environment. Inside this mirrored environment, teams can stage ransomware attacks, phishing waves and insider threats. Before rolling out a new SaaS integration or shifting workloads into a multicloud environment, teams can rehearse the move inside their twin. If misconfigurations, privilege escalations or API blind spots emerge, they are patched in the model before they exist in production. This approach transforms change management from a gamble into a calculated maneuver, tightening resilience without slowing innovation. Startups are combining AI-driven attack generation with digital twins, producing probability maps that indicate the likelihood of future threats succeeding. In effect, these are predictive laboratories where attackers’ moves can be anticipated, not just countered.
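Those probability maps can be read as Monte Carlo attack simulation over the twin’s asset graph. A toy sketch, with topology and per-edge compromise probabilities invented for illustration:

```python
# Toy sketch of a "probability map": Monte Carlo attack simulation over a
# digital twin's asset graph. Topology and probabilities are invented.
import random
from collections import Counter

# asset -> [(neighbor, probability the attack hops across this edge)]
TWIN_GRAPH = {
    "laptop":      [("file-server", 0.6), ("admin-tool", 0.2)],
    "file-server": [("database", 0.4)],
    "admin-tool":  [("cloud-env", 0.7)],
    "database":    [],
    "cloud-env":   [],
}

def simulate_breach(entry: str) -> set[str]:
    """One simulated attack: spread from the entry point edge by edge."""
    compromised, frontier = {entry}, [entry]
    while frontier:
        node = frontier.pop()
        for neighbor, p in TWIN_GRAPH[node]:
            if neighbor not in compromised and random.random() < p:
                compromised.add(neighbor)
                frontier.append(neighbor)
    return compromised

def probability_map(entry: str, trials: int = 10_000) -> dict[str, float]:
    hits = Counter()
    for _ in range(trials):
        hits.update(simulate_breach(entry))
    return {asset: hits[asset] / trials for asset in TWIN_GRAPH}

print(probability_map("laptop"))  # e.g. cloud-env falls in ~14% of runs
```

Run against a high-fidelity twin instead of a toy graph, the same loop tells defenders which assets are most likely to fall and therefore where to patch first.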
They don’t break in — they log in: in the genAI era, 79% of attacks are malware-free as adversaries bypass legacy IAM with stolen credentials and MFA social engineering
CrowdStrike’s 2025 Threat Hunting report reveals that vishing attacks surged 442% in late 2024, with the first half of 2025 already more than doubling last year’s numbers. Adversaries are leveraging AI-driven social engineering and deepfake tools to bypass MFA and exploit credentials at scale. The report also found that 52% of all exploited vulnerabilities are related to initial access, most often through compromised identities, while the use of gen AI to create, impersonate and abuse identities is a driving force behind these trends. Machine identities now outnumber human users by 45:1 across the average enterprise, while attackers move laterally in as little as 51 seconds. Traditional identity and access management systems built on static rules and quarterly reviews can’t keep pace with threats moving at machine speed. Gartner predicts information security spending will reach $213 billion in 2025, even with growth revised down to 10.7%, and ongoing threat protection is expected to push spending to $323 billion by 2029. The research firm expects to see more organizations replace legacy rule-based systems with AI-powered platforms that learn, adapt and respond autonomously. IDC likewise predicts robust growth in identity security.
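As a quick sanity check on Gartner’s trajectory, $213 billion in 2025 growing to $323 billion in 2029 implies roughly 11% compound annual growth, consistent with the cited 10.7% figure:

```python
# Quick check: does $213B (2025) -> $323B (2029) match ~10.7% annual growth?
base, target, years = 213, 323, 4
cagr = (target / base) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # implied CAGR: 11.0%
```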
New‑hire “boss scams” surge: impostors pose as managers to new employees, push urgent gift‑card buys, then resell vouchers on the dark web at a discount
A wave of graduates entering the job market could present new targets for scammers. New hires often fall prey to “boss scams,” a variation of spear-phishing in which fraudsters pretend to be a supervisor seeking help in purchasing gift cards for employees or customers. The victim passes the voucher on to the scammer, who then sells it on the dark web at a discount. “A new boss is someone that we don’t question. We don’t want to be seen rocking the boat,” said Elisabeth Carter, criminologist at Kingston University and host of BBC Radio 4’s “Scam Secrets” podcast, adding that this is happening as the job market is more competitive than it has been in years. She argued that the boss scam taps into workers’ insecurities when starting a new job: they haven’t formed a trusted network, and they’re eager to please. Scammers are able to target these workers, the report said, using a type of social engineering scam that involves link analysis, or scraping data from social media platforms to learn the human relationships and connections inside an organization. In particular, they focus on job announcements on sites like LinkedIn, said Jason Hogg, executive chair of cybersecurity specialist Cypfer. “With the proliferation of large language models [fraudsters can] emulate human behavior, look up people’s profiles and connections and then behind the scenes be able to manipulate those connections and draw a social graph out,” Hogg said.
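The link analysis Hogg describes amounts to building a who-reports-to-whom graph from public profile data and ranking targets. A toy sketch of that graph from the defender’s point of view; the names, relationships and start dates are invented:

```python
# Toy sketch of the link analysis described above: an org graph built from
# public connection data, used to see which employees a "boss scam"
# impersonating a manager would prioritize. All data here is invented.
import networkx as nx

G = nx.DiGraph()  # edge: employee -> manager
G.add_edge("new_hire_ana", "manager_bo", started_days_ago=12)
G.add_edge("new_hire_carl", "manager_bo", started_days_ago=6)
G.add_edge("veteran_dee", "manager_bo", started_days_ago=900)

# Recent starters haven't built a trusted network yet, which is exactly
# the insecurity the scam exploits.
likely_targets = [
    employee
    for employee, manager, data in G.edges(data=True)
    if manager == "manager_bo" and data["started_days_ago"] < 30
]
print(likely_targets)  # ['new_hire_ana', 'new_hire_carl']
```

Seeing the organization the way a scammer’s social graph does is a cheap way to know which new hires need the “your boss will never text you for gift cards” briefing first.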