Modern phishing attacks exploit trust. Our current security posture and tools aren’t built for that. Most phishing defenses rely on identifying suspicious patterns, such as malformed URLs, unusual IP addresses and inconsistent metadata. Deepfake-driven phishing skips all of that. Security awareness training is falling behind, too. Even newer solutions such as deepfake detection AI are only partially effective.

What’s needed is a shift toward contextual and behavioral baselining. Security systems must learn what normal communication patterns, linguistic fingerprints, and working hours look like for every user and flag deviations not just in metadata, but also in tone, semantics and emotional affect. LLMs can be trained on internal communication logs to detect when an incoming message doesn’t quite match a sender’s established patterns. Static multifactor authentication must also evolve into a continuous process that encompasses biometrics, device location, behavioral rhythm and other factors that add friction to the impersonation process.

Prevention and response strategies should proceed along several fronts. Adversarial testing — a technique for evaluating the robustness of AI models by intentionally trying to fool them with specially crafted inputs — needs to go mainstream. Red teams must start incorporating AI-driven phishing simulations into their playbooks. Security teams should build synthetic personas internally, testing how well their defenses hold up when bombarded by believable but fake executives. Think of it as chaos engineering for trust.

Vendors must embed resilience into their tools. Collaboration platforms such as Zoom, Slack and Teams need native verification protocols, not just third-party integrations. Watermarking AI-generated content is one approach, though not foolproof. Real-time provenance verification — or tracking when, how, and by whom content was created — is a better long-term approach.

Policies need more teeth. Regulatory bodies should require disclosure of synthetic media in corporate communications. Financial institutions should flag anomalous behavior with more rigor. Governments need to standardize definitions and response protocols for synthetic impersonation threats, especially when they cross borders.
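As a rough sketch of what that kind of linguistic baselining could look like (a toy, not any vendor’s implementation), the snippet below scores an incoming message against the centroid embedding of a sender’s past messages; the model choice, sample messages and flagging threshold are illustrative assumptions:

```python
# Sketch: flag messages that deviate from a sender's established style.
# Assumes the sentence-transformers package; model and data are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def baseline(messages):
    """Centroid embedding of a sender's historical messages."""
    return model.encode(messages).mean(axis=0)

def deviation(base, incoming):
    """1 - cosine similarity to the baseline; higher means less like the sender."""
    v = model.encode([incoming])[0]
    return 1.0 - float(np.dot(base, v) / (np.linalg.norm(base) * np.linalg.norm(v)))

history = [
    "Hi team, draft deck attached. Comments by Friday, please.",
    "Quick reminder: standup moves to 9:30 tomorrow.",
]
suspect = "URGENT!!! Wire the vendor payment now and keep this between us."

score = deviation(baseline(history), suspect)
print(f"deviation score: {score:.2f}")  # route high scores to manual verification
```

A score like this would be one weak signal among many; tone drift alone should trigger secondary verification, not an outright block.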
The U.S. Secret Service is expanding its crypto crime prevention efforts, focusing on jurisdictions where criminals exploit a lack of oversight or residency-for-sale programs and offering free training workshops for law enforcement
The U.S. Secret Service is reportedly expanding its cryptocurrency crime prevention efforts, according to a report focusing on the agency’s Global Investigative Operations Center (GIOC), which specializes in digital financial crimes. In the last decade, sources told Bloomberg, the center has seized close to $400 million in digital assets. Most of those seized funds, the report added, are in a single cold-storage wallet, making the Secret Service — better known for guarding the president — one of the largest crypto custodians in the world. According to the report, the operation is overseen by Kali Smith, a lawyer who directs the Secret Service’s cryptocurrency strategy and whose team has held training workshops for law enforcement in more than 60 countries. The agency focuses on jurisdictions where criminals take advantage of a lack of oversight or residency-for-sale programs, and offers the training for free. “Sometimes after just a weeklong training, they can be like, ‘Wow, we didn’t even realize that this is occurring in our country,’” said Smith. Fraud connected to digital currencies is now behind the bulk of U.S. internet-crime losses: Americans reported $9.3 billion in crypto-related scams last year, a 66% increase over the year before, per FBI data. To recover stolen funds, the report added, the Secret Service has turned to industry players such as Coinbase and Tether, with one of the biggest recoveries involving $225 million in Tether’s USDT stablecoin tied to romance-investment scams.
Hackers are resorting to brand impersonation to steal information or install malware by delivering logos and names to victims through PDF attachments in emails and persuading them to call “adversary-controlled phone numbers”
Hackers are reportedly impersonating brands like PayPal and Apple to steal information and send malware, according to recent research by Cisco Talos on a surge of instances in which victims call the scammers on the phone, responding to a request regarding an urgent transaction. “Brand impersonation is a social engineering technique that exploits the popularity of well-known brands to persuade email recipients to disclose sensitive information,” the researchers wrote. In these phishing scams, “adversaries can deliver brand logos and names to victims using multiple types of payloads. One of the most common methods of delivering brand logos and names is through PDF payloads (or attachments).” Many of these emails persuade victims to call “adversary-controlled phone numbers,” employing another popular social engineering tactic: telephone-oriented attack delivery (TOAD), otherwise known as callback phishing. Victims are told to call a number in the PDF to settle an issue or confirm a transaction. Once they call, the attacker pretends to be a legitimate representative and tries to manipulate them into sharing confidential information or installing malware on their computer.
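A minimal triage sketch for this pattern follows: it pulls phone numbers out of a PDF attachment, one common tell of a callback lure. It assumes the pypdf package, and the filename and regex are illustrative:

```python
# Sketch: surface phone numbers embedded in a PDF attachment, a common
# callback-phishing (TOAD) tell. Assumes pypdf; filename is a placeholder.
import re
from pypdf import PdfReader

PHONE_RE = re.compile(r"(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}")

def embedded_phone_numbers(pdf_path):
    text = "".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    return PHONE_RE.findall(text)

hits = embedded_phone_numbers("urgent-invoice.pdf")
if hits:
    print("possible callback-phishing lure; numbers found:", hits)
```

A phone number in an unsolicited “invoice” PDF is not proof of fraud on its own, but combined with brand logos and urgency language it is a strong candidate for quarantine.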
OpenAI implements “information tenting” policies that limit staff access to sensitive algorithms, isolates proprietary technology in offline systems and maintains a “deny-by-default” policy to protect against corporate espionage
OpenAI has reportedly overhauled its security operations to protect against corporate espionage. The company accelerated an existing security clampdown after Chinese startup DeepSeek released a competing model in January, with OpenAI alleging that DeepSeek improperly copied its models using “distillation” techniques. The beefed-up security includes “information tenting” policies that limit staff access to sensitive algorithms and new products. For example, during development of OpenAI’s o1 model, only verified team members who had been read into the project could discuss it in shared office spaces. OpenAI now isolates proprietary technology in offline computer systems, implements biometric access controls for office areas (it scans employees’ fingerprints), and maintains a “deny-by-default” internet policy requiring explicit approval for external connections. The company has increased physical security at data centers and expanded its cybersecurity personnel.
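To illustrate the deny-by-default idea in miniature (a toy model, not OpenAI’s actual controls), the sketch below blocks every outbound destination that has not been explicitly approved:

```python
# Toy deny-by-default egress check: anything not explicitly approved is blocked.
APPROVED_DESTINATIONS = {"pypi.org", "github.com"}  # explicit allowlist

def egress_allowed(host):
    """Deny by default; only allowlisted hosts may connect out."""
    return host in APPROVED_DESTINATIONS

for host in ("github.com", "paste-site.example"):
    print(host, "->", "allow" if egress_allowed(host) else "deny")
```

The inversion is the point: instead of enumerating what is forbidden, the policy enumerates what is permitted, so anything unanticipated fails closed.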
NiCE’s report reveals that scams remain the method of choice, seen in 57% of attempted fraud transactions, and that 67% of all fraud is linked to just 7% of payments made to newly added payees; ATO fraud shows no sign of disappearing
According to the 2025 NiCE Actimize Fraud Insights Report, from 2023 to 2024 fraudsters’ focus shifted back slightly toward account takeover (ATO) fraud from scams in terms of the overall value of attempts. Scams are still the method of choice across 57% of attempted fraud transactions; however, ATO fraud is showing no sign of disappearing. From a volume perspective, there was a slight shift toward scams, which reached 52% in 2024 versus a 50/50 split in 2023. In the U.S., the top fraud challenges vary significantly depending on whether the focus is on value (dollar amount of the transactions) or volume (number of transactions), the report says. The most notable development since last year’s report concerns international wires. In 2024, the total value of international wire transactions declined 6% year over year, but the value of attempted fraud for international wires surged 40%. Additionally, Zelle transactions saw a 26% increase in value, accompanied by a 34% rise in attempted fraud. The report’s data also revealed that 67% of all fraud is linked to just 7% of payments made to newly added payees, highlighting how fraudsters exploit the moment a new recipient is introduced into the payment flow. The report also highlights that international wire fraud attempts are becoming more targeted and sophisticated, often involving social engineering tactics and mule accounts across borders. The report’s statistics also show that fraudsters are strategically targeting different payment types based on their characteristics: high-value fraud through checks and wires, and high-frequency, lower-value fraud through faster digital channels like Zelle.
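A quick back-of-envelope calculation makes that concentration figure concrete: if 67% of fraud rides on just 7% of payments, fraud is nearly ten times overrepresented among new-payee transactions.

```python
# Back-of-envelope: how overrepresented is fraud among new-payee payments?
# Figures are from the NiCE report as quoted above.
fraud_share = 0.67    # share of all fraud tied to newly added payees
payment_share = 0.07  # share of all payments going to newly added payees

print(f"lift: {fraud_share / payment_share:.1f}x")  # ~9.6x overrepresentation
```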
Object First’s plug-and-play solution addresses the ransomware threat for Veeam architectures by enforcing immutability policies directly at the storage layer via an open API, ensuring backups remain tamper-proof thanks to zero access to root
Object First Inc. addresses the ransomware threat with its out-of-the-box Immutability solution, delivering plug-and-play, ransomware-proof storage with zero setup hassle, according to Sterling Wilson, Field Chief Technology Officer at Object First. “Ootbi stands for out-of-the-box Immutability, and that is what you get when you buy an Object First secure appliance,” Wilson said. “Threat actors that come in that steal domain admin accounts, that steal the highest level of credentials, go directly for the backup software. We wanted to prevent that. We do that with zero access to root. We provide the security that is third-party tested that the Veeam users today need.” Object First is pioneering ransomware-proof storage for Veeam architectures by combining robust security, simplicity, high performance and broad channel reach. Those capabilities are delivered through purpose-built solutions such as Ootbi, according to Wilson. The Smart Object Storage API is pivotal in ensuring seamless integration between Object First and Veeam solutions, especially for ransomware-proof data protection and backup management. By enforcing immutability policies directly at the storage layer, SOSAPI strengthens data resilience and ensures backups remain tamper-proof, according to Wilson. “We have direct integration using the best [application programming interfaces], and even an API that we’ve created together called the SOSAPI, which is the Smart Object Storage API. We leverage it in its full amount, meaning it is an open API that any of the storage vendors can use. We use it intelligently, not to only move the data efficiently, but to also place the data across a clustered environment as data grows.”
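SOSAPI itself is not documented here, but the general pattern of enforcing immutability at the storage layer shows up in the S3 Object Lock API that S3-compatible backup targets expose. Below is a minimal sketch, assuming boto3 and a bucket created with Object Lock enabled; the endpoint, bucket and key names are placeholders:

```python
# Sketch: write a backup object that cannot be altered or deleted until the
# retention date passes, even by an administrator. Assumes the bucket was
# created with Object Lock enabled; names and endpoint are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3", endpoint_url="https://backup-appliance.example")

with open("restore-point.vbk", "rb") as backup:
    s3.put_object(
        Bucket="veeam-backups",
        Key="job-042/restore-point.vbk",
        Body=backup,
        ObjectLockMode="COMPLIANCE",  # no shortening or removal, no root override
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )
```

Because the lock is enforced by the storage layer rather than the backup software, stolen admin credentials for the backup console cannot shorten or remove the retention window.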
KnowBe4’s Just-in-Time training analyzes the existing security stack and delivers real-time, context-sensitive “nudges” based on users’ current actions, mitigating risky behavior before it escalates by leveraging behavioral science, AI-driven analytics and automation
AI-driven cybersecurity empowers organizations with proactive defenses, accelerated response times and more robust protection. One breakthrough in this space is Just-in-Time AI training, a transformative method that enhances cybersecurity awareness. By delivering real-time, context-sensitive “nudges” based on users’ current actions, KnowBe4 Inc. uses this approach to mitigate risky behavior before it escalates, according to Javvad Malik, lead security awareness advocate at KnowBe4. “The Just-in-Time training or the nudges is where AI can integrate with your existing security stack,” Malik said. “You have firewalls, you have network monitoring controls, you have some [endpoint detection and response], you have some gateway controls, you have a lot of visibility into what people are doing. What AI can do is pull all of that out and analyze it and say, ‘Okay, this user’s now plugged in a USB drive. It’s not a corporate-approved one.’”

AI-driven cybersecurity significantly enhances awareness training and user behavior, supporting stronger risk mitigation through real-time analytics, personalization and automation. Through its comprehensive training platform, KnowBe4 combines behavioral science, AI-driven analytics and interactive training tools to transform employees from potential security liabilities into proactive defenders, strengthening an organization’s human layer of defense against cyber threats, according to Malik.
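A minimal sketch of how such a nudge rule might look, assuming a normalized event feed from the existing security stack; the event schema, approved-device list and message wording are illustrative:

```python
# Toy just-in-time nudge rule over a normalized security-event stream.
# The event fields and approved-device IDs are illustrative assumptions.
APPROVED_USB_IDS = {"0951:1666"}  # vendor:product IDs of corporate-issued drives

def nudge_for(event):
    """Return a real-time coaching message for risky behavior, or None."""
    if event.get("type") == "usb_insert" and event.get("device_id") not in APPROVED_USB_IDS:
        return ("Heads up: that USB drive isn't corporate-approved. "
                "Unapproved media is a common malware path; please use a sanctioned share instead.")
    return None

print(nudge_for({"type": "usb_insert", "device_id": "abcd:1234", "user": "jdoe"}))
```

The value of the nudge is its timing: the coaching arrives at the moment of the risky action rather than in an annual training module.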
Congruity360 InfoGov generates a higher-quality pool of AI-ready data and reduces the amount of AI compute and storage required by identifying redundant and outdated information through metadata, eliminating 60% to 70% of the data
Congruity360 InfoGov focuses on helping organizations protect and manage unstructured data through AI-driven cyber resilience tools. Congruity360 helps organizations take the bites out of bytes through a process that leverages the promise and capabilities of data classification. By identifying redundant, outdated and sensitive information, the company helps clients understand the data they have and act on it. CEO Mark Ward said, “In order to get smart data, you have to basically limit the amount of garbage that is potentially available for your AI outcomes. We do that by identifying through metadata what information is redundant. It’s copies upon copies or, as you and I both know in the storage world, snapshots across snapshots.” That process yields a higher quality pool of data that is then fed into AI workloads. This solution results in better outcomes and lower costs, according to Ward. “By eliminating anywhere from 60% to 70% of the data, by eliminating rot, we’re able to reduce the amount of AI compute and AI storage required on the backend. With the cost being the cost, that’s a big, big outcome.” With the rise of AI agents, there is also a risk that autonomous bots will act based on erroneous data. Congruity360 has sought to minimize this issue through a solution called CDM Hub that empowers individual data owners. “The CDM Hub was actually developed from one of our large European customers who was managing their [General Data Protection Regulation] exposure,” Ward explained. “It not only gives the end-user owner[ship] of the data, but it’s actually a hierarchical interface so that the management organization … has ultimate say on what data is being used. This hierarchical approach to making sure that human intervention at the appropriate levels is applied to what the machine learning engine produces.”
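As a small illustration of the metadata-first approach (not Congruity360’s engine), the sketch below uses file size, a cheap metadata attribute, to shortlist candidates and only hashes files that could plausibly be copies:

```python
# Sketch: find byte-identical copies by grouping on size (metadata) first,
# then confirming with a content hash. Reads whole files for brevity;
# a production tool would hash in chunks and persist an index.
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    by_size = defaultdict(list)
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            by_size[os.path.getsize(path)].append(path)

    groups = []
    for paths in by_size.values():
        if len(paths) < 2:
            continue  # a unique size cannot be a byte-identical copy
        by_hash = defaultdict(list)
        for path in paths:
            with open(path, "rb") as f:
                by_hash[hashlib.sha256(f.read()).hexdigest()].append(path)
        groups += [g for g in by_hash.values() if len(g) > 1]
    return groups

print(find_duplicates("/data/share"))  # path is a placeholder
```

Filtering on metadata before touching file contents is what makes this kind of classification tractable at petabyte scale.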
Exabeam’s multi-agent security AI generates boardroom-ready summaries that reframe technical metrics into business outcomes by integrating into the complete threat detection, investigation and response workflow
Exabeam, a global leader in intelligence and automation for security operations, has expanded its integrated multi-agent AI system, Exabeam Nova, to provide real-time strategic planning and boardroom communication tools. The Exabeam Nova Advisor Agent is the industry’s first AI capability designed to turn security data into a strategy that CISOs can defend in the boardroom. The system includes six agents designed to automate decisions, streamline investigations and deliver continuous benchmarking of program effectiveness, with clear, prioritized recommendations to drive improvement. Embedded into the New-Scale Security Operations Platform, Exabeam Nova is deeply integrated into the complete threat detection, investigation and response (TDIR) workflow. Within 90 days of launch, users reported five-times faster investigations with improved accuracy. Exabeam Nova’s seamless AI agents work together, allowing users to work smarter and prove the business impact of their security programs.

Exabeam Nova is now the only agentic AI that empowers security leaders to:
- Build Strategic Plans: Automatically generate data-backed roadmaps using daily posture assessments, MITRE ATT&CK coverage, and organizational security data.
- Communicate with the Executive Team and Board: Generate boardroom-ready summaries that reframe technical metrics into business outcomes, enabling leadership to understand progress, support investment decisions, and evaluate ROI.
- Identify and Prioritize Gaps: Uncover issues like missing log sources, misconfigurations, and ineffective threat detection content that weakens security posture.
- Run What-If Analysis: Simulate adjustments or additions to security tooling and detection capabilities to evaluate how proposed actions close gaps and improve security posture.
- Track and Improve Maturity: Benchmark security posture daily, monitor measurable improvements, and align security operations with long-term organizational goals.
Hackers are exploiting Vercel’s gen AI tool v0.dev, which lets them quickly reproduce the design and branding of authentic login sites such as Okta and Microsoft 365, often hosting visual assets such as company logos, to create sophisticated phishing websites at scale
Cybercriminals are using generative artificial intelligence (GenAI), specifically the v0.dev tool from Vercel, to create sophisticated phishing websites quickly and at scale. The tool allows attackers to quickly reproduce the design and branding of authentic login sites, often hosting visual assets such as company logos on Vercel’s infrastructure. The research, from Okta Threat Intelligence, revealed that attackers have used the Vercel platform to host phishing sites imitating not only Okta customers but also brands like Microsoft 365 and various cryptocurrency companies. Vercel responded by restricting access to suspect sites and working with Okta to improve reporting processes for additional phishing-related infrastructure. The report also noted the existence of several public GitHub repositories that replicate the v0.dev application, along with DIY guides enabling others to build their own generative phishing tools. Okta Threat Intelligence highlighted that traditional tells, such as poor-quality or imperfect design, are no longer reliable indicators of a phishing site. To address these risks, it recommends enforcing phishing-resistant authentication policies, prioritizing the deactivation of less secure factors, restricting access to trusted devices, requiring secondary authentication when anomalous user behavior is detected, and updating security awareness training to account for AI-driven threats.
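As a toy model of those recommendations (not Okta’s policy engine), the sketch below collapses them into a single access decision; the factor names and anomaly threshold are illustrative:

```python
# Toy access policy encoding the recommendations above: phishing-resistant
# factors only, trusted devices only, step-up on anomalous behavior.
PHISHING_RESISTANT = {"webauthn", "passkey", "smartcard"}

def evaluate(factor, device_trusted, anomaly_score):
    if factor not in PHISHING_RESISTANT:
        return "deny"      # deactivate less secure factors (SMS, OTP, push)
    if not device_trusted:
        return "deny"      # restrict access to managed/trusted devices
    if anomaly_score > 0.8:
        return "step_up"   # require secondary authentication
    return "allow"

print(evaluate("webauthn", True, 0.2))  # allow
print(evaluate("sms", True, 0.1))       # deny: not phishing-resistant
```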
