Modern phishing attacks exploit trust, and our current security posture and tools aren’t built for that. Most phishing defenses rely on identifying suspicious patterns, such as malformed URLs, unusual IP addresses and inconsistent metadata. Deepfake-driven phishing skips all of that. Security awareness training is falling behind, too, and even newer solutions such as deepfake detection AI are only partially effective.

What’s needed is a shift toward contextual and behavioral baselining. Security systems must learn what normal communication patterns, linguistic fingerprints and working hours look like for every user, and flag deviations not just in metadata but also in tone, semantics and emotional affect. LLMs can be trained on internal communication logs to detect when an incoming message doesn’t quite match a sender’s established patterns. Static multifactor authentication must also evolve into a continuous process that encompasses biometrics, device location, behavioral rhythm and other factors that add friction to the impersonation process.

Prevention and response strategies should proceed along several fronts. Adversarial testing — a technique for evaluating the robustness of AI models by intentionally trying to fool them with specially crafted inputs — needs to go mainstream. Red teams must start incorporating AI-driven phishing simulations into their playbooks. Security teams should build synthetic personas internally, testing how well their defenses hold up when bombarded by believable but fake executives. Think of it as chaos engineering for trust.

Vendors must embed resilience into their tools. Collaboration platforms such as Zoom, Slack and Teams need native verification protocols, not just third-party integrations. Watermarking AI-generated content is one approach, though not foolproof. Real-time provenance verification — tracking when, how and by whom content was created — is a better long-term approach.

Policies need more teeth. Regulatory bodies should require disclosure of synthetic media in corporate communications. Financial institutions should flag anomalous behavior with more rigor. Governments need to standardize definitions and response protocols for synthetic impersonation threats, especially when they cross borders.
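The behavioral-baselining idea above can be sketched in a few lines. This is a deliberately crude illustration, not a production detector: it fingerprints a sender's style as a word-length distribution and scores new messages by cosine distance. A real system would use an LLM over full communication logs; the sample messages and the 0.3 threshold are invented for the example.

```python
import math
from collections import Counter

def style_vector(text):
    # Crude "linguistic fingerprint": distribution of word lengths.
    return Counter(len(word) for word in text.lower().split())

def deviation(baseline, message):
    # Cosine distance between a sender's historical style and a new message.
    msg = style_vector(message)
    keys = set(baseline) | set(msg)
    dot = sum(baseline[k] * msg[k] for k in keys)
    norm = math.sqrt(sum(v * v for v in baseline.values())) * \
           math.sqrt(sum(v * v for v in msg.values()))
    return 1.0 - (dot / norm if norm else 0.0)

# Hypothetical history of a sender's past messages.
history = [
    "please review the attached quarterly numbers",
    "can we sync on the vendor contract tomorrow",
]
baseline = Counter()
for msg in history:
    baseline.update(style_vector(msg))

score = deviation(baseline, "URGENT!!! wire $40k NOW to this acct")
flag = score > 0.3  # threshold would be tuned per user in practice
```

Messages that match the sender's established style score near zero; the out-of-character demand above scores much higher and gets flagged.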
US phone carriers are rolling out unauthorized number port-out blocking and wireless account locking to combat SIM swap attacks
To combat SIM swap attacks, which rely on the impersonation and deception tactics known as social engineering, three major phone carriers in the United States — AT&T, T-Mobile and Verizon — have introduced security features that make it more difficult for malicious hackers to deceptively change a customer’s account, such as porting out their phone number. In July, AT&T introduced its free Wireless Account Lock security feature to help prevent SIM swaps. The feature allows AT&T customers to add extra account protection by toggling on a setting that prevents anyone from moving a SIM card or phone number to another device or account. The feature can be switched on via AT&T’s app or through its online account portal by anyone who manages the account, so make sure that account is protected with a unique password and multi-factor authentication. T-Mobile allows customers to prevent SIM swaps and block unauthorized number port outs for free through their T-Mobile online account; the primary account holder has to log in to change the setting, such as switching it on or off. Verizon has two security features called SIM Protection and Number Lock, which prevent SIM swaps and phone number transfers, respectively. Both features can be turned on via the Verizon app and through the online account portal by an account’s owner or manager. Verizon says that switching off the feature may result in a 15-minute delay before any transactions can be performed — another safeguard that gives the legitimate account holder time to reverse any unauthorized changes.
Android 16’s Advanced Protection features secure Chrome on mobile devices by enforcing HTTPS connections, disabling the optimizing JavaScript compilers inside V8 and isolating each site so a malicious website cannot access data or code from another
With Android 16, users can enable Advanced Protection to “activate Google’s strongest security for mobile devices.” There are three main Advanced Protection features in Chrome 137+ on Android 16, starting with “Always use secure connections” — that is, HTTPS — being enabled. Before connecting to an insecure (HTTP) site, Chrome asks for explicit permission before loading it. This setting protects users from attackers reading confidential data or injecting malicious content into otherwise innocuous webpages. The next feature disables the “higher-level optimizing Javascript compilers inside V8.” V8 is Chrome’s high-performance JavaScript and WebAssembly engine. The optimizing compilers in V8 make certain websites run faster; however, they have historically also been a source of Chrome bugs with known exploitation. Of all the patched V8 security bugs with known exploitation, disabling the optimizers would have mitigated roughly 50%. This prevents a large category of exploits, but at the expense of “causing performance issues for some websites.” Finally, Advanced Protection enables Site Isolation, wherein Chrome “isolates each website into its own rendering OS process” in memory. This isolation prevents a malicious website from accessing data or code from another website, even if it manages to exploit a vulnerability in Chrome’s renderer — a second bug, to escape the renderer sandbox, is required to access other sites.
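The “Always use secure connections” behavior can be modeled with a small sketch. This is a hypothetical illustration of the upgrade logic, not Chrome's implementation: HTTP URLs are rewritten to HTTPS unless the user has explicitly granted permission for plain HTTP on that host.

```python
from urllib.parse import urlparse, urlunparse

def upgrade_to_https(url, user_allowed_http=frozenset()):
    # Model of "Always use secure connections": rewrite http:// to https://
    # unless the user has explicitly permitted plain HTTP for this host.
    parts = urlparse(url)
    if parts.scheme == "http" and parts.hostname not in user_allowed_http:
        return urlunparse(parts._replace(scheme="https"))
    return url
```

HTTPS URLs pass through unchanged; HTTP URLs are upgraded unless the host was explicitly approved, mirroring the explicit-permission prompt described above.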
The U.S. Secret Service is expanding its crypto crime prevention efforts by focusing on jurisdictions where criminals exploit a lack of oversight or residency-for-sale programs and by offering free training workshops for law enforcement
The U.S. Secret Service is reportedly expanding its cryptocurrency crime prevention efforts, according to a report focusing on the agency’s Global Investigative Operations Center (GIOC), which specializes in digital financial crimes. In the last decade, sources told Bloomberg, the center has seized close to $400 million in digital assets. Most of those seized funds, the report added, are in a single cold-storage wallet, making the Secret Service — better known for guarding the president — one of the largest crypto custodians in the world. According to the report, the operation is overseen by Kali Smith, a lawyer who directs the Secret Service’s cryptocurrency strategy, and whose team has held training workshops for law enforcement in more than 60 countries. The agency focuses on jurisdictions where criminals take advantage of a lack of oversight or residency-for-sale programs, and offers the training for free. “Sometimes after just a weeklong training, they can be like, ‘Wow, we didn’t even realize that this is occurring in our country,’” said Smith. Fraud connected to digital currencies is now behind the bulk of U.S. internet-crime losses. Americans reported $9.3 billion in crypto-related scams last year, a 66% increase over the year before, per FBI data. To recover stolen funds, the report added, the Secret Service has turned to industry players such as Coinbase and Tether, with one of the biggest recoveries involving $225 million in Tether’s USDT stablecoin, tied to romance-investment scams.
Hackers are resorting to brand impersonation to steal information or install malware by delivering logos and names to victims through PDF attachments in emails and persuading them to call “adversary-controlled phone numbers”
Hackers are reportedly impersonating brands like PayPal and Apple to steal information and send malware, according to recent research by Cisco Talos on a surge of instances in which victims call the scammers on the phone, responding to a request regarding an urgent transaction. “Brand impersonation is a social engineering technique that exploits the popularity of well-known brands to persuade email recipients to disclose sensitive information,” the researchers wrote. In these phishing scams, “adversaries can deliver brand logos and names to victims using multiple types of payloads. One of the most common methods of delivering brand logos and names is through PDF payloads (or attachments).” Many of these emails persuade victims to call “adversary-controlled phone numbers,” employing another popular social engineering tactic: telephone-oriented attack delivery (TOAD), otherwise known as callback phishing. Victims are told to call a number in the PDF to settle an issue or confirm a transaction. Once they call, the attacker pretends to be a legitimate representative and tries to manipulate them into sharing confidential information or installing malware on their computer.
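A defender-side heuristic for this TOAD pattern might combine the three lure ingredients the researchers describe: a known brand name, a callback phone number, and urgency language appearing together in the extracted PDF text. The brand list, regexes and sample messages below are illustrative assumptions, not taken from Cisco Talos.

```python
import re

BRANDS = {"paypal", "apple"}  # brands commonly impersonated, per the research
PHONE_RE = re.compile(r"\+?1?[\s.-]?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")
URGENCY_RE = re.compile(r"\b(urgent|immediately|suspended|unauthorized)\b", re.I)

def looks_like_callback_phish(pdf_text):
    # Heuristic for callback-phishing lures: a brand name, a phone number
    # to call, and urgency language in the same attachment text.
    lowered = pdf_text.lower()
    has_brand = any(brand in lowered for brand in BRANDS)
    has_phone = bool(PHONE_RE.search(pdf_text))
    has_urgency = bool(URGENCY_RE.search(pdf_text))
    return has_brand and has_phone and has_urgency
```

A real mail-security pipeline would extract text from the PDF attachment first and combine many more signals; this shows only the scoring idea.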
OpenAI implements “information tenting” policies that limit staff access to sensitive algorithms, isolates proprietary technology in offline systems and maintains a “deny-by-default” policy to protect against corporate espionage
OpenAI has reportedly overhauled its security operations to protect against corporate espionage. The company accelerated an existing security clampdown after Chinese startup DeepSeek released a competing model in January, with OpenAI alleging that DeepSeek improperly copied its models using “distillation” techniques. The beefed-up security includes “information tenting” policies that limit staff access to sensitive algorithms and new products. For example, during development of OpenAI’s o1 model, only verified team members who had been read into the project could discuss it in shared office spaces. OpenAI now isolates proprietary technology in offline computer systems, implements biometric access controls for office areas (it scans employees’ fingerprints), and maintains a “deny-by-default” internet policy requiring explicit approval for external connections. The company has increased physical security at data centers and expanded its cybersecurity personnel.
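A “deny-by-default” network policy like the one described can be modeled simply: every outbound connection is checked against an explicit allowlist, and anything absent is refused. The destination names here are hypothetical placeholders, not OpenAI's actual configuration.

```python
# Explicitly approved (host, port) pairs; entries are hypothetical.
ALLOWED_DESTINATIONS = {
    ("updates.internal.example", 443),
    ("pypi-mirror.internal.example", 443),
}

def egress_allowed(host, port):
    # Deny-by-default: anything not explicitly approved is refused.
    return (host, port) in ALLOWED_DESTINATIONS
```

The design choice is that the failure mode is closed: a forgotten approval blocks a legitimate connection (annoying but visible) rather than silently permitting exfiltration.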
NiCE’s report reveals that scams are still the method of choice across 57% of attempted fraud transactions and that 67% of all fraud is linked to just 7% of payments made to newly added payees; ATO fraud is showing no sign of disappearing
According to the 2025 NiCE Actimize Fraud Insights Report, from 2023 to 2024 fraudsters’ focus shifted back slightly from scams toward account takeover (ATO) fraud in terms of the overall value of attempts. Scams are still the method of choice across 57% of attempted fraud transactions; however, ATO fraud is showing no sign of disappearing. From a volume perspective, there was a slight shift toward scams, to 52% in 2024 vs. a 50/50 split in 2023. In the U.S., the top fraud challenges vary significantly depending on whether the focus is on value (dollar amount of the transactions) or volume (number of transactions), the report says. The most notable development since last year’s report concerns international wires. In 2024, the total value of international wire transactions declined 6% year over year, but the value of attempted fraud for international wires surged 40%. Additionally, Zelle transactions saw a 26% increase in value, accompanied by a 34% rise in attempted fraud. The report’s data also revealed that 67% of all fraud is linked to just 7% of payments made to newly added payees — highlighting how fraudsters exploit the moment a new recipient is introduced into the payment flow. The report also highlights that international wire fraud attempts are becoming more targeted and sophisticated, often involving social engineering tactics and mule accounts across borders. Its statistics further show that fraudsters are strategically targeting different payment types based on their characteristics: high-value fraud through checks and wires, and high-frequency, lower-value fraud through faster digital channels like Zelle.
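The new-payee statistic implies a striking concentration of risk, which a quick calculation makes concrete: if 7% of payments carry 67% of fraud, those payments are roughly 9.6 times more fraud-dense than an average payment.

```python
# Figures from the NiCE Actimize report: 67% of fraud is tied to the
# 7% of payments made to newly added payees.
fraud_share = 0.67
payment_share = 0.07

# Risk "lift": how much more fraud-dense new-payee payments are than an
# average payment. A lift near 1.0 would mean no concentration at all.
lift = fraud_share / payment_share
print(f"New-payee payments are {lift:.1f}x more fraud-dense than average")
```

This is why controls that add friction specifically at the new-payee step (confirmation delays, out-of-band verification) have outsized impact relative to blanket transaction screening.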
Object First’s plug-and-play solution addresses the ransomware threat to Veeam architectures by enforcing immutability policies directly at the storage layer via an open API, ensuring backups remain tamper-proof through zero access to root
Object First Inc. addresses the ransomware threat with its out-of-the-box immutability solution, delivering plug-and-play, ransomware-proof storage with zero setup hassle, according to Sterling Wilson, Field Chief Technology Officer at Object First. “Ootbi stands for out-of-the-box Immutability, and that is what you get when you buy an Object First secure appliance,” Wilson said. “Threat actors that come in that steal domain admin accounts, that steal the highest level of credentials, go directly for the backup software. We wanted to prevent that. We do that with zero access to root. We provide the security that is third-party tested that the Veeam users today need.” Object First is pioneering ransomware-proof storage for Veeam architectures by combining robust security, simplicity, high performance and broad channel reach. These remedies are achieved through purpose-built solutions, such as Ootbi, according to Wilson. The Smart Object Storage API is pivotal in ensuring seamless integration between Object First and Veeam solutions, especially for ransomware-proof data protection and backup management. By enforcing immutability policies directly at the storage layer, SOSAPI strengthens data resilience and ensures backups remain tamper-proof, according to Wilson. “We have direct integration using the best [application programming interfaces], and even an API that we’ve created together called the SOSAPI, which is the Smart Object Storage API. We leverage it in its full amount, meaning it is an open API that any of the storage vendors can use. We use it intelligently, not to only move the data efficiently, but to also place the data across a clustered environment as data grows.”
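Storage-layer immutability of the kind Wilson describes can be modeled in miniature: once an object is written with a retention date, neither overwrites nor deletes succeed until that date passes, and there is no privileged bypass (the “zero access to root” idea). This toy class is an illustration of the concept only, not Object First's SOSAPI.

```python
from datetime import datetime, timedelta, timezone

class ImmutableBackupStore:
    """Toy model of storage-layer immutability: once an object is written
    with a retention date, it cannot be overwritten or deleted until that
    date passes, and there is no privileged ("root") bypass."""

    def __init__(self):
        self._objects = {}  # key -> (data, retain_until)

    def _locked(self, key):
        entry = self._objects.get(key)
        return entry is not None and datetime.now(timezone.utc) < entry[1]

    def put(self, key, data, retain_days):
        if self._locked(key):
            raise PermissionError("object is immutable until retention expires")
        until = datetime.now(timezone.utc) + timedelta(days=retain_days)
        self._objects[key] = (data, until)

    def get(self, key):
        return self._objects[key][0]

    def delete(self, key):
        if self._locked(key):
            raise PermissionError("retention period has not expired")
        del self._objects[key]
```

The key property is that the rejection happens inside the storage layer itself, so even an attacker holding the highest-level backup-software credentials cannot tamper with existing backups.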
KnowBe4’s Just-in-Time training analyzes the existing security stack and delivers real-time, context-sensitive “nudges” based on users’ current actions, leveraging behavioral science, AI-driven analytics and automation to mitigate risky behavior before it escalates
AI-driven cybersecurity empowers organizations with proactive defenses, accelerated response times and more robust protection. One breakthrough in this space is Just-in-Time AI training, a transformative method that enhances cybersecurity awareness. By delivering real-time, context-sensitive “nudges” based on users’ current actions, KnowBe4 Inc. uses this approach to mitigate risky behavior before it escalates, according to Javvad Malik, lead security awareness advocate at KnowBe4. “The Just-in-Time training or the nudges is where AI can integrate with your existing security stack,” Malik said. “You have firewalls, you have network monitoring controls, you have some [endpoint detection and response], you have some gateway controls, you have a lot of visibility into what people are doing. What AI can do is pull all of that out and analyze it and say, ‘Okay, this user’s now plugged in a USB drive. It’s not a corporate-approved one.’” AI-driven cybersecurity significantly enhances awareness training and user behavior, supporting stronger risk mitigation by leveraging real-time analytics, personalization and automation. By combining behavioral science, AI-driven analytics and interactive training tools in its comprehensive training platform, KnowBe4 transforms employees from potential vulnerabilities into active defenders, greatly strengthening an organization’s human layer of defense against cyber threats, according to Malik.
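The nudge flow Malik describes, pulling telemetry from the existing stack and reacting to specific user actions, can be sketched as a small rule evaluator. The event fields and rules below are hypothetical examples, not KnowBe4's implementation.

```python
def evaluate_event(event):
    """Map a telemetry event from the existing stack (EDR, gateway,
    USB monitoring) to a just-in-time nudge, or None if behavior is fine.
    Event fields and rules are hypothetical examples."""
    if event.get("type") == "usb_inserted" and not event.get("corporate_approved"):
        return "This USB drive isn't corporate-approved. Please use an approved device."
    if event.get("type") == "link_clicked" and event.get("domain_age_days", 10**6) < 30:
        return "That site was registered very recently. Double-check before entering credentials."
    return None
```

The point of the pattern is timing: the message arrives at the moment of the risky action, when it can still change behavior, rather than in an annual training module.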
Congruity360 InfoGov generates a higher-quality pool of AI-ready data and reduces AI compute and storage requirements by using metadata to identify redundant and outdated information, eliminating 60% to 70% of the data
Congruity360 InfoGov focuses on helping organizations protect and manage unstructured data through AI-driven cyber resilience tools. Congruity360 helps organizations take the bites out of bytes through a process that leverages the promise and capabilities of data classification. By identifying redundant, outdated and sensitive information, the company helps clients understand the data they have and act on it. CEO Mark Ward said, “In order to get smart data, you have to basically limit the amount of garbage that is potentially available for your AI outcomes. We do that by identifying through metadata what information is redundant. It’s copies upon copies or, as you and I both know in the storage world, snapshots across snapshots.” That process yields a higher quality pool of data that is then fed into AI workloads. This solution results in better outcomes and lower costs, according to Ward. “By eliminating anywhere from 60% to 70% of the data, by eliminating rot, we’re able to reduce the amount of AI compute and AI storage required on the backend. With the cost being the cost, that’s a big, big outcome.” With the rise of AI agents, there is also a risk that autonomous bots will act based on erroneous data. Congruity360 has sought to minimize this issue through a solution called CDM Hub that empowers individual data owners. “The CDM Hub was actually developed from one of our large European customers who was managing their [General Data Protection Regulation] exposure,” Ward explained. “It not only gives the end-user owner[ship] of the data, but it’s actually a hierarchical interface so that the management organization … has ultimate say on what data is being used. This hierarchical approach to making sure that human intervention at the appropriate levels is applied to what the machine learning engine produces.”
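The redundancy-elimination step Ward describes (“copies upon copies”) can be illustrated with content hashing: keep one canonical copy per digest and measure how much data drops out. This is a simplified sketch; Congruity360 works from metadata at enterprise scale, and the file set below is invented.

```python
import hashlib

def dedup_by_content(files):
    """Keep one canonical copy per content digest and report the share
    of bytes eliminated. `files` maps path -> raw bytes."""
    seen = {}
    for path, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        seen.setdefault(digest, (path, data))  # first copy wins
    kept = {path: data for path, data in seen.values()}
    total = sum(len(d) for d in files.values())
    kept_bytes = sum(len(d) for d in kept.values())
    eliminated = 1.0 - kept_bytes / total if total else 0.0
    return kept, eliminated
```

Feeding only the deduplicated pool into AI workloads is what shrinks the downstream compute and storage bill, since every eliminated copy is data the models never have to process or retain.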