Originally published in the 2026 edition of The Women’s Tech & Telecom Partnership’s “The Impact Report.”

Equipping Americans on the Digital Front Lines to Fight Back in the Agentic Era.

A recent news report detailed the experience of Abigail, a woman living in Southern California. Abigail met Steve on Facebook and fell in love.i Soon their conversations moved to WhatsApp. Over time, they began planning a future together—a future that included buying a beach house.

This required Abigail to sell her condo. Comparable homes in her neighborhood were worth roughly $550,000, but selling it fast mattered more than getting a good price. At Steve’s urging, Abigail listed it for $350,000. A wholesale real estate firm quickly bought the property below market value.

Once the sale was finalized, Abigail prepared to send Steve $70,000.

Days before the transfer was to be made, Abigail’s daughter Vivian intervened. When Vivian pressed her mother for answers, Abigail shared a video from Steve: a personal message in which he addressed her directly and called her “my queen.”

Vivian watched closely. Something felt off. She saw what her mother could not: the video was a deepfake.

There was much more to the scam than fabricated messages. Vivian discovered her mother had sent “Steve” more than $81,000 in money orders, cash, Bitcoin, Zelle payments, and gift cards. By the time authorities documented the case, the money was long gone.ii Recovery prospects remain uncertain as the dispute continues in court.

Abigail’s victimization is hardly unique. According to the FBI, roughly 59,000 people reported being victims of romance scams in 2024 alone.iii The actual figure is likely much higher, as nearly half of Americans are reluctant to admit they’ve been cyber-conned.

Men and women tend to experience scams differently. Data from the Better Business Bureau’s Scam Tracker suggests women are twice as likely to report financial losses, while men are likely to lose larger amounts of money than women.iv Loss patterns vary. Tactics change and are tailored to each target. But scammers do not discriminate by gender, age, income, location, or political affiliation. They target everyone—and work hard to perfect their crime of persuasion, scaling their attempts with emerging technologies like AI.

Scams are as old as time, but generative AI has made them far more convincing.v In Abigail’s case, visual confirmation replaced text-based persuasion. Americans are learning that voice and video are not proof of life. Yet even as we adapt to that shift, the next generation of scams is emerging.

Generative AI is reactive, generating outputs when prompted.vi Agentic AI is designed to go further: to make plans, execute multi-step actions, adapt to feedback, and operate with limited oversight.vii In commerce, AI agents monitor prices, initiate transactions, and manage purchases on behalf of consumers.viii These capabilities promise efficiency and personalization, but also amplify risks and introduce new threats.
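The distinction can be made concrete with a short sketch. Everything below is illustrative, not any vendor’s actual API: a generative model returns one output per prompt and stops, while an agentic system works through a plan, acts, observes the result, and adapts with limited oversight.

```python
def generative_model(prompt: str) -> str:
    """Reactive: one output per prompt, then the system stops."""
    # Stand-in for a real model call; the response text is a placeholder.
    return f"response to: {prompt}"

def agentic_system(goal: str, max_steps: int = 5) -> list[str]:
    """Agentic: plan, act, observe feedback, and iterate toward a goal."""
    # Plan: decompose the goal into steps (hypothetical decomposition).
    plan = [f"step {i} toward '{goal}'" for i in range(1, max_steps + 1)]
    observations = []
    for step in plan:
        result = generative_model(step)   # act (e.g., call a model or tool)
        observations.append(result)       # observe feedback
        if "goal reached" in result:      # adapt: stop early when done
            break
    return observations
```

The loop is the essential difference: a generative call ends after one response, while the agentic wrapper keeps acting on its own intermediate results, which is what lets such systems monitor prices or manage purchases without a human prompting each step.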

Researchers have found that AI agents can sustain human-level scam contact.ix Industry leaders caution that as agentic commerce expands, scammers will attempt to compromise new tools or deploy their own systems to initiate fraudulent transactions at a speed and volume well beyond human capacity.x

Conversely, agentic AI systems also have tremendous potential to combat scams by offering a more proactive, adaptive, and holistic defense.

AI currently supports personalized consumer protection and real-time intervention.xi On-device AI systems analyze phone and text conversations locally, alerting users when dialogue patterns resemble known scams.xii Financial institutions rely on machine learning models to evaluate hundreds of behavioral signals in milliseconds, flagging anomalous transfers before funds leave an account.xiii In forensic settings, AI-assisted blockchain analytics allow investigators to trace illicit transactions and map laundering networks that once seemed opaque.xiv
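As a rough illustration of that kind of behavioral scoring, the sketch below combines a handful of risk signals into a score and holds transfers that cross a threshold. The signals, weights, and threshold are all hypothetical assumptions for the example, not any institution’s actual model.

```python
# Hypothetical behavioral signals with hand-set weights (illustrative only).
SIGNAL_WEIGHTS = {
    "new_payee": 0.30,           # first-ever transfer to this recipient
    "unusual_amount": 0.25,      # far above the account's typical transfer
    "rushed_session": 0.20,      # funds moved minutes after login
    "crypto_or_gift_card": 0.25, # payment rails favored by scammers
}

def score_transfer(signals: dict[str, bool]) -> float:
    """Combine boolean behavioral signals into a risk score in [0, 1]."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def flag_if_anomalous(signals: dict[str, bool], threshold: float = 0.5) -> bool:
    """Hold the transfer for review when the score crosses the threshold."""
    return score_transfer(signals) >= threshold
```

A production system would weigh hundreds of such signals with learned weights rather than four hand-set ones; the point of the sketch is the mechanism the paragraph describes: each transfer is scored in real time, so an anomalous one can be held before funds leave the account.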

Agentic systems could extend those protections by operating directly within high-risk interactions. Startups such as BeeSafe AI, for instance, are developing anti-scam agents that engage scammers in real time, interrupting active scams while collecting threat intelligence and diverting cybercriminals’ time and resources.xv

Similarly, Charm Security’s Fraud Investigation Agent serves as an AI fraud expert, assisting investigators by synthesizing signals across alerts, cases, and customer interactions—even going so far as to interpret human intent and behavioral psychology—to guide faster, higher-confidence decisions across prevention, investigation, and resolution.xvi These systems apply the same automation, scalability, and persistence that criminals seek to exploit, but in service of defense.

It can be easy to blame technology for accelerating scams. But technology is merely a tool that reflects the intent of its user. Scams are not new. What is new is their velocity and sophistication.

Regulation is essential to address cyber scams as they evolve, but regulation cannot be recalibrated often enough to keep up with adaptive tactics. Defensive technologies can. Strengthening defensive capabilities through real-time behavioral analysis, embedded intervention, and intelligence-driven disruption can more immediately combat scams now and in the future.

The question, therefore, is not whether AI agents will enter commerce and communication. They already have.

The question is whether our defensive systems can evolve to keep up with the threats we face—and whether we are willing to fully embrace the technologies necessary to protect Americans on the digital front lines.


i Kurt Knutsson, “AI deepfake romance scam steals woman’s home and life savings,” Fox News, Feb. 6, 2026. https://www.foxnews.com/tech/ai-deepfake-romance-scam-steals-womans-home-life-savings.

ii Ibid.

iii “U.S. Attorney, FBI, and HSI issue warning about latest romance and other online investment scams,” U.S. Attorney’s Office, Western District of New York, Feb. 12, 2026. https://www.justice.gov/usao-wdny/pr/us-attorney-fbi-and-hsi-issue-warning-about-latest-romance-and-other-online-investment.

iv “Certain demographic groups more vulnerable to scams,” The Daily Advocate, Feb. 18, 2021. https://www.dailyadvocate.com/2021/02/18/certain-demographic-groups-more-vulnerable-to-scams.

v Ibid.

vi Haiman Wong and Tiffany Saade, “The Rise of AI Agents: Anticipating Cybersecurity Opportunities, Risks, and the Next Frontier,” R Street Institute, May 29, 2025. https://www.rstreet.org/research/the-rise-of-ai-agents-anticipating-cybersecurity-opportunities-risks-and-the-next-frontier.

vii Ibid.

viii Patrick Cooley, “How agentic AI could turbocharge fraud,” Payments Dive, Nov. 4, 2025. https://www.paymentsdive.com/news/how-agentic-ai-could-turbocharge-fraud-payments/804562.

ix Sanket Badhe, “ScamAgents: How AI Agents Can Simulate Human-Level Scam Calls,” arXiv, Aug. 8, 2025. https://arxiv.org/abs/2508.06457.

x Cooley. https://www.paymentsdive.com/news/how-agentic-ai-could-turbocharge-fraud-payments/804562.

xi Wong and Melear. https://www.rstreet.org/commentary/protecting-americans-from-fraudsters-and-scammers-in-the-age-of-ai.

xii Ibid.

xiii Ibid.

xiv Ibid.

xv “Stopping Scams Before They Reach Your Customers,” BeeSafe AI, last accessed Feb. 13, 2026. https://beesafe.ai.

xvi “Meet Charm,” Charm Security, last accessed Feb. 13, 2026. https://www.charmsecurity.com/#section-about.