Protecting Americans from Fraudsters and Scammers in the Age of AI
Nearly all Americans now report receiving scam messages every week, whether by email, phone, or text. From “Nigerian prince” emails to fake toll payment texts, scams haven’t disappeared—they’ve only grown faster, broader, and more sophisticated.
Today, scammers use generative artificial intelligence (AI) to craft polished phishing emails, forge realistic IDs and documents, and create deepfake voices to impersonate trusted individuals. They also exploit moments of stress, uncertainty, and loneliness, capitalizing on tax season, job insecurity, or personal hardship. Such tactics reveal how fraud has evolved into a transnational enterprise that merges technological innovation with organized criminal and state-backed operations often linked to adversarial nations like North Korea, China, and Russia. This means that every dollar lost is not just a personal setback—it’s money stolen from Americans and redirected to fund hostile operations that undermine our national security.
With more than nine in ten Americans identifying scams as a national problem, and with Americans receiving more scam calls than residents of any other country, we can no longer afford to treat scams as isolated consumer risks or challenges left solely to the financial sector. To protect Americans, we must adopt a whole-of-society approach that leverages emerging technologies like AI to strengthen our defenses.
AI’s Role in Combating Financial Fraud and Scams
While bad actors have used AI to refine their techniques and expand their reach, those same capabilities are already helping law enforcement, financial institutions (FIs), and consumers improve visibility, coordination, and accountability across three key fronts.
1. Personalized Consumer Protection and Real-Time Intervention
One of the most promising applications of AI in fraud prevention is personalized consumer protection—stopping scams before they result in financial loss or data exposure. Traditional systems often rely on spam filters, manual reviews, or consumer complaints, which means intervention is imprecise or comes too late. AI changes that equation by learning what “normal” looks like for each user and flagging anomalies as they happen. Whether it’s an unusual login, a suspicious payment request, or even an atypical typing pattern, AI can pause a transaction or warn consumers before money or information changes hands. In many cases, that brief pause—even if it lasts only a few seconds—is precisely what prevents a moment of uncertainty from becoming a costly mistake.
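The per-user baselining idea described above can be illustrated with a minimal sketch. This is purely illustrative, using a simple z-score over a user's transaction history rather than any institution's actual model; the function name and threshold are assumptions for the example:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], amount: float, threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount deviates sharply from the user's own baseline.

    A hypothetical stand-in for the richer behavioral models described above,
    which would also weigh login patterns, devices, typing cadence, and so on.
    """
    if len(history) < 2:
        return False  # not enough data yet to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    # How many standard deviations from this user's typical spending?
    return abs(amount - mu) / sigma > threshold

# A user who usually spends $25-$55 suddenly sends $5,000:
history = [25.0, 40.0, 32.0, 55.0, 28.0]
print(is_anomalous(history, 45.0))    # typical purchase, no alert
print(is_anomalous(history, 5000.0))  # flagged: pause and warn the user
```

In a real system, a flag like this would not block the payment outright; it would trigger exactly the brief pause and warning the paragraph describes, giving the consumer a moment to reconsider.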
For example, Google has introduced on-device AI safeguards for phone and text conversations, designed to detect when an ongoing dialogue starts to sound suspicious. If a conversation is deemed risky, users are notified through on-screen or audio alerts. These AI models also process data locally, so conversations remain private. Beyond this effort, the Federal Trade Commission has invited researchers and innovators to develop new technological solutions against AI-enabled voice cloning, while FIs like Mastercard are employing AI to verify identities and transactions in real time.
2. Fraud Prevention and Risk Mitigation
Apart from enhancing protection for individual consumers on the front lines, AI enables FIs, technology firms, and the government to combat AI-driven fraud by generating immediate risk scoring, expanding risk-based screening, and automating identity verification. In the public sector, the U.S. Treasury Department has already applied AI and machine learning to prevent and recover more than $4 billion in fraudulent and improper payments over the past year.
In the private sector, Visa uses AI to process more than 300 billion transactions annually. For each transaction, an AI model analyzes over 500 features and assigns a risk score to detect and block enumeration attacks in real time. Emerging solutions, such as agentic AI commerce and open-source AI e-signatures, are expected to introduce dynamic authorization layers; stronger identity binding between known users, devices, and third parties; and automated compliance checks that make fraud prevention more adaptive and transparent.
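One pattern behind enumeration attacks is a burst of card-authorization attempts from a single source in a short window. The sketch below shows that sliding-window idea in miniature; it is a toy illustration, not Visa's system, and the class name and limits are assumptions:

```python
from collections import defaultdict, deque

class EnumerationDetector:
    """Flag a source that fires many authorization attempts in a short window,
    a pattern typical of enumeration (card-testing) attacks.

    Illustrative only: production risk models score hundreds of features
    per transaction, not just attempt velocity.
    """
    def __init__(self, max_attempts: int = 10, window_seconds: float = 60.0):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = defaultdict(deque)  # source -> recent timestamps

    def record(self, source: str, timestamp: float) -> bool:
        q = self.attempts[source]
        q.append(timestamp)
        # Drop attempts that fell out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_attempts  # True => block in real time

detector = EnumerationDetector(max_attempts=5, window_seconds=60.0)
flags = [detector.record("ip-203.0.113.7", float(t)) for t in range(8)]
print(flags)  # the first five attempts pass; the rapid burst is then flagged
```

Attempt velocity here stands in for one of the many features a real risk score would combine before authorizing or blocking a transaction.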
3. Recovery Support and Enhanced Forensic Investigation
AI is transforming how law enforcement traces and recovers stolen assets when scams succeed. AI-driven data analytics allow investigators to collect, process, and cross-reference vast datasets—from financial records and communications to open-source intelligence—to identify recurring patterns and anomalies that may reveal how fraud networks operate. Not only do these tools improve decision-making and accelerate investigations, they also enable predictive modeling that helps prevent repeat offenses and strengthen recovery efforts. Moreover, by integrating AI and machine learning algorithms into blockchain analytics platforms, investigators, analysts, and forensic accountants can trace laundering patterns and link crypto wallets to criminal entities, connections that would be impossible to establish manually.
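At its core, fund-flow tracing treats on-chain transfers as a graph and follows money outward from a flagged wallet. A minimal sketch of that traversal, with entirely hypothetical wallet names (real platforms layer clustering heuristics and machine learning on top of this basic step):

```python
from collections import deque

def trace_wallets(transfers: list[tuple[str, str]], seed: str) -> set[str]:
    """Follow the flow of funds outward from a flagged wallet.

    A breadth-first search over the transfer graph: each (sender, receiver)
    pair is a directed edge, and we collect every wallet reachable from the seed.
    """
    graph: dict[str, list[str]] = {}
    for sender, receiver in transfers:
        graph.setdefault(sender, []).append(receiver)
    reached, queue = {seed}, deque([seed])
    while queue:
        wallet = queue.popleft()
        for nxt in graph.get(wallet, []):
            if nxt not in reached:
                reached.add(nxt)
                queue.append(nxt)
    return reached

# Hypothetical laundering chain: scam wallet -> mixers -> cash-out exchange.
transfers = [("scam_wallet", "mixer_1"), ("mixer_1", "mixer_2"),
             ("mixer_2", "cashout_exchange"), ("unrelated_a", "unrelated_b")]
print(trace_wallets(transfers, "scam_wallet"))
```

On real chains this graph has millions of edges and adversarial obfuscation, which is exactly why investigators pair the traversal with AI-assisted clustering rather than doing it by hand.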
Earlier this year, federal investigators working with private-sector partners and aided by AI-augmented blockchain analysis traced and seized more than $225 million connected to cryptocurrency investment fraud schemes. Led by the U.S. Secret Service and the Federal Bureau of Investigation, this operation marked a milestone for digital asset recovery and was the largest cryptocurrency seizure in U.S. history.
The expanding role of emerging technologies like AI and blockchain analytics in these investigations has the potential to both accelerate recovery and strengthen deterrence if supported by forward-looking and balanced regulatory solutions.
Ongoing Debates and Financial Regulatory Recommendations
While AI and emerging technologies play a vital role in stemming the tide of scammers, so too does financial regulatory policy. Much of the regulatory landscape was built around legacy systems and has failed to keep pace with technological advancements. Moreover, financial regulations fail to address the threat of scams in any meaningful way. It is also vital to consider the stake FIs have in fighting scams. Large-scale regulatory reform could meaningfully reduce fraud and scams by enacting the following changes.
1. Modify Existing Reporting and Data-Sharing Regulations
Data privacy is a core feature of federal financial regulatory policy, along with efforts to ensure legality in financial transactions. One such regulation involves Suspicious Activity Reports (SARs) filed with the Financial Crimes Enforcement Network for suspected illegal activity both by and against customers, including suspected scams. Due to liability and regulatory-penalty concerns, SAR requirements are structured in such a way that FIs are unintentionally incentivized to over-report. This leads to what is effectively a reporting black hole, with millions of SARs filed annually at a cost ranging from $300 to $18,000 per report. To the extent that FIs do report scams, the subsequent follow-up and enforcement either comes too late, lacks authority or jurisdiction, or goes unaddressed altogether.
To properly address scams, data-sharing regulations should be modified. This includes a need for suspected scam reporting that is shareable across FIs, with law enforcement, with other regulatory agencies, and even across sectors (e.g., social media, dating sites). Any such data sharing must occur in real time to address both the threat and consumer privacy concerns effectively. Given the gravity of the threat, this reporting must occur outside of SARs to give FIs additional legal cover to share data without fear of regulatory reprisal.
This also means that current SAR reporting requirements must be reduced dramatically to avoid redundancy (and in recognition that much of the reporting has to do with regulatory burdens rather than real threats). Recent regulatory guidance has begun to address this, but significant changes are still needed.
2. Establish a Consumer-Facing Scams Hub
Given the myriad government agencies governing financial policy, efforts geared specifically toward aiding consumers with financial scams are few and far between. Despite its name, the Consumer Financial Protection Bureau (CFPB)—a frequent “problem child” among federal regulators because of its excessive authority and unique funding structure—devotes only a small portion of its operations to consumer scam protection. With the CFPB in need of reform and scams in desperate need of addressing, repurposing the CFPB to act as the consumer-facing financial scam hub could solve both issues simultaneously. This could include a centralized location for scam reporting (a process that’s currently extremely fragmented), resolution, and possible methods of restitution. It could also include real-time sharing in conjunction with the regulatory changes to reporting and data-sharing. Education on scam threats, including how to detect them, could likewise be managed here.
Of course, these same goals could be accomplished by adding scam reporting as a core function of the CFPB without modifying its purpose or creating an entirely new agency. However, neither the current administration nor most of the American public see further bloating of the federal government as necessary or desirable.
Conclusion
Financial services are a critical pillar of everyday life. Each paycheck, transaction, loan, investment, and savings account depends on stability, security, and resilience. When fraud and scams exploit that foundation, the harm extends far beyond individual victims or sectors, eroding public trust, the broader economy, and even our national security.
AI is already proving invaluable in helping us prevent, detect, and recover from cyber-enabled financial crimes. As blockchain applications for forensic analytics and stablecoin payment systems continue to expand, sustained progress will depend on whether innovation, industry, policy, and public awareness evolve in concert.