AI fuels surge in digital holiday scams

Digital scams are on the rise this holiday season, fueled by an explosion in the number of fraudsters using artificial intelligence to generate sneakier tricks.

According to LexisNexis Risk Solutions’ Government Group, which works with federal agencies to combat the schemes, AI has driven a 35% to 40% surge in digital holiday scams over last year, totaling billions of dollars in losses.

Haywood Talcove, CEO of government for LexisNexis Risk Solutions, said roughly 1 in 3 major digital scams now involve the technology. That’s “up from single digits last year and nearly zero two years ago,” he said.

“What’s different this season is the explosive role of generative AI,” Mr. Talcove said. “Criminals can now create perfect fake websites, emails, chats and even cloned voices in seconds, wiping out the old red flags like bad grammar or low-quality images.”

According to cybersecurity experts interviewed by The Washington Times, the trend has created an urgent need for consumers to protect themselves in situations they long took for granted.

One rising holiday scam this year is the AI-generated “emergency,” in which criminals clone a loved one’s voice and call relatives to demand money.

Other AI-powered gambits include robocalls, emails, text messages, QR codes and social media ads that direct shoppers to convincing phony websites imitating banks, retailers and post offices.

The computer security company McAfee warns that 1 in 5 Americans has already been scammed this holiday season, double last year’s rate. The California-based firm said the average victim now loses $840 and usually doesn’t realize it until a bank flags the charge.

Sandra Glading, a McAfee online safety expert, said scammers are using AI this year “to rewrite high-pressure messages into friendly holiday promotions that appear harmless” to the average eye.

“They can clone celebrity voices, replicate logos, and create visuals that feel festive and professional,” Ms. Glading said. “As a result, many shoppers say scams are now nearly indistinguishable from real holiday deals.”

Officials at FICO, which generates the nation’s most widely used credit scores, said their partnering financial institutions have tracked “a significant increase” in people falling for AI-powered scams.

“Scams involving social engineering are sneaky because they trick consumers to avoid overthinking,” said Debbie Cobb, FICO’s vice president of product management. “These are scams that seem reasonable, low-risk, or prey on people wanting to help.”

According to the FBI’s Internet Crime Complaint Center, the most common holiday gambits involve scammers selling items they never deliver or ordering products they never pay for. The FBI estimates that nonpayment and nondelivery scams cost consumers over $104 million last year.

“Two critical lures to watch out for are offers that are too-good-to-be-true and package delivery scams,” Joshua Del Valle, PNC Bank’s head of enterprise fraud, said in an email. “GenAI enables [criminals] to make it almost impossible to pick out a fake, so it’s critical to navigate directly to a legitimate website before buying into a special sale or making payments.”

Mike Martel, a postal inspector at the U.S. Postal Inspection Service, which investigates mail crimes, said his office has tracked an uptick in text messages with links to AI-powered fake post office websites designed to steal personal and financial information.

He said his office last year submitted requests to take down 85,310 fake websites and flagged 67,295 web links for spoofing the U.S. Postal Service.

“An example of that may be a phishing scammer that uses AI to generate a website that looks more professional and legitimate, without common spelling and grammatical errors,” Mr. Martel said. “Know that USPS does not send unsolicited text messages. Also, USPS never asks for personal information via text messages.”

The Federal Trade Commission estimates that consumers lost over $12.5 billion to fraud last year, up 25% from 2023. Cybersecurity experts widely blame the uptick on a surge in overseas slave labor networks employing AI to fake content.

No red tape

The nonprofit International Association of Financial Crimes Investigators noted that the reported share of Americans victimized by a scam doubled from 31% in November 2024 to 62% last month.

Mark Solomon, a vice president of the association, warned that AI gets better every day at churning out donation, investment, romance and retail holiday scams targeting cash-strapped families.

“We’re worrying about how to regulate AI, but the criminals don’t have any of that red tape,” Mr. Solomon said in a phone call. “They’re using AI nonstop to generate fake materials that have emotional appeal during the holidays, and they are getting incredibly hard to catch.”

As AI makes holiday fraud more convincing, cybersecurity experts say the core tricks remain the same.

“These aren’t new scams,” said Al Pascual, CEO of Scamnetic, a Florida-based digital security company. “They are just much more like the real thing. Gone are pixelated logos and crude signs of faked content, and in its place are customized, carbon copies of legitimate emails, websites and marketing videos that are now so deftly crafted that even experts have trouble detecting them.”

Taking precautions

Finance experts say the best protection against AI-generated scams is to avoid digital advertising and deal only with trusted vendors.

“Go directly to the retailer or carrier website instead of clicking a link,” said Angelica Gianchandani, a New York University marketing instructor. “Avoid marketplace sellers with no history or questionable reviews.”

Some experts stress the need to take added safety precautions, such as establishing a family password to protect grandparents from AI-cloned phone calls.

“Your real grandchild will always understand the need to verify,” said Darryl Santry, cybersecurity chair at Wilmington University in Delaware and a former federal law enforcement officer. “A scammer will pressure you to act immediately. That pressure, combined with that perfect voice, is precisely what makes these scams so dangerous.”

Miranda Margowski, spokeswoman for the Financial Technology Association, a nonprofit trade group representing PayPal and other payment services, urged consumers to be patient while companies develop their own AI-powered safeguards.

“Things are still developing, and everyone has a role to play in protecting our payments,” Ms. Margowski said in a phone call.

For those targeted by an AI-generated scam, the experts advise acting quickly before criminals drain their bank accounts.

“You should put a stop on your credit cards, change your passwords and, depending on the situation, contact local law enforcement,” said Betsy Cooper, a cybersecurity expert and executive director of the left-leaning Aspen Institute’s Aspen Policy Academy.
