
Online scams have been around since the dawn of the internet, with opportunistic criminals looking to make the most of new technological tools to trick unsuspecting individuals.
But over the past several years, innovations like generative AI, which can create original content such as text, images, audio and video, have allowed scammers to make their schemes far more believable.
AI has made such massive strides in the past few years that it can be especially tricky to determine whether something online is real, whether it's a harmless animal video on Facebook or a serious scam.
“Disinformation has been around for a long time, and the way I think about it is we as humans haven’t evolved, so the same kinds of tactics and approaches still work on us,” said Jason Davis, a misinformation and disinformation expert from the Newhouse School of Public Communications at Syracuse University.
With the digital landscape shifting so rapidly, it's important for older adults to stay up to date on internet safety practices. According to 2024 FBI data, people over 60 are the group most targeted by scammers and suffer the greatest total monetary losses.
Davis said that many scammers are still using their tried-and-true methods, but AI tools have allowed them to increase their effectiveness and the speed at which they attempt scams.
“With generative AI, the ability to sort of launch those [traditional] kinds of attacks, the scale and the speed have gone up exponentially, and so that’s really what has changed,” Davis said. “Often we see exactly the same tactics, just multiplied by an order of magnitude in terms of how prevalent they are and how believable they are.”
According to a consumer alert issued by the Maryland Office of the Attorney General in 2024, scammers had begun using AI to impersonate voices for phone calls, including using computer programs to mimic the voices of people the targets knew and trusted. These scams can impersonate the voice of a close family member or friend asking for urgent help to lower a target’s defenses. They also might claim to be from a governmental organization like the IRS, Medicare, or Social Security.
This doesn’t mean that people should ignore calls for assistance from their loved ones, but the OAG highlighted several red flags to be on the lookout for.
The first red flag was unsolicited contact, especially from people pretending to be part of a government organization. The office also warned about callers who use fear and urgency to override the brain's logical centers, and callers asking for personal information or payment via gift cards, wire transfers or cryptocurrency.
But beyond scam calls, there is a whole world of misleading or generative AI content flooding the internet, and Davis laid out several tips to help determine if what you’re seeing online is authentic.
“We think about it in three layers. The first is detection. Is [the content] authentic, or is it synthetic? Is it completely fabricated? The second is attribution. Where is it coming from? Who does it say it’s coming from? And that’s a really important one as well, because it creates authenticity, or [can show] somebody trying to mislead. And then that third layer is characterization. What does it mean to us, depending on our personal views?” Davis said.
Davis said that one tool he uses when analyzing a piece of content online is Google's reverse image search, which can show whether an image has appeared on the internet before and where it originally came from. On mobile devices and in web browsers, this can be done by clicking on an image and selecting "Search using Google Lens" from the menu provided.
When viewing posts on social media, watch for AI-generated speech, which often has a robotic delivery that lacks natural tone or casual speech patterns. The Better Business Bureau website adds that AI tends to repeat words or phrases unnaturally while discussing a single topic.
AI-generated videos and images can often be spotted through close examination: look for features melting together, odd body proportions, unnaturally smooth textures on people or objects, inconsistent shadows, and awkward body language.
For news stories and similar items, Davis recommends searching the title of the story and researching the author to determine whether they are a real person with a history of reputable reporting.
Davis said that taking a moment to ask whether a story seems plausible and its source legitimate is an important step before forming an opinion on the information or sharing it with your network.
Davis also highlighted a new trend online where scammers will create fake profiles and infiltrate people’s online communities using tactics he described as “social engineering.”
Davis said these people will research individuals or online communities and create profiles that fit seamlessly alongside their targets based on common interests, experiences and more. The fake accounts then may try to become sources of information and casual conversation, which can lead to them collecting personal information on their targets.
Davis said there are chatbots that have become adept at extracting information such as a person's job, social network, home address and phone number.
He added that while such information isn't a Social Security number, it can be just as damaging, making people targets for traditional scams and increasing the believability of a scammer's message.
“I would encourage people to just sort of be very suspicious when you form a new relationship or connection online. Make sure you are scrutinizing [new people] when it comes to deciding before you start having a deeper conversation [with them],” Davis said.
Davis added that whenever something online seems suspicious or too good to be true, it never hurts to confirm the information with your trusted network, which can serve as a neutral party in making sense of what you're seeing or hearing.
The Maryland OAG said that if you find yourself being victimized by a scam, immediately cut all contact with the scammer.
If you have given away personal information, contact the Identity Theft Unit of the Consumer Protection Division at [email protected].
And if you have lost money, notify your local law enforcement, the FBI and the Federal Trade Commission.
“To protect yourself, do not provide sensitive information to someone you do not know. In addition, always make sure you are on a secure site when entering credit card or other financial or personal information online. The website should begin with https when you are entering payment information. Finally, if something seems to be too good to be true, it probably is,” Maryland OAG spokesperson Jennifer Donelan wrote to Washington Jewish Week.