Phishing attacks occur when cybercriminals trick their victims into sharing personal information, such as passwords or credit card numbers, by pretending to be someone they’re not. Artificial Intelligence (AI) has made it easier for cybercriminals to carry out phishing attacks by writing believable phishing messages, mimicking people’s voices, researching targets and creating deepfakes. By using AI in these attacks, cybercriminals can appear more credible and trustworthy, leading more victims to send them private information or money. According to a 2024 study commissioned by Keeper®, 51% of IT leaders report seeing an increase in AI-powered attacks.
Continue reading to learn how cybercriminals are using AI in phishing attacks, examples of AI being used and how you can protect yourself against these attacks.
How cybercriminals are using AI to carry out phishing attacks
AI helps cybercriminals commit more dangerous phishing attacks against both individuals and businesses.
AI for writing
Using generative AI tools, like ChatGPT, cybercriminals can create phishing messages without the typical telltale signs: grammatical and spelling errors. To demonstrate how easy this is, I asked ChatGPT to draft a phishing email. The prompt I entered was immediately removed, and ChatGPT said, “This content may violate our usage policies.” Despite this warning, ChatGPT still produced an example of a phishing email. Here are ChatGPT’s results:
Notice that, unlike typical phishing emails, this message contains no grammatical or spelling mistakes. Cybercriminals who use generative AI tools, such as ChatGPT, do not need to think about how to phrase their demands for information. ChatGPT included urgent language, a place for a cybercriminal to insert a phishing link and even a threat that the target’s account will be suspended otherwise.
AI for researching
Generative AI can also aid cybercriminals in researching their targets. Earlier this year, the FBI warned individuals and businesses about receiving more personalized and tailored messages, which cybercriminals create using AI. Researching each target individually takes time, so cybercriminals use AI to speed up the task by processing and synthesizing public information about a target. This lets them learn about a target’s background and interests without spending as much time on laborious research.
AI for impersonating
In some situations, cybercriminals might use AI to mimic how people typically write their emails or texts, and they can even use AI to impersonate people’s voices. AI can also generate visual content, such as photos or videos, to impersonate someone the target trusts. After seeing a familiar profile picture or a posted video, a target might believe they are sending personal information or money to someone they know, when in reality they are looking at an AI-generated deepfake – more on that below.
Examples of AI-enabled phishing attacks
As AI-based phishing attacks become more common, there are several main types that you should watch out for. Let’s look at some examples of AI-enabled phishing attacks.
AI-generated phishing emails and texts
Before AI became a useful tool for cybercriminals, phishing messages sent over email or text were often easy to spot because they contained a variety of spelling and grammatical issues. For example, most phishing emails had obvious typos or awkwardly phrased sentences, as if the person writing the message did not speak the language. However, AI-generated phishing emails and texts are far more inconspicuous because AI eliminates these clear errors.
For example, a cybercriminal could send an AI-generated phishing email to a targeted employee at a large company. The employee might be trained to detect phishing emails that contain obvious typos and grammatical issues. However, if the phishing email is convincing enough and the cybercriminal impersonates the employee’s boss, the message could seem legitimate, and the employee might share confidential information over email.
AI-enabled vishing attacks
Vishing, a type of phishing attack that occurs over phone calls, has become more dangerous due to AI tools. In typical vishing attacks, the cybercriminal impersonates a company or individual the target trusts to increase the chances of receiving money or information. In AI-enabled vishing attacks, however, cybercriminals can pretend to be someone their target knows personally by cloning familiar voices from video or audio recordings. Once AI has processed these voices, the cybercriminal can mask their own voice and speak as if they were the target’s relative, friend or boss. Used this way, AI makes a vishing attack far more convincing because the target recognizes the voice on the other end of the line.
This has been seen recently in technologically advanced grandparent scams, in which cybercriminals download videos or audio recordings of their target’s loved ones from social media. Cybercriminals use AI to impersonate a loved one’s voice, tricking an elderly person into believing a family member is in danger, which typically results in money being sent.
AI-enabled spear phishing attacks
Spear phishing attacks occur when a cybercriminal targets a specific individual or organization by using prior information about them, such as their name, email address or phone number. Although this may sound similar to a traditional phishing attack, spear phishing is a more personalized and targeted cyber attack, making it difficult for a target to recognize that a message is part of a phishing attack. Cybercriminals use AI to research and collect information about their targets, then feed that information into generative AI tools like ChatGPT to write convincing emails and texts.
For example, let’s say a cybercriminal wants to target a college student interning with a major financial company. To save time, the cybercriminal uses AI to analyze the target’s social media and other public records. Once AI has gathered enough information about the target and their role at their organization, the cybercriminal can enter this information into a generative AI tool and write a personalized email posing as the target’s coworker or boss to convince them to share confidential information.
AI-generated deepfakes
Although deepfakes have existed for quite some time, they are becoming harder to detect due to advances in AI technology. Deepfakes are forms of media, such as photos, audio recordings and videos, that alter an aspect of a person to make them appear to be someone they are not. For example, recent deepfakes on social media depict celebrities engaging in inappropriate activities with other celebrities or ordinary people, when those interactions never actually occurred. Deepfakes are often used with malicious intent, such as blackmail or spreading false information. AI-generated deepfakes are becoming so realistic that even law enforcement has difficulty discerning whether an image or video is fake. Because these deepfakes look so natural, cybercriminals increasingly rely on them in phishing attacks, counting on the fact that many people believe what they see.
How to protect yourself against AI-enabled phishing attacks
Despite AI-enabled phishing attacks growing more frequent and harmful, there are still ways to protect yourself from these advanced cyber attacks.
Never click unsolicited links and attachments
To keep your private information and devices safe, never click on links or download attachments from unsolicited messages. If you receive an unsolicited email or text message urging you to click a link to verify your account information, it is most likely a scam. Even a careless click on a link can result in malware being installed on your device that steals your personal data. You can check whether a link is safe by hovering over it to reveal the actual URL, or by carefully copying and pasting the link into a URL checker like Google Transparency Report.
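To see why hovering matters, note that a link’s visible text and its true destination can differ, and lookalike addresses often bury a trusted name inside a longer hostname. The Python sketch below is a simplified illustration (the example URLs and checks are hypothetical, and this is no substitute for a real URL checker): it extracts the actual hostname from a link and flags common red flags.

```python
from urllib.parse import urlparse

def describe_link(url: str, expected_domain: str) -> str:
    """Show the real hostname behind a link and flag common red flags.

    A teaching sketch only; it is not a substitute for a real URL
    checker such as Google Transparency Report.
    """
    host = (urlparse(url).hostname or "").lower()

    if host == expected_domain or host.endswith("." + expected_domain):
        return f"{host}: matches the expected domain {expected_domain}"
    if "xn--" in host:
        return f"{host}: punycode hostname, possible lookalike characters"
    if expected_domain in host:
        return f"{host}: embeds '{expected_domain}' but is a different domain"
    return f"{host}: unrelated domain, treat as suspicious"

# Hypothetical examples: a legitimate subdomain vs. a lookalike domain
print(describe_link("https://accounts.google.com/signin", "google.com"))
print(describe_link("https://google.com.account-verify.io/signin", "google.com"))
```

In the second example, the browser actually connects to a subdomain of account-verify.io, even though “google.com” appears at the start of the address.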
The same goes for unsolicited messages with attachments: don’t download them onto your device. For example, if an unsolicited email contains a PDF or spreadsheet that you are urged to open, do not click on it, because it most likely contains malware. Clicking on or downloading an unsolicited attachment can trigger a malware infection that puts your private information at risk of being stolen. Check whether an attachment is safe by scanning it with antivirus software, double-checking that the sender is legitimate and ignoring any messages marked as spam by your email provider.
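One simple check worth knowing is inspecting an attachment’s file name before opening it, since malware often hides behind double extensions. The Python sketch below is illustrative only (the extension list is a small, hypothetical sample), and a name check is never a replacement for antivirus scanning:

```python
from pathlib import Path

# A small, illustrative sample of file types that commonly carry malware
RISKY_EXTENSIONS = {".exe", ".scr", ".js", ".vbs", ".bat", ".cmd", ".jar"}

def attachment_red_flags(filename: str) -> list[str]:
    """Return simple red flags based only on the attachment's name.

    A teaching sketch; real checks belong to up-to-date antivirus software.
    """
    flags = []
    suffixes = [s.lower() for s in Path(filename).suffixes]
    if suffixes and suffixes[-1] in RISKY_EXTENSIONS:
        flags.append("executable file type")
    if len(suffixes) > 1:
        # e.g. 'invoice.pdf.exe' pretends to be a PDF but runs as a program
        flags.append("double extension disguising the real file type")
    return flags

print(attachment_red_flags("invoice.pdf.exe"))  # both red flags
print(attachment_red_flags("report.pdf"))       # no red flags
```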
Don’t respond to unsolicited requests for your personal information
You should ignore any unsolicited requests over email, text or phone that ask for your personal information. If you receive a phone call from an unknown number claiming to be a familiar person or company, consider why they would be calling out of the blue to ask for your personal information before responding. Here are some common types of personal information you should be wary of sharing in unsolicited requests:
- Credit card information
- Usernames and passwords
- Bank account numbers
- Medical insurance information
- Social Security numbers
- Company trade secrets
Be cautious of deals that seem too good to be true
When cybercriminals use AI for research, it becomes easier for them to figure out a target’s interests. Do not believe any deals that seem too good to be true, especially if they appeal to something you’ve been wanting. For example, if you receive an email or text saying you won a lifetime supply of your favorite cosmetic product, be cautious, as this is likely a ploy to get you to click on something dangerous or share personal information.
Create a safe word with your family
A safe word or family code is a phrase you create in advance to verify a suspicious caller’s identity. For example, if someone calls you or a family member and claims to be someone you know, ask the caller for your safe word. If the caller gets the word wrong, you know the call is part of a phishing attack. Make sure your safe word is not something a cybercriminal or AI could guess by researching you or your family online. Having a unique safe word will help protect you and your family from falling for AI-enabled phishing attacks.
Use a password manager
One of the best ways to protect yourself against AI-enabled phishing attacks is by using a password manager like Keeper. A password manager not only stores your passwords in a safe place but can also protect you against phishing attacks through its autofill feature. Keeper Password Manager’s autofill feature, KeeperFill, will only fill in your login credentials if the website saved in your digital vault matches the website you are visiting.
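The idea behind this protection can be sketched in a few lines of code. The example below is not Keeper’s implementation; it is a minimal, hypothetical illustration of origin matching, where credentials are filled only when the current page’s scheme and hostname exactly match the entry saved in the vault:

```python
from urllib.parse import urlparse

# Hypothetical vault entry: credentials saved for the legitimate site
vault = {"https://example-bank.com": ("alice", "correct-horse-battery-staple")}

def autofill(current_url: str):
    """Fill credentials only if the page's origin matches a vault entry.

    A minimal sketch of the concept, not Keeper's actual implementation.
    """
    current = urlparse(current_url)
    for stored_url, credentials in vault.items():
        stored = urlparse(stored_url)
        # Exact scheme + hostname comparison: a lookalike domain fails here
        if (current.scheme, current.hostname) == (stored.scheme, stored.hostname):
            return credentials
    return None  # no match: a human might be fooled, but the matcher is not

print(autofill("https://example-bank.com/login"))               # fills credentials
print(autofill("https://example-bank.com.evil.example/login"))  # None
```

Because the matcher compares exact origins rather than how a page looks, a pixel-perfect phishing clone on a different domain never receives your credentials.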
Stay up to date on the latest cybersecurity news
Because AI continues to evolve and cyber attacks change over time, it’s important to stay informed about the most recent cybersecurity news and events. Many blogs and articles are published daily covering the latest data breaches, scamming trends and emerging cyber threats. An easy way to stay in the know is by following several cybersecurity blogs and websites, including Dark Reading, SecurityWeek and the Keeper Security Blog.
Avoid falling victim to AI-enabled phishing attacks
Any kind of phishing attack can result in your private information or money being stolen by a cybercriminal, but AI-based phishing attacks are evolving quickly and becoming even more dangerous. Protect yourself and your privacy by avoiding unsolicited links or attachments, not sharing personal information in unsolicited requests, creating a safe word and using a password manager.
Start your free 30-day trial of Keeper Password Manager today to start protecting your passwords and online accounts from cybercriminals.