
Hackers are weaponizing AI to supercharge their phishing attacks


    This story is part of Dark Horizons, our new series about science and technology’s dimmest corners and outermost limits.

    AUTHORITARIAN governments and criminal gangs are racing to use artificial intelligence to turbocharge cyberattacks that could unlock the doors guarding our most sensitive secrets and our most vital infrastructure — raising the dangers for hospitals, schools, cities and businesses that already regularly fall prey to fraudulent emails from hackers who go on to sow chaos.

    AI is poised to dramatically improve the success of phishing emails, hackers’ most devastatingly effective technique for breaking into people’s computers, email accounts and company servers. Already, more than 90% of cyberattacks begin with phishing messages, which masquerade as legitimate communications from friends, family or business associates to trick people into entering their password on a fake login page or downloading malware that can steal data or spy on them. And by harnessing AI’s text-generation and data-analysis capabilities, hackers will be able to send even more convincing phishing messages, ensnaring more victims and causing untold amounts of damage.

    Experts believe Russia and China are studying malicious uses of AI, including for phishing. And in the criminal underground, talented developers are already building and selling access to custom AI platforms like WormGPT, which can generate convincing phishes, and FraudGPT, which can create fake websites to support phishing campaigns.

    “We fully expect attackers to increasingly harness AI to create their phishing campaigns,” said Phil Hay, senior security research manager at the cyber firm Trustwave.

    This looming revolution in phishing — powered by the same technology that has dazzled users with its ability to write essays and produce fake portraits in seconds — has alarmed some cybersecurity experts, who worry about a coming flood of malicious messages too sophisticated for most people to spot.

    “If AI takes off the way that we're talking about now,” said Bryan Ware, a former head of the federal Cybersecurity and Infrastructure Security Agency’s cyber division, “phishing attempts will become more and more serious, harder and harder to prevent, harder and harder to detect.”


    THE POWER of a phishing message lies in its ability to convince its target to click a link, download a file or provide information. The more suspicious the message looks, the less likely the target is to fall for it. With artificial intelligence at their disposal, experts say, hackers won’t have any problem creating messages that pass this test.

    “You can have it establish a realistic phishing message that will compel somebody to click on a link or open an attachment that they ordinarily wouldn’t,” said Adam Meyers, the head of counter-adversary operations at the cyber firm CrowdStrike.

    One of AI’s biggest advantages is that it can write complete and coherent English sentences. Many hackers aren’t native English speakers, so their messages often contain awkward phrasing, grammatical errors and strange punctuation. These mistakes are the most obvious giveaways that a message is a scam.

    With generative AI platforms like ChatGPT, hackers can easily produce messages in perfect English, devoid of the basic mistakes that Americans are increasingly trained to spot. AI systems “give foreign-language speakers the ability to have English-language content that is indistinguishable from [that of] an English-native speaker,” said Ware, who is now the chief development officer at the security firm ZeroFox.

    Microsoft is already seeing cyber criminals and government hackers “using AI to refine the language that they use in their phishing attacks … and make them somewhat harder to detect,” Tom Burt, the company’s corporate vice president of customer security and trust, told reporters during a recent briefing.

    And AI isn’t just helping with English. Until recently, one of the most common financially motivated phishing scams, known as business email compromise, was virtually nonexistent in Japan, because the most prolific attackers didn’t speak Japanese. Now, thanks to ChatGPT, there’s been “a notable uptick in campaigns targeting Japanese companies in local language, which effectively has opened up new virgin territories for attackers,” said Jennifer Duffourg, a spokesperson for the email security company Proofpoint.

    But it’s not just cleaner text that will make these AI-powered phishes more effective. Hackers can also exploit AI’s ability to parse massive quantities of data and generate messages and recommendations based on its findings.

    By analyzing a database of information about a target — like stolen emails, leaked health records or even publicly searchable data — AI can generate phishing messages designed specifically to fool that target.

    Nick Reese, a former director for emerging technology policy at the Department of Homeland Security, said AI will excel at “going through message data for many years and being able to turn around and say, ‘This is the type of message that you should write. … This is what we need to offer, what we need to say, in order to get this person to click on the link.’”

    During a recent test in partnership with a health care organization, researchers at IBM were able to trick ChatGPT into generating a custom phishing message for that organization. The resulting email was almost as convincing as a human-crafted phish — and it only took five minutes to create, as opposed to the 16 hours that IBM’s team typically needs to manually create such a message. IBM warned that “the emergence of AI in phishing signals a pivotal moment in social engineering attacks.”

    AI can also analyze people’s online messages and learn how to impersonate them in phishing messages sent to their friends, family or professional contacts.

    Programs like ChatGPT can already generate speeches designed to sound like they were written by William Shakespeare, Donald Trump and other famous figures whose verbal and written idiosyncrasies are widely documented. With enough sample material, like press statements or social media posts, an AI program can learn to mimic a corporate executive or politician — or their child or spouse.

    AI could even help hackers plan their attacks by analyzing organizational charts and recommending the best targets — the employees who serve as crucial gatekeepers of information but might not be senior enough to constantly be on guard for scams.

    Not all hackers will need these advanced AI capabilities. Many cybercriminals are “really successful without AI, and they're going to be more successful with the minimal use of AI,” Ware said. But the idea of government spies using AI to trick specific high-value targets as part of espionage campaigns “seems very credible,” he added.

    Microsoft hasn’t yet seen hackers using AI this way, Burt said, but “I’d be surprised if we don’t.”

    Beyond crafting messages and analyzing data, AI can rapidly generate the digital trappings of a convincing ruse, like websites, social media accounts and even synthetic imagery like profile photos.

    Sophisticated hackers already use fake LinkedIn profiles and corporate web pages to make their phishing emails’ false personas seem authentic to targets who try to research them. With the help of AI programs, Ware said, hackers will be able to stand up these Potemkin profiles much more quickly, making their attacks “more believable [and] more credible.”

    By combining AI’s generative and analytical powers, said Reese, who is now the co-founder and managing partner of the AI firm Frontier Foundry, “you actually end up with a really powerful attack, because not only is the email cleaned up grammatically, it's also on a subject that you are likely to react to, and it's backstopped if you actually Google it.”


    IN RECENT years, corporate employees and parents of tech-savvy young people around the world have been trained to look for obvious mistakes and oddities in emails that mark them as phishing attempts. These red flags are one of the few lessons that cybersecurity professionals have successfully ingrained into regular people’s minds. “A lot of people look at phishing messages and laugh, because we've all been conditioned in a certain way to recognize this,” Reese said.

    But better phishing messages could render much of that training obsolete and require a new emphasis on more proactive forms of scrutiny, like calling one’s boss to verbally verify a suspicious payment order or texting a friend to confirm that they really emailed a strange file out of the blue. Those are steps that will likely seem awkward to many, and it will take years for these new recommendations to become familiar habits — and in the meantime, hackers will exploit that gap with smarter phishing messages.

    “What happens when … the thing that we're used to seeing is no longer valid?” Reese said. “If we have to go back and retrain, that’s going to become a huge weakness.”

    People who haven’t heard or internalized the new warnings might start missing phishing messages, believing that they’re staying vigilant while exposing their organizations to massive data breaches or costly ransomware infections. “We rely heavily on people being able to spot the issue, forward the email to the IT quarantine and keep us safe in that way,” Reese said.

    Clever hackers might even barrage a company with obvious phishing emails and slip in a few sophisticated messages enhanced with AI. Employees would spot what they’re trained to spot, while the more convincing AI-produced fakes would glide through unnoticed. “This idea of hiding in the noise … is what I see as the future,” Reese said. “And I think that AI is going to play a big role in enabling that.”


    FACING more creative, AI-powered phishes, companies might start relying more on technological defenses than human vigilance. Email security companies already use algorithms to block suspicious messages based on technical details like whether their senders have verified their identities. The analysis of technical data about emails “will be more informative than the content itself,” Meyers said. And more companies could start mandating the use of multi-factor authentication, which adds an extra security code to the login process. This would prevent hackers from accessing systems using only usernames and passwords stolen through phishing attacks.
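
    For a sense of what those technical checks look like, here is a minimal, illustrative sketch in Python: it reads the Authentication-Results header that receiving mail servers attach (standardized in RFC 8601) to record whether a message passed SPF, DKIM and DMARC sender verification. The sample message and domains are hypothetical, and production gateways weigh many more signals.

```python
# Minimal sketch: inspect the Authentication-Results header (RFC 8601)
# that receiving mail servers attach to record SPF/DKIM/DMARC verdicts.
# The message and domains below are hypothetical.
import re
from email import message_from_string

RAW_EMAIL = """\
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=sender.example.org;
 dkim=pass header.d=sender.example.org;
 dmarc=fail header.from=bank.example.com
From: "Your Bank" <support@bank.example.com>
Subject: Urgent: verify your account
To: victim@example.com

Please click the link below to confirm your password.
"""

def auth_verdicts(raw: str) -> dict:
    """Extract spf/dkim/dmarc results from the Authentication-Results header."""
    header = message_from_string(raw).get("Authentication-Results", "")
    return {
        mech: m.group(1)
        for mech in ("spf", "dkim", "dmarc")
        if (m := re.search(rf"\b{mech}=(\w+)", header))
    }

verdicts = auth_verdicts(RAW_EMAIL)
print(verdicts)  # {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'fail'}

# A DMARC failure means the domain shown in the From line is not vouched
# for by the domain that actually sent the mail, a classic phishing tell,
# however polished the AI-written body text is.
if verdicts.get("dmarc") != "pass":
    print("Flag for quarantine: sender identity not verified")
```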

    “Better written emails and novel lures will only get you so far,” said Trustwave’s Hay.

    Ware thinks the future of phishing prevention will also require “some innovative and new technologies.” He noted that researchers are studying ways to detect images generated by AI, and he said various “digital watermarking” techniques could help users distinguish between real and fake messages from banks, schools and other institutions.

    And there will still be a role for human diligence. Andrew Lohn, a senior fellow at Georgetown University’s Center for Security and Emerging Technology’s CyberAI Project, said he’s optimistic that people will “start looking less at the words of the message” and more at technical things like the sender’s email address or the destinations of the links they’re clicking.
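
    To make that concrete, below is a small, hypothetical Python sketch of the kind of link inspection Lohn describes: it pulls each link out of an HTML email and compares the address the link displays against the address it actually opens. The sample message and domains are invented.

```python
# Minimal sketch: compare where a link in an HTML email says it goes
# (its display text) with where it actually points (its href). The
# sample message and domains are invented.
from html.parser import HTMLParser
from urllib.parse import urlparse

HTML_BODY = """\
<p>Your statement is ready at
<a href="http://login.bank-example.phish.example/reset">
https://www.bank.example.com/statements</a></p>
"""

class LinkAuditor(HTMLParser):
    """Collect (href, display text) pairs from anchor tags."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text)))
            self._href = None

auditor = LinkAuditor()
auditor.feed(HTML_BODY)

for href, text in auditor.links:
    actual_host = urlparse(href).hostname or ""
    shown_host = urlparse(text).hostname or ""  # only set if the text is a URL
    if shown_host and shown_host != actual_host:
        print(f"Suspicious link: displays {shown_host}, goes to {actual_host}")
```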

    AI could also help mitigate its own harms. Powerful AI systems can spot warning signs that people miss. “A lot of the technology that we're talking about here can also be used by the good guys,” Reese said.
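
    As a toy illustration of that defensive use, the following sketch trains a tiny text classifier, assuming the scikit-learn library, on a handful of invented messages and scores a new message's phishing likelihood. Production systems learn from vastly larger datasets and far richer signals than raw text.

```python
# Toy sketch of AI on defense: a tiny phishing classifier built with
# scikit-learn (assumed installed). The training messages are invented;
# real systems learn from millions of samples and far richer signals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Your account has been suspended, verify your password immediately",
    "Urgent wire transfer needed, reply with your credentials today",
    "Click here to claim your prize before the offer expires",
    "Attached is the agenda for Thursday's project meeting",
    "Thanks for the feedback, I've updated the draft as discussed",
    "Reminder: the quarterly report is due at the end of the month",
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF turns each message into word-weight features; logistic
# regression learns which words correlate with the phishing label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

suspect = "Please verify your password or your account will be suspended"
phish_probability = model.predict_proba([suspect])[0][1]
print(f"Estimated phishing probability: {phish_probability:.2f}")
```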


    ONE of the few certainties about AI is that it will proliferate rapidly in the coming years. 

    Every top-tier government cyber army likely has an AI research program dedicated to identifying new offensive and defensive uses of the technology. And as WormGPT and FraudGPT demonstrate, cybercriminals are already hard at work weaponizing AI for profit. 

    So far, cybersecurity firms are seeing scant evidence of AI in hacking campaigns. “Effective operational use remains limited,” wrote researchers at Mandiant, though they said AI could “significantly augment” hackers’ capabilities. Duffourg, the Proofpoint spokesperson, said AI-based phishing campaigns “make up an incredibly small percentage” of total attacks. IBM’s research team also hasn’t seen widespread use.

    It’s hard to predict the exact consequences of the AI revolution for phishing campaigns. Cybercriminals are unlikely to use AI’s advanced analytical features for run-of-the-mill scams. But sophisticated criminal gangs might lean on some of those tools for major ransomware attacks, and government-backed hacking teams will almost certainly adopt these capabilities for important intelligence-gathering missions against well-defended targets.

    “The intent or the desire to have access is there,” said Meyers, but “how they're going to use it is the big question.”

    And the easier it becomes to use AI for cyberattacks, the more likely it is that innovative attackers will come up with previously unimagined uses for the technology. When the computing power necessary to run advanced AI systems “becomes cheap enough,” Ware said, “many things are possible.”

    That uncertainty only underscores the need to prepare new defenses now — before the emerging trend of AI-powered phishing attacks becomes overwhelming.

    “I don't know that it's a crisis today,” Reese said, “but it seems to me that that's where things are going.”
