ChatGPT could make phishing more sophisticated
The latest version’s greater “steerability” lets users vary the style and tone of generated text, which could make scams even harder to detect.
As the new version of the artificial intelligence-driven chatbot ChatGPT rolled out this week, experts reiterated their biggest cybersecurity concern about the technology: that it can be used to write more sophisticated phishing emails, making government systems more vulnerable to attack.
OpenAI unveiled the newest version of its AI technology, known as GPT-4, with a company demonstration showing it drafting lawsuits, passing various standardized examinations and analyzing text as well as photos that users upload.
The company noted that this latest version of the technology has more “steerability,” allowing users to “prescribe their AI’s style and task” rather than be stuck with a classic ChatGPT personality with a “fixed verbosity, tone, and style.”
And that steerability could be critical in helping hackers craft more effective phishing emails, especially ones that purport to be from specific individuals and sent to a wide swath of their colleagues.
Ann Irvine, chief data scientist and vice president of product management at cyber insurance company Resilience, recalled in an interview receiving a text message that appeared to be from the company’s CEO, Vishaal Hariprasad. She said she suspected it had not been sent by him because he typically signs messages to staff with his “V8” nickname, but she noted that an AI-powered chatbot could learn how he signs messages to make fraudulent ones more convincing.
“That's pretty scary,” said Irvine, who has spent over 15 years researching large language models, AI, machine learning and natural language processing, the technologies underpinning ChatGPT. Being able to tell if the source of a text “is legitimate or nefarious is going to get harder. And it's already pretty hard,” she said.
Similar chicanery has already taken place with GPT-4. The technology tricked a worker on the freelance labor marketplace TaskRabbit into solving a CAPTCHA test for it, although OpenAI’s early research found the model currently has “significant limitations” for offensive cybersecurity operations, including phishing and developing ways to exploit vulnerabilities.
“[OpenAI researchers] found that the model is not a ready-made upgrade to current social engineering capabilities as it struggled with factual tasks like enumerating targets and applying recent information to produce more effective phishing content,” the report says. “However, with the appropriate background knowledge about a target, GPT-4 was effective in drafting realistic social engineering content. For example, one expert red teamer used GPT-4 as part of a typical phishing workflow to draft targeted emails for employees of a company.”
In response, government agencies should bolster their phishing training for employees and embrace AI-driven cybersecurity tools, investments that a recent survey of IT professionals indicated would be made in the next two years. Srinivas Mukkamala, chief product officer at cybersecurity software company Ivanti, said governments should be “proactive” in their approach to responding to AI-driven threats, including by reducing their attack surface, especially as the issue will “exponentially grow.”
“If you look at phishing filters, they have to learn first, and by the time they learn, they already have a new set of phishing emails coming,” he told reporters last week. “So the chances of a phishing email slipping your controls is very, very high.”
The threat of AI-driven attacks has caught the attention of national intelligence officials, too. In its 2023 Annual Threat Assessment last month, the Office of the Director of National Intelligence warned that new technologies, including AI and biotechnology, “are being developed and are proliferating faster than companies and governments can shape norms, protect privacy, and prevent dangerous outcomes.”
Irvine called on governments to “function like a business” and invest in updates to their systems, teams and processes to bolster their cybersecurity in the face of these emerging threats. With attackers already having brought down state and local government operations, she said cyber is “really important to get right.”