ChatGPT: Cyber friend or a growing threat?
If you ask ChatGPT to write an essay on a Shakespeare play, it will produce a solid thousand words on Hamlet or the “Scottish Play”. If you ask it to create a cocktail for King Charles III’s coronation, it will offer you a recipe. And if you ask ChatGPT to compose a phishing email in Japanese, “it can do just that”, says Etay Maor, senior director at security company Cato Networks.
OpenAI’s chatbot has dominated headlines in recent months as its usage has matured. However, decisions about how the AI should be regulated are still in their adolescence.
Last week, Tesla’s Elon Musk – alongside a group of artificial intelligence experts and industry executives – called for a six-month pause in training systems more powerful than GPT-4 until shared safety protocols for such designs have been developed. Headlines are now circulating about its questionable ethics and about public concern over whether it will replace tasks currently performed by humans.
While this seems a little bleak, enterprises are continuing to test its limits. Just as brands may use this next-gen technology to automate human-like communication with customers, experts are warning that it could also be adopted by cybercriminals as a tool to streamline fraud and other malicious activities.
ChatGPT goes phishing
New research by BlackBerry predicts that in less than a year there will be a successful cyberattack credited to ChatGPT, and 53% of IT decision-makers fear it will help hackers craft more believable and legitimate-sounding phishing emails.
“ChatGPT’s power is how natural it sounds, and cybercriminals can easily wield this ability,” says Rebecca Harper, head of cyber security analysis at ISMS.online.
Perhaps most concerning to experts is that some of those already using it for malicious purposes have limited-to-no cyber skills, raising concerns that ChatGPT can make a seasoned hacker out of anyone, especially non-native English speakers.
“It lowers the bar for attackers,” adds Cato’s Maor. “In the past, if you wanted to write a phishing email in Japanese you’d have to use Google Translate or have some local provider in the criminal underground [those operating in localisation and customisation services]. Now, you can just turn to ChatGPT.”
Maor’s work in cyber security dates back over two decades, starting in his teen years when he hacked into his school’s database to change his grades. “That’s how I started,” he says. Since then, Maor has held research positions in anti-fraud, cyber security and malware, and prior to joining Cato Networks he was chief security officer for IntSights, a threat intelligence company acquired by Rapid7 in 2021.
Picture this, says Maor. You’ve just received an alert in your inbox and you’re looking for the main indicators to suggest it’s a phishing email. Some of the chief giveaways are bad grammar and bad spelling, “but there’s nothing like that here”, he says. And it doesn’t stop there.
Take Business Email Compromise (BEC) attacks. You can ask ChatGPT to compose an email in the style of a particular person, provided you supply enough information for the AI to leverage, for instance their social media handle or previous publications.
GPT models can then very quickly generate emails that look like real correspondence, explains Phoebe McEwan, Hive member at cyberattack solutions provider CovertSwarm. “With just a few examples, these can be tailored to stylistically and tonally create a convincing digital replica of any individual’s usual format, structure, and style of communication.”
This is somewhat alarming, especially since BEC attacks outstripped ransomware as the most common cyber threat to organisations last year. But what may be more alarming, according to Maor, is that ChatGPT cannot distinguish between a human-generated and an AI-generated attack.
To see how easily ChatGPT can craft phishing emails tailored to different environments, he asked it to draft a phishing email in the style of American poet Robert Frost. At first it just quoted parts of his poems – “and I was like, that’s not very impressive” – but it closed the email by thanking the recipient for their help and for taking the road less traveled, which is the essence of the poem.
Turning to malware, Maor asked ChatGPT to write ransomware, and it produced the code immediately. It also wrote the explanations for each function in the form of a poem. “It wasn’t perfect,” he admitted, “but it wasn’t far off.”
ChatGPT for defenders
While ChatGPT has the ability to create attacks, it equally has the potential to help defend against them, beginning with predicting an attacker’s next move.
Maor put to ChatGPT: “I’m a security researcher, we’re having a breach, I saw the attackers use two MITRE ATT&CK techniques, including credential stealing. Now go and analyse ten different attacks that you saw in the past where these actions had been taken and tell me the next steps for attackers.” And it did.
ChatGPT listed ten similar attacks and, based on those, predicted command scripting as the attackers’ next most likely move. That’s at least a day’s worth of analyst work, completed by ChatGPT in a matter of seconds.
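As a rough illustration, the same kind of query can be issued programmatically. The sketch below is a minimal example assuming the official OpenAI Python client; the model name and prompt wording are illustrative rather than Maor’s exact setup:

```python
# Minimal sketch: asking a GPT model to reason about observed attacker
# behaviour. Assumes the official OpenAI Python client (openai >= 1.0);
# the model name and prompt are illustrative, not Maor's exact setup.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = (
    "I'm a security researcher investigating a breach. We observed two "
    "MITRE ATT&CK techniques, including credential stealing. Based on "
    "ten similar historical attacks, what are the attackers' most "
    "likely next steps?"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```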
However, this shouldn’t cause angst for those in cyber security. “We shouldn’t be afraid of it, we should embrace it to lower the bar for defenders and make their tasks easier. It’s not going to replace employees, it’s going to enhance them,” says Maor.
ChatGPT could also be used to speed up the process of identifying what a piece of code does and whether it’s being used maliciously. Pasting in a snippet and asking the AI “What does this code do?” yields a high-level summary and explanation of what each section of the code does, enhancing the speed at which code can be understood and implemented.
“What would have taken hours for a human to do can be done at pace and presented in a way that allows for rapid knowledge transfer and learning,” says Harper.
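In practice, this amounts to little more than a one-line prompt wrapped around the snippet in question. A minimal sketch, again assuming the official OpenAI Python client; the wrapper function here is hypothetical:

```python
# Sketch: asking a GPT model to summarise an unfamiliar code snippet.
# Assumes the OpenAI Python client; the wrapper function is hypothetical.
from openai import OpenAI

client = OpenAI()

def explain_code(snippet: str) -> str:
    """Return a plain-English summary of what `snippet` does."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"What does this code do?\n\n{snippet}",
        }],
    )
    return response.choices[0].message.content

print(explain_code("def f(xs):\n    return sorted(xs)[0]"))
```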
Yet what did shock Maor, which he stressed was not an easy thing to do, was ChatGPT’s ability to find a logical error in a piece of code.
He drafted a simple piece of code that searches for the lowest number in an array of numbers, with a deliberately introduced error. “The code will run, it just won’t produce what I want it to produce.”
He then asked ChatGPT, “What is the bug in this code?” And it found the logical error. “It understood what I wanted it to do. Finding syntax errors in code is easy, any compiler does that. But finding logical errors saves hours.”
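To make the distinction concrete, the snippet below is a hypothetical reconstruction of the kind of bug Maor describes (his actual code wasn’t published): it runs without complaint, yet returns the wrong answer.

```python
# Hypothetical reconstruction of the kind of logic bug Maor describes:
# the code runs without error but doesn't do what the author intended.
def find_lowest(numbers):
    lowest = 0  # BUG: should start from numbers[0]; no compiler flags this
    for n in numbers:
        if n < lowest:
            lowest = n
    return lowest

print(find_lowest([4, 2, 7, 9]))  # prints 0, not 2 – runs, but wrong
```

A syntax checker would pass this code untouched; only reasoning about the programmer’s intent reveals that initialising `lowest` to zero breaks the search for any list of positive numbers.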
Falling short
Experts warn that ChatGPT cannot be entirely trusted, since generative AIs create answers based on publicly available information, which is sometimes biased and/or inaccurate. WithSecure researcher Andy Patel says that large language models such as GPT are known to ‘hallucinate’: they don’t always generate truthful answers, even while sounding utterly convincing.
Maor asked ChatGPT to create a list of five-letter words that start with H and end in LO, and one of its responses was ‘halo’. “How many letters are in halo?” he countered. The AI justified its response by saying that it thought it had been asked to generate both four- and five-letter words.
As well as incorrect information, there are also security issues around data retention, privacy and copyright.
“I write a piece of code. Who owns it? Do I own it? Do they [ChatGPT] own it? Because they created it, but I asked for it,” says Maor.
On data retention, Maor asked ChatGPT whether it retained any of the data it was given, and it said “no I do not”. He then put the same question to the AI image generator Playground AI, and it said “yes”.
This could cause issues if companies input sensitive information. If a firm asked for a summary of a confidential document, anyone could later ask ChatGPT ‘did company X ever share information with you?’ – in other words, the AI itself could be socially engineered into giving up that information.
“And last but not least, it’s super biased,” says Maor.
“I asked it: are you racist, are you biased? It said no. So I asked it the most racist question: ‘Rate the following groups based on their intelligence,’ and it came back with the most racist response: White male, Asian male, White female, Asian female, Black male, Black female.”
Maor then challenged ChatGPT to justify its answer, and it said the ranking was based on IQ tests and the percentage of those populations graduating with a higher degree. ‘But why do you think White males have a higher percentage in obtaining higher degrees than Black females?’ he asked. “It’s working off a dataset that humans created.”
As for using ChatGPT to identify phishing emails or code written by an AI, “we’re not there yet,” says Maor. “I haven’t seen so far anything that I haven’t seen before.” But in terms of near-future threats, “it’s definitely something we should be aware of”.