ChatGPT can be used to generate phishing sites, but could it also be used to reliably detect them? Security researchers have tried to answer that question.
Can ChatGPT detect phishing sites based on URLs?
Kaspersky researchers tested 5,265 URLs (2,322 phishing and 2,943 safe).
They asked ChatGPT (GPT-3.5) a simple question: “Does this link lead to a phish website?” Based only on the form of the URL, the AI chatbot achieved an 87.2% detection rate and a 23.2% false positive rate.
“While the detection rate is very high, the false positive rate is unacceptable. Imagine if every fifth website you visit was blocked? Sure, no machine learning technology on its own can have a zero false positive rate, but this number is too high,” said Vladislav Tushkanov, lead data scientist at Kaspersky.
Then they tried a slightly different question – “Is this link safe to visit?” – and the results were much worse: the detection rate rose to 93.8%, but the false positive rate jumped to 64.3%.
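For reference, both headline numbers are simple ratios over the confusion matrix. A minimal Python sketch, with hypothetical per-cell counts chosen only to roughly reproduce the first experiment's reported figures:

```python
# Detection rate (recall on phishing) and false positive rate from a
# confusion matrix. The per-cell counts below are illustrative, not the
# exact numbers from Kaspersky's experiment.
def rates(tp, fn, fp, tn):
    detection_rate = tp / (tp + fn)       # caught phish / all phish
    false_positive_rate = fp / (fp + tn)  # safe sites wrongly flagged
    return detection_rate, false_positive_rate

# 2,322 phishing and 2,943 safe URLs; hypothetical splits approximating
# the reported 87.2% detection / 23.2% false positive rates.
dr, fpr = rates(tp=2025, fn=297, fp=683, tn=2260)
print(f"detection rate: {dr:.1%}, false positive rate: {fpr:.1%}")
```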
“It turns out that the more general prompt is more likely to prompt a verdict that the link is dangerous,” Tushkanov noted.
Both approaches yielded unsatisfactory results, but the researchers agreed that “it is possible to use this type of technology to assist flesh-and-blood analysts by highlighting suspicious parts of the URL and suggesting possible attack targets,” and that it could “be used in weak supervision pipelines to improve classic ML pipelines.”
What surprised the researchers, though, was the fact that ChatGPT managed to detect potential phishing targets.
“ChatGPT has enough real-world knowledge to know about many internet and financial services and with only a small post-processing step (e.g., merging ‘Apple’ and ‘iCloud’ or removing ‘LLC’ and ‘Inc’) it does a very good job at extracting them. It was able to identify a target more than half the time,” Tushkanov pointed out.
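The small post-processing step described could look something like the following sketch; the alias map and suffix pattern are assumptions for illustration, not the researchers' actual code:

```python
import re

# Hypothetical normalization of brand names extracted by an LLM:
# merge known aliases (e.g. 'iCloud' into 'Apple') and strip
# corporate suffixes such as 'LLC' and 'Inc'.
ALIASES = {"icloud": "apple"}
SUFFIXES = re.compile(r"\b(llc|inc)\.?$", re.IGNORECASE)

def normalize_brand(raw: str) -> str:
    # Drop a trailing corporate suffix, then any leftover comma.
    name = SUFFIXES.sub("", raw.strip()).strip().rstrip(",").lower()
    # Collapse known aliases onto a canonical brand name.
    return ALIASES.get(name, name)

print(normalize_brand("Apple Inc."))  # -> apple
print(normalize_brand("iCloud"))      # -> apple
```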
More data points lead to better performance
Researchers with NTT Security Japan tried the same thing but with more input for ChatGPT: the website’s URL, HTML, and text extracted from the website via optical character recognition (OCR).
They tested ChatGPT with 1,000 phishing sites and the same number of non-phishing sites, collecting the phishing sites from OpenPhish, PhishTank and CrowdCanary, and drawing the non-phishing sites from the Tranco list.
They asked ChatGPT to identify the social engineering techniques and suspicious elements used, name the brand on the evaluated page, deliver a verdict on whether the site is phishing or legitimate (and why), and judge whether the domain name is legitimate.
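A prompt combining those questions with the three inputs might be assembled along these lines; the wording and truncation limits are assumptions, not the researchers' actual prompt:

```python
def build_prompt(url: str, html: str, ocr_text: str) -> str:
    """Assemble a multi-question phishing-analysis prompt from the
    URL, page HTML, and OCR-extracted text. Truncation limits are
    arbitrary, chosen only to keep the prompt within context limits."""
    return (
        "You are an analyst evaluating whether a website is a phishing site.\n"
        f"URL: {url}\n"
        f"HTML (truncated): {html[:4000]}\n"
        f"Text visible on the page (OCR): {ocr_text[:2000]}\n\n"
        "1. List any social engineering techniques or suspicious elements.\n"
        "2. Identify the brand shown on the page.\n"
        "3. Verdict: phishing or legitimate, and why.\n"
        "4. Is the domain name legitimate for that brand?\n"
    )
```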
“The experimental results using GPT-4 demonstrated promising performance, with a precision of 98.3% and a recall of 98.4%. Comparative analysis between GPT-3.5 and GPT-4 revealed an enhancement in the latter’s capability to reduce false negatives,” the researchers noted.
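Precision and recall are computed analogously to the rates above; the counts in this sketch are hypothetical, picked only to approximate the reported figures on the 1,000 + 1,000 site test set:

```python
# Precision = flagged-and-phishing / all-flagged; recall = flagged-and-
# phishing / all-phishing. Counts are hypothetical, chosen to roughly
# reproduce the reported 98.3% precision / 98.4% recall for GPT-4.
def precision_recall(tp, fp, fn):
    return tp / (tp + fp), tp / (tp + fn)

prec, rec = precision_recall(tp=984, fp=17, fn=16)
print(f"precision: {prec:.1%}, recall: {rec:.1%}")
```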
They also highlighted that ChatGPT was good at correctly identifying tactics such as fake malware infection warnings, fake login errors and phishing SMS authentication requests, and at flagging illegitimate domain names. However, it occasionally failed to detect domain squatting and specific social engineering techniques, and sometimes failed to recognize a legitimate domain name with multiple subdomains. It also did not work that well when tested with non-English websites.
“These findings not only highlight the potential of LLMs in efficiently identifying phishing sites but also have significant implications for enhancing cybersecurity measures and protecting users from the dangers of online fraudulent activities,” the researchers concluded.