LinkedIn has introduced three new features to fight fake profiles and malicious use of the platform, including a new method to confirm whether a profile is authentic by showing whether it has a verified work email or phone number.
Over the past couple of years, LinkedIn has been heavily abused by threat actors who initiate communication with targets to distribute malware, perform cyberespionage, steal credentials, or conduct financial fraud.
This abuse has been demonstrated time and time again by the North Korean Lazarus hacking group, which commonly approaches targets over LinkedIn with fake job offers.
These fake job offers instead lead to the installation of malware that allows the threat actors to gain access to a target's device, and potentially the corporate network, or to conduct multi-million-dollar cryptocurrency heists.
Google has also seen Russian SVR hackers targeting LinkedIn users with Safari zero-day vulnerabilities, and other researchers have seen groups targeting LinkedIn users to steal Facebook advertiser accounts.
More recently, Brian Krebs has been reporting on the massive number of fake LinkedIn profiles that are believed to be used for scams and other malicious purposes.
Fighting fake accounts
Today, LinkedIn announced that it has begun to display more information about accounts to verify their authenticity, actively hunt for fakes using AI, and warn users when they receive suspicious messages.
The first step to fighting fake accounts on LinkedIn is a new "About this profile" section that shows users information such as when the profile was created, whether the holder has verified a phone number, and whether they have linked a work email.
If a cybercriminal were to use a fake or impersonated account to approach a target on LinkedIn, they would have to invest an unrealistic amount of time maintaining and operating the account long enough for it to have a believable creation date.
Furthermore, without access to a corporate email address at the impersonated company, threat actors would find it challenging to validate their accounts as authentic.
The second step is using AI to catch accounts that use AI-generated images as profile photos to create a false sense of authenticity, a clear sign of fraudulent activity.
“Our new deep-learning-based model proactively checks profile photo uploads to determine if the image is AI-generated using cutting-edge technology designed to detect subtle image artifacts associated with the AI-based synthetic image generation process without performing facial recognition or biometric analyses.” – LinkedIn
Lastly, LinkedIn now displays warnings when a chat participant proposes to take communications outside the platform.
Sophisticated actors have employed this trick on various occasions, approaching victims on social media platforms, building a rapport, and then proposing to move the conversation to a "safer platform."
In most of these cases, the victims are convinced to download an IM clone, which installs a modified version of a communication app along with spyware.
The FBI also recently warned of “pig butchering” scams where threat actors contact people (the “Pigs”) on social media to build a fake relationship and then use it to steal cryptocurrency.
Only time will tell if these safety features will prove adequate to stop bad actors from abusing LinkedIn, but the targeted measures the platform introduced should make hackers’ operations much harder.