
Not just Twitter. LinkedIn has a fake account problem it’s trying to fix


Anyone who depends on LinkedIn to search for jobs, find business partners or pursue other opportunities is probably aware that the business social media site has had issues with fake profiles. While that is no different from other social media platforms, including Twitter and Facebook, it presents a distinct set of problems for people who use LinkedIn for professional purposes.

Between January 1 and June 30, more than 21 million fake accounts were detected and removed from LinkedIn, according to the company’s community report. While 95.3% of those fake accounts were stopped at registration by automated defenses, according to the company, there was a nearly 28% increase in fake accounts caught compared to the previous six-month period. LinkedIn says it currently has more than 875 million members on its platform.
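
For a rough sense of scale, the short sketch below works through those reported figures. It assumes the 95.3% share applies to the full 21 million removals, which the report gives only approximately, so treat the results as order-of-magnitude numbers.

```python
# Back-of-envelope breakdown of the figures reported above.
# Assumption: the 95.3% share applies to the full 21 million removals.
total_removed = 21_000_000               # fake accounts removed, Jan 1 - Jun 30
stopped_at_registration = total_removed * 0.953
caught_later = total_removed - stopped_at_registration

print(f"Stopped at registration: ~{stopped_at_registration:,.0f}")  # ~20,013,000
print(f"Caught after registration: ~{caught_later:,.0f}")           # ~987,000
```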

While the Microsoft-owned professional social media platform has rolled out new features in recent months to help users better determine whether a profile contacting them is real or fake, cybersecurity experts say there are several things users can do to protect themselves.

Creators of fake LinkedIn profiles sometimes try to drive engagement through content that links to malicious sites, said Mike Clifton, executive vice president and chief information and digital officer at Alorica, a global customer service outsourcing firm.

“For example, we see those that revolve around posts and content promoting a work event, such as a webinar, that uses real photos and people’s real information to legitimize the information and get others to register, often on a fake third-party Web site,” Clifton said.

How to avoid getting duped by fraudulent profiles

Cybercriminals often rely on a human touch to give LinkedIn users the impression that the fake profile belongs to someone they know, or is two degrees removed from someone they know. “This has been going on for years, and at this point can still evade even sophisticated fraud detectors,” Clifton said. “Like we remind our employees and customers, it’s important to stay vigilant and engage cautiously on social networks to protect your information.”

Recruiters who rely heavily on LinkedIn to search for prospective employees can find fake profiles especially troublesome, said Akif Khan, vice president and analyst at research firm Gartner.

“In addition, in other areas of fraud management — for example, when suspicious ecommerce transactions are being manually reviewed — agents will look across social media sites including LinkedIn to try and see if [a] person has a credible digital footprint which would suggest that they are a real-person rather than a fake identity,” Khan said. 

For these reasons it can serve the purposes of bad actors to have fake LinkedIn profiles, Khan said.

Gartner is seeing the problem of phony accounts across all social media platforms. “Bad actors are trying to craft fake identities and make them look real by leaving a plausible-looking digital footprint across different platforms,” Khan said.

It’s more likely that fake profiles are set up manually, Khan said. However, when bad actors are creating large numbers of fake profiles, which can be used to abuse advertising processes or to sell followers and likes in bulk, they will use bots to automate the account-creation process.

The challenge for LinkedIn users is that profiles on social media platforms are easy to create and are typically not verified in any way. LinkedIn has asked users who encounter content on the platform that looks like it could be fake to report it to the company. Users should be on the lookout for profiles with abnormal profile images or incomplete work histories, along with other indicators such as inconsistencies between the profile photo and the listed education or experience.
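
LinkedIn has not published the exact signals it weighs, but a user-side checklist along those lines could be sketched as follows. The field names, rules and thresholds here are illustrative assumptions, not LinkedIn signals or part of any LinkedIn API.

```python
# Illustrative red-flag checklist for manually reviewing a profile.
# The fields and rules are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class ProfileSnapshot:
    has_profile_photo: bool
    photo_looks_stock_or_generated: bool    # reviewer's own judgment
    work_history_entries: int
    photo_matches_claimed_background: bool  # e.g. apparent age vs. graduation year
    connections: int

def red_flags(p: ProfileSnapshot) -> list[str]:
    flags = []
    if not p.has_profile_photo or p.photo_looks_stock_or_generated:
        flags.append("abnormal or suspicious profile image")
    if p.work_history_entries == 0:
        flags.append("incomplete work history")
    if not p.photo_matches_claimed_background:
        flags.append("inconsistency between photo and claimed background")
    if p.connections < 10:
        flags.append("very small network")
    return flags

suspect = ProfileSnapshot(
    has_profile_photo=True,
    photo_looks_stock_or_generated=True,
    work_history_entries=0,
    photo_matches_claimed_background=False,
    connections=4,
)
print(red_flags(suspect))
```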

“Always seek corroboration from other sources if you’re looking at an account and are making decisions based on what you see,” Khan said. “The bigger issue here is for the platforms themselves. They need to ensure that they have appropriate measures in place to detect and prevent automated account creation, particularly at large scale.”

What LinkedIn is doing to detect fakes and bots

Tools for detection do exist, but using them is not an exact science. “Verifying the identity of a user when creating an account would be another effective way to make it more difficult to set up fake accounts, but such identity proofing would have an impact in terms of cost and user experience,” Khan said. “So these platforms are trying to strike a balance in terms of the integrity of accounts and not putting users off creating accounts,” he said.

LinkedIn is taking steps to address the fake accounts problem.

The site is using technology such as artificial intelligence along with teams of experts to remove policy-violating content that it detects before the content goes live. The vast majority of detected fake accounts are caught by automated defenses such as AI, according to a blog post from Oscar Rodriguez, vice president of product management at LinkedIn.

LinkedIn declined to comment further.

The company is also collaborating with peer companies, policymakers, law enforcement and government agencies in efforts to prevent fraudulent activity on the site.

In its latest effort to stop fake accounts, LinkedIn rolled out new features in October to help users make more informed decisions about the members they are interacting with, and enhanced the automated systems that keep inauthentic profiles and activity off the platform.

An “about this profile” feature shows users when a profile was created and last updated, along with whether the member has a verified phone number and/or work email associated with the account. The goal is to help users decide whether to accept a connection request or reply to a message.
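
As an illustration of how those signals might feed into such a decision, here is a minimal rule-of-thumb sketch. The fields and thresholds are assumptions, not LinkedIn’s actual logic.

```python
# Illustrative use of "about this profile" signals when weighing a
# connection request. Fields and thresholds are assumptions only.
from datetime import date, timedelta

def connection_caution_level(profile_created: date,
                             has_verified_phone: bool,
                             has_verified_work_email: bool) -> str:
    account_age_days = (date.today() - profile_created).days
    verified = has_verified_phone or has_verified_work_email
    if account_age_days < 30 and not verified:
        return "high caution: brand-new, unverified profile"
    if not verified:
        return "moderate caution: no verified contact details"
    return "lower risk: established and/or verified profile"

print(connection_caution_level(date.today() - timedelta(days=10), False, False))
# -> high caution: brand-new, unverified profile
```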

LinkedIn says rapid advances in AI-based synthetic image generation technology led it to build a deep learning model to better catch profiles made with AI. AI-based image generators can create an unlimited number of unique, high-quality profile photos that do not correspond to real people, Rodriguez wrote in the blog post, and phony accounts sometimes use these convincing, AI-generated profile photos to make a profile appear more authentic.

The deep-learning model proactively checks profile photo uploads to determine if an image is AI-generated, using technology designed to detect subtle image artifacts associated with the AI-based synthetic image generation process — without performing facial recognition or biometric analyses, Rodriguez wrote.
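
LinkedIn has not published the model itself, so the sketch below only illustrates the general approach the post describes: a binary classifier fine-tuned to separate authentic photos from synthetic ones. The backbone, preprocessing and threshold are assumptions, and the example uses PyTorch and torchvision rather than anything LinkedIn has confirmed.

```python
# Minimal sketch of a real-vs-generated photo classifier, in the spirit of
# the approach described above. This is NOT LinkedIn's model: the backbone,
# training data and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models, transforms

def build_detector() -> nn.Module:
    # Start from a generic ImageNet backbone and replace the head with a
    # single logit: the probability that an image is AI-generated.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Linear(backbone.fc.in_features, 1)
    return backbone

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def is_probably_generated(model: nn.Module, image, threshold: float = 0.9) -> bool:
    # `image` is a PIL image of an uploaded profile photo.
    model.eval()
    with torch.no_grad():
        logit = model(preprocess(image).unsqueeze(0))
    return torch.sigmoid(logit).item() >= threshold

# Training would fine-tune this head on authentic photos versus synthetic
# ones (e.g. GAN or diffusion outputs), so the network learns the subtle
# generation artifacts mentioned in the post, without performing any
# facial recognition.
```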

The model helps increase the effectiveness of LinkedIn’s automated anti-abuse defenses to help detect and remove fake accounts before they can reach members.

The company also added a warning to some LinkedIn messages that include high-risk content that could impact user security. For example, users might be warned about messages that ask them to take conversations to other platforms, because that might be a sign of a scam.
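
A heuristic along those lines could be as simple as the sketch below. The patterns and warning text are assumptions for illustration, not LinkedIn’s actual rules.

```python
# Illustrative heuristic for flagging messages that ask to move the
# conversation off-platform, one of the scam signals mentioned above.
import re

OFF_PLATFORM_PATTERNS = [
    r"\b(whatsapp|telegram|signal|wechat)\b",
    r"\b(text|message)\s+me\s+at\b",
    r"\bcontinue\s+(this\s+)?(conversation|chat)\s+on\b",
]

def looks_high_risk(message: str) -> bool:
    text = message.lower()
    return any(re.search(pattern, text) for pattern in OFF_PLATFORM_PATTERNS)

if looks_high_risk("Great profile! Let's continue this chat on Telegram."):
    print("Warning: this message asks to move the conversation to another "
          "platform, which can be a sign of a scam.")
```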
