Yang, one of the creators of Botometer, said he hadn’t heard from Musk’s team and was surprised to see the world’s richest man had used his tool.
“To be honest, you know, Elon Musk is really rich, right? I had assumed he would spend money on hiring people to build some sophisticated tool or methods by himself,” Yang told CNN Business Monday. Instead, Musk opted to use the Indiana University team’s free, publicly available tool.
Twitter has repeatedly argued that bots are not germane to the completion of the deal, since Musk signed a binding contract that does not include any bot-related carve-outs. Still, the company hit back in a response to Musk's answer, noting that Botometer uses a different method than the company does to classify accounts and "earlier this year designated Musk himself as highly likely to be a bot."
Botometer does indeed look at the issue somewhat differently, according to Yang. The tool does not show whether an account is fake or spam, nor does it attempt to make any other judgment about the account's intent. Instead, it shows how likely an account is to be automated — or managed using software — based on signals such as the time of day it tweets or whether it has declared itself to be a bot. "There's overlap of course, but they're not exactly the same thing," he said.
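For a sense of how the tool is typically queried, here is a minimal sketch using the team's publicly documented botometer-python client. The placeholder credentials, the example handle and the exact response field names are assumptions that may vary by API version.

```python
# Minimal sketch of querying Botometer via the botometer-python client.
# Credentials and the handle are placeholders; response field names are
# assumptions based on the project's public docs and may differ by version.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"               # placeholder
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",         # placeholder Twitter API keys
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# Score a single account by screen name.
result = bom.check_account("@example_handle")

# The response includes 0-to-5 "display" scores; higher means more bot-like.
print(result["display_scores"]["universal"]["overall"])
```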
The distinction highlights what could become a key challenge in the legal fight between Musk and Twitter: There is no singular, clear definition of a “bot.” Some bots are harmless (and in certain cases, even helpful) automated accounts, such as those that tweet out weather or news updates. In other cases, a human might be behind a fake or scam account, making it hard to catch with automated systems designed to weed out bots.
Botometer provides a score from zero to five that indicates whether an account appears "human-like" or "bot-like." Contrary to Twitter's characterization, the tool has, at least since June, rated Musk's account at around a one out of five on the bot scale — indicating there's almost certainly a human behind the account. It shows, for example, that Musk tweets fairly consistently across all days of the week and that the average hours of his tweeting mirror a human schedule. (A bot, by contrast, might tweet throughout the night, during hours when most humans are sleeping.)
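To make the hour-of-day idea concrete, here is a hypothetical, simplified illustration — not Botometer's actual model — of one such automation signal: the share of an account's recent tweets posted during typical sleeping hours.

```python
from datetime import datetime

# Hypothetical illustration only, not Botometer's actual method: treat a high
# share of overnight tweets (in the account's local time) as one weak signal
# of automation. Real systems combine many such features.
OVERNIGHT_HOURS = set(range(1, 6))  # 1 a.m. through 5 a.m. local time

def overnight_tweet_fraction(tweet_times: list[datetime]) -> float:
    """Return the share of tweets posted during typical sleeping hours."""
    if not tweet_times:
        return 0.0
    overnight = sum(1 for t in tweet_times if t.hour in OVERNIGHT_HOURS)
    return overnight / len(tweet_times)

# Example: an account tweeting around the clock scores higher on this signal
# than one whose activity mirrors a normal human schedule.
sample = [datetime(2022, 8, 1, h) for h in (2, 3, 4, 9, 14, 22)]
print(f"{overnight_tweet_fraction(sample):.2f}")  # 0.50
```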
But in many cases, Yang said, the line between bot and human can be blurry. For example, a human could log in and tweet from what is normally an automated account. With that in mind, the tool isn't necessarily useful for definitively classifying accounts.
“It’s tempting to set some arbitrary threshold score and consider everything above that number a bot and everything below a human, but we do not recommend this approach,” according to an explanation on the Botometer site. “Binary classification of accounts using two classes is problematic because few accounts are completely automated.”
What's more, Twitter's firehose, the full real-time stream of public tweets, only includes accounts that actually tweet, so an analysis based on it would leave out bot accounts whose purpose is, for example, simply to boost the follower counts of other users — a form of inauthentic behavior that doesn't involve tweeting, Yang said.
Musk's legal team did not immediately respond to a request for comment on this story. But Musk's answer does acknowledge that his analysis was "constrained" by the limited data Twitter provided and the limited time he had to conduct the evaluation. The filing added that he continues to seek additional data from Twitter.
There is private data from Twitter — such as IP addresses and how much time a user spends looking at the app on their devices — that could make it easier to estimate whether an account is a bot, according to Yang. However, Twitter claims that it’s already provided more than enough information to Musk. It may be hesitant to hand over such data, which could be a competitive risk or undermine user privacy, to a billionaire who now says he no longer wants to buy the company and has even hinted at starting a rival platform.