
Twitter Bots Use Likes, RTs for Intimidation

I awoke this morning to find that my Twitter account (@briankrebs) had attracted almost 12,000 new followers overnight. Then I noticed I’d gained almost as many followers as the number of retweets (RTs) earned by a tweet I published on Tuesday. The tweet noted that every time I tweet something related to Russian President Vladimir Putin, I get a predictable stream of replies in support of President Trump, even in cases when neither Trump nor the 2016 U.S. presidential campaign was mentioned.

This tweet about Putin generated more than 12,000 retweets and likes in a few hours.

Upon further examination, it appears that almost all of my new followers were compliments of a social media botnet that is being used to amplify fake news and to intimidate journalists, activists and researchers. The botnet or botnets appear to be targeting people who are exposing the extent to which sock puppet and bot accounts on social media platforms can be used to influence public opinion.

After tweeting about my new bounty of suspicious-looking Twitter friends, I learned from my legitimate followers that @briankrebs wasn’t alone: several journalists and nonprofit groups that have recently written about bot-like activity on Twitter experienced something similar over the past few days.

These tweet and follow storms seem capable of tripping some kind of mechanism at Twitter that seeks to detect when accounts are suspected of artificially beefing up their follower counts by purchasing followers (for more on that dodgy industry, check out this post).

Earlier today, Daily Beast cybersecurity reporter Joseph Cox had his Twitter account suspended temporarily after the account gained hundreds of bot followers over a brief period on Tuesday. This likely was the goal of the campaign against my account as well.

Cox observed that the same likely bot accounts that followed him also followed me and a short list of other users, in the same order.

“Right after my Daily Beast story about suspicious activity by pro-Kremlin bots went live, my own account came under attack,” Cox wrote.

Let that sink in for a moment: A huge collection of botted accounts — the vast majority of which should be easily detectable as such — may be able to abuse Twitter’s anti-abuse tools to temporarily shutter the accounts of real people suspected of being bots!

Overnight between Aug. 28 and 29, a large Twitter botnet took aim at the account for the Digital Forensic Research Lab, a project run by the Atlantic Council, a political think-tank based in Washington, D.C. In a post about the incident, DFRLab said the attack used fake accounts to impersonate and attack its members.

Those personal attacks, which included tweets and images lamenting the supposed death of DFRLab senior fellow Ben Nimmo, were then amplified and retweeted by tens of thousands of apparently automated accounts, according to a blog post DFRLab published today.

Suspecting that DFRLab was now being followed by many more botted accounts that might retweet or otherwise react to any further tweets mentioning bot attacks, Nimmo cleverly composed another tweet about the bot attack — only this time CC’ing the @Twitter and @Twittersupport accounts. Sure enough, that sly tweet was retweeted by bots more than 73,000 times before the tweet storm died down.

“We considered that the bots had probably been programmed to react to a relatively simple set of triggers, most likely the words ‘bot attack’ and the @DFRLab handle,” Nimmo wrote. “To test the hypothesis, we posted a tweet mentioning the same words, and were retweeted over 500 times in nine minutes — something which, admittedly, does not occur regularly with our human followers.” Read more about the DFRLab episode here.

This week’s Twitter bot drama follows similar attacks on public interest groups earlier this month. On Aug. 19, the award-winning investigative journalism site ProPublica.org published the story, Leading Tech Companies Help Extremist Sites Monetize Hate.

On the morning of Tuesday, Aug. 22, several ProPublica reporters began receiving email bombs: email list subscription attacks that can inundate a targeted inbox with dozens or even hundreds of subscription confirmation requests per minute. These attacks are designed to deluge the victim’s inbox with so many confirmation requests that it becomes extremely time-consuming to fish out the legitimate messages amid the dross.
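
To make the triage problem concrete, here is a minimal sketch (not ProPublica’s actual tooling) that splits likely subscription confirmations out of a local mbox export; the subject-line keywords and the inbox.mbox filename are assumptions:

```python
# A minimal triage sketch, not ProPublica's actual tooling: given a local
# mbox export of a flooded inbox, split likely list-subscription
# confirmations from everything else so legitimate mail is findable.
import mailbox
import re

# Subject-line keywords are an assumption about what subscription
# confirmation messages tend to look like.
CONFIRMATION_HINTS = re.compile(
    r"confirm|verify|subscription|opt[- ]?in|welcome to", re.IGNORECASE
)

def triage(mbox_path="inbox.mbox"):  # hypothetical filename
    suspected, kept = [], []
    for msg in mailbox.mbox(mbox_path):
        subject = msg.get("Subject", "") or ""
        # Mailing-list mail usually carries List-Id or List-Unsubscribe
        # headers; combined with a confirmation-style subject, treat it
        # as flood noise.
        is_list_mail = msg.get("List-Id") or msg.get("List-Unsubscribe")
        if is_list_mail and CONFIRMATION_HINTS.search(subject):
            suspected.append(subject)
        else:
            kept.append(subject)
    return suspected, kept

if __name__ == "__main__":
    noise, real = triage()
    print(f"{len(noise)} likely confirmation requests, {len(real)} other messages")
```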

On Wednesday, ProPublica author Jeff Larson saw a tweet he sent about the email attacks get retweeted 1,200 times. Later that evening, senior reporting fellow Lauren Kirchner noticed a similarly sized response to her tweet about how the subscription attack was affecting her ability to respond to messages.

On top of that, several ProPublica staffers suddenly gained about 500 new followers. On Thursday, ProPublica’s managing editor Eric Umansky noticed that a tweet accusing ProPublica of being an “alt-left #HateGroup and #FakeNews site funded by Soros” had received more than 23,000 retweets.

Today, the 500 or so bot accounts that had followed the ProPublica employees unfollowed them. Interestingly, a little more than 24 hours after the tweet that brought my account 12,000+ new followers, all of those followers had stopped following @briankrebs.

I thought at first perhaps Twitter had suspended the accounts, but a random check of the 11,500+ accounts that I was able to catalog today as new followers shows that most of them remain active.

Asked to respond to criticism that it isn’t doing enough to find and ban bot accounts on its network, Twitter declined to comment, directing me instead to this post in July from Twitter Vice President of Public Policy Colin Crowell, which stated in part:

While bots can be a positive and vital tool, from customer support to public safety, we strictly prohibit the use of bots and other networks of manipulation to undermine the core functionality of our service. We’ve been doubling down on our efforts here, expanding our team and resources, and building new tools and processes. We’ll continue to iterate, learn, and make improvements on a rolling basis to ensure our tech is effective in the face of new challenges.

We’re working hard to detect spammy behaviors at source, such as the mass distribution of Tweets or attempts to manipulate trending topics. We also reduce the visibility of potentially spammy Tweets or accounts while we investigate whether a policy violation has occurred. When we do detect duplicative, or suspicious activity, we suspend accounts. We also frequently take action against applications that abuse the public API to automate activity on Twitter, stopping potentially manipulative bots at the source.

It’s worth noting that in order to respond to this challenge efficiently and to ensure people cannot circumvent these safeguards, we’re unable to share the details of these internal signals in our public API. While this means research conducted by third parties about the impact of bots on Twitter is often inaccurate and methodologically flawed, we must protect the future effectiveness of our work.

It is possible that someone or some organization is simply purchasing botted accounts from shadowy sellers who peddle these sorts of things. If that’s the case, however, whoever built the botnet that retweeted my tweet 12,000 times certainly selected a diverse range of accounts.

Ed Summers, a software developer at the Maryland Institute for Technology in the Humanities, graciously offered to grab some basic information about the more than 11,500 suspected new bot followers that were still following my account earlier this morning. An analysis of that data indicates that more than 75 percent of the accounts (8,836) were created before 2013 — with the largest group of accounts (3,366) created six years ago.
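
For readers who want to reproduce that kind of tally, a small sketch follows, assuming the follower profiles were saved as newline-delimited JSON (the followers.jsonl filename is made up) in the classic Twitter API format:

```python
# A small sketch of the tally described above, assuming the follower
# profiles were saved as newline-delimited JSON (the followers.jsonl
# filename is an assumption). created_at uses the classic Twitter API
# format, e.g. "Sat Dec 17 09:21:14 +0000 2011".
import json
from collections import Counter
from datetime import datetime

def creation_years(path="followers.jsonl"):
    years = Counter()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            profile = json.loads(line)
            created = datetime.strptime(
                profile["created_at"], "%a %b %d %H:%M:%S %z %Y"
            )
            years[created.year] += 1
    return years

if __name__ == "__main__":
    for year, count in sorted(creation_years().items()):
        print(f"{year}: {count} accounts")
```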

Summers has published the entire list of suspected bot accounts on his GitHub page. He’s also published a list of the 20,000 or so suspected bot accounts that retweeted the false reports of Nimmo’s death, and found an overlap of at least 1,865 accounts with the 11,500+ suspected bot accounts that targeted my account this week.
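
The overlap check itself is just a set intersection; a sketch, with hypothetical filenames standing in for Summers’ published lists:

```python
# A sketch of the overlap check: intersect two lists of account names,
# one per line. Both filenames are hypothetical stand-ins for the lists
# Summers published on GitHub.
def overlap(path_a="krebs_bot_followers.txt", path_b="nimmo_retweeters.txt"):
    with open(path_a) as a, open(path_b) as b:
        return set(a.read().split()) & set(b.read().split())

if __name__ == "__main__":
    print(len(overlap()), "accounts appear in both lists")
```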

I mentioned earlier that most of these bot accounts should have been easy to detect as such. The vast majority of the bot accounts that hit my account this week had very few followers of their own: more than 2,700 have zero followers, and more than half have fewer than five.
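
A sketch of that low-follower heuristic, reusing the same assumed followers.jsonl dump:

```python
# A sketch of the low-follower heuristic, reusing the assumed
# followers.jsonl dump; followers_count is the standard field in
# classic Twitter user objects.
import json

def follower_stats(path="followers.jsonl", threshold=5):
    with open(path, encoding="utf-8") as fh:
        counts = [json.loads(line).get("followers_count", 0) for line in fh]
    zero = sum(1 for c in counts if c == 0)
    below = sum(1 for c in counts if c < threshold)
    print(f"{len(counts)} accounts: {zero} with zero followers, "
          f"{below} with fewer than {threshold} followers")

if __name__ == "__main__":
    follower_stats()
```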

Finally, I’ve noticed that most of them appear to be artificially boosting the popularity of a broad variety of businesses and entertainers around the globe, often with tweets in multiple languages. When these bots are not intimidating or otherwise harassing reporters and researchers, they appear to be part of a commercial service that can be hired to send promotional tweets.

An analysis of the data by @ChiefKleck

Further reading:

Twitter Bots Drown Out Anti-Kremlin Tweets

Buying Battles in the War on Twitter Spam

SecuringDemocracy.org: Tracking Russian Influence Operations on Twitter



Tags: Atlantic Council, Ben Nimmo, Colin Crowell, Daily Beast, Digital Forensic Research Lab, Donald Trump, Ed Summers, email bomb, Eric Umansky, Jeff Larson, Joseph Cox, Kremlin, Lauren Kirchner, Maryland Institute for Technology in the Humanities, ProPublica, Twitter bots, Vladimir Putin
