By: Account Coordinator Brook O’Meara-Sayen
Since the 2016 presidential election, a news cycle barely goes by without at least a cursory mention of ‘bots.’ As Robert Mueller’s Special Counsel investigation continues to move forward, it has become increasingly apparent that Russian state agents used bots to sow dissent during that election. These bots are Twitter bots: Artificial Intelligence (AI)-powered Twitter accounts impersonating humans.
Twitter now estimates that more than 50,000 such bots, crafted in Russia and run by the shadowy Internet Research Agency, were used to sow discord in our electoral process. They worked, in large part, because real Twitter users often could not tell that these accounts had no human at the keyboard. Orchestrated use of bots fueled online ‘movements’ and promoted divisive hashtags.
O’Neill Now is starting a new series on bots on our blog, discussing how and why they can be used, but first we need to understand what a bot is.
At its core, a Twitter bot is an extremely simple concept: a piece of code or a computer program that controls a Twitter account and posts without human supervision. Bots can be used for a myriad of things, such as auto-creating Venn diagrams or sorting the pixels of images to create art. Most of the time Twitter bots are completely harmless and were created to serve a specific function. These accounts are easily identifiable, and many even acknowledge their lack of a soul in the bio. They are, in essence, tools with a public-facing function, and Twitter gives them the platform they require to serve the people who need them.
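To make the concept concrete, here is a minimal Python sketch of what such a bot looks like. The client class and its `post` method are hypothetical stand-ins for a real Twitter API library (in practice you would use something like tweepy with real credentials); they are used here only so the example is self-contained.

```python
import random

# Hypothetical stand-in for a real Twitter client; the class and method
# names are illustrative, not any real library's API.
class FakeTwitterClient:
    def __init__(self):
        self.timeline = []  # stands in for the account's posted tweets

    def post(self, text):
        self.timeline.append(text)
        return text

def run_venn_bot(client, labels):
    """Auto-generate and post a 'Venn diagram'-style tweet from two random labels."""
    a, b = random.sample(labels, 2)
    return client.post(f"Things that are {a}, things that are {b}, and things that are both.")

client = FakeTwitterClient()
run_venn_bot(client, ["soft", "loud", "round", "fast"])
```

The whole "bot" is a handful of lines that decide what to say and one call that posts it, with no human in the loop.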
As AI has risen to prominence, it was only a matter of time before someone married the two concepts, either for a legitimate goal, like automating customer service replies, or an illegitimate one, like, say, promoting a negative hashtag about a competitor. The marriage of AI and Twitter bots produced a child called the SocialBot.
SocialBots are built to act like humans: posting at random times, “sleeping,” talking about mundane behaviors, and so on. A SocialBot might even have a database of “human things” that allows it to tweet about how annoying it is to do laundry, even though it is just a few lines of code. Their ability to masquerade as human and influence public sentiment is what makes them controversial.
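A minimal sketch of that idea, with a plain Python callback standing in for a real Twitter client and a hypothetical phrase list playing the role of the "human things" database: the bot waits a random, human-looking interval, then tweets a canned mundane remark.

```python
import random
import time

# Hypothetical "database of human things" — illustrative phrases only.
HUMAN_THINGS = [
    "Ugh, laundry day again.",
    "Why is Monday coffee never strong enough?",
    "Finally caught up on that show everyone is talking about.",
]

def social_bot_tick(post, phrases=HUMAN_THINGS, max_delay=0.01):
    """One posting cycle: sleep a random interval to mimic human rhythm, then tweet."""
    time.sleep(random.uniform(0, max_delay))  # tiny delay here; hours in a real bot
    message = random.choice(phrases)
    post(message)
    return message

sent = []                             # stands in for the account's timeline
msg = social_bot_tick(sent.append)    # one "human-looking" tweet goes out
```

The randomness in both the timing and the phrasing is the whole disguise; nothing else about the account is human.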
So why can a bot move public sentiment on a topic when a real person can’t? Because a bot can be copied again and again, without limit. Together, those copies can tweet the same news story and hashtag simultaneously, tricking a target audience into believing the tweets come from 50,000 people rather than 50,000 lines of code.
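A rough sketch of why that copying is so cheap, with an in-memory list standing in for Twitter itself; every name and the hashtag are illustrative:

```python
# One operator "copies" a bot into an army that pushes the same hashtag at once.
def make_bot(bot_id, timeline):
    """Create one bot: a function that posts to the shared timeline under its own id."""
    def tweet(text):
        timeline.append((bot_id, text))
    return tweet

timeline = []
army = [make_bot(i, timeline) for i in range(50)]  # 50 here; 50,000 in the real abuse case

for bot in army:
    bot("You won't believe this story. #DivisiveHashtag")
```

Fifty "accounts" tweeting in lockstep took three lines of orchestration, which is exactly why a trending hashtag is weak evidence that fifty people care.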
This adds a layer of uncertainty to the social media giant: is that trending topic trending because people care about it, or because one guy with an army of bots cares about it? Does my favorite politician, actor, writer, or entrepreneur really have that many followers, or are half of them bots created to boost the numbers?
In later installments we’ll discuss how to spot a bot, how to make one, and specific instances when bots made a difference online.