Bot or not?

“How are you?” “Not so good, I’m having issues with my family.” “Tell me more.” “Where should I start?” “From the beginning.”
Who is talking to whom here? This could be a snippet of conversation between a psychotherapist and a patient. It could be friends on a park bench, or a text exchange. The last guess is close, but not quite right. It is actually an exchange between a person and “Eliza”, the prototype of all modern chatbots.

The German-American computer scientist Joseph Weizenbaum demonstrated Eliza back in 1966: a computer program with a simple dialog algorithm that can hold an interactive conversation with a human. The tendency to read human understanding into such a program has since been called the Eliza effect, and Eliza itself is the ancestor of all of today’s chatbots. Bot is short for robot. A chatbot is nothing more than a computer program that automatically responds to questions posed by a human partner and, more or less, makes sense. How’s the weather? Where can I buy my favorite pants on sale? What can I make with milk, three eggs and apples?
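Eliza’s trick can be sketched in a few lines. The rules and replies below are purely illustrative, not Weizenbaum’s original script: the program looks for a keyword pattern in the input and answers with a canned, partly reflected phrase, which is why even a short exchange can feel like a conversation.

```python
import re

# Minimal Eliza-style sketch (illustrative rules, not Weizenbaum's original
# script): look for a keyword pattern and answer with a canned reflection.
RULES = [
    (re.compile(r"I(?:'m| am) (.+)", re.IGNORECASE),
     "Why do you say you are {0}?"),
    (re.compile(r"I feel (.+)", re.IGNORECASE),
     "Tell me more about feeling {0}."),
    (re.compile(r"\bfamily\b", re.IGNORECASE),
     "Tell me more about your family."),
]
FALLBACK = "Please go on."  # used when no rule matches, keeping the chat going

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            # Drop trailing punctuation so the reply reads naturally.
            groups = [g.rstrip(".!?") for g in match.groups()]
            return template.format(*groups)
    return FALLBACK
```

Weizenbaum’s real script was more elaborate, with ranked keywords and pronoun reflection, but the principle is the same: no understanding, just pattern matching.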
Helpful assistants

Many chatbots pull their answers from a database, matched via keywords and text modules. When asked a question, they simply look up a suitable response in the database. Usually they are programmed as everyday helpers: they appear everywhere today as weather bots or media bots that make internet searches easier, or as customer service contacts on websites. The next step are bots that can understand spoken language, such as the digital assistants Siri or Alexa, which use artificial intelligence. Unlike a simple database bot, intelligent bots are programmed to imitate human behavior more convincingly, and they keep “learning” to that end.

Social bots

Chatbots that appear as accounts in social networks and pretend that a real human being is behind the account are called “social bots”. They can now be found in large numbers on Facebook, Twitter, YouTube and the like, and have become a hotly debated issue.

Why are they so controversial? Because, among their many talents, they can systematically collect information about other users. They also spread targeted messages in the form of comments, placed under specific products or attached to political opinions. The aim is to make a splashy appearance and dominate the conversation, persuading people that a particular opinion prevails online. Bots can be far more active than humans, so they comment more, making the opinion they push look widespread. This is what happened in the recent US elections, and the open question is what impact social bots had on real people’s opinion formation and whether they could influence how people vote.

Forming an opinion

It’s clear that social bots cannot always be spotted right away for what they are. Much time has passed since Eliza was first developed, and the technology for simulating human behavior has advanced tremendously. Holding a flawless, coherent, in-depth conversation is still difficult for chatbots, but more often than not it needn’t get that far: sometimes simple statements are enough to get a reaction from people and provoke discussion. And there it is again: the Eliza effect.

“I can’t start at the beginning.” “I understand.” “Do you even care?” “That’s not an easy question.” “But you must care.” “Go on.”

How can I detect social bots?

Here are a few tips for recognizing social bots:

Take a good look at the account itself: Who is supposedly writing? Is the profile information nonsensical, too short, incomplete or empty? Who are this person’s friends and followers?
Be suspicious of accounts that post a great deal every day, often sending separate messages to several other accounts at the same time; such simultaneous activity would be very hard for a human. Above all, look at the content: Does the account always post the same or similar things?
Human beings need a certain amount of response time, while bots need only seconds to respond or share content. Bots also like posts from other accounts at a much higher rate, with the effect that opinions with more likes merely appear to be the majority opinion.
Bots can often be identified by paying close attention to whether they sound like human communication and whether they actually answer concrete questions. Does your supposed conversation partner react the way you would expect? Writing style is a major clue: many bots are programmed to pick up words from the messages they respond to and repeat them, or synonyms for them, in their own posts. Also ask whether the posts are simplistic or make detailed arguments; bots are not good debaters.
Services like Bot or not from Indiana University Bloomington or botswatch.de can show you how social bots act and help identify them.
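The tips above can be turned into a rough scoring sketch. The thresholds used here (posts per day, reply delay, share of duplicate posts) are illustrative assumptions for demonstration, not validated research values; real detection services combine many more signals.

```python
from statistics import median

def bot_suspicion(posts_per_day: float,
                  reply_delays_seconds: list[float],
                  posts: list[str]) -> int:
    """Toy heuristic: each bot-like signal adds one point of suspicion.
    Thresholds are illustrative assumptions, not validated values."""
    score = 0
    if posts_per_day > 50:                       # humans rarely sustain this volume
        score += 1
    if reply_delays_seconds and median(reply_delays_seconds) < 5:
        score += 1                               # typical replies within seconds
    if posts and len(set(posts)) / len(posts) < 0.5:
        score += 1                               # mostly duplicate content
    return score                                 # 0 = unremarkable, 3 = very bot-like
```

A score like this can only flag candidates for a closer human look; none of the individual signals proves an account is a bot.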