Various techniques can be applied to measure a machine's success in imitating human behavior. The best-known approach is the Turing test, developed by Alan Turing (1950). The effectiveness of bots can likewise be evaluated with a classical Turing test, in which the quantity and quality of a bot's output is measured and tested on a human subject.
Social network services such as Facebook have introduced security techniques to distinguish bots from humans, such as CAPTCHAs (machine-unreadable pictures that can only be parsed by a human user) or the automatic detection of mass friending, fake profile names, and other irregular behavior. The effectiveness of these measures is limited, as Huber et al. (2009) conclude.
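The mass-friending detection mentioned above can be pictured as a simple rate heuristic. The following sketch is illustrative only: the threshold, the window size, and the data shape (a list of friend-request timestamps per account) are assumptions, not Facebook's actual method.

```python
# Hypothetical threshold: more than 50 friend requests in any
# one-hour window is treated as suspicious.
REQUESTS_PER_HOUR_LIMIT = 50

def is_mass_friending(request_timestamps, window_seconds=3600):
    """Flag an account if any sliding one-hour window contains
    more friend requests than the limit."""
    times = sorted(request_timestamps)
    start = 0
    for end in range(len(times)):
        # Shrink the window until it spans at most window_seconds.
        while times[end] - times[start] > window_seconds:
            start += 1
        if end - start + 1 > REQUESTS_PER_HOUR_LIMIT:
            return True
    return False

# 200 requests three seconds apart: clearly flagged.
print(is_mass_friending([i * 3 for i in range(200)]))  # True
# Three requests spread over hours: not flagged.
print(is_mass_friending([0, 4000, 8000]))              # False
```

Real platforms combine many such signals (profile-name checks, message content, network structure); a single rate limit like this is easy for a bot to evade by throttling itself.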
At the time of writing, social bots have barely been examined. Little academic research can be found on this topic, and only a few design approaches for sophisticated social media bots can be identified. The few documented approaches have been published by researchers; it is safe to assume that other social bots exist that have not yet been identified.
We can identify three types of social bots.
Bots, also known as Web Robots or Internet Bots, are software programs used to perform simple and repetitive tasks in place of human labor. The most widespread use of bots is web crawling (also called spidering), in which programs crawl and index web sites to create a map of the internet.
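The indexing step a crawler repeats for every page it visits can be sketched in a few lines. This is a minimal illustration using only the Python standard library; the example page content is invented, and a real crawler would additionally fetch pages over the network, respect robots.txt, and track visited URLs.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page -- the core
    operation a crawler performs to discover new pages to visit."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

# Hypothetical page, just for illustration:
page = '<html><body><a href="/about">About</a> <a href="http://example.com">Ext</a></body></html>'
print(extract_links(page))  # ['/about', 'http://example.com']
```

A crawler then queues each extracted link, fetches it, and repeats, which is how the "map of the internet" mentioned above is built up page by page.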
Bots have been part of the internet since its very beginning, and the number of bot types one can encounter nowadays keeps growing. Apart from web crawlers, bots have been widely used as spambots that distribute spam emails, or as chatterbots (also called talk bots, chatterboxes or artificial conversational entities): bots that simulate human conversation, mainly in chat rooms or instant messaging environments, and intend to fool human users into thinking that the program is a human being.
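Classic chatterbots in the tradition of Weizenbaum's ELIZA simulate conversation with simple pattern-and-response rules. The sketch below shows the technique under stated assumptions: the rules and responses are invented for illustration and not taken from any real bot.

```python
import re

# Hypothetical rule table: (pattern, response template).
# {0} is filled with the first captured group of the pattern.
RULES = [
    (r"\bI am (.+)", "Why do you say you are {0}?"),
    (r"\bI feel (.+)", "What makes you feel {0}?"),
    (r"\bhello\b", "Hello! How are you today?"),
]

def reply(message):
    """Return the response of the first rule whose pattern
    matches the message, or a generic fallback."""
    for pattern, template in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(reply("I am tired"))    # Why do you say you are tired?
print(reply("good morning"))  # Tell me more.
```

Echoing fragments of the user's own input back as a question is what makes even such trivial rule sets feel surprisingly conversational, which is precisely how these bots fool inattentive users.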