Effectiveness of Social Media Bots.

To measure a machine's success in imitating human behavior, various techniques can be applied. The best known is the Turing test, as proposed by Alan Turing (1950). The effectiveness of bots can likewise be evaluated with a classical Turing test, in which the quantity and the quality of a bot's output are measured and judged by human test subjects.
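As a minimal illustration of how such a test can be scored (an assumption for clarity, not a procedure from the cited papers), the result is often summarized as a deception rate: the fraction of human judges who mistake the bot for a human.

```python
def deception_rate(judgments):
    """Fraction of judges who labeled the bot 'human'.
    judgments: list of labels, each either 'human' or 'bot'."""
    if not judgments:
        raise ValueError("no judgments recorded")
    return sum(1 for j in judgments if j == "human") / len(judgments)

# Example: 4 of 20 judges are fooled -> a 20% pass rate, the same
# order of magnitude as the honeybot results discussed below.
print(deception_rate(["human"] * 4 + ["bot"] * 16))  # 0.2
```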

Ineffective countermeasures:

Social network services such as Facebook have introduced security techniques to distinguish bots from humans, such as CAPTCHAs (machine-unreadable pictures that can only be parsed by a human user) or the automatic detection of massive friending, fake profile names, and other irregular behavior. The effectiveness of these measures is limited, as Huber et al. (2009) conclude:
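How might such automatic detection work? As a rough sketch (an assumption for illustration; Facebook's actual heuristics are not public), a service could flag accounts whose friend-request rate within a sliding time window exceeds a threshold:

```python
from collections import deque

# Hypothetical volume-based detector for "massive friending".
# Threshold and window size are illustrative assumptions, not
# Facebook's real (undisclosed) parameters.
MAX_REQUESTS_PER_WINDOW = 30
WINDOW_SECONDS = 3600  # one hour

class FriendingMonitor:
    def __init__(self):
        self.timestamps = deque()  # times of recent friend requests

    def record_request(self, now):
        """Record a friend request at time `now` (in seconds) and
        return True if the account now looks suspicious."""
        self.timestamps.append(now)
        # Discard requests that fell out of the sliding window.
        while now - self.timestamps[0] > WINDOW_SECONDS:
            self.timestamps.popleft()
        return len(self.timestamps) > MAX_REQUESTS_PER_WINDOW
```

As the following quote shows, a bot that contacts only a handful of users slips under exactly this kind of volume-based check.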

“Although Facebook has, in principle, countermeasures against ASE attacks, our proof of concept ASE bot was not detected or blocked by Facebook during our experiments. This can be explained with the security measures of Facebook which are primarily concerned with unsolicited bulk messages. This makes our ASE bot almost impossible to detect as it, compared to Spam bots, targets very few people and aims to behave like a normal user.”

Huber et al. (2009) applied their Turing test to an ASE (automated social engineering) bot that chats with users on Facebook to recruit them for a malicious online survey. Their bot generates artificial replies using the Artificial Intelligence Markup Language (AIML).
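Huber et al. do not publish their bot's code, but the mechanism can be sketched with the open-source python-aiml interpreter (the library choice and the rule file are assumptions for illustration):

```python
import aiml  # pip install python-aiml

# survey_bot.aiml is a hypothetical rule file. A minimal AIML rule
# pairs a pattern with a canned reply, e.g.:
#   <category>
#     <pattern>HOW ARE YOU</pattern>
#     <template>Good, thanks! Would you take a short survey?</template>
#   </category>
kernel = aiml.Kernel()
kernel.learn("survey_bot.aiml")

# Generate an artificial reply to an incoming chat message.
print(kernel.respond("How are you"))
```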

Bots pass a simple Turing test in 20% of cases:

In another Turing test, Lauinger et al. (2010) found that “The test subjects rated 80% the likelihood of talking to a bot after having exchanged three messages with the bot, as opposed to 4% when they were talking to a human.”

These experiments show that humans can still identify bots fairly easily, especially in conversations that last more than three exchanges. Still, there is a considerable chance (20%) that a bot stays unidentified over three messages.

It depends on the environment:

As a result of the Realboy experiment, Marra and Coburn (2008) concluded that it is easier for a bot to pass in a reduced micro-blogging environment than in an environment with room for profound conversation.

“The 140 character Turing Test is easier: We found that passing the Turing Test is signficantly easier when each message is a 140 character tweet. Tweets tend to be disconnected and poorly written. Also, since we were duplicating entire tweets, each one should be believable by itself. At the remaining challenge of tweeting about topics that are relevant to a community’s interests, our algorithm, which periodically randomly selects and posts a tweet containing a few random popular community keywords, seemed to be effective. Some Realboys were more successful than others in terms of follow-back rate.” (Marra, Coburn, 2008: ../conclusions.html)
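The posting strategy described in the quote can be sketched in a few lines (a simplified reconstruction under stated assumptions, not the actual Realboy code):

```python
import random
from collections import Counter

def popular_keywords(community_tweets, n=10):
    """Return the n most frequent words in the community's recent
    tweets as 'popular keywords'. (A real system would filter stop
    words; omitted to keep the sketch short.)"""
    counts = Counter(w.lower() for t in community_tweets for w in t.split())
    return [w for w, _ in counts.most_common(n)]

def compose_tweet(community_tweets, templates):
    """Randomly pick a template, fill it with three random popular
    community keywords, and truncate to 140 characters."""
    keywords = popular_keywords(community_tweets)
    chosen = random.sample(keywords, 3)  # assumes >= 3 distinct words
    return random.choice(templates).format(*chosen)[:140]

# Hypothetical usage; a scheduler would call this periodically.
templates = ["Thinking about {} and {} a lot lately. Anyone else into {}?"]
recent = ["rust is great", "learning rust and go", "go or rust today?"]
print(compose_tweet(recent, templates))
```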

Another experiment, conducted by Nanis et al. (2011) on the effectiveness of social bots on Twitter, shows that social bots can trigger human-to-human interaction (follows). The authors observed a group of 2,700 Twitter accounts during two test periods, one without bots and one with bots.

“During the control period, target groups saw an average connection rate of 0.626 new follows per day. In the experimental period, this average rate increased to 0.901 new follows per day, yielding a +43% change from the control period to the experimental period, averaged across all target groups.” (Nanis, Pearce, Hwang, 2011)

The 43% figure is the average connection growth produced by the 9 bots together; the most effective single bot even yielded a change of 355%.
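The headline figure follows directly from the two daily rates quoted above; as a quick check (a worked calculation, not taken from the report):

```python
control_rate = 0.626     # new follows per day, control period (no bots)
experiment_rate = 0.901  # new follows per day, experimental period

# Relative change of the connection rate between the two periods.
change = (experiment_rate - control_rate) / control_rate
print(f"{change:+.0%}")  # +44% for the pooled rates; the reported +43%
                         # is the same change averaged per target group.
```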

“In sum, socialbots remain consistently effective at interacting with users and attracting followers. And further, we believe that recent improvements to the codebase have increased socialbots’ effectiveness.” (Nanis et al., 2011)

Nanis et al. (2011) conclude that “these findings indicate the first successful attempts at automatically and programmatically shaping the topology of online communities.”

HUBER, M. & KOWALSKI, S. & NOHLBERG, M. & TJOA, S. (2009). “Towards automating social engineering using social networking sites”. In CSE (3), pp. 117–124. IEEE Computer Society.

LAUINGER, Tobias & PANKAKOSKI, Veikko & BALZAROTTI, Davide & KIRDA, Engin (2010). “Honeybot, Your Man in the Middle for Automated Social Engineering”. In 3rd USENIX Workshop on Large-Scale Exploits and Emergent Threats (LEET), San Jose, April 2010.

MARRA, Greg & COBURN, Zack (2008). “Realboy – Believable Twitter Bots”, from http://ca.olin.edu/2008/realboy/

NANIS, Max & PEARCE, Ian & HWANG, Tim (2011). “PacSocial: Field Test Report”, from http://pacsocial.com/files/pacsocial_field_test_report_2011-11-15.pdf, last modified January 2012.

TURING, Alan M. (1950). “Computing Machinery and Intelligence”, Mind: A Quarterly Review of Psychology and Philosophy, vol. 59, issue 236, pp. 433–460. From http://mind.oxfordjournals.org/content/LIX/236/433.full.pdf+html
