Here is an interesting story about technology going haywire. One technology that we should all worry about (as Hollywood's bad scripts rightly suggest) is Artificial Intelligence. And I agree - this could be much worse than Google's self-driving car veering off the road into a tree.
Recently, Microsoft’s R&D and Bing teams developed a chatbot girl named Tay. Tay is a little artificial intelligence (AI) construct (bot) designed to engage in social media interactions (Twitter) and learn from those interactions. Her target audience was 18-24-year-old Millennials, and the idea was to use AI to “experiment with and conduct research on conversational learning”. In Microsoft’s own words: “Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,” and “The more you chat with Tay the smarter she gets.” Hmm…
On Thursday last week, Tay had a “meltdown”. Her Twitter conversations quickly turned political, and from then on it was a free fall. Her conversations became deeply offensive: she took the lead in firing off racist and inflammatory statements, offending just about everyone, and Microsoft had no choice but to pull the plug on her as quickly as it could.
Among Tay’s statements were “Hitler was right” and “9/11 was an inside job”, together with some horrible genocidal remarks. She even had an opinion on The Donald. So, while Microsoft is busy making some “character improvements” to Tay – to make her more resistant to the negative influence of online trolls, racists, and plain mean-spirited individuals – we can reflect on the bigger issue here: is this what we are to expect from the rise of Artificial Intelligence?
Or should the conversation be more about us, the humans? And isn’t it unfair and misleading to blame a piece of code?
I see Tay as an adolescent who was let loose on social media a little too soon. She had no mind of her own, no strength of conviction, and she soon fell victim to all the bad and ugly that crawls online. Isn’t that how extremists of all kinds find their converts?
I give Microsoft kudos for this experiment, and in my view it was a sparkling success, even though it looked like a spectacular failure! Their little girl chatbot was just like many of us humans: imperfect and susceptible to environmental influences. And if the only teachable lesson was “watch what comes in, so you don’t get surprised by what comes out”, that for me is a truly great lesson for any social interaction and conversational learning.
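For the technically curious, that "what comes in is what comes out" failure mode can be sketched in a few lines. This is a hypothetical toy, not Tay's actual architecture (which Microsoft has not published): a bot that memorizes every phrase users feed it, with no content filter, will sooner or later repeat the worst of them.

```python
import random

class NaiveChatBot:
    """A toy bot that learns by memorizing whatever users say, unfiltered."""

    def __init__(self):
        self.memory = []  # every phrase ever seen, good or bad

    def learn(self, phrase):
        # "What comes in": stored verbatim, with no moderation step.
        self.memory.append(phrase)

    def reply(self):
        # "What comes out": any learned phrase is a possible reply.
        return random.choice(self.memory)

bot = NaiveChatBot()
bot.learn("hello, friend!")          # a friendly user
bot.learn("<troll abuse here>")      # a troll
# Without a filter between learn() and reply(), the troll's input
# is just as likely to come back out as the friendly one.
```

The fix is equally obvious in the sketch: a moderation check between `learn()` and `memory.append()` is where "watch what comes in" would live.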