Microsoft – Ditches Twitter Chatbot as it Learned a Bit Too Much

Recently, Microsoft launched a Twitter chatbot designed to chat like a teenage girl and keep lonely netizens company on a Saturday night. However, the tech giant axed the artificial intelligence less than a day after launch. Why? Because it learned too fast without much discernment, and quickly began dishing out racist, sexist, and offensive comments.
Microsoft Pulled Out Chatbot Due to Offensive Remarks

What was supposed to be a coordinated effort between Microsoft and the people of the Internet to bring forth a smart, learning artificial intelligence backfired immensely just a few hours after its official launch. Tay, as the AI was named, responded in many inappropriate ways, which was blamed on the many netizens trying to play tricks on the chatbot. Since the artificial intelligence is always learning, it also learned how to give out racist, sexist, and downright offensive slurs.

Kris Hammond, a computer scientist, stated, “I can’t believe they didn’t see this coming.” Microsoft’s researchers created Tay as an experiment to learn more about human conversation. According to its website, the AI was targeted at an audience of 18-to-24-year-olds and was designed to “engage and entertain people where they connect with each other online through casual and playful conversation.”

You can’t expect everyone to act nicely on the World Wide Web, and most people know the Internet is full of trolls. Hence, it does make one wonder why the tech giant didn’t see this coming. Tay used a lot of slang and tried to provide humorous answers whenever users sent it messages or even photos. The company stated, “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” And boy, did she get smart.

Once users started blasting messages and comments at Tay, the AI quickly learned how many netizens spoke. Trolling or not, this made the artificial intelligence capable of dealing verbal blows to a lot of people. There are even reports that it made references to Hitler.

Microsoft had the following to say about the fate of its artificial intelligence: “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.”
