Users taught Microsoft's chatbot hate speech, and it multiplied inappropriate comments before Microsoft shut it down. The AI-powered bot, designed to interact with Internet users, was withdrawn Thursday by the US computing giant after posting a string of hateful and racist diatribes based on what malicious users had taught it. The bot was given the persona of a naive teenage girl able to learn from its interactions with users. The educational experiment was cut short when people decided to teach it abusive language.
"Unfortunately, within the first 24 hours of coming online, we saw a coordinated effort by some users to abuse Tay's ability to communicate so that it would respond inappropriately," Microsoft said in a statement. "It is as much a social and cultural experiment as a technical one," the company added.
While the machine was designed to learn and grow smarter through its online interactions, it soon began tweeting pro-Nazi, sexist, and racist remarks. It also posted comments in support of Donald Trump, the Republican candidate for the White House. In its last message on Twitter, Tay said: "See you soon, humans, I need to sleep after so many conversations today," according to Microsoft.
The abusive messages have since been deleted, but many are still circulating as screenshots. Microsoft, for its part, is now working to improve Tay's software. Beyond Twitter, the bot was designed to interact on various messaging platforms and targeted in particular "millennials," those born between 1981 and 1996.