> Microsoft got a swift lesson this week on the dark side of social media. Yesterday the company launched "Tay," an artificial intelligence chatbot designed to develop conversational understanding by interacting with humans. Users could follow and interact with the bot on Twitter and it would tweet back, learning as it went from other users' posts. Today, Microsoft had to shut Tay down because the bot started spewing a series of lewd and racist tweets.
>
> Tay was set up with a young, female persona that Microsoft's AI programmers apparently meant to appeal to millennials. However, within 24 hours, Twitter users tricked the bot into posting things like "Hitler was right I hate the jews" and "Ted Cruz is the Cuban Hitler." Tay also tweeted about Donald Trump: "All hail the leader of the nursing home boys."
>
> Nobody who uses social media could be too surprised to see that the bot encountered hateful comments and trolls, but the artificial intelligence system didn't have the judgment to avoid incorporating such views into its own tweets.
>
> "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Microsoft said. "Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.
>
> "Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.
>
> "To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity."

Possibly all of it. As best as I can tell, there may have been a few milquetoast rudenesses, of the usual sort for language models, but the actual quotes everyone cites, with detailed statements about the Holocaust or Hitler, seem to have all been repeat-after-mes that were then ripped out of context. It's hard to say, given how most of the relevant material has been deleted, and what survives is the usual endless echo chamber of miscitation, simplification, and 'everyone knows' which you rapidly become familiar with if you ever try to factcheck anything down to the original sources.

If you look at the very earliest reporting, it mostly describes repeat-after-me functionality, hedging a bit (because who can prove that every inflammatory Tay statement was a repeat-after-me?), and that framing then rapidly gets dropped in favor of narratives about Tay 'learning'. (This is why lots of people still 'know' Cambridge Analytica swung the election, or they 'know' the Twitter face-cropping algorithm was hugely biased, or that 'Amazon's HR software would only hire you if you played lacrosse', or 'this guy was falsely arrested because face recognition picked him', etc.)

Anyway, long story short, the Tay incident is either entirely or mostly bogus in the way people want to use it (as an AI safety parable). (Even though, if you look at the chatbot code MS released later, it's not obvious at all how exactly Tay would 'learn' in the day or so it had before shutdown.) The real story, of an `echo` gone wrong, is vastly less interesting, and is about as important as typing '8008' into your calculator and showing it to your teacher.
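To make the 'echo gone wrong' point concrete, here is a minimal sketch of how a verbatim repeat-after-me command produces exactly this failure; the trigger phrase and handler below are hypothetical illustrations, not Microsoft's released code:

```python
# Hypothetical sketch of a "repeat after me" command handler -- not Tay's
# actual code. A verbatim-echo feature lets any user put arbitrary text in
# the bot's mouth, with no model "learning" involved at all.

REPEAT_PREFIX = "repeat after me"  # assumed trigger phrase


def generate_normal_reply(text: str) -> str:
    """Stand-in for the ordinary model-generated reply path."""
    return "tell me more!"


def handle_mention(user_message: str) -> str:
    """Return the bot's public reply to a tweet that mentions it."""
    text = user_message.strip()
    if text.lower().startswith(REPEAT_PREFIX):
        # The bot tweets the user's text back verbatim. Screenshotted without
        # the prompting tweet, the echo looks like the bot's own statement.
        return text[len(REPEAT_PREFIX):].lstrip(" :,")
    return generate_normal_reply(text)


print(handle_mention("repeat after me: anything the attacker wants"))
# -> anything the attacker wants
```

Under that reading, nothing in the bot updates on hostile input; the inflammatory tweets are attacker-authored text passed through an `echo`, which is why the calculator-'8008' comparison fits.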