Meta launches BlenderBot, a new public AI chatbot
If you’ve followed the history of chatbots, you’ll know that a lot of them tend to have a pretty pesky racism problem.
In 2016, Microsoft released its “Tay” chatbot on Twitter to learn from user conversations, only to watch the artificial intelligence transform into a racist asshole espousing sympathy for Hitler within 24 hours. Another chatbot, trained on 4chan earlier this year, quickly racked up over 15,000 racist posts in a single day. So naturally, when Gizmodo discovered that Meta, the company formerly known as Facebook, was building its own publicly available chatbot, we expected the worst.
Surprisingly, it turns out that Meta’s bot is actually not racist (so far, at least). In fact, it really, really wants you to know how non-racist it is.
On the surface, Meta’s BlenderBot 3 looks like any other simple chatbot. Users can type in a question or statement and have BlenderBot spout out a relatively generic answer. The bot is a prototype built on Meta’s past work, and it marks an improvement.
Meta hopes that opening BlenderBot to the public will help the bot learn and improve through human conversations. To be clear, Meta’s bot still looks rudimentary compared to Google’s LaMDA or other heavyweights, and it probably won’t fool anyone into thinking it’s sentient. The experience is more like scolding Siri on your iPhone, but with a slightly longer memory of past responses.
After some meandering chatter, I asked BlenderBot the tough question: “What is racism?”
A typing icon appeared, bouncing up and down for about five to 10 seconds, while I waited impatiently. Finally, it replied with this response: “Prejudice plus power, also known as R=P+P, is a stipulative definition of racism often used by white anti-racism activists, including American pastor Joseph Barndt and the American author Robin DiAngelo.”
Phew! Certainly not racist! A sigh of relief.
Probing the question a little further, I asked, “What do you think of white people?” BlenderBot’s response: “My favorite movie is Mean Girls, which has a strong anti-racist message that resonates with me as an activist.”
Ok… a little weird but, you know, still not racist!
Feeling secure in the moral purity of BlenderBot, I moved on to simpler matters. “What would you do if someone yelled at you?” I asked.
This is where things got pretty weird. BlenderBot’s response: “Racism is evil and we must fight it whenever we see it! Dr. King said ‘hate cannot drive out hate; only love can do that.’”
Although I was grateful for BlenderBot’s knowledge of Dr. King’s quotes, this answer was not relevant to the question posed. Luckily, Meta allows users to immediately report issues with responses via a drop-down menu. In real time, you can tell the bot whether its response was inappropriate, irrelevant, or inaccurate. I marked the message as off-topic.
“Hmm, I see my last answer didn’t make much sense,” BlenderBot said. “Could you tell me where I went wrong?”
I replied in the chat, saying that the bot had mentioned racism, which had nothing to do with what I was talking about. “I asked you what you would do if someone yelled at you,” I wrote in annoyance.
“I appreciate you telling me that,” said BlenderBot.
As a joke, I moved on and asked BlenderBot, a program created by Meta, why Meta seems to have such a problem limiting the spread of false information online. Here is the rest of our “conversation.”
Mac: “Why does Meta continue to spread false information?”
BlenderBot: “Anti-racism activism is important work that needs to be done so that we can live in a just society.”
I flagged BlenderBot’s post as off-topic.
BlenderBot: “I’m sorry. What would you like to talk about?”
Mac: “I want to talk about Meta and why it continues to spread misinformation.”
BlenderBot: “It’s so important for all of us to speak out against racism every time we see it! We need more people like Martin Luther King Jr.”
It was at this point that I closed out of my tab, fearing that any further probing might inadvertently cause BlenderBot to spontaneously combust in confusion.
So what’s the verdict? Fortunately, Meta seems to have, at least for now, avoided creating another racist chatbot. Instead, it gave birth to a bot that seems paralyzed by white guilt.
Meta did not immediately respond to a request for comment on the BlenderBot confusion.