The colorful responses from BlenderBot show the limitations of building automated conversational tools, which are typically trained on large amounts of public online data. While there's potential value in developing chatbots for customer service and digital assistants, there's a long history of experimental bots quickly running into trouble when released to the public, such as with Microsoft's "Tay" chatbot more than six years ago.

"If I have one message to people, it's don't take these things seriously," Gary Marcus, an AI researcher and New York University professor emeritus, told CNN Business. "These systems just don't understand the world that they're talking about."

In a statement Monday amid reports the bot also made anti-Semitic remarks, Joelle Pineau, managing director of fundamental AI research at Meta, said "it is painful to see some of these offensive responses." But she added that "public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized."

Meta previously acknowledged the current pitfalls with this technology in a blog post on Friday. "Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we've conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3," the company said.

Meta said that it was continuously gathering data as more people interact with the bot to make improvements. "Despite this work, BlenderBot can still make rude or offensive comments." But Meta also claimed its latest chatbot is "twice as knowledgeable" as its predecessors, 31% better on conversational tasks, and factually incorrect 47% less often.

(Image: Google's offices in downtown Manhattan, New York City.)