Meta brings its state-of-the-art AI chatbot to the web for you to talk to

The advance of chatbots, and their public use for artificial intelligence research, slowed down after the disaster around Tay, the Microsoft chatbot that was shut down less than 24 hours after its public release in 2016. Meta wants to reverse this scenario.

Meta’s AI research team recently released the latest version of its chatbot, BlenderBot 3, a third-generation model with 175 billion parameters.


With the public test of this chatbot, Meta wants to close the gap in real-world knowledge that these artificial intelligence developments suffer from.

In general, chatbots are trained in controlled environments that attempt to mimic real-life situations while limiting the risk of picking up unwanted behavior, such as discriminatory or hateful language.

Although many scenarios can be reproduced in the lab, it is impossible to cover every edge case, so Meta’s public test is a step toward improving learning based on real experiences.

Mark Zuckerberg, CEO of Meta, explains clearly:

“Researchers cannot predict or simulate all conversation scenarios in research settings on their own. The field of AI is still a long way from truly intelligent AI systems that can understand, interact and converse with us like other humans can. To create models that are more adaptable to real-world environments, chatbots need to learn from a diverse and broad perspective with people ‘in the wild’.”

What the new Meta chatbot can do

The third generation of BlenderBot promises to surpass the information recall capabilities of its predecessor. While BlenderBot 2 could retrieve data from previous conversations and search the Internet for details on a given topic, the new chatbot can evaluate not only the data it picks up from the web, but also the data from the people it talks to.

In this process, unsatisfactory responses are addressed by incorporating user feedback into the model so that the same error does not happen again. Additionally, an algorithm is used to detect unreliable or malicious responses from humans, reducing the margin for poor learning.
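As a rough illustration of that idea, the sketch below shows how human feedback could be scored by a trust classifier and filtered before being used as a training signal. This is a minimal, hypothetical example, not Meta’s actual pipeline; the class names, threshold and scores are all assumptions.

```python
# Hypothetical sketch (not Meta's real code): keep only feedback that a learned
# "troll/toxicity" classifier considers reliable before it reaches the training set.
from dataclasses import dataclass

@dataclass
class FeedbackMessage:
    text: str
    thumbs_down: bool   # user flagged the bot's reply as unsatisfactory
    trust_score: float  # assumed output of a reliability classifier, 0..1

def filter_feedback(messages, min_trust=0.7):
    """Return only the feedback messages whose trust score passes the threshold."""
    usable = []
    for msg in messages:
        # Discard feedback flagged as likely unreliable or malicious,
        # so low-quality signals never become training data.
        if msg.trust_score >= min_trust:
            usable.append(msg)
    return usable

if __name__ == "__main__":
    batch = [
        FeedbackMessage("That answer was out of date.", thumbs_down=True, trust_score=0.92),
        FeedbackMessage("asdf you are stupid", thumbs_down=True, trust_score=0.11),
        FeedbackMessage("Great summary, thanks!", thumbs_down=False, trust_score=0.88),
    ]
    for msg in filter_feedback(batch):
        print("keep for training:", msg.text)
```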

“By using data that indicates right and wrong answers, we can train the classifier to penalize low-quality, toxic, contradictory, or repetitive statements, and statements that are generally not helpful. Our live, interactive public demo allows BlenderBot 3 to learn from organic interactions with all kinds of people,” Zuckerberg mentioned in a blog post.

With these advancements, the new Meta chatbot is expected to establish more natural conversations, supported by the OPT-175B language model, roughly 60 times larger than the previous model.

“We found that, compared to BlenderBot 2, BlenderBot 3 provides a 31% improvement in overall score on conversational tasks as assessed by human judgment. It is also considered twice as knowledgeable, while being factually wrong 47% less often. Compared to GPT-3, on topical questions it is more up-to-date 82% of the time and more specific 76% of the time,” confirmed the CEO of Meta.
