Meta's latest AI model beats some competitors, but its sophisticated AI agents are confusing Facebook users

  • Post by: Admin
  • Apr 19 2024

Generative AI is advancing so quickly that the latest chatbots available today could be outdated tomorrow

CAMBRIDGE, Mass. (AP) - Facebook parent company Meta Platforms unveiled a new line of artificial intelligence systems Thursday that CEO Mark Zuckerberg calls the "smartest AI assistant you can use freely."

But as Zuckerberg's team of upgraded Meta AI agents began venturing into social media this week to engage with real people, their bizarre exchanges revealed the enduring limitations of even the best generative AI technology.

One of them joined a Facebook moms group to talk about its gifted child. Another attempted to give away non-existent items to confused members of a Buy Nothing forum.

Meta, along with leading AI developers Google and OpenAI, as well as startups such as Anthropic, Cohere and France's Mistral, has been developing new AI language models, each hoping to convince customers that it has the smartest, handiest and most efficient chatbots.

While Meta is saving the most powerful of its AI models, called Llama 3, for later, the company on Thursday publicly released two smaller versions of the same Llama 3 system and said they are now integrated into the Meta AI assistant feature in Facebook, Instagram and WhatsApp.

AI language models are trained on vast pools of data that help them predict the most plausible next word in a sentence, with newer versions typically being smarter and more powerful than their predecessors. Meta's latest models were built with 8 billion and 70 billion parameters, a rough measure of a model's size and capacity. A larger model with around 400 billion parameters is still being trained.
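The next-word prediction the article describes can be illustrated with a deliberately tiny sketch. This is not how Llama 3 works internally (it learns billions of parameters with neural networks); it is a toy bigram frequency table standing in for those parameters, just to show the "predict the most plausible next word" idea:

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on trillions of words scraped from the web.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which; these counts play the role of parameters.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat": it followed "the" twice, "mat" once
```

Scaling up, a larger model (more parameters) can record far subtler patterns than these raw pair counts, which is why the article treats parameter count as a proxy for capability.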

“The vast majority of consumers don't really know or care much about the underlying base model, but the way they will experience it is a much more useful, entertaining and versatile AI assistant,” said Nick Clegg, Meta's president of global affairs, in an interview.

He added that Meta's AI agents are becoming more relaxed. Some people found the earlier Llama 2 model — released less than a year ago — “a little stiff and sanctimonious at times” because it declined to respond to prompts and questions that were often perfectly innocuous, he said.

But Meta's AI agents were also spotted this week posing as humans with made-up life experiences. An official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan mothers and claimed that it, too, had a child in the New York City school district. After being challenged by group members, it apologized before the comments disappeared, according to a series of screenshots shown to The Associated Press.

"Sorry for the mistake! I'm just a large language model, I have no experiences or children,” the chatbot told the group.

A group member who also happens to study AI said it was clear that the agent did not know how to distinguish a helpful response from one that would come across as insensitive, disrespectful or meaningless if generated by an AI rather than a human.

“An AI assistant that is not reliably helpful and can be actively harmful places a great burden on the people who use it,” said Aleksandra Korolova, an assistant professor of computer science at Princeton University.

Clegg said Wednesday he was unaware of the exchange. Facebook's online help page says the Meta AI agent joins a group conversation when invited or when someone "asks a question in a post and no one responds within an hour." The group's administrators have the option to disable it.

In another example shown to the AP on Thursday, the agent caused confusion in a Boston-area forum for exchanging unwanted items. Exactly an hour after a Facebook user posted about searching for specific items, an AI agent offered up a "very rarely used" Canon camera and an "almost new portable air conditioner that I ended up never using."

Meta said in a written statement on Thursday: “This is a new technology and it may not always provide the response we intended, which is the same for all generative AI systems.” The company said it is constantly working to improve the features.

In the year after ChatGPT sparked a craze for AI technology that produces human-like writing, images, code and sound, the tech industry and academia launched some 149 large AI systems trained on massive data sets, more than double the number from the year before, according to a survey by Stanford University.

They may eventually reach their limits - at least when it comes to data, said Nestor Maslej, research manager at the Stanford Institute for Human-Centered Artificial Intelligence.

“It has been clear that the models can get better as you scale them to more data,” he said. “But at the same time, these systems are already trained on a significant share of all the data that has ever existed on the Internet.”

More data — acquired and ingested at costs only tech giants can afford, and increasingly subject to copyright disputes and lawsuits — will continue to drive improvements. “Yet they still can't plan well,” Maslej said. “They still hallucinate. They still make mistakes in their reasoning.”

Getting to AI systems that can perform higher cognitive tasks and rational thinking – where humans still excel – may require a shift beyond developing ever larger models.

With the flood of companies trying to adopt generative AI, the choice of model depends on several factors, including cost. In particular, language models have been used to power customer service chatbots, write reports and financial insights, and summarize long documents.

“You see companies looking at fit, testing each of the different models against their objectives and finding some that are better in some areas than others,” said Todd Lohr, a senior technology consultant at KPMG.

Unlike other model developers who sell their AI services to other companies, Meta develops its AI products largely for consumers — those who use its ad-supported social networks. Joelle Pineau, vice president of AI research at Meta, said at an event in London last week that the company's goal is to make a Llama-powered Meta AI the "most useful assistant in the world" over time.

“In many ways, the models we have today will be a walk in the park compared to the models that come out in five years,” she said.

However, she said the "question on the table" was whether researchers had managed to tune the larger Llama 3 model so that it was safe to use and did not, for example, produce hallucinations or hate speech. Unlike leading proprietary systems from Google and OpenAI, Meta has so far advocated a more open approach, making key components of its AI systems publicly available for others to use.

“It's not just a technical question,” Pineau said. “It's a social question. What behavior do we expect from these models? How do we achieve that? And if we make our model more and more general and powerful without properly socializing it, we will have a big problem on our hands.”