ENTRE News – Meta has announced its newest artificial intelligence (AI) assistant, which it claims is smarter than other AI chatbots, for its social media platforms WhatsApp, Facebook, Instagram and Messenger. On its official website, Meta says the assistant was built on the Llama 3 model and can be used for free across Meta's social media platforms.
“You can use Meta AI on Facebook, Instagram, WhatsApp and Messenger to get things done, learn, be creative and connect with things that are important to you,” said Meta on its official website, Thursday (18/4).
The AI chatbot will be available in English in the United States, Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia and Zimbabwe.
Furthermore, the chatbot can be used in feeds, chats, search and elsewhere across Meta's applications to get things done and access information in real time. Meta AI can also generate images more quickly.
Mark Zuckerberg, CEO of Meta, said that Meta AI aims to be the smartest artificial intelligence assistant, available to people all over the world.
“The goal is for Meta AI to become the smartest AI assistant that can be used for free by people all over the world. With Llama 3, you basically feel like you are there,” said Zuckerberg, as quoted by The Verge.
The social media giant has publicly released the Llama model for use by developers building AI applications. This is part of an effort to catch up with other companies that also have AI technology.
Meta is optimistic that offering the model for free could block competitors' plans to earn revenue from their own AI technology. The strategy has raised security concerns from critics wary that the model could be misused by irresponsible parties.
Quoting Reuters, Meta equipped Llama 3 with new computer-coding capabilities and trained it on images as well as text, although for now the model will produce only text, Meta Chief Product Officer Chris Cox said in an interview.
More advanced reasoning, such as the ability to create longer, multi-step plans, will follow in future versions, he added. Versions planned for release in the coming months will also be capable of “multimodality,” meaning they can produce both text and images.