Meta unveils more cautious approach to ChatGPT frenzy

Meta has unveiled its own version of the artificial intelligence behind apps such as ChatGPT. Photo: Lionel BONAVENTURE / AFP/File


Facebook-owner Meta on Friday unveiled its own version of the artificial intelligence behind apps such as ChatGPT, saying it would give researchers access to the model so they can find fixes for the technology's potential dangers.

Meta described its own AI, called LLaMA, as a "smaller, more performant" model designed to "help researchers advance their work," in what could be seen as veiled criticism of Microsoft's decision to release the technology widely, while keeping the programming code secret.

Microsoft-backed ChatGPT has taken the world by storm with its ability to generate finely crafted texts such as essays or poems in just seconds, using technology known as large language models (LLMs).

LLMs are part of a field known as generative AI, which also includes the capacity to generate images, designs or programming code almost instantaneously upon a simple request.


Usually the more staid actor in big tech, Microsoft has deepened its partnership with OpenAI, the creator of ChatGPT, and earlier this month announced the technology would be integrated into its Bing search engine as well as the Edge browser.

Google, seeing a sudden threat to the dominance of its search engine, quickly announced that it would soon release its own language AI, known as Bard.


But reports of disturbing exchanges with Microsoft's Bing chatbot -- including it issuing threats and speaking of desires to steal nuclear codes or lure one user away from his wife -- went viral, raising alarm bells that the technology was not ready.

Meta said these problems, sometimes called hallucinations, could be better remedied if researchers had improved access to the expensive technology.

Thorough research "remains limited because of the resources that are required to train and run such large models," the company said.


This was hindering efforts "to improve their robustness and mitigate known issues, such as bias, toxicity, and the potential for generating misinformation," Meta said.

OpenAI and Microsoft strictly limit access to the technology behind their chatbots, drawing criticism that they are choosing potential profits over improving the technology more quickly for society.

"By sharing the code for LLaMA, other researchers can more easily test new approaches to limiting or eliminating these problems," Meta said.


Source: AFP

Author: AFP