News, Social Media

Meta Says It Will Keep Releasing AI Tools Despite Leaks

Written by Muhammad Muneeb Ur Rehman

Meta Platforms Inc said on Tuesday it will continue to release its artificial intelligence (AI) tools to approved researchers despite claims on online message boards that its latest large language model (LLM) had leaked to unauthorized users.

“While the model is not accessible to all, and some have tried to circumvent the approval process, we believe the current release strategy allows us to balance responsibility and openness,” Meta said in a statement.

Facebook owner Meta maintains a major AI research arm and last month released LLaMA, short for Large Language Model Meta AI. Meta claimed that the model can match the human-like conversational abilities of AI systems built by ChatGPT creator OpenAI and Alphabet Inc while using far less computing power.

Unlike some rivals such as OpenAI, which keeps its technology under tight wraps and charges software developers to access it, Meta’s AI research arm shares most of its work openly. But AI tools also carry the potential for abuse, such as creating and spreading false information.

To prevent such misuse, Meta makes its tools available under a non-commercial license, after a vetting process, to researchers and other entities affiliated with government, civil society, and academia.

In its statement, Meta said its LLaMA release was handled in the same way as previous models and that it does not plan to change its strategy.

“It’s Meta’s goal to share state-of-the-art AI models with members of the research community to help us evaluate and improve those models,” Meta said.

Meta’s goal isn’t simply to replicate GPT. It says that LLaMA is a “smaller, more performant model” than its peers, built to achieve the same feats of comprehension and articulation with a smaller compute footprint, and so a correspondingly smaller environmental impact. (The fact that it’s cheaper to run doesn’t hurt, either.)

But the company also sought to differentiate itself in another way, by making LLaMA “open”, implicitly pointing out that despite its branding, “OpenAI” is anything but. From its announcement:

“Even with all the recent advancements in large language models, full research access to them remains limited because of the resources that are required to train and run such large models. This restricted access has limited researchers’ ability to understand how and why these large language models work, hindering progress on efforts to improve their robustness and mitigate known issues, such as bias, toxicity, and the potential for generating misinformation.”

Written by Muhammad Muneeb Ur Rehman
Muneeb is a full-time News/Tech writer. He is a passionate follower of the IT progression of Pakistan and the world and wants to educate the people of Pakistan about tech affairs. His favorite part of being a tech writer is writing tech reviews and giving his readers an honest and clear verdict. Contact Muneeb on LinkedIn.