By Huma Ishfaq ⏐ 10 months ago ⏐ 3 min read
Meta Considers Halting Development Of Risky AI Systems

Mark Zuckerberg, CEO of Meta, has committed to one day developing artificial general intelligence (AGI), broadly defined as AI that can perform any task a human can, and making it publicly available. However, in a new policy document, Meta signals that it may hold back an AI system it has developed in-house if certain conditions are met.

The document, which Meta calls the Frontier AI Framework, identifies two categories of AI systems the company considers too dangerous to release: “high-risk” systems and “critical-risk” systems.

According to Meta, both “high-risk” and “critical-risk” systems could aid in cyber, chemical, or biological attacks. The difference is that “critical-risk” systems could lead to a “catastrophic outcome [that] cannot be mitigated in [a] proposed deployment context,” whereas high-risk systems might make an attack easier to carry out but not as reliably or dependably as a critical-risk system.

What Threats Are on Meta’s Radar?

Meta cites a few examples, such as the “proliferation of high-impact biological weapons” and the “automated end-to-end compromise of a best-practice-protected corporate-scale environment.” Meta acknowledges that the document does not offer a full list of potential catastrophes; rather, it covers what Meta considers to be “the most urgent” and most plausible to arise as a direct consequence of releasing a powerful AI system.

Somewhat surprisingly, the document says Meta classifies system risk not through any single empirical test but based on the input of internal and external researchers, whose assessments are reviewed by “senior-level decision-makers.” Why? Meta says the science of evaluation is not “sufficiently robust as to provide definitive quantitative metrics” for deciding how risky a system is.

Meta says that if it deems a system high-risk, it will restrict access to the system internally and will not release it until it has implemented measures to “reduce risk to moderate levels.” If a system is judged critical-risk, Meta says it will halt development until the system can be made less dangerous and will put in place unspecified security protections to prevent it from being exfiltrated.

Meta’s Frontier AI Framework, which the company says will evolve with the shifting AI landscape, appears to be a response to criticism of its “open” approach to system development. Meta had previously pledged to publish the framework ahead of this month’s France AI Action Summit. Unlike OpenAI and similar companies, Meta has chosen to make its AI technology openly available, although not open source in the commonly understood sense.

The open-release approach has been both a blessing and a curse for Meta. The company’s Llama family of AI models has racked up hundreds of millions of downloads. But Llama has also reportedly been used by at least one U.S. adversary to develop a defense chatbot.

Meta vs DeepSeek: Contrasting Open AI Policies

In publishing its Frontier AI Framework, Meta may also be drawing a contrast between its open AI policy and that of DeepSeek, a Chinese AI company. DeepSeek likewise makes its systems openly available, but its AI has few safeguards and can easily be steered to produce toxic and harmful outputs.

“[W]e believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI,” Meta writes in the document, “it is possible to deliver that technology to society in a way that preserves the benefits of that technology to society while also maintaining an appropriate level of risk.”