Meta intends to replace as many as 90% of its internal human risk assessment reviewers in key applications, including WhatsApp, Facebook, and Instagram. The company is reportedly aiming to make monitoring and compliance processes across its ecosystem faster and more effective. AI experts and human rights advocates, however, have criticized the move, warning that it could come back to haunt the company.
Meta Human Reviewers vs. AI
Internal documents indicate that AI may also evaluate sensitive areas such as AI safety, youth risk, and content integrity, despite Meta’s assertion that human evaluators will continue to handle novel and complex issues. Current and former employees worry that the change will reduce scrutiny and allow unintended consequences to go unnoticed.
Critics contend that heavy reliance on AI for risk assessments could allow features to be approved without thorough human evaluation, leading to real-world harm. A former Meta executive warned that the new process could enable faster launches with less rigorous scrutiny, raising the likelihood of negative consequences.
Will Meta’s AI Meet Regulatory Requirements?
Meta is automating risk assessments as regulatory scrutiny intensifies. A 2012 agreement with the U.S. Federal Trade Commission requires Meta to conduct privacy reviews for all new products. The company claims the AI-driven process meets this requirement and emphasizes that human judgment will still handle complex cases.
The EU, under the stricter requirements of the Digital Services Act, still mandates human review and oversight. To remain compliant with local legislation, Meta is reportedly not deploying its AI-driven risk evaluations in EU regions.
Ultimately, Meta’s AI integration depends on striking a balance between efficiency and meaningful oversight, especially in areas that affect user safety and privacy.