China’s Cyberspace Administration has issued comprehensive draft regulations governing AI-generated “digital humans,” requiring prominent labeling, explicit consent for using personal likenesses, and special protections for minors amid concerns over emotional dependency and deepfake abuse.
The draft rules, released on April 3, 2026, and open for public comment until May 6, mark China’s latest attempt to balance its technology ambitions against the risks of unfettered AI development. The regulations arrive as AI avatars and virtual personas have proliferated across Chinese social media and e-commerce platforms. All virtual human content must carry a prominent “digital human” disclosure wherever it appears, so users can distinguish AI-generated personas from human-operated accounts.
The rules prohibit creating a digital human that resembles a real person without that person’s explicit consent, particularly where sensitive personal data is involved. Using someone’s image, voice, or other biometrics for modeling requires explicit, separate consent, and a guardian must approve any use of data from minors under 14.
The most sharply drawn provisions address minors: the rules prohibit digital humans from forming virtual intimate relationships with anyone under 18, including acting as simulated family members or romantic partners, and ban services that may induce excessive spending or harm physical or mental well-being.
Violations carry fines ranging from 10,000 yuan ($1,460 or Rs 406,400) to 200,000 yuan ($29,300 or Rs 8.1 million).

The draft comes amid growing public concern after a video of an elderly woman unknowingly chatting with a hyper-realistic avatar of her dead son garnered over 90 million views on Weibo.
The avatar, created by the company Super Brain, mimicked her son’s speech patterns and movements so closely that she believed it was him. State news agency Xinhua reported that China’s digital human industry was worth around 4.1 billion yuan ($600 million or Rs 16.7 billion) in 2024, up 85% year-over-year.
China is not the only country implementing strict regulations on AI-generated content and deepfakes. Multiple jurisdictions worldwide have enacted or are finalizing comprehensive frameworks to address similar concerns.
The European Union is implementing the most comprehensive framework through its AI Act, which comes into effect in August 2026. The legislation requires all AI-generated content, including deepfakes, to be clearly labeled as artificially manipulated. Violations can result in fines up to €35 million (Rs 11.3 billion) or 7% of a company’s global annual turnover.
The United Kingdom criminalized sharing non-consensual intimate deepfakes through the Online Safety Act 2023, with penalties up to two years in prison. The Data (Use and Access) Act 2025, which came into force on February 6, 2026, criminalized the creation of intimate images without consent.
The UK government launched the Deepfake Detection Challenge in February 2026, bringing together over 350 participants including INTERPOL.

The United States passed its first federal deepfake law in May 2025 with the TAKE IT DOWN Act, which criminalizes non-consensual intimate imagery, including AI-generated deepfakes.
Platforms must remove such content within 48 hours, with penalties including monetary fines and custodial sentences up to three years. Over 45 states have enacted their own deepfake laws, with Tennessee’s ELVIS Act protecting voice and likeness as property rights.
Australia criminalized deepfake sexual material in 2024, making it a federal crime, punishable by up to six years’ imprisonment, to create or share realistic fake intimate images without consent.
Germany is considering criminal penalties following a high-profile case involving TV presenter Collien Ulmen-Fernandes.
