China has proposed regulations to prevent AI-powered chatbots from influencing human emotions in ways that could lead to suicide or self-harm. The Cyberspace Administration of China released the draft rules on Saturday.
The rules target what regulators describe as “human-like interactive AI services.” These include AI systems that simulate human personality traits and emotionally engage users through text, images, audio or video.
According to the BBC, once finalised, the rules will apply to all AI products and services in China. The public can submit comments on the draft until January 25.
Prohibited Content
The draft regulations set strict limits on what AI chatbots can say or do. Chatbots cannot generate content that encourages suicide or self-harm. Verbal violence or emotional manipulation that damages users’ mental health will be banned.
Generating gambling-related, obscene or violent content is also prohibited. Algorithmic manipulation that pushes users toward irrational or harmful decisions will be blocked.
Content that endangers national security, damages national honour and interests or undermines national unity is also on the banned list.
Mandatory Human Intervention for Suicide
The most notable provision concerns suicide-related conversations. When a user mentions suicide, AI providers must ensure a human immediately takes over the conversation.
Providers must also promptly notify the user’s guardian or designated emergency contact. This requirement applies especially to minors and elderly users.
Special Protections for Children
The draft rules include comprehensive safeguards for children. AI firms must offer personalised settings for minors, and time limits on usage will be required.
Parental consent must be obtained before providing emotional companionship services. This provision reflects concerns about AI companion apps’ impact on young users.
Why Now
AI’s influence on human behaviour has come under increasing scrutiny this year. OpenAI CEO Sam Altman said in September that one of the most difficult issues for the company is how its chatbot responds to suicide-related conversations.
A month earlier, a US family filed a lawsuit against OpenAI after their teenage son died by suicide. OpenAI announced over the weekend it is hiring a “Head of Preparedness” to assess AI risks including mental health impacts.
In China, AI companion apps and digital celebrities have rapidly proliferated. This month a woman in Japan married her AI boyfriend.
Global First
Winston Ma, an adjunct professor at NYU School of Law, said the rules would mark the world’s first attempt to regulate AI with human or anthropomorphic characteristics. Compared with China’s 2023 generative AI regulation, Ma said this version “highlights a leap from content safety to emotional safety.”
In the US, platforms such as Character.AI have faced lawsuits alleging harmful psychological effects on teenagers. In Europe, regulators have fined companies like Replika and ordered corrective measures.
Experts say these developments signal a global shift toward treating human-like AI as a high-risk category subject to closer regulation rather than purely experimental consumer technology.
For more information about AI safety, check out our “Is ChatGPT Safe” guide.
Impact on Companies
The rules come at a critical time for Chinese AI startups. Companies like Minimax and Z.ai have filed for Hong Kong IPOs. The regulations could affect these plans.
Providers crossing user thresholds or launching new anthropomorphic functions must conduct formal security assessments and file reports with regulators. App stores will be tasked with enforcing compliance through listing reviews and removals.
The draft also introduces regulatory sandboxes allowing controlled experimentation. This signals Beijing’s intent to permit innovation while conditioning AI growth on demonstrable compliance and social responsibility.
Data Controls
The draft imposes strict data controls on AI providers. Use of interaction data and sensitive personal information for model training requires explicit consent. Encryption and deletion options are mandatory.
Safeguards around emotional and behavioural data will be tightened. This addresses concerns about how AI companions collect and use intimate user information.
The framework combines tiered, risk-based supervision with full-chain governance spanning model development through deployment. Third-party evaluations will certify compliant AI systems, with ongoing monitoring required.

