China's New AI Ethics Framework: How 10 Government Departments Are Building Accountability Into Every AI Project
China has launched a comprehensive ethics review system for artificial intelligence, jointly issued by 10 government departments including the Ministry of Industry and Information Technology. The trial guideline establishes mandatory ethics reviews for AI research, development, and deployment, moving beyond principles and into measurable, auditable processes embedded throughout an AI project's lifecycle.
What Are the Three Core Values Driving China's AI Ethics Reviews?
Rather than creating a standalone AI law, China is building what experts describe as an "institutionalized ethics-compliance layer" around artificial intelligence development. The guideline centers on three main value axes that ethics review bodies must evaluate:
- Human Well-Being: Assessing the scientific and social value of AI projects, their contribution to human welfare and sustainable development, and the balance between potential risks and benefits.
- Fairness and Justice: Examining data selection criteria, algorithm design rationality, and measures to prevent bias, discrimination, and what the guideline calls "algorithmic exploitation" such as unfair treatment of particular groups or manipulative pricing strategies.
- Controllability and Trustworthiness: Ensuring AI systems remain robust in unpredictable environments, allow user control and intervention, and include continuous monitoring with contingency plans.
These themes echo earlier Chinese AI ethics documents, but the new guideline ties them explicitly to a formal review process with concrete checkpoints and technical requirements.
How Do Companies Actually Conduct These Ethics Reviews?
The guideline requires organizations to establish AI science and technology ethics committees that include experts in AI technology, application, ethics, and law. These committees must be guaranteed adequate staff, premises, funding, and independence to function effectively.
When a project leader applies for ethics review, they must submit detailed documentation including the project plan with background, objectives, participating institutions' qualifications, staff, funding sources, algorithm mechanisms, data sources and acquisition methods, testing and evaluation approaches, expected products, and target application fields. Applicants must also provide a comprehensive ethics risk assessment and prevention plan that identifies risks for the intended application, outlines monitoring and early-warning measures, and describes prevention strategies.
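The guideline does not prescribe a submission format, but the required documentation reads like a completeness checklist. As a minimal sketch, the application sections listed above could be tracked as a structured record with a validator that flags missing sections; all field names below are illustrative, not terms from the guideline.

```python
from dataclasses import dataclass, fields

@dataclass
class EthicsReviewApplication:
    """Illustrative checklist of the documentation the guideline requires.
    Field names are hypothetical; the guideline defines no schema."""
    project_background: str = ""
    objectives: str = ""
    institution_qualifications: str = ""
    staffing: str = ""
    funding_sources: str = ""
    algorithm_mechanism: str = ""
    data_sources_and_acquisition: str = ""
    testing_and_evaluation: str = ""
    expected_products: str = ""
    target_application_fields: str = ""
    risk_assessment_and_prevention_plan: str = ""

    def missing_sections(self) -> list[str]:
        """Return the names of required sections that are still empty."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

app = EthicsReviewApplication(objectives="Evaluate an LLM-based triage assistant")
print(app.missing_sections())  # every section except 'objectives' is still empty
```

A committee secretariat could reject incomplete submissions automatically before any substantive review begins, which is one practical way such a documentation mandate tends to be operationalized.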
The ethics committee or an entrusted service center must make a decision within 30 days of accepting the application, though this timeline can be extended with documented reasons. If an applicant disagrees with the decision, they can appeal within three working days, and the committee must re-decide within seven working days if the appeal provides sufficient justification.
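The decision and appeal windows are simple enough to compute mechanically. The sketch below assumes calendar days for the 30-day decision window and Monday-to-Friday working days for the appeal deadlines; the guideline's exact day-counting rules (holidays, extensions) may differ.

```python
from datetime import date, timedelta

def add_working_days(start: date, n: int) -> date:
    """Advance n Monday-to-Friday working days from start, skipping weekends."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # 0 = Monday .. 4 = Friday
            n -= 1
    return d

accepted = date(2024, 3, 1)                    # application accepted (example date)
decision_due = accepted + timedelta(days=30)   # decision within 30 days
appeal_by = add_working_days(decision_due, 3)  # appeal within 3 working days
redecision_by = add_working_days(appeal_by, 7) # re-decision within 7 working days

print(decision_due, appeal_by, redecision_by)
```

For the example acceptance date, the decision falls due on 2024-03-31, the appeal window closes on 2024-04-03, and a re-decision would be due by 2024-04-12.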
What Specific Technical Issues Must Ethics Reviews Address?
The guideline defines six key areas that ethics review bodies must examine. Beyond the three core values described above, reviewers must also assess transparency and explainability, responsibility and traceability, and privacy protection.
For transparency and explainability, committees must verify that AI systems disclose their purpose, logic, interaction methods, and potential risks, with technical means to make the system more understandable to users. Responsibility and traceability requires logging and measures that ensure full-chain traceability of data, algorithms, models, and systems, plus verification of personnel qualifications. Privacy protection demands safeguards for data collection, storage, processing, and use, as well as oversight of research on new data technologies.
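The guideline does not say how full-chain traceability must be implemented, but content-hashing each artifact and recording its upstream parents is one common approach. The sketch below is a minimal, hypothetical provenance log, not a format the guideline mandates.

```python
import datetime
import hashlib
import json

def artifact_fingerprint(payload: bytes) -> str:
    """Content hash that ties a log entry to an exact artifact version."""
    return hashlib.sha256(payload).hexdigest()

def log_entry(stage: str, artifact: str, payload: bytes, parents: list[str]) -> dict:
    """One link in the data -> model -> system provenance chain.
    'parents' holds fingerprints of upstream artifacts, so an auditor can
    walk the chain backwards from a deployed system to its training data."""
    return {
        "stage": stage,
        "artifact": artifact,
        "sha256": artifact_fingerprint(payload),
        "parents": parents,
        "logged_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

data_rec = log_entry("data", "train_set_v1.csv", b"...raw bytes...", parents=[])
model_rec = log_entry("model", "classifier_v1.bin", b"...weights...",
                      parents=[data_rec["sha256"]])
print(json.dumps(model_rec, indent=2))
```

Because each entry references its parents by hash, tampering with any upstream artifact breaks the chain, which is precisely the auditability property a traceability requirement is after.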
The guideline also pushes developers and operators to document dataset composition, justify model architectures and optimization targets, and demonstrate risk-mitigation measures, turning compliance evidence into a concrete engineering deliverable rather than a statement of intent.
How Is China Supporting the Technical Infrastructure for AI Ethics?
Rather than relying solely on human judgment, the guideline emphasizes building technical tools and infrastructure to support ethics reviews. China plans to promote "orderly" open-sourcing of high-quality datasets specifically designed for AI ethics review, allowing auditors and researchers to test systems and methods.
The government also aims to strengthen development of general risk-management, assessment, and auditing tools, including toolkits for testing robustness, detecting bias, improving explainability, and conducting red-teaming exercises where security experts attempt to break systems. Additionally, China plans to explore risk assessment based on application scenarios, with more demanding checks for high-risk uses like critical infrastructure, healthcare, or social governance.
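Bias-detection toolkits of the kind described here typically start with simple group-level metrics. The sketch below computes a demographic-parity gap (the spread in positive-outcome rates across groups), a standard fairness check; it is an illustrative example, not a procedure the guideline specifies.

```python
def demographic_parity_gap(outcomes: list[int], groups: list[str]) -> float:
    """Largest difference in positive-outcome rate between any two groups.
    0.0 means every group receives positive outcomes at the same rate."""
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval outcomes (1 = approved) for two groups.
outcomes = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(outcomes, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

An auditing toolkit would pair metrics like this with thresholds calibrated to the application scenario, which is consistent with the guideline's push for scenario-based risk assessment.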
The guideline includes industrial-policy elements designed to create market incentives for ethical AI. It encourages promotion and wider use of AI products and services that comply with scientific and technological ethics, creating a quasi-compliance market advantage. The framework also calls for protection of intellectual property rights in AI ethics review technologies themselves, such as proprietary tools, methods, and platforms used for ethical assessments.
What Happens When AI Ethics Risks Change After Approval?
The guideline establishes ongoing oversight mechanisms rather than one-time approvals. Mandatory follow-up reviews must occur at least every 12 months, and ethics committees can order suspension or termination of projects if ethics risks change significantly.
For lower-risk activities, minor modifications that don't worsen the risk-benefit ratio, or follow-up reviews without major changes, the guideline allows "simplified procedures" handled by at least two designated committee members. This tiered approach balances thoroughness with practicality, avoiding unnecessary bureaucracy for routine updates.
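The tiered logic above amounts to a small triage decision. The function below is a deliberately simplified sketch of that routing; the boolean inputs compress criteria that in practice require committee judgment.

```python
def review_track(risk_profile_changed: bool,
                 modification_worsens_risk_benefit: bool,
                 followup_has_major_changes: bool) -> str:
    """Route an application to the full or simplified review track,
    following the tiered approach described in the guideline.
    Inputs are an illustrative simplification of the actual criteria."""
    if (risk_profile_changed
            or modification_worsens_risk_benefit
            or followup_has_major_changes):
        return "full committee review"
    return "simplified procedure (at least 2 designated committee members)"

print(review_track(False, False, False))  # routine 12-month follow-up
print(review_track(True, False, False))   # ethics risk profile changed
```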
The Ministry of Industry and Information Technology and the Ministry of Science and Technology, along with other departments, will publish and update a list of AI activities requiring expert re-review. Projects on this list must undergo expert re-review organized by competent departments after initial committee or service-center review, adding an additional layer of scrutiny for the highest-risk applications.
Why Should Global AI Developers Care About China's Approach?
China's ethics framework signals a shift in how governments are approaching AI governance. Rather than waiting for comprehensive AI legislation, the guideline embeds accountability into the development process itself through mandatory reviews, technical auditing tools, and ongoing monitoring. This approach may influence how other countries structure their own AI governance frameworks, particularly those considering similar ethics-first strategies.
The guideline also reflects growing recognition that AI fairness and transparency require more than good intentions. By requiring documentation of dataset composition, justification of model design choices, and technical measures to prevent bias and discrimination, China is institutionalizing practices that many AI developers are already adopting voluntarily. For companies operating internationally or seeking to expand into Chinese markets, understanding these requirements becomes essential for compliance and market access.