Navigating Regulatory Trends and Best Practices for Ethical AI Governance in Mainland China

Carol Lee, Terence Law, Tom Huang and Lanis Lam
Authors: Carol Lee, CISM, CRISC, CDPSE, C|CISO, CCSP, CEH, CIPM, CSSLP; Terence Law, CISA, CFA, CISSP, CPA, Certified Banker; Tom Huang, CISA, PMP, Prince2; Lanis Lam, CISA, AWS (SAP), CPA
Date Published: 14 August 2023

Artificial intelligence (AI) adoption is on the rise. According to a World Economic Forum report, 75% of enterprises plan to adopt AI within the next 5 years.1 This growing adoption underscores the urgency of robust governance frameworks to ensure responsible and ethical AI practices. Governments around the world have responded by formulating regulations to govern AI development and deployment, including the proposed European Union (EU) AI Act,2 Singapore's Model AI Governance Framework 2.03 and the proposed United States Algorithmic Accountability Act.4 As a pivotal player in the global AI landscape, Mainland China has released several regulations, including, most recently, the Measures for the Management of Generative Artificial Intelligence Services, first issued in draft form in April 2023. It is essential for organizations and technology risk professionals to understand the evolving AI regulatory landscape shaping the future of AI in Mainland China.

Overview of Mainland China's AI Landscape

China has built a solid foundation for the AI economy and its talent pool over the past decade and continues investing to become a powerhouse in AI adoption. China's aggregate private AI investment reached US$95.1 billion by 2022, ranking the country second globally (the United States is first with US$248.9 billion invested; the United Kingdom is third with US$18.2 billion invested).5 China also produces the most AI-focused journal, conference and repository publications,6 making significant contributions to AI globally. In addition, China’s search giant Baidu announced the launch of Ernie Bot, a ChatGPT-like app.7 The tech giant Alibaba developed its own ChatGPT-style generative AI tool, Tongyi Qianwen, in 2023.8

China Regulatory Framework and Laws on AI

China has taken a technology-driven, harm-reduction approach to cyberspace and AI governance.

How It Started
In 2017, China's Cybersecurity Law came into effect, laying out data protection and network security provisions.9, 10 Its data protection provisions cover AI-related data and emphasize the need to conduct regular risk assessments and implement measures to protect against cyberthreats.

In 2018, the Cyberspace Administration of China (CAC) released the Provisions on the Security Assessment of Internet Information Services Having Public Opinion Properties or Social Mobilization Capacity.11, 12 These provisions state that enterprises should conduct security self-assessments when they bring new services online, introduce new technologies or applications, or expand existing services in functionality or user base. This requirement helps pave the way for the ethical and responsible use of AI in online services with significant societal influence.

In 2019, a new regulation for online audio and video content management was published to address the risk of deepfakes.13, 14 It requires online audio and video service providers to obtain the appropriate qualifications before allowing users to publish audio or video posts.

The Personal Information Protection Law
Enacted in 2021, this law focuses on the protection of personal information and sets strict requirements for the collection, storage, processing and transfer of such data.15 Enterprises that plan to adopt AI in their products or service offerings should design those products and services to handle data subjects' opt-out requests for automated decision making and to retain processing records of automated decision-making activities for at least 3 years.
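
To make the requirement more concrete, the minimal Python sketch below shows one way an enterprise might retain automated decision-making records for at least 3 years and block further automated decisions for data subjects who opt out. The class names, fields and workflow are illustrative assumptions, not anything prescribed by the law.

```python
# Illustrative sketch only: the PIPL does not prescribe a record schema or
# opt-out workflow. The class names, fields and retention logic are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Set

RETENTION_PERIOD = timedelta(days=3 * 365)  # keep records for at least 3 years

@dataclass
class AutomatedDecisionRecord:
    subject_id: str
    model_version: str
    decision: str
    decided_at: datetime
    retain_until: datetime = field(init=False)

    def __post_init__(self) -> None:
        # Earliest date the record may be purged, per the retention requirement
        self.retain_until = self.decided_at + RETENTION_PERIOD

class AutomatedDecisionLog:
    """Retains processing records and honors data subjects' opt-out requests."""

    def __init__(self) -> None:
        self._records: List[AutomatedDecisionRecord] = []
        self._opted_out: Set[str] = set()

    def record_decision(self, record: AutomatedDecisionRecord) -> None:
        if record.subject_id in self._opted_out:
            raise PermissionError("Subject has opted out of automated decision making")
        self._records.append(record)

    def handle_opt_out(self, subject_id: str) -> None:
        # Stop future automated decisions; existing records stay for audit purposes.
        self._opted_out.add(subject_id)

    def purge_expired(self, now: datetime) -> None:
        # Remove only records whose minimum retention period has elapsed.
        self._records = [r for r in self._records if r.retain_until > now]
```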

Draft Security Specification and Assessment Methods for Machine Learning Algorithms
This draft standard was released for public consultation in 2021.16 It outlines not only the security requirements of machine learning algorithms but also the security of algorithm applications across the complete system life cycle (i.e., the design, development, verification, testing, deployment, operation, maintenance, upgrading, decommissioning and removal stages). It provides recommendations on confidentiality, integrity, availability, controllability, robustness and privacy indicators for the assessment. It sets out a foundation for responsible AI by design.
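
One way to operationalize such an assessment is an indicator-by-stage matrix. The sketch below uses the lifecycle stages and indicators listed in the draft; the data structure, status values and function names are assumptions for illustration only.

```python
# Illustrative sketch only: the draft standard names these lifecycle stages and
# assessment indicators, but the tracking structure and status values are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

LIFECYCLE_STAGES = [
    "design", "development", "verification", "testing", "deployment",
    "operation", "maintenance", "upgrading", "decommissioning", "removal",
]

INDICATORS = [
    "confidentiality", "integrity", "availability",
    "controllability", "robustness", "privacy",
]

@dataclass
class StageAssessment:
    """Holds the assessment status of each indicator for one lifecycle stage."""
    results: Dict[str, str] = field(
        default_factory=lambda: {i: "not_assessed" for i in INDICATORS}
    )

def build_assessment_plan() -> Dict[str, StageAssessment]:
    """Create an empty indicator-by-stage assessment matrix."""
    return {stage: StageAssessment() for stage in LIFECYCLE_STAGES}

def outstanding_items(plan: Dict[str, StageAssessment]) -> Dict[str, List[str]]:
    """List, per stage, the indicators that have not yet been assessed."""
    return {
        stage: [i for i, status in assessment.results.items() if status == "not_assessed"]
        for stage, assessment in plan.items()
    }
```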

The Ethical Norms for the New Generation Artificial Intelligence
This 2021 document17 puts forward 6 fundamental ethical requirements:18

  • Enhance the well-being of humankind—AI products and services should respect human rights and aim to improve people’s livelihood and the economy’s sustainable development.
  • Promote fairness and justice—AI products and services should respect diversity, encourage inclusivity, help vulnerable and underrepresented groups, and provide alternatives to these groups when needed.
  • Protect privacy and security—AI products and services should handle personal information based on principles of lawfulness, justifiability, necessity and integrity.
  • Ensure controllability and trustworthiness—AI products and services should give users full decision-making power. Users have the right to suspend the operation of AI products or services on their information at any time, and AI products and services should remain under human control to ensure the correctness and quality of their algorithms.
  • Strengthen accountability—Human beings should be the ultimate liable subjects of any AI products and services. The responsibility of all stakeholders should be clarified and clearly stated.
  • Improve ethical literacy—Awareness of AI risk and issues should be promoted. AI ethics and governance should be actively promoted and adopted.

Internet Information Service Algorithmic Recommendation Management Provisions
In 2022, these provisions19, 20 took effect to govern the recommendation algorithms used within Mainland China to suggest what to buy, watch or read.

The scope includes recommender or similar content-decision algorithms in apps and websites used in Mainland China, algorithmic recommendation mechanisms, and services implemented by third parties (e.g., Douyin, Little Red Book, Taobao) to promote ecommerce products.

In EU General Data Protection Regulation (GDPR) terms, algorithmic recommendation technologies are equivalent to automated decision making or profiling. Most notably, the provisions lay down the technical and organizational standards for the fair use of algorithmic recommendation technologies and introduce a new set of user rights. Organizations in Mainland China that employ such technologies should review their policies and practices and adjust them as needed.

Cross-Border Transfers
In 2022, the Security Assessment Measures on Cross-border Transfers of Data were created to regulate cross-border data transfers and protect the rights and interests of Chinese citizens regarding their personal information.21 Data processors that transfer personal data (including data processed by AI) out of China must file a self-assessment with the CAC. The self-assessment, similar to the EU GDPR Data Protection Impact Assessment (DPIA), aims to assess:

  • The legitimacy and necessity of processing the data by the recipient outside of China’s jurisdiction
  • The volume, scope, data category and personal sensitivity of the outbound data
  • Appropriate obligations and technical measure management undertaken by the recipient outside China’s jurisdiction
  • Whether data protection responsibilities and obligations are sufficiently stipulated in the contract

The approval of a cross-border data transfer assessment is valid for 2 years.
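
A hedged sketch of how a data processor might track these self-assessments is shown below. The measures define the assessment criteria and the 2-year validity, not this schema; the field names and date logic are assumptions.

```python
# Illustrative sketch only: the measures define the assessment criteria and the
# 2-year validity, not this schema. Field names and date logic are assumptions.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

APPROVAL_VALIDITY = timedelta(days=2 * 365)  # an approved assessment is valid for 2 years

@dataclass
class CrossBorderTransferAssessment:
    recipient: str
    recipient_jurisdiction: str
    # Summaries covering the four areas the self-assessment addresses
    legitimacy_and_necessity: str
    data_volume_scope_category_sensitivity: str
    recipient_obligations_and_technical_measures: str
    contract_stipulates_responsibilities: bool
    approval_date: Optional[date] = None

    def valid_until(self) -> Optional[date]:
        """Expiry date of an approved assessment, or None if not yet approved."""
        if self.approval_date is None:
            return None
        return self.approval_date + APPROVAL_VALIDITY

    def needs_reassessment(self, today: date) -> bool:
        """True when there is no approval or the approval has expired."""
        expiry = self.valid_until()
        return expiry is None or today >= expiry
```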

Generative AI
The Provisions on the Administration of Deep Synthesis Internet Information Services went into effect on 10 January 2023.22, 23 These new provisions extend the Algorithmic Recommendation Management Provisions to govern generative AI—algorithms used to create text, images, audio, video, virtual scenes or virtual digital humans. Criminals have used generative AI to produce, copy and disseminate illegal or false information or to assume other people's identities to commit fraud. To curb such abuse, the new provisions impose obligations on deep synthesis technology providers (who use mixed datasets and algorithms to produce synthetic content, such as deepfakes) and on users. The provisions outline that:

  • Fake news is prohibited.
  • Providers must verify users' real identity information.
  • Providers must monitor for illegal or negative information.
  • Providers must have convenient portals for user appeals and public complaints.
  • App stores must review deep synthesis services for safety.
  • Services that enable the editing of biometric information, such as faces or voices, must prompt users to notify the individuals whose information is being edited and obtain their explicit consent.
  • Security assessments are required for services that generate or edit biometric information.
  • Generated content must carry a technical but unobtrusive indication that it is generated (see the illustrative sketch after this list).

In addition, the Interim Measures for the Management of Generative Artificial Intelligence Services were released in July 2023 and take effect on 15 August 2023. These measures are designed to manage generative AI products and services provided to Mainland China’s general public, such as ChatGPT-like services, and to address issues regarding their use, such as content moderation, information distortion and abuse, algorithmic bias and prejudice, and transparency.24, 25, 26 The measures reinforce existing legal requirements and impose additional obligations on generative AI service providers, including:

  • Implement measures to prevent discrimination based on race, ethnicity, religion and nationality.
  • Respect the intellectual property rights of others and refrain from using algorithms, data and platforms for unfair competition.
  • Take measures to ensure the authenticity of generated information and avoid the generation of disinformation.
  • Mark pictures, videos and other generated content in accordance with the Provisions on the Administration of Deep Synthesis Internet Information Services. Data labeling rules must be formulated clearly, with quality assessment and assurance measures.
  • Sign service agreements with users that specify the rights and obligations of both parties.
  • Establish a complaint and report mechanism to accept and process individuals’ requests to review, correct and delete their personal information in a timely manner, in accordance with the Personal Information Protection Law (a minimal tracking sketch follows this list).
  • Perform filings and security assessments on generative AI algorithms in accordance with the Provisions on the Management of Algorithmic Recommendations in Internet Information Services if the service has public opinion properties or the capacity for social mobilization.
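
As a starting point for the complaint and report mechanism, the sketch below tracks individuals' review, correction and deletion requests and flags those that miss an internal response target. The request types, the 15-day target and the queue design are assumptions, not requirements set by the measures.

```python
# Illustrative sketch only: the interim measures require a complaint and report
# mechanism and timely handling, but the request types, deadline and queue
# design below are assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum
from typing import List, Optional

class RequestType(Enum):
    REVIEW = "review"
    CORRECT = "correct"
    DELETE = "delete"

RESPONSE_TARGET = timedelta(days=15)  # assumed internal target, not a legal deadline

@dataclass
class PersonalInfoRequest:
    subject_id: str
    request_type: RequestType
    received_at: datetime
    resolved_at: Optional[datetime] = None

    def is_overdue(self, now: datetime) -> bool:
        return self.resolved_at is None and now > self.received_at + RESPONSE_TARGET

class ComplaintDesk:
    """Accepts and tracks individuals' review, correction and deletion requests."""

    def __init__(self) -> None:
        self._requests: List[PersonalInfoRequest] = []

    def submit(self, request: PersonalInfoRequest) -> None:
        self._requests.append(request)

    def overdue(self, now: datetime) -> List[PersonalInfoRequest]:
        # Surface requests that have missed the internal response target.
        return [r for r in self._requests if r.is_overdue(now)]
```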

Conclusion

Mainland China is gradually enriching its responsible AI ecosystem through various regulations based on evolving risks and threats. While AI regulations in many countries have not yet been enacted, China has been imposing mandatory assessments since 2017. Enterprises that plan to adopt AI and that operate within China or serve users in China must understand how to comply with all of Mainland China's AI-related regulations. Closely monitoring global regulatory developments is vital to building tomorrow's business.

Endnotes

1 World Economic Forum, The Future of Jobs Report 2023, Switzerland, 2023
2 European Commission, “A European Approach to Artificial Intelligence”
3 Info-Communications Media Development Authority and Personal Data Protection Commission Singapore, Model Artificial Intelligence Governance Framework 2.0, Singapore, 21 January 2020
4 US Congress, Algorithmic Accountability Act of 2022, S.3572, 117th Congress, USA, 3 February 2022
5 Stanford Institute for Human-Centered Artificial Intelligence, Artificial Intelligence Index Report 2023, USA, 2023
6 AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, The AI Index 2023 Annual Report, USA, April 2023
7 Baidu, Inc., “Baidu Announces First Quarter 2023 Results,” 16 May 2023
8 Alibaba Group, “Alibaba Cloud Unveils New AI Model to Support Enterprises’ Intelligence Transformation,” 11 April 2023
9 Cyberspace Administration of China, “China Cybersecurity Law,” 7 November 2016
10 Creemers, R.; G. Webster; P. Triolo; “Translation: Cybersecurity Law of the People’s Republic of China,” Stanford University, Stanford, California, USA, 29 June 2018
11 Cyberspace Administration of China, “Internet Information Service Security Assessment Regulations With Public Opinion Attributes or Social Mobilization Capabilities,” 15 November 2018
12 Creemers, R.; “New Rules Target Public Opinion and Mobilization Online in China,” New America, 21 November 2018
13 Cyberspace Administration of China, “Notice on Printing and Distributing the ‘Regulations on the Administration of Network Audio and Video Information Services,’” 29 November 2019
14 China.org.cn, “China Issues Regulation for Online Audio, Video Services,” 30 November 2019
15 Ke, X.; V. Liu; Y. Luo; Z. Yu; “Analyzing China's PIPL and How It Compares to the EU's GDPR,” International Association of Privacy Professionals, 24 August 2021
16 National Information Security Standardisation Technical Committee of China, Information Security Technology—Security Specification and Assessment Methods for Machine Learning Algorithms, 4 August 2021
17 Ministry of Science and Technology of the People’s Republic of China, “New Generation Artificial Intelligence Code of Ethics’ Released,” 26 September 2021
18 International Research Center for AI Ethics and Governance, “The Ethical Norms for the New Generation Artificial Intelligence, China,” 27 September 2021
19 Creemers, R.; G. Webster; H. Toner; “Translation: Internet Information Service Algorithmic Recommendation Management Provisions—Effective March 1, 2022,” Stanford University, Stanford, California, USA, 10 January 2022
20 Cyberspace Administration of China, “Provisions on the Administration of Internet Information Service Algorithm Recommendations,” 4 January 2022
21 Office of the Privacy Commissioner for Personal Data, Hong Kong, “Mainland’s Personal Information Protection Law”
22 Cyberspace Administration of China, “Provisions on the Administration of Deep Synthesis of Internet Information Services,” 11 December 2022
23 Allen & Overy, “China Brings Into Force Regulations on the Administration of Deep Synthesis of Internet Technology,” 1 February 2023
24 Cyberspace Administration of China, “Interim Measures for the Management of Generative Artificial Intelligence Services,” 13 July 2023
25 Data Guidance, “China: CAC publishes Interim Measures on Generative AI,” 13 July 2023
26 China Law Translate, “Translation: Interim Measures for the Management of Generative Artificial Intelligence Services,” 13 July 2023

Carol Lee, CISM, CRISC, CDPSE, C|CISO, CCSP, CEH, CIPM, CSSLP

Is the head of cybersecurity at a Hong Kong-listed company. She leads the enterprise-wide cybersecurity and privacy program to support the cloud-first and enterprise digital transformation strategies. She was named to the Global 100 Certified Ethical Hacker Hall of Fame and received the Hong Kong Cyber Security Professionals Award in recognition of her determination and commitment to assuring the safety of the cyberworld.

Terence Law, CISA, CFA, CISSP, CPA, Certified Banker

Is the director of emerging technology of the ISACA China Hong Kong Chapter. He is the head of innovation and data for the internal audit department of a leading financial institution. With more than 20 years of experience in auditing, cybersecurity and emerging technologies, he is passionate about giving back to the community and volunteers his time and expertise to promote the adoption of industry-leading practices.

Tom Huang, CISA, PMP, Prince2

Is the head of technology and operational risk management at Livibank. He has extensive experience in the financial services industry, with expertise in risk management across technology, cybersecurity, data and the cloud. He leverages technologies to develop innovative and safe solutions that enhance customer experience in day-to-day life. He participates in industry forums and is always happy to share insights with the community.

Lanis Lam, CISA, AWS (SAP), CPA

Is a partner in the technology consulting practice of KPMG China. She is a leader in the field of technology risk management with more than 17 years of experience. Her expertise spans various areas, such as artificial intelligence, cloud computing, third-party risk management, cybersecurity and system resilience. She is passionate about discussing and sharing ideas on emerging technologies.