Core interpretation of the "Measures for the Identification of Artificial Intelligence Generated Synthetic Content"



The "Measures for the Identification of Artificial Intelligence-Generated Synthetic Content" were jointly issued by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration of Radio and Television, and come into force on September 1, 2025. The measures stipulate that all AI-generated synthetic content (text, images, audio, video, virtual scenes, etc.) must carry both explicit and implicit identifications, in order to protect the public's right to know, safeguard network information security, and support content traceability and supervision.


1. Scope of application and legal basis

  1. Scope of application: all platforms, enterprises, and related users that provide AI-generated synthetic content services.
  2. Legal basis: the Cybersecurity Law, the Provisions on the Administration of Deep Synthesis, the Interim Measures for the Administration of Generative Artificial Intelligence Services, and other relevant regulations.


2. Explicit identification required

  1. Text: Mark prompts such as "AI-generated" at the beginning, end, or in a prominent position on the interface.
  2. Images and videos: Embed a visible label within the content and display a prompt at the beginning, end, or during playback of the video.
  3. Audio: Provide a voice prompt, or display a prominent label in the playback interface.
  4. Virtual scenes: Display an obvious prompt on entry or during use.
  5. Downloaded or exported content must retain its explicit identification, which must not be removed.
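As an illustration of the text requirement above, a service might attach an explicit label to generated output before display. This is a minimal sketch; the label wording, language, and the `position` parameter are assumptions for illustration, not wording mandated by the measures:

```python
def add_explicit_label(generated_text: str, position: str = "start") -> str:
    """Attach an explicit "AI-generated" notice to model output.

    The label text and positions here are illustrative assumptions;
    the measures only require a prompt at the beginning, end, or in a
    prominent position on the interface.
    """
    label = "[AI-generated]"
    if position == "start":
        return f"{label} {generated_text}"
    return f"{generated_text} {label}"


print(add_explicit_label("Here is a summary of the report."))
```

A real service would also need to keep the label intact when the content is downloaded or exported, per item 5 above.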


3. Implicit identification requirements

  1. Information such as generation attributes, service provider information, and a content number must be written into the file metadata.
  2. The use of tamper-resistant technologies such as digital watermarks is encouraged.
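The metadata requirement above can be sketched as a small record attached to a generated file. The field names and the JSON encoding here are hypothetical; the actual schema is defined by the supporting national standard and coding rules:

```python
import json


def build_implicit_metadata(provider: str, content_id: str) -> str:
    """Serialize implicit-identification metadata for a generated file.

    Field names are hypothetical assumptions; the national standard
    defines the real schema. The record covers the three items the
    measures name: generation attribute, service provider, and
    content number.
    """
    record = {
        "ai_generated": True,          # generation attribute
        "service_provider": provider,  # who produced the content
        "content_id": content_id,      # traceable content number
    }
    return json.dumps(record, ensure_ascii=False)


print(build_implicit_metadata("example-provider", "2025-0001"))
```

Unlike the explicit label, this record is not shown to users directly; it travels with the file so that platforms and regulators can trace the content's origin.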


4. Platform responsibility

  1. Content dissemination platforms must prominently flag content that contains implicit identification.
  2. Prompts should also be shown for content detected as, or suspected to be, AI-generated.
  3. Platforms should provide a user declaration function and record the dissemination elements of the content.
  4. When reviewing apps for listing, application distribution platforms must confirm whether the app provides AI-generated synthetic content services and verify that identification has been implemented.


5. User responsibilities and prohibited behaviors

  1. Users who need explicit identification removed must sign an agreement, and operation records must be kept for no less than six months.
  2. It is prohibited to forge, tamper with, delete, or conceal explicit or implicit identifications.
  3. It is prohibited to provide tools or services that help others remove identifications.


6. Implementation and supporting standards

  1. The measures take effect alongside the supporting national standard "Cybersecurity Technology — Labeling Method for Artificial Intelligence-Generated Synthetic Content".
  2. Supporting guidelines and coding rules will also be implemented to help platforms and enterprises comply.


FAQs

Q: What is the difference between explicit and implicit identification?

A: Explicit identification is a prompt that users can see directly, while implicit identification is information embedded in file metadata or digital watermarks.

Q: What happens if I violate the marking requirements?

A: Relevant departments will investigate legal responsibility in accordance with the law, and may face administrative penalties or even criminal liability if the circumstances are serious.

Q: How long do platforms have to complete their transformation?

A: The measures come into effect on September 1, 2025; platforms must complete their system and process changes before that date.
