CHANG TSI
Insights
In recent years, the rapid development of generative artificial intelligence technology has been reshaping the ecosystems of various industries, showcasing remarkable creativity in areas ranging from content creation to image generation. Behind this technological wave, however, lie significant legal risks that cannot be ignored: the extensive use of copyright-protected works during model training may give rise to infringement disputes. From the introduction of the "Interim Measures for the Management of Generative Artificial Intelligence Services" to the series of rulings in the "Ultraman" cases, legislative and judicial practice have drawn increasingly clear compliance boundaries for enterprises. How can a balance be struck between technological innovation and copyright protection? This article analyzes the latest legislative developments and representative cases to identify the core risks and outline strategies enterprises can adopt in response.
China's current regulatory framework for generative AI centers on the "Regulations on the Administration of Deep Synthesis Internet Information Services" and the "Interim Measures for the Management of Generative Artificial Intelligence Services." The former targets deep synthesis technologies (such as face swapping and voice cloning) and emphasizes the prevention of false information, while the latter covers the entire lifecycle of generative AI services, explicitly requiring lawful data sources and imposing an obligation to protect intellectual property rights. The legislative logic reflects a "category-based regulation" approach: a relatively permissive stance toward the data input and model training stages to encourage technological innovation, combined with strict control over the content output stage to prevent generated materials from substantially replacing original works. For instance, Article 7 of the "Interim Measures" explicitly stipulates that enterprises must use data from lawful sources and, where copyrighted works are involved, obtain authorization or satisfy the conditions for fair use. This regulatory combination of leniency and strictness both leaves room for technological development and establishes a legal safeguard for rights holders.
Judicial practice has further refined the boundaries for determining infringement. In the "Guangzhou Ultraman Case," the defendant platform was found to have infringed the copyright holder's reproduction and adaptation rights because users could enter the prompt "Ultraman" and generate substantially similar images. Notably, while the court ordered the platform to cease the infringement and awarded damages of CNY 10,000, it rejected the plaintiff's request that the training data be deleted. This ruling sends a clear signal: at the current early stage of AI development, the judiciary is more inclined to block infringing outputs through technical measures than to negate the training process itself. Similarly, the second-instance ruling in the "Hangzhou Ultraman Case" focused on the platform's duty of care. The court held that although the platform did not directly participate in users' generation of infringing content, its failure to establish an effective complaint-handling mechanism constituted indirect infringement. Both cases turned on whether the platform fulfilled its reasonable duty of care and whether it acted in bad faith, examining in particular whether the platform could foresee the potential infringing consequences and whether it had put in place a complaint and reporting mechanism to prevent infringement. In addition, the determination of platform liability is closely tied to a platform's technical control capabilities and business model: fee-charging platforms, because they profit directly from the service, are held to a higher duty of care.
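To make the "technical measures" the courts credited more concrete, the sketch below shows a minimal prompt-screening filter of the kind a platform might deploy to block known infringing keywords before generation. It is purely illustrative: the blocklist entries, function names, and matching logic are assumptions for this example and do not describe any party's actual system.

```python
# Minimal, hypothetical sketch of a prompt-screening filter (illustrative only).
# The blocklist and function names are assumptions for this example; in practice,
# a platform would maintain the list from rights holders' complaints and takedown
# notices received through its complaint-handling mechanism.

BLOCKED_KEYWORDS = {"ultraman", "奥特曼"}  # terms flagged by rights holders (assumed)

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt may proceed to generation, False if it is blocked."""
    normalized = prompt.casefold()  # case-insensitive matching
    return not any(keyword in normalized for keyword in BLOCKED_KEYWORDS)

if __name__ == "__main__":
    for p in ["Ultraman fighting a monster", "a red-and-silver giant hero"]:
        print(f"{p!r}: {'allowed' if screen_prompt(p) else 'blocked'}")
```

Note that the second prompt above would slip past a pure keyword filter even though it might still elicit an infringing image; this limitation is precisely why the courts also examine whether a platform maintains an effective complaint and reporting mechanism rather than relying on filtering alone.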
Legal compliance is becoming a core competitive advantage for AI companies. Going forward, enterprises will need to build a risk control system driven by both technology and legal expertise: technical teams should focus on developing mechanisms that identify and block infringing content, while legal teams must track legislative developments as they occur and participate in the formulation of industry standards. In this collision between technological revolution and legal rules, only companies that proactively embrace compliance can achieve sustainable growth. From data cleansing to the labeling of generated content, from the design of user agreements to emergency response drills, every step must be informed by legal insight. For AI companies, compliance is not merely a shield against litigation but a cornerstone for earning market trust. As judicial rules take clearer shape, the companies that strike a balance between technological innovation and legal boundaries will ultimately gain a competitive edge in the AI wave.