In-depth Analysis Series on the Latest "Draft Amendments to the Patent Examination Guidelines (Solicitation for Comments)" Regarding New Examination Rules for Artificial Intelligence and Video Codec Fields

CHANG TSI
Insights

August 5, 2025

(Article 2) Changes in Drafting Rules for AI-Related Patent Applications, Corresponding Cases, and Analysis with Recommendations

On April 30, 2025, the China National Intellectual Property Administration (CNIPA) issued a notice soliciting public opinions on the "Draft Amendments to the Patent Examination Guidelines (Solicitation for Comments)." The notice was accompanied by a comparison table of the amendments and explanatory notes, inviting feedback from the public.   

Among the revisions, the most notable updates include changes to the examination rules for the artificial intelligence field in Section 6, Chapter 9, and the introduction of examination rules for the video codec field in Section 7, Chapter 9. Specifically:  

  1. In the first and second parts of Section 6, Chapter 9, the examination criteria for artificial intelligence-related patent applications have been revised, with corresponding case studies provided.  
  2. In the third part of Section 6, Chapter 9, new requirements for drafting specifications for artificial intelligence-related patent applications have been introduced, along with relevant case studies.  
  3. In Section 7, Chapter 9, new provisions have been added for the examination of invention patent applications containing bitstreams.  

This series, authored by Irene, provides an in-depth analysis of these changes in examination rules for artificial intelligence and video encoding/decoding fields. The series is divided into three articles corresponding to the above-mentioned amendments (1)-(3). This article focuses on analyzing the changes in drafting rules for AI-related patent applications, providing examples, and offering relevant analysis and recommendations.

I. Changes and Summary of Drafting Rules for AI-Related Patent Applications

In this revision, Section 6.3.1 introduces new drafting requirements for specifications related to AI cases, and Section 6.3.3 provides examples of drafting specifications for such invention patent applications [Example 20] and [Example 21].

1. Newly Added Drafting Requirements for AI Patent Application Specifications

The newly added drafting requirements explicitly clarify the degree of disclosure needed for AI-related patent applications. They aim to dispel the vagueness of the "black box" approach, ensuring that the black box is no longer opaque: a person skilled in the art should be able to understand the specific implementation of an AI algorithm or model from the content disclosed in the specification. The specific amendments are as follows (underlined parts indicate newly added content):

6.3.1 Drafting of Specifications

The specification for invention patent applications containing algorithm features or business rules and methods should clearly and comprehensively describe the solution adopted by the invention to address its technical problem. On the basis of technical features, the solution may further include algorithm features or business rules and methods that are functionally mutually supportive and interact with the technical features.  

If the invention involves the construction or training of an artificial intelligence model, the specification should generally provide a clear record of the necessary modules, levels, or connection relationships of the model, as well as the specific steps and parameters required for training.  

If the invention involves the application of an artificial intelligence model or algorithm in a specific field or scenario, the specification should generally clearly describe how the model or algorithm integrates with the specific field or scenario, and how the input and output data of the algorithm or model are configured to demonstrate their intrinsic relationships, so that a person skilled in the art can implement the solution of the invention in accordance with the content recorded in the specification.

2. Newly Added Corresponding Examples

The newly added Example 20, titled "Method for Generating Facial Features," and Example 21, titled "Cancer Prediction Method," illustrate two different scenarios: insufficient disclosure and sufficient disclosure in the specification. The specific examples are as follows (all newly added content):

Example 20: Method for Generating Facial Features

Overview of the Application

This invention patent application shares, among the second convolutional neural networks, a feature region image set generated by a first convolutional neural network equipped with a spatial transformer network, thereby reducing memory usage and improving the accuracy of facial image generation.

Claim of the Application

  • A method for generating facial features, comprising:
  • acquiring a facial image to be identified;
  • inputting the facial image to be identified into a first convolutional neural network to generate a feature region image set for the facial image to be identified, wherein the first convolutional neural network is configured to extract feature region images from facial images;
  • inputting each feature region image in the feature region image set into a corresponding second convolutional neural network to generate regional facial features for the feature region image, wherein the second convolutional neural network is configured to extract regional facial features from the corresponding feature region images;
  • generating a facial feature set for the facial image to be identified based on the regional facial features of each feature region image in the feature region image set; 
  • wherein the first convolutional neural network is further equipped with a spatial transformer network to determine the feature regions of the facial image; and
  • wherein inputting the facial image to be identified into the first convolutional neural network to generate a feature region image set for the facial image to be identified comprises: inputting the facial image to be identified into the spatial transformer network to determine feature regions of the facial image to be identified; and inputting the facial image to be identified into the first convolutional neural network to generate the feature region image set for the facial image to be identified based on the determined feature regions.

Relevant paragraphs of the specification

The method for generating facial features provided in an embodiment of the present application first generates a feature region image set of the facial image to be identified by inputting the acquired facial image to be identified into a first convolutional neural network. The first convolutional neural network can be used to extract feature region images from the facial images. Then, each feature region image in the feature region image set can be input into a corresponding second convolutional neural network to generate regional facial features for the feature region image. The second convolutional neural network can be used to extract regional facial features from the corresponding feature region image. Subsequently, the facial feature set for the facial image to be identified can be generated based on the regional facial features of each feature region image in the feature region image set. In other words, the feature region image set generated by the first convolutional neural network can be shared with each of the second convolutional neural networks. This reduces the amount of data, thereby reducing memory usage, and also helps improve generation efficiency.

To improve the accuracy of the generated results, the first convolutional neural network can also be provided with a spatial transformer network to determine the feature regions of the facial image. In this case, the electronic device can input the facial image to be identified into the spatial transformer network to determine the feature regions of the facial image to be identified. In this way, the first convolutional neural network can extract images matching the feature regions on the feature layer of the input facial image to be identified based on the feature regions determined by the spatial transformer network, thereby generating a feature region image set for the facial image to be identified. The specific location of the spatial transformer network in the first convolutional neural network is not limited in this application. The spatial transformer network can determine the feature regions of different features of different facial images through continuous learning.  
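The data flow described in these paragraphs can be sketched in code. The following is a minimal structural sketch only: the region names (`eyes`, `nose`, `mouth`), the row-slicing "spatial transformer," and the averaging "feature extraction" are hypothetical stand-ins for the actual networks, which the application does not (and need not) specify at this level of detail.

```python
# Structural sketch of the claimed pipeline (hypothetical stubs, not the
# applicant's networks): a first "network" with an embedded spatial
# transformer determines feature regions once, and the resulting region
# image set is shared among the per-region second "networks".

def spatial_transformer(face_image):
    """Stub: determine feature regions (here, fixed horizontal bands)."""
    h = len(face_image)
    return {"eyes": face_image[: h // 3],
            "nose": face_image[h // 3 : 2 * h // 3],
            "mouth": face_image[2 * h // 3 :]}

def first_cnn(face_image):
    """Stub first network: generates the feature region image set that is
    computed once and then shared with every second network."""
    return spatial_transformer(face_image)

def second_cnn(region_name, region_image):
    """Stub second network: extracts a regional facial feature (here, the
    mean pixel value stands in for a learned feature vector)."""
    flat = [px for row in region_image for px in row]
    return (region_name, sum(flat) / max(len(flat), 1))

def generate_facial_features(face_image):
    region_set = first_cnn(face_image)            # computed once, shared
    return [second_cnn(name, img) for name, img in region_set.items()]

features = generate_facial_features(
    [[float(i + j) for j in range(4)] for i in range(6)])
print([name for name, _ in features])  # → ['eyes', 'nose', 'mouth']
```

Sharing the single region image set, rather than re-deriving regions inside each second network, is the memory-saving point the overview attributes to the invention.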

Analysis and Conclusion

The invention patent application seeks protection for a method for generating facial features. To improve the accuracy of the generated results, the first convolutional neural network can be equipped with a spatial transformer network to determine the feature regions of the facial image. However, the specification does not disclose the specific placement of the spatial transformer network within the first convolutional neural network.

A person skilled in the art will recognize that the spatial transformer network, as a whole, can be inserted at any position within the first convolutional neural network, forming a nested structure of convolutional neural networks. For example, the spatial transformer network can serve as the first layer of the first convolutional neural network or as an intermediate layer within it; these positions do not affect its ability to identify the feature regions of an image. Through training, the spatial transformer network can identify the feature regions where different features reside in different facial images. Thus, the spatial transformer network can not only guide the first convolutional neural network in feature region segmentation but also perform simple spatial transformations on the input data to improve the processing efficiency of the first convolutional neural network.

Therefore, the model employed in the invention patent application has a clear hierarchy, with defined inputs, outputs, and relationships between the layers. Both convolutional neural networks and spatial transformer networks are well-known algorithms, and a person skilled in the art can construct the corresponding model architecture based on the above description. The solution for which protection is sought has therefore been fully disclosed in the specification and complies with Article 26, Paragraph 3 of the Patent Law.

Example 21: Cancer prediction method

Overview of the Application

The invention patent application provides a method for predicting cancer based on biological information. By using a trained malignancy-enhanced screening model, blood routine test indicators, blood biochemical test indicators, and facial image features are jointly used as inputs to the screening model to obtain a predicted value of malignant tumor incidence, thereby solving the technical problem of improving the accuracy of malignant tumor prediction.

Claim of the Application

  • A cancer prediction method based on biological information, comprising:
  • obtaining the blood routine test sheet and blood biochemistry test sheet of a person to be screened, and identifying the detection indicators, age, and gender in the blood routine test sheet and blood biochemistry test sheet;
  • obtaining a front face image of the person to be screened without makeup, and extracting facial image features;
  • obtaining a predicted value of malignant tumors for the person to be screened based on an enhanced screening model for malignant tumors;
  • wherein the training process of the enhanced screening model for malignant tumors includes: constructing a large-scale population sample set, wherein each sample contains the blood routine, blood biochemistry, and facial images of the same person; using the blood routine, blood biochemistry, and facial image features to establish learning samples; and using the learning samples to train a machine learning algorithm model to obtain the enhanced screening model for malignant tumors.

Relevant paragraphs of the specification

At present, when tumor markers are used to identify malignant tumors, values above the threshold do not definitively confirm malignancy, nor do values below the threshold rule it out; predicting cancer based on tumor markers therefore has low accuracy. This application improves the accuracy of identifying various malignant tumors by utilizing blood routine test indicators, blood biochemical test indicators, and facial images. While leveraging blood test data, this application also references the health status of the person to be screened as reflected in their facial image, allowing for a more accurate prediction of the probability of malignancy. The selection of features for the malignancy-enhanced screening model can utilize some or all of the indicators from blood routine and blood biochemical tests.
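To make concrete what the claimed training input looks like, the sketch below assembles the joint feature vector the claim describes (blood routine indicators, blood biochemistry indicators, facial image features) and applies a stub linear screening model. Every name and number in it is a hypothetical illustration; the actual choice of indicators, their weights, and the model family are exactly the details the specification leaves undisclosed.

```python
import math

def build_learning_sample(blood_routine, biochemistry, facial_features):
    """Concatenate the three indicator groups into one feature vector."""
    return blood_routine + biochemistry + facial_features

def predict_malignancy(sample, weights, bias=0.0):
    """Stub screening model: weighted sum squashed into (0, 1)."""
    score = sum(w * x for w, x in zip(weights, sample)) + bias
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical indicator values for one person to be screened.
sample = build_learning_sample(
    blood_routine=[5.2, 4.6],      # e.g. WBC, RBC counts (illustrative)
    biochemistry=[38.0, 0.9],      # e.g. albumin, creatinine (illustrative)
    facial_features=[0.12, 0.07],  # extracted image features (illustrative)
)
# A sufficient specification would have to justify which indicators are
# used and how they are weighted; these weights are invented for the sketch.
weights = [0.1, -0.05, 0.02, 0.3, 1.5, -0.8]
print(round(predict_malignancy(sample, weights), 3))
```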

Analysis and Conclusion

The technical problem the invention patent application aims to solve is how to improve the accuracy of predicting malignant tumors. To address this problem, the solution utilizes a trained malignancy-enhanced screening model, combining blood routine test indicators, blood biochemical test indicators, and facial image features as inputs to predict malignancy probabilities. However, blood routine and blood biochemical tests—two common biochemical testing methods—each contain dozens of indicators. 

The specification does not disclose which specific indicators are key to improving the accuracy of tumor prediction, nor does it clarify whether all indicators are referenced or whether different weights are assigned to each indicator for prediction purposes. A person skilled in the art therefore cannot determine which indicators can be used to assess malignancy. Additionally, based on current scientific research, apart from a few specific cancers such as facial skin cancer, the correlation between facial features and the likelihood of developing malignant tumors remains uncertain. The specification does not disclose or provide evidence of a causal relationship between the "factors influencing the judgment" and the "judgment results."

Furthermore, the specification does not provide any validation data to demonstrate that the accuracy of identifying various malignant tumors using this solution is higher than using tumor markers, or that it significantly exceeds the accuracy of randomly predicting malignancy probabilities. A person skilled in the art, based solely on the content disclosed in the specification, cannot confirm that the solution proposed in this application can solve the stated technical problem.

Therefore, the technical solution sought for protection in the invention patent application has not been sufficiently disclosed in the specification, and the specification does not comply with the requirements of Article 26, Paragraph 3 of the Patent Law.

In [Example 20], the method for generating facial features is considered sufficiently disclosed, as a person skilled in the art can implement the solution despite the specification's silence on the spatial transformer network's position.

In contrast, [Example 21], the cancer prediction method, fails to explain the correlation between specific biological indicators and the model, making the technical solution unimplementable and thus considered insufficiently disclosed.

II. Recommendations for Applicants of AI-Related Patents

For AI patent applications, the specification should provide detailed descriptions of the algorithm's structure, parameter settings, model architecture, training steps, and their relevance to solving the technical problem. This ensures the technical solution's feasibility and avoids insufficient disclosure caused by the "black box" issue.

In many previous AI patent applications, artificial intelligence models were often treated as "black boxes," which could lead to insufficient disclosure in the specification. The newly added drafting requirements in the draft for comments clearly state that for cases involving artificial intelligence models, the specification should explicitly record the necessary modules, levels, or connection relationships of the model, the specific steps and parameters required for training, as well as the relationships between the model or algorithm and the application scenario, input data, and output data. This helps prevent insufficient disclosure in the specification.

In [Example 20], the specification describes the specific processes of the first and second convolutional neural networks in image processing. The data flow clearly establishes the connection relationships between the two networks, and the functions of both networks are described, especially the detailed functionality of the spatial transformer network. Although the spatial transformer network's position within the first convolutional neural network is not specified, this content falls within the knowledge expected of a person skilled in the art. Therefore, the specification in this case is considered sufficiently disclosed.

In [Example 21], three main issues arise:

  1. One skilled in the art cannot clearly identify which blood routine indicators are critical to improving the accuracy of tumor prediction based on the disclosed content, raising concerns about the clarity of the screening model's training process.  
  2. The correlation between the "factors influencing the judgment" and the "judgment results" is unclear.  
  3. No evidence is provided to demonstrate that the technical solution in the application achieves better results.  

Based on these issues, the specification in this case is considered insufficiently disclosed.

These two examples indicate that CNIPA will adopt stricter standards for examining specifications in AI-related invention patent applications in China. This also means that applicants need to further strengthen their skills in drafting specifications, understanding model structures, and analyzing the weights and parameters involved in model training.

III. Conclusion

The latest draft amendments introduce more detailed requirements for drafting specifications in AI patent applications to be submitted to CNIPA. These amendments clarify that, to address the AI "black box" issue, applicants must disclose the model architecture, training steps, and data associations in their specifications to ensure the feasibility of their technical solutions.

The draft is still in the public consultation stage, and the final rules may be subject to adjustments. As the regulatory landscape for AI patents continues to evolve, it is crucial for applicants seeking protection for innovative technologies in China to stay informed about these changes.

At Chang Tsi & Partners, we specialize in navigating the complexities of AI-related patent applications. Whether you are interested in learning more about the examination standards for AI patents in China or have plans to file AI-related patent applications, we are here to assist you. 

For more detailed updates and insights on China's AI patent regulations, please contact Irene Wang @ Chang Tsi & Partners. We continue to monitor developments in this dynamic area of intellectual property law. In the next article, we will introduce amendments to the "Examination Rules for Invention Patent Applications Containing Bitstreams," providing an in-depth analysis of adjustments to examination standards in areas such as data stream processing and encoding technologies, and exploring their impact on technology protection in industries like telecommunications and computing. Stay tuned for more updates!

Irene Wang
Counsel | Patent Attorney