
China’s Personal Information Standard: The Long March to a Privacy Law

Greenleaf, Livingston   2018-03-06 20:18
On 5 September 2017, Chinese authorities circulated the latest version of a draft standard entitled ‘Information Security Techniques - Personal Information Security Specification’ (the ‘Standard’), now submitted to the National Standardization Committee for approval. Proposed and managed by the National Information Security Standardization Technical Committee (‘TC260’), the Standard provides the most detailed specifications yet for how Chinese authorities will interpret and apply existing data privacy laws to private and public-sector entities. TC260 operates under the leadership of the Cyberspace Administration of China (CAC). A finalized Standard is expected to be published before the end of the first quarter of 2018.

The Standard’s release follows the 30 August 2017 publication of four other draft technical standards intended to further the implementation of the 2016 Cybersecurity Law, which took effect on 1 June 2017. These additional drafts (also open for public comment) address security assessments for cross-border transfers, general security requirements for network products and services, and the security of critical information infrastructure.

In an earlier article, we assessed the data privacy provisions of the 2016 Cybersecurity Law as ‘China’s most comprehensive and broadly applicable set of data privacy principles to date’, going beyond the five main laws and regulations dealing with data privacy enacted from 2011-14. We pointed out that the Cybersecurity Law has added new or more explicit requirements concerning data correction rights, deletion, re-use and disclosure, breach notification to users, data export restrictions and data localization requirements. Still missing, however, were several common elements found in other jurisdictions’ data privacy laws, such as explicit user access rights, data quality requirements and special provisions for sensitive data, as well as any specialist data protection authority (DPA). The omission of the first of these -- explicit subject access rights -- means that China’s law does not yet include one of the most fundamental elements of a data privacy law.

In this article, we examine the additional elements the Standard brings to our understanding of the 2016 Cybersecurity Law, and whether it advances China on its long march towards a national data privacy law.
 
Assessing the role of Standards
 
Before turning to the specific provisions of the new Standard, some background on the role of standards in China’s data privacy regime is helpful. Under the PRC standards regime, the Standard is designated as a “voluntary national standard” (GB/T). An earlier standard, the 2013 MIIT Guidelines, is also voluntary but is designated a ‘national guiding technical document’ (GB/Z). Such designations contrast with the “GB” standard, which is mandatory.

The 2013 MIIT Guidelines cover similar territory but limit their coverage to personal information (‘PI’) collected on public and commercial information systems. The new Standard has no such limitation and appears intended to cover PI collection activities by entities and organizations both public and private. It is conceivable that the 2013 MIIT Guidelines may be amended or rescinded in light of the more detailed specifications found in the new Standard.

Though both standards lack the force of law, they may be relied upon by PRC courts and regulatory authorities to assess whether a given organization’s data protection practices are in compliance with the more broadly drafted provisions found in existing PRC data privacy law and regulation. Best practices dictate that such standards should be followed closely by all covered entities operating in China.

The leader of the personal data protection project for TC260 responsible for drafting the Standard has also published an explanation of the data protection policies in the Cybersecurity Law, which assists in understanding the purposes of the Standard, and which should be regarded as an ‘authoritative and balanced view’ of the overall approach being taken by the Chinese government.
 
Comprehensive scope – Public sector, and whole private sector
 
The Standard’s Article 1 makes clear that it is intended to cover both public and private actors:

[This Standard] applies to all kinds of organizations to standardize their personal information processing. It is also applicable to competent authorities, third party assessment organizations and other organizations to monitor, manage, and evaluate personal information security. (emphasis ours)

It is not as clear, however, that the 2016 Cybersecurity Law applies to public sector organizations, although Chinese experts consider that this is definitely the case.

The Standard’s overall reach is also comparatively broader than the 2016 Cybersecurity Law. While that law focuses primarily on the activities of ‘network operators’ and operators of ‘critical information infrastructure,’ the Standard specifies a broader ‘personal information controller’ (PIC), defined as any ‘organization or person that has the power to decide the purpose and method of processing personal information’ (art. 3.4).

The Standard also does not limit itself to just digital data, defining “Personal Information” as ‘information that is recorded electronically or otherwise’ (emphasis ours). The appearance of such expansive language therefore suggests that the Standard is intended to provide comprehensive coverage of all actors, public and private, for all of their personal information collection and processing activities.
 
‘Personal information’ – A conventional or ‘revolutionary’ definition?
 
With respect to the Standard’s definition of PI, we note the addition of terms that suggest a potentially revolutionary new definition.

To begin with, the Standard adopts the international standard ‘capacity to identify’ approach (‘Any information that is recorded, electronically or otherwise, can be used solely or in combination with other information to identify the identity of a natural person ...’ (art. 3.1)). This is similar to the language used in the 2016 Cybersecurity Law’s Article 76(5).

However, the Standard then adds a further clause to the standard definition of PI, covering information that ‘can reflect the activities of a natural person.’ This suggests a fairly expansive broadening away from information that “identifies” an individual to any information that may “reflect” a specific person (without necessarily identifying them).

While it is difficult to understand precisely what is meant by this addition, it may be intended to refer to any information which gives an organisation the capacity to interact with a person on an individuated basis -- such as behavioural targeted marketing using data which does not enable the PIC to identify the data subject. If this is the case, this use of language would be a major step beyond the existing definitions used by the OECD/CoE (1980/81) and many other jurisdictions. It would also mean that the extensive list of examples for “personal information” in the Standard’s Note 1, and the much longer list provided in its Annex A (‘Examples of Personal Information’) (both longer than in the Cybersecurity Law) would then be taken as applying irrespective of whether they have the capacity to identify (in standard usage).

Chinese experts confirm that they view the use of the ‘identify’ language to include ‘singling out’ a person’s data from a group or population without knowing the specific identity of that individual, and consider this approach consistent with many jurisdictions and ISO standards. In this way, they do not see their definition of ‘personal information’ as going beyond its meaning in the EU Directive or GDPR. However, others may see it as a more radical use. The Chinese use of these terms, if not a revolutionary approach to data privacy laws, is at least one which is at the more advanced end globally.

Is there an inconsistency between this interpretation and the Standard’s clear statement (discussed below) that ‘anonymized’ data is not personal information? We do not think so as it appears the Standard establishes three distinct categories of data:
 
  (i) Identifiable Data. PI which gives the PIC who holds it the capacity to identify a data subject (i.e. it is identifiable data);
  (ii) Non-identifiable Data. PI which does not allow the PIC who holds it to identify a data subject, but does enable interactions with that person (e.g. behavioural advertising or other uses of ‘reflective’ information). This data has not been anonymized, but it does not enable the PIC to identify the data subject (i.e. the person’s identity could be recovered, but not by this PIC at the present time). It is therefore data which is neither identifiable nor anonymous.
  (iii) Anonymized Data. Data which has been anonymized (i.e. processed so that identity cannot be recovered), whether it was originally in category (i) or (ii). It is no longer PI, and can be used for behavioural marketing or other purposes, irrespective of privacy laws.
 
This tripartite categorisation of data affecting persons does reflect modern realities, and could therefore give a reasonable interpretation of the Standard’s definition of PI. However, further clarity is needed on what PRC authorities intend by this potentially revolutionary ‘reflect the activities’ language.
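To make the tripartite reading above concrete, the following minimal sketch (in Python, with class and field names of our own choosing rather than terms drawn from the Standard) models the three categories and the point at which data ceases to be ‘personal information’:

```python
from dataclasses import dataclass
from enum import Enum, auto


class DataCategory(Enum):
    """Three categories suggested by the Standard's definition of PI (our reading)."""
    IDENTIFIABLE = auto()      # (i)  the PIC holding the data can identify the data subject
    NON_IDENTIFIABLE = auto()  # (ii) cannot identify, but can 'reflect' / single out the person
    ANONYMIZED = auto()        # (iii) identity can no longer be recovered by anyone


@dataclass
class DataSet:
    name: str
    category: DataCategory


def is_personal_information(ds: DataSet) -> bool:
    """Categories (i) and (ii) remain personal information under the Standard;
    only anonymized data (iii) falls outside its scope."""
    return ds.category is not DataCategory.ANONYMIZED


# Example: behavioural-advertising profiles keyed to a device ID would sit in
# category (ii) and would still attract the Standard's obligations.
profiles = DataSet("ad_profiles_by_device_id", DataCategory.NON_IDENTIFIABLE)
assert is_personal_information(profiles)
```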
 
‘Anonymized’ vs ‘de-identified’ data
 
As noted above, the Standard distinguishes carefully between ‘anonymized’ and ‘de-identified’ personal information.

“Anonymization” is defined as a ‘technical procedure to process personal information to make the personal data subject unidentifiable and whereby such processed information cannot be recovered’ (which presumably means ‘re-identified’) (art. 3.13). Here, the information is anonymized to the point that no additional data or technical procedure could reverse the process and identify the data subject. A note in the Standard clarifies that ‘[p]ersonal information that has been anonymized does not count as personal information’, thereby resolving an ambiguity in the Cybersecurity Law (art. 42) as to whether security and other obligations might persist once the person’s identity ‘cannot be recovered’ from the data.

“De-identified” data, on the other hand, goes through a similar technical procedure as anonymized data, but the data subject may still be identified with the help of additional information. For instance, data saved under a pseudonym (rather than the user’s real name) may ultimately be found to refer to that user following an analysis of the specific information within it, or with the aid of other information.

While these provisions make it clear that such anonymized data is not subject to data privacy protections, it remains somewhat unclear whether a copy of personal information which is still being processed can also be anonymized (so that a copy would then be free of privacy restrictions). It is therefore ambiguous whether the 2016 Cybersecurity Law, and the Standard, allow ‘big data’ processing of such personal data via anonymisation, but authoritative comment suggests that this may be the case, and we expect that this is the correct interpretation.

The Standard strongly recommends that PICs “immediately de-identify” personal information upon collection: ‘[p]ersonal information should be immediately deidentified …’ and ‘technical and managerial measures should be taken to separately store the de-identified data and information that can be used to restore the identification, and it should be ensured that no particular individual will be identified during subsequent processing’ (art. 6.2). These are very strong security measures. Such information continues to be ‘personal information’ during processing because it is being processed in an identifiable but not identified form. However, the question of whether personal information still undergoing processing can be anonymized is not resolved by these provisions.
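A minimal sketch of the kind of measure art. 6.2 appears to contemplate is set out below: direct identifiers are replaced with pseudonyms immediately upon collection, and the mapping needed to restore identity is kept in a separate, restricted store. The function and store names are hypothetical illustrations, not terminology from the Standard, and a real system would add durable storage and access controls:

```python
import secrets

# Two stores that would, in practice, sit behind separate access controls
# ('technical and managerial measures ... to separately store', art. 6.2).
deidentified_records: dict[str, dict] = {}   # pseudonym -> record without direct identifiers
reidentification_map: dict[str, str] = {}    # pseudonym -> original identifier (restricted store)


def collect(record: dict) -> str:
    """De-identify immediately upon collection: strip the direct identifier,
    key the record under a random pseudonym, and keep the mapping elsewhere."""
    pseudonym = secrets.token_hex(16)
    identifier = record.pop("id_number")          # direct identifier removed from working data
    deidentified_records[pseudonym] = record
    reidentification_map[pseudonym] = identifier  # stored separately, tightly controlled
    return pseudonym


token = collect({"id_number": "ID-EXAMPLE-001", "purchase": "book", "city": "Beijing"})
print(deidentified_records[token])  # working data set contains no direct identifier
```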

The Standard further recommends that PICs ‘delete or anonymize’ personal information ‘as soon as possible’ after processing of it has stopped (art. 6.4(c)), and also ‘[a]fter the personal data subject has closed their account’ (art. 8(b)). In other words, once use of the information is completed for either reason, it may not be retained as personal information (even in de-identified form).

A PIC must also carry out an annual personal information security impact assessment (‘PIA’), including assessing ‘risks that the data set re-identifies personal data subject after anonymization or de-identification’ (art. 10.2(b)(4)).
 
‘Sensitive’ information now protected
 
The Cybersecurity Law does not provide special protections for categories of ‘sensitive’ information, and previous Chinese laws have not done so, with the exception of the 2013 MIIT Guidelines, which took a ‘potential adverse impact’ approach similar to that found in the new Standard.

The Standard defines ‘personal sensitive information’ as ‘personal information the leakage, disclosure, or abuse of which could easily endanger personal and property safety, and easily lead to the harm of one’s personal reputation and mental & physical health, or lead to discriminatory treatment’ (art. 3.2). Note 1 adds a short list of examples,  and Annex B gives more details and examples of the risks of leakage, illegal provision and abuse which can lead to such an assessment. It also adds ‘Generally, the personal information of children under 14 years of age and the private information of natural persons shall fall under Personal Sensitive Information.’ Table B.1 ‘Examples of Personal Sensitive Information’ names many categories of such information, divided into categories of ‘Personal property information’, ‘Pathological and health information’, ‘Personal biometric information’, ‘Personal identity information’, ‘Network identity information’ and ‘Other information’ (in itself encompassing what many laws would include as ‘sensitive’ information).

The consequences of information being regarded as ‘personal sensitive information’ are:

  ●  Explicit consent, defined as a user’s written consent or “proactive affirmat[ion]”, is required for collection (art. 5.5), and the notice required for such collection is specified (with details in Appendix C).
  ●  Consent from a guardian is required for a person under 14 years of age (art. 5.5(c)).
  ●  It must be stored using encryption and other security measures, and only a summary of biometric information should be stored (art. 6.3) (see the sketch following this list).
  ●  Special access controls must be implemented (art. 7.1(e)).
  ●  Where such information is to be shared or transferred, the data subject must be informed (art. 8.2(c)).
  ●  If such information is to be disclosed publicly, the data subject must be informed (art. 8.4(c)), and biometric information must not be so disclosed (art. 8.4(f)).
  ●  Background checks are required of employees in significant contact with such information (art. 10.4(a)).
These requirements give a good idea of the level of detail with which the Standard addresses most other aspects of personal data processing. Not all data privacy laws or regulations are as thorough, though it remains to be seen how actively PRC authorities will apply the provisions relating to sensitive information in actual practice.
 
Data subject access
 
Neither the Cybersecurity Law nor any other law or regulation provides an explicit right for data subjects to access their PI file held by an organization, though some laws provide rights to correct errors in that file. In contrast, the Standard provides in three locations that data subjects should be provided with such access, and clarifies what should be accessible.

In Article 7.4, a fairly broad general access right is provided to the data subject:

‘Personal information controllers shall provide personal data subjects with access to the following information: (a) the personal information held about the data subject, or type of personal information; (b) the source(s) of the above personal information, as well as the purpose for which it is used; [and] (c) any third party who has obtained the above personal information, identity or type’ (art. 7.4).

This is reiterated later in the document in the ‘Principle of subject participation’, which obliges PICs to ‘provide the personal data subject methods to access, amend, and delete their personal information’ (art. 4(g)). The Standard also requires that a PIC’s mandatory Privacy Policy must set out ‘[r]ights of personal data subjects and the realization mechanism(s), such as the methods to access, to rectify, to delete, to cancel an account, to withdraw consent, to obtain a copy of personal information, to restrict automated decisions by information systems, etc.’ (art. 5.6). And Annex C gives an example of how this can be done.

Finally, the Standard requires that responses to such data subject requests be given within thirty days, without a fee for ‘reasonable requests’, and sets out a list of situations in which PICs ‘may decline to respond’ (art. 7.11).
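One way to picture the access right in art. 7.4, together with the thirty-day response window in art. 7.11, is as a structured response assembled by the PIC for the requesting data subject. The field names below are our own illustrative mapping of clauses (a)-(c), not a format specified by the Standard:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class AccessResponse:
    """Illustrative mapping of art. 7.4 (a)-(c) onto a response structure."""
    personal_information: list[str]        # (a) the PI held, or the types of PI held
    sources_and_purposes: dict[str, str]   # (b) where each item came from and why it is used
    third_party_recipients: list[str]      # (c) identity or type of recipients
    respond_by: date = field(default_factory=lambda: date.today() + timedelta(days=30))  # art. 7.11


response = AccessResponse(
    personal_information=["name", "phone number", "purchase history"],
    sources_and_purposes={"phone number": "collected at registration; used for delivery"},
    third_party_recipients=["logistics providers (type only)"],
)
```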

A similar “right to access” has also been included in the latest draft of the PRC e-commerce Law, suggesting these requirements may soon be adopted more formally in future laws and regulations.
 
‘Minimization’ of collection
 
The Cybersecurity Law provides that PICs must “abide by the principles of legality, propriety, and necessity,” and prohibits them from gathering “personal information unrelated to the services they provide” (art. 41). The Standard adopts a stricter approach, providing that ‘personal information collected shall be directly related to the fulfilment of a product or service’s business function’, and that ‘without the collected personal information, the business function of a product or service could not be fulfilled’ (art. 5.2(a)). This amounts to much the same as a ‘necessary for’ test of minimal collection, or ‘minimization’ as the heading to Article 5 suggests.

Further ‘minimization’ requirements are that the frequency of automated collection should be the minimum frequency necessary, and that the quantity of information collected indirectly should be the minimum necessary, for the functions of the business (art. 5.2(b) and (c)).
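One possible way to operationalise this ‘could not be fulfilled without’ test (our own sketch, not a mechanism described in the Standard) is a collection gate in which each business function declares the fields it genuinely requires and everything else is discarded at the point of collection:

```python
# Fields each business function is assumed to genuinely require (illustrative allow-list).
REQUIRED_FIELDS = {
    "parcel_delivery": {"name", "phone_number", "delivery_address"},
}


def minimize(function: str, submitted: dict) -> dict:
    """Keep only the fields without which the declared business function
    could not be fulfilled (art. 5.2(a)); drop everything else at collection time."""
    allowed = REQUIRED_FIELDS[function]
    return {k: v for k, v in submitted.items() if k in allowed}


data = minimize("parcel_delivery", {
    "name": "Li Hua",
    "phone_number": "138-0000-0000",
    "delivery_address": "1 Example Road",
    "date_of_birth": "1990-01-01",   # unrelated to delivery; discarded
})
```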
 
Restrictions on automated processing
 
Another new element in the Standard is that data subjects should be provided with ‘methods to appeal’ when PICs make ‘decisions based on automated decisions by information systems (such as personal credit, loan limits, or interview screening based on user profiling), which significantly influence the personal data subject’s rights and interests’ (art. 7.10). Similar restrictions are found in the EU data protection Directive, but not in the Cybersecurity Law or other Chinese laws.
 
Conclusions
 
The Standard is an important step forward in the evolution of China’s data privacy protections because of its comprehensive scope; the potential breadth of its definition of ‘personal information’; the inclusion for the first time of extra protections for ‘personal sensitive information’; the explicit inclusion of a right of access; collection minimization; and appeals against automated processing. It is important to reiterate that this article discusses a draft Standard, from which the final Standard may diverge, and this will need to be checked.

The most significant implications of this Standard for businesses operating in China are:

  ●  Its application to all private sector organisations involved in ‘personal information processing’, whether of customers, employees or others.
  ●  The definition of ‘personal information’ could potentially be interpreted more broadly than under some European or similar laws, even if it is not intended to be broader than the full scope of EU definitions. Considerable care must therefore be taken in any use of data relating to a person, at least until the approach of Chinese authorities is clear.
  ●  The definition of ‘personal sensitive information’ is open-ended, and its named categories are much broader than in many other laws, thus requiring great care.
  ●  The suggested obligations in relation to subject access, minimal collection of data, and restrictions on automated processing, which are not found in other Chinese laws.

This article has focused only on those aspects of the Standard that add something significant to the Cybersecurity Law and earlier laws. Many other provisions of the Standard set out existing requirements in greater detail than before, even though they do not involve significant departures on matters of principle. These should also be considered carefully.

Any country’s data privacy protections need to be assessed against the extent of data surveillance practices by private and public sector organisations within the country (and the data surveillance laws enabling such practices). In China such a balanced assessment is crucial, because of the extent of surveillance inherent in its authoritarian government and party, surveillance in which the key players of its market economy are implicated. To what extent the increasingly strong data privacy protections discussed in this article will in practice act to restrain these surveillance practices, or are intended to do so, is beyond the scope of a short article. Our intention is to provide a more precise understanding of the incremental growth of China’s data privacy protections.

(Article Source: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3128593)