There are no specific laws, regulations, guidelines or case law concerning the development or use of artificial intelligence (AI) in Hong Kong. The interpretation of AI-related issues will therefore need to draw from laws and regulations in the fields of constitutional law, intellectual property, data privacy, discrimination, and competition law.
1. Constitutional law and fundamental human rights
Human rights in Hong Kong are enshrined in the Basic Law of the Hong Kong Special Administrative Region (the Basic Law) and the Hong Kong Bill of Rights Ordinance (Cap. 383) (the BORO). The BORO incorporates provisions of the International Covenant on Civil and Political Rights into the laws of Hong Kong. A non-exhaustive selection of domestic constitutional provisions protecting the human rights most vulnerable to improper uses of AI is discussed below.
1.1. Domestic constitutional provisions
Right to privacy
The use of AI as a forecasting and profiling tool often involves collecting and processing large amounts of personal data.
Article 30 of the Basic Law protects the freedom and privacy of communication of Hong Kong residents, whereas Article 14 of the BORO provides that no person shall be subjected to arbitrary or unlawful interference with their privacy, family, home or correspondence. The privacy of individuals in relation to personal data is also protected by the Personal Data (Privacy) Ordinance (Cap. 486), which is addressed below in Section 3. Data.
Freedom of person and movement
Where AI is applied to facial-recognition technology for border control and the prevention and detection of crime, the risk of mis-identification may encroach upon the freedom of person and movement.
The freedom of person is protected by Article 28 of the Basic Law and Article 5 of the BORO, which prohibit arbitrary or unlawful arrest, search, detention or imprisonment. Article 31 of the Basic Law and Article 8 of the BORO confer the freedom to move within, and to leave, Hong Kong.
Right to equality
The increasing use of AI in algorithmic decision-making may also produce unintentional, discriminatory results, for example in the contexts of hiring practices and insurance pricing.
The right to equality is protected by Article 25 of the Basic Law and Article 22 of the BORO. At a statutory level, a number of ordinances in Hong Kong give horizontal effect to the right to equality; these are addressed below in Section 4. Bias and discrimination.
Freedom of speech, of the press and of publication
Another application for AI is the moderation of online content, such as hate speech and violent content. In view of technical limitations such as incomplete datasets and nuances in language and culture, automated moderation may pose a threat to freedom of expression.
In Hong Kong, the freedom of speech, of the press and of publication, are protected by Article 27 of the Basic Law and Article 16 of the BORO. As mentioned, Article 30 of the Basic Law also protects the freedom of communication.
1.2. Human rights decisions and conventions
There is currently no case law in Hong Kong concerning human rights violations relating to AI.
If a legislative or executive act made by a public body (or a body exercising a public function) is suspected of violating human rights, then a person with sufficient interest in the matter may challenge the constitutionality of the act by judicial review at the Court of First Instance of the High Court. However, judicial review is limited to matters of a public nature.
2. Intellectual property
The most relevant forms of protection for AI-based solutions in Hong Kong are offered by patent registration and copyright (for which no registration is required). Confidential information can also be a useful form of protection for AI, as it is less vulnerable to reverse engineering. It may be possible to protect AI-generated works by patent and copyright, although the ownership of such rights is uncertain at present. Similarly, where the use of AI is claimed to infringe third-party intellectual property rights, the AI itself is unlikely to be held liable; the person ultimately responsible will depend on the rules of liability under tort law as well as the contractual relationship of the parties involved.
2.1. Patents
A key instrument for protecting AI is patent registration under the Patents Ordinance (Cap. 514). AI-related inventions are patentable if they are new, involve an inventive step, and are susceptible of industrial application.
Under the Patents Ordinance, the right to a patent normally belongs to the inventor, who must be identified in the patent. With no statutory guidance or case law in Hong Kong regarding the inventorship of AI-generated inventions, the Hong Kong courts will likely adopt the position taken in the UK and other common-law jurisdictions and conclude that the AI itself lacks the legal personhood necessary to be considered the inventor. The invention and the right to the patent will therefore likely belong to the AI's creator, or to their employer if the invention was made in the course of the creator's normal duties of employment.
2.2. Copyright
AI embodied in source code can be protected by copyright as a computer program, a form of literary work under the Copyright Ordinance (Cap. 528). Unlike patents, there is no requirement to register copyright in Hong Kong: copyright arises automatically if the work is original and recorded in material form. Copyright in literary works lasts for 50 years from the end of the calendar year in which the author dies.
With respect to AI-generated works, the Copyright Ordinance provides that the author of a computer-generated work is taken to be the person by whom the arrangements necessary for the creation of the work are undertaken, and this person will also be the first owner of the copyright. Pending legislative and judicial guidance, the author of an AI-generated work may well be the original programmer, owner, or user of the AI. It is unclear at the moment whether works generated autonomously by AI can satisfy the requirement of “originality”, meaning that the work involved the author’s skill, labour and effort.
Separately, infringement issues may arise from the use of copyright materials, such as newspaper articles and photographs, in the process of training AI. The Copyright Ordinance provides for an exhaustive list of fair dealing exceptions to infringement rather than a non-exhaustive “fair use” approach. Currently, there is a fair dealing exception for the purpose of research, but none specifically for training AI or for text and data mining.
2.3. Trade secrets/confidentiality
AI can also be protected as confidential information by the common law and equitable action of breach of confidence. The elements of this action are:
- the information has the necessary quality of confidence, meaning it involves a minimum intellectual effort and is not public property or public knowledge;
- the information was imparted in circumstances importing an obligation of confidence; and
- there was unauthorised use of that information to the detriment of the disclosing party.
Although the threshold for this form of protection may be lower than for patent and copyright, it does not forbid a party from using confidential information which is acquired by independent means. Thus, the strength of protection of AI as confidential information will depend on the AI’s complexity and the extent to which it is vulnerable to reverse engineering.
3. Data
The Personal Data (Privacy) Ordinance (PDPO) (Cap. 486) is the main legislation in Hong Kong regulating the collection, use, transfer, processing and storage of personal data. It applies to data users who control the collection, holding, processing or use of personal data in or from Hong Kong and is enforced by the Privacy Commissioner for Personal Data (PCPD). The PDPO distinguishes between a “data user” and “data processor”. A data user is a person who, alone or jointly or in common with others, controls the collection, holding, processing or use of personal data; whereas a data processor is a person who processes personal data on behalf of another person and does not process the data for any of its own purposes. The PDPO only regulates data users but not data processors.
3.1. Domestic data law treatment
The PDPO does not specifically address AI. However, the PCPD published a Guidance Note on the Ethical Development and Use of Artificial Intelligence (AI Guidelines) in August 2021.
While these guidelines are not legally binding, the PCPD may take non-compliance with them into consideration when determining whether a data user has contravened the data protection principles (DPPs) under the PDPO.
The AI Guidelines are principle-based, drawing from the reports, frameworks, and guidelines of the European Commission, OECD, UNESCO, Japan and Singapore. Nevertheless, the AI Guidelines do contain some bright-line, practical guidance on the interface between AI and data protection, which we set out below.
Data users intending to develop and use AI should conduct a risk assessment prior to commencing. Factors to be considered include:
- the permissible uses of the data used to train AI models (Training Data), and whether the use of Training Data in such a manner would comport with the original purpose for collection and use of the Training Data (DPP 3);
- whether the volume of Training Data required is excessive (DPP 1);
- sensitivity of the data involved and whether it is necessary for the intended purposes (DPP 1);
- quality of the data involved e.g., accuracy of the data (DPP 2);
- security of the personal data when used to develop, or when processed by, the AI (DPP 4); and
- the probability of privacy risks arising and the potential harm that may result.
A key measure for mitigating the risks arising from the use of AI is the exercise of human oversight: data users should exercise a greater degree of human involvement where the potential risks arising from the use of the particular AI are greater, and vice versa.
Since the development of AI models depends heavily on the data that is input into the system, another area of focus prior to the commencement of AI development is data preparation.
Data users should ensure that they comply with the PDPO requirements when preparing data for use in AI development by:
- collecting an adequate but not excessive amount of personal data (DPP 1);
- only using personal data in a manner that coheres with the original purpose of collection, unless the voluntary and express consent of the data subjects has been obtained beforehand, or the personal data has been anonymised (DPP 3);
- ensuring that personal data is accurate before use (DPP 2(1));
- ensuring that personal data is adequately secured, taking into consideration the sensitivity of the personal data and potential harm arising from misuse (DPP 4); and
- securely discarding or irreversibly anonymising personal data when the original purpose of collection has been achieved (DPP 2(2)).
Data users should also minimise the personal data used in the development of the AI to lower the potential for harm. Some collection techniques suggested by the AI Guidelines include:
- limiting the collection of personal data to that absolutely necessary for the purpose of the AI;
- using anonymised, pseudonymised or synthetic data to train AI models;
- applying “differential privacy” techniques to datasets before using the datasets to train AI models;
- using federated learning for training AI models so as to avoid unnecessary pooling of data in a central database and reduce data protection risks; and
- only retaining personal data for as long as necessary for the development and use of the AI.
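Two of the techniques listed above, pseudonymisation and “differential privacy”, can be illustrated with a short Python sketch. This is purely illustrative and not drawn from the AI Guidelines: the records, salt and epsilon value are hypothetical, and a production system would use a vetted privacy library rather than this hand-rolled noise sampler.

```python
import hashlib
import math
import random

# Hypothetical training records -- names and values are illustrative only.
records = [
    {"name": "Chan Tai Man", "age": 34},
    {"name": "Wong Siu Ming", "age": 41},
]

SALT = "example-secret-salt"  # in practice, held separately from the data


def pseudonymise(record):
    """Replace the direct identifier with a salted hash, so records can
    still be linked across datasets but cannot be attributed to a person
    without the separately held salt."""
    token = hashlib.sha256((SALT + record["name"]).encode()).hexdigest()[:16]
    return {"id": token, "age": record["age"]}


def laplace_noise(scale):
    """Sample Laplace noise via the inverse CDF (standard library only)."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def dp_mean_age(recs, epsilon=1.0, age_bound=100):
    """Release the mean age with Laplace noise calibrated to the query's
    sensitivity (age_bound / n) -- a basic epsilon-differentially-private
    mechanism."""
    n = len(recs)
    true_mean = sum(r["age"] for r in recs) / n
    return true_mean + laplace_noise((age_bound / n) / epsilon)


pseudonymised = [pseudonymise(r) for r in records]
```

The point of the sketch is that both techniques reduce the privacy risk of the training set before any model sees it: the pseudonymised records carry no direct identifier, and the released statistic is perturbed so that no individual record can be inferred from it.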
Lastly, to evidence compliance, data users should ensure that proper processes are in place to demonstrate their compliance with the PDPO.
3.2. General data protection regulation
See Section 3.1 Data: Domestic data law treatment, above. The GDPR does not apply directly in Hong Kong.
3.3. Open data & data sharing
As mentioned above, DPP 3 of the PDPO prohibits the use of personal data for a new purpose which is not, or is not directly related to, the original purpose for which the data was collected, unless the data subject’s express and voluntary consent is obtained.
This is so even where the data is made publicly available, as demonstrated in the “Do No Evil” case in 2013, where a mobile application (the App) aggregated publicly available data from the Judiciary, Companies Registry and Official Receiver’s Office to facilitate due diligence searches on data subjects. The PCPD found that the App’s use of the personal data exceeded the reasonable expectations of the data subjects in respect of their publicly available information and was inconsistent with the original purposes for which the data was collected. Together with the considerations highlighted above in Section 3.1 Data: Domestic data law treatment, data users should bear DPP 3 in mind when using “open data” or “shared data”.
3.4. Biometric data: voice data and facial recognition data
While biometric data is considered “sensitive” personal data in other jurisdictions (e.g., India and China), there is no recognised concept of “sensitive” personal data under the PDPO and no additional restrictions are specifically imposed on it. Instead, under DPP 4 (data security), the potential harm to a data subject in the event of unauthorised processing is a key consideration. Accordingly, the more “sensitive” the personal data, the greater the degree of security required.
Furthermore, the PCPD has published guidelines on the collection and use of certain biometric data (Biometric Guidelines). These guidelines highlight the need for caution when handling these categories of personal data and set out practical guidance on the proper collection and use of such data.
Biometric data such as DNA samples or retinal images may be deemed more “sensitive” as wrongful disclosure would result in more adverse consequences. Thus, in accordance with DPP 4, data users should adopt a higher data protection standard when dealing with more sensitive biometric data.
The Biometric Guidelines also promulgate other biometric data-specific recommendations, including:
- where possible, retain only the biometric templates, securely destroying or discarding the original biometric samples;
- avoid storing biometric templates in a central database;
- regularly and frequently purge biometric data; and
- obtain express and voluntary consent for any new purpose.
The Biometric Guidelines also suggest conducting a Privacy Impact Assessment (PIA) when collecting and using biometric data.
Finally, when determining whether the use of biometric data is proportionate to the intended purpose, data users should consider whether:
- the measure pursues a legitimate aim;
- the measure is rationally connected with advancing that aim;
- the measure is necessary for advancing that aim; and
- a reasonable balance has been struck between the rights of the individual and the benefits of the use.
(Hysan Development Co Ltd v. Town Planning Board [2016] HKCFA 66.)
As mentioned above, while these guidelines are not binding, the PCPD may take non-compliance with these guidelines into consideration when determining whether a data user has contravened the DPPs (i.e., in the event of a complaint or breach resulting in an investigation).
Biometric data is also touched on in the AI Guidelines, which highlight the real-time identification of individuals through biometric data as an AI use case that is potentially more prejudicial and hence requires a greater degree of human oversight.
4. Bias and discrimination
4.1. Domestic anti-discrimination and equality legislation treatment
In Hong Kong, there is currently no AI-specific anti-discrimination legislation. The use of AI in certain protected areas (e.g., employment, education, provision of goods, services or facilities, disposal or management of premises etc.) will be governed by the existing anti-discrimination ordinances, namely, Sex Discrimination Ordinance (Cap. 480), Disability Discrimination Ordinance (Cap. 487), Family Status Discrimination Ordinance (Cap. 527) and Race Discrimination Ordinance (Cap. 602).
Under the anti-discrimination ordinances, there are essentially two types of discrimination – direct and indirect discrimination. “Direct discrimination” occurs where a person is treated less favourably than another person in the same or not materially different circumstances on the ground of any of the “protected attributes” (i.e., sex, marital status, pregnancy, breastfeeding, disability, race or family status), whether or not it is the dominant reason. The intention of the discriminator is irrelevant. For example, if a tertiary institution uses AI to screen out candidates of a particular race, this will give rise to direct race discrimination.
“Indirect discrimination” occurs where (i) a requirement or condition is applied equally to all, but a smaller proportion of persons with a protected attribute can comply with it; (ii) the condition or requirement is not justifiable; and (iii) the person who cannot comply with it suffers a detriment. For instance, if an employer uses an algorithmic decision-making system to make promotion decisions and this applies a condition or requirement with which a smaller proportion of female employees can comply, so that fewer female employees are promoted, this may give rise to indirect sex discrimination unless the employer can justify the requirement or condition. In determining whether a condition or requirement is justifiable, the Court will consider whether the objective is legitimate and whether the means used to achieve it are reasonable, balancing the impact on the individual against the reasonable needs of the AI user (e.g., the employer).
In certain contexts, for example employment, employers will also need to ensure that the use of AI does not contravene employment-related protections. These include ensuring that members of a trade union are not discriminated against on the ground of trade union activities (protected under the Employment Ordinance (Cap. 57)) and that a person is not discriminated against on the basis of a spent conviction (protected under the Rehabilitation of Offenders Ordinance (Cap. 297)).
5. Trade, anti-trust and competition
In Hong Kong, competition is primarily regulated by the Competition Ordinance (Cap. 619), which is largely influenced by and modelled on overseas statutes. The Competition Ordinance prohibits restrictions on competition through three competition rules:
- The First Conduct Rule prohibits any two or more undertakings from engaging in an agreement, a concerted practice, or a decision of a trade association, which has the object or effect of restricting competition. Examples of serious anti-competitive conduct include price-fixing, restriction of output, bid-rigging, and market-sharing. It is also well settled that the exchange of commercially sensitive information amongst competitors can give rise to a concerted practice.
- The Second Conduct Rule prohibits an undertaking with a substantial degree of market power from abusing that power with the object or effect of restricting competition.
- The Merger Rule prohibits anti-competitive mergers and acquisitions, but this is currently limited in its application and only covers undertakings holding a carrier licence within the meaning of the Telecommunications Ordinance (Cap. 106).
The Hong Kong Competition Commission (HKCC) and the Competition Tribunal are responsible for enforcing the Competition Ordinance and imposing sanctions for contravention of the provisions in the Competition Ordinance respectively.
5.1. AI related anti-competitive behaviour
The emergence of AI and machine learning is starting to impact antitrust enforcement globally and we can see the potential for this to inform HKCC’s enforcement priorities under the First and Second Conduct Rules in Hong Kong.
For the purposes of the First Conduct Rule, issues may arise in relation to what has come to be described as "algorithmic collusion", which might be explicit collusion, where several competitor undertakings adopt a common pricing algorithm, or tacit collusion, where independent self-learning machines somehow "learn" to collude. This phenomenon was considered in the UK case involving online sales of posters and frames (the Trod case, CMA Case 50223, Online sales of posters and frames, 12 August 2016), where online retailers relied on automated repricing software. The Competition and Markets Authority found that the parties had infringed competition law by participating in an agreement not to undercut each other's prices. Given the approach to information exchange under the First Conduct Rule, the use of algorithms is an area which might well attract the attention of the HKCC in the future.
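The "do not undercut" logic at the heart of such cases can be reduced to a few lines of code. The function below is a hypothetical sketch of that kind of repricing rule, not the software actually used by the parties in the CMA case; the cost, price and margin figures are illustrative only.

```python
def reprice(own_cost, competitor_price, floor_margin=0.10):
    """Illustrative repricing rule of the kind alleged in algorithmic
    collusion cases: match the competitor's price but never price below
    it, subject only to a minimum margin over cost."""
    minimum = own_cost * (1 + floor_margin)  # never sell below cost + margin
    # max() ensures the price tracks the competitor without undercutting
    return max(competitor_price, minimum)
```

When both parties configure their software in this way, neither ever prices below the other, and prices stabilise without any further human contact; it was this automated implementation of an agreement not to undercut that infringed competition law.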
With regard to the Second Conduct Rule, competition concerns may arise when AI is utilised in the provision of goods or services in a digital market. There has been a particular focus internationally on large tech firms (including search engines, online retailers and social networks, especially multi-sided internet platforms), in which consumers provide personal data on one side of the platform in exchange for a particular service, with the personal data then used to provide a different service to the other side of the platform. Network effects can result in a concentration of market power and potentially lead to abuse of that power.
5.2. Domestic regulation
The Competition Ordinance does not contain any provisions which specifically cover or even mention AI and, to date, none of HKCC’s various publications (including its various published Guidelines and Enforcement Policy) have touched on the question of AI either. Nevertheless, the proliferation of AI is a global phenomenon and Hong Kong is unlikely to be spared from having to address the issues accompanying the use of algorithms.
6. Domestic legislative developments
For the time being, there is no proposal on the horizon to legislate for the development and use of AI in Hong Kong. However, there are indications that the Hong Kong Government is conscious of the potential that AI presents and the need for oversight.
For example, the Hong Kong Monetary Authority (HKMA) issued two circulars relating to AI in November 2019, namely “High-level Principles on Artificial Intelligence” and “Consumer Protection in respect of Use of Big Data Analytics and Artificial Intelligence by Authorized Institutions”. Noting that banks are increasingly adopting or planning to adopt AI and big data analytics applications, the circulars provide guiding principles which banks are expected to take into account when designing and adopting such applications.
More recently, in the Government’s public consultation on updating the copyright regime, which concluded in February 2022, the Government recognised that copyright issues relating to AI have generated substantial international discussion. In view of the differing views and ongoing debate at the international level, the Government will continue to study the issue and revisit it in the future.
7. Frequently asked questions
1. What can we do to protect our new AI technology at the development stage?
Regardless of the field of application, it is crucial to ensure that the ownership of the technology and any related intellectual property rights will fully vest in you and that the technology is kept confidential. This includes clearly defining the background and foreground IP involved, including the AI-generated works, and having robust ownership and confidentiality clauses in employment agreements where the technology is developed in-house and/or in collaboration agreements where external research institutions or technology companies are involved.
2. Can we use copyright materials that are published online for the purpose of training our AI?
There is no specific exception to copyright infringement under Hong Kong copyright law for the training of AI or for text and data mining. It would be prudent to identify the copyright owner of the training materials (for example, a research institution or a copyright collection society) and seek a licence to use the copyright work.
3. Do we need to conduct a privacy impact assessment when utilising new AI technology in our business?
There is no strict requirement to conduct a privacy impact assessment when utilising new AI technology in business, though one is highly recommended if the new AI technology would have a significant impact on personal data privacy (e.g., where the collection is intrusive or involves a large number of data fields or volume of data).