The regulation of artificial intelligence (AI) has started to take concrete form in Canada. The general policy aspirations that underlie today’s Canadian AI regulatory landscape can be found articulated at a high level in the Montreal Declaration for a Responsible Development of Artificial Intelligence (the Declaration) published in 2018 by the Université de Montréal. The Declaration aimed to spark collective dialogue on ethical issues surrounding AI and proposed an ethical framework to guide the development of AI towards morally and socially desirable ends. Several of the principles in the Declaration were echoed in the Directive on Automated Decision-Making (the Directive) issued by the Treasury Board of Canada in 2019, which mandated federal institutions to conduct impact assessments prior to integrating AI into decision-making processes in order to ensure transparency, accountability, legality, and procedural fairness. Although the impact of the Directive on businesses was limited to companies providing technology to the federal government, it foreshadowed the arrival of new bills that would contain provisions addressing the use of AI in the private sector. On June 16, 2022, the federal government introduced Bill C-27, titled The Digital Charter Implementation Act, 2022. One of three pillars of the bill is the introduction of the Artificial Intelligence and Data Act (AIDA). If adopted, the AIDA would become Canada’s first law specifically dedicated to regulating AI.
1. Constitutional law and fundamental human rights
1.1. Domestic constitutional provisions
In Canada, the powers of the federal government and of the provinces and territories are defined in the Constitution Acts, 1867 and 1982, which form part of the “Constitution of Canada”. The Constitution is the supreme law of Canada, and any law that is inconsistent with its provisions is, to the extent of the inconsistency, of no force or effect.
The formal structure of the Constitution suggests that the various legislative bodies are each confined to their own jurisdiction and act independently of each other. However, effective policies often require joint or coordinated action. This is particularly true where human rights call for nationwide minimum standards. In Canada, human rights are protected by both federal and provincial legislation. We discuss the Charter in this section and the impact of provincial human rights legislation below in Section 4 (Bias and discrimination).
The Canadian Charter of Rights and Freedoms (the Charter) is Canada’s most important human rights instrument. It sets out the rights and freedoms that Canadians believe are necessary in a free and democratic society. Sections 7, 8, 9 and 15 of the Charter (referenced below) are likely to play a prominent role in ensuring that AI, and particularly government-controlled AI, is developed in a manner that respects fundamental human rights.
1.2. Human rights decisions and conventions
Section 7 of the Charter asserts the right of everyone not to be deprived of “life, liberty and security of the person” except in accordance with the principles of fundamental justice. Liberty interests are at stake when a law imposes the penalty of imprisonment, or where a prisoner’s residual liberty is restricted through transfer to a more secure institution (May v. Ferndale Institution, 2005 SCC 82). Security of the person has been interpreted to encompass health, safety and personal autonomy (R. v. Morgentaler (No. 2), [1988] 1 S.C.R. 30; Carter v. Canada, 2015 SCC 5). Applications of AI that could offend section 7 of the Charter include using AI to assess recidivism risk in prisoners, or to evaluate priority of access to life-saving care.
Section 8 of the Charter asserts the right to be “secure against unreasonable search or seizure”, which limits the techniques available to the police to look for and obtain evidence of wrongdoing. Privacy rights underlie the protection against unreasonable search or seizure (Hunter v. Southam, [1984] 2 S.C.R. 145). For example, the Supreme Court of Canada has held that the seizure of personal information, such as medical information, by the authorities without a warrant is an unreasonable seizure (R. v. Dersch, [1993] 3 S.C.R. 768). Personal information has also been held to include electronic information such as the names and addresses associated with ISP subscriber information (R. v. Spencer, 2014 SCC 43).
Section 9 of the Charter asserts the right not to be “arbitrarily detained or imprisoned”, including by police for the purpose of investigation. The right not to be arbitrarily detained places limits on investigative detention by the police: the police may briefly detain a person who may be implicated in a recent or ongoing criminal offence only if they have a “reasonable suspicion” (R. v. Mann, 2004 SCC 52). Police departments that use AI systems involving profiling or facial-recognition technologies, for example, will have to meet the burden of showing reasonable grounds to suspect that an individual is connected to a particular crime. Moreover, biased data that leads to inadvertent racial profiling may result in unconstitutional detention, since the Supreme Court of Canada has held that the use of racial profiling in detaining an individual is relevant in determining whether the detention is arbitrary (R. v. Le, 2019 SCC 34).
Finally, section 15 of the Charter asserts the right to equality before the law and to equal protection and equal benefit of the law without discrimination. In Andrews v. Law Society of B.C., [1989] 1 S.C.R. 143, the Supreme Court of Canada held that section 15 of the Charter requires substantive and not merely formal equality. Therefore, it applies to laws that are:
- discriminatory on their face;
- discriminatory in their effect; and
- discriminatory in application.
Substantive equality allows the court to consider and identify adverse effects on a class of persons distinguished by a listed or analogous personal characteristic in otherwise facially neutral laws (Fraser v. Canada, 2020 SCC 28).
Given the breadth of the above-referenced Charter protections, AI systems that involve profiling or discriminatory outcomes based on any of the Charter-protected attributes, or that impede upon any of the other referenced fundamental rights, will be subject to particular scrutiny.
2. Intellectual property
The law of intellectual property relating to AI has not developed significantly as of the date of this writing. Two key areas (patent and copyright) are premised on the idea that an inventor or an author is a person, as opposed to another autonomous form of intelligence. However, legislators are actively considering amendments to these regimes to address the challenges of AI technologies, notably in the copyright space.
2.1. Patents
As in other jurisdictions, Canada’s Patent Act only permits the patenting of certain subject matter. Notable exclusions are “mere scientific principle[s] or abstract theorem[s]” (see section 27(8)) that do not combine in a physical form, or that lack physical effects relating to the manual or productive arts. The Canadian Intellectual Property Office (CIPO) has developed specific practices on “computer-implemented inventions” in its Manual of Patent Office Practice (Manual). In the Manual, CIPO states that computer programs, data structures and computer-generated signals cannot, per se, be claimed as patentable subject matter. In a 2020 guidance document, CIPO considers data structures to be abstract ideas that are unpatentable. Consequently, algorithms (including AI systems) must combine into a physical form, or improve the functioning of a computer, to be considered patentable subject matter in Canada. By way of example, CIPO suggested that a medical diagnostic method would be patentable if it combined an algorithm establishing correlations between results with physical steps, such as performing medical tests and analysing samples. While the algorithm alone would not be patentable, the diagnostic method incorporating it could be protected under existing Canadian law.
On the question of whether an autonomous AI system could be an inventor, Canadian courts and legislators have not yet made a clear determination, and the Patent Act does not define the key terms “inventor” and “person” in such a way as to include or exclude non-human inventors. Given the split in other countries on AI being a sole or joint inventor of a patentable invention, guidance will be needed either through legislative reform or a test case.
2.2. Copyright
In Canada, the Copyright Act governs copyright and protects, notably, rights in software, including source and object code, as well as databases, all of which are important for AI technologies. The Government of Canada has yet to update the Copyright Act to address AI specifically, but in 2021 it published a consultation paper to shed light on how the challenges created by the intersection of copyright and AI might be addressed in future amendments to the Act.
Under current jurisprudence interpreting the Act, it is unlikely that an AI system would be considered an “author” of works it generates. Copyright terms are tied to the lifespan of the human author(s), and the concept of “moral rights” in Canadian law presupposes a human author with certain inalienable interests connected to that person’s honour or reputation. Canadian courts have thus stated confidently that “a human author is required to create an original work for copyright purposes”, while accepting that human intelligence can be significantly assisted by computer systems. What is required is more than “automation”: a modicum of “skill and judgment” in assembling the work with the help of technology (Geophysical Service Inc. v Encana Corp., 2016 ABQB 230 at paragraphs 88-95 (www.canlii.ca/t/gppg3#par88)).
The Consultation Paper raises three possible approaches that a future Copyright Act might take:
- make AI-generated works ineligible for protection (i.e., keep those works in the public domain);
- attribute authorship to the human(s) behind the AI; and
- permit “authorless” protection (i.e., no moral rights would attach).
The Consultation Paper also deals with:
- Text and data mining permissions relating to the complexity of obtaining a vast number of authorizations for large quantities of data used to feed AI systems. There is at least the possibility of a targeted exception applying to facilitation of copying for the purpose of informational analysis.
- Infringement and liability issues surrounding who might be liable for infringing AI technology. Culpability for copying becomes less clear as human involvement decreases.
2.3. Trade secrets/confidentiality
In Canada, trade secret law is based on the common law of confidential information (except in Quebec, where civil law principles apply). Algorithms, including AI systems, can be protected as confidential information (as can formulae, compilations of information, techniques, processes and patterns), typically through confidentiality agreements and control measures. Unlike patent law, the law of confidential information requires no registration and is not limited to specific subject matter. Obtaining or communicating a trade secret by fraudulent means is a crime under section 391(1) of the Criminal Code.
3. Data
3.1. Domestic data law treatment
Canadian data protection laws have historically been based on the fair information principles (FIPs), but they are likely to undergo important changes in the short to medium term to bring them up to par with current international standards, in particular the EU’s General Data Protection Regulation (GDPR). Notably, Canadian legislators have expressed a desire to ensure Canada retains the favorable adequacy status currently granted to it by the European Union (EU).
The Canadian federal government has in recent years put data governance at the forefront with the publication in 2019 of Canada’s Digital Charter. Its stated goal is to make Canada a “competitive, data-driven, digital economy”, notably to foster AI innovation. Following the adoption of this Charter, the federal legislature launched a project to replace the Personal Information Protection and Electronic Documents Act (PIPEDA). Bill C-11, introduced in 2020, would have created the Consumer Privacy Protection Act (CPPA) and the Personal Information and Data Protection Tribunal Act (PIDPTA), but the bill failed to pass prior to the dissolution of Parliament before the last federal election. The reform has since been re-introduced in the current parliamentary session as part of Bill C-27, discussed in Section 6 below.
3.2. General data protection regulation
The applicable data protection laws for the private sector are PIPEDA and the substantially similar Provincial statutes:
- Alberta Personal Information Protection Act, SA 2003, c P-6.5.
- British Columbia Personal Information Protection Act, SBC 2003, c 63.
- Quebec Act Respecting the Protection of Personal Information in the Private Sector, CQLR c P-39.1 (Private Sector Act).
There are also a variety of Provincial statutes that apply to Personal Health Information.
The Province of Quebec recently enacted Bill 64 to amend its privacy laws, which will come into force gradually between September 2022 and September 2024. This Bill notably amends Quebec’s Private Sector Act to bring it closer to the GDPR model. Like the GDPR, Bill 64 also includes provisions that apply to certain types of AI systems: it is the first Canadian law to subject automated decision-making based on personal information to transparency requirements, including regarding the factors that led to a decision (section 12.1).
3.3. Open data & data sharing
Canada has introduced a wide variety of open data initiatives to better streamline the secure sharing of personal information. At the Provincial level, for instance, the Province of Quebec introduced Bill 19, which allows for the sharing of a patient’s medical information for research purposes. At the Federal level, the Government has been examining potential avenues to bring open banking to Canada. In April 2021, the Advisory Committee on Open Banking recommended in its final report that a system of open banking be introduced to allow Canadians to safely and conveniently share their banking information with accredited third-party service providers.
PIPEDA governs the disclosure of personal information by private businesses and specifically carves out an exception permitting the use and disclosure of personal information without knowledge and consent where it is for the purpose of statistical or scholarly study or research (see subsections 7(2)(c) and 7(3)(f)). However, under PIPEDA’s Accountability Principle, private businesses that share personal information in their possession or custody remain responsible for that information even when it is transferred to a third party for processing. Private businesses should therefore put in place contractual or other safeguards to ensure personal information is properly protected and handled once transferred. Conversely, service providers that receive personal information from a client need to ensure the data they receive was obtained in compliance with consent requirements. The Office of the Privacy Commissioner of Canada (the OPC) confirmed in its 2019 AggregateIQ decision that this obligation of service providers also applies when the client is located in a foreign jurisdiction (see PIPEDA Findings #2019-004 at www.priv.gc.ca/en/opc-actions-and-decisions/investigations/investigations-into-businesses/2019/pipeda-2019-004/). In that case, AggregateIQ was providing political advertising services and had links with Cambridge Analytica; the OPC concluded that it did not appropriately verify consent.
At the Provincial level, Quebec’s Bill 64 requires service providers to use personal information only for the purpose for which it is provided, and requires businesses to ensure that any personal information they transfer out of Quebec will receive comparable protection in the jurisdiction to which it is transferred (see sections 17 and 18.3 of the Private Sector Act).
3.4. Biometric data: voice data and facial recognition data
The regulation of AI in the context of facial recognition and voice data in Canada is limited. The use of facial recognition and voice data, and of any associated AI, is constrained primarily by PIPEDA’s consent requirements.
In October 2020, the OPC reported on its investigation into the use of facial recognition technology by Cadillac Fairview at various locations across the country. The investigation found that Cadillac Fairview had used facial recognition technology, having a service provider collect and store the facial information of individuals without their proper consent. At the conclusion of the investigation, Cadillac Fairview agreed to delete the personal information that was not required for legal purposes, advised that it had ceased using the technology in July 2018, and stated that it had no current plans to resume its use. In its subsequent Clearview AI decision (2021), the OPC also found that Clearview AI, which maintained a database of billions of photos of individuals scraped from the internet without consent to train its facial recognition technology, had violated federal and provincial privacy laws, even though it was no longer offering its technology in Canada (see PIPEDA Findings #2021-001 at www.priv.gc.ca/en/opc-actions-and-decisions/investigations/investigations-into-businesses/2021/pipeda-2021-001/).
In November 2020, the OPC released its Regulatory Framework for AI: Recommendations for PIPEDA Reform. In this framework, the Commissioner took the position that consent alone is insufficient to properly regulate the use of AI in this space and made a series of recommendations. The CPPA would have implemented some of these recommendations.
Under Quebec’s Act to Establish a Legal Framework for Information Technology, organizations in Quebec must disclose the existence of a biometric database to regulatory authorities before it is brought into service. Quebec’s Bill 64 also amended the Private Sector Act to classify biometric data (which includes voice and facial data) as “sensitive personal information”, subjecting this type of data to express consent requirements.
4. Bias and discrimination
AI and machine learning systems in Canada are regulated, in part, by Canada’s various human rights statutes, which promote the equal treatment of Canadians and protect against discrimination.
Unrepresentative data, flawed assumptions and restrictive algorithmic classifications can lead to discriminatory results. In order to stay on the right side of Canada’s domestic human rights regime, AI and machine learning systems operating in the Canadian context must mitigate the risk of propagating or exacerbating bias and potentially discriminatory patterns in their decision-making.
4.1. Domestic anti-discrimination and equality legislation treatment
As referenced in Section 1, above, the Charter affords Canadians certain protections against human rights violations committed by governments in relation to governmental activities. However, it does not guarantee equality or protect against discrimination by non-governmental entities. Those protections are instead enshrined in the Canadian Human Rights Act, R.S.C. 1985, c. H-6, as well as in the various provincial and territorial human rights acts and codes.
These laws flow in large part from section 15 of the Charter (as well as Canada’s international human rights obligations) and prohibit discrimination on specific, enumerated grounds, including gender, race, age, religion, disability and family status. While varying slightly in language and scope, each promotes the equal treatment of Canadians in employment, commercial and other contexts.
Canada’s human rights laws recognize that discrimination can take various forms, including direct, indirect, constructive and/or “adverse effect” discrimination. Some statutes, including Manitoba’s Human Rights Code, C.C.S.M., c. H175, recognize “systemic discrimination”. This concept has been interpreted by the Supreme Court of Canada as referring to “practices or attitudes that have, whether by design or impact, the effect of limiting an individual’s or a group’s right to the opportunities generally available because of attributed rather than actual characteristics” (Moore v. British Columbia (Education), [2012] 3 S.C.R. 360, referencing Canadian National Railway Co. v. Canada (Canadian Human Rights Commission), [1987] 1 S.C.R. 1114).
The Supreme Court has defined discriminatory distinctions as ones “based on grounds relating to personal characteristics of the individual or group” that create “burdens, obligations, or disadvantages on such individual or group not imposed upon others…” (Andrews v. Law Society of British Columbia, [1989] 1 S.C.R. 143). AI systems are often built around categorized sets of data, and algorithms that classify individuals based on gender, race, age or any other enumerated human rights ground pose a far greater risk of generating discriminatory outcomes under Canadian law (and triggering a corresponding human rights claim) than those that avoid such classifications.
As referenced further in Section 6 (Domestic legislative developments) below, the Law Commission of Ontario (LCO) has recently called for comprehensive AI regulatory reform, in Ontario and across the country, including to better ensure that AI systems comply with Canada’s human rights laws.
5. Trade, anti-trust and competition
Thus far, no legislation specifically addresses the intersection of AI and competition law in Canada, nor have there been any enforcement actions or litigation. However, as seen in other jurisdictions, AI may raise competition issues. While the Competition Bureau (Bureau) has released some high-level guidance on the topic, primarily focusing on AI in the context of the digital economy, it has previously stated that the principles-based nature of the Competition Act (Act) is often sufficient to address AI-related harms. That said, recent statements from the government have confirmed that the Act is being reviewed to consider how best to tackle “today’s digital reality”.
5.1. AI related anti-competitive behaviour
As referenced above, AI has the potential to raise a myriad of competition issues. Three key issues are outlined below in turn.
Collusion
Many have expressed concern that AI may both facilitate collusion between competitors and make such collusion harder for regulators to detect. Types of potential algorithmic collusion include the messenger conspiracy (where companies use algorithms to implement a cartel) and the algorithm-enabled hub-and-spoke conspiracy (where firms use the same algorithm and centralize their decisions through the same company offering algorithmic pricing). However, in its 2018 “Big Data” publication, the Bureau stated that it does not believe computer algorithms require a rethinking of competition law enforcement (see www.competitionbureau.gc.ca/eic/site/cb-bc.nsf/eng/04342.html). Irrespective of the medium used to facilitate the agreement, any agreement between competitors to fix prices, allocate markets or lessen supply contravenes section 45 of the Act and is a criminal offence.
Abuse of Dominance
Section 79 allows the Competition Tribunal, upon application by the Bureau, to issue a prohibition order and monetary penalties where one or more dominant firms have engaged in a practice of anti-competitive acts, and the practice has had, is having, or is likely to have the effect of preventing or lessening competition substantially in a market. It is worth noting that, at the time of writing, proposed amendments to the Act that would allow private parties to bring abuse of dominance applications have been introduced.
AI raises two key issues in respect of abuse of dominance in Canada. First, AI can be instrumental in establishing dominance, particularly in digital markets. AI can be a differentiator among firms in terms of how quickly they are able to respond to market changes, how easily and accurately they can forecast and interpret data, how efficiently they are able to develop better and/or cheaper products that respond to consumer preferences, etc. This can all contribute to the accumulation of market power by a single firm. Second, AI may also be used for abusive purposes. For example, digital platforms may use AI to gate-keep or self-preference. In fact, the Bureau initiated an investigation into Amazon’s self-preferencing tactics in 2020, which is still in progress.
Mergers
Under the Act, the Bureau reviews mergers to determine whether they are likely to harm competition in a market in Canada. With the advent of AI technology, the Bureau must now determine how to assess the value of data and AI in merger reviews, including the impact on economies of scope and scale, how data and AI can be leveraged in vertical and conglomerate mergers, and the added challenge of predicting competitive effects in a constantly evolving space.
Based on past statements, the Bureau generally believes the merger provisions of the Act are sufficiently flexible to account for mergers in technologically advanced markets. However, the Bureau is advocating for amendments that may affect how it considers AI-related harms. Notably, the Bureau has asked for the removal of the efficiencies defence from the Act, so that merging parties can no longer rely on the anticipated efficiencies of a merger to offset the merger’s anti-competitive effects.
5.2. Domestic regulation
Currently, there is no specific legislation or regulation that addresses AI in the competition context in Canada. The Act relies on general, flexible standards for assessing conduct, rather than industry or subject-specific rules. However, legislative reforms are anticipated in the near term and some changes may touch upon AI-related issues.
6. Domestic legislative developments
Canada’s Artificial Intelligence and Data Act
Following a global trend in artificial intelligence regulation, Canadian legislators appear poised to move beyond policy frameworks and adopt hard law (see www.mccarthy.ca/en/insights/blogs/snipits/artificial-intelligence-canadian-and-international-trends). As noted above, one of the three pillars of Bill C-27, The Digital Charter Implementation Act, 2022, introduced on June 16, 2022, is the Artificial Intelligence and Data Act (AIDA), which, if adopted, would become Canada’s first law specifically dedicated to regulating AI.
The focus of the AIDA is on “regulated activities,” defined as “designing, developing or making available for use an artificial intelligence system or managing its operations” as well as “processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system” in the course of international or interprovincial trade and commerce. The AIDA also sets out specific requirements for “high-impact systems”, a subset of AI systems. This approach of governing AI systems according to their level of risk is likely inspired by the EU’s risk-based approach. While the AIDA does not define what constitutes a high-impact system, it states that the federal government will establish a formal definition through regulation at a later date.
Below is an overview of the AIDA’s most notable new requirements.
- System assessment. Anyone responsible for an AI system must assess whether it is a high-impact system, according to criteria to be provided by the federal government through regulation.
- Risk management. Anyone who is responsible for a high-impact AI system must establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system.
- Monitoring. Anyone responsible for a high-impact AI system must establish measures to monitor compliance with the risk management measures, as well as their effectiveness.
- Data anonymization. Anyone carrying out a regulated activity and who processes or makes available for use anonymized data in the course of that activity must establish measures with respect to the manner in which the data is anonymized and the use or management of anonymized data.
- Record keeping. Anyone carrying out a regulated activity must keep records describing in general terms the measures they establish for risk management, monitoring, and data anonymization as well as the reasons supporting their assessment of whether a system is a “high-impact system.”
- Publication requirements for developers. Anyone who manages or makes available for use a high-impact system must publish, on a publicly available website, a plain-language description of the system that includes an explanation of:
- how the system is intended to be used;
- the types of content that it is intended to generate and the decisions, recommendations or predictions that it is intended to make;
- the mitigation measures set up as part of the risk management measures requirement; and
- any other information prescribed by regulation.
- Notification in event of harm. Anyone responsible for a high-impact system must notify the Minister as soon as possible if the use of the system results in, or is likely to result in, material harm.
Significant penalties for non-compliance
The AIDA would also introduce significant penalties, greater in magnitude than those found in Bill 64 or the EU’s General Data Protection Regulation:
- Administrative monetary penalties. The federal government will have the ability to establish an administrative monetary penalty scheme for violations of the AIDA and regulations made under it.
- Fines for breaching obligations. It is an offence to contravene the AIDA’s governance or transparency requirements. Doing so can result in fines of up to the greater of CAD 10,000,000 and 3% of gross global revenues or, where prosecuted as a summary offence, up to the greater of CAD 5,000,000 and 2% of gross global revenues.
- New criminal offences related to AI systems. The AIDA proposes creating new criminal offences:
- knowingly using personal information obtained through the commission of an offence under a federal or provincial law to make or use an AI system;
- knowingly or recklessly designing or using an AI system that is likely to cause harm and the use of the system causes such harm; and
- causing a substantial economic loss to an individual by making an AI system available for use with the intent to defraud the public.
For businesses, committing these offences can result in a fine of up to the greater of CAD 25,000,000 and 5% of gross global revenues or, where prosecuted as a summary offence, up to the greater of CAD 20,000,000 and 4% of gross global revenues.
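All of the AIDA’s fine ceilings described above share the same structure: the greater of a fixed amount and a percentage of the offender’s gross global revenues. As a rough illustration only (the function name and the revenue figure below are hypothetical, not drawn from the bill):

```python
def fine_ceiling(fixed_amount: float, pct: float, gross_global_revenue: float) -> float:
    """Maximum fine under a "greater of a fixed amount and a percentage
    of gross global revenues" formula, as used in the AIDA's penalty
    provisions."""
    return max(fixed_amount, pct * gross_global_revenue)

# Governance/transparency breach prosecuted by indictment: greater of
# $10,000,000 and 3% of gross global revenues. For a hypothetical firm
# with $500,000,000 in gross global revenues, the 3% limb governs:
print(fine_ceiling(10_000_000, 0.03, 500_000_000))  # 15000000.0
```

For smaller firms the fixed amount governs instead: at $100,000,000 in gross global revenues, 3% is only $3,000,000, so the ceiling stays at $10,000,000.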
Considering the anticipated long road ahead before the EU’s proposed Artificial Intelligence Act is finalized and adopted by the European Parliament and Council, it is conceivable that Canada could leapfrog the EU to become the first jurisdiction in the world to adopt a comprehensive legislative framework for the responsible deployment of AI.
7. Frequently asked questions
1. Do individuals in Canada have a right not to be subject to automated decision-making?
Unlike article 22 of the EU GDPR, Quebec’s Bill 64 does not contain a specific right not to be subject to an automated decision. Instead, Bill 64 provides for rights of transparency and explainability, to correct errors in personal information used to make the automated decision, and to submit observations.
2. What are the transparency and explainability obligations applicable to automated decision-making in Canada?
According to Quebec’s Bill 64, “any person carrying on an enterprise who uses personal information to render a decision based exclusively on an automated processing of such information must inform the person concerned accordingly, not later than at the time it informs the person of the decision”. At the person’s request, the organization must inform him or her “of the personal information used to render the decision and of the reasons and the principal factors and parameters that led to the decision”.
3. Is the use of biometric information subject to incremental regulatory obligations in Canada?
Yes. In July 2020, Quebec’s privacy regulator, the Commission d’accès à l’information (CAI), updated its guide on the use of biometrics, titled Biometrics: Principles and Legal Duties of Organizations. This guide specifically applies where biometry is used for identification or authentication purposes. The CAI defines biometry as “the set of techniques used to analyze one or more of a person’s unique physical, behavioral or biological characteristics in order to establish or prove his or her identity”. Biometric data is personal information (PI), whether in the form of a static print (fingers, hand geometry, voice, keyboard strokes, etc.) or dynamic (animated images or prints, or images or prints with a time dimension). It also includes biometric data in digital form or translated into codes “that are derived from the images by means of an algorithm”. The CAI also considers biometric data a highly sensitive form of PI that is subject to express consent requirements.
Moreover, requirements to disclose biometric characteristics databases are found in Quebec’s Act to establish a legal framework for information technology, at sections 44 and 45, both of which were modified by Bill 64. Section 44 now provides that an organization cannot use biometric data to verify or confirm an individual’s identity except:
- if such verification or confirmation was previously disclosed to the CAI; and
- with the express consent of the individual concerned.
Section 45 now also provides that the creation of a database of biometric characteristics must be disclosed to the CAI “promptly and not later than 60 days before it is brought into service”. The CAI also has the power to make orders to determine how the database is set up, used, consulted, released and retained, and how the data is to be archived or destroyed. The CAI can suspend or prohibit the bringing into service or order the destruction of the database, if the database does not respect its orders or constitutes an invasion of privacy.