On 14 September 2018, the Ministry of Economic Development (Ministry) launched a call for applications from relevant experts to form a group of 30 members responsible for drafting a National Strategy on Artificial Intelligence. The group began its activities on 18 January 2019, focusing mainly on:
- research and investments;
- data use and exploitation;
- regulatory framework and ethical impacts; and
- enhancement of public administration services.
The group of experts drafted a document containing proposals for a National Strategy for Artificial Intelligence, published by the Ministry in July 2020 (www.mise.gov.it/index.php/it/198-notizie-stampa/2041246-intelligenza-artificiale-online-la-strategia). A public consultation on this strategy was launched in October 2020.
Additionally, on 24 November 2021, a Strategic Programme for Artificial Intelligence (AI) 2022-2024 was published, as a result of the joint work of the Ministry of Education, University and Research, the Ministry of Economic Development and the Minister for Technological Innovation and Digital Transition (https://assets.innovazione.gov.it/1637777513-strategic-program-aiweb.pdf).
1. Constitutional law and fundamental human rights
A possible adverse impact on fundamental human rights has been one of the first concerns at the domestic level. The group of experts mentioned above identified four main principles in the National Strategy for Artificial Intelligence to this end:
- respect for individuals’ autonomy (AI systems must not compromise the self-determination of users);
- harm prevention;
- fairness (fair distribution of benefits stemming from AI as well as related costs); and
- explainability (i.e., decision-making transparency).
Moreover, under the above-mentioned Strategic Programme, the Italian government has committed to governing AI and mitigating its potential risks, particularly to safeguard human rights and ensure an ethical deployment of AI.
1.1. Domestic constitutional provisions
Even though there are no specific Italian provisions relating to AI systems, the Italian Constitution sets out various principles and provisions which AI can affect. These include the fundamental principles (such as those aimed at recognising and protecting the inviolable rights of individuals, equality among citizens, and effective access to work and fair working conditions) and the main civil, political, economic and social rights and freedoms (for example, personal freedom, inviolability of the domicile and of correspondence, freedom of movement, freedom of association, freedom to profess religious beliefs, freedom of expression, the right of defence, and so on). According to scholars, AI systems are capable of both strengthening and interfering with these principles, rights and freedoms. Accordingly, AI could drive the evolution of constitutional protections by giving rise to new rights and freedoms, such as the right to transparency regarding the rationale and logic behind any AI decision, and the right not to be subject to a decision based solely on automated processing.
1.2. Human rights decisions and conventions
The Italian Council of State (the highest administrative court) has ruled on algorithmic decision-making by the public administration that affects individual rights and freedoms, even though those decisions concern algorithms in general, i.e., not strictly AI-related ones. Reiterating principles laid down at the international and European level, such as the European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), the Court clarified that decisions based on fully automated systems should comply with the following principles:
- transparency of the algorithm must be guaranteed in all aspects, including the identity of its creators, the process used for its creation, and the decision mechanism, together with the priorities identified in the evaluation process and the data selected as relevant;
- the decision must not result solely from the automated process, i.e., a minimum of human intervention must always be ensured; and
- non-discrimination must be ensured, meaning that the controller should use appropriate mathematical or statistical procedures for profiling and put in place appropriate technical and organisational measures, also to avoid inaccuracies and minimise the risk of errors (Consiglio di Stato, sez. VI, 13/12/2019, decision no. 8472).
Italy has signed and ratified the Council of Europe’s Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Strasbourg, 28.1.1981).
2. Intellectual property
Artificial intelligence (AI) raises new legal issues concerning the non-human creation of creative and original content: whether such content is eligible for protection, and who should be granted the relevant rights.
According to art. 45, paragraph 2 of the Industrial Property Code (Legislative Decree no. 30 of 10 February 2005 (IP Code)), an AI system, like any other mathematical method and/or computer program, is not an invention and therefore is not patentable, unless certain requirements are satisfied:
- it produces a technical effect resulting from its execution that goes beyond the normal interaction between the software and the computer; and
- it is original and the result of an intellectual creation.
A different issue concerns the ownership of patent rights for inventions created by AI.
Under the IP Code, moral rights in the patent may be attributed only to a natural person and can be exercised only by the inventor (and, after the inventor’s death, by his or her descendants).
Attribution of rights arising from the invention to a person other than the inventor/natural person is allowed only in limited exceptions. For example, in respect of employee inventions, the rights deriving from the invention belong to the employer, and for a research team’s invention the rights belong to the relevant research entrepreneur.
In the case of a computer-implemented invention, the right to the patent therefore belongs to the employer or the research entrepreneur who owns the AI.
For the protection of works, Italian Copyright Law (Law no. 633/1941) requires a creative character as well as originality, as a result of the author’s intellectual creation. Protection is granted only to human intellectual creations, with the author – a natural person – being the only eligible creator.
The main obstacle for AI creations to be copyright eligible is, therefore, the absence of human creativity.
Neighbouring rights seem the most appropriate way to exploit AI commercial benefits, as they offer more adequate and effective instruments for the protection of the interests involved.
The legal protection of such works could therefore be configured as a sui generis right, similar to the existing right for non-original databases (meaning databases whose contents have been obtained, verified or presented through substantial investment).
The question is still open in Italy, and such a sui generis right, i.e., a specific property right, might be a possible solution for protecting these computer-generated works as assets resulting from substantial financial and business investments by those who funded them, rather than as purely creative works.
2.3. Trade secrets/confidentiality
As an alternative to patent protection, AI algorithms could be protected as secret information.
Trade secrets are governed by articles 98 and 99 of the IP Code and are defined as business information and technical and industrial experience, including commercial information, subject to the legitimate control of the owner, where such information is secret, has an economic value and is subject to reasonable measures in order to maintain secrecy.
Unlike patents, no application or registration is required to obtain trade secret protection when the above requirements are satisfied.
3. Data
In 2019, the main Italian regulatory authorities (the Data Protection Authority (the Garante), the Competition Authority (ICA) and the Communications Authority) concluded a joint inquiry on Big Data, in which they highlighted that artificial intelligence – especially machine learning – is a powerful tool for extracting new knowledge from big data.
3.1. Domestic data law treatment
At the domestic level, personal data are covered by a comprehensive set of rules under Legislative Decree no. 196/2003. As for non-personal data, there is no specific legislation, except for open data (see Section 3.3 Data: Open data & data sharing below).
In 2016, the Garante found it unlawful for a platform to collect several categories of personal data from sources other than the data subjects for the purpose of calculating a ‘reputational rating’ of those data subjects, as it was likely to have a serious impact on their (professional and private) lives. The Supreme Court of Cassation confirmed the Garante’s decision, stating that – even though this activity might not in principle be considered unlawful – the consent of those subjects could not be considered legitimately obtained where the algorithm’s execution logic is unknown to the data subjects.
3.2. General Data Protection Regulation
Legislative Decree no. 196/2003 includes various provisions that specify and supplement the EU’s General Data Protection Regulation: for instance, authorisations and requirements relating to the processing of special categories of personal data, as well as various criteria for the application of administrative sanctions.
3.3. Open data & data sharing
The legislation in force (Legislative Decree no. 82/2005 and Legislative Decree no. 36/2006) aims at fostering free access to the data owned by public administrations, public enterprises and certain private companies providing public services. These entities must create, collect, store and make their data available and accessible via information and communication technologies, so that they can be used both by other public administrations and by private entities and individuals. To this end, the Italian Government has adopted a national data strategy and is creating a digital national data platform. The use of such data should be free of charge, except for the possibility for the owner of the data to recover the costs incurred for copying, making available and disclosing the documents, as well as for the anonymisation of personal data or for further data protection measures. This is without prejudice to the limits to the accessibility and disclosure of data laid down under applicable legislation.
3.4. Biometric data: voice data and facial recognition data
Pursuant to Law Decree no. 139/2021, converted into Law no. 205/2021, it is forbidden to install and use, in public places or places open to the public, video surveillance systems with facial recognition technologies based on the processing of biometric data. This ban applies to both private entities and public authorities and will last until 31 December 2023 or, in any case, until a specific set of rules covering this area is adopted.
In February 2022, the Garante ruled on a case involving the processing of biometric data: it fined the US-based company Clearview AI for carrying out biometric monitoring activities via its AI systems, namely for creating profiles using facial images extracted from public web sources via web scraping, without an appropriate legal basis and in a non-transparent manner.
4. Bias and discrimination
To date, there is no specific domestic regulation in Italy focusing on bias and discriminatory practices in the field of AI.
4.1. Domestic anti-discrimination and equality legislation treatment
AI-related acts of discrimination are potentially covered by a range of existing legislation based on the general principle laid down under article 3 of the Italian Constitution – according to which “all citizens shall have equal social dignity and shall be equal before the law, without distinction of gender, race, language, religion, political opinion, personal and social conditions” – and on European Union law provisions (mainly relating to direct and indirect discrimination). Accordingly, there are specific rules on equal treatment and bans on direct and indirect discriminatory practices with reference to:
- immigration law (Legislative Decree no. 286/1998);
- access to work, goods and services, home, healthcare, education and social security (Legislative Decree no. 215/2003);
- work and working conditions (Legislative Decree no. 216/2003; Law no. 300/1970; Legislative Decree no. 276/2003); and
- disabilities (Law no. 67/2006).
Further rules have been laid down with reference to gender equality in working contexts, in particular article 37 of the Italian Constitution, according to which “Working women are entitled to equal rights and, for comparable jobs, equal pay as men. Working conditions must allow women to fulfil their essential role in the family and ensure appropriate protection for the mother and child”.
Additionally, it is again worth mentioning Italian case law on algorithmic discrimination – even though this is not related to AI – in the area of the treatment of workers and tender procedures. A first instance tribunal (Tribunale di Bologna, sez. lav., ord. 31/12/2020) found that an algorithm used by a food delivery company was discriminatory, as ‒ to be eligible for a given delivery ‒ workers were selected on the basis of ranking parameters that indiscriminately penalised all forms of absence from work (i.e., even lawful ones, such as strikes). The Italian Council of State, in a case concerning secondary school teachers’ rankings, considered it possible to use algorithms in the evaluation procedures of the public administration, but subject to guarantees of transparency (i.e., the algorithm must be understandable with reference to its authors, the procedure used for its elaboration, and the decision mechanism, including the priorities assigned in the evaluation and decision procedure and the data selected as relevant) and of review by the courts (Consiglio di Stato, sez. VI, 08/04/2019, decision no. 2270).
5. Trade, anti-trust and competition
For a few years now, the ICA has been studying the phenomenon of big data and assessing the impact on businesses of the use of artificial intelligence, in order to update its competition and consumer protection interventions (see the 2020 Annual report on the ICA’s activity at www.agcm.it/dotcmsdoc/relazioni-annuali/relazioneannuale2019/Relazione_annuale_2020.pdf).
In its inquiry on Big Data (see above Section 3. Data), the ICA stated that, at least theoretically, machine learning mechanisms such as dynamic pricing algorithms can lead to tacit collusion between undertakings. More in detail, according to the ICA, the high transparency of online markets (i.e., the wide availability of data on competitors’ prices and other relevant information), the frequency of price adjustments (i.e., the ability of algorithms to monitor markets in real time by being able to change prices instantaneously and continuously), as well as the ability to learn optimal pricing strategies through machine learning, mean that the use of pricing algorithms has the potential to facilitate collusion. However, the ICA itself admitted that tracing similar violations of Article 101 TFEU is particularly complex, especially in case of sophisticated algorithms, characterised by machine learning mechanisms.
The ICA’s enforcement activity in relation to AI is still very limited.
5.1. AI related anti-competitive behaviour
The risk that AI mechanisms could facilitate the standardisation of competitors’ conduct was one of the concerns that led to the ICA commencing proceedings no. I844 – ANIA Anti-fraud project on 3 November 2020. The subject matter of these proceedings was the creation, by the insurance companies’ trade association (ANIA), of a platform accessible to insurance companies aimed at enabling the detection of fraudulent events, by identifying the most recurring types.
The ICA expressed its concern that this project could lead to the development of common algorithms aimed at determining homogeneous fraud risk indicators which, in turn, could have led to the standardisation of the conduct of the insurance companies.
On 21 September 2021, the ICA closed its proceedings without a finding of infringement, by accepting ANIA’s commitments. These included affording insurance companies the possibility of weighing fraud risk indicator factors differently, as well as the use of an “anomaly detection” algorithm and not a ‘self-learning’ one. According to the ICA, these measures would sufficiently mitigate the risk of conduct standardisation as described above.
On the other hand, the ICA welcomed the use of AI-based mechanisms to prevent violations of consumer protection laws. Specifically, in 2020, it accepted commitments by three marketplace operators that introduced image-matching technology and algorithms aimed at detecting listings of products related to the COVID-19 pandemic that used images or wording that could mislead consumers as to the possibility of preventing contagion, or that were characterised by price gouging (see the 2021 Annual report on the ICA’s activity at www.agcm.it/dotcmsdoc/relazioni-annuali/relazioneannuale2020/Relazione_annuale_2021.pdf).
5.2. Domestic regulation
To date, there is no specific domestic regulation in Italy focusing on the trade, anti-trust and competition law aspects of AI.
6. Domestic legislative developments
Even though there have been no major legislative developments, the establishment of a group of experts and a general national strategy, as well as of a strategic programme for 2022-2024, shows that the Italian Government is highly active in the AI field.
In particular, the strategic programme identifies:
a) six objectives, mainly related to fostering innovation and research and to developing better policies and services for citizens;
b) 11 priority sectors, i.e., industry and manufacturing, education, agri-food, culture and tourism, health and wellness, environment and networks, banking and finance and insurance, public administration, smart cities, national security and information technology; and
c) three areas of intervention: attracting talent for an AI-driven economy, increasing funds for advanced research in AI, and creating an AI-friendly environment in the public and private sectors.
7. Frequently asked questions
1. Does a specific regulation on AI exist in Italy?
No; however, general principles apply and certain AI solutions have already been scrutinised by courts and authorities.
2. Which aspects of AI have been mainly subject to enforcement in Italy?
To date, enforcement concerning AI technology has mainly involved data protection aspects. Additional enforcement activities have focused on employment issues and the use of AI in the public sector.
3. Will any AI-focused law be issued in the short term in Italy?
This is unlikely. According to the Italian government’s strategy, other non-legislative actions are envisaged to create an AI-friendly environment in Italy capable of attracting investments.