CHATBOT E WATERMARK

09/02/2023

The democratization of chatbots (ChatGPT above all) calls for a method of recognizing texts generated by an Artificial Intelligence. One hypothesis is watermarking, which would also protect copyright.


These days even the media have discovered what insiders have known for some time: AI is already here, and it will change the world. One of the most discussed issues is the risk of plagiarism by students, who have already begun using AI chatbots to prepare exam answers or to craft essays. The risk of students cheating with AI is certainly high, and identifying when this occurs seems very difficult. It has therefore been proposed to use watermarking technology, a kind of electronic mark consisting of a set of information, visible or hidden, placed within a file so that the file remains freely accessible but permanently marked. In other words, watermarking would make it possible to distinguish what was actually created by a human being from what is the product of artificial intelligence.
The discipline of watermarking can be traced back to the rules on copyright and trademarks. Copyright law (Art. 102quater and 102quinquies, Law 633/41) provides that holders of copyright and related rights may affix technological protection measures, that is, electronic rights-management information identifying the protected work or material, as well as the author or any other rights holder, to the protected work or material. This information may also indicate the terms or conditions of use of the work or materials, and may include any number or code representing the information itself or other identifying elements. The unlawful removal or alteration of such electronic information, or the distribution or dissemination of goods from which it has been removed or altered, constitutes a crime punishable by imprisonment of six months to three years and a fine (Art. 171ter, Law 633/41).
The Industrial Property Code (Legislative Decree 30/2005) then protects trademarks by, on the one hand, preventing the manufacturer’s trademark from being suppressed or altered and, on the other hand, civilly and criminally sanctioning those who alter or counterfeit a trademark (Art. 20 I.P.C. and Art. 473 Penal Code). The rules governing digital marks and their infringement have therefore long existed in our legal system and constitute a safeguard against the violation of intellectual property rights. Given their nature and function (preventing the unauthorized use of intellectual property rights), however, it is open to question whether the proposal to affix a watermark or digital mark to AI results is systematically acceptable, and whether it is actually suitable for solving the problems posed in practice by AI, including students’ attempts to cheat on exams.

The first problem is that watermarking in AI could present implementation difficulties, since it would work in reverse to what happens in intellectual property. There, the watermark is affixed to what is authentic, so that, by difference, goods without a watermark can be identified as counterfeit. In AI this could not occur: one could say with certainty that a document was made by AI when a watermark is affixed to it, but in the mirror case of a document without a watermark one could not be certain that it is a creation of the human intellect, since there may be AIs without watermarks whose results could circulate without its being possible to distinguish them from those made by a human being. Moreover, at present no rule requires that AI results be watermarked, and even if such a rule were introduced, it could conceivably be circumvented by parties interested in passing off AI results as products of human ingenuity. Additionally, no current technology can distinguish the one from the other through intrinsic file analysis, since artificial intelligence was born precisely to make products indistinguishable from those of the human intellect. It is enough to recall that this technology is premised on passing the so-called “Turing test,” or “Imitation Game,” in which it is not possible to tell whether the answers to an interlocutor’s questions are provided by a human being or a machine.
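The asymmetry described above can be illustrated with a toy sketch. This is a hypothetical example only, not an actual watermarking standard: the zero-width mark and the function names are invented for illustration, and real AI-text watermarks (such as statistical token-bias schemes) are far more sophisticated.

```python
# Toy illustration of the detection asymmetry: a watermark, when present,
# proves AI provenance; its absence proves nothing.

ZW_MARK = "\u200b\u200c\u200b"  # invisible zero-width character sequence

def embed_watermark(text: str) -> str:
    """Append the invisible mark to AI-generated text."""
    return text + ZW_MARK

def has_watermark(text: str) -> bool:
    """True means 'marked as AI'. False is inconclusive: the text may
    come from an unmarked AI, or the mark may have been stripped
    (e.g. by retyping or Unicode normalization)."""
    return ZW_MARK in text

marked = embed_watermark("This essay was generated by a chatbot.")
print(has_watermark(marked))                       # True
print(has_watermark("A human wrote this essay."))  # False, but inconclusive
```

The second call returns False for human text, yet it would return exactly the same result for output from an AI that simply never embedded the mark, which is the core of the problem discussed above.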

The second problem is that at present the interest in watermarking AI results appears, in general, nonexistent if not counterproductive, since making such a declaration entails the loss of exclusive rights. According to the current majority opinion, at least in Western countries, AI results cannot be protected through intellectual property systems (such as copyright or patents) when they are not made with the intellectual contribution of human beings. Intellectual property requires that the creative or inventive activity be carried out by a natural person, and that exclusive rights be originally acquired at the time of creation by the human being who is the author or inventor of the intangible good. In this sense, among others, the U.S. Copyright Office (decision of Feb. 14, 2022, https://www.copyright.gov/rulings-filings/review-board/docs/a-recent-entrance-to-paradise.pdf) and the European Patent Office (decision of Dec. 21, 2021, https://www.epo.org/law-practice/case-law-appeals/communications/2021/20211221.html) refused to grant copyright or patent protection to results realized automatically by AI, without creative or inventive intervention by human beings. For this reason, it is believed that to date many works of authorship or inventions actually made by AI are not declared as such but are instead attributed to human beings in order to obtain statutory protection for authorship and inventions. It is presumable, therefore, that affixing a watermark declaring that an intangible good originates from an AI is scarcely desired by those who intend to benefit from the exploitation of the good through the exclusive rights provided by intellectual property. In any case, it seems clear that a watermark used in the context of AI cannot be equated with the watermark normally used as a digital mark of authenticity affixed to intellectual works.
In the latter case, in fact, the rights holder has a clear advantage in affixing the watermark to all of his or her works, since this makes it possible to identify counterfeit goods (those without a watermark) on the market and then take action to have them removed and to obtain compensation for damages. In AI, in many cases exactly the opposite would happen: the AI owner would have an interest in making its works indistinguishable from those made by human beings.

However, the proposal to watermark AI results could have other aspects of interest, related to the desirability of the public being adequately informed of the provenance and nature of products on the market, so that consumers are also in a position to make more informed choices. In this view, the individual right of the AI holder should be balanced against the public interest, and the result of that balancing could be an obligation on the AI holder always to make it knowable to the community when a product is the result of AI, just as happens in certain areas that regulate the labeling of goods on the market. Indeed, under the Consumer Code (Art. 5 ff.), consumers have the right to receive essential information, particularly on the safety, composition and quality of products and services. The provenance of a result from AI could be such information, concerning the quality of the product or service, or even, under certain circumstances, its safety.

On the other hand, the system could also be rewarding for the AI holder, since providing AI-related information to the consumer could also be a pro-competitive factor when linked to the assurance of certain quality and lawfulness characteristics of the AI. Indeed, the watermark could perform the functions of a “certification mark,” attesting to the presence in the product of certain characteristics, which could relate to the process by which AI results are realized, their quality, or other relevant features. A certification mark is registered and managed by accredited and neutral institutions, authorities and bodies, which prepare regulations of use and lay down the conditions of use of the mark, as well as the manner of verification and surveillance. In this sense one could think, for example, of watermarks certifying the process of data acquisition by the AI, so as to ensure that the data are authentic and verified rather than fake news. It would also be possible to certify the process by which outputs are produced, so that the mechanisms through which the AI operates are known and can be shown to meet certain basic requirements (e.g., completeness and predictability, or compliance with parameters of inclusion and nondiscrimination). Such watermarking would present not only ethical and social advantages but also, trivially and directly, economic ones, since a specific market could emerge precisely for those AI products that meet particular quality characteristics, and thus are able to attract a qualified audience, for example consumers who care that the news they read is verified and reliable rather than fake.
In this context, watermarking could also act as a stimulus for AI developers to move toward transparent AI systems, in which the mechanisms, processes, and algorithms underlying the system are “in the clear” and therefore knowable by the public. This issue goes to the core of AI: its transparency and controllability. One of the most significant questions regarding the development of AI is the risk that it may tend to be protected through commercial and industrial secrecy, under which a technical solution may obtain exclusive-right protection, potentially unlimited in time, provided that it has economic value in itself and is not known to the public in its precise configuration. However, protection through secrecy, while abstractly suitable from the individualistic standpoint of the AI creator’s interest, could lead to undesirable results for the community, because it could render the principles on which the AI is based unknowable and conceal possible biases or errors afflicting the AI. It is precisely for this reason that both the Supreme Court (Judgment 14381/2021) and the Council of State (Judgment 881/2020) have recently ruled that, at least in certain cases, the collective interest should prevail over the protection of the AI, excluding the application of trade-secret protection. For example, in public competitions for the awarding of positions of various types (from teaching to judicial roles), the selection process should be transparent and comprehensible to all; thus, when an AI is used, it must be based on publicly knowable and comprehensible mechanisms and cannot, for that very reason, be kept secret.

Another possible interesting use of watermarking concerns the works used to feed and train AI. A number of recent court cases have brought to the fore situations in which it was alleged that the AI had reached a high level of expertise abusively, through the use of massive databases of prior works, all of which had been subjected to unauthorized acts of reproduction and processing (see the action commenced in January 2023 by Getty Images before the London courts against Stability AI, https://newsroom.gettyimages.com/en/getty-images/getty-images-statement). If it were technologically possible to make a watermark that AI cannot eliminate, it would also accomplish the goal of preventing at least certain types of misuse, forcing AI developers to pay for the exploitation of the intellectual property by which they are able to build AI.

In conclusion, what seems necessary is to identify principles through which to govern the development of AI. History teaches us that technology cannot be stopped, and in any case this would perhaps not be a desirable outcome in the long run, although at first the effects of the technology on society may certainly be negative (one thinks of the likely massive loss of jobs due to the implementation of AI in many sectors, including the intellectual professions, journalism, music and video). What is appropriate and necessary, however, is to ask how the technology should evolve, and to ensure that it does so in ways that respect the basic principles of our legal systems and society. In other words, it must be ensured that AI systems remain knowable and transparent, so that widespread control can be exercised over how they operate, and so that any “bias” in them can be promptly identified and just as promptly corrected.


MS. SIMONA LAVAGNINI SPEAKER AT THE WEBINAR “LA TUTELA DELLA PI NELL’ERA DI IMPRESA 4.0”

17/11/2022

On December 5, 2022, from 14:30 to 17:00, the webinar “La tutela della PI nell’era di Impresa 4.0” will be held, organized by “Punto Impresa Digitale” and the Turin provincial anti-counterfeiting committee in cooperation with INDICAM. Ms. Simona Lavagnini, founding partner of LGV Avvocati, will take part as a speaker.


The webinar will focus on the new challenges that digitalization poses to Intellectual Property rights and will be an opportunity for professionals in the field to discuss the relationship between NFTs, Artificial Intelligence and Copyright law following the implementation of Directive 2019/790, as well as the outlook for 2023.
Ms. Simona Lavagnini will speak about the growing role of AGCOM and its new powers of regulation and intervention following the implementation of the European directive.
Professional credits will be available from the National Bar Association and the Order of IP Consultants.
For more information about the webinar and the series of meetings, please refer to the following link: https://www.to.camcom.it/20221205-IP-IA.


MS. SIMONA LAVAGNINI SPEAKER AT THE WEBINAR “L’ITALIA E LA DIRETTIVA DIRITTO D’AUTORE NEL MERCATO UNICO DIGITALE: LUCI E OMBRE DEL D. LGS. 177/2019” ORGANIZED BY ALAI ITALIA

07/11/2022

On November 8, 2022, from 18:00 to 19:00, the webinar “L’Italia e la direttiva diritto d’autore nel mercato unico digitale: luci e ombre del d. lgs. 177/2019” will be held, the first in a series of seminars organized by ALAI Italia. Ms. Simona Lavagnini, founding partner of LGV Avvocati, will take part as a speaker.


The webinar, the first in the seminar series “La trasposizione direttiva (UE) 2019/790 – il diritto d’autore nel mercato unico digitale” organized by ALAI Italia, will provide an overview of how the new rules of the European directive fit into the copyright system and will be an opportunity to hear experts’ thoughts on the matter and the expectations of stakeholders from each sector involved.
Ms. Simona Lavagnini will speak about the growing role of AGCOM and its new powers of regulation and intervention following the implementation of the European directive.
For more information about the webinar and the series of meetings, please refer to the following link: http://www.alai-italia.it/.


MS SIMONA LAVAGNINI SPEAKER AT THE EVENT “DIGITAL SINGLE MARKET AND ARTIFICIAL INTELLIGENCE: ETHICS AND LAW IN DIGITAL TRANSITION” ORGANIZED BY HOFFMANN EITLE S.R.L.

17/10/2022

On October 28, 2022, the conference “Digital Single Market And Artificial Intelligence: Ethics And Law In Digital Transition” will be held, organized by Hoffmann Eitle S.r.l. and AIPPI, with Simona Lavagnini, founding partner of LGV Avvocati, as a speaker.


The conference will be divided into four sessions and will concern the topic of Artificial Intelligence and its technological and economic implications, with a focus on ethical, legal and procedural rules. Ms. Simona Lavagnini will participate in the panel discussion on the impacts of AI on the intellectual property system.
The event will take place at the offices of Hoffmann Eitle S.r.l. in Milan, starting at 9:30 a.m.
For more information about the event and to get accredited please refer to the following link.


SIMONA LAVAGNINI AND ALESSANDRO BURA IN THE FIRST EPISODE OF INDICAM’S IPonSUMMER PODCAST

4/08/2022

LGV Avvocati keeps you company even during the summer holidays. Simona Lavagnini and Alessandro Bura took part in the first episode of INDICAM ON AIR’s summer podcast #IPonSUMMER, dedicated to trending topics and legislative news related to the world of intellectual property.


In the first episode of the summer podcast series “IPonSUMMER” promoted by INDICAM, Simona Lavagnini, founding partner of LGV Avvocati, and Alessandro Bura offered some interesting observations on the case Louboutin v. Amazon Europe and, in particular, on the Opinion issued on the matter by the Advocate General of the Court of Justice of the European Union. The central issue discussed is whether the operator of an e-commerce platform is directly liable for the sale by third parties of counterfeit goods on that platform, and whether the operator actually makes “use” of the trademark within the meaning of Article 9(2) of EU Regulation No. 2017/1001.