ChatGPT in practice: the principal legal issues

ChatGPT is an AI tool that has recently become immensely popular and a hot topic among lawyers. It and similar systems are quickly becoming commonplace in commercial practice, which also gives rise to serious legal concerns and risks. Even now, the risks connected with using such tools in a business, and with employees and contractors using these systems, can be mitigated using specific legal instruments.

ChatGPT is a system that generates text and responds to prompts in natural-sounding language. The versatility of ChatGPT and similar systems means they can be used in many fields, such as IT (including software development), marketing, journalism and science. At the same time, they entail notable legal risks.

Firstly, there is the important question of intellectual property. ChatGPT and similar tools trained through deep learning make use of large databases containing material that may be protected by copyright or other intellectual property rights. This raises serious concerns as to whether AI operators might infringe the rights of the authors of the protected material on which these systems base their “learning”. This has led to the first litigation and to action by lawmakers in various jurisdictions, including the EU. For now, the legal risk in this respect lies primarily with AI system suppliers.

Secondly, there is considerable doubt as to the status of content generated by ChatGPT and similar systems, mainly whether it constitutes a work protected under copyright law. If it does, the immediate legal question is who owns the rights to that work. This area is not currently covered by any legislation and remains very unclear. For this reason, these issues need to be addressed in agreements with the suppliers of these systems and, above all, in agreements with contractors.

Data protection, especially protection of personal data, is another crucial issue. ChatGPT generates responses using the data entered by users, which may include personal data, business secrets or privileged information. It is therefore important to implement appropriate security measures and the corresponding policies, such as anonymizing data before entry, so that feeding data into ChatGPT does not breach the law or non-disclosure obligations.

The third crucial issue is errors in content generated by ChatGPT and similar tools, and supplier liability. Although ChatGPT works by analyzing enormous amounts of data, there remains a risk of errors or inaccuracies in the generated responses, which could cause measurable loss if the content is used in business. It is therefore important to bear in mind that suppliers of such systems explicitly stipulate that they are not liable for loss caused by using them.

In summary, ChatGPT and similar systems can be highly beneficial tools, but they also entail legal risks. These risks need to be examined and the appropriate precautions taken. A crucial element is entering into the relevant agreements with business counterparts, particularly contractors, and introducing procedures and policies on the use of artificial intelligence. Even today, this will protect an organization’s interests and ensure greater safety when using these new and extraordinarily useful tools in day-to-day activities.

Please visit our website at https://www.traple.pl/en/ and contact us at [email protected] for further information on our professional services.