What Are the Legal Implications of ChatGPT?
Generative AI tools like ChatGPT are capable of quickly generating human-sounding text for various purposes.
However, the surging popularity of these tools has led to concerns about the implications of their development and use. Some of these potential issues include:
- Data privacy
- Copyright infringement
- Reproduction of biases and inaccurate information
Being aware of these issues can help you use ChatGPT safely and responsibly.
ChatGPT legal issue 1: Data privacy
The widespread use of ChatGPT raises concerns about data privacy and regulation.
ChatGPT conversations
OpenAI (the developer of ChatGPT) generally stores ChatGPT conversations as training material for future models. This could potentially lead to legal issues if users input confidential or sensitive information that is later reproduced by the tool.
For example, if an employer were to use ChatGPT to draft employee contracts, it’s possible that personal data such as employees’ names, addresses, and income information could be saved as future training material.
To prevent conversations from being used in this way, users can manually disable chat history. They can also request that OpenAI delete the content of their past conversations.
Data regulation
Users should familiarize themselves with relevant laws pertaining to the use of their data.
Two important privacy regulations that OpenAI claims to adhere to are:
- CCPA (California Consumer Privacy Act), which protects the privacy rights of California residents
- GDPR (General Data Protection Regulation), which protects the privacy and data rights of individuals in the EU
Moreover, the undeclared use of these tools by businesses can have legal implications. For example, the California Bot Disclosure Law requires businesses that use generative AI tools in consumer interactions (e.g., in customer support communications) to clearly disclose this to consumers.
ChatGPT legal issue 2: Copyright infringement
ChatGPT’s training process and outputs may raise copyright issues.
Training data
ChatGPT is trained on vast quantities of internet sources. The use of some of these sources may infringe on the intellectual property rights of third parties.
EU regulators are currently developing legislation to ensure transparency about the sources used in training AI algorithms.
However, it’s unclear how such problems will ultimately be resolved. In the US, the Federal Trade Commission (FTC) has previously used a measure called “algorithmic disgorgement.” This requires companies to delete algorithms that were trained on improperly sourced data.
Ownership of outputs
It’s somewhat unclear who actually owns the copyright of ChatGPT outputs.
According to OpenAI’s terms of use, users have the right to reproduce ChatGPT outputs for any purpose, including publication (as long as the publisher allows the use of AI writing, of course).
However, ChatGPT outputs are not always unique. This could lead to potential legal issues if the same output is used commercially by different users. OpenAI’s policy doesn’t really clarify this issue. It states: “Other users may also ask similar questions and receive the same response. Responses that are requested by and generated for other users are not considered your Content.”
OpenAI also claims that users are legally responsible for the content of such outputs, meaning users may be liable if they reproduce an output that contains copyrighted material. However, it’s not clear how users can know whether this is the case, as the tool is unable to provide accurate citations.
Users should be aware of such issues and use AI-generated outputs as a source of inspiration instead of reproducing them verbatim.
ChatGPT legal issue 3: Reproducing biases/inaccurate information
ChatGPT is trained on large datasets that may contain hidden biases or inaccuracies.
The tool itself does not have the ability to understand the implications of its outputs. Despite OpenAI’s efforts to mitigate these problems, ChatGPT is not always trustworthy and still occasionally generates responses that contain discriminatory or inaccurate information.
For example, in April 2023, an Australian mayor began a defamation lawsuit against OpenAI for ChatGPT’s inaccurate claim that he was arrested and charged with bribery in 2012.
Users who publish inaccurate information generated by ChatGPT could potentially suffer reputational damage or, in extreme cases, even face libel claims.
It’s important to verify the accuracy of AI-generated responses against a credible source and to critically consider the risk of bias on any topic.
Frequently asked questions about ChatGPT
- Who owns ChatGPT?
ChatGPT is owned by OpenAI, the company that developed and released it. OpenAI is an AI research company that started as a nonprofit in 2015 and transitioned to a “capped-profit” model in 2019. Its current CEO is Sam Altman, who also co-founded the company.
In terms of who owns the content generated by ChatGPT, OpenAI states that it will not claim copyright on this content, and the terms of use state that “you can use Content for any purpose, including commercial purposes such as sale or publication.” This means that you effectively own any content you generate with ChatGPT and can use it for your own purposes.
Be cautious about how you use ChatGPT content in an academic context. University policies on AI writing are still developing, so even if you “own” the content, your university may not allow you to submit it as your own work, and journals may refuse to publish it. AI detectors may also be used to identify ChatGPT content.
- Can I publish text written by ChatGPT?
According to OpenAI’s terms of use, users have the right to reproduce text generated by ChatGPT during conversations.
However, publishing ChatGPT outputs may have legal implications, such as copyright infringement.
Users should be aware of such issues and use ChatGPT outputs as a source of inspiration instead.
- Who owns the copyright of ChatGPT outputs?
According to OpenAI’s terms of use, users have the right to use outputs from their own ChatGPT conversations for any purpose (including commercial publication).
However, users should be aware of the potential legal implications of publishing ChatGPT outputs. ChatGPT responses are not always unique: different users may receive the same response.
Furthermore, ChatGPT outputs may contain copyrighted material. Users may be liable if they reproduce such material.
2 comments
Mark Nasil
June 30, 2023 at 9:02 AM
What are the legal implications of ChatGPT relating to code generation?
Eoghan Ryan (Scribbr Team)
July 5, 2023 at 11:22 AM
Hi Mark,
Using ChatGPT to generate code may cause issues related to intellectual property. ChatGPT outputs are not unique, so other users may receive the same code in an output and have a similar claim to ownership.
It's also possible that the tool may reproduce code that is protected by copyright. Using this code may be considered copyright infringement.