ChatGPT vs. Human Editor | Proofreading Experiment

ChatGPT is a popular AI language model that can provide fluent answers to a wide range of user prompts. Given its strong language abilities, you might wonder whether it can help you improve your academic writing when asked to proofread it.

To find out, we ran an experiment where we gave the same text to ChatGPT and to a human editor, asking them to improve the language and style and to clearly explain the changes they made.

The experiment showed that both editors improved the text overall, but the human editor made more extensive and reliable changes, and only the human editor was able to properly explain their changes.

Our general conclusion is explained below, followed by our methodology and an in-depth exploration of each edit.

Note
Universities and other institutions are still developing their stances on how ChatGPT and similar tools may be used. Always follow your institution’s guidelines over any suggestions you read online. Check out our guide to current university policies on AI writing for more information.

You can also learn more about how to use AI tools responsibly on our AI writing resources page.

General conclusion

The results indicate that, while both edits certainly improved the text, the human editor performed substantially better than ChatGPT.

ChatGPT

Pros
  • Quick turnaround
  • Fixes most grammatical errors
  • Fixes some stylistic inconsistencies
  • Free

Cons
  • Interface can’t track changes directly
  • Can’t remember and explain changes
  • Can compromise accuracy of citations and quotes
  • Unable to edit much in one go
  • Consistently misses some issues

Human editor

Pros
  • Fixes all grammatical errors
  • Fixes all stylistic inconsistencies
  • More extensive improvements
  • Changes shown clearly
  • Useful explanatory comments
  • No risk of ruining your citations
  • No need to keep prompting

Cons
  • Slower turnaround
  • Not free

ChatGPT has some advantages in terms of convenience, but technical issues make the process less smooth than it could be.

In terms of quality, ChatGPT can ensure basic correctness, but the human editor is better at improving fluency, clarity, and conciseness. ChatGPT’s tendency to introduce citation errors and alter quotations is also concerning. And it seems incapable of explaining the changes it made, struggling even to remember what it changed, let alone why.

Because of this, we think ChatGPT can be a quick and cheap way to proofread your writing, but we recommend checking the results carefully for errors and using our Proofreading & Editing service for more comprehensive, higher-quality editing (or try the Scribbr Grammar Checker for a dedicated online proofreading tool).

Testing methodology

To test ChatGPT’s capabilities as a proofreader and editor, we compared it to a professional Scribbr editor, trained to edit academic texts for issues related to language, style, and consistency and provide clear, useful feedback to the author.

We created a 1,000-word testing text to give to both of them, including issues like spelling mistakes, punctuation errors, stylistic inconsistencies, and confusing phrasings. You can see the full text by opening the document below.

Open Google doc

The editor was instructed to edit the text according to the conventions of US English and 7th-edition APA Style and to follow the usual Scribbr approach of editing for language and style, not making changes to the arguments of the text.

To align ChatGPT’s approach with these guidelines, we used the following prompt:

ChatGPT prompt

Please proofread and edit the following chapter of my thesis for language issues and stylistic inconsistencies. Focus on language; don’t try to improve the argumentation or add and remove content unless doing so is necessary to resolve language and style issues. The chapter should be written in American English, in 7th edition APA Style. Present the edited text followed by a list of all the changes you made, with clear explanations of why you made each change.
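
We entered this prompt through the regular ChatGPT chat interface. If you’d rather reproduce the setup programmatically, a minimal sketch using OpenAI’s official Python library might look like the following; the model name and the proofread() function are our own illustrative assumptions, not part of the original experiment.

# Minimal sketch: sending the proofreading prompt via the OpenAI API.
# Assumes the openai package is installed and OPENAI_API_KEY is set in
# the environment; the model name below is an assumption, not the exact
# model used in our experiment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EDITING_PROMPT = (
    "Please proofread and edit the following chapter of my thesis for "
    "language issues and stylistic inconsistencies. Focus on language; "
    "don't try to improve the argumentation or add and remove content "
    "unless doing so is necessary to resolve language and style issues. "
    "The chapter should be written in American English, in 7th edition "
    "APA Style. Present the edited text followed by a list of all the "
    "changes you made, with clear explanations of why you made each change."
)

def proofread(chapter_text: str) -> str:
    """Send the prompt plus the chapter text and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; substitute the model you use
        messages=[{"role": "user", "content": f"{EDITING_PROMPT}\n\n{chapter_text}"}],
    )
    return response.choices[0].message.content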

In practice, it was necessary to follow up on ChatGPT’s answers due to technical limitations and confusion on its part. You can see the full text of our interaction with ChatGPT in the following section.

ChatGPT results

Getting ChatGPT to edit the text wasn’t a very smooth process, as it’s not really designed with this task in mind. The full text of our conversation with ChatGPT can be seen in the document below, and you can read on for an analysis of the process and results.

Open Google doc

Pros and cons

Pros
  • Quick turnaround
  • Fixes most grammatical errors
  • Fixes some stylistic inconsistencies
  • Free

Cons
  • Interface unable to track changes to the text directly
  • Extremely bad at remembering and explaining changes
  • Sometimes compromises accuracy of citations and quoted text
  • Unable to edit much text in one go; prompting process was tricky
  • Consistently misses some things, especially more advanced style and clarity issues

Presentation of text

ChatGPT’s interface can’t track changes the way Word or Google Docs can, so it wasn’t always obvious what it had changed in the text. It was also unable to edit the whole text at once, possibly due to a word limit: its answers cut off partway through the text.

When asked to “continue,” it misunderstood and started predicting how the text would continue from the point where it had stopped, rather than editing. When prompted to return to editing, it got confused for a while and started creating completely unrelated text.

We then tried prompting it again with the original text from after the point where it had stopped. It again cut off before the end, so we prompted it once more. This time, since it reached the end, it was finally able to list and explain the changes it had made (for the last part of the text), but its list was quite inaccurate, so we asked for clarification, which didn’t help much.

To get a list of changes for the whole text, we ran it through ChatGPT again in three parts, in a new chat. Despite all the obstacles encountered along the way, the whole process took only about an hour, a quick turnaround for an editing assignment.
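
Incidentally, the three-part workaround we performed by hand would be straightforward to automate via the API. Below is a rough sketch that reuses the hypothetical proofread() function from the prompt section above; the word-based splitting and the 400-word chunk size are illustrative assumptions, not something we tested.

# Naive word-count chunking to stay under response length limits.
# The 400-word chunk size is an arbitrary example value.
def split_into_chunks(text: str, max_words: int = 400) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def proofread_long_text(text: str) -> str:
    # Edit each chunk separately (mirroring our manual three-part process),
    # then stitch the edited parts back together.
    return "\n\n".join(proofread(chunk) for chunk in split_into_chunks(text))

Note that splitting on word counts can cut a sentence or paragraph in half at a chunk boundary, which may itself degrade the quality of the edits there.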

Edits

ChatGPT’s edits certainly improved the text overall, fixing both overt grammatical errors and some stylistic inconsistencies. But it wasn’t consistent across attempts, sometimes catching a given problem and sometimes missing it.

Issues that it missed most of the time or every time include:

  • Use of a hyphen in place of an em dash
  • Awkward use of “wherein”
  • Misuse of a colon at the end of an incomplete sentence
  • The APA-specific recommendation of writing words like “nonsmokers” as one closed-up word, rather than hyphenated (“non-smokers”)

Additionally, some of the changes it did make were poor. It added a comma to a piece of quoted text, which is inappropriate because it makes the quotation inaccurate. And in one attempt, it inexplicably changed the publication year of a source in a citation (from “GOLD, 2019” to “GOLD, 2022”), a change that would invalidate the citation if the author didn’t notice it.

Explanations

ChatGPT was never able to accurately describe and explain the changes it had made. It attempted to list some changes, but it always missed many of them and frequently misdescribed the ones it did list.

In fact, many of the changes it described simply hadn’t happened—it claimed to have added text that wasn’t there or to have removed text that was never in the original. Because of this, it wouldn’t be easy for the author to check that all changes were appropriate.

Even when it did accurately describe a change that genuinely improved the text, its justification for making that change was often misleading. It seems like ChatGPT has good instincts for what to change but a very limited ability to remember and justify its changes.

Human editor results

Our Scribbr editor presented the edited text according to our usual process, as a Word document with tracked changes and explanatory comments. You can download the full edited document below.

Download Word doc

Pros and cons

Pros
  • Fixes all grammatical errors
  • Fixes all stylistic inconsistencies and ensures consistency with APA Style guidelines
  • Improves concision, clarity, and fluency of text more extensively
  • Changes shown clearly and easy to accept or reject
  • Comments asking clarifying questions, making suggestions, and explaining changes
  • No risk of illogical changes to citations, quotations, etc.
  • No need to keep prompting the editor to get what you want

Cons
  • Slower turnaround (one-, three-, or seven-day deadlines, but editor may finish sooner)
  • Not free

Presentation of text

Tracking changes directly in the document is an essential part of a good editing service. If your editor rephrases an unclear sentence, you can see exactly what they’ve changed and whether it matches your intended meaning, and you can reverse the changes if not.

Additionally, the editor left comments in the document to explain changes that might be unclear, to ask the author to clarify ambiguities, and to flag heavily reworked sentences that the author should check against their intended meaning.

This ensures that the author fully understands the logic behind the changes and can learn from them for future writing. It also gives them the opportunity to double-check that the final text expresses what they meant to express.

Naturally, the human editor also didn’t encounter any of the technical issues that ChatGPT had with longer texts or with keeping track of what task it was meant to be performing.

Edits

The human editor corrected all of the basic grammatical errors, spelling mistakes, punctuation issues, and stylistic inconsistencies that ChatGPT caught. They also addressed all of the issues that ChatGPT typically missed:

  • Hyphen in place of an em dash
  • Awkward use of “wherein”
  • Misuse of a colon
  • APA recommendation about writing words like “nonsmokers”

The human editor also went beyond what ChatGPT was capable of in terms of ensuring that sentences were clear, fluent, and concise in expressing the intended meaning. They generally reworked sentences more extensively and creatively than ChatGPT, resulting in a significantly smoother final text.

Explanations

The human editor’s explanations of their changes were drastically better than ChatGPT’s. Comments were used to:

  • Educate the author about the logic behind changes that might be hard to understand
  • Ask clarifying questions and make suggestions for the author to follow up on
  • Prompt the author to check that a rephrased sentence still expresses the intended meaning

Unlike ChatGPT, the human editor obviously had no problem keeping track of what changes had been made, and their explanations of language guidelines were clear and accurate, sometimes linking to other resources to provide further explanation of an issue.

In this way, a human-edited text provides you with a much clearer and more trustworthy account of exactly what was changed and why, and it provides the opportunity to change back anything that doesn’t work for you.

Other interesting articles

If you want more tips on using AI tools, understanding plagiarism, and citing sources, make sure to check out some of our other articles with explanations, examples, and formats.

Frequently asked questions

Is ChatGPT as good as a human editor?

Based on our ChatGPT vs. human editor experiment, ChatGPT performs worse than a human editor when asked to proofread and edit a piece of academic writing.

While ChatGPT’s changes do improve the text overall, they are less consistent and comprehensive than the changes of a human editor. ChatGPT also can’t accurately list and explain the changes it has made. And it’s only able to deal with a small amount of text in one go due to word-limit issues.

ChatGPT can be useful for quickly checking the grammar of your text, but it doesn’t provide the kind of detailed feedback that a human editor can, and it sometimes inserts errors into your text that you may not notice. For a thorough edit, we recommend our Proofreading & Editing service.

Is ChatGPT reliable?

ChatGPT is an AI language model designed to provide fluent and informative responses to your prompts. It was trained on a large body of text and can therefore discuss a wide range of topics, but its answers aren’t always trustworthy.

While the tool tries to provide correct information, its responses are based on patterns in its training text, not on external facts and data. This means it can answer as though it knows something while being badly wrong.

It’s fine to use ChatGPT in your studies to explore topics in an interactive way, but you shouldn’t assume that everything it says is accurate. Always check its claims against credible sources, and never cite it as a source of factual information.

Can I have ChatGPT write my paper?

No, in general it’s not a good idea to do so. First, it’s normally considered plagiarism or academic dishonesty to represent someone else’s work as your own (even if that “someone” is an AI language model). Even if you cite ChatGPT, you’ll still be penalized unless this is specifically allowed by your university. Institutions may use AI detectors to enforce these rules.

Second, ChatGPT can recombine existing texts, but it can’t really generate new knowledge, and it lacks specialist knowledge of academic topics. It therefore can’t produce original research results, and the text it generates may contain factual errors.

However, you can usually still use ChatGPT for assignments in other ways, as a source of inspiration and feedback.

Can I cite ChatGPT?

Yes, in some contexts it may be appropriate to cite ChatGPT in your work, especially if you use it as a primary source (e.g., you’re studying the abilities of AI language models).

Some universities may also require you to cite or acknowledge it if you used it to help you in the research or writing process (e.g., to help you develop research questions). Check your institution’s guidelines.

Since ChatGPT isn’t always trustworthy and isn’t a credible source, you should not cite it as a source of factual information.

In APA Style, you can cite a ChatGPT response as a personal communication, since the answers it gave you are not retrievable for other users. Cite it like this in the text: (ChatGPT, personal communication, February 11, 2023).

Can I create citations using ChatGPT?

No, you shouldn’t use ChatGPT to cite your sources. You can ask it to create citations, but it isn’t designed for this task and tends to make up sources that don’t exist or present information in the wrong format. ChatGPT also cannot add citations to direct quotes in your text.

Instead, use a tool designed for this purpose, like the Scribbr Citation Generator.

But you can use ChatGPT for assignments in other ways, to provide inspiration, feedback, and general writing advice.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Caulfield, J. (2023, August 15). ChatGPT vs. Human Editor | Proofreading Experiment. Scribbr. Retrieved November 3, 2023, from https://www.scribbr.com/ai-tools/chatgpt-vs-human-editor/

Jack Caulfield

Jack is a Brit based in Amsterdam, with an MA in comparative literature. He writes for Scribbr about his specialist topics: grammar, linguistics, citations, and plagiarism. In his spare time, he reads a lot of books.

