At the risk of feeding the beast, I decided to try out ChatGPT by asking it to generate writing examples with APA style errors. I hoped to see whether it could produce curricular materials for a project I’m currently working on.
It is wrong. On several levels.
My prompt was: “Write a sentence with APA errors in style, not errors in citation.”
The goal here was to explore how AI would manage a stylebook that many people believe is an objective citation/reference system but that, in fact, lays out fairly nuanced guidelines for clarity, guidelines that often stymie students. The guidelines are valuable and important for social science writers. They make sense but can seem complicated.
My nonsentient buddy responded:
According to Johnson (2020), eating healthy and exercising regularly is the key to maintaining one’s physical health (APA error: incorrect use of in-text citation format).
AI error: Incorrect everything.
To begin, the citation is actually quite correct. But that’s not what I asked. It is the opposite of what I asked. Perhaps I was too vague. Perhaps there is a better way for me to trick the electrons. Perhaps I need to develop a more effective prompt technique.
As a teacherly person, I felt a chill.
I tried a different approach:
Correct the following sentences for APA style. The study asked participants to describe their lived experiences (Thompson 2023). (Jackson, 2022) followed a similar approach.
ChatGPT responded:
In APA style, the correct format for citing references is as follows:
For a single author:
Corrected sentence: The study asked participants to describe their lived experiences (Thompson, 2023).
For two authors:
Corrected sentence: (Jackson & Smith, 2022) followed a similar approach.
For three or more authors:
If it's the first citation: (Johnson, Lee, & Brown, 2021)
For subsequent citations: (Johnson et al., 2021)
Again, not what I asked (it did fix a comma problem). But what the heck:
The use of parentheses around “(Jackson & Smith)” is perplexing to me. It seems as though my chat buddy could not differentiate between a parenthetical citation at the end of a sentence and a narrative citation, in which the author’s name works as part of the sentence itself: Jackson (2022) followed a similar approach.
Its advice regarding three or more authors is a flat-out error: the current 7th edition, which has been around since 2019, calls for “et al.” from the very first citation. The mistake was probably brought about because most articles the chatbot found were written in the older 6th edition.
In the grand scheme of things, this is probably not such a huge deal. However, a lot of a college English professor’s job is helping students unlearn incorrect or ineffective practices. Given the great promise—or promise of promise—associated with AI in current culture, what happens to my students when it lies to them?
AI very well could make lives easier and generate (a word I use cautiously) amazing things. Accuracy still matters. So does nuance. Calamus Writing and Editing is built on the premise that written communication is innately human. When I make an error, it is my responsibility, and that responsibility prompts me to learn. If AI errs, it is not accountable because it is code. It can learn, but what it learns is up to the masses of data it sifts through, categorizes, and organizes, without critical thinking or ethics.
I don’t think I’m a Luddite, but I’m going to keep on writing and editing with my eyeballs.