Love at First Site
How ChatGPT’s popularity among students impacts education
On November 30, 2022, research company OpenAI released a new artificial intelligence (AI) chatbot called ChatGPT to the general public. In the four months since its release, the tool has attracted millions of users, who turn to it for everything from business to leisure.
Unfortunately, the introduction of ChatGPT has created a new issue for educators: students have been using it to write entire essays with just a few keystrokes.
“It’s definitely being abused among students for taking the place of basic writing skills,” English II teacher Courtney Pepe said. “It’s sad because it could be used for such wonderful things. Unfortunately, there are going to be groups that destroy that. The bad is going to outweigh the good.”
The model draws on patterns learned from the enormous body of text it was trained on. What distinguishes ChatGPT from earlier chatbots, however, is that it generates its responses on the fly rather than choosing from a list of pre-programmed replies. That feature, combined with a “memory” of roughly 3,000 words, means a user can prompt the model to refine and add to its previous responses.
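For readers curious what that “memory” looks like under the hood, the sketch below shows the same idea through OpenAI’s programming interface: every request re-sends the conversation so far, so each new reply can build on it. It is only a rough illustration, assuming a recent version of the openai Python library and an API key; the model name and prompts are examples, not anything from the article.

```python
# A rough sketch of how a chat model's "memory" works: every request
# re-sends the conversation so far, so the new reply can build on it.
# Assumes a recent version of the openai Python library and an API key
# in the OPENAI_API_KEY environment variable; the model name and the
# prompts below are purely illustrative.
from openai import OpenAI

client = OpenAI()

# The conversation is kept as a growing list of messages.
history = [
    {"role": "user", "content": "Explain photosynthesis in two sentences."},
]

first = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# The follow-up is answered in light of everything above. Once the
# history outgrows the model's limit (roughly 3,000 words for the
# original ChatGPT), the oldest turns fall out of its "memory."
history.append({"role": "user", "content": "Now rewrite that for a fifth grader."})
second = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
print(second.choices[0].message.content)
```

The real system is far more involved, but that basic loop of sending the history, getting a reply, and appending it is the “memory” that lets users keep refining earlier answers.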
“It’s very easy for people to get stuff done,” sophomore Callum Lucas said. “Sometimes you might need to write something quick but you don’t have the time and it can just snap, boom, you got something.”
ChatGPT raises many moral questions. According to the machine itself, plagiarism is “the act of using someone else’s work or ideas without giving proper credit to the original author.” But as one board member of Brown University’s Academic Code Committee said, this definition is much more difficult to apply when the work and ideas come from a machine.
For teachers, students using a machine to write their work is problematic regardless of whether it falls under the traditional definition of plagiarism.
“It’s no different from finding something on SparkNotes,” English III teacher Rob Bass said. “It’s the exact same thing. But what’s so insidious about this is, we have Turnitin.com and plagiarism checkpoints we can do that this machine now sidesteps [by] creating a new essay. Then it gives the student the moral quandary of, ‘Do I accept this work that’s not mine?’”
There are other problems with ChatGPT, too. Disinformation researchers say that if users word their prompts skillfully enough, the bot can churn out convincing lies – even with the safeguards the creators have in place. However, a ChatGPT user can also receive misinformation unintentionally – the machine cannot distinguish between credible and untrustworthy sources and may generate false information without warning.
“Fact-checking wise, it’s very bad,” Pepe said. “In English you go through the process of validating a source and making sure that it’s reliable and that you can duplicate the information across multiple sources. ChatGPT does not do that. It’s just going to pull any information that has similar qualities or similar language so that doesn’t mean the information used to begin with is even correct.”
To combat the rise of ChatGPT in academia, Princeton University computer science major Edward Tian created the app GPTZero. The app judges whether a passage was written by a human or by ChatGPT by assessing two factors: how complex the text is and how much its sentences vary from one another (measures Tian calls “perplexity” and “burstiness”). Though most experts agree that GPTZero is relatively accurate, it is not foolproof, and ChatGPT’s text is still sometimes mistaken for human writing.
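The second of those signals can be approximated in a few lines of code. The toy sketch below is not Tian’s actual method, just a simplified stand-in: it splits a passage into sentences and measures how much their lengths vary, on the theory that human writing tends to mix short and long sentences more than machine-generated text does.

```python
# A toy stand-in for one of GPTZero's signals: how much sentence
# structure varies across a passage. This is not GPTZero's actual code;
# it just measures the spread of sentence lengths as a rough proxy.
import re
import statistics

def sentence_length_variation(text: str) -> float:
    # Split on sentence-ending punctuation; crude, but fine for a demo.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # A higher standard deviation means the writer mixes short and long
    # sentences; very uniform lengths read as more machine-like.
    return statistics.stdev(lengths)

human_sample = ("It rained. The game dragged on for two soggy hours while the "
                "coaches argued and the infield turned to soup. Nobody was happy.")
uniform_sample = ("The rain delayed the game for two hours. The coaches argued "
                  "about the revised schedule. The players waited in the dugout.")

print(sentence_length_variation(human_sample))    # larger spread
print(sentence_length_variation(uniform_sample))  # smaller spread
```

Real detectors pair a measure like this with an estimate of how statistically predictable the text is, and even then the call is not always right, which is why GPTZero still misclassifies some passages.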
“I think it shouldn’t be a difference at all, actually,” sophomore Aadithya Dharmaraj said. “I don’t think humans are ready to face it yet. AIs are smarter than us. It’s a simple, plain fact. They’re not getting any dumber.”
For educators, the current ChatGPT situation can inspire déjà vu. In 2001, Wikipedia, a free online encyclopedia, was launched. Like the chatbots of today, the site made misinformation easy to spread: anyone, regardless of whether they were qualified, could edit a Wikipedia article to their heart’s content.
“I ran activities in my class in which I had them go on to really popular websites and edit it live time and change information,” Pepe said. “Then I’d pull it up on my computer and prove to them that just because you find the information online on Wikipedia doesn’t mean it’s actually correct information. At that time, we were doing it for fun. With ChatGPT, there’s not another person sitting on the other end of it, so it’s not that you’re getting the same type of misinformation.”
Other comparisons can be drawn across subjects, as well.
“To me, it’s closer to the advent of the pocket calculator in the 80s,” Bass said. “The legend is that math teachers were furious and it was like this technological revelation, that now kids were going to be cheating with these calculators. I think it’s better to reframe the conversation rather than just being terrified that we got these little robot essay generators. I think it’s [better to] educate students on how it’s just another tool and how best to use it.”
ChatGPT isn’t just being used for essays – it’s everywhere. And if people come to depend on chatbots to convey their ideas for them, some fear it could spell the end of human connection.
“The only way you can get [your thoughts] to everybody else is your personal expression,” Bass said. “It’s basic, simple human communication, which generates empathy. We want people to know the way we feel and think, and understand the way they feel and think. If now, your skills have dried up because since you were 15 years old you’ve been having a robot do it, we all get siloed. We’re not together, relating to one another; we’re just using predictive text to communicate.”
ChatGPT and other AI models may have drawbacks in some parts of education, but they show promise in other areas. For example, AI could help level the playing field for younger students or those learning English for the first time.
“I think most of the sentences are correctly structured, but the information is not the best,” Pepe said. “When you’re just teaching the grammar or structure, it could be used to help lower level students improve their writing because they can now mimic different types of sentence structures they may not have been exposed to.”
By having AI automate simple, repetitive tasks, humans could move on to more important work. It’s even possible that AI could write code for better versions of itself.
“You don’t even have to type in the code, it does it for you,” Dharmaraj said. “It can code other machines for you. This means programmers could skip years of work and just leave it up to the machine which does it for them. This is revolutionary.”
ChatGPT may have passed the bar exam, but there is one final milestone it and other AI models have yet to reach.
“I think machines have caught up to humans in everything except creativity,” Dharmaraj said. “That’s the last line humans have to call themselves humans. If humans lose creativity, I think we [will] have been completely defeated. The day AI has creativity is the day AI gains sentience. Then only can we fully say AI is superior to humans.”