While grading essays for his world religions course last month, Antony Aumann, a philosophy professor at Northern Michigan University, read what he said was easily “the best paper in the class.” The essay explored the morality of burqa bans with clean paragraphs, fitting examples, and rigorous arguments.
A red flag instantly went up.
Mr. Aumann confronted his student about whether he had written the essay himself. The student confessed to using ChatGPT, a chatbot that provides information, explains concepts, and generates ideas in simple sentences, and that, in this case, had written the paper.
Alarmed by his discovery, Mr. Aumann decided to transform essay writing for his courses this semester. He plans to require students to write first drafts in the classroom, using browsers that monitor and restrict computer activity. In subsequent drafts, students will have to explain each revision. Mr. Aumann, who may forgo essays in later semesters, also plans to weave ChatGPT into lessons by asking students to evaluate the chatbot’s responses.
“What’s going to happen in class is no longer, ‘Here are some questions, let’s talk about it amongst us humans,’” he said, but “it’s like, ‘What’s this alien robot thinking, too?’”
Across the country, university professors like Mr. Aumann, department heads, and administrators are beginning to revamp classrooms in response to ChatGPT, sparking a potentially huge change in teaching and learning. Some professors are completely redesigning their courses, making changes that include more oral exams, group work, and handwritten rather than typed assessments.
The moves are part of a real-time fight with a new wave of technology known as generative artificial intelligence. ChatGPT, which was launched in November by the OpenAI artificial intelligence lab, is at the forefront of change. The chatbot generates eerily articulate and nuanced text in response to brief prompts, and people use it to write love letters, poetry, fan fiction, and their school assignments.
That has affected some middle and high schools, with teachers and administrators trying to discern whether students are using the chatbot to do their schoolwork. Some public school systems, including in New York City and Seattle, have since banned the tool on school Wi-Fi networks and devices to prevent cheating, though students can easily find workarounds to access ChatGPT.
In higher education, colleges and universities have been reluctant to ban the artificial intelligence tool because administrators doubt the measure is effective and don’t want to infringe on academic freedom. That means the way people teach is changing.
“We try to institute general policies that certainly support a faculty member’s authority to lead a class,” rather than focus on specific methods of cheating, said Joe Glover, president of the University of Florida. “This will not be the last innovation we will have to deal with.”
That’s especially true while generative AI is still in its infancy. OpenAI is expected to release another tool soon, GPT-4, which is better at generating text than previous versions. Google has built LaMDA, a rival chatbot, and Microsoft is discussing a $10 billion investment in OpenAI. Silicon Valley startups including Stability AI and Character.AI are also working on generative AI tools.
An OpenAI spokeswoman said the lab recognized that its programs could be used to trick people and was developing technology to help people identify text generated by ChatGPT.
At many universities, ChatGPT has now jumped to the top of the agenda. Administrators are setting up working groups and hosting university-wide discussions to respond to the tool, with much of the guidance being to adapt to the technology.
At schools like George Washington University in Washington, DC, Rutgers University in New Brunswick, NJ, and Appalachian State University in Boone, NC, teachers are phasing out open-book, take-home assignments, which became a dominant method of assessment during the pandemic but now seem vulnerable to chatbots. Instead, they are opting for in-class assignments, handwritten papers, group work, and oral exams.
Gone are prompts like “write five pages about this or that.” Instead, some teachers are crafting questions they hope are too smart for chatbots, asking students to write about their own lives and current events.
Students are “plagiarizing this because the assignments can be plagiarized,” said Sid Dobrin, chair of the English department at the University of Florida.
Frederick Luis Aldama, a professor of humanities at the University of Texas at Austin, said he planned to teach newer or specialized texts that ChatGPT might have less information about, such as William Shakespeare’s early sonnets instead of “A Midsummer Night’s Dream.”
The chatbot can motivate “people who lean towards canonical primary texts to reach beyond their comfort zones for things that are offline,” he said.
Should the changes fall short of preventing plagiarism, Mr. Aldama and other professors said they plan to institute stricter standards for what they expect of students and how they grade. Now it is not enough for an essay to have just a thesis, an introduction, supporting paragraphs, and a conclusion.
“We need to up our game,” Aldama said. “The imagination, creativity, and innovation of analysis that we generally deem an A paper needs to trickle down to the B-range papers.”
Universities also aim to educate students on new AI tools. The University at Buffalo in New York and Furman University in Greenville, South Carolina, said they planned to incorporate a discussion of AI tools into required courses that teach incoming or first-year students about concepts like academic integrity.
“We need to add a scenario around this, so students can see a concrete example,” said Kelly Ahuna, who heads the office of academic integrity at the University at Buffalo. “We want to prevent things from happening instead of catching them when they happen.”
Other universities are trying to draw boundaries for AI. Washington University in St. Louis and the University of Vermont in Burlington are drafting revisions to their academic integrity policies so that their definitions of plagiarism include generative AI.
John Dyer, vice president of enrollment services and educational technologies at Dallas Theological Seminary, said the language in his seminary’s honor code felt “a bit archaic anyway.” He plans to update the definition of plagiarism to include: “using text written by a text-generation system as one’s own (for example, entering a prompt into an artificial intelligence tool and using the result in a paper).”
The misuse of AI tools will most likely not end, so some professors and universities said they planned to use detectors to stamp out such activity. Plagiarism detection service Turnitin said it would add more features to identify AI, including ChatGPT, this year.
More than 6,000 professors from Harvard University, Yale University, the University of Rhode Island, and others have also signed up to use GPTZero, a program that promises to quickly detect AI-generated text, said Edward Tian, its creator and a senior at Princeton University.
Some students see value in adopting AI tools for learning. Lizzie Shackney, 27, a student at the University of Pennsylvania School of Law and School of Design, started using ChatGPT to brainstorm for papers and debug coding problem sets.
“There are disciplines that want you to share and don’t want you to spin your wheels,” she said, describing her computer science and statistics classes. “The place where my brain is useful is understanding what the code means.”
But she has qualms. ChatGPT, Ms. Shackney said, sometimes incorrectly explains ideas and miscites sources. The University of Pennsylvania also hasn’t instituted any regulations on the tool, so she doesn’t want to rely on it in case the school bans it or considers it cheating, she said.
Other students have no such scruples, sharing on forums like Reddit that they have submitted assignments written and solved by ChatGPT, sometimes for other students as well. On TikTok, the #chatgpt hashtag has more than 578 million views, with people sharing videos of the tool writing papers and solving coding problems.
One video shows a student copying a multiple-choice test and pasting it into the tool with the caption, “I don’t know about you, but I just want Chat GPT to take my finals. Have fun studying.”