Artificial intelligence has become a prominent topic of conversation on high school and college campuses. Some view the use of AI-generated text as cheating or cutting corners, while others are more welcoming to the new technology.
“Human beings are so limited,” says Stew Fortier, the CEO and founder of Type.ai, a tech company that has created an AI tool for professional writers. Fortier, a University of Virginia graduate and one of the few tech developers to find success with an AI program, recently raised approximately $3 million to fund the AI-powered document editor he created in 2022. Like similar generative AI programs, it can organize information, write first drafts and summarize hefty topics — and it’s accessible to anyone with a smartphone or computer.
Chatbots like ChatGPT and Type.ai are programs developed to support “people with writing skills [while] being able to leverage these tools and do better work,” Fortier says.
AI can’t generate its own opinions or new concepts, but it can collect and organize information from countless sources in response to a user’s prompt. ChatGPT, for example, produces text from whatever a user types, writing anything from a poem to computer code, and assembles its responses from patterns in data gathered across the internet.
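For readers curious what that exchange looks like in practice, the short sketch below sends a prompt to a chat-style language model through a programming interface and prints the generated reply. It is an illustration only, not a description of how Type.ai or any particular product works: it assumes the publicly available OpenAI Python client, and the model name and prompt are placeholders.

```python
# Illustrative sketch only: a prompt goes in, generated text comes out.
# Assumes the OpenAI Python client (pip install openai) and an API key
# stored in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": "Write a four-line poem about the start of a semester.",
        }
    ],
)

print(response.choices[0].message.content)  # the model's generated text
```

The pattern is the same whether the interface is a chat window or a few lines of code: the user supplies a prompt, and the model assembles a response from patterns it learned during training.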
Type is similar but focuses on writing bodies of text from prompts supplied by the user. According to Fortier, his target audience is anyone who writes content to grow a business or an audience, though the tool serves many purposes.
Before developing his chatbot, Fortier was captivated by an early release of the GPT-3 engine. “My mind was completely blown. It was absolutely unbelievable to me that this AI, this language model, could produce human-like text. It was fully readable and grammatically correct. The ideas were coherent. It could talk about a wide range of topics.”
Stew Fortier, founder and CEO of Type.ai (Photo courtesy Stew Fortier)
Assessing Risk
That very capacity and AI’s potential impact on society have many people concerned. In higher education, the picture at the moment is mixed.
The efficiency and breadth of knowledge of these artificial intelligence engines offer real benefits. On college campuses, chatbots might be put to good use brainstorming ideas for discussion topics, outlining essays and developing new writing prompts.
There are many ways students can misuse the programs, as well. They can copy and paste the details of an assignment, such as a discussion board post, into a program and generate a response without ever putting pen to paper. Using AI as a homework helper not only compromises academic integrity; it can also undermine learning, and even trained, sophisticated AI language models can produce flawed outputs.
L. Andrew Bell, Ph.D., a technology consultant and operations manager at the University of Richmond, says he saw an uptick in the use of ChatGPT at the end of the spring 2023 semester — and that it’s not necessarily a bad thing. “It’s a tool that is going to be ubiquitously used across campus,” Bell says. “I’m trying to figure out how to incorporate it into my pedagogy.” Bell has used AI to generate sample essays for his students to critique.
According to Bell, the majority of student use is allowed; he sees most students using it to manage workload rather than to flat-out cheat. He notes that AI tools can help students work quickly and efficiently, not unlike a calculator. “With the advent of the calculator, it reduced a certain barrier to accomplishing certain mathematical tasks,” Bell says. “I think the same could be said about these generative-text AIs, where we can leverage them to get a more sophisticated argument faster.”
In Virginia Schools
Colleges and universities around the state are approaching this novel problem in various ways. The consensus is that, while ChatGPT and similar programs can easily aid students in academic dishonesty by completing written assignments for them, an outright rejection of the technology would be just as harmful as the perceived threat.
Earlier this year, a panel at Virginia Commonwealth University discussed the role AI can play in the learning environment, centering on some of the common gripes about AI programs in academia: plagiarism, faculty grading and student assignments. Even so, the panel unanimously agreed that ChatGPT should not be banned in classrooms and that instructors should instead devote time to studying the program and how to incorporate it into their curricula.
Milos Manic, Ph.D., is a professor of computer science and director of the Cybersecurity Center at VCU. He teaches a course on artificial intelligence for computer science students, incorporating the social, ethical and legal aspects of AI. Although he’s been teaching similar courses for about 15 years, Manic says students began showing more interest in the legal and ethical implications three or four years ago.
“Tools like ChatGPT and others have become so overwhelmingly available and easy to use,” Manic says. “Now, [students have] started waking up — better late than never — and realizing the challenges. … They’re starting to get concerned.”
Plagiarism and cheating are not new issues, he adds, so universities should try to embrace AI tools rather than attempt to control their presence.
The University of Richmond encourages faculty to determine a personal policy for the use of AI-generated text in student assignments and suggests they communicate that policy and their expectations for students in their syllabi.
Micah Voraritskul, founder of VerifiedHuman (Photo courtesy Micah Voraritskul)
Looking Forward
AI in education is not necessarily a problem that needs to be mitigated, according to Micah Voraritskul, the founder of VerifiedHuman and a former Richmond resident. He believes the key is being able to reliably distinguish AI-generated work from human work, and then teaching students to use the tools ethically and effectively.
VerifiedHuman provides standards that writers, artists and musicians can sign and adopt, verifying their creative process and work as having been completed by a human. The standards are a trust-based alternative to legislative or technological methods of AI detection. “There’s something about the human contribution here that matters, right?” Voraritskul says. “The value of that human experience coming through in the work matters.”
In terms of education, AI programs such as ChatGPT could become a “massive problem” if left unchecked, Voraritskul says. “Billions of students across the planet are being subverted in their ability to think and articulate what they think; 90% of learning is assessed in writing, in every language. So, if teachers can’t tell that their students are engaging with the material, and coming up with their own original thinking, then how can they be sure their students are learning anything and not just plugging a prompt into a GPT language model?”
Voraritskul says the wide accessibility of chatbots and language models could have a dramatic impact on students’ critical thinking skills. “Imagine an entire generation of students who can’t think critically on their own,” he says. “Sure, they can write a GPT prompt, but they don’t do the work to weigh and counter-balance all of the options in the problem. … To me, that’s way scarier, because what does the world look like in 20 years when you have an entire generation of people that only know how to write great GPT prompts?”
Instead, Voraritskul hopes schools will adopt VerifiedHuman’s standards, work with the company to validate students’ work and teach students how to use the tools to spur greater success. “We can’t be afraid of AI. We have to learn to teach students how to use it ethically,” he says. “It needs to be a tool that we show them how to use. If we’re giving them a writing prompt, we’re working with them to put that prompt in ChatGPT, see what it gives them, if the outline has any validity to it. Ethically, we’ll be able to create greater work than they ever could have without it.”