ChatGPT (and Other Generative AI Tools) at NC State
ChatGPT and other generative artificial intelligence (AI) tools have been growing rapidly. This edition of the OFE newsletter is meant to introduce our new website on Navigating AI (created in collaboration with the Data Science Academy and other campus partners), to provide faculty with a frame of reference for how to think about AI in relation to their teaching and research, and to create awareness regarding campus resources.
How does generative AI work?
ChatGPT, like most of the AI systems in the news for their recent and rapid evolution, is a large language model (LLM). In essence, it is software that has been trained on billions or trillions of pages of text (articles, books, and large portions of the Internet), which makes it very good at guessing a plausible answer to almost any question. It builds its responses incrementally, predicting the next likely word given the sentence so far, and the next likely sentence given the one it just produced. In other words, it is not "thinking" as it gives each answer. A good analogy is a toddler who can mimic words but has no idea what they actually mean. Even worse, the system, like a toddler, may outright invent ("hallucinate") facts, names, quotes, and book, journal, and article titles, and sound confident as it does so.
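For readers who want to see the "predict the next word" idea in action, here is a deliberately tiny sketch in Python. It is a toy bigram model, not how ChatGPT actually works (real LLMs use neural networks trained on vastly more data), but it illustrates the same principle: the program has no understanding of the text; it only continues with words that statistically tend to follow the previous one. The sample corpus is invented for illustration.

```python
import random
from collections import defaultdict

# A toy corpus; real models train on billions of pages, not one sentence.
corpus = (
    "the model predicts the next word given the words so far "
    "the model does not think it predicts the next word"
).split()

# Record which words have followed each word in the corpus.
followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = followers.get(words[-1])
        if not options:          # no known continuation: stop
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output reads as superficially fluent text, yet nothing in the program "knows" what any word means, which is exactly the point of the toddler analogy above.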
Are NC State students using AI?
The simple answer is yes. In the history of the Internet and smartphone apps, ChatGPT raced to 100 million users faster than any other web application by far. Early indications are that 80% or more of college students have tried ChatGPT at least once, and that close to half use it or other AI applications regularly, including on classroom assignments. Generative AI is also quickly being integrated into a variety of tools students use on a regular basis.
So, are all my students expert users of generative AI?
Likely not. While almost all students have dabbled with ChatGPT at least a little, their experience with it and similar products varies widely. Even among those who have tried it, a good portion will know to avoid using it inappropriately on class assignments, so it is best not to assume the worst of students. At the same time, it is not safe to assign advanced AI-related work as if every student has already mastered the basics, because not all of them have. If you assign students to create AI output, it is better to "assume nothing" and provide detailed assignment directions that accommodate a broad spectrum of experience. Visit the website on Navigating AI for information on how to manage the use of AI in your courses, and for sample syllabus language that might be useful.
Where can I learn more about educational uses of ChatGPT and other generative AI?
The Office for Faculty Excellence (OFE) is offering faculty a three-part professional development workshop series on AI, held online via Zoom this fall and entitled AI Insights. You can register at this Reporter link. Participants will complete three sessions covering topics from basic to advanced uses of generative AI in the classroom and will receive a certificate and a digital badge upon completing the series.
Also, stay tuned for ongoing AI-related workshops and events sponsored by the newly relaunched Campus Writing and Speaking Program (CWSP) [Co-Directors Kirsti Cole <firstname.lastname@example.org> in the Department of English and Roy Schwartzman <email@example.com> in the Department of Communication], which now operates through OFE.
How do I detect student use of AI if they try to cheat?
There are numerous free AI detectors on the Internet if you are willing to paste in text, and Turnitin in Moodle includes a built-in detector that provides an AI score the same way it provides an originality score. However, the numerical output of Turnitin's AI detector should be used with great caution. These detectors generate false positives (a student using Grammarly the way others use spell check may be flagged, for instance) as well as false negatives. There are numerous documented cases of human-written text being flagged as 100% AI, and of the reverse: AI-generated text scored as 0% AI. Since neither a score of 0% nor a score of 100% is trustworthy, there is no useful "cutoff" point that can serve as evidence that text is AI-generated. Running text through multiple AI detectors can compensate somewhat for the limitations of any one detection tool.
Faculty should, however, consider their instincts about the work submitted. Does the submission include a wildly and incongruously wrong fact, especially one that should be common knowledge? Is the work "voiceless" or lacking personality, or perhaps written in a tone more like a 50-year-old compliance lawyer's than an undergraduate's? Are many of the submitted papers in this class very similar to each other in specific evidence (if not tone)? Most of all, do you have a pen-and-paper writing sample collected earlier that shows each student's writing style and acuity? Be aware, however, that even years of experience do not make you perfect at detecting AI-generated text. Some students really do write in a way that mimics current AI output. Be open to the possibility that a student is an exception, and consider a plan that gives students a chance to prove themselves.
The Office of Student Conduct provides resources for promoting academic integrity and addressing academic misconduct in your courses. A link to their policies can be found on the Navigating AI page.
Is it wishful thinking to try to ignore or avoid AI? Is AI inevitable?
We know that AI is here to stay. This is not a passing fad. Once calculators became inexpensive and ubiquitous, K-12 teachers and college professors had to find ways to adapt their teaching and their homework practice to the new reality, and this situation is no different. In fact, it’s patently obvious that AI will not only remain, but become better over time (perhaps quite rapidly, if this first year is any indication), which implies the AI revolution will be much broader than students cheating on tests and essays.
As such, it is important to train our students for their future careers, which almost certainly WILL include AI. To give our students the best chance in future workplaces, we need to make them as "AI fluent" as we can. This will require shifting from a mentality of avoiding and detecting AI to one of helping students use and create with AI. One key baseline expectation is acculturating students to always add value to AI output. It is not enough for them to become experts at prompt engineering, though that will be necessary too. They must learn that to thrive in future workplaces, they will need to demonstrate that they contributed significantly to the software's output, and in so doing show that their skill set makes them valuable as employees.
Thus, we should begin thinking about assignments that not only encourage AI use, but actually require it. A general rule of thumb is to ask students to generate a specific type of AI output, then do something with it: revise it, augment it, critique it, etc. Encourage students not only to add value to the AI output, but to be able to recognize that doing so is a foundational skill they must always prioritize. We should also encourage students to use AI for brainstorming, as this type of assistance will surely become even more advanced, refined, and ubiquitous in the years to come.
The OFE Navigating AI website has many more ideas for assignments in the age of AI, as well as links to teaching resources for online classes from DELTA.
How can I convince students to use AI only in the ways I prescribe?
Your most valuable tool in this effort is transparency. Students are most likely to turn to cheating when they don’t sense that the assignment has enough value for them to expend time and effort. If you explain WHY this assignment exists and what benefits to their future careers it could bring, they are more likely to approach the assignment in the spirit you intended.
It can also help to remind students before large assignments that you are attempting to train their minds for their future careers, and any attempt to shortcut that training (such as using AI to write an essay) is fundamentally cheating their future selves. Acknowledge that it may sound like a cliché, but emphasize that students really are cheating themselves if they take shortcuts. Workplaces are not going to want employees who just know how to use AI. They are going to want employees who know how to use artificial intelligence AND add value to the AI output with their human experience, knowledge, and skills.
Where can I go for questions or more assistance?
The Office for Faculty Excellence (OFE) stands ready to answer faculty questions about ChatGPT and other AI. Faculty are invited to email firstname.lastname@example.org for help. Questions about student disciplinary options should go to the Office of Student Conduct (email@example.com).