Generative AI: Helpful or Harmful?
I learned about ChatGPT later than most of my peers. My first encounter with it happened in the first class of my first semester at Tufts, Philosophy of Biology, in the spring of 2023. It was one of the few times I had felt particularly old at twenty-three, since most of my undergraduate classmates had already heard of, played with, and formed opinions on the publicly available software. Tufts had yet to release an official policy on student use of ChatGPT, so the professor put forth his own, heavily discouraging the use of generative AI in the highly essay-based class.
I’d say this was the general impression most of us had at that point: that ChatGPT was a writing tool that could pump out a difficult essay in a pinch. The expectation was that it would make passing classes easier while making learning worse, since students wouldn’t actually do any of the work; essentially, that it was pure, plain cheating. Now, as I approach my last semester at Tufts, I want to look back at those first impressions, see how students like me have actually used ChatGPT in our academic lives, and consider whether it has been more of a help or a hindrance to our education.
As a Biology master’s student, I use ChatGPT very differently from graduate students in other programs. Despite it being a language generator, I have used ChatGPT for everything but writing. It has helped me get my footing by rewording and summarizing major concepts from classes. As a full-time researcher who can’t always attend daytime office hours, I found it a lifesaver in a class that was a bit above my coding pay grade, where it helped me interpret error messages and suggested fixes. As it turned out, my computer simply wasn’t capable of running the code efficiently, and I had to add code to lighten the computational load. Other students have claimed to use it for data processing, something I find alarming, since ChatGPT doesn’t actually compute anything; it only predicts plausible text. Ask it to count the number of “A’s” in a block of text, and you are liable to get the wrong answer.
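To make that last point concrete, here is a minimal sketch in Python (my assumption for the kind of language a biology student would reach for; the sample sentence is made up) showing why the contrast worries me: counting letters takes a few lines of exact, deterministic code, while a language model is only predicting plausible-sounding text and may confidently miscount.

    # Counting letters is trivial and exact in ordinary code; no language model needed.
    text = "Ask it to count the number of A's in a block of text."

    # Case-insensitive count of the letter "a"
    count = sum(1 for ch in text if ch.lower() == "a")
    print(f"The letter 'a' appears {count} times in the sample text.")

The point is not that anyone should hand-write code for every small task, but that a tool which merely generates text should not be trusted with work that has a single right numerical answer.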
This brings me to my intended point: students in programs that don’t usually interface with much computer science would greatly benefit from a high-level, conceptual course or seminar on what to expect from generative AI. Some professors in my classes have already found ways to build this awareness into coursework. For example, a fantastic exercise in my Advanced Genetics class had students compare AI-generated summaries of a scientific paper against our own understanding of it, pointing out inaccuracies, missing information, and AI “hallucinations.” My general conclusion? Take everything with more than a few grains of salt, and treat AI as if you were asking a less experienced peer what they knew about a given topic.
As previously stated, my perspective on the matter is limited. With all the discourse about AI in academics floating around campus, I think it would be interesting to supplement my musings with data. If you are a graduate student, undergraduate, or other member of the Tufts community who has found ChatGPT useful, or an obstacle to learning, please let me know how here.