MLA, CCCC, and ChatGPT
The MLA and CCCC have formed a commission, of some sort, to develop a policy, of some sort, with regard to ChatGPT. They are asking people to take a survey. I did, and I decided to save my answers so that I could think about them some more.
Has your institution, department, or other unit proposed or developed policies about ChatGPT or other AI text generation technologies? If yes, please describe it.
Like a lot of people, I wasn’t fully prepared for ChatGPT et al. to emerge quite so quickly as a force. Luckily for me, I had already planned a semester in which active learning in the classroom would take place. To that I simply added more in-class writing assignments, essentially dodging the issue of ChatGPT for the time being. In my discussions with my students, who were more clueless than some of the folk running around proclaiming the sky was falling, I simply noted that I expected them to turn in work that was theirs alone. But I should note that at the time I had not yet fully experimented with the possibilities. Having done so, I remain fairly unconcerned: I was underwhelmed by ChatGPT’s responses to my prompts. What ChatGPT offered were the kind of competently to well-written non-answers produced by students who have been well groomed but ill educated.
Have you developed any classroom policies about students’ use of ChatGPT or other AI text generation technologies? If yes, please describe it.
I am working on a digital storytelling class for Fall 2023, and I think I am going to build at least one ChatGPT experiment into the class. Students already know the technology is out there: how can they use it for good? And by that perhaps there is also room for “their own good,” given the inequities they face. Honestly, if they wanna hijack institutional systems that already imagine them as cogs to be churned out for a labor market, who am I to blow against the wind?
What concerns, if any, do you have about use of ChatGPT and other AI text generation technologies in teaching?
My chief concern is that it will only devalue writing further, eroding the already weakened position of the humanities. Why write when you can have a bot of some kind do it for you? I think we’re going to see a host of weak scholarship and science get published in a lot of places – we’re all thinking lower-tier journals, but we may also find ourselves surprised. Given that universities have farmed out tenure and promotion decisions to journals and presses, why wouldn’t people do it?
I honestly don’t know that I would be able to tell if I encountered such an article, or book. But, again honestly, there is so much formulaic writing (and analysis) already out there getting shoved through various publication pipelines that it might as well be bots. That is, I would argue that we have already boticized ourselves. (That’s a terrible verb, but you get my meaning.)