I want to see your mind grappling with issues on the page, I told a student this morning, but in the back of my mind echoed a statement from last week's training on Artificial Intelligence in the classroom: We need to teach students how to use AI responsibly. The gap between those statements is a place of anguish.
If it were up to me, my students would never resort to AI--and they wouldn't plagiarize either, or read online summaries instead of actual books, or sit through an entire semester without taking a single note in class. They would love to read and write, and they would use their writing assignments as a place to take risks, play with words, and engage with ideas, drawing on their reading to support claims originating in their own heads, not manufactured by a machine.
I'm dreaming, of course. If the AI detectors can be believed, I've had very few students resorting to AI in my classes this semester, but I frequently see papers parroting back inane ideas gleaned from online summaries, with a few quotes tossed in as proof-texts.
At last week's training I was reminded that it's unrealistic to expect my students to express truly original thoughts. They are, after all, college students with little experience in literary analysis, so anything that strikes them as original has probably already been said better by someone else. The most we can expect, then, is for students to make creative use of whatever unoriginal thoughts they come up with.
And I'm okay with that. They may not have read enough to know whether their ideas are entirely original, but at the very least I want them to make an analytical claim, to stake out a space for their own interpretation based on support from the text. A mind at work on the page--is that so much to ask?
But the recent AI training session pushed for something different. To teach students to use AI responsibly, I should outline parameters of acceptable use: allow them to use AI to develop an outline, for instance, and then have them fill in the blanks. I agree that such an approach might be useful in some contexts, but it defies the most basic advice I give students who are learning to analyze literature: Always start with the text itself. Don't approach it with preconceived notions; instead, choose a passage that interests you and dig into it until you understand why. Take it apart and examine the pieces, then put them back together and see how they work. Let the text inspire the points of the analysis.
Starting with an AI-generated outline will only encourage proof-texting--approaching the text to locate quotes to support a preconceived idea. This method tends to produce superficial readings that pay little attention to metaphor, structure, imagery, or how form and content work together to create meaning. This may be the best I can hope for from some students, but how can I get students to dig deeply into a text when they're looking only for what matters to a machine?
I want to see a mind at work, but too often I see the results of machine thinking, which, to my mind, isn't very interesting.
2 comments:
The professors of our required core writing-intensive course have a new assignment this year in which they have students ask AI to write a paper on a specific topic and then critique it without using AI. So far it has been working as hoped--students are shocked at how the AI-written paper is often flat-out wrong.
I have seen assignments like that and I would definitely use one in a composition class, in which we spend significant time on information literacy. I had hoped, however, not to have to do that in a literature class. I know, I'm dreaming.