All generative AI systems are human in the loop systems

The widespread adoption of generative AI tools like ChatGPT might seem like an impending crisis of authenticity, a threat of undetected cheating. But I think the bigger challenge is about short-circuiting our awareness of how we make meaning in the world.

[Image: Medieval hamster in the loop system]

It's summertime, and in higher ed it looks like there are a lot of workshops and trainings on generative AI, well-intentioned and meant to get faculty up to speed on whatever it was that slammed into consciousness this past year. I know a lot of educators are finally catching a breath, but they're looking ahead with concern about how things like ChatGPT will make it hard to distinguish real student work from AI-assisted or wholly AI-generated work. That sounds like a crisis of authenticity, a threat of undetected cheating, but I think it's something else.

Rather than framing the risk as one around cheating, I think a better framing might be in terms of consent and meaning. Generative AI can be powerful in part because it creates a human in the loop system where we can interact, refine, and adjust; but there is a flip side, where the basic functioning of these tools requires that we lend them some of our cognitive capacity, often without being conscious of or consenting to what is going on.

Generated content has no inherent meaningfulness; that is only born after the fact, when a human views it and applies the human filter of experience and sense-making and language and image and communication, which sees the output of a series of calculations as meaningful rather than hallucinatory nonsense. For any of these systems to work, humans are required as observers, because right now only humans can make meaning.

Everything is a human in the loop system because without humans, all that generated content is meaningless.

Students can use generative AI to stand in for themselves in a teacher-student loop. We call that cheating. And it doesn't require AI. You could have your roommate or a parent write the assignment. Or, as commonly happens, you might get others to help on some bits. Depending on the context, that can be ok or that can be verboten. It's a grey area that typically gets negotiated for each class. Instructors set parameters for how that sort of thing should go, parameters defined in part by pedagogical goals. If it is pedagogically useful for students to get outside help, then such things are encouraged. If it is harmful to student learning to have others help them, then such things are discouraged.

But it works both ways. Teachers can also use generative AI, either to stand in for their voice with comments and feedback, or to handle the data of student submissions in different ways. They could, at the most basic level, get summaries of what students are discussing. Or they could use generative AI to refashion and anonymize student responses in ways that serve other pedagogical aims.

We're used to the idea that teachers can do with student work what they will, but in this context it is helpful to see this as just the flip side of a power relationship that rests on shared authenticity and mutual meaning-finding. If students cheat, it may be because they don't find meaning or value in the assignment (yes, typically a short-sighted sort of view, but not illegitimate per se). Or it may be that students don't see the point of the skill being taught (again, perhaps short-sighted). Teachers may distrust what they get from students and therefore, to root out the subset of bad actors, they impose a form of shortcut on their end, trying to batch process results in a way that lets them still do their job amidst the constraints of time and resources. Ultimately that back and forth isn't new at all, even if the tools evolve. That matters because we already have frameworks for thinking about this. How do you set clear goals and expectations for students? How do students help themselves and achieve their goals? Presumably most students don't intend to become frauds and cheats for life. I think it is good to assume in general that students do in fact want to learn and grow, even if they also want to find easier paths and not waste time on things they find less meaningful.

Meaningful. Let me underline that. Teaching should be meaningful. Learning should be meaningful. If it isn't, if we haven't plugged into that, then it's no surprise that generative AI looks like a threat. Rather than focusing on technological capabilities or meeting the threat of generative AI with the technological wonder of anti-ChatGPT tools or AI detectors (though this is not to say those don't have a place), there's something we can do right now by focusing on the human side of things. What is meaningful here? What matters to students about learning? What matters to instructors about learning? Focus on communicating that more clearly than ever before. That is something well within the skill set of any teacher, more so than having to learn x, y, or z aspect of the technology flavor of the week.

Framing this in terms of human in the loop systems should make clear that the pressing concern isn't cheating per se. That's one use case with particular contours. This is ultimately about how students and instructors communicate. It's about the level beyond expectation setting. How do we communicate the why of what we're doing? And how do we listen to the legitimate wants, needs, and meaning-making of learners?

Related: This recent piece is well worth a read for its case that what is going on in these systems leverages known aspects of cons: https://luttig.substack.com/p/hallucinations-in-ai