Why AI chatbots make bad teachers - and how teachers can exploit that weakness
In the nearly three years since OpenAI's ChatGPT burst onto the scene, the use of artificial intelligence has invaded not only daily work and leisure but also the field of education. The Pew Research Center reports that a quarter of US adults consult a bot to learn something, up from 8% in 2023 -- almost as many as use them for work. Interestingly, the share rises with one's level of education.
That surge in usage has put teachers and professors in a quandary over students' propensity to use bots to get answers rather than think through problems deeply. Pew reports that many teachers fear a crisis in education.
A flawed approach
The scholarly journal Daedalus took a somewhat more nuanced view. In last year's issue on trends in education, it concluded that "We cannot predict how education will be affected in the long term by large language models and other AI-supported tools, but they hold the possibility to both promote and distort current approaches to teaching and learning."
So, what are educators -- and AI developers -- to do if the world has found a way around traditional education? OpenAI's answer to the question is a new feature introduced to ChatGPT last week, Study Mode, which my colleague Sabrina Ortiz explored. As Sabrina relates, Study Mode will respond to a prompt with a plan of study and ask questions about goals.
But in my experience, Study Mode adds very little to my efforts to learn a language. The differences from plain old ChatGPT are minor, and I had to work repeatedly to steer Study Mode in the right direction. The lesson here is clear: As with all language models, you only get out of it what you put into it.
A teacher's role
The problem lies not with the bot itself but with its limitations as a substitute for human instruction. ChatGPT in Study Mode has been shaped to conform to the most common approaches to a subject. All language models tend to stay within what is likely, or highly probable, from moment to moment, which may be fine for reviewing material before a test but is not stimulating for someone trying to learn.
That is one of the reasons ChatGPT and other bots regularly ace the kinds of standardized tests on which US students increasingly fail: The program has mastered the routine, the regurgitation of rote information. It's pretty clear that the bot lacks a higher-level understanding of what it means to teach someone, namely, what educators call a curriculum.
A new approach
In a good education, after all, one ends up with more questions than answers. Of course, we bot users aren't experts in curriculum, either. That's why we tend to offer the same prompts over and over: "Explain to me…", "Tell me the reason why…", and "Help me learn X."
As users, we get stuck when we don't know what to ask next. In the past year of on-and-off study, I haven't used ChatGPT as regularly as I should have if I really wanted to learn a language. As the novelty wore off, so did my resolve.
A message for educators
There's a clear message for OpenAI and other AI developers here: Both Study Mode and plain ChatGPT are shaped too much toward producing a kind of common ground in the typical exchange, with no real sense of how to bring a student to the point of asking questions that open up their desire to learn more.
There's little innovation here, just a lot of rote lesson plans. There's also a message for stressed-out teachers and professors: It's natural for people to reach for answers. If students are going to go to bots for answers, and they certainly are, then the right approach is probably to help them find ways to draw more questions out of the bot rather than playing cop and trying to prevent them from using bots.
A new curriculum
Why not flip things around? Why not help students push the bot to the point where a topic has become complex enough that the bot comes back with more and more questions rather than simply handing down answers like an authority?
Think of that as The New Curriculum, or, with a nod to today's programming methods, "Vibe Pedagogy," a way to hack the bot to get to something more interesting, more stimulating.
The future of education
Education is only at risk from AI if the teacher is assumed to be the final authority. If, instead, education is viewed as finding out just how much there is to know, how many open questions there are in a field of study, then there is no danger in students using technology to open up more and more questions.
Let's learn about the machine along the way -- its strengths and its limitations. As mentioned above, the bots have aced the standardized exams. There is no point in forcing humans to endure dull regurgitation of facts. Much better would be to stimulate curiosity and question-asking, which humans still do better than bots.