Students and teachers at New York City schools no longer have access to OpenAI’s text generation language model ChatGPT, following fears that it may “spell the end of high school English”.
As reported by Chalkbeat, Jenna Lyle, a spokesperson for NYC’s Department of Education, said that “negative impacts on student learning, and concerns about the accuracy and safety of content” drove the ban.
Simply put, the local education authority is worried that students will use ChatGPT to write their graded work for them, leaving them less likely to engage with the material and making it harder for graders to tell AI-generated work apart from work written entirely by a human.
ChatGPT and the ‘threat’ to education
Educators are also worried about the risk of ChatGPT offering up incorrect information to students, but this may be less problematic than the potential for AI to produce offensive and racist content.
On that basis alone, it makes sense for ChatGPT’s output to be filtered. However, the argument that ChatGPT is single-handedly destroying the high school humanities, as one teacher suggested in The Atlantic in December 2022, could be somewhat hyperbolic.
It’s true that ChatGPT makes a mockery of how the humanities are currently run, making short work of the strict, formulaic ways that students are taught to write at both the high school and undergraduate level.
However, to say that “high school English”, or the humanities at large, is under serious threat assumes that there is only one way of teaching those subjects, and vastly overestimates their current value to students.
If high school students are so unmotivated by the subject in front of them that they would rather let AI writers do the work than engage with it, that should ring alarm bells for educators: not that their system is crumbling, but that it was never fit for purpose in the first place.
Robert Pondiscio, a Senior Fellow for the American Enterprise Institute (AEI), argued in an op-ed in December 2022 that the purpose of high school education is to attain “language proficiency”, rather than to grapple with knowledge. He claims that AI’s threat to education is overblown, because AI produces work that students could never fully comprehend, “let alone pass them off as their own work”.
In short, he believes that AI-produced work is not fit for purpose in an educational setting.
In the same month, Peter Greene, another English teacher, argued much the same thing in Forbes, suggesting that teachers run their assignment prompts through ChatGPT, and if its response is a good, credible piece of work, then the assignment should be “refined, reworked, or simply scrapped.”
A stubborn unwillingness to adapt how students are taught, paired with fear-mongering around AI, is likely to take hold in higher education, too.
If undergraduate students prefer to use AI rather than engage with their ideas, perhaps higher education institutions should consider whether they offer anything of value, when the structures for grappling with knowledge and ideas can be sidestepped by a readily available language model.
The fact that students are willing to do so, despite having bought into the process, suggests that higher education has become a stagnant job mill, with students there simply to collect the piece of paper at the end. It does show that, but nobody at the top wants to listen.