If you’re still remotely skeptical about whether the explosive developments in AI will affect higher education, a recent academic study by researchers from Princeton, New York University and the University of Pennsylvania may finally change your mind.
“How will Language Modelers like ChatGPT Affect Occupations and Industries?” found that of the 20 occupations most exposed to AI language-modeling capabilities, 14 are categories of postsecondary teachers.
The authors, Ed Felten, Manav Raj and Robert Seamans, specify that their definition of “exposure” does not delve into the specific effects of AI on an occupation, leaving open the question of whether these jobs will be augmented or, as you may fear, substituted.
“Everyone’s focusing on how students are going to use AI, but there’s not a lot of focus on how it’s going to impact educators, the teachers themselves,” says Joseph Wilson, co-founder of Studicata, an online bar exam preparation service. For context, law teachers were the fifth-most exposed occupation to AI.
To create the formula for this study, the authors drew on one of Felten’s earlier models of overall occupational exposure to AI. That AI Occupational Exposure (AIOE) model linked 10 AI applications – such as reading comprehension, speech recognition and language modeling – to 52 human capabilities defined in a U.S. Department of Labor database. As advances in language-modeling capabilities like ChatGPT surged (there’s already a better version), the new study updated the AIOE to account only for AI’s language-modeling application.
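To make the mechanics concrete, here is a minimal sketch of how an AIOE-style score could be computed. This is not the authors’ actual code, weights or data; the capability names and all numbers below are made-up placeholders used purely to illustrate the idea of an importance-weighted exposure score.

```python
# Hypothetical relatedness of a few human capabilities to the
# language-modeling application, on a 0-1 scale (made-up numbers).
lm_relatedness = {
    "written_comprehension": 0.9,
    "oral_expression": 0.7,
    "manual_dexterity": 0.0,
}

# Hypothetical importance of each capability to two occupations (0-1 scale).
occupations = {
    "postsecondary_teacher": {
        "written_comprehension": 0.9,
        "oral_expression": 0.9,
        "manual_dexterity": 0.1,
    },
    "dancer": {
        "written_comprehension": 0.2,
        "oral_expression": 0.4,
        "manual_dexterity": 0.9,
    },
}

def exposure(capability_importance, relatedness):
    """Importance-weighted average of capability-to-AI relatedness."""
    total_weight = sum(capability_importance.values())
    weighted = sum(capability_importance[c] * relatedness[c]
                   for c in capability_importance)
    return weighted / total_weight

scores = {occ: exposure(caps, lm_relatedness)
          for occ, caps in occupations.items()}
# An occupation built on language-heavy capabilities scores far higher
# than one built on physical capabilities.
```

Under this toy setup, the teacher’s score lands well above the dancer’s, which is the intuition behind postsecondary teachers dominating the language-modeling-adjusted list.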
While only two postsecondary occupations featured in the top 20 of Felten’s original AIOE calculation, postsecondary teaching occupations make up 70% of the list once it is adjusted for language modeling. This doesn’t appear to be a coincidence, either: Felten et al. found a 98% correlation between the new scores and the original AIOE scores.
[Table: Top 20 occupations after language-modeling adjustment, out of 774 occupations]
As axiomatic as this study makes AI’s impending prevalence in higher education seem, professors are reluctant to take the leap forward. According to a report from BestColleges, more than half of all college students surveyed reported that “their instructors have not openly discussed the use of AI tools like ChatGPT” (54%) and “that their instructors or schools haven’t specified how to use AI tools ethically or responsibly” (60%). A considerable number of faculty and staff are actively working against the wave of AI language modeling: 31% of students reported course materials and honor codes that explicitly prohibit AI tools. That resistance sits uneasily beside the 9% of school assignments and college essays that Copyleaks flagged for containing AI-generated content in the tens of thousands of documents it examined from U.S. public research universities in January and February.
Wilson believes it would be a “disservice” to students to resist updating the way we teach the new generation if we focus only on the fear of cheating with AI tools. He credits Studicata’s online contact with students via YouTube with keeping his team “on the pulse” of strategies that effectively engage students.
“The first step is to communicate with students what is acceptable use and then from there, it’s time to start building a framework. Are we allowing it? What’s okay? What’s not?” he says. “We have to mitigate cheating, but at the same time we have to teach familiarity of these tools because the future is going to be tech-enhanced lawyers.”
Wilson points to recent developments in AI-assisted legal-brief drafting software and research platforms as tools many first-year lawyers will encounter.
He also notes that Microsoft’s and Google’s recent decisions to integrate AI tools into Office and Google Workspace will make student and teacher use of AI inevitable. “If that’s the way your technology is going, you would have to adapt,” says Wilson. “They have to build and articulate these guidelines.”