AI can help fix student evaluations
With large language models providing reports and analysis, educators can use generative AI to improve the process of student evaluations, writes Adnan Ajšić
Standard end-of-semester student evaluations of teaching are widely recognised as flawed. As some universities begin to phase them out, partly because of the potential legal liability they represent, an opportunity has emerged to make tweaks, bring cutting-edge technology on board and reimagine the process of faculty evaluation.
It is not difficult to understand why such a flawed instrument has been used so widely for so long. Beyond their usefulness as a political instrument, student evaluations are simple and cheap to administer, and they enable easy (if flawed) comparisons. Although change is now under way, most institutions will not follow suit for some time, particularly because alternatives such as peer and portfolio evaluations are not without problems of their own.
What is wrong with student evaluations?
Of course, we might also want to retain student evaluations. The problem is not the concept of student evaluations itself but rather how the information is elicited and used. Universities typically neglect to train students in how to do evaluations, then poll them at the end of the semester in a way that often encourages the venting of frustration and is of little value to either faculty or the students themselves. This leads to poor outcomes for everyone: faculty work is misevaluated, administrators are compelled to act on bad information, and students miss out on the benefits of improvements to the quality of instruction.
Arguably, we need a better way of doing student evaluations. However, to have any hope of wide adoption, this new method would have to be equally simple and convenient while curtailing problems and improving outcomes.
Elicit a different kind of evaluation mid-semester
If we want student evaluations to provide meaningful, actionable feedback that can result in real-time improvements to the quality of instruction, we must change how and when we conduct them. Rather than asking a series of closed questions about the course and the instructor towards the end of the semester, we should encourage open-ended self-reflection on the course experience at mid-semester. This can be achieved via a simple prompt that asks students to reflect on any expectations they had before the semester, their experience of the course so far, and any changes to their knowledge or abilities as a result of taking it. Further questions should encourage students to examine their own investment in the course, which areas have worked for them up to that point (or not), and why. Finally, the prompt should invite students to consider whether they would do anything differently if they had to start over, and what those things would be.
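A reflection prompt along these lines might read as follows. The wording is illustrative only, not the exact instrument described above:

```
Before the semester began, what did you expect from this course?
How has your experience of the course compared with those expectations?
What can you do, or what do you understand, now that you could not before?
How would you describe your own investment in the course so far?
Which parts of the course have worked well for you, which have not, and why?
If you could start the semester over, what would you do differently?
```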
In my own experimentation with this approach, response rates have been high, students typically respond thoughtfully, and the results are more useful and just as easy to collect anonymously (via an online learning platform such as iLearn or any of the many other accessible digital tools). The additional benefit is the opportunity for students to hold themselves accountable and thereby arrive at a more objective evaluation of their course experience, one that combines formative assessment and self-assessment.
If I obtain this information at mid-semester, I still have time to think about my teaching and introduce changes to try to improve the course.
Employ a large language model to process the data
This type of student evaluation is easy to collect and aggregate into a small corpus, but it is discursive and potentially time-consuming to analyse. However, AI technology such as large language models (LLMs), including ChatGPT, means this task can be accomplished easily and quickly. All we need to do is upload our corpus of (anonymous or anonymised) student evaluations and write a suitable prompt for analysis. The art of prompt engineering is still developing, and what kind of prompt we want ultimately depends on what kind of information and value we want to extract from the evaluations. But prompts are easy to write, test and adapt for specific contexts and purposes, and they can be standardised for use in formal administrative procedures.
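For those who would rather script this step than paste responses into a chat interface, a minimal sketch in Python using the OpenAI SDK might look like the following. The file name, model choice and prompt wording are assumptions for illustration, not part of any established procedure:

```python
# Minimal sketch: send a corpus of anonymised evaluations to an LLM
# for thematic analysis. Assumes the openai package (v1+) is installed
# and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical file: anonymised student responses, one per paragraph.
with open("evaluations_anonymised.txt", encoding="utf-8") as f:
    corpus = f.read()

ANALYSIS_PROMPT = (
    "You are assisting an instructor with anonymous mid-semester course "
    "reflections. Identify recurring themes, note what students say is "
    "and is not working for them, and distinguish comments about the "
    "course from comments about students' own investment in it."
)

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would do
    messages=[
        {"role": "system", "content": ANALYSIS_PROMPT},
        {"role": "user", "content": corpus},
    ],
)

print(completion.choices[0].message.content)
```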
Generate an executive summary-style report
To make this method of eliciting and reporting student evaluations of teaching as convenient as the one now in use, the output of the automated analysis should be well structured, reasonably concise and easy to peruse. This, too, is a matter of including the proper instructions in the prompt. Free-to-use LLMs can conduct a variety of objective qualitative and quantitative analyses and produce executive summary-style reports automatically and quickly, although the quality and reliability of the analysis, as well as the consistency of the output, are better with a paid subscription. Again, in my own experimentation, I have found it useful to generate a one-page executive summary structured around analytical categories that emerged from the automated analysis (though this, too, could be modified according to need).
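As an illustration, the report-structuring portion of an analysis prompt might look something like this. The headings below are placeholders, since the useful categories should emerge from your own data:

```
Produce a one-page executive summary of the analysis, structured as:
1. Overview (two or three sentences on overall response patterns)
2. Recurring themes (bulleted, with approximate frequency)
3. What is working and what is not (paired, with brief evidence)
4. Students' self-reported investment and learning
5. Suggested mid-semester adjustments (concise and actionable)
```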
This process requires careful analysis of institutional needs, as well as time for experimentation and testing, on the front end. But once the prompts have been formulated and tested, it is straightforward and largely automated. This way of doing student evaluations is also flexible: it can supplement existing methods or replace them, and it can be used by faculty themselves, as part of peer review, or by administrators. Importantly, it retains the students’ voice while generating better information that helps everyone.
Adnan Ajšić is associate professor in the department of English in the College of Arts and Sciences at the American University of Sharjah, UAE.