Can LLMs reliably mark responses to exam questions?
Thematic Community
Online
In this workshop, we'll investigate how Large Language Models (LLMs) can support teachers in providing rapid, detailed feedback on student examination responses. Through live demonstrations across multiple subjects and question types, we'll evaluate the reliability, consistency and usefulness of AI-generated assessments.
Key Focus Areas:
- Examining AI's ability to accurately mark responses to examination questions of varying complexity
- Analysing the quality and actionability of AI-generated feedback
- Exploring practical ways teachers can use LLMs to give students more opportunities to apply and practise their knowledge
We will use a mix of AI tools, including chatbots such as ChatGPT, as interfaces to the underlying LLMs; a small illustrative sketch of this kind of marking workflow follows below.
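To make the workflow concrete, the sketch below (not part of the workshop materials) sends a question, mark scheme, and student answer to an LLM through the OpenAI API and repeats the request a few times to get a rough sense of marking consistency. The model name, prompt wording, and example question are illustrative placeholders, not the exact setup we will demonstrate.

```python
# Minimal sketch: asking an LLM to mark one exam answer against a mark scheme.
# Assumes the `openai` Python package (>= 1.0) and an OPENAI_API_KEY environment
# variable; the model, question, and mark scheme below are placeholders.
from openai import OpenAI

client = OpenAI()

QUESTION = "Explain why the boiling point of water decreases at high altitude. [3 marks]"
MARK_SCHEME = (
    "1 mark: atmospheric pressure is lower at altitude. "
    "1 mark: a liquid boils when its vapour pressure equals atmospheric pressure. "
    "1 mark: so less heating is needed before the water boils."
)
STUDENT_ANSWER = "There is less air pressure up a mountain so water boils sooner."


def mark_response(question: str, mark_scheme: str, answer: str) -> str:
    """Ask the model to mark one answer against the mark scheme and justify the mark."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model could be tried
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an examiner. Award marks strictly according to the "
                    "mark scheme. Reply in the form 'Mark: X/3' followed by brief, "
                    "actionable feedback for the student."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Question: {question}\n"
                    f"Mark scheme: {mark_scheme}\n"
                    f"Student answer: {answer}"
                ),
            },
        ],
    )
    return completion.choices[0].message.content


# Re-running the identical request gives a rough consistency check:
# the same answer should attract the same mark on every attempt.
for attempt in range(3):
    print(f"--- attempt {attempt + 1} ---")
    print(mark_response(QUESTION, MARK_SCHEME, STUDENT_ANSWER))
```

Comparing the marks and feedback across repeated runs, and against a teacher's own marking, is one simple way to probe the reliability questions the workshop will explore.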