Can LLMs reliably mark responses to exam questions?

When: 21 Jan 2025, 16:00 to 17:00
Organised by Becci Peters
Hosted by CAS AI Community
Community Type: Thematic Community

Event Type: Online

In this workshop, we'll investigate how Large Language Models (LLMs) can support teachers in providing rapid, detailed feedback on student examination responses. Through live demonstrations across multiple subjects and question types, we'll evaluate the reliability, consistency and usefulness of AI-generated assessments.

Key Focus Areas:

  • Examining AI's ability to accurately mark responses to examination questions of varying complexity
  • Analysing the quality and actionability of AI-generated feedback
  • Exploring practical ways teachers can leverage LLMs to increase opportunities for knowledge application and practice

We will use a mix of AI tools, including chatbots such as ChatGPT, as interfaces to LLMs.
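As a rough illustration of the kind of workflow the session will demonstrate live, the sketch below shows how an LLM could be prompted through an API to mark a short answer against a mark scheme. This is not taken from the event itself: the OpenAI Python client, the model name, and the example question, mark scheme and answer are all placeholder assumptions.

```python
# Minimal sketch: asking an LLM to mark a short exam answer against a mark scheme.
# Assumes the OpenAI Python client and an OPENAI_API_KEY set in the environment;
# the model name and all exam content are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

question = "Explain why a binary search requires a sorted list. [2 marks]"
mark_scheme = (
    "1 mark: binary search compares the target with the middle element; "
    "1 mark: discarding half the list is only valid if the list is ordered."
)
student_answer = "It keeps cutting the list in half, so the items must be in order."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are an examiner. Award marks strictly against the mark scheme "
                "and give one sentence of feedback per mark point."
            ),
        },
        {
            "role": "user",
            "content": (
                f"Question: {question}\n"
                f"Mark scheme: {mark_scheme}\n"
                f"Student answer: {student_answer}\n"
                "Return the marks awarded out of 2, then the feedback."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

In the workshop itself we will work through chatbot interfaces rather than code, but the same questions about reliability and consistency apply: running a prompt like this several times on the same answer is one simple way to check whether the awarded marks stay stable.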

For further information

Becci Peters

compatsch@bcs.uk