Case Study 3: Assessing learning and exchanging feedback

Assessments in an age of Generative AI

Contextual Background 

We are currently revalidating the BA Fine Art course at Chelsea, a process in which we rewrite the course for the next ten years. Given the rapid development of Generative AI (GenAI) tools such as ChatGPT, I wonder which assessment models will still be relevant in a decade.

GenAI is increasingly used by students on the BA Fine Art course to generate their work. This is not in and of itself a problem, notwithstanding the huge environmental impact of these tools (Bashir et al., 2024); in the context of the climate crisis, that impact does demand an ethical and critical engagement with them.

Evaluation 

I try to make these hidden environmental impacts visible by discussing this research in Year meetings. The aim is to support students in making ethical and sustainable decisions in their interactions with GenAI. I also draw attention to UAL’s clear policy (UAL, 2024), which states: “You may not use AI to generate your work unaltered that you submit for assessment as if it was your own work”, and which requires that any use of AI is cited and labelled as AI generated.

Despite this policy, my team and I regularly encounter students, especially in the case of essays, claiming what we suspect are GenAI-authored texts as their own. Detection tools such as Turnitin are far less effective than they claim to be (see AlAfnan and MohdZuki, 2023).

GenAI content is produced by myriad machine-led decision-making processes that are entirely unknowable on the part of the student (or indeed the technologists who created them) (see Beales, 2019). If the purpose of assessment is to help students develop an understanding of quality (Sadler, 1989), then circumventing the assessment process by submitting GenAI-generated work completely defeats the point.

Moving forwards 

I am interested in exploring two avenues in response:

  1.  Integrating self-assessment

Race (2001) proposes a series of questions to support self-assessment, which I have included here:

• What do you think is a fair score or grade for the work you have handed in?

• What was the thing you think you did best in this assignment?

• What was the thing that you think you did least well in this assignment?

• What did you find the hardest part of this assignment?

• What was the most important thing you learned in doing this assignment? (ibid., p.15)

These questions elicit higher-level thinking and reflection skills situated in the subjective experience of individual students, and they might also help to surface academic misconduct involving GenAI. I intend to experiment with them as part of our Unit 7 assessments in May 2025.

  2.  In-person assessment

There has been some discussion of using more vivas as assessment points in response to GenAI (Dobson, 2023), but these present complications in a context of widespread social anxiety. In Year 2, we are proposing informal in-person assessments, which we are terming ‘Studio Visits’. In a ‘Studio Visit’, tutors would come to the student’s studio space to reflect on work in progress as well as finished pieces, paralleling the real-world experience of an interested curator or collector visiting an artist in their studio. During the Studio Visit, a conversation between tutor and student would seek to draw out the five areas of the Level 5 UAL marking matrix.

547 words

References 

AlAfnan, M.A. and MohdZuki, S.F. (2023) ‘Do Artificial Intelligence Chatbots Have a Writing Style? An Investigation into the Stylistic Features of ChatGPT-4’, Journal of Artificial Intelligence and Technology, 3(3). doi:10.37965/jait.2023.0267.

Bashir, N. et al. (2024) ‘The Climate and Sustainability Implications of Generative AI’, An MIT Exploration of Generative AI [Preprint]. doi:10.21428/e4baedd9.9070dfe7.

Beales, K. in conversation with Tunstall-Pedoe, W. (2019) ‘Unintended Consequences?’, Interface Critique Journal, 2, eds. F. Hadler, A. Soiné and D. Irrgang. doi:10.11588/ic.2019.2.66989.

Beckingham, S., Lawrence, J., Powell, S. and Hartley, P. (eds) (2024) Using Generative AI Effectively in Higher Education: Sustainable and Ethical Practices for Learning, Teaching and Assessment. Oxford: Taylor & Francis. Available from: ProQuest Ebook Central. [Accessed 14 March 2025]

Dobson, S. (2023). Why universities should return to oral exams in the AI and ChatGPT era. [online] The Conversation. Available at: https://theconversation.com/why-universities-should-return-to-oral-exams-in-the-ai-and-chatgpt-era-203429. [Accessed 15 March 2025]

Race, P. (2001) A Briefing on Self, Peer and Group Assessment. LTSN Generic Centre.

Sadler, D.R. (1989). Formative assessment and the design of instructional systems. Instructional Science, [online] 18(2), pp.119–144. Available at: https://link.springer.com/article/10.1007/BF00117714.

UAL (2024). Student guide to generative AI. [online] UAL. Available at: https://www.arts.ac.uk/about-ual/teaching-and-learning-exchange/digital-learning/ai-and-education/student-guide-to-generative-ai. [Accessed 15 March 2025]
