Don’t fear the robot: future-authentic assessment and generative artificial intelligence

Professor Phillip Dawson, Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University, Melbourne, Australia. Twitter: @phillipdawson

Generative artificial intelligence is now capable of producing outputs that appear to satisfy some learning outcomes. At the time of writing, a preprint study reports that ChatGPT performed at or near the passing threshold on the US Medical Licensing Exam (Kung et al., 2022). Everyday educators are experimenting with these tools and finding that, to greater or lesser extents, their assessments are vulnerable to ‘aigerism’: “breaking the rules by basing your work on AI generated content, despite the fact that content did not exist prior to your request” (bazpoint (Twitter user), 2022). However, a cheating perspective is not the only way to consider the role of generative artificial intelligence in assessment.

This presentation considers generative artificial intelligence in the context of future-authentic assessment, which Dawson and Bearman (2020) define as “assessment that faithfully represents not just the current realities of the discipline in practice, but the likely future realities of that discipline.” It argues that tools like ChatGPT are already part of the graduate experience of life, work and civic engagement, and that capability with these tools should be considered a learning outcome in and of itself.

Taking this view, the challenge for educators shifts from “how do I ban or detect aigerism?” or “how do I design tasks that AI can’t do?” towards “how do I faithfully represent a world where these tools are normal?” Viewed in the context of previous technology panics, this is familiar territory: assessment has a long history of transitioning from worry about new technologies such as writing, calculators and the Internet to embracing them and even incorporating them into learning outcomes (Dawson, 2020).

Key References

bazpoint (Twitter user). (2022). [Tweet]. Twitter. Retrieved 20 January 2023 from https://twitter.com/bazpoint/status/1600074403655450629

Dawson, P. (2020). Cognitive Offloading and Assessment. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.), Re-imagining University Assessment in a Digital World (pp. 37-48). Springer International Publishing. https://doi.org/10.1007/978-3-030-41956-1_4 

Dawson, P., & Bearman, M. (2020). Concluding Comments: Reimagining University Assessment in a Digital World. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.), Re-imagining University Assessment in a Digital World (pp. 291-296). Springer International Publishing. https://doi.org/10.1007/978-3-030-41956-1_20 

Kung, T. H., Cheatham, M., Medenilla, A., Sillos, C., De Leon, L., Elepaño, C., Madriaga, M., Aggabao, R., Diaz-Candido, G., Maningo, J., & Tseng, V. (2022). Performance of ChatGPT on USMLE: Potential for AI-Assisted Medical Education Using Large Language Models. medRxiv, 2022.12.19.22283643. https://doi.org/10.1101/2022.12.19.22283643

Biography

Professor Phillip (Phill) Dawson is the Associate Director of the Centre for Research in Assessment and Digital Learning (CRADLE) at Deakin University. Phill has degrees in education, artificial intelligence and cybersecurity, and he leads CRADLE’s work on cheating, academic integrity and assessment security. His two latest books are Defending Assessment Security in a Digital World: Preventing E-Cheating and Supporting Academic Integrity in Higher Education (Routledge, 2021) and the co-edited volume Re-imagining University Assessment in a Digital World (Springer, 2020). Phill’s work on cheating is part of his broader research into assessment, which includes work on assessment design and feedback. In his spare time Phill performs improv comedy and produces the academia-themed comedy show The Peer Revue.