Deep dive with the AI Assessment Scale: Level 1

2024 will see GenAI-powered apps, chatbots, and maybe even physical devices enter our classrooms – whether we want them there or not. Since the release of ChatGPT, assessment and academic integrity have been among the biggest concerns about GenAI in education. Whether or not that should be the case (in other articles I’ve argued that focusing on the creative uses and ethical concerns of GenAI should be at least an equal priority), schools and universities should be looking for ways to approach AI and assessment which are realistic, practical, and inclusive.

Last year, I wrote about the AI Assessment Scale as a way to address some of the concerns about academic integrity and appropriate use. Later in 2023, Mike Perkins, Jasper Roe, Jason MacVaugh and I revisited the initial scale and made some improvements to allow for its use in a tertiary context and across a range of disciplines. In this series of posts, I’ll go into each level of the scale in detail, drawing on the original idea and the subsequent paper and providing examples of tasks and assessments.

Level 1: No AI

At this level of the AIAS, students are expected to use no Generative Artificial Intelligence apps or services. There are many reasons why this might be appropriate, and you need to be able to clearly articulate to students why this is the case. It is not sufficient to just say “you can’t use AI because it’s cheating”: as I’ll emphasise throughout this series, that may not be true, and it will be very difficult to enforce.

Here are some of the reasons you may choose Level 1 for assessment tasks:

  • The task itself precludes AI, for example a practical task away from devices (e.g., sport, metalwork, kitchen)
  • The student is required to demonstrate mastery of knowledge or retention, i.e., the ability to recite facts from memory
  • The task requires a demonstration of critical and creative thinking skills without the support of technology
  • The task is assessing communication skills, collaboration, discussion, debate, etc.
  • The task is designed as practice for examinations
  • The task is an examination
Practical AI Strategies includes an entire section on GenAI policy and assessment. It is available from Amba Press.

Not just exams

At this stage, I want to stress that “no AI” does not mean “under examination conditions”. In 2023, many schools and universities had a knee-jerk reaction to Generative AI and shifted all of their major or summative assessment tasks to examination conditions. The jury is still out on the efficacy of high-stakes exams, and in my experience they are not an effective way to assess learning in the classroom. The added pressure of timed, unseen essays in English, for example, rarely results in the highest quality writing.

Again, justifying “no AI” needs to go beyond assessment security or the fear of students using AI to “cheat”. As I have said elsewhere, we decide what is and is not cheating, and we do not need heavy-handed assessment policies to police students’ use of technology.

I have made a collection of resources related to AI and assessment, including many articles about different assessment modes with and without AI. Check them out here:

Ideas for Level 1 assessments

So if a task can be completed at Level 1 without necessarily being an examination, what might that look like in different disciplines? In the article we wrote about the AIAS, we included some examples in the supplementary materials:

  • Mathematics: solving equations and problems without the aid of calculators or access to other devices
  • Literature: the completion of draft writing by hand under supervised conditions
  • Computer science: the demonstration of theoretical knowledge through discussion

Beyond these examples, there are many ways that students can be assessed on their knowledge and skills without incorporating Generative AI: essentially, any task where students do not need access to a device. This might include tasks conducted in timed periods, face to face, under supervision, or in workshops and other collaborative contexts. For example:

  • Science: Conducting a laboratory experiment where students manually collect and discuss data, demonstrating their understanding of scientific principles and methods. Note: If students use computers to analyse data, then they likely have access to GenAI.
  • Art: Creating a physical artwork using traditional materials like paint, clay, or charcoal, to showcase artistic skills and creativity.
  • History: Writing an essay in class about a historical event or figure, using only knowledge from lectures, their own hand-written notes, and textbooks, without internet access.
  • Music: Performing a piece of music live, either individually or as part of an ensemble, to demonstrate skill with an instrument or vocal ability.
  • Physical Education: Participating in a physical activity or sport, demonstrating physical skills, teamwork, and knowledge of game rules.
  • Languages: Conducting a conversation or oral exam in a foreign language, testing fluency and comprehension skills without digital translation tools.
  • Geography: Drawing and labelling a map of a local area from memory, showing an understanding of geographical locations and features.
  • Drama: Performing a scene or monologue in front of an audience, assessing acting skills and interpretation of the script.
  • Economics: Debating economic theories or current events, focusing on analytical skills and understanding of economic concepts.
  • Philosophy: Engaging in a Socratic dialogue or debate on philosophical topics, focusing on argumentation skills and critical thinking.

As you can see, there’s nothing particularly out of the ordinary here. We have been teaching without AI – and without computers – for much longer than we have taught with them. In fact, most of the tasks and assessments that happen in day-to-day K-12 education can and will happen without any access to GenAI. We have been lulled over the past few years into the idea that we must use digital technologies to teach. It’s similar to the argument doing the rounds now that “every student must be taught AI literacy as a core part of the curriculum.”

My counter-argument: the people in charge of companies like Microsoft and OpenAI didn’t have “AI literacy” in their curriculum, and they’re doing just fine. Similarly, it’s reported that many Silicon Valley families send their own kids to low-tech and tech-free schools. Again, I’m sure those children will grow up with a sufficient understanding of technology.

My first webinar for 2024 is Teaching AI Ethics. The session covers key areas of ethical concern for Generative AI, including bias, environmental concerns, and copyright. Register below for Monday February 12th.

Authentication and assessment security

At Level 1 on the scale, there is only one way to guarantee total security: the task must be completed in person and with no device access. I’m not saying that students are all compulsive cheats (like some news articles seem to suggest), but for practical reasons, you can’t mandate “no AI” and send students away to complete the task.

Firstly, many, if not all, tasks aside from physical, practical ones can be completed to some extent using generative AI. This extends to written tasks, creative tasks, design work, anything with images, many things with sound, and, yes, even maths. Microsoft Copilot – free to access, and running GPT-4 – can use programming languages like Python to handle complex mathematical requests. It can certainly handle most of the K-12 maths and science curricula.
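To give a sense of what that looks like in practice, here is a minimal sketch of the kind of Python a chatbot might generate for a routine secondary-school algebra prompt like “solve x^2 - 5x + 6 = 0 and check the answers”. This is my own illustration using the sympy library, not actual Copilot output:

# Illustrative sketch only – the sort of code a GenAI tool might write,
# not actual Copilot output
from sympy import symbols, Eq, solve

x = symbols("x")
equation = Eq(x**2 - 5*x + 6, 0)

# Solve symbolically, then substitute each solution back into the equation
solutions = solve(equation, x)                     # [2, 3]
checks = [equation.subs(x, s) for s in solutions]  # [True, True]

print(f"Solutions: {solutions}")
print(f"Checks: {checks}")

If a free chatbot can produce and run code like this on request, a take-home maths worksheet offers very little in the way of assessment security.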

Enjoying the article and want to stay up to date with the rest of the series? Join the mailing list:


Next, the students most likely to “get away with” using AI are those already privileged by the education system. They will be more literate, more digitally literate, and have access to better (likely paid) models. They will craft more sophisticated prompts, get better results, and use models which give more “human-like” output. In essence, you’ll create an equity issue.

Finally, detection tools are not reliable enough to ground an academic integrity judgement. I keep running into the use of platforms like Turnitin in schools and universities as the final (and sometimes only) line of defence against GenAI. It’s not appropriate to challenge a student’s work on the basis of these tools, and doing so may cause serious harm to the teacher-student relationship, or to the student themselves if they are falsely accused. There is plenty of evidence that detection tools don’t work. Here are a few media articles to share with your colleagues if the question comes up:

You’ll find plenty more if you go looking.

Level 1: Working with purpose and to demonstrate human ingenuity

All assessments should have a purpose beyond simply the demonstration of knowledge or the ticking of an educational box. Assessment should allow the educator to gauge how far a student has come, and where they need to go next. Tasks should be meaningful and have real-world applicability. Level 1 tasks, then, should focus on developing and demonstrating the skills where AI simply isn’t needed: human-centred skills where using GenAI, chatbots, image generation, or AI-assisted code generation tools just isn’t helpful.

Tasks prioritising discussion, working by hand, working in groups, and working away from devices don’t have to be conducted under strict examination conditions. They might form the bulk of any given unit, provided the students are working together in a face-to-face context. For online and distance education, Level 1 tasks can only really be used outside of formal assessments or as part of an “ungraded” assignment. We’ll need to come to terms with the implications of that, especially at a tertiary level where so many courses are now offered online.

If you have questions or comments, or you’d like to get in touch to discuss GenAI consulting and professional development, use the form below:

