Creating AI-resilient assessments: A guide for Arts instructors



Find practical strategies for creating assessments that are more resilient to AI misuse, tailored for instructors in the Faculty of Arts.

The rise of generative AI presents both challenges and opportunities for academic assessment. While these tools can be misused, they also push us to design more meaningful and authentic assessments that focus on higher-order thinking, creativity, and real-world applications. This guide offers some practical strategies to help UBC instructors across the Faculty of Arts create assessments that are more resilient to AI misuse.

The goal is not to create “AI-proof” take-home assignments; that is impossible, since any assessment students complete outside a supervised setting is vulnerable to unpermitted AI use. Instead, we need to develop assessments that are less susceptible to simple AI generation while encouraging students to engage deeply with course materials.


Proactive measures: Setting the stage

Before designing individual assignments, it’s crucial to establish a classroom environment that proactively addresses the role of AI. Many students are reluctant to raise these conversations with their professors, worried that doing so will make them look like they intend to cheat. By setting clear expectations and educating students about the ethical and effective use of these tools, you can prevent misunderstandings and foster a culture of academic integrity from the outset.


Secure student buy-in by creating a clear, concise course policy on AI usage. Explain the rationale behind your policies, whether you are banning AI, allowing it for specific tasks, or actively integrating it.

When students understand the “why” (e.g., to develop critical thinking skills or to ensure fairness), they are more likely to adhere to your guidelines. Make this policy prominent in your syllabus, discuss it on the first day of class, and return to it when discussing assignments throughout the term. Remember, the AI landscape is constantly changing, so revisit your policies for each course and each term.

Need assistance? Consider using our AI Syllabus Policy Generator as a starting point.

If you decide not to simply ban AI, you will need to teach students when and how to use AI ethically as a tool for brainstorming, outlining, checking grammar, etc. You may want to create an assignment where students are asked to critique an AI-generated response, fact-check its output, or use it as a starting point for a more complex analysis. This not only improves their work but also teaches them about the limitations of these tools so that they can make more informed decisions.

Regardless, be explicit about the learning goals of such an assignment, why you are assigning it, and what you hope your students will gain from it.

You may want to consider incorporating UBC’s Guidance on Responsible AI Use into such discussions.


Strategies for take-home assessments

These overarching strategies can be applied to any assessment type to increase its resilience to AI and enhance its educational value, but they are particularly helpful for take-home assignments, where instructor oversight is not possible.

The University of Sydney offers an excellent online repository of open educational resources, including ready-to-deploy assessments that align with many of the strategies discussed here. We encourage you to explore the AI for Educators Canvas Repository and adapt any resources that suit your teaching needs.


Emphasize the steps involved in creating the final work. This makes it more difficult for a student to outsource an entire task to AI.

You may require students to submit outlines, annotated bibliographies, drafts, or reflections on their creative process. For example, an instructor may ask for access to a student’s Google Doc so that they can review its version history.

Connect assignments to students’ personal experiences, specific in-class discussions, or unique data sets.

For example, ask students to apply a theoretical concept to their own life, a recent local event, or a case study discussed in class. These authentic, personal dimensions promote engagement and make students more likely to author the assignment themselves.

Base assignments on materials that are not easily accessible to large language models. Require students to incorporate specific passages from a textbook, a guest lecture, or an in-class film screening.

Design questions that require analysis, synthesis, evaluation, and creation, rather than simple recall or summarization. AI is proficient at summarizing, but it struggles with deep, nuanced analysis and with generating truly novel ideas under complex constraints.

For example, instead of asking “Summarize the causes of World War I,” try “Evaluate the most significant cause of World War I, using at least three specific historical events to defend your position. Why is the cause you chose more significant than others we have discussed in class?”

Ask students to work across different formats. This requires a level of synthesis and understanding that is more difficult to achieve with separate AI queries.

For example, you may have students create a presentation based on a written paper, write a script for a podcast episode summarizing complex tasks or topics, or create a visual diagram that explains concepts and then write a detailed explanation of that diagram.

Require students to analyze and evaluate their own work by briefly defending or explaining it. This can take the form of one-on-one discussions (ideal for smaller classes), a short reflection video submitted alongside the assignment, or in-class presentations where students apply their knowledge by responding to peer questions.

These activities help assess student understanding beyond recall or basic comprehension, targeting analysis, evaluation, and even creation—cognitive levels that are less easily outsourced to generative AI. In addition, they better reflect how knowledge is disseminated throughout the academy while promoting students’ ownership of their work.

Break down large projects, like term papers, into smaller, scaffolded parts. For example, an essay may be broken down into proposal, annotated bibliography, outline, draft, peer review, and final draft stages.

This allows you to see how the student’s project evolved over time and makes it extremely difficult for a student to generate the entire project at the last minute with AI. You may also choose to have some components done at home while others are completed in class.

In multiple-choice questions, use plausible but incorrect "distractor" options that would appeal to an AI's reliance on generalized models.

For instance, a question could ask students to identify the primary function of a contemporary urban area like Vancouver's Granville Island, located on a former industrial waterfront. Plausible distractors could be "heavy manufacturing" or "low-income housing," which are consistent with classic urban models an AI would know. However, the correct answer, "public market and tourism hub," requires specific, place-based knowledge that contradicts those models, effectively confusing the AI.

Incorporating diagrams, charts, graphs, or multimedia elements into exam questions can make it more difficult for LLMs to provide accurate answers. While many models have become better at reading complex images, requiring a student to extrapolate information from a visual model or diagram can still be an effective way to boost the AI resiliency of your assessment.


Leveraging in-class or proctored solutions

Sometimes, you need greater control over the reliability of your assessments, such as when conducting midterm or final exams. While honor codes, conduct statements, and randomized question banks are traditional methods of preventing academic misconduct, they are rarely effective at preventing the unauthorized use of AI.

Below are a few solutions you may wish to employ to promote AI resiliency in such cases, although you may want to compare your options before choosing which to implement.

Respondus LockDown Browser

LockDown Browser is a secure web browser that limits what students can do during assessments in Canvas. While using LockDown Browser, students cannot print or copy questions, visit other websites, open other applications, or exit the assessment until it has been submitted. This helps prevent the use of generative AI tools and other unauthorized resources during the test.

For additional assistance incorporating this technology, please consult Arts ISIT’s guide on Respondus Lockdown Browser.

Remote proctoring with Zoom

To improve security while students are writing an exam remotely, consider combining a technology like LockDown Browser with live proctoring over Zoom. You will need to require your students to have their cameras on during the exam, and depending on the size of your class, you may require additional support.

Please consult Arts ISIT’s guide on how to monitor your online exams with LockDown Browser and Zoom for more in-depth guidance.

Computer labs

Administering assessments in a controlled lab environment can provide a high level of security. By restricting network access to only the necessary resources and combining this with live proctoring, it is far less likely that students will be able to use AI. In addition, the fact that students are not using their own devices in such situations further reduces the likelihood that they will cheat or use AI.

Interested in the type of spaces available to you? You can view our available computer labs and request one for your course.


Considering inclusive assessment practices

While exploring options for making your assessment more AI-resistant, it is also important to keep in mind the impacts these changes might have on inclusion and accessibility. As many researchers have pointed out, decisions around assessment design are inherently value-laden, as they can either foster or hinder inclusivity and impact individuals in diverse ways (Tai et al., 2022).

For example, while shifting assessment towards timed, in-person activities may have benefits in terms of limiting the use of AI, these changes may disadvantage students with certain disabilities or mental health considerations, resulting in increased requests for accommodation. It is important to take these effects into account and weigh the potential benefits of the changes you are making against their impacts.

Here are some simple things you can do to help make your assessment more inclusive, along with some resources to consider.


If you are introducing in-class assessment activities, give students ample time to complete them so that time pressure doesn’t become a barrier and a source of anxiety.

Consider spreading in-person assessment activities across the term instead of one or two intense periods.

Make sure at-home or online activities are well aligned with in-person assessment components so that the learning students do outside of class adequately prepares them and gives them practice for what they will demonstrate in class.

Provide practice opportunities that mimic the in-class assessment so that students can prepare for, and become familiar with, what the experience will involve.


Additional resources