This month, our guest editors are Dr. Laurie McNeill, Associate Dean, Students & Professor of Teaching, English, and Dr. Andrew Owen, Associate Dean, Academic & Associate Professor of Teaching, Political Science, both from the UBC Faculty of Arts. They present their We’re Only Human… project and share insights and resources on the role of writing assessments in university classrooms shaped by Generative AI (GenAI).
What is the role of writing – and, more immediately, writing assessment – in the contemporary university classroom? What will motivate our students to put in the hard work of developing their writing, reading, and thinking skills when it seems like AI can do it for them?
These deeply unsettling and highly urgent questions have driven our work in the Teaching and Learning Enhancement Fund project We’re Only Human…. For the last year, we’ve been surveying students, talking with faculty, developing and trialing classroom activities, and reviewing the latest literature so that we can bring updated strategies and information to the UBC community. It is clear that, understandably, we are all struggling to adapt to the pace and scope of change.
One significant change, for courses that assign any take-home writing, is that current GenAI technology means that it is no longer practical to have course policies that ban AI or equate its use with misconduct. Instead, an “AI-lite” approach – one that limits or allows AI tools depending on the specific uses, linked to the specific learning outcome – can be pragmatic and persuasive, and shifts from a policing to a partnership approach. We’ve also experimented with assignments that deliberately incorporate AI to support rather than replace learning.
Our student surveys suggest most students understand that over-reliance on AI tools for high-stakes assignments, such as term papers, isn’t appropriate. On the other hand, they are more likely to see AI use as not a big deal for work they perceive as less meaningful (e.g., low-stakes “tasks” like discussion posts). In response, we can shift such tasks to in-class work (in feasible and accessible versions).
Our project team has seen the value of involving students as partners – as humans! – in discussions of the fundamental knowledge and aptitudes they need to develop to become original, critical thinkers, and effective communicators not just at university but beyond.
Additional resources
We’re Only Human… project website
The We’re Only Human… project website serves as both a hub for faculty and staff to collaboratively address the evolving challenges of GenAI in the UBC Faculty of Arts, and a repository of the team’s work. It features team-tested learning and instructional resources and readings for faculty interested in this approach, along with an overview of principles and options instructors might consider when planning written assessments.
Aligning our assessments to the age of generative AI
This article from the University of Sydney explores how assessment practices can be adapted to address the emergence of AI tools in education. It discusses strategies for designing assessments that promote academic integrity, foster critical thinking, and ensure fairness in a landscape where students may use GenAI.
10 things UBC students should know about generative AI
This UBC guide outlines practical principles for students using GenAI in their academic work, emphasizing that these tools should complement – not replace – critical learning efforts. Key advice includes building AI literacy, being mindful of privacy and bias, and transparently acknowledging AI use in assessments.
Will AI usher in the end of deep thinking?
This podcast discusses how AI tools are reshaping student writing, cognition, and education. The conversation explores how assessments, teaching methods, and educational norms might need to evolve to preserve “deep thinking” in a world increasingly mediated by GenAI.
Talk is cheap: why structural assessment changes are needed for a time of GenAI
This journal article argues that GenAI undermines traditional assessment validity, and that many institutional responses create only what the authors call an “enforcement illusion.” Instead, they call for structural changes to assessment design that build validity into assessment architecture rather than relying on unenforceable rules.
Enjoyed reading this edition of Edubytes? To view past issues, visit the Edubytes archive.
Are you interested in staying up to date on the latest trends in teaching and learning in higher education? Sign up for our newsletter and get this content delivered to your inbox once a month.

