Student Peer Assessment: James Charbonneau Interview

James Charbonneau, an instructor in the Department of Physics and Astronomy, explains how he became part of the working group of faculty members who created ComPAIR, the student peer review platform that allows students to provide feedback to peers by comparing two sets of answers. James describes the reasoning behind the tool and how he uses it in his classroom.

What is the project about?

Physics 333 is a climate-energy course for people with very little physics background. I really wanted to get people to engage in the conversation that you can have with physics. The peer assessment is built into the course through these activities called big-picture questions. I give a really vague question to the students that is essentially some version of this:

“Imagine if you take all the gas cars in Vancouver, and change them all to electric [vehicles]. What is the change in the atmosphere after a year? The change in temperature of the atmosphere after a year?”

So you make this local change with a global effect, and it’s not clear how the two are connected. Originally, students would work on these in groups and come up with the answers together. To build in some accountability, because it’s an online course, I would have them peer evaluate each other’s contributions.

That was the first sort of peer evaluation I had. It started as a way to facilitate this big community-building exercise, to get everybody thinking about and talking to each other about the same idea.

What motivated you to initiate the project?

It became clear after doing it the first time that students were actually getting a lot out of evaluating each other’s responses.

I initially had a Reddit-type system, where groups would post their answers, and other groups would vote on their answers, and comment on them. I realized that having people look at other answers and give feedback on them is a super-rich part of the experience. This is where the development of ComPAIR came into the course.

[There was a new project being developed and] I remember the first development meeting. I came in, and I talked. I showed what I wanted to happen, what I imagined facilitating these comparisons to look like, and they built a skeleton program that essentially did it, and that eventually became ComPAIR.

How did you do it?

So essentially, the students work on one of these big-picture questions. It’s a four-week process, working on one question. We work for one week to understand the problem. In the second week, we try to put together all the elements that people found in the first part, to arrive at an answer. Then the third week is to actually look at what they’ve all done and individually build a write-up.

They submit that to ComPAIR, and then people get two assignments anonymously from their peers. They have a rubric that I give them that essentially tells them how to choose which one is better. There are parts that have to do with how well argued it is: Are the numbers right? Are they reflecting on their answer in some meaningful way? They do this three times, so they read six of these potentially huge things.

They get to see other people’s work, and that’s the value in it. You get to see, “what makes this one better than this one?” You learn so much by doing this deep dive into what other people’s work looks like. So that’s the part that I feel is super valuable to the process. And then I have them go and make a statement.

So they’re writing feedback on each person’s assignment, and then they make a statement about their own assignment after they’ve looked at them. The thing that I actually want to grade is their feedback on other people’s work. It gives them practice in assessing other people’s work, and I get to grade essentially how good they are at assessing.


Did you have the support you needed for the project? Is there additional support you wish you had had to help you to achieve your goals?

Before ComPAIR, I had a ton of support from people at the Centre for Teaching, Learning and Technology (CTLT). To actually build ComPAIR, I was pretty lucky, because a few profs had already submitted a [Teaching and Learning Enhancement Fund (TLEF) proposal] to build something that was actually meant for grading. I saw this as a thing that facilitated the interaction between students that I wanted.

ComPAIR itself was a project that was supported by the TLEF and also involved Tiffany Potter and Mark MacLean. It was built around an algorithm called adaptive comparative judgment and the idea of Thurstone’s law of comparative judgment. For a novice, comparing A versus B is much easier than giving a mark. In terms of peer assessment, giving a mark of 7.5 out of 10 is very hard. That’s expert behaviour. Ranking a list of people, taking ten people and ranking them, that’s expert behaviour. But for novices, what they can do in their first steps into doing assessments is take A and look at B, and that A-B comparison is really simple and easy to do for the most part.
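To make that idea concrete, here is a minimal sketch, not ComPAIR's actual implementation, of how a pile of simple A-versus-B judgments can be aggregated into scores. It uses the Bradley-Terry model, a standard pairwise-comparison model from the same family as Thurstone's; the function name and sample data are illustrative only.

```python
from collections import defaultdict

def rank_answers(comparisons, iterations=50):
    """Turn pairwise "X was better than Y" judgments into scores.

    comparisons: list of (winner, loser) answer ids, one per judgment.
    Returns {answer_id: score}; higher scores won more of their matchups.
    Uses the Bradley-Terry minorization-maximization update.
    """
    wins = defaultdict(int)    # comparisons each answer won
    pairs = defaultdict(int)   # times each unordered pair was compared
    items = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1
        items.update((winner, loser))

    scores = {i: 1.0 for i in items}
    for _ in range(iterations):
        new = {}
        for i in items:
            denom = 0.0
            for j in items:
                key = frozenset((i, j))
                if i != j and key in pairs:
                    denom += pairs[key] / (scores[i] + scores[j])
            # Floor at half a win so an answer that never won keeps a
            # small positive score (and we never divide by zero above).
            new[i] = max(wins[i], 0.5) / denom if denom else scores[i]
        total = sum(new.values())
        scores = {i: s / total for i, s in new.items()}
    return scores

# Illustrative data: each tuple is one student judgment,
# (the answer they picked as better, the one they picked as worse).
judgments = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]
for answer, score in sorted(rank_answers(judgments).items(),
                            key=lambda kv: -kv[1]):
    print(answer, round(score, 3))
```

The point mirrors what James describes: each student only ever makes an easy A-versus-B call, and it is the aggregation of many such calls that produces a ranking.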

The first version was completely bare-bones, so I had students use the software and give me feedback that I would pass to the developers during the development process. It was a really, really intense, involved thing. I think that was just because people were excited about the idea. It was a fun thing to program. It was a new way to design programs here at CTLT. It was new to me. It was new for the students.

What were some of the key outcomes of the project?

Without the peer assessment aspect of it, these big-picture questions wouldn’t happen in any meaningful way. These big-picture questions are the meat of the course. Physics 333 does not exist without this interaction between the students and them reading each other’s stuff.

The peer assessment that happens is not a summative assessment — it’s formative in a way. They’re learning from each other. I’m definitely facilitating conversations, facilitating aspects of it. I’m giving them feedback on their feedback when I mark it, but it’s largely a peer-driven process.

In this whole process, I never actually grade the thing they produce. It’s the peer interaction which is the important thing. In order to facilitate these sorts of big problems and have people think about things out of the box and have this sort of openness to get the wrong answer and the right answer and debate things, I think…you need the peer assessment part.

How did the project impact learners or the way in which you teach?

So I have some students say, “Oh, you’re just getting us to mark for you.” I don’t use the grades that they produce, so that’s not why I’m doing it. These are third-year students. They should start learning how to exercise judgment like this. They’re learning to create.

One of the course goals is to be able to go out in the world and use numbers and not adjectives. So if you want people to be able to go into an argument, then you have to teach them to be able to assess that argument. And so in this course, they’re producing this thing and … they all have the exact same information, but they’re all coming up with a different version of it. And then assessing is that right or is that wrong, that’s part of this bigger thing.

I should say that some students hate it completely, and they think that it’s so confusing. The point isn’t to do research; the point is to take big problems and make assumptions, to make the problem simpler, and to communicate those assumptions to other people. The peer assessment part facilitates it.

I see it in the teaching evaluations. One of them said, “Every student over their university career should do two big-picture assignments.” Students said that they got skills by answering questions that they didn’t believe they could possibly answer. And there’s a lot of confidence that gets built in people. It’s all sort of built into this evaluating — evaluating what data are good and what data are bad. And that has to do with evaluating their peers too.

What lessons have you learned that you want to share with your colleagues?

I think that students like peer assessment. They like being able to exercise these skills that they don’t get to otherwise. But it has to be done in a very safe way.

When we were developing ComPAIR, we made the comments that people wrote on other people’s work visible to everyone. And people really didn’t like that. People said, “No, those comments are mine. That is criticism of my stuff, and I want to get it, but I do not want other people to see it.” And I thought that was really interesting: people want this, but there’s that aspect of safety and privacy and anonymity that gets people feeling like they can make these assessments of their peers.

I think a lot of it, too, has to be building up their expertise in this thing. Everybody did the exact same assignment with the exact same information. So everyone in Physics 333 has the same sort of expertise, and they feel comfortable when assessing their peers.

I think the actual grades that come out of it, people are a lot more honest with themselves than you would imagine. I also mark their reflections on their own assignment … and people are fairly honest with like, “Oh yeah, mine wasn’t really that good,” which is surprising to hear them say. And they’re free to say that because I’m not judging the quality of their work. Their grade relies on how reflective they are: are they able to articulate that theirs is better and why? Or worse and why?

So I think there’s a lot of safety. It’s a safe space for people to operate in. There were people who just wouldn’t do it if they knew that people were going to evaluate them out in the open.

What are the future plans for this work? How do you plan on sustaining what you have created through the project?

I think that I have to think about the workload of the actual assessment. People are sitting there, and they’re reading these huge documents … is that workload fair? Am I losing people just because it’s too much work?

We’re trying to figure out exactly what kind of learning happens when you do this, which is why I want to go back to my four years of assignment data and see if we can actually do some qualitative coding. The kind of feedback you have to give is, “Oh, it’s really unclear how you described that” or “your conclusion just makes no sense in terms of that.” They’re structural kind of things, and that’s something that I learned from this direct comparison kind of peer assessment. Because you can’t be as detailed, you give a sort of higher-level feedback, which is maybe actually more useful.


For more information on ComPAIR, please visit the LTHub tool guide or watch the video overview.

How UBC faculty have incorporated Student Peer Assessment

Silvia Bartolic

Silvia introduced SPA as a way of sharing her sociology students’ work with their peers. She explains the challenges and learnings she found along the way.


James Charbonneau

James explains how peer evaluation in his physics class, which began as a community-building exercise, evolved into the student peer review platform ComPAIR, and why safety matters in peer assessment.


Kevin Chong

Peer review is one of the foundational pieces of creative writing, traditionally run in small workshops. Kevin shares how he brought it to large lecture classes and why it matters to developing writers.


Peter Graf

Personalized feedback for student learning can be challenging to deliver in large classes. But beyond that, Peter sees peer assessment as an opportunity for students to develop important critical reading and self-assessment skills.


Misuzu Kazama

It’s far more common for peer review to be applied to writing tasks than spoken ones. Can language students give each other good feedback on a spoken assessment? Misuzu developed a project with real-world context to find out.


Kelly Allison & Marie Nightbird

Interpersonal communication is a key skill for social work students. After using informal peer feedback to develop those skills for many years, Kelly and Marie share how they formalized the process to gather insights and improve the student experience.


Jennifer Walsh Marr

From a starting point of investigating accountability in group work, Jennifer’s peer assessment project led to more student-centred teaching, and a better sense of community for her students.
