“It’s an integral part of learning.” That’s how Gregor Kiczales, a professor in the UBC Department of Computer Science, describes the role of student assessment. Gregor is the instructor for Introduction to Systematic Program Design, part of UBC’s research-informed Coursera pilot project. Courses delivered through the Coursera platform are structured as Massive Open Online Courses (MOOCs), an emerging delivery model built on providing free, publicly available, non-credit courses to a worldwide audience. As part of this initiative, UBC is offering four non-credit courses: the first ran in January 2013, and the remaining three, including Introduction to Systematic Program Design, will be offered in May 2013. While the impact of MOOCs on higher education is being actively debated, we asked Gregor to discuss the role and challenges of online assessment in MOOCs.
For the majority of classroom-based, blended, and distance learning environments, student assessment is provided by instructors or trained teaching assistants (TAs), often with the aid of well-designed evaluation rubrics to guide the process. With the emergence of MOOCs, one of the greatest challenges has been to find a student assessment model that works at a much larger scale, with potentially tens of thousands of students in a course.
Peer assessment is central to Introduction to Systematic Program Design. Gregor emphasizes the need for fair, substantive feedback. “The role of assessment is for students to know how they are doing,” he remarks. “We have got to get it right.”
Introduction to Systematic Program Design is an introductory programming course that teaches a design method which allows students to approach complex programs in a systematic way. “This design method, because it’s based on theoretical results about the nature of programs, produces well structured programs,” explains Gregor. “What we want to know is not just that your program worked,” he elaborates, “but that you followed our design method and produced a well-structured program.”
Gregor plans to use peer-based, rather than computer-based, assessment methods, so that the feedback provided by the assessment is useful for the student. For the on-campus version of this course at UBC, Gregor is able to work with a team of TAs to assess student work. The challenge has been coming up with a rubric that works for peer-based assessment in a MOOC.
Gregor started with a very rich assessment rubric, meant to replicate the depth of grading done by TAs in the on-campus version of the course. “It was a really good rubric, totally fair,” he notes. Over time, however, he had to simplify his assessment strategy in order to make it work in the MOOC peer assessment model. “We realized that wasn’t going to work, because MOOC students might not have enough commitment to the grading to invest that much effort into it,” he says.
Gregor is now exploring the use of a binary rubric, where every item is awarded zero or full points – it is either right or wrong. Such a rubric is quite different from the one used in the on-campus version of the course. “We try, when we grade at UBC, to have a very rich rubric that allows the TAs to award partial credit,” says Gregor. The TAs are well trained and spend a large amount of time learning how to properly grade assignments. “They are told to not just give a grade,” Gregor states, “but also to coach.” He also mentions that his TAs often point out common misconceptions and encourage students to think about a problem from a different point of view. With such a large number of students in his MOOC, it isn’t possible to have TAs grade all of the student work or give in-depth feedback. And with students doing the peer assessment, one cannot expect the same type of feedback that one would get from a trained TA. Therein lies the challenge. “There is no way a peer grader is going to have that kind of knowledge,” explains Gregor.
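To illustrate how a binary rubric differs from partial-credit grading, here is a minimal sketch in Python. The criterion names and the `score` helper are invented for illustration; they are not taken from Gregor’s actual rubric. The point is only that each item collapses to a yes/no judgment a peer grader can make quickly:

```python
# Hypothetical binary rubric: each criterion is judged met (True)
# or not met (False), with no partial credit. Criterion names are
# illustrative only.
RUBRIC = [
    "purpose statement present",
    "examples/tests provided",
    "design template followed",
    "program produces correct output",
]

def score(checks: dict) -> float:
    """Return the fraction of rubric items the grader marked as met."""
    return sum(1 for item in RUBRIC if checks.get(item)) / len(RUBRIC)

# A peer grader just ticks boxes; there is no judgment about
# degrees of quality on any single item.
grade = score({
    "purpose statement present": True,
    "examples/tests provided": True,
    "design template followed": False,
    "program produces correct output": True,
})
print(grade)  # 0.75
```

The design trade-off mirrors the one described above: a yes/no checklist sacrifices the nuance of partial credit, but it asks far less of an untrained grader.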
The rubric has now been revised to make the peer assessment process easier for students to understand, so that they are able to give effective feedback to their peers. Gregor feels that in the MOOC context, peer grading should take students about a tenth of the time it takes to complete a problem set; any longer, and students might not be interested in completing it. As a result, Gregor is trying to find a way to make the peer assessment process approachable for students, “but still be fair, and still be substantive.” The assessment has to be easy to use while still providing real value. For Gregor, that is the key part. “That’s the kind of balance we are looking for.”
Online peer assessment is also a way to build a community within the course. “A course isn’t a book, a course is a living, breathing, social phenomenon,” notes Gregor. With that in mind, he feels that students have to work together. “Peer assessment doesn’t so much change what we do. It’s just another vehicle for getting them to learn from each other.” This is important, not only in his on-campus course, but in his MOOC as well. Students must be active participants in their learning environment.
In addition to peer assessment, Gregor will be using a discussion forum, where students are encouraged to talk about their peer assessment, as well as post comments and links about the problem sets. “We are going to try to launch that as a way of getting some of the better feedback from the community,” remarks Gregor. He also plans to have TAs monitoring the discussion forum on a regular basis. They will help filter out questions that need an instructor response, but Gregor hopes that the majority of the discussions will happen amongst the students themselves. “We have to get the community of students to be learning from each other.” With other MOOCs, it has been shown that students are willing to step in and help others, and he hopes for this to be the case for his course as well.
Gregor feels that the peer assessment model is very valuable for students. “Peer grading is good for them, because they will learn something,” he states. “When somebody else looks at [your work], they see it fresh. People will pick up things that you will have missed.” There is a dual purpose to peer assessment: one student gets feedback, while the other is exposed to a different way of writing code. He remarks, “I would say that a lot of the value is just going to be from reading other people’s code and going ‘oh, you did it that way.’” For Gregor, the peer assessment process does a good job of exposing students to someone else’s work. “That is where the learning is at.”
This article was published in the March 2013 CTLT Newsletter, Dialogues. Below is a list of the articles included in the issue:
- Faculty Spotlight – 3M National Teaching Fellow Dr. Darren Dahl
- Online Peer Assessment in MOOCs: Students Learning from Students
- Connect: Looking Back and Looking Forward
- Learning About the Social Complexity Behind Aboriginal Student Data
- 2013 CTLT Institute
- New Teaching and Learning Resources
- Other Professional Development Opportunities