Communities of Practice (CoPs) are groups that meet regularly to share knowledge, expertise, ideas, and suggestions around a shared passion for what they do; many of these groups at UBC are facilitated by CTLT. The Course Design Community of Practice, which convenes monthly, recently met to discuss assessing participation. The meeting was co-facilitated by Mali Bain, CTLT Community of Practice Developer, and Warren Code, a Science Teaching and Learning Fellow with the Carl Wieman Science Education Initiative.
Prior to the meeting, participants had been asked to bring a personal example of how they assess student participation, and those who did shared it with the group. Their methods of assessment were enriched by the questions asked and suggestions offered. The meeting had the atmosphere of a book club, where all ideas are valid and are built upon and discussed by peers. A few participants were new to the group, some came from a related community of practice, and a handful had been attending Course Design meetings regularly. This mixture reflected the nature of a community of practice: there are both familiar and new faces at every meeting, brought together by an eagerness to listen, take notes, and contribute ideas.
The group then discussed how to weight each class component when determining a participation mark. A general consensus emerged that the successful use of a participation mark depends greatly on the size and year level of the class. Although the classes mentioned ranged from about 20 students to several hundred, and from first-year courses to honours program seminars, some of the pedagogy around assessing participation could still be shared. A commonality among the participants was the use of interactive elements to both engage students and clarify where participation marks come from. This ranged from iClickers in classes with hundreds of students to a detailed rubric devised by a Science professor, in which participation marks ranged from one to five and students were given a description of how to achieve each level. Another participant offered her format for determining participation marks: to earn a high mark in her class, students must contribute to class discussions with comments or questions, lead a discussion on a designated day, and give a presentation on an assigned topic. This sparked a conversation about how the definition of “participation” can vary, which in turn led to a discussion of the disparity between the value of small-group and large-group discussions. Many agreed that there is currently a disconnect between how these two types of discussion are evaluated and weighed when it comes to assigning a participation mark.
From there, the conversation shifted to the importance of the language used to explain marking – a topic that elicited comments from all corners of the communal table. It was suggested that any method has the potential to work; students just need to understand why they are being evaluated in a particular way. A few members had found it effective to involve students in creating the participation marking scale and in defining what counts as good participation (i.e., quality of contributions versus simply talking a lot). This led to a conversation about the validity of iClickers. One instructor, who has used the electronic polling devices extensively, affirmed that students need to see iClickers as a useful way to engage in class, not simply a way to check attendance.
After much discussion, the group concluded that perhaps the question is not whether participation marks are useful as a grade determinant, but how to make those marks valuable in creating a better learning experience for students. Assessment is meant to motivate students, and ensuring that it encourages them in the right way is the mark of a well-informed instructor. One instructor proposed that marks are sometimes needed to create an environment where students regularly contribute and take part in open discussions.
The group moved smoothly through a variety of topics despite having no set agenda, and there was never a dull moment in the conversation. It seems fitting that instructors, brought together by a common desire to learn more about evaluating their students’ contributions effectively, were active participants in their own brainstorming process. While the topic explored at each Course Design Community of Practice meeting changes monthly based on the interests of the group, the meetings are clearly a collaborative experience. Brainstorming new ideas, asking questions, and digging deeper into a problem, together, are always on the agenda.
Additional resources on the topic:
Course Design Community of Practice Blog – contains notes from this session, as well as notes on related topics such as student peer assessment, course design with TAs, team-based learning, and more.
Student Participation Assessment Wiki – annotated bibliography of resources.