I tend to use some group exams in my inquiry course. I’ve been meaning to write up something about it for a while. So here is a brief intro.
Flavor #1: Learning through Discussion
In the first part of this kind of exam question, students are provided with some novel phenomena on the topic we have been studying. They must write up their individual prediction about what will happen or what they will observe and (more importantly) write explanations. In the second part, they get to discuss with their group for as long as they want. After discussion, they have several options:
(1) If they change their mind, they have to do two things. First, write up their new predictions with explanations. Second, revisit their prior prediction and discuss the flaw or problem in their prior reasoning. What did they fail to consider? What ideas from class were they being inconsistent with? In what situation would their reasoning have been correct, and how is this situation critically different?
(2) If they didn’t change their mind, they also have to do two things. First, they have to clearly explain an idea they heard that was different from theirs, explaining that idea as best they can. Second, they have to respond to that explanation by pointing out the flaw in its reasoning.
If I’ve done a good job picking the question/situation, no group will have all individuals with the same prediction, and a majority of students will be able to put the pieces together for a good explanation after discussion (but few before). If a group does end up with all the same ideas, I can make them conference with another group, I can ask them to anticipate why a person might think the opposite would happen and then rebut that, or I can give them a canned explanation to consider and respond to. I’ve tried each in the past, and they each have benefits and flaws. Conferencing with another group takes up time for both groups. Asking them to both anticipate and respond to an argument is harder than hearing someone else’s argument and responding to it. Responding to a canned explanation I’ve written is different from having to listen, argue, and contend with a peer.
The way I grade is as follows. No points are necessarily taken off for a wrong prediction. I am more focused on the explanation and ideas, looking for clarity of ideas and for a gapless causal explanation. Any conclusions and ideas that are inconsistent with or contradictory to our class’ ideas and evidence are merely noted. However, if any inconsistencies are not explicitly noted and reconciled in the second part, students will most likely lose some points overall. Note here that it is not enough for the student to have the right explanation afterward. Students must return to their prior explanation and address it. On the other hand, if students are sticking with their original explanation, I am really looking for them to respond to other arguments by not merely repeating their own idea. They must attend to the argument and discuss a flaw in it.
Flavor #2: Learning through Investigation
This kind of exam is similar to the first, except that students must go make an observation after initially predicting. If they predicted incorrectly, they have to revisit their explanation by both writing a new explanation that can account for what they observed and discussing the flaw in their reasoning. If they predicted correctly, I have some of the same options available: I can make them construct an alternative prediction and rebut it, or I can make them respond to a canned explanation.
Oftentimes I combine learning through discussion with learning through investigation, and it becomes a lengthier task.
These exams really rely on the instructor to pick the right tasks. Picking a good task critically depends upon an instructor knowing the limits of their students’ understanding and how far those ideas can stretch. Having colleagues to bounce ideas off of can be really helpful in developing these tasks.
Grading these exams can be a bit time-intensive, and it certainly requires professional judgment. It is critical to use the same criteria for evaluating these as for students’ written homework, but I try to avoid over-rubricizing these exams.
Offering these exams requires that students have had many opportunities to write and critique explanations, and that they have had practice and feedback on constructing counterarguments.
During the exam, I circulate around and listen to conversations. I often get called over by groups who feel stuck, perhaps unable to make sense of an observation that differed from their prediction. I typically encourage them to (1) continue discussing, (2) grab a whiteboard, (3) look through their lab notebooks, or (4) make some sketches.
Group exams certainly raise some concerns. Do some students benefit unfairly from being in a “good” group? Are some students hurt by being in a “bad” group? I haven’t done the analysis, but it would be interesting to look at the variation of exam scores across and within groups.
In a Later Post: I hope to discuss a specific example — the question I picked, my reasoning behind using that question based on the ideas our class had developed, and why I thought students would be able to stretch those ideas to make sense of the task together but not individually. I also want to give some examples to show the range of student work.