The benefit of not knowing

In my inquiry course, students take a group exam with their research team. In the group exam, students are presented with a novel situation, one I am confident they have the ideas to make sense of over the course of 30-40 minutes. As a group, they have to collectively come to a prediction and explain why that prediction should happen, using words and diagrams that are consistent with the ideas we have developed as a class. They then go make the observation.

If they get the prediction wrong, they have to do two things:

(1) Explain how they are making sense of what they did observe

(2) Attend to the flaw in the reasoning or diagram that led them to their initial prediction

If they get it right, they also have to do two things:

(1) Discuss a different answer that some other person could have thought would happen and why they would think this

(2) Discuss the flaw in that reasoning.

I don’t grade on the correctness of the initial prediction at all. I do grade on the clarity, consistency, and completeness of their explanations and diagrams. Any lack of consistency or completeness in the initial explanation is fine by me, as long as it is addressed after the observation. Of course, that means they can’t just explain the right answer after the fact; they also have to go back to their original explanation to discuss the flaw in their original thinking, specifically addressing what inconsistency or incompleteness was present.

In the exam this week, the groups that predicted correctly seemed to have done a worse job than the groups who predicted wrong. By worse, I mean that their final explanations were less clear, less consistent, and less complete than the other groups’. I’ve been pondering why this might be the case. I think it’s for several reasons:

(1) When you get the prediction wrong, there is a much more authentic need to explain the discrepant observation. It is problematic that the observation turned out the way it did. This authenticity drives different engagement with the task.

(2) Groups who get it wrong spend a lot more time discussing. It takes time and effort to really put together a good explanation.

(3) Groups who get it wrong not only have to sort out the right explanation, but they need to sort through their wrong explanations. Sure, groups who get it right still have to create a fictional wrong idea and response, but it’s not the same as responding to your own wrong idea.

(4) For groups who get the prediction right, there is very little check on “getting it right for the wrong reason”. So, groups who get it right and observe have very little incentive to reconsider their thinking.

(5) Even if you get the prediction correct for fairly correct reasons, it may just seem obvious to you why the answer is what it is, and you may construct a poor explanation, just because you don’t feel like there’s much to explain. Maybe students have the right explanation in mind, but they don’t put time and effort into carefully constructing that argument in words and diagrams.

Anyway, it’s an interesting situation. I will say that no groups did poorly, but by far the best final explanations came from groups who were articulate and clear about their wrong reasoning to start with.

The Beginnings of Computational Thinking

I’ve been having lots of conversations with our director of computational sciences about computational thinking. Among many things, we have been talking about, “What are the beginnings of computational thinking and how do we foster those beginnings?” I’ve come to see that the beginnings of computational thinking involve thinking about arithmetic and algebra as strongly interconnected and thinking about computation as involving creativity and insight.

Here are a few examples that we discussed this week.

How would you calculate 21 × 19? Of course, there are many ways to do it. You could add 19 twenty-one times, or add 21 nineteen times. You could do 21 × 20 and then subtract 21. One interesting way we can think about 21 × 19 is as (20 + 1)(20 − 1). The reason this is interesting is that it takes the form (x + 1)(x − 1) = x² − 1, which is then just 20² − 1 = 399. Of course, we can generalize this to any distance from a known square, so that 18 × 22 = (20 − 2)(20 + 2) = 20² − 2² = 396. With this method, even products like 63 × 57 and 112 × 88 are a cakewalk.
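The difference-of-squares trick generalizes nicely, and writing it out as a tiny procedure makes the structure explicit. This is a minimal sketch of my own (the function name and its precondition check are not from the post, just illustration): it works whenever the two factors share an integer midpoint.

```python
def multiply_via_squares(a, b):
    """Multiply a and b using (c + d)(c - d) = c^2 - d^2,
    where c is the midpoint of a and b, and d is the distance to it."""
    if (a + b) % 2 != 0:
        raise ValueError("a and b must have an integer midpoint (a + b even)")
    c = (a + b) // 2      # shared center, e.g. 20 for 21 x 19
    d = abs(a - b) // 2   # distance from the center, e.g. 1
    return c * c - d * d

print(multiply_via_squares(21, 19))   # 20^2 - 1^2 = 399
print(multiply_via_squares(18, 22))   # 20^2 - 2^2 = 396
print(multiply_via_squares(63, 57))   # 60^2 - 3^2 = 3591
print(multiply_via_squares(112, 88))  # 100^2 - 12^2 = 9856
```

The precondition is the interesting part pedagogically: the trick only applies when the two factors straddle a common center, which is exactly the kind of structural observation the post is after.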

Another question we talked about was 26 × 27, which is of the form (x + 1)(x + 2) = x² + 3x + 2; with x = 25, this gives us 25² + 3(25) + 2 = 702. Less compelling, but still an interesting and different way of thinking about it than the standard algorithm.
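The same expansion can be sketched as a one-line function. Again, the name is my own illustration, assuming x is the base near which the two factors sit:

```python
def product_above_square(x):
    """Compute (x + 1)(x + 2) via the expansion x^2 + 3x + 2."""
    return x * x + 3 * x + 2

print(product_above_square(25))  # 26 x 27 = 702
```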

The point of all of this isn’t just to figure out how to multiply numbers quickly. Rather, the point is (i) to begin thinking about how to break down complex calculations into collections of much simpler ones, (ii) to begin to recognize how classes of similar problems might all be solvable by a common algorithm, (iii) to come to recognize that there are often many different algorithms that can be used to carry out the same calculation, and (iv) to begin to make contact with the idea that the efficiency of an algorithm can depend greatly on what kind of problem you have and its structure.
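Point (iii) above can be made concrete by writing the same product three different ways. This is a sketch of my own, not anything from the post; the function names are hypothetical:

```python
def by_repeated_addition(a, b):
    """Add a to itself b times (the most basic algorithm)."""
    total = 0
    for _ in range(b):
        total += a
    return total

def by_distributing(a, b):
    """Use a x b = a x (b + 1) - a, e.g. 21 x 19 = 21 x 20 - 21."""
    return a * (b + 1) - a

def by_squares(a, b):
    """Use (c + d)(c - d) = c^2 - d^2 when a + b is even."""
    c, d = (a + b) // 2, abs(a - b) // 2
    return c * c - d * d

# Three different algorithms, one answer:
print(by_repeated_addition(21, 19), by_distributing(21, 19), by_squares(21, 19))
```

Each version also has a different cost profile, which is one way to start a conversation about point (iv): repeated addition takes b steps, while the other two take a fixed handful of operations regardless of the size of the factors.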

I know there are lots of physics teachers and physics education researchers concerned with computational thinking. What do you guys think?
