So let’s say you gave a certain test to your students. There were 30 questions on the test, all multiple-choice, each with 5 choices. The test is ungraded, but students are required to take it. The exam is used solely for evaluating the effectiveness of instruction.
Let’s say you plotted, for each student, “Test score” vs. “Time spent taking the test”, and the plot looked like this:
What do you make of it?
Equation of Best Fit: Score = (0.35 Points / Minute) * Time + 6.5
Correlation Coefficient: R = 0.71
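For concreteness, a fit and correlation like the ones above can be computed in a few lines of Python. The (time, score) arrays here are made-up illustrative data, not the actual class results, so the numbers that come out won’t match the plot:

```python
import numpy as np

# Hypothetical (time, score) pairs for illustration only -- not the
# real class data from the post.
time = np.array([12, 18, 22, 25, 30, 35, 40, 45, 50, 55], dtype=float)
score = np.array([8, 11, 13, 15, 16, 19, 20, 23, 24, 26], dtype=float)

# Least-squares line: score = slope * time + intercept
slope, intercept = np.polyfit(time, score, deg=1)

# Pearson correlation coefficient R
r = np.corrcoef(time, score)[0, 1]

print(f"slope = {slope:.2f} points/min, intercept = {intercept:.1f}")
print(f"R = {r:.2f}")
```

With real class data you would just swap in the actual times and scores; `np.polyfit` with `deg=1` gives the best-fit line and `np.corrcoef` gives R.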
I’ve never seen a positive correlation between time spent on an exam and score. Was there free ice cream as soon as people finished or some other artificial incentive for ending early? Or were the students so demoralized that they gave up and stared out the window?
The test isn’t graded… but it is required. You can probably guess what test it is.
I would say it shows that there are a lot of different types of test takers. I think the positive correlation you are seeing is heavily influenced by the one really deliberate (slow) perfectionist student you had, and the two or so students who appeared to rush through, really had no idea, and just guessed. If you take those out (you’ll probably have a few of them on just about every quiz or test; I certainly do), I think you’ll find the correlation weakens significantly. That just shows you have some quick students who know it, some quick students who make careless mistakes, some slower students who know it and are careful, some slow students who don’t know it but are trying hard, and everything in between.
I wish it were as simple as “take your time, do better,” but it doesn’t work that way. In fact, I think if a student knows the material well (or at least thinks they do), they’ll probably work fast; I see that in my math and physics tests.
Taking out the outliers, we get R = 0.50: not nearly as strong, but perhaps still something. I’m trying to decide what data to remove for obvious “didn’t try” reasons.
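The “drop the outliers and recompute R” check is easy to sketch in Python. The data below are invented to mimic the situation described (one slow perfectionist, two sub-10-minute guessers, and a looser cloud in between); only the qualitative behavior, not the numbers, is meant to match:

```python
import numpy as np

# Hypothetical data: a loose cloud, plus one slow perfectionist
# (60 min, high score) and two rushed guessers (8 min, chance-level).
time = np.array([8, 8, 20, 22, 25, 28, 30, 33, 35, 38, 40, 60], dtype=float)
score = np.array([6, 7, 14, 12, 17, 13, 18, 15, 19, 16, 20, 28], dtype=float)

def pearson_r(x, y):
    """Pearson correlation coefficient between two arrays."""
    return np.corrcoef(x, y)[0, 1]

r_all = pearson_r(time, score)

# Drop the suspected "didn't try" / extreme cases: everyone under
# 10 minutes, plus the single 60-minute taker.
keep = (time >= 10) & (time <= 55)
r_trimmed = pearson_r(time[keep], score[keep])

print(f"R with outliers:    {r_all:.2f}")
print(f"R without outliers: {r_trimmed:.2f}")
```

The point of the exercise: a couple of extreme points at both ends of the time axis can prop up a correlation that is much weaker in the middle of the cloud.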
If the test is the FCI, I would say that many answers are written to play strongly into a student’s gut instinct, and if they are the type to quickly go with their gut and not try to really reason through each of the answers, they’re likely to do quite poorly. I’d be curious to know what would happen if you explicitly instructed students to think carefully about each problem and the reasonability of each answer, rather than simply searching for the one they feel is right.
It just makes me wonder how much the score has to do with whether a student’s approach to the test is a patient one, where you don’t just answer the first thing that comes to mind. Andrew Heckler, at OSU, did a study where they forced students to wait a few seconds before answering questions involving interpreting graphs, and student success shot way up. I’m curious to what extent FCI scores reflect the mindset of the test taker: how they take the test. In other words, are there some students who have the knowledge but don’t access it, because they are either hurried or don’t care to think it through?
The cluster of 3 below the line at 40 minutes and the cluster of 3 above the line at 20 minutes, plus the outliers, might make interesting subjects for interviews.
Yeah, I agree. I mean, we can throw away any students who took < 10 min. In reality, you probably need 30+ min minimum just to read each question. But the students who can do well quickly: are they just fast readers? Or would they do better if they took their time?
Before reading the comments or thinking about what test this would be, my first reaction was that a pattern like that would show up if the questions on the test were deceptively difficult. By that I mean questions written so that students would often think the problems were easy, when really they were missing some added subtleties. And of course, as soon as I started reading the comments I realized what the test is, and I think it pretty much fits that bill (although I’ve never actually seen the whole FCI, so that’s really just speculation from what I’ve read about it…)
Also, a further thought that just occurred to me: is this a post-test? (Because you’ve been writing about your summer intro physics class for a while now…) If it is, I would guess this correlation means that intellectually, many (most?) of your students have mastered Newtonian thinking, but that on a gut-reaction level, most of them haven’t. So the ones who rush through it give the same non-Newtonian answers they always would have, but the ones who take the time to really think carefully, the way you’ve been teaching them to, get many more of the answers correct.
And now I’m really curious what you think about those explanations, or whether you have an explanation of your own based on other information you have about the students?