This year in algebra-based physics, I switched to larger-grain standards that emphasize synthesis, whereas previously I had finer-grained standards and fine-tuned assessments that each targeted one specific skill at a time. Keep in mind that my standards-based assessment system (with learning goals and reassessments) happens before a common, high-stakes exam.
Typically, on the first exam, the distribution of grades from my section would have looked like this:
4 As
14 Bs
10 Cs
4 Ds
This semester, my grades look like this:
10 As
9 Bs
4 Cs
4 Ds
5 Fs
The average score remained about the same, but the distribution changed a lot. I think I can speculate about why, but I don’t like what I have to say. Sure, it could be random noise, but I sort of predicted this might happen based on informal observations. That is, it could still be random noise, but I’m subject to confirmation bias. Anyway, here’s my tentative explanation:
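(If you map the letters onto a standard 4-point scale, A = 4 down to F = 0, which is my choice for illustration and not necessarily how the course computes grades, a quick script bears this out:

```python
import statistics

points = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
old = {"A": 4, "B": 14, "C": 10, "D": 4}
new = {"A": 10, "B": 9, "C": 4, "D": 4, "F": 5}

def expand(dist):
    # Turn a {letter: count} tally into a flat list of grade points.
    return [points[g] for g, n in dist.items() for _ in range(n)]

for label, dist in (("old", old), ("new", new)):
    scores = expand(dist)
    print(label, round(statistics.mean(scores), 2),
          round(statistics.pstdev(scores), 2))
# old 2.56 0.86  -- nearly the same mean...
# new 2.47 1.44  -- ...but a much wider spread
```

The means differ by less than a tenth of a grade point, while the standard deviation grows by roughly two thirds.)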
With the fine-grained standards, struggling students would get repeated practice on basic skills (e.g., distance vs. position vs. displacement). Non-struggling students would get it right on the first shot and not need to reassess. This system made sure that struggling students had mastered the most basic skills before the exam, but perhaps left the non-struggling students with fewer opportunities to hone their problem-solving skills. Because of this, my old distribution had a high floor and a relatively sparse ceiling.
This semester, with the synthesis-level assessments, we get a different picture. Struggling students make lots of mistakes on the more difficult assessments, and without targeted, focused goals to practice for reassessment, they don’t develop the solid basic skills they did in the old system. They may just get swamped trying to figure out how to solve complex problems. Non-struggling students don’t get it right the first time, but they get close enough to learn something, and they take up opportunities in reassessments to hone their skills. Because of this, the ceiling gets more populated, but the floor drops down.
So is my assessment system now just helping students who would have done well do great? Was my old system better at helping struggling students? I can’t be sure, but I’m thinking.
Hi Brian. Can you speak a bit more to the difference between your new and old SBG exams? I feel like your speculation regarding the old system being better for struggling students is probably on the right track, but I want to get a better picture of the differences between the two systems.
In the old system, I had a standard like, “I can distinguish position, distance, and displacement,” where there might be a graph or a written description, and students would have to calculate one of each. There might also be a standard like, “I can distinguish average velocity and average speed,” where something goes around a track a distance of 100 m in 2 s, ending where it started, and they’d have to give me each. In the new system, there is just one big problem where students have to do it all. For another example, in kinematics I might have had, “I can reason through how the velocity changes each second for an object in free-fall” and “I can identify the direction and sign of acceleration and velocity vectors.” Each standard specifically targeted one of those skills, but not both. Now, students get a difficult acceleration problem where they have to do all of that.
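For concreteness, here’s a minimal sketch of the arithmetic those old standards targeted. The specific numbers are my own: I’m taking up as positive, g ≈ 9.8 m/s², and assuming the free-fall object is dropped from rest.

```python
# Speed vs. velocity: one full lap around the track described above.
distance, dt = 100.0, 2.0            # m, s
displacement = 0.0                   # ends where it started
avg_speed = distance / dt            # 50.0 m/s
avg_velocity = displacement / dt     # 0.0 m/s

# Free-fall: velocity each second, with up positive, so a = -9.8 m/s^2.
g = -9.8                             # m/s^2 (assumed value)
v0 = 0.0                             # dropped from rest (my assumption)
for t in range(5):
    print(t, "s:", round(v0 + g * t, 1), "m/s")  # changes by -9.8 m/s each second
```

Each of these is trivial on its own; the synthesis problem asks students to coordinate all of these ideas in one go.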
Can’t you use synthesized, harder questions with finer-grained feedback? Best of both worlds? I think it is a problem if each question only tests one standard at a time, but I think fine-grained feedback is really helpful for everyone (strugglers and non-strugglers alike).
I agree that would be the best of both worlds. But currently, I’m not able to give fine-grained feedback to every student on synthesis problems. It’s lame, but with high course loads, plus research responsibilities, professional development with teachers, advising/mentoring, etc., I’m struggling to do that. It’s also a problem that we move from topic to topic really quickly: one day on velocity ideas, two days on acceleration ideas, one day on projectile motion, three days on forces, two days on energy, one day on momentum, and so on. At that pace, it’s terribly hard to keep up with the range and variety of feedback that students need. Ultimately, I need to think about helping my students practice outside of class in ways that are more productive, so they are either able to give themselves good feedback or come to me better prepared to receive feedback.