A paper (or more likely a letter) that we will get around to writing at some point is the narrative of the long arc of how we went about making changes to our algebra-based physics course.
The short version of the story is that, over the course of a decade, the department implemented many popular research-based physics teaching strategies with no measurable effect on student learning gains as measured by the Force Concept Inventory (FCI). Our normalized learning gains held stubbornly in the 0.20–0.25 range (with no individual class ever scoring above ~0.3), despite efforts to use research-based teaching strategies such as
- Collaborative problem solving (white-boarding)
- Peer Instruction (clicker questions), and
- Learning Assistant-led Tutorials
In response, a more comprehensive curricular overhaul was developed over a 2–3 year period. Early pilot versions, taught by the developers and select instructors, achieved normalized gains in the 0.40–0.60 range. Our first year of full implementation (with ~10 different instructors across 20 different sections) achieved an average normalized gain of 0.46.
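For readers who want the metric spelled out: the normalized gain reported here is Hake's standard ratio of the average actual gain to the maximum possible gain on the FCI,

$$ \langle g \rangle \;=\; \frac{\langle \mathrm{post} \rangle - \langle \mathrm{pre} \rangle}{100\% - \langle \mathrm{pre} \rangle} $$

So, for example, a hypothetical section averaging 30% on the pre-test and 62% on the post-test would have $\langle g \rangle = (62 - 30)/(100 - 30) \approx 0.46$ (illustrative numbers only, not our actual class scores).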
The perhaps interesting and challenging part of the story is that pretty much the exact same teaching strategies listed above (for the older course) also describe the new course structure. From a bird's-eye view, nothing about the course changed. Perhaps not surprising, but worth unpacking, is how the improved learning outcomes seem to have been achieved through a combination of efforts to (1) develop and adapt instruction to local conditions, (2) design for curricular coherence, and (3) provide on-the-job professional development.
Our story, I think, has relevance to the broader physics education community for a few reasons:
1. Our experience with several failed efforts to improve learning outcomes through popular research-based strategies is probably common, just underreported. I do not take this to mean that these strategies are “ineffective”, but it probably means that we don’t have a comprehensive picture of what factors allow them to be effective. More nuance is needed to understand what actually works, and how, when, and why.
2. The population of students we work with is understudied in physics education research. (See here.) How this does or doesn’t contribute to both the failed efforts and the eventual successes is important to start addressing.
3. We did this across the entire department, not just for select classes and instructors. I don’t know how common this is, nor how much of the research has been done in this context. There are likely different threads here related to implementation, buy-in, and professional development.
4. Given the overall sameness of the course structure, it could be valuable just to unpack more of the details of what was done, and to construct plausible stories of which details may have mattered and why. When Hake published his original 6,000-student survey, he spent some time documenting and discussing cases where interactive engagement (IE) methods were used but high learning gains were not achieved. Having a documented case of the journey from an ineffective IE course to a more effective one could contribute to better understanding.
All of that would be too ambitious for a paper about our curriculum development efforts, but I think it helps frame why trying to document and report on our experience is worthwhile.
Update: here are some links to what our curriculum overhaul looked and felt like:
- Images of student work
- A lesson on uniform circular motion (UCM) described
Wow, this is really interesting! Through my work with new faculty around the country, I’m convinced of your statement about un- or under-successful implementations of Research-Based Instructional Strategies (which is a phrase I only remember by first saying ROUSs to myself).
I always struggle with what combination of words to use. I’d certainly be interested to hear your experience and take from working with faculty. Sounds like a good AAPT/PERC meal conversation!
Sounds like a very interesting paper to read. Also, maybe a typo in point 4, “Hake” not “Hame”?
Thanks. I fixed the typo and provided a link.
This would be a great paper to read, and I am immediately curious about learning more about what you did. The issues around coherence and PD seem really important, but the “make it local” piece sounds fascinating and I want to know more. Is it at all related to the work that Steve Kanim, Mike Loverude, and Luanna Ortiz (by now going back waaaay into the past) were doing with rewriting the UW tutorials? Also, that you’re working with an understudied population feels really important, and is one reason I want to see more about all this! I think a lot of departments could learn from what you all have been doing…