From “Lessons From the Physics-Education Reform Effort” by R. R. Hake
As previously indicated, the data of Fig. 1 show that seven of the IE courses (717 students) achieved ⟨g⟩’s close to those of the T courses. Five of those made extensive use of high-tech microcomputer-based labs (Thornton and Sokoloff 1990, 1998). Case histories of the seven low-⟨g⟩ courses (Hake 1998b) suggest that implementation problems occurred. Another example of the apparent failure of IE/high-tech methods has been described by Cummings et al. (1999). They considered a standard physics Studio Course at Rensselaer in which group work and computer use had been introduced as components of in-class instruction, the classrooms appeared to be interactive, and students seemed to be engaged in their own learning. Their measurement of ⟨g⟩’s using the FCI and the Force Motion Concept Evaluation (Thornton & Sokoloff 1998) yielded values close to those characteristic of T courses (Hake 1998a,b,c). Cummings et al. suggest that the low ⟨g⟩ of the standard Rensselaer studio course may have been due to the fact that “the activities used in the studio classroom are predominately ‘traditional’ activities adapted to fit the studio environment and incorporate the use of computers.” Thus the apparent “interactivity” was a product of traditional methods (supported by high technology), not published IE methods developed by physics-education researchers and based on the insights of cognitive scientists and/or outstanding classroom teachers, as for the survey courses.
This quote and similar ones from Hake have been on my mind these past few weeks as the undergraduate students I am working with grapple with the question, “Why are our learning gains on the FCI lower than expected and desired?” This question emerges out of our learning about the FCI, analyzing our local data, and comparing our results to outcomes from other research that has used the FCI.
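For readers who have not worked with Hake’s measure: the normalized gain ⟨g⟩ is a class’s actual pre-to-post improvement divided by the maximum improvement it could have achieved, computed from class-average percentage scores (Hake 1998a). Here is a minimal sketch of the computation in Python; the function name and the example averages are mine and purely illustrative, not our actual FCI numbers:

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's class-average normalized gain <g>: actual gain over
    maximum possible gain, with pre_pct and post_pct given as
    class-average percentage scores on the concept inventory."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Hypothetical class averages, not our data:
print(round(normalized_gain(45.0, 60.0), 2))  # 0.27, roughly the range Hake reports for T courses
```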
So, in our courses, we use peer instruction, collaborative problem solving with whiteboards, and computer-facilitated learning. Our class is mostly “flipped” (students read lecture material at home and practice problems in class), and most of it takes place in a studio setting. It sounds, and often looks, very interactive. On the other hand, all the content students interact with, from the textbook to the laboratory activities to the questions and problems students work on in groups, is home-grown. For better or worse, these home-grown materials would probably be characterized as “traditional activities adapted to fit the studio environment and incorporate the use of computers.”
As a group of researchers, we are approaching our question in a variety of ways:
(1) One student hopes to ask instructors in our department to take the FCI, not by giving the answers they themselves believe are correct, but by choosing the answer they think would be the most common incorrect answer chosen by students, and by estimating the percentage of students answering each question correctly after the course. He hopes to compare instructor expectations to reality, in order to answer questions like, “How knowledgeable are instructors about the specific content difficulties students have, and how aware are they of the prevalence of those difficulties in our courses?” (A sketch of the comparison we have in mind appears after this list.) We have been reading a lot about pedagogical content knowledge.
(2) Another student is interested in examining student learning in relationship to our home-grown textbook. Does the book explicitly address specific difficulties we know about from our own data and from research in physics education? Does the book implicitly reinforce any difficulties? Does it provide opportunities for developing conceptual understanding as well as problem solving? He is also interested in questions like, “Do students actually read the text? How much? How deeply? In what ways do they engage with the text? What do they actually take away from reading the text, and how does that play out in relationship to classroom instruction?” We have been reading a lot about self-explanation, preparation for future learning, refutation texts, and the influence of prior knowledge (e.g., misconceptions) on reading comprehension.
(3) Another student is interested in examining structural factors of instruction that might be contributing to lower-than-expected FCI gains, including:
- Student background and academic preparation
- The prevalence of exam questions that probe conceptual understanding (and hold students accountable for it)
- The quality of apprenticeship and training that undergraduate TAs and new faculty receive for teaching using interactive engagement methods
- The strategies that instructors use to motivate and cultivate a classroom culture in which IE methods are taken seriously.
We have been reading papers about the kinds of background factors that correlate with FCI scores, as well as papers about programs that have successfully or unsuccessfully implemented reform physics curricula.
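To make point (1) above concrete, here is a rough sketch of the item-by-item comparison we have in mind. Everything below is hypothetical and illustrative; the answer distributions and predictions are invented, not our actual instrument or results. For each item, we check whether the instructor identified the distractor students actually chose most often, and how far off their estimate of the percent correct was:

```python
from collections import Counter

# Hypothetical student responses for two items (not real FCI data)
student_answers = {
    1: list("BBBBBCCADB"),
    2: list("AAACCCCCDD"),
}
correct = {1: "C", 2: "A"}

# One instructor's hypothetical predictions
predicted_distractor = {1: "B", 2: "C"}    # guess at the most common wrong answer
predicted_pct_correct = {1: 40.0, 2: 50.0}

for item, answers in student_answers.items():
    counts = Counter(answers)
    wrong = {choice: n for choice, n in counts.items() if choice != correct[item]}
    actual_distractor = max(wrong, key=wrong.get)
    actual_pct = 100.0 * counts[correct[item]] / len(answers)
    hit = "hit" if predicted_distractor[item] == actual_distractor else "miss"
    print(f"Item {item}: distractor prediction {hit}; "
          f"estimated {predicted_pct_correct[item]:.0f}% correct, actual {actual_pct:.0f}%")
```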
This work, for better or worse, treads into a sensitive arena: a close examination of ourselves. The fact that this work is being carried out by students, I think, could be perceived as making this endeavor even more sensitive, but in an odd way it makes it authentic. All of these students are really interested in improving instruction here, in doing research that is valid but also relevant to local stakeholders. They have no axe to grind or hidden agenda. We are also just genuinely intrigued by the puzzle, and curious to pursue its potential solutions. Some of that solution, no doubt in my mind, will need to be geared toward improving the curriculum at the content level: the content as embedded in all the tasks we ask students to engage with, from the text, to the labs, to the questions and problems they work on. Some of that solution will no doubt be about getting our department on board with the continued renewal of that content based on assessment, feedback, analysis, and ongoing revision. In that sense, the work we are beginning serves as a launching point for what will need to become an ongoing endeavor.
Wow. I’m impressed and hope that you will share your findings. I really like the idea of taking the FCI along with students and trying to predict the most common errors. Are these undergrads or grad students doing this work?
Undergrads. We will definitely be sharing our findings… formally and informally.