FCI in Teaching Physics

Students in my teaching physics class took the FCI in the three ways I outlined before.

Overall, students did quite well: every student scored above 70%, and the group averaged 83%. The most common stumbling blocks were indicating a force in the direction of motion; distinguishing position, velocity, and average velocity; comparing the magnitudes of force pairs in non-equilibrium situations; and reasoning as if force were proportional to velocity.

After everyone finished, we worked through the questions trying to reach consensus, and got as far as the first 20. There was only one question on which consensus could not be reached, and the arguments went back and forth for quite some time. Based on that particular question and the arguments we were working with, it’s fair to say we’ll be visiting the horse-and-wagon paradox. I feel like I did a good job of maintaining a neutral position with respect to the answer, while still pressing on and helping to clarify the arguments.

For the first day, things went well in terms of willingness to share, argue, listen, and respond. I was happy that, for the most part, we were focused on the arguments and on having reasons for changing one’s mind, although changing one’s mind out of peer pressure certainly happened here and there. I usually pressed people to say which arguments had convinced them to change their minds. In class, I also had a chance to point out some important elements of our discourse, including some nuances in the construction of counter-arguments, calls for using representational tools to resolve disputes, and being able to explain or argue for an answer in multiple ways. Some of the arguments I had never considered myself, which was nice. At one point I was able to point out that, on at least one question, we were mostly in agreement on the answer but held contradictory explanations for it.

As we worked through consensus, our conversation naturally spilled over into explaining why someone might believe the wrong answers. I heard lots of good beginnings for making sense of how students think. It is nice that we began this conversation in class, because this is what students are going to do for homework: empathize with the thinking of students to account for the range of alternative answers.

Tomorrow, students in my physics course get the FCI, and we’ll get to see how my teaching physics students fared at predicting which incorrect answers those students would give and how difficult each question would be.

Galileo (and yours truly) on Accelerated Motion

Here is Galileo (in the Two New Sciences) writing on some properties of naturally accelerated motion which are worth knowing:

“That the distances traversed during equal intervals of time by a body falling from rest, stand to one another in the same ratio as the odd numbers beginning with unity.”

Amid deepening consultation with Galileo, I am inclined to think that non-calculus-based physics would benefit from a framing in terms of integer sequences and series, rather than from the changing of d’s to Δ’s.
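To make that concrete: in modern notation, the theorem is a short consequence of the times-squared law. If a body starting from rest covers a total distance D_n = ½g(nτ)² after n equal time intervals of length τ, then the distance covered during the n-th interval alone is

$$d_n = D_n - D_{n-1} = \tfrac{1}{2} g \tau^2 \left[ n^2 - (n-1)^2 \right] = \tfrac{1}{2} g \tau^2 \,(2n - 1),$$

so the distances covered in successive intervals stand in the ratio 1 : 3 : 5 : 7 : …, and summing the first n odd numbers recovers the n² law. That is exactly the kind of integer-sequences framing I have in mind.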

Week One of Teaching Physics

I have been waffling for over a month on how to start the first day of my “teaching of physics” course. My first inclination was to start by doing some science together. However, I have decided instead to capitalize on certain aspects of place and time. See, in the course we are going to be examining a lot of artifacts and events of student thinking. This will involve analyzing student data from diagnostic instruments, analyzing students’ written work on problems, observing student discussions, facilitating student discussions, watching video of students working on physics, watching video of me interviewing students, interviewing students themselves, etc. The course is really an inquiry into student thinking. Along the way, we’ll be reading and writing a lot.

So I have decided to jump right into this kind of work, but tied spatially and temporally to what’s going on elsewhere in our department. See, this Thursday is the first day of my teaching physics course. On Friday, however, I have to give the FCI to students in my algebra-based physics course. So here’s what we’re going to do in the “teaching of physics” course:

Part I: FCI answers and predictions

On Thursday, I’m having students in my teaching physics course take the FCI three ways:

  • By indicating what they think the right answer is
  • By trying to predict the most commonly chosen wrong answer (from my class)
  • By trying to predict the percentage correct (maybe even with upper and lower bounds that would be surprising if the actual value fell outside)

As a class, we’ll look over our answer choices and try to reach agreement on the right answers, noting places where we are not able to. All of our data goes into a Google Doc.

Part II: FCI explanations

For homework, they have to pick four (?) problems to discuss more thoroughly. For each of those four questions, they must:

  • Explain why they chose their answer, including a discussion of any changes to their thinking that might have happened while taking the FCI, while discussing it in class, or while thinking about it afterward.
  • Discuss why a student would pick the predicted wrong answer: What might they have been thinking? What reasoning would lead someone to this answer? What experiences might a person have in the everyday world that support this answer? Why is the right answer not appealing?
  • Explain why they chose the percentage they did: How did they decide whether this would be an easy or hard question? What was it about the question, the content, or the students that impacted their decision?

Part III: FCI Comparisons

They have the weekend to complete Part II, after which I send out the FCI data from my class, also in a Google Doc. Here is the new assignment based on that data, due the following Thursday:

  • For each question, determine what the most common wrong answer was. In places where you did not correctly predict it, discuss why you now think students chose that other answer so frequently rather than the one you predicted.
  • For each question, determine the percentage of students answering correctly. Make a plot of predicted percentage vs. actual percentage (a minimal plotting sketch follows below). Explain the meaning of the plot, making sure to discuss the meaning of points that fall far above, far below, or near the diagonal. Pick your two worst predictions and discuss the discrepancy between what you thought would happen and what actually happened.
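For anyone who wants a quick way to generate that predicted-vs-actual plot, here is a minimal sketch in Python with matplotlib. The numbers are hypothetical stand-ins, not real class data.

```python
import matplotlib.pyplot as plt

# Hypothetical stand-ins: one (actual, predicted) pair per FCI question,
# each the percentage of students answering that question correctly.
actual    = [52, 70, 75, 25, 40]
predicted = [60, 45, 80, 30, 55]

fig, ax = plt.subplots()
ax.scatter(actual, predicted)
ax.plot([0, 100], [0, 100], linestyle="--")  # diagonal: perfect prediction
ax.set_xlabel("Actual percentage correct")
ax.set_ylabel("Predicted percentage correct")
ax.set_xlim(0, 100)
ax.set_ylim(0, 100)
plt.show()
```

Points far above the diagonal are questions predicted to be easier than they turned out to be; points far below are questions predicted to be harder; points near the diagonal are good predictions.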

At this point, they should be sufficiently exhausted.

Teaching Evaluation Wrap-up

Below is data from my teaching evaluations last semester, presented in terms of the 7 factors that the University derives from student responses to this form.

Obviously, I fared better with respect to evaluations in my algebra-based physics course than in my inquiry-based physical science course. This is not a huge surprise. Over the semester, I talked about student discontent in my inquiry course here, here, and here. I also talked about the need to better frame that course in terms of students’ careers as future teachers, as I recognized that I was doing a poor job of it. Based on those discussions with students, it is also not that surprising that students in my inquiry course rated me fairly strongly on student interactions and motivation, but poorly on both grading and the value/effectiveness of the course. This breakdown is also consistent with my previous self-assessments that I need to improve in both organization and course architecture. In my physics course, I have to do little course design. In my inquiry course, I am completely overhauling the course design. This semester in inquiry, there are a lot of changes, including changes to grading policies and course organization. This semester in physics, there are a lot of changes, mostly pertaining to how quizzes are administered and graded.

Anyway, here’s the breakdown, with +/- given with respect to department averages.

Presentation Ability: Physics 4.9 (+0.9), Inquiry 4.4 (+0.4)

Organization / Clarity: Physics 4.8 (+0.9), Inquiry 3.9 (0.0)

Assignments / Grading: Physics 4.8 (+0.5), Inquiry 3.9 (-0.4)

Scholarly Approach: Physics 4.7 (+0.7), Inquiry 4.0 (0.0)

Student Interactions: Physics 4.9 (+1.1), Inquiry 4.5 (+0.7)

Motivating Students: Physics 4.7 (+1.1), Inquiry 4.2 (+0.4)

Effectiveness / Worth: Physics 4.5 (+0.8), Inquiry 3.0 (-0.7) ouch!

I also thought it was interesting to just look at the highs and lows for each course.

Inquiry Class

5.0 Encourages class discussion.

4.8 Is enthusiastic about his/her subject.

4.6 Gives examinations requiring creative, original thinking.

3.6 Assigns grades fairly.

3.5 Presents the origins of ideas and concepts.

3.2 Explains the grading system fairly.

Physics Course

5.0 Seems to enjoy teaching.

5.0 Is enthusiastic about his/her subject.

5.0 Relates to students as individuals. (This one I’m particularly happy about; every student rated this as high as possible.)

4.5 Presents the origins of ideas and concepts.

4.5 Gives assignments and exams that are reasonable in length and difficulty. (I don’t assign anything or write exams; it’s all done by a third party.)

4.4 Discusses recent developments in the field.

Talking about Projects

In our algebra-based physics course, students have had to complete two independent projects that are carried out in groups. Each independent project involves the group writing a proposal, the group giving an oral presentation, and each individual student submitting a formal written report. The projects have to be related to course content and must involve data collection and the use of analytical skills developed in lab (linearizing data so that equations of best fit give physically relevant quantities, and managing and reporting uncertainties). Typical projects have had students investigating terminal velocity, spring constants, the independence of horizontal and vertical motions, and coefficients of friction.
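As one concrete example of the linearization skill mentioned above (a sketch with made-up data, not a prescribed project): a free-fall project can plot distance against time squared, so that the slope of the best-fit line is the physically relevant quantity g/2.

```python
import numpy as np

# Made-up free-fall data. The model d = (g/2) t^2 is nonlinear in t,
# so fit d against t^2 instead: the best-fit slope is then g/2.
t = np.array([0.10, 0.20, 0.30, 0.40, 0.50])  # time (s)
d = np.array([0.05, 0.20, 0.44, 0.79, 1.22])  # distance fallen (m)

slope, intercept = np.polyfit(t**2, d, 1)     # linear fit to d vs. t^2
print("g =", 2 * slope, "m/s^2")              # ~9.8 if the data follow the model
```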

In December, the fall-semester instructors met to debrief about how the semester went, and student projects were a large part of the discussion. There was a strong consensus among the instructors for reducing the projects from two to one, mostly because, in their current implementation, we could not see much realized educational value. Although the projects are grading-intensive, that did not seem to be anyone’s driving concern. The issue is going to be discussed at our department meeting next week, but the decision has already been made to keep two projects for this spring semester.

Here are my experiences and thoughts about doing the projects, at least as they are structured now:

  • Despite being given extensive guidelines for grading the projects, the instructors have received no clear learning goals for them. If the grading guidelines are any indication of tacit learning goals, the goal is to make sure that students can follow directions by using appropriate formats, figures, headings, citations, etc. This contributes, I believe, to flaky assessment practices and to poor communication with students about the purpose and value of these projects.
  • Group projects have been tough to manage on a social level. Last semester, I had cases where I suspected minority students were being denied access as full participants in their groups, and then later being identified in peer evaluations as not carrying their weight. I had another case where an older, returning student (with a job, a family, and a child on the way) became immensely frustrated at the lack of initiative and commitment from a bunch of eighteen-year-olds who twice failed to show up for agreed-upon meetings. In another class, two students got into a screaming match over the project, nearly resulting in a fight during class.
  • In an already over-crammed schedule, we lose two full days of class to student presentations (8% of our time together). The presentations are mostly boring and somewhat horrendous, with a few gems here and there. They are also peer-graded, which helps make them fairly meaningless, as most students will not actually give out a bad grade.

But I don’t want to talk about any of that on Wednesday. I worry that the concerns above would feed a very unproductive conversation in which everyone gets to weigh in on whether or not they like projects, what they do and don’t like about projects, how projects have gone well and not well in the past, etc. “Me, too” conversations are better suited for lunch talk, not for meetings. What I most fear about our meetings is that there is no structure in place for constructing arguments about course reform.

Instead, what I want to talk about on Wednesday is this:

  • What educational goals or values are these projects intended to support? Why are these goals important to us?
  • How would we know if the projects are, in fact, helping us meet these goals? What would be convincing evidence? What would not? What would we do differently if we found that the projects were not helping? Would we be more likely to tweak the implementation, drop the goals, or seek out other avenues for meeting them? Why?
  • What are other avenues for meeting these goals? Have they been considered, used, or discussed in the past? What are their advantages and disadvantages?
  • Even if we are meeting our goals, are there any undesirable or unintended consequences that deserve our attention? Are these issues of implementation? Do they merit considering alternatives?

I’d really love to throw this one in, but I fear it would be too much: Are we assessing student work in a way that is consistent with our learning goals? If not, why not? If so, do we think we could be doing a better job?

Research on Teacher Education

From an editorial in the Journal of Teacher Education:

We bemoan the fact that many content-area education researchers, researchers who engage in teacher education in their own universities, almost exclusively submit their research that could potentially benefit teacher education to specialized content journals—journals that may or may not be read by teacher educators. Clearly, faculty who walk in two university worlds—for example, both science education and teacher education—have choices to make about their identities as researchers. Could we benefit from encouraging content-area education researchers to frame their work from the “inside” teacher education perspective discussed previously and viewing the JTE as an outlet for that work?

Of Balconies and Dog-Ownership

I have never taken an astronomy course, and I’ll admit that I don’t know a whole lot about astronomy. I couldn’t tell you the order of all the planets, and you could probably catch me on days when I couldn’t even name all the planets. I probably have as many misconceptions about the scale of the solar system, our galaxy, and the universe as our students do. I don’t think in light-years or parsecs. I don’t think in redshift. I’ve looked at some astronomy concept inventories, and, while I’m sure I’d do better than students who haven’t taken an astronomy course recently, I would by no means score high. Up until recently, I could only have given you fairly canned explanations for the seasons and the phases of the moon. I still don’t think I quite understand the dynamics of eclipses as well as I should.

Most of what I do understand about astronomy has come from being somewhere with opportunity and having technology or a ritual at hand.

When I moved to Maine, I lived on the third floor of an eastward-facing building. Our balcony looked out over a park and then over the Penobscot, which provided year-round views of the sunrise. I had also recently acquired a digital camera. Over the course of two years, I took pictures of the sunrise at least a few times a week.

I took pictures for two reasons: I was obsessed with my new camera, and I was in a place where beautiful things were in plain sight every morning. I took pictures without much scientific interest in what I was observing. I was more interested in the colors, the fog, whether or not I could catch the sun behind trees or with birds in flight, and capturing that in different ways with my camera.

Over the span of a year, things that I seemingly already knew became interesting. The sun rose at different times at different points in the year, and it rose in different locations at different times of year. This interest emerged slowly and gradually. And this perplexes me: how is it that things I already know can become spontaneously perplexing? It seems to happen all the time now.

Anyway, at some point, I just sort of began to look over my photos, which conveniently recorded both date and time, and began to coordinate these photos with landmarks on Google Maps. Here is some of that.

This eventually led me and another postdoc to derive, on our office whiteboard, an expression for the number of daylight hours on each day of the year in Maine, and then at any latitude. Later I derived another expression for the movement of the sunrise across the horizon as a function of the day of year. I also became interested in why the coldest month isn’t December, and discovered that the phase shift between darkest and coldest depends strongly on geography.
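I won’t reproduce the whiteboard derivation here, but a standard version of that daylight-hours expression uses the sunrise hour angle, cos ω₀ = −tan φ tan δ, together with a sinusoidal approximation for the solar declination δ. Here is a sketch in Python; the constants are the usual textbook approximations, and the latitude is a stand-in for mid-Maine.

```python
import numpy as np

def daylight_hours(day_of_year, latitude_deg):
    """Approximate hours of daylight for a given day and latitude.

    Uses a sinusoidal model of solar declination (spring equinox near
    day 80) and the sunrise hour-angle relation cos(w0) = -tan(lat)*tan(dec).
    """
    dec = np.radians(23.44) * np.sin(2 * np.pi * (day_of_year - 80) / 365.0)
    lat = np.radians(latitude_deg)
    # Clipping handles polar day/night, where the sun never sets or rises.
    cos_w0 = np.clip(-np.tan(lat) * np.tan(dec), -1.0, 1.0)
    w0 = np.arccos(cos_w0)        # sunrise hour angle, in radians
    return 24.0 * w0 / np.pi      # fraction of the day the sun is up, in hours

# Latitude ~44.8 deg N, an assumed stand-in for mid-Maine:
print(daylight_hours(172, 44.8))  # near the June solstice, ~15.4 hours
print(daylight_hours(355, 44.8))  # near the December solstice, ~8.6 hours
```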

Something else interesting to me is this: place and technology provided opportunities for me to become interested, and at the same time that place created constraints on the science I did. For example, I couldn’t see sunsets from my window. And since I was not at my house during the day all that much, this also constrained the kinds of moonrises I would see. These constraints, we might conclude, were limiting, but they were also focusing. Places can do that.

Where I live now, see, I have a yard with a fairly unobstructed view spanning from east to west. So now I am much more interested in the path that the stars and the nighttime moon take across the sky. I check the location of the stars (and maybe the moon) when I get home. I check it again when I take Rudi for a walk. I check it again when I play with Rudi outside. I check it again when I let Rudi out one more time to pee before bedtime. When I wake up to let Rudi out, guess what, I check it again. I didn’t immediately start checking these things when I moved here. Why? One reason might be that it took a few months for my unconscious noticing to become conscious noticing, and for that to become interest. Or maybe it’s just that it gets darker earlier now, so that my routines with Rudi better coincide with observing the nighttime sky. Maybe it started in the fall, when my wife and I were in the routine of sitting out by the fire pit a few times a week.

Consider this. In terms of the coverage of content you’d want students to learn in an introductory astronomy course, I have learned very little by being in a place and noticing and thinking. And that little bit of learning has taken place over a span much greater than a semester. I guess I’ve really only learned a bit about why the sun rises in different locations, how we can predict the amount of daylight at different times of year, and how the stars and moon move across the sky and how that changes each day. We want to cover these things in a week or two in an intro astronomy course, right?

So, I’m curious. Where in our science curricula are there places for interest and learning that grow slowly, gradually, and spontaneously over years? How do we provide places, technologies, and routines that might make it a bit more likely for interest and learning to happen this way: slowly, gradually, and spontaneously over years? I want to understand more about how dog ownership and third-story windows grow into science. I wonder if my science would have been different if my camera hadn’t recorded time and date, or if I had had a north-facing balcony. Would I not care about the nighttime sky now if my backyard were full of trees instead of open to the sky?

Faculty Question of the Day

This one stirred up many interesting and varied arguments at lunch, even spilling over to email throughout the day:

How much of the variation in Earth’s temperature across the seasons is accounted for by (1) the changing daylight hours (resulting in shorter or longer exposure times) vs. (2) the changing altitude of the sun (resulting in more or less incident flux)? Does your answer depend on where you live? More importantly, how do you know and why do you believe it?

As usual, guesses, intuition, wild speculation, careful theory, contrived experiments, and natural data are welcome.

** My bigger point in bringing this up was to raise these questions: What does it mean to explain the seasons? How do we want students to approach and attempt to make sense of the seasons? **
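For what it’s worth, here is one rough way into the flux-vs-exposure part of the question, ignoring atmosphere and weather entirely: take the standard top-of-atmosphere daily-insolation expression and factor it into (hours of exposure) × (mean flux while the sun is up). The latitude is again an assumed stand-in (~45° N).

```python
import numpy as np

S0 = 1361.0  # solar constant, W/m^2 (top of atmosphere)

def declination(day):
    # Sinusoidal declination model, spring equinox near day 80.
    return np.radians(23.44) * np.sin(2 * np.pi * (day - 80) / 365.0)

def insolation_factors(day, latitude_deg):
    """Split a day's top-of-atmosphere insolation into two factors:
    hours of sun exposure, and mean flux while the sun is up."""
    lat = np.radians(latitude_deg)
    dec = declination(day)
    h0 = np.arccos(np.clip(-np.tan(lat) * np.tan(dec), -1.0, 1.0))
    hours = 24.0 * h0 / np.pi
    # Standard daily-mean insolation averaged over 24 h (as in, e.g.,
    # Hartmann, Global Physical Climatology):
    q_mean = (S0 / np.pi) * (h0 * np.sin(lat) * np.sin(dec)
                             + np.cos(lat) * np.cos(dec) * np.sin(h0))
    return hours, q_mean * 24.0 / hours  # (exposure, flux while sun is up)

summer = insolation_factors(172, 45.0)  # near the June solstice
winter = insolation_factors(355, 45.0)  # near the December solstice
print("exposure ratio:", summer[0] / winter[0])  # ~1.8
print("flux ratio:    ", summer[1] / winter[1])  # ~2.4
```

In this toy model at 45° N, the two effects are comparable in size, with the sun-angle factor somewhat larger; closer to the equator the exposure factor fades toward 1, which is one way the answer depends on where you live.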

Pause, Brian…. think this through

Oh my god. Am I really doing this? Am I really, one week before classes, going to totally uproot what I thought I was going to do in my inquiry course and do something totally different?

Am I really going to make the entire course about the sun?
