Disaggregation of Learning Gains–Please Argue with Me

In a previous post, I brought up the issue of disaggregating FCI learning gains. For some perspective, Coletta, Phillips, and Steinert (2007) have looked at disaggregating FCI learning gains by SAT scores. What they find is that students with higher SAT scores learn disproportionately more than students with low SAT scores. They find correlations around 0.5, and note that the trend is more parabolic than linear. Whether you are more inclined to view SAT scores as a measure of “cognitive ability” (as the authors do) or as a proxy for SES (as I do), I’m becoming increasingly confident that disaggregation is important, and that everyone should be thinking about it.

An emerging site that should especially be thinking about this is the PER User’s Guide. At AAPT Summer 2013, Adrian Madsen gave a talk about their new initiative to develop a data explorer.

[Screenshot: the PER User’s Guide data explorer]

 

The last line of their abstract asks for feedback. What I can offer here is a conversation.

Our institution doesn’t use the SAT much as we are in ACT country. Here is some mock data that somewhat reflects what our institution actually looks like.

[Screenshot: mock data showing the distribution of ACT scores (red) and FCI gains by ACT score (blue)]

 

The red points on the graph show a broad distribution of students (some with ACT scores < 15, with the majority between 20-25), arising from a fairly low barrier to admittance–it’s something like either a 19 on the ACT or a 2.0 GPA. The blue points show normalized FCI gains broken down by ACT score, and they give the same overall trend that Coletta, Phillips, and Steinert find with the SAT. A colleague of mine who saw this trend said, “The smart get even smarter.” The highlighted horizontal line indicates our average normalized gain, which is what typically gets reported.
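For reference, the normalized gain behind plots like these is Hake’s ⟨g⟩ = (post − pre) / (100 − pre). Here is a minimal sketch of what disaggregating it by ACT bins looks like as bookkeeping–all numbers and bin edges are hypothetical, just to illustrate the idea:

```python
# Hypothetical data: (ACT score, FCI pre %, FCI post %) for a handful of students
students = [(17, 25, 40), (21, 30, 55), (24, 35, 65), (28, 45, 80), (31, 50, 88)]

def normalized_gain(pre, post):
    """Hake's normalized gain: the fraction of possible improvement achieved."""
    return (post - pre) / (100 - pre)

# Disaggregate: collect gains within coarse ACT bins instead of one course average
bins = {"ACT < 20": [], "ACT 20-25": [], "ACT > 25": []}
for act, pre, post in students:
    key = "ACT < 20" if act < 20 else "ACT 20-25" if act <= 25 else "ACT > 25"
    bins[key].append(normalized_gain(pre, post))

for key, gains in bins.items():
    print(f"{key}: <g> = {sum(gains) / len(gains):.2f}")
```

Reporting binned averages like these, rather than one course-wide ⟨g⟩, is exactly what makes the scenarios I walk through distinguishable from one another.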

What I’d like to do now is walk through some different scenarios where higher FCI gains are achieved in different ways, to further the conversation about why disaggregation matters.

Scenario #1–Improved Learning for Everyone

[Screenshot: Scenario #1 graph–everyone’s gains shifted up]

In this scenario, a course has higher FCI gains because instruction leads everyone to do better. Students on the high end receive a bump, but importantly, so do students on the low end. The distribution of students remains the same. If we were to make changes like this, we’d be improving learning without addressing the learning gap. You can, of course, imagine ways of improving learning that increase the learning gap, by helping the high-end students more than the low-end.

Scenario #2–Reducing the Learning Gap

[Screenshot: Scenario #2 graph–gains on the low end shifted up]

 

In this scenario, learning gains are better because students with low ACT scores are learning more, without necessarily helping students on the high end. In this case, learning gains improve and the learning gap decreases.

Scenario #3–Becoming More Selective

[Screenshot: Scenario #3 graph–distribution of students shifted right]

In this scenario, higher learning gains are achieved due to a shift in the distribution of students. This might happen at more prestigious institutions, or it might be achieved by having more stringent requirements for enrolling in introductory physics. Or it could be achieved by designing a course that pressures struggling students into withdrawing, so they don’t end up on the post-test. In this scenario, I’m assuming that the “trend” stays the same, but there could easily be interactions between the distribution (red) and learning (blue). For example, a broad distribution may create difficulties with differentiating instruction, while tighter distributions may benefit from more homogeneous populations.

Discussion 

What I hope I’ve done here is start a conversation that might convince you that NOT disaggregating masks important features of what’s happening in our classrooms. This, I believe, should matter to people whether they are staunch believers in “cognitive ability” or staunch advocates for “social justice,” or both. First, not disaggregating makes comparisons among institutions difficult. An average learning gain of 0.7 and one of 0.4 can’t be compared meaningfully without disaggregation: if both courses have similar distributions of students, the comparison is straightforward; if one has a distribution toward the right end and the other toward the left end, comparisons need to be done carefully. Second, disaggregation might allow us to better understand how specific institutions have enacted successful reforms (or failed to), and it might help institutions assess their situation and make reforms in a more informed way.

Things I’m not saying:

  • I’m not saying, “Students with high ACT scores will learn no matter what.” Findings from PER suggest that even successful students fail to meaningfully learn even basic physics concepts when instruction relies heavily on lecture. What we are talking about here are courses that do enact reformed instruction.
  • I’m not saying, “You should be satisfied with marginal learning gains if you are at an institution with underprepared students”.
  • I’m not saying, “If you have high learning gains, it must be because you teach at a selective school.”

Things I’m wondering about:

  • How should we disaggregate? Based on standardized tests? Poverty measures? Math or reasoning? There are all kinds of ways we can, do, and could disaggregate, but what standards of reporting can we argue are most important / informative? What obstacles are there to reporting in these ways?
  • What are the downsides of disaggregation? I assume there are some, and we should think and talk about them.
  • Scenario one is really interesting to me. Why? There are reasons to suspect education communities alone cannot address the fact that the blue dots trend the way they do. (Perhaps we can lessen the steepness?) The truth is that the correlation between poverty and educational achievement is strong–it is robust across scales (e.g., classrooms, schools, states, countries), and robust across shifts in measures (SAT, PISA, FCI, etc.). That said, there are good reasons to believe that some institutions of learning fare better, despite being subject to the same poverty correlations. For example, I’ve seen Michael Marder talk about poverty and education, showing how every state is subject to the poverty trend, but that despite this, the poorest students in states like Texas and Massachusetts achieve about the same as the richest students in Alabama and Mississippi. In other words, it appears you can shift the line up and down vertically.
  • This isn’t to say that scenario two isn’t important, or that I don’t care about it. I have argued previously (in my previous post)–what is really Steve Kanim’s point–that more research needs to be done regarding physics learning in courses, institutions, and students with less preparation and opportunity. It could be that students with high ACT and SAT scores learn more in most of our reformed environments exactly because those environments are based on research on how those students learn, rather than because less-prepared students are learning less due to their under-preparation. In that case one might say, “The smart get smarter, in part because we mostly study how to help the smart get even smarter.”

 

Bundled ABP Standards

OK, a draft of 13 standards for intro physics is below. That’s realistically one per week. The goal here has been to align standards with the high-stakes assessments that will be administered to students during the semester, while supporting their learning in ways that I know matter. There are lots of compromises going on here, and it still needs some adjustments. The standards are grouped by what students will need to have mastered for each of the four high-stakes assessments–exams that focus on problem-solving.

Student Initiated Re-assessments

Students will have to apply in order to reassess, submitting some evidence that they practiced. I don’t want to have crazy barriers to reassessment, but I don’t want anyone reassessing without putting in some work to learn first.

Feedback to Students

I intend to give feedback on learning indicators. Units and Vectors are the only standards that don’t have proficiency indicators, since they aren’t separate problem-solving types students will see. I’m leaning toward evaluating on a four-point scale.

Developing (Minus): At least one learning indicator

Developing: All learning indicators but no problem-solving proficiency

Proficient (Minus): Problem-solving proficiency with not all learning indicators

Proficient: Problem-solving proficiency with all learning indicators (all together)
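Read as a decision rule, the four-point scale above might be sketched like this–the function name and the zero-indicator fallback are my own assumptions, since the scale doesn’t specify what sits below Developing (Minus):

```python
def grade(indicators_met, total_indicators, problem_solved):
    """Map quiz evidence to the four-point scale (hypothetical helper).

    indicators_met: learning indicators the student has demonstrated
    total_indicators: learning indicators attached to the standard
    problem_solved: whether the problem was solved completely and correctly
    """
    if problem_solved:
        # Problem-solving proficiency, with or without all learning indicators
        return "Proficient" if indicators_met == total_indicators else "Proficient (Minus)"
    if indicators_met == total_indicators:
        return "Developing"
    if indicators_met >= 1:
        return "Developing (Minus)"
    return "No evidence yet"  # assumption: this case isn't defined above
```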

Assessment Format

The quizzes will typically have a problem to solve and 1-3 conceptual/reasoning questions. Students can get marks for learning indicators either on conceptual questions or in the midst of problem solving. Proficiencies (whether minus or not) will only be given for completely correct problem-solving with adequate work shown to justify credit.

Students can apply for reassessments that I will bring to class, up until the exam that covers the relevant topic. After that, students must apply for reassessment that happens during office hours. This means that I only have to juggle 3-4 standards at a time. High-stakes exams, including the final, can be used as evidence for developing or proficiency.

 

Concerns:

#1 I don’t have a standard for uniform circular motion, but it’s something that will be on the test. I don’t want to have five standards during the time before exam two. While I could get rid of Newton’s laws (basic), I don’t think that’s going to help student learning in the long run.

#2 I’ve folded free-fall into constant acceleration. That’s to keep the number of standards low, especially toward the beginning while we are all figuring this out.

#3 Energy and momentum are bundled–super scary. The reason for this is that students are typically asked to solve problems that involve both concepts together. I have to think about this one, but I am inclined to have standards that align with expectations. Students will still get separate practice and feedback on energy and momentum concepts.

#4 I have previously done binary grading (Y/N), so I’m concerned about the time it will take me to grade and write these. With fine-grained targeted assessments, I graded the same day and gave them back to students. It was hectic, but doable. That’s probably not possible anymore. So now, I’m thinking self-assessment at the back of the room is the only way to go.

#5 How will I translate this into a portion of the grade I have control over?

#6 Please tell me what else I should be concerned about. Comments, criticisms, concerns, questions are more than welcome.

 

1.1 Units

  • I am familiar with SI units and their prefixes
  • I can correctly re-express quantities using different units
  • I recognize unit cancellations and can simplify expressions involving them.

 

1.2 Constant Velocity

Learning Indicators

  • Distinguish among position, change-in-position, and distance
  • Use and interpret position vs. time graphs
  • Distinguish between average speed and velocity

Proficiency Indicator

  • Solve complex back-and-forth motion problems

 

1.3 Constant Acceleration

Learning Indicators

  • I can relate acceleration, velocity, and change in velocity
  • I can use and interpret velocity vs. time graphs
  • I use a reliable “getting started” method that includes drawing a sketch, choosing a coordinate system, & identifying variables from text/diagrams
  • I can identify the direction of kinematic vector quantities and utilize such information consistently using algebraic sign.

Proficiency Indicator

  • Solve complex problems involving constant acceleration.

 

2.1 Vectors

  • I can determine the components of vectors given magnitude and angle
  • I can describe the magnitude and angle of a vector given its components

 

2.2 Projectiles

Learning Indicators

  • I use a reliable “getting started” method, including drawing a sketch, choosing a coordinate system, and identifying variables from the text.
  • I correctly identify and distinguish dimensions with constant a and constant v
  • I can recognize when vector analysis is needed and can perform it
  • I can apply the independence of dimensions to qualitatively reason about special cases of projectile motion.

Proficiency Indicator

  • I can solve projectile motion problems.

2.3 Newton’s Laws (Basic)

Learning Indicators

  • Recognize when the forces on an object or system are balanced or unbalanced from graphs, equations, or descriptions of motion
  • Draw a force diagram (FBD) accurately showing directions and types of forces acting on an object or system.
  • Write net force equations describing an object or system.

Proficiency Indicator

  • Solve problems using net force equations and diagrams

 

2.4 Newton’s Laws (Advanced)

Learning Indicators

  • Use trigonometric relationships to find force components
  • Recognize when to and be able to apply specific force models (e.g., static friction, kinetic friction, ideal springs, etc).
  • Write net force equations describing an object or system.

Proficiency Indicator

  • Solve problems using net force equations and diagrams

 

 

3.1 Energy and Momentum

 Learning Indicators

  • I can calculate the work due to a force
  • I can recognize situations where mechanical energy is conserved
  • I can write a correct conservation of energy equation
  • I can recognize situations where conservation of momentum applies
  • I can write a correct conservation of momentum equation

 Proficiency Indicator

  • I can solve problems that require conservation of energy & momentum

 

3.2 Static Equilibrium

Learning Indicators

  • I can determine the torque associated with a force around a given pivot
  • I can write a correct sum of torques statement
  • I can write a correct sum of forces statement

Proficiency Indicator

  • I can solve problems involving static equilibrium

 

3.3 Rotational Kinematics

Learning Indicators

  • I can relate frequency, angular frequency, and period
  • I can relate angular displacements, (average) angular velocity, and time
  • I can relate angular velocities, angular accelerations, and time
  • I can relate angular kinematic variables to tangential kinematic variables

 

Proficiency Indicator

  • I can solve rotational kinematic problems

 

4.1 Oscillations

Learning Indicators

  • I can identify amplitude and period in graphs, equations, and pictures
  • I can identify the factors that do and do not influence frequency for both a simple pendulum and a simple mass-spring system
  • I can compare velocity, acceleration, and force for various points along the motion of an object in a simple mass-spring system
  • I can qualitatively analyze the energy transformation for an oscillating system

 Proficiency Indicator

  • I can analyze an oscillating system using kinematics, forces, and/or energy concepts to solve a problem.

 

4.2 Waves

Learning Indicators

  • Relate string length and wavelength for standing waves on a string
  • Reason about and use relationships for wave speed, wavelength, & frequency
  • Reason about & use relationships that relate wave speed to medium properties

Proficiency Indicator

  • I can solve problems involving vibrations among multiple media
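As a concrete instance of the relationships the wave indicators name–medium properties set wave speed, and v = fλ ties speed to wavelength and frequency–here is a small sketch with made-up numbers, not from any actual quiz:

```python
import math

# Hypothetical: fundamental standing wave on a string fixed at both ends
L = 0.65          # string length (m)
mu = 1.2e-3       # linear mass density (kg/m) -- a medium property
T = 70.0          # tension (N) -- a medium property

v = math.sqrt(T / mu)   # wave speed from medium properties
wavelength = 2 * L      # fundamental: the string holds half a wavelength
f = v / wavelength      # v = f * lambda
print(f"fundamental frequency = {f:.0f} Hz")
```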

 

4.3 Hydrostatics

Learning Indicators

  • I can quantitatively/qualitatively reason about pressure changes in a liquid
  • I can relate pressure, force, and area and recognize the need to do so
  • I can qualitatively reason about buoyant force using Archimedes principle
  • I use Newton’s laws to analyze the statics/dynamics of submerged objects

Proficiency Indicator

  • I can solve hydrostatic problems

How I’m Imagining the Beginning of Forces Unfolding

Brain Dump.

Getting Started (Day One, at the end of another day really)

(1) Page Keeley’s Formative Assessment Worksheet (Individual, Group Discuss). Done this before, goes pretty well.

(2) Bowling Ball Challenge–whole class engagement, get ideas, puzzles, questions going. Have done this with outreach stuff, but not in class.

Exploration Before Reading (Day One)

(3) Hover Puck Activity (Words, Motion Maps, Graphs). Done this before. Goes pretty smoothly

(4) Revisit Statements from Worksheet– resolve what can be, and leave unresolved what can’t.

(5) Some remarks from me. Introduce the notion of interactions, and the idea that forces come in pairs (but not that they are equal). Mention the historical notions of “vis impressa” and “vis viva,” how they relate to how physicists now use the word force, and what it does and doesn’t refer to.

Reading (At-home)

(6) Selected Reading (The whole chapter should not be read. It’s really bad).

(7) Online reading questions to relate the N1 and N2 aspects of the reading to in-class observations with bowling balls and pucks. Other questions to explore common sense around Newton’s 3rd law, without resolving it. Questions to connect statements from the reading to statements on the Formative Assessment Worksheet. Questions to test their understanding.

Synthesizing and Stabilizing from Reading (Back in Class)

(8) Based on my assessment of student responses to the online reading questions, summarize key ideas and challenges I saw. Ask a few clicker questions that target those key ideas and challenges. The last few questions should be getting us toward skills / ideas for problem-solving, so…

(9) Interactive Demo Problem (I mostly do, with stops for students to peer discuss)

  • Cart on Track: Fan exerts force one way, pulling with string the other.
    • Not moving, using spring scale on string to get a measure of the force associated with the fan.
    • Predict acceleration of cart with fan only. Check with motion detector.
    • Big goal here is to link concepts from the reading to problem-solving in simple situations where we have a = 0 with opposing forces, and a > 0 with a single force.

Game Time–students in the driver’s seat again.

(10) White Board Problem (extends what I did to a case where a > 0 with multiple forces)

Now pull the cart with an approximately constant string force greater than the fan’s, and have a TA note the value of the force read by the spring scale. Show them the velocity vs. time graphs. Ask them to determine how much force I was exerting.

    • 1st: Best Guess, Too High, Too Low. On the front board.
    • 2nd: Have students share some ideas about getting started, information that might be important, challenges they anticipate, and what concepts from the reading are relevant. All on the front board.
    • 3rd: Have an extension question ready–what would be different if I turned the fan around and pulled just as hard? What would the graph look like then, and why?

(11) Board Meeting Discussion. Lots more to say here about format, goals, etc.

(12) Don’t forget to reveal the answer, and compare to our best guesses, etc. Celebration abounds.

Assessment Time

(13) Standards-based assessment… The main question will give students a velocity vs. time graph with negative slope, along with all the horizontal forces they need; they’ll need to determine the mass. [Note: This is a permutation we have not done.] Other questions will target more conceptual things related to common difficulties like F~v, etc.

(14) Students will self-assess at the back of the room. They turn it in to me if they want additional feedback or want to submit work as evidence of mastering the standard.

Looking Ahead Thoughts

Extend the concept of equilibrium to two dimensions via a force table lab we have to do. I would love to do this as a VAD, but students in this class will be expected to do/know more with components. Do I have time to do both?

Extend the difficulty of problems to include non-perpendicular forces and ramps. This will be what their exam problems will look like.

Students will be reassessing on the vector standards, and I need to emphasize to them how important it is to work toward mastery of those standards.

Will be introducing friction in here too. Bridging Analogy stuff.

Instructor Concerns:

Haven’t really done anything with Newton’s 3rd law, beyond the original intro of force pairs and the reading / reading questions…

Didn’t really talk about vertical forces, or normal-force understandings / mechanism. Where am I going to fold that in?

 

Ramping up Changes: PFL, Reading, and SBG

Here are three things I’m trying to *ramp* up in my physics class this fall. By ramp up, I mean, they are all things I’m already doing, but I am trying to make them more coherent and threaded throughout the entire course.

Invention or exploration activities before every assigned reading

Norm at My Institution: Our physics class has a “traditional” flipped model thing going on. Students are expected to read before coming to class; then they come to class for some combination of conceptual questions, group problem-solving, demonstrations, and labs.

What I’ve been Doing: I’ve been doing some re-flipping of this to a limited extent throughout the semester. For example, the day before the reading on acceleration, I have students engage in the Invention Tasks designed by Andrew Boudreaux, Steve Kanim, and Suzanne Brahmia. Or, the week before students read anything about forces, we do explorations with hover pucks and bowling balls.

Plans for this Semester: This semester I am committing to having a “preparation for future learning” activity before every assigned reading. I plan on using a combination of invention tasks, PhET simulations, and hands-on explorations.

Online questions that support deeper processing of reading

Norm at My Institution: Our physics courses use graded multiple-choice quizzes to incentivize reading.

What I’ve been doing: I use online reading questions (for formative assessment), which are “graded” on effort only. The other thing I do is discuss reading and learning explicitly in class. For example, I have started the 2nd day of class with students watching and discussing this video on shallow vs. deep processing. It’s kind of a cheesy video, but if I stop at the right points and ask the right questions, we get really interesting, engaging, and authentic discussions going around learning.

Plan for this Fall: I wrote about this in a previous post, but the gist is that I’m going to ask online questions that support students in going back to interrogate the text. I also plan on not just “starting the semester” with some discussion about reading; I plan on spending more time in class actually reading, in order to continue to discuss, practice, and foster reading-comprehension strategies.

Competency-oriented and formative-assessment-driven quizzes

Norm at My Institution: We grade students on tasks like labs, quizzes, tests, and projects, and give them points for participating through clicker questions.

What I’ve been doing:  All my quizzes have been standards-based, with many opportunities for reassessment. To a lesser extent, I dabbled in the summer with standards-based lab skills, wherein successful completion of the lab and lab write up was merely a “ticket” to get a new data set which you had to analyze all on your own. The benefit here was that it changed the game of lab from “getting it down in your lab notebook”  to “I better make sure I understand how to do this”.

Plan for the Fall: I also discussed this recently here, but I’m going to be paring down the number of standards (by bundling) to emphasize synthesis over isolated skills. I am also working to better align the problems my students practice with the high-stakes exams that will be administered. For labs, I’m trying to develop a rubric I can use in an SBG way, and I plan on assigning grades at the end based on the average of the best 3 plus the last 3.

Right now, my lab rubric is looking something like this:

I carefully choose, depict, and consistently use variables relevant to the purpose of the lab

  • Unambiguous sketch of the experimental setup in which variables are clearly defined
  • Variables are used consistently and clearly in data tables, graphs, and algebra

I relate graphical, verbal, and algebraic representations of the model being tested

  • Trend lines depicted in graphs are consistent with model being used (or noted for its inconsistency)
  • Appropriate verbal descriptions of the model are given
  • The case-specific algebraic model is shown to relate to the general model

I make evidence-based arguments to support or reject the use of a model

  • A claim is made about whether to reject or support the model
  • Specific trends in the data are used to support claim
  • Reasoning is provided to explain why trends mentioned are relevant

I carry out error analysis to determine bounds on the numerical result

  • Estimated uncertainties are provided and seem reasonable
  • The “weakest link” is correctly identified using fractional uncertainty
  • The uncertainty of final result is correctly determined
  • Final result is reported with correct units and sig figs.

My work is represented with due care

  • Data tables are neat and legible
  • Graphs are labeled and legible with trend lines made with straight edge
  • Error analysis work is easy to follow
  • Complete sentences that are legible are used when appropriate.
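To make the “weakest link” criterion in the error-analysis row concrete, here is a small sketch with made-up numbers (measuring g from a timed drop)–not from any actual lab handout:

```python
# Hypothetical measurement: g = 2d / t^2 from timing a dropped object
d, dd = 1.50, 0.01    # drop distance (m) and its estimated uncertainty
t, dt = 0.55, 0.02    # fall time (s) and its estimated uncertainty

frac_d = dd / d        # fractional uncertainty from distance
frac_t = 2 * dt / t    # t enters squared, so its fraction counts twice

# "Weakest link": the largest fractional uncertainty dominates the bound
weakest = max(frac_d, frac_t)
g = 2 * d / t**2
print(f"g = {g:.1f} +/- {g * weakest:.1f} m/s^2")  # -> g = 9.9 +/- 0.7 m/s^2
```

Here the timing is the weakest link (2 × 0.02/0.55 ≈ 7%, versus about 0.7% from distance), which is exactly the identification the rubric asks students to make before bounding the final result.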

Online Reading Quizzes 2.0

The two talks below at AAPT Summer 2013 have motivated me to re-think what kinds of questions I should ask students to consider online.

[Screenshot: abstracts of the two AAPT talks]

First, I should say I think we don’t have a very good text for my intro physics course. Students are expected to read the text before class, and the typical way things go is that students take a five-question multiple-choice quiz when they walk in the door. My strategy instead has been to have students answer questions online before class (graded for effort), and then use low-stakes standards-based grading quizzes that students can reassess on.

Because of the bad text, my goal previously had been to use the online quizzes to introduce questions, exercises, and links to external resources to supplement the text with good stuff it lacked. Now my goal has shifted a bit toward helping students engage with the text more meaningfully (even if it isn’t great). My hope is to get students engaged back with the text, and to make reading strategies explicit both in and outside the classroom. Each of the strategies below I hope to practice with students in class before introducing them in the online reading quizzes.

One thing this exercise has forced me to do is actually read the text carefully, and look for opportunities for students to engage.

Right now I’m considering how much autonomy to give students, and whether and how I’m going to let students decide later on what sentences, equations, etc to interrogate. That would be the goal, right? For them to monitor their own understanding, and decide where to put forth the effort to “dig deeper”.

Anyway, here are some of the things I’m considering, with an example to help illustrate each. Feedback, suggestions, comments, criticisms, and questions are more than welcome.

Interrogate a Sentence

On pg. 20 the text reads, “The slope of the position vs. time graph gives us the average velocity of the object under consideration.”

Explain why this sentence is true.

Interrogate an Equation

Equation 3.2 on page 25 defines “average acceleration in the x-direction”.

Imagine you are driving a car along a long straight road, and then you begin to speed up. Explain how you could use the car’s speedometer and a stopwatch to determine the car’s average acceleration.  Why does that procedure make sense in terms of the definition provided?

Perform a Home Experiment

Part I: First, go find a friend or family member. Next, get a piece of paper and a textbook. Discuss with your friend what will happen when you drop the piece of paper and textbook from the same height at the same time. Which will hit first? Be sure to explain why you think so to each other. Do it and see what happens.

Part II:  Now crumple up the piece of paper. You are going to repeat the experiment again with the crumpled paper, but before you do so, discuss with your friend what you both think are going to happen and why.

Part III: Finally, grab various objects and have fun dropping them–tissues, plastic bottles, paper, pencils, coins, dollar bills, rocks, baseballs, whatever. Drop them from various heights if you want. Don’t break anything that shouldn’t be broken, and don’t hurt anyone.

Based on the text’s definition of “free-fall” (read the 2nd paragraph), which objects do you think underwent free-fall motion? Which ones would you say did not? Explain how you used the definition to help you decide. Under what conditions will two objects dropped from the same height at the same time hit the ground at the same time? When will they not?

Test Your Skills

You are driving your car along a straight and narrow road, while maintaining a constant speed of 60 mph. Above, an airplane flies with a constant velocity of 500 mph. Which of the following statements best characterizes how the car’s acceleration compares to the plane’s acceleration?

    1. The car has a greater acceleration than the airplane
    2. The airplane has a greater acceleration than the car
    3. Both the airplane and the car have the same acceleration
    4. It’s impossible to tell how the accelerations compare from the information above.

Explore Your Commonsense

On page 41 of the text, the author writes, “Here’s an extra question for you to think about (one that people usually get wrong!): In the previous example, what was the acceleration of the rock the instant it reached the very top of its motion? If you think the answer is zero, you’d better think again!”

(a) First off, the author has clued us in that the acceleration is not zero at the top, but let’s play along. Intuitively, why does it make sense to think that the acceleration at the top is zero? What reasoning might a person give who thinks it’s zero?

(b) You Choose: Either write down an idea you have about why the acceleration cannot be zero at the top OR write down one question you’d like to discuss in class about the acceleration of a ball in free-fall.

Responsive Teaching–Brief Summary and Links

This summer I went to the Science Teaching Responsiveness Conference. While there was much debate about what “Responsive Teaching” even means, there is loose agreement about the kind of thing it is and some of the typical ingredients we’d expect to be present. My interpretations of what some of those things might be:

* Classroom structures and practices centered around making student thinking visible

* Noticing / attending to student thinking–the substance of their ideas, not just the correctness.

* Interpreting and relating to student thinking with a disciplinary lens

* Responding to students’ contributions in ways that foster and promote productive disciplinary engagement

* Supporting students in entering disciplinary pursuits (practices, mindsets, attitudes) as part of learning disciplinary content, and recognizing that those two goals can be in conflict in moments.

* Fostering a community-of-learners environment that is appropriate to the discipline and the population

* Selecting out of student contributions new learning targets (instructional objectives) that are taken up as part of the curriculum, adapting instruction in order to do so.

Two orienting examples that were frequently brought up were:

Deborah Ball’s “With an eye on the mathematical horizon…”

and

David Hammer’s “Discovery Learning and Discovery Teaching”

Website to Visit

If you are interested in responsive teaching, I’d highly recommend the recently launched website “Responsive Teaching in Science.” It garnered much excitement at the conference, and it provides resources for engaging in responsive teaching, including lots of videos from elementary school classrooms, case studies, readings, etc. I’m hoping to use the website in my physical science course for future elementary school teachers this year.

Learn.Brian.Learn

This fall, I am going to be a student again. The plan is to take one class per semester over the next four years, so that I’m in a position to do student teaching around the time I’m coming up for sabbatical.

Reasons Why I’m Doing This?

#1 It would be nice to be certified to teach and to have some high school teaching experience.

#2 It would be nice to have a better understanding of the program of study the pre-service physics teachers experience.

#3 It would be nice to have a better understanding of the educational bureaucracies the pre-service teachers have to navigate, both before and around the time of induction.

#4 I’m just interested in doing it. I’ll learn a lot and I’ll grow. I like that.

#5 Right now, I have less interaction with the folks over in MTeach than I should. This isn’t the only way to do it, but it does force the issue somewhat.

So, really, at the end of the day, I’m responsible for mentoring and teaching pre-service physics teachers, both formally and informally. Right now, I teach 3 courses that all pre-service physics teachers must take, meaning they get a high dose of “me”. Figuring out exactly how our “Physics Teaching Concentration” (and “me”) does, can, and should interface with the UTeach education minor is not straightforward. I can read the syllabus, talk to the instructor, or sit in on a course, but I’m still left guessing about a lot of the substantive ideas and meta-messages students experience along the way. Understanding how I can best leverage what they have and haven’t learned / experienced is important, and I feel like I’ll be able to do a better job. Second, having more classroom experience will be extremely useful for many of the same reasons. I will be able to broaden the perspectives I can provide in the courses I teach, and also bring a more nuanced, realistic approach to mentoring.

What Challenges Lie Ahead?

#1 I’m already (often) too busy, and this will make me busier. That has consequences for the time and energy I have to be engaged at home and at work.

#2 There is a need to initiate and maintain open lines of communication with the instructors who teach the courses I’ll take. I’m trying to imagine what it’ll be like for colleagues to grade my assignments, evaluate my work, etc. How can I enter such situations in ways that reduce tensions?

#3 Am I really ready to do homework and take exams again? Jeez.

#4 How will / should situations be handled where I’m teaching students in my classes while simultaneously working on a project with them in another class? There is potential for conflict of interest, and I have to think about that in concert with #2.

Fortunately, this semester I’ll just be dipping my toe in the water. It’s a one-credit course that meets 1.5 hours a week and includes in-class teaching experiences. Totally excited.

Final Thoughts: I’m wondering whether I’ll blog about my experiences here, or start another blog. And then, I’m wondering, should that blog be public or private?

Would love to hear what people think about all of this.

SBG Grain-size: Assess the Small, Evaluate the Large

One of the biggest changes to my assessment plan this year is that I intend to bundle standards in intro physics. See, one of the tensions that exists in competency-based grading is what grain-size of standards to use. Small-grain standards have the benefit of being very detailed and explicit about the concepts and skills students need to master. There are two downsides, however: logistically, you can end up with TOO many standards; learning-wise, you can disassemble doing science into discrete skills so thoroughly that students aren’t really doing science anymore. Large-grain standards have the benefit of keeping the number of standards down (logistically) while also focusing on synthesis of skills (learning-wise). However, large-grain standards may leave students with feedback that is too broad and not targeted to the specific things they need to work on.

Josh Gates, who blogs over at Newton’s Minions, approaches this problem by bundling fine-grained skills and concepts into broader competencies. Students receive feedback on the fine-grained skills and concepts, but competency at the synthesis level is what matters at the end. I am by no means saying this is the only solution or the best solution, but it has particular affordances for my situation. Here’s why I think so:

  • Since I don’t have control over curriculum coverage or pacing, I was always having to make compromises about which fine-grained standards were the most important to use. Students were getting practice and feedback on certain skills but not others. Even with this parsing down, I still felt bogged down by having too many standards: simultaneously, I had too many standards and not enough. I think re-packaging will help, because shifting my grain size up or down alone wasn’t going to.
  • Since 40% of the students’ grade is determined by high-stakes exams not written by me, the fine-grained standards I was using were helping to give students practice and feedback on underlying skills, but not enough on synthesis problem-solving. By bundling, I can make sure they are getting practice and feedback at the level they are expected to perform on exams, while also giving them feedback on the fine-grained stuff.

Here’s an example from Josh, showing how he’s bundled skills into a competency that students understand the balanced-forces particle model.

Screen shot 2013-07-22 at 6.51.48 AM
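As a minimal sketch (not Josh’s actual rubric; the specific skill names below are my hypothetical stand-ins), that kind of bundling might be represented as a simple mapping from a competency to its component skills, where feedback is recorded per skill but the reported outcome lives at the competency level:

```python
# Hypothetical sketch of bundled standards: fine-grained skills feed a
# competency, but the competency-level judgment is what counts in the end.
competencies = {
    "balanced-forces particle model": [
        "draw a correct free-body diagram",
        "identify third-law force pairs",
        "apply Newton's 1st law to equilibrium problems",
    ],
}

# Feedback is recorded per skill (True = demonstrated on latest assessment)...
skill_feedback = {
    "draw a correct free-body diagram": True,
    "identify third-law force pairs": False,
    "apply Newton's 1st law to equilibrium problems": True,
}

# ...but what gets reported is a competency-level summary.
for comp, skills in competencies.items():
    shown = sum(skill_feedback[s] for s in skills)
    print(f"{comp}: {shown}/{len(skills)} component skills demonstrated")
```

The design point is the separation: the gradebook entry is the competency, while the skill-level records exist only to target feedback.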

I’ll be spending the next couple weeks revising and bundling my old standards to better support students’ learning and better align with the implicit curricular coverage established by the third-party exams.


Physics Education: Research, Assessment, & Poverty

Participating in this summer’s conferences has really got me thinking about many things in physics education research. Here I’m going to begin writing about one of them. To start the conversation, I want to talk about a poster from the 2013 Physics Education Research Conference.

Who We Study, Who We Teach

Steve Kanim from New Mexico State University

Steve analyzed Physics Education Research publications from the American Journal of Physics and Physical Review Special Topics–Physics Education Research. The analysis was limited to publications that included actual student data (i.e., no discussions, opinions, sharing of best practices, dissemination-only papers, etc.). Steve finds that 75% of the students we study are enrolled in calc-based physics. This is disproportionate to the distribution of classes we teach–only 33% of the students we teach take calc-based physics. The population least studied is students at two-year colleges, who comprise 25% of the students we teach but less than 1% of the students we study. Students in algebra-based courses are also under-represented in our research.

Steve is careful not to overly criticize our community’s beginnings. Our field has grown, in part, due to the fact that our research has focused on how even our “best” students struggle to develop functional understandings of basic physics concepts. Rather than blaming our past, Steve’s analysis points to a gap we need to address now.

Steve also looked at this data by disaggregating studies based on the SAT Math distribution. From this perspective, it still appears that we are studying students on the high end. For me (Steve did not say this), this is especially critical due to the correlations that exist between achievement tests like the SAT and poverty, and between poverty and race. It could easily be said that we have been focusing more of our efforts and resources on the privileged. Steve also mentioned some research, which I can’t remember right now, that has found that an SAT Math score of 600 is a threshold for achievement in upper-level physics.

Oh, Force Concept Inventory

Steve’s poster also referenced some research about the FCI, which has got me thinking again about the Force Concept Inventory (FCI) and how it relates to our field’s focus on the upper end. If you don’t know, the FCI is the most widely used assessment / evaluation instrument in physics education, and normalized gain is the most widely used way to report student learning outcomes on it.

The idea behind normalized gain is to “take into consideration” students’ pre-test scores. Normalized gain can be interpreted as the “fraction of the possible gain that actually occurred.” For example: a student who starts with a score of 40% and ends with 70% gains +30% out of a possible +60% gain, for a normalized gain of 50%.
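That arithmetic is simple enough to write down directly. Here is a minimal sketch in Python (the function name is mine, just for illustration):

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: the fraction of the possible gain
    (from the pre-test score up to the maximum) actually achieved."""
    if pre >= max_score:
        raise ValueError("Pre-test score is at ceiling; gain is undefined.")
    return (post - pre) / (max_score - pre)

# The worked example from the text: pre = 40%, post = 70%,
# a gain of +30 points out of a possible +60.
print(normalized_gain(40, 70))  # 0.5
```

Note the ceiling check: for a student who pre-tests at the maximum, the denominator vanishes and the quantity is undefined, which is one reason gain comparisons across very different pre-test populations need care.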

Despite normalizing scores this way, normalized gain appears to be (or at least can be) strongly correlated with pre-test score (Coletta and Phillips, 2005).

Screen shot 2013-07-20 at 9.35.19 AM

Underlying this correlation are additional findings that normalized gains on the FCI are strongly correlated with student scores on the Lawson Classroom Test of Scientific Reasoning, and that they are also correlated with students’ SAT scores (Coletta and Phillips, 2007).

Screen shot 2013-07-20 at 9.21.03 AM

A potentially huge problem we have as a community is that we report normalized FCI gains without disaggregating these scores along such measures. I’d argue that this tendency is potentially dangerous, because it can lead us to make claims and offer implications for instruction that are distorted. For examples of how failing to disaggregate student achievement by measures of poverty leads to trouble, see Michael Marder’s prezi on Education and Poverty.

What can we do?

#1 We need Steve to publish his analysis of the mismatch between who we teach and who we study. This will enable those seeking funding to study under-represented populations to point to Steve’s research on the immense need for such research. It will also enable us to press funding institutions to create more parity in funding priorities. I emailed Steve this morning to offer encouragement and any help in making sure this happens.

#2 We need to begin, as a community, to publicize our own FCI normalized gains along with accompanying data that aids meaningful disaggregation. This is true not only for publications about research; it should also include standards of reporting to funding agencies, and even standards of reporting on blogs. For example, right now, my own institution reports normalized FCI gains from our algebra-based physics course to PhysTEC, and PhysTEC shares back data from all PhysTEC-supported sites without disaggregation. I’ll start this process here: our normalized gains at MTSU for algebra-based physics hover just below 0.3, and our SAT Math scores are in the 460–570 range, with SAT Reading in the 460–510 range. Note that this falls nicely in line with the graph above. On this issue, we should really support the PER user’s guide. Although not on the site yet, they are working hard to create an Assessment Database and Analyzer tool that will make it easier for everyone to upload, use, and interpret data in meaningful ways.
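To make the reporting idea concrete, here is a rough sketch of what disaggregated reporting could look like. The records and the bin boundaries are entirely made up (mock data of the form (ACT score, pre %, post %)), not data from any actual course:

```python
from statistics import mean

# Mock (made-up) student records: (ACT score, pre-test %, post-test %)
students = [
    (16, 25, 35), (18, 30, 45), (21, 35, 55),
    (23, 40, 65), (26, 50, 80), (29, 55, 90),
]

def normalized_gain(pre, post):
    # Fraction of the possible gain actually achieved (scores in percent).
    return (post - pre) / (100 - pre)

# Bin by ACT score and report a mean normalized gain per bin,
# rather than a single course-wide average.
bins = {"ACT < 20": [], "ACT 20-24": [], "ACT >= 25": []}
for act, pre, post in students:
    g = normalized_gain(pre, post)
    if act < 20:
        bins["ACT < 20"].append(g)
    elif act < 25:
        bins["ACT 20-24"].append(g)
    else:
        bins["ACT >= 25"].append(g)

for label, gains in bins.items():
    print(f"{label}: mean g = {mean(gains):.2f} (n = {len(gains)})")
```

Even this tiny sketch makes the point: a single course-wide gain would hide the spread across bins, which is exactly the information disaggregation is meant to surface.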

#3 Physics education researchers, as individuals, need to go out of their way to engage with more research concerning students who aren’t just down the hall. The disproportionate focus on calc-based physics and the severe shortage of research at two-year colleges is not malicious–it comes from convenience and a desire to improve our own local educational settings. Research-intensive universities are more likely to have students at the higher end of preparation and opportunity, and are also more likely to have professors with the time and resources to do research. Instructors at two-year colleges face the opposite situation–no time, resources, or support to conduct research, and students who have had less preparation and opportunity. I emailed three community college physics instructors this morning to begin that conversation.

What say you? (Feature Comments)

Eric Brewe: “We should think about the use of normalized gain. It over values gains made at high end schools.”

Gasstation without pumps: “One question remains—why are students taking algebra-based physics? … Is the FCI the appropriate measure?”

Writing Goals for the Next 6 Months

Near-term Grant Writing Goals:

Internal Faculty Research Grant to support follow-up on work pertaining to this post stemming from an undergraduate’s thesis. (Sept 25th)

Internal Public Service Grant to support Physics Teacher Collaboratives (Oct 1st), which we are starting this fall.

Spencer Small Research Grant to support research and instructional efforts on Responsive Teaching in our physics pre-service concentration (Oct 15th)

Longer-Term: Career Grant Next Year


Near-term Paper Writing Goals:

C&I paper, with Natasha on a micro-analysis of development of knowledge for teaching (Aug/Sept)

Phys Rev.–PER paper, on varied meanings of “straight” in student discussions of light. (Oct/Nov)

AJP paper, when F ≠ -grad(U)? (Dec/Jan)

Longer-term:  TE paper(s) with Leslie
