Brain Dump.

Getting Started (Day One, at the end of another day really)

(1) Page Keeley’s Formative Assessment Worksheet (Individual, Group Discuss). Done this before; goes pretty well.

(2) Bowling Ball Challenge–whole class engagement, get ideas, puzzles, questions going. Have done this with outreach stuff, but not in class.

(3) Hover Puck Activity (Words, Motion Maps, Graphs). Done this before. Goes pretty smoothly.

(4) Revisit Statements from Worksheet– resolve what can be, and leave unresolved what can’t.

(5) Some remarks from me. Introduce the notion of interactions, and the idea that forces come in pairs (but not that they are equal). Mention the historical notions of “vis impressa” and “vis viva”, and how they relate to how physicists now use the word force, and what it does and doesn’t refer to.

(6) Online Reading Questions to relate the N1 and N2 aspects of the reading to in-class observations with bowling balls and pucks. Other questions to explore common sense around Newton’s 3rd Law, and not resolve it. Questions to connect statements from the reading to statements on the Formative Assessment Worksheet. Questions to test their understanding.

Synthesizing and Stabilizing from Reading (Back in Class)

(7) Based on my assessment of student responses to the online reading questions, summarize key ideas and challenges I saw. Ask a few clicker questions that target those key ideas and challenges. The last few questions should be getting us toward skills/ideas for problem-solving, so…

(8) Interactive Demo Problem (I mostly do, with stops for students to peer discuss)

• Cart on Track: fan exerts a force one way, string pulls the other.
• While the cart is not moving, use a spring scale on the string to get a measure of the force associated with the fan.
• Predict the acceleration of the cart with the fan only. Check with a motion detector.
• Big goal here is to link concepts from the reading to problem-solving in simple situations: a = 0 with opposing forces, and a > 0 with a single force.
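
The logic of the demo can be sketched in a few lines. This is a hedged illustration only: the force and mass values below are hypothetical stand-ins, not measurements from my actual cart and fan.

```python
# A minimal sketch of the reasoning in the fan-cart demo.
# All numbers here are hypothetical, not measured values.

def acceleration(net_force, mass):
    """Newton's second law: a = F_net / m."""
    return net_force / mass

F_fan = 0.30  # N, read off the spring scale while the cart is held still
m = 0.50      # kg, mass of the cart (hypothetical)

# Case 1: string tension balances the fan force, so the net force is zero
a_balanced = acceleration(F_fan - F_fan, m)  # 0.0 m/s^2

# Case 2: fan only, a single unbalanced force
a_fan_only = acceleration(F_fan, m)          # 0.6 m/s^2
```

The point of the two cases is exactly the conceptual link above: same fan force in both, but only the unbalanced case produces an acceleration.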

Game Time–students in the driver’s seat again.

(9) White Board Problem (Extends what I did to a case where a > 0 with multiple forces)

Now pull the cart with a ~constant string force greater than the fan’s, and have a TA note the value of the force read by the spring scale. Show them the velocity vs. time graphs. Ask them to determine how much force I was exerting.

• 1st:  Best Guess, Too High, Too Low. On the front board.
• 2nd: Have students share some ideas about getting started, information that might be important, challenges they anticipate, and what concepts from reading are relevant. All on front board.
• 3rd: Have extension question ready–what would be different if I turned the fan around and pulled just as hard? What would graph look like then and why?
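
The solution path students should land on can be sketched like this (hypothetical numbers; the actual graph values come from the motion detector on the day):

```python
# A sketch of the whiteboard problem's solution path (hypothetical numbers).
# Students read the acceleration off the v-t graph as a slope, then apply
# Newton's second law along the track: F_string - F_fan = m * a.

def slope(p1, p2):
    """Slope between two (t, v) points on a velocity-vs-time graph."""
    (t1, v1), (t2, v2) = p1, p2
    return (v2 - v1) / (t2 - t1)

def string_force(mass, accel, fan_force):
    """Solve F_string - F_fan = m * a for the string force."""
    return mass * accel + fan_force

a = slope((0.0, 0.0), (2.0, 1.2))       # 0.6 m/s^2, read from the graph
F_string = string_force(0.50, a, 0.30)  # 0.6 N, compare to spring-scale value
```

For the extension question, turning the fan around puts both forces in the same direction, so the net force becomes F_string + F_fan and the v-t graph gets steeper for the same pull.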

(10) Board Meeting Discussion. Lots more to say here about format, goals, etc.

(11) Don’t forget to reveal the answer, and compare to our Best Guesses, etc. Celebrations abound.

Assessment Time

(12) Standards-based assessment… Main questions will give students a graph of velocity vs. time, but with a negative slope. Students are given all the horizontal forces they need; they’ll need to determine the mass. [Note: This is a permutation we have not done]. Other questions will target more conceptual things related to common difficulties, like F ~ v, etc.

(13) Students will self-assess at the back of the room. Turn in to me if they want additional feedback or want to submit work for evidence of mastering the standard.

Extend the concept of equilibrium to two dimensions via a force table lab we have to do. Would love to do it as a VAD, but students in this class will be expected to do/know more with components. Do I have time to do both?

Extend the difficulty of problems to include non-perpendicular forces and ramps. This will be what their exam problems will look like.

Students will be reassessing on vector standards, and I need to emphasize to them how important it is to work toward mastery of the vector standards.

Will be introducing friction in here too. Bridging Analogy stuff.

Instructor Concerns:

Haven’t really done anything with Newton’s 3rd Law, beyond the original intro of force pairs and the reading/reading questions…

Didn’t really talk about vertical forces, or Normal force understandings / mechanism. Where am I going to fold that in?

Here are three things I’m trying to *ramp* up in my physics class this fall. By ramp up, I mean, they are all things I’m already doing, but I am trying to make them more coherent and threaded throughout the entire course.

Invention or exploration activities before every assigned reading

Norm for My Institution: Our physics class has a “traditional” flipped model thing going on. Students are expected to read before coming to class, then they come to class for some combination of conceptual questions, group problem-solving, demonstrations, and labs.

What I’ve been Doing: I’ve been doing some re-flipping of this to a limited extent throughout the semester. For example, the day before the reading on acceleration, I have students engaged in the Invention Tasks designed by Andrew Boudreaux, Steve Kanim, and Suzanne Brahmia. Or, the week before students read anything about forces, we are doing explorations with hover pucks and bowling balls.

Plans for this Semester: This semester I am committing to having a “preparation for future learning” activity before every assigned reading. I plan on using a combination of invention tasks, PhET simulations, hands-on explorations, and the like.

Online questions that support deeper processing of reading

Norm at My Institution: Our physics courses use graded MC quizzes to incentivize reading.

What I’ve been doing: I use online reading questions (for formative assessment), which are “graded” on effort only. The other thing I do is discuss reading and learning explicitly in class. For example, I have started the 2nd day of class with students watching and discussing this video on shallow vs. deep processing. It’s kind of a cheesy video, but if I stop at the right points and ask the right questions, we get really interesting, engaging, and authentic discussions going on around learning.

Plan for this Fall: I wrote about this in a previous post, but the gist is that I’m going to ask online questions that support students in going back to interrogate the text. I also plan on not just “starting the semester” with some discussion about reading; I plan on spending more time in class actually reading, in order to continue to discuss, practice, and foster reading comprehension strategies.

Competency-oriented and formative-assessment-driven quizzes

Norm at My Institution: We grade students on tasks like labs, quizzes, tests, and projects, and give them points for participating through clicker questions.

What I’ve been doing:  All my quizzes have been standards-based, with many opportunities for reassessment. To a lesser extent, I dabbled in the summer with standards-based lab skills, wherein successful completion of the lab and lab write up was merely a “ticket” to get a new data set which you had to analyze all on your own. The benefit here was that it changed the game of lab from “getting it down in your lab notebook”  to “I better make sure I understand how to do this”.

Plan for the Fall: I also discussed this recently here, but I’m going to be paring down the number of standards (by bundling) to emphasize synthesis over isolated skills. I am also working to make the problems my students practice better aligned with the high stakes exams that will be administered. For labs, I’m trying to develop a rubric I can use in a SBG-way, and plan on assigning grades at the end based on average of Best 3 plus Last 3.

Right now, my lab rubric is looking something like this:

I carefully choose, depict, and consistently use variables relevant to the purpose of the lab

• Unambiguous sketch of the experimental setup in which variables are clearly defined
• Variables are used consistently and clearly in data tables, graphs, and algebra

I relate graphical, verbal, and algebraic representations of the model being tested

• Trend lines depicted in graphs are consistent with model being used (or noted for its inconsistency)
• Appropriate verbal descriptions of the model are given
• The case-specific algebraic model is shown to relate to the general model

I make evidence-based arguments to support or reject the use of a model

• A claim is made about whether to reject or support the model
• Specific trends in the data are used to support claim
• Reasoning is provided to explain why trends mentioned are relevant

I carry out error analysis to determine bounds on numerical results

• Estimated uncertainties are provided and seem reasonable
• The “weakest link” is correctly identified using fractional uncertainty
• The uncertainty of final result is correctly determined
• Final result is reported with correct units and sig figs.
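
The “weakest link” criterion above can be made concrete with a toy calculation. The measurement values below are hypothetical, just to illustrate how fractional uncertainties single out the dominant source of error:

```python
# Illustration of the "weakest link" idea from the rubric: the measured
# quantity with the largest fractional uncertainty dominates the final
# result's uncertainty. All values below are hypothetical.

def fractional_uncertainty(value, uncertainty):
    """Fractional uncertainty = absolute uncertainty / |value|."""
    return uncertainty / abs(value)

measurements = {
    "length (m)": (1.20, 0.005),
    "time (s)":   (0.48, 0.02),
    "mass (kg)":  (0.500, 0.001),
}

fractions = {name: fractional_uncertainty(v, dv)
             for name, (v, dv) in measurements.items()}

# The weakest link here is the time measurement, at roughly 4%
weakest_link = max(fractions, key=fractions.get)
```

A student who can reproduce this comparison has met the “weakest link is correctly identified” criterion, even before propagating the uncertainty to the final result.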

My work is represented with due care

• Data tables are neat and legible
• Graphs are labeled and legible with trend lines made with straight edge
• Error analysis work is easy to follow
• Complete sentences that are legible are used when appropriate.

The two talks below at AAPT Summer 2013 have motivated me to re-think what kinds of questions I should ask students to consider online.

First, I should say I think we don’t have a very good text for my intro physics course. Students are expected to read the text before class, and the typical way things go is students take a 5-question MC quiz when they walk in the door. My strategy has been to have students answer questions online before class (graded for effort), and then use low-stakes standards-based grading quizzes which students can reassess on.

Because of the bad text, my goal previously had been to use the online quizzes to introduce questions, exercises, and links to external resources to supplement the text with good stuff the text lacked. Now my goal has shifted a bit toward helping students engage with the text more meaningfully (even if it isn’t great). My hope is to re-engage students with the text, and to make reading strategies explicit both in and outside the classroom. Each of the strategies below I hope to practice with students in class before introducing them in the online reading quizzes.

One thing this exercise has forced me to do is actually read the text carefully, and look for opportunities for students to engage.

Right now I’m considering how much autonomy to give students, and whether and how I’m going to let students decide later on what sentences, equations, etc to interrogate. That would be the goal, right? For them to monitor their own understanding, and decide where to put forth the effort to “dig deeper”.

Anyway, here are some of the things I’m considering, with an example to help illustrate each. Feedback, suggestions, comments, criticisms, and questions welcome.

Interrogate a Sentence

On pg. 20 the text reads, “The slope of the position vs. time graph gives us the average velocity of the object under consideration.”

Explain why this sentence is true.

Interrogate an Equation

Equation 3.2 on page 25 defines “average acceleration in the x-direction”.

Imagine you are driving a car along a long straight road, and then you begin to speed up. Explain how you could use the car’s speedometer and a stopwatch to determine the car’s average acceleration.  Why does that procedure make sense in terms of the definition provided?

Perform a Home Experiment

Part I: First, go find a friend or family member. Next, get a piece of paper and a textbook. Discuss with your friend what will happen when you drop the piece of paper and textbook from the same height at the same time. Which will hit first? Be sure to explain why you think so to each other. Do it and see what happens.

Part II:  Now crumple up the piece of paper. You are going to repeat the experiment again with the crumpled paper, but before you do so, discuss with your friend what you both think are going to happen and why.

Part III: Finally, grab various objects and have fun dropping them–tissues, plastic bottles, paper, pencils, coins, dollar bills, rocks, baseballs, whatever. Drop them from various heights if you want. Don’t break anything that shouldn’t be broken, and don’t hurt anyone.

Based on the text’s definition of “free-fall” (read the 2nd paragraph), which objects do you think underwent free-fall motion? Which ones would you say did not undergo free-fall motion? Explain how you used the definition to help you decide. Under what conditions will two objects dropped from the same height at the same time hit the ground at the same time? When will they not?

You are driving your car along a straight and narrow road, while maintaining a constant speed of 60 mph. Above, an airplane flies with a constant velocity of 500 mph. Which of the following statements best characterizes how the car’s acceleration compares to the plane’s acceleration?

1. The car has a greater acceleration than the airplane
2. The airplane has a greater acceleration than the car
3. Both the airplane and the car have the same acceleration
4. It’s impossible to tell how the accelerations compare from the information above.

On page 41 of the text, the author writes, “Here’s an extra question for you to think about (one that people usually get wrong!): In the previous example, what was the acceleration of the rock the instant it reached the very top of its motion? If you think the answer is zero, you’d better think again!”

(a) First off, the author has clued us in that the acceleration is not zero at the top, but let’s play along. Intuitively, why does it make sense to think that the acceleration at the top is zero? What reasoning might a person give who thinks it’s zero?

(b) You Choose: Either write down an idea you have about why the acceleration cannot be zero at the top OR write down one question you’d like to discuss in class about the acceleration of a ball in free-fall.

This summer I went to the Science Teaching Responsiveness Conference. While there was much debate about what “Responsive Teaching” even means, there was loose agreement about the kind of thing it is and some of the typical ingredients we’d expect to be present. My interpretations of what some of those things might be:

* Classroom structures and practices centered around making student thinking visible

* Noticing / attending to student thinking–the substance of their ideas, not just the correctness.

* Interpreting and relating to student thinking with a disciplinary lens

* Responding to students’ contributions in ways that foster and promote productive disciplinary engagement

* Supporting students in entering disciplinary pursuits (practices, mindsets, attitudes) as part of learning disciplinary content, and recognizing that those two goals can be in conflict in moments.

* Fostering a “communities of learners” environment that is appropriate to the discipline and the population

* Selecting out of student contributions new learning targets (instructional objectives) that are taken up as part of the curriculum, and adapting instruction in order to do so.

Two orienting examples that were frequently brought up were:

Deborah Ball’s “With an eye on the mathematical horizon…”

and

David Hammer’s “Discovery Learning and Discovery Teaching”

Website to Visit

If you are interested in responsive teaching, I’d highly recommend a recently launched website, “Responsive Teaching in Science”. It garnered much excitement at the conference, and provides resources for engaging in responsive teaching, including lots of videos from elementary school classrooms, case studies, readings, etc. I’m hoping to use the website in my physical science course for future elementary school teachers this year.

This fall, I am going to be a student again. The plan is to take one class per semester over the next four years, so that I’m in a position to do student teaching around the time I’m coming up for sabbatical.

Reasons Why I’m Doing This

#1 It would be nice to be certified to teach and to have some high school teaching experience.

#2 It would be nice to have a better understanding of the program of study the pre-service physics teachers experience.

#3 It would be nice to have a better understanding of the educational bureaucracies the pre-service teachers have to navigate, both before and around the time of induction.

#4 I’m just interested to do it. I’ll learn a lot and I’ll grow. I like that.

#5 Right now, I have less interaction with the folks over in MTeach than I should. This isn’t the only way to do it, but it does force the issue somewhat.

So, really, at the end of the day, I’m responsible for mentoring and teaching pre-service physics teachers, both formally and informally. Right now, I teach 3 courses that all pre-service physics teachers must take, meaning they get a high dose of “me”. Figuring out exactly how our “Physics Teaching Concentration” and “me” do, can, and should interface with the UTeach education minor is not straightforward. I can read the syllabus, talk to the instructor, sit in on a course, but I’m still left guessing as to a lot of the substantive ideas and meta-messages students experience along the way. Understanding how I can best leverage what they have and haven’t learned/experienced is important, and I feel like I’ll be able to do a better job. Second, having more classroom experience will be extremely useful for many of the same reasons. I will be able to broaden the perspectives I can provide in the courses I teach, but also have a more nuanced, realistic approach to mentoring.

Reasons Why I’m Hesitant

#1 I’m already (often) too busy, and this will make me busier. This has consequences for the time and energy I have to be engaged at home and at work.

#2 There is a need to initiate and maintain open lines of communication with instructors who teach the courses I’ll take. I’m trying to imagine what it’ll be like for colleagues to grade my assignments, evaluate my work, etc. How can I enter such situations in ways that reduce tensions?

#3 Am I really ready to do homework and take exams again? Jeez.

#4 How will/should situations be handled where I’m teaching students in my classes while simultaneously working on a project together in another class? There is potential for conflict of interest, and I have to think about that in concert with #2.

Fortunately, this semester I’ll just be dipping my toe in the water. It’s a one-credit course that meets 1.5 hours a week with in-class teaching experiences. Totally excited.

Final Thoughts: I’m wondering whether I’ll blog about my experiences here, or start another blog. And then, I’m wondering, should that blog be public or private?

Would love to hear what people think about all of this.

One of the biggest changes to my assessment plan this year is that I intend to bundle standards in intro physics. See, one of the tensions that exists in competency-based grading is what grain size of standards to use. Small-grain standards have the benefit of being very detailed and explicit about the concepts and skills students need to master. There are two downsides, however: logistically, it can create a problem where you have TOO many standards; learning-wise, it can create a problem where you’ve disassembled doing science into discrete skills so much that students aren’t really doing science anymore. Large-grain standards have the benefit of keeping the number of standards down (logistically) while also focusing on synthesis of skills (learning-wise). However, large-grain standards may leave students with feedback that is too broad and not targeted to the specific things they need to work on.

Josh Gates, who blogs over at Newton’s Minions, approaches this problem by bundling fine-grained skills and concepts into broader competencies. Students receive feedback on the fine-grained skills and concepts, but competency at the synthesis level is what matters at the end. I am by no means saying this is the only solution or the best solution, but it has particular affordances for my situation. Here’s why I think so:

• Since I don’t have control over curriculum coverage or pacing, I was always having to make compromises about which fine-grained standards were the most important to use. Students were getting practice and feedback on certain skills but not others. Even with this paring down, I still felt bogged down by having too many standards. Simultaneously, I had too many standards and not enough standards. Re-packaging, I think, will help, because shifting my grain size up or down alone wasn’t going to.
• Since 40% of the students’ grade is determined by high-stakes exams not written by me, the fine-grained standards I was using were helping to give students practice and feedback on underlying skills, but not enough on synthesis problem-solving. By bundling, I can make sure they are getting practice and feedback at the level they are expected to perform on exams, while also giving them feedback on the fine-grained stuff.

Here’s an example from Josh of how he’s bundled skills into a competency: students understand the balanced-forces particle model.

I’ll be spending the next couple weeks revising and bundling my old standards to better support student learning and better align with the implicit curricular coverage established by the third-party exams.

Participation in this summer’s conferences has really got me thinking about many things in physics education research. Here I’m going to begin writing about one of those things. To start this conversation, I want to talk about a poster from the 2013 Physics Education Research Conference.

Who We Study, Who We Teach

Steve Kanim from New Mexico State University

Steve analyzed physics education research publications from the American Journal of Physics and Physical Review Special Topics–Physics Education Research. The analysis was limited to publications that included actual student data (i.e., no discussions, opinions, sharing of best practices, dissemination-only papers, etc.). Steve finds that 75% of the students we study are enrolled in calc-based physics. This is disproportionate to the distribution of classes we teach–only 33% of the students we teach take calc-based physics. The population of students least studied are those in two-year colleges, who comprise 25% of the students we teach but less than one percent of the students we study. Students in algebra-based courses are also under-represented in our research.

Steve is careful not to overly criticize our community’s beginnings. Our field has grown, in part, due to the fact that our research has focused on how even our “best” students struggle to develop functional understandings of basic physics concepts. Rather than blaming our past, Steve’s analysis points to a gap we need to address now.

Steve also looked at this data by disaggregating studies based on the SAT MATH distribution. From this perspective, it still appears that we are studying students on the high end. For me (Steve did not say this), this is especially critical due to the correlations that exist between achievement tests like the SAT and poverty and correlations between poverty and race. It could easily be said that we have been focusing more of our efforts and resources on the privileged. Steve also mentioned some research, which I can’t remember right now, that has found that a SAT Math score of 600 is a threshold for achievement in upper-level physics.

Oh, Force Concept Inventory

Steve’s poster also referenced some research about the FCI, which has also got me thinking again about the Force Concept Inventory (FCI), and how the FCI relates to our field’s focus on the upper end. If you don’t know, the FCI is the most widely used assessment / evaluation instrument in physics education. When using the FCI, normalized gains are the most widely used method to report student learning outcomes.

The idea behind normalized gain is to “take into consideration” students’ pre-test scores. Normalized gain can be interpreted as the “fraction of gain that could have occurred.” For example: a student who starts with a score of 40% and ends with 70% gains +30% out of a possible +60%, giving a normalized gain of 50%.
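
The definition is simple enough to write down explicitly, which makes the worked example above easy to check:

```python
# Normalized gain as described above: the gain achieved, expressed as a
# fraction of the gain that was possible.

def normalized_gain(pre, post):
    """g = (post - pre) / (100 - pre), with scores in percent."""
    return (post - pre) / (100 - pre)

g = normalized_gain(40, 70)  # (70 - 40) / (100 - 40) = 0.5
```

Note how the denominator depends on the pre-test score, which is exactly why it is surprising that normalized gain still ends up correlated with pre-test scores.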

Despite this normalization, it appears that normalized gain can be strongly correlated with pre-test score (Coletta and Phillips, 2005).

Underlying this correlation are additional findings that normalized gains on the FCI are strongly correlated with student scores on the Lawson Test of Scientific Reasoning, and also with students’ SAT scores (Coletta and Phillips, 2007).

A potentially huge problem we have as a community is that we report normalized FCI gains without disaggregating these scores along such measures. I’d argue that this tendency is potentially dangerous, because it can lead us to make claims and offer implications for instruction that are distorted. For examples of how failures to disaggregate student achievement with measures of poverty lead to trouble, see Michael Marder’s prezi on Education and Poverty.

What can we do?

#1 We need Steve to publish his analysis of the mismatch between who we teach and who we study. This will enable those seeking funding to study under-represented populations to point to Steve’s research on the immense need for such research. It will also enable us to press funding institutions to create more parity in funding priorities. I emailed Steve this morning to offer encouragement and any help in making sure this happens.

#2 We need to begin, as a community, to publicize our own FCI normalized gains along with accompanying data that aids meaningful disaggregation. This is true not only for publications about research; it should also include standards of reporting to funding agencies, and even standards of reporting on blogs. For example, right now, my own institution reports normalized FCI gains from our algebra-based physics course to PhysTEC, and PhysTEC shares back scores from all PhysTEC-supported sites without disaggregation. I’ll start this process here: our normalized gains at MTSU for algebra-based physics hover just below 0.3, our SAT Math scores are in the 460-570 range, and our SAT Reading scores are in the 460-510 range. Note that this falls nicely in line with the graph above. Along this issue, we should really support the PER User’s Guide. Although not on the site yet, they are working hard to create an Assessment Database and Analyzer tool that will make it easier for everyone to upload, use, and interpret data in meaningful ways.

#3 Physics education researchers as individuals need to go out of their way to engage with more research concerning students who aren’t just down the hall. The disproportionate focus on calc-based physics and the severe shortage of research at two-year colleges is not malicious–it comes from convenience and a desire to improve our own local educational settings. Research-intensive universities are more likely to have students at the higher end of preparation and opportunity, and are also likely to have professors who have the time and resources to do research. Instructors at two-year colleges have the opposite situation–no time, resources, or support to conduct research, and students who are more likely to have had less preparation and opportunity. I emailed three community college physics instructors this morning to begin that conversation.

Eric Brewe: “We should think about the use of normalized gain. It overvalues gains made at high-end schools.”

Gasstation without pumps: “One question remains—why are students taking algebra-based physics? … Is the FCI the appropriate measure?”

Near-term Grant Writing Goals:

Internal Faculty Research Grant to support follow-up on work pertaining to this post stemming from an undergraduate’s thesis. (Sept 25th)

Internal Public Service Grant to support Physics Teacher Collaboratives (Oct 1st), which we are starting this fall.

Spencer Small Research Grant to support research and instructional efforts on Responsive Teaching in our physics pre-service concentration (Oct 15th)

Longer-Term: Career Grant Next Year

Near-term Paper Writing Goals:

C&I paper, with Natasha on a micro-analysis of development of knowledge for teaching (Aug/Sept)

Phys Rev.–PER paper, on varied meanings of “straight” in student discussions of light. (Oct/Nov)

AJP paper, when F ≠ -grad(U)? (Dec/Jan)

Longer-term:  TE paper(s) with Leslie

I just noted how differently these requests come off. Ultimately, it doesn’t seem to affect my final decision about reviewing, but it does affect my initial reaction.

Journal #1 Request

Dear [Potential Reviewer]

We are writing to ask if you’d be interested in reviewing a manuscript for [Journal] entitled [Article Name]. Based on your expertise, we feel that you would provide a thoughtful assessment of the manuscript’s strengths and weaknesses.

We hope that you agree to review this paper, in which case you will receive another email with instructions for accessing the manuscript using our on-line review system. We would need this review returned within [time frame]. If you are unable to review for us at this time, we would appreciate receiving names of other potential reviewers you would recommend. In order to expedite the review process, we ask that you please respond to this request within the next 2-3 days.

[Abstract Attached]

Journal #2 Request:

Dear [Potential Reviewer]

We would appreciate your review of this manuscript, which has been submitted to [Journal]. We ask that you return your report within [time frame].