Tag Archives: problem solving

How you do math is, well, not how you do math

Math is an important tool in life, but the mechanics of math, the way it is often taught, is not actually the way it is typically applied. Probably because our brains aren’t wired like computers.

Here’s an example – you are the cashier at a grocery store. A customer buys $43.37 worth of groceries and hands you a $50 bill. What do you give them in change?

In math, you would find the difference, $50 – $43.37 = $6.63, then count the change out as a $5 bill, a $1 bill, two quarters, a dime and three pennies.

A real cashier, however, counts up: from $43.37, three pennies makes $43.40, a dime and two quarters make $44, then $1 and $5 make $50. They don’t need to know the total change amount is $6.63, because the goal is to hand over change, not count it.
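The cashier’s counting-up strategy is itself a tidy little algorithm. Here is a minimal Python sketch of it – the denominations and the coins-then-bills logic are my own modelling of the process, not anything more official:

```python
def count_up_change(total_cents, tendered_cents):
    """Count up from the bill total to the amount tendered, the way a cashier does.

    At each denomination, keep adding coins until the running total lands on a
    multiple of the next denomination up (so bigger coins and bills can take over),
    or until the next denomination would overshoot the tender.
    Amounts are in cents to avoid floating-point rounding problems.
    """
    denoms = [1, 5, 10, 25, 100, 500, 1000, 2000]  # pennies up to $20 bills
    current = total_cents
    handed = {}
    for i, d in enumerate(denoms):
        next_d = denoms[i + 1] if i + 1 < len(denoms) else None
        while current + d <= tendered_cents and (
            next_d is None
            or current % next_d != 0
            or current + next_d > tendered_cents
        ):
            current += d
            handed[d] = handed.get(d, 0) + 1
    return handed

# $43.37 paid with a $50 bill: 3 pennies, a dime, 2 quarters, a $1 and a $5
print(count_up_change(4337, 5000))
```

Notice the algorithm never computes $6.63 anywhere – just like the cashier, it only knows the running total and when it has reached $50.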

Likewise, when solving division problems we don’t tend to do division in our head, we do multiplication. When asked how many nickels make up 45 cents, we don’t think 45/5 = 9. We think what times 5 = 45? It seems that we are programmed to think forward, and that thinking backwards is really difficult to do, and when we do think backwards, we are actually thinking forward in a series of backward steps (like the lagging strand of DNA, if I may be so nerdy).

I think this is really important to recognize. In an earlier post on my approach to problem solving, I talked about the necessity of working backwards from the answer to ensure you know where you are going, and that experienced problem solvers do this without even thinking about it. So when I teach problem solving, I not only try to model this as explicitly as possible for my students, but I also teach the metacognitive side – to get the students thinking about their own thinking process when problem solving. To expose the man behind the curtain, as it were.

Certainly to be good in science one has to have a handle on evidence-based problem solving, both inductive and deductive. I feel that teaching metacognition is a way to help students develop those skills. It also helps me fathom the workings of the teenage brain.


little insights

I am always on the lookout for little insights into how students think, and this is an example of how students can perceive the answer on paper to be the end goal, rather than the learning. My grade 9s are working on Astronomy, and they were answering some questions. The first question was:

What is an Astronomical Unit? What is it in km?

They correctly put that it is the distance from the Earth to the sun, approximately 150,000,000 km. The third question was:

Space probes can travel at about 30,000 km/h. About how long would it take to fly directly from the Earth to the sun?

The response? “How are we supposed to know how far away the sun is?”

One advantage I am finding with flipping the classroom is that these things arise when I am around to help…

The Sorcerer’s Apprentice, or Never Use a Formula You Don’t Understand

In my grade 10 Science class I recently gave my students an introductory microscope lab, and in my haste I used a “canned” lab from a textbook. Although there are some good activities in this lab, students are presented with a number of equations for determining FOV and magnification, including:

These equations, at face value, are straightforward – in other words, students can plug in the numbers and get an answer. But there is something subtly insidious about them – they are just confusing enough that students are unable to apply these formulas correctly later. Why? Because they are overly scripted, making the calculations look more complicated than they are, implying that without the formulas, students would not be able to achieve the “correct” answer. They build a reliance on formulas rather than concepts – and using formulas without knowing what they mean can lead to trouble, much like poor Mickey’s spell in The Sorcerer’s Apprentice.*

So after an abysmal assessment (which was in part a setup – I could see they were becoming formula dependent), I gave them the following question:

Both images represent the view through the same microscope, with exactly the same settings. How big is the object in the second image?

Their first question? “Which formula do we use?”

My response was a shrug.

I watched as they struggled – one or two figured it out pretty quickly, but others tried dividing the object width (~12mm) by 7 (and some by 8!), some multiplied by 7, some divided by 40 (the circle diameter), but it was clear they were searching for a magic formula. Some, after scowling for a good long time, finally asked “which units do we use? Millimeters or UM’s?” (Aaaagh! That’s not a U! That’s a µ!)

It was challenging to subtly hint at how to simply measure the object without “giving” them the answer, because I didn’t want them to revert to the mindset of me, the teacher, as the sole gatekeeper of knowledge. Eventually they worked it out. Some estimated, some marked off the length of the object on a pencil or sheet of paper and held it to the millimeter scale, and the cleverer ones borrowed a friend’s sheet and held them together in front of a light. (And those that just used someone else’s numbers, well, I had multiple versions of the sheet, so they invariably had to redo it anyway!)

The next question was a bit more involved. I said the view in the image above was through a microscope with a 10x ocular and 2x objective. I then asked what the FOV would be using a 20x objective. Despite my earlier warning to stay clear of equations for this exercise, I saw many pulling out the equations from the previous lab. And that’s where they really got into trouble…

Numbers were thrown willy-nilly into the equations in the hope that somehow they were correct. Several students, despite correctly identifying the magnifications as 20x and 200x, wrote out

40 / 200 = 7mm / x

When I asked where the 40 came from, they said “low power on a microscope is 40x”.

“All microscopes?” I asked. That threw them.

Eventually I helped them work out that the higher magnification was ten times the lower magnification, so the view would be zoomed in ten times as close, and the FOV would therefore be 10x as small. (This is itself a tricky concept – students are tempted to reason that 10 times the magnification means bigger, so the FOV must be 10 times bigger.) For most it eventually clicked that 10x the magnification means the field diameter is 10x smaller. Simple, and no formulas to memorize.
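The no-formula reasoning boils down to a single proportion. A quick sketch with the numbers from this exercise (the 7 mm field diameter is the measured value from the question):

```python
# Field of view shrinks in proportion to the increase in magnification.
low_mag = 10 * 2     # 10x ocular * 2x objective = 20x
high_mag = 10 * 20   # 10x ocular * 20x objective = 200x
fov_low_mm = 7.0     # field diameter measured at low power

# Ten times the magnification means one tenth the field diameter.
fov_high_mm = fov_low_mm * low_mag / high_mag
print(fov_high_mm)  # 0.7 mm
```

That one line of arithmetic is the entire “formula” the students were hunting for.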

It was remarkable, in a way, that a simple set of four of these questions took them a full 80 minute period – but that was mainly because I wouldn’t let them get away with wrong answers. One could call it a waste of a period, but I would not. It was absolutely necessary.

This is exactly the kind of thing Eric Mazur talks about. I will definitely be doing more of these exercises in the future!

*I mean the Fantasia version. Though that scene is included in the recent Nicolas Cage film.

Eighty minutes well spent

Eric Mazur gives a terrific, evidence-based explanation of what is wrong with lecturing as a primary source of knowledge transfer, and what to do about it. I really like his explanation, about 51 minutes in, that the better we understand the material, the harder it is for us to teach, because we become more removed from what it was like to learn it the first time.

A Physics Fairy Tale

Once upon a time, in a magical Physics kingdom, there lived an evil teacher who gave resistor network problems like this:

The Evil Physics Teacher made his students solve all kinds of complex resistor networks.  It was time consuming, but the Evil Teacher thought it was worth it because it proved their ability to manipulate Ohm’s law, Kirchhoff’s laws, and the resistor formulas.

And then one day, a student asked a simple question:

“What does this circuit do?”

The Evil Physics Teacher paused, and then somewhat sheepishly admitted that the resistor network didn’t actually do anything.

“Can you show us a resistor network that does do something? Like one that an electrical engineer would actually design for something?”

Again, the teacher paused. Um, no, he explained. Everything has resistance, so it is important to be able to figure out total loads, but no one actually designs resistor networks, per se.

“But like, in a house everything is in parallel, right? So current loads would just be the sum of individual currents. Nothing is ever in series in a house.”

The teacher thought about that, but did not reply. Instead, at the end of the day he went home to ponder. Perhaps, just perhaps, there was something to this thought. Resistor networks did allow students to exercise their understanding of electricity, but in an overly complex, unrealistic and – let’s face it – borderline sadistic way. And so the Evil Physics Teacher mended his ways. He began providing simpler questions that still exercised the same skill sets without consuming a full period for a single problem.

As a result, learning the material was quicker and more readily attainable, and there was more time to move onto richer and more exciting things in the curriculum.

And they all lived happily ever after.

(PS – some parts are truer than others…)


Failure, Prototyping and Angry Birds

The other day I had a thought, and I tweeted it. That thought was

In learning, failure is not an option. It’s a *necessity*.

Now, that may seem somewhat trite, but it is also true. If all we ever do is succeed, we have never pushed our own limits. But I think more importantly, students are terrified of failure in any form. It is an important step – and a very liberating one – when students realize they can do poorly on something without it being the end of the world. And when they start to realize that it is a stepping stone to future success, that’s when they take off. I see this every year when my AP Physics students get back a practice exam with a raw score of 50%. After they have a good cry, they realize they have lost nothing, and gained considerable experience, and move on. But why should grade 12 be the first time they encounter this?

So I considered the idea of a small project that would require prototyping. The object of a prototype is to fail. That is, it is to test the limits of a design so it can be improved. By forcing successive failures that lead to gradual improvement, this mindset can be modeled. A bit of real-world context could help too: WD-40, for example, is almost as ubiquitous as duct tape, and yet the 40 means they tried – and failed – 39 times before they hit on a working formula.

But then it occurred to me that another common example of repeated failure leading to success is Angry Birds. No one ever ever uses up their birds and says “I’m a failure at this game”. Ever. Experienced players will even launch birds at specific targets in an attempt to discover weaknesses in the structure, which is intentional failure with a purpose.

I don’t know where I will go with this idea yet, but I think it is a powerful one, and worth mulling over.

DIY personalized, randomized assignments

I like giving students randomized assignments. That is, I like assignments where they all get the same questions, but different numbers. That way, they can collaborate, but have to ask “how did you solve that question?” rather than “what did you get for that question?”

Years ago I tried out Webassign, and I liked the way it built algorithm-based questions. But I don’t have it, and I don’t have money in the budget to pay for it. We use Blackboard as our LMS, and it too has algorithm questions, but the format is kludgy at best. It uses a Java-based visual equation editor rather than plain text entry, and it has issues with parsing complex equations. It is so cumbersome I gave up on it. I have Wiley Plus with my senior Physics textbook, which is great, but it doesn’t help with any of my other subjects.

In terms of online resources, I am short of easy and cheap options. But who said it had to be online? I have plenty of tools available right on my laptop that will do the job just fine! Here is how I create personalized, randomized numerical assignments for my students:

Basically, I create a set of questions, then use a spreadsheet to generate random numbers and solutions for those questions. Mail merge tools can then be used to import the randomized numbers into the questions. I use Excel and Word, but any spreadsheet and word processor should do the job.

Let’s step through the process, examine the formulas, and then you can make your very own! To begin, we need a question. For this demo, we will pick something fairly simple. Normally I include 5 or 6 questions in this type of assignment, but we will use one to illustrate. How about

Three light bulbs are connected in parallel.  The first has a resistance of [R1]Ω, the second has a resistance of [R2]Ω and the third has a resistance of [R3]Ω.  The total voltage for the power supply is [V]V. What is the current in the circuit?
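The spreadsheet will eventually need an answer formula for this question. For resistors in parallel, the total current is just the sum of the branch currents (equivalently, V divided by the reciprocal-sum total resistance). A quick sketch, with made-up values standing in for the placeholders:

```python
def parallel_current(v, resistances):
    """Total current drawn by resistors in parallel across voltage v.

    1/R_total = 1/R1 + 1/R2 + ..., and I = V / R_total, which is the
    same as summing the individual branch currents V/Ri.
    """
    return sum(v / r for r in resistances)

# Hypothetical values for the [R1], [R2], [R3] and [V] placeholders:
print(parallel_current(12.0, [10.0, 20.0, 30.0]))  # ≈ 2.2 A
```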

Now we have a question, we need to set up our spreadsheet. I set mine up like this:

Then I decide on my max and min values, and enter them.

Then comes the magic:

To explain briefly, this formula grabs the max and min values, and generates a random number between the two, and rounds that number to the number of decimals required. The dollar signs in front of the cell numbers indicate that the formula must always grab those top three numbers for calculating, that way I can copy the formula anywhere, and it will always work.  Here is what it looks like copied to the other cells:

Note that the column (letter) changes in the formula, but the row (number) remains the same. That’s the dollar sign at work.

Next we need a formula for the answer:

Now we are almost ready, but there is one last step. The RAND() function will regenerate random numbers every time the spreadsheet is opened, and every time you modify a cell (note how the values are different in the last two images). So if you want a static set of numbers that will remain the same no matter how many times it is accessed, you need to create one. I copy the Excel table (minus the top three rows) and paste it into a Word document. Really, any type of document would do: Word can use a spreadsheet, text file, Word document, database, or just about anything as a data source.
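For anyone who prefers a script to a spreadsheet, the whole generate-round-and-freeze step can be sketched in Python. The variable ranges, student names, and output filename below are just demo values of my own, not part of the original workflow:

```python
import csv
import random

def randomized_row(name, specs):
    """Generate one student's rounded random values from (min, max, decimals) specs."""
    row = {"Name": name}
    for var, (lo, hi, decimals) in specs.items():
        row[var] = round(random.uniform(lo, hi), decimals)
    return row

# Min, max and decimal places for each placeholder in the question.
specs = {"R1": (5, 50, 1), "R2": (5, 50, 1), "R3": (5, 50, 1), "V": (6, 24, 1)}

rows = []
for name in ["Alice", "Bob"]:  # hypothetical class list
    row = randomized_row(name, specs)
    # Answer formula: current in a parallel circuit is the sum of branch currents.
    row["Answer"] = round(sum(row["V"] / row[r] for r in ("R1", "R2", "R3")), 2)
    rows.append(row)

# Writing to a file "freezes" the values, just like pasting the table into Word,
# and the CSV can then serve as the mail merge data source.
with open("AssignmentXstaticNumbers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Name", "R1", "R2", "R3", "V", "Answer"])
    writer.writeheader()
    writer.writerows(rows)
```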



Now we turn our original question(s) document into a mail merge, define the static data table as the mail merge source, and insert the merge fields for the placeholders we left in the question.  We can even insert the student’s name, to personalize it:

And for me, I use a separate mail merge document to generate a table of answers for each student into a master answer sheet:

(For organization purposes, I tend to name my files similarly, like AssignmentX.doc, AssignmentXnumbers.xls, AssignmentXstaticNumbers.doc, AssignmentXanswersheet.doc, and place them in a folder of their own. )

On the assignment sheet itself, I usually include two rows of answer boxes, labeled “1st try” and “2nd try”. If my goal is to get students to understand how to solve these problems, I want them to retry any questions they get wrong. Having answer boxes on the sheet means scoring (marking, grading, whatever term you like) the assignments is easy – they hand me the assignment, I check the answers against my master answer sheet, record their score on my sheet, and hand it back. Maybe 20 seconds per student.

These assignments have a lot going for them – they allow students to collaborate, they offer immediate feedback, they allow students to correct – or at least attempt to correct – their mistakes, they are kind of fun to make, and they are dead easy to mark (grade, score, whatever…). There is some up-front time creating them, but they save loads of time at the other end, and once you have them, year after year you can just drop in the names of your new students, and prep time is minimal. It is used offline – no risk of network issues – and it is free. Even if you don’t have Word and Excel, you can do the same thing in OpenOffice, which is free. So all around, this type of assignment has lots going for it. Really, the only shortcoming is that these are not open-ended inquiry assignments – but then, they are not intended to be the only type of assessment, just one of many.

Try it, and let me know how it works out. Also, if there are any GoogleDoc wizards out there who might know how to create an online version, I would love to hear from you!

Microsoft Mathematics

This week, Richard Byrne posted a note about Microsoft Mathematics on Free Technology for Teachers. Curious, I downloaded it and have been playing with it. Although not intuitive at first, it quickly becomes more so as you use it. It looks like an excellent tool for Science students, as it can streamline some of the more lengthy, time consuming calculations that suck up class time, like solving quadratics and systems of equations.

There is always the concern that if the computer does all the work, the students never learn it, but there are a few good counter-arguments. First, understanding when, how and why to use the math is arguably more important than the actual mechanics. Second, students who know their way around a graphing calculator can do this anyway. And third, Microsoft Mathematics can show the steps:

This in itself could be useful for helping students to learn how to perform these solutions, and allow students who struggle with the math to still succeed in Science. I think this could become part of my regular repertoire. Thanks Richard!

How Do You Measure That (HDYMT)?

Consider the following question:

A projectile leaves a cannon at 25 m/s. If the barrel of the cannon is 1.0 m long, a) what is the average acceleration of the projectile? b) If the projectile has a mass of 200 g, what is the average force on the projectile while it is in the barrel of the cannon?

That is a fairly typical physics problem, but as we have already discussed, it suffers from a surplus of explicit information, transforming it from a problem to a calculation. Plug’n’chug rather than critical thinking.

I prefer to present the problem this way: bring out a cannon, shoot it, and ask students to determine the average force on the projectile in the barrel. In my case, I have an air cannon that we built years ago that still serves me well. It is made of ABS pipe, and can shoot a squash ball or ping-pong ball. Following the GAP method of problem solving, it is a fairly straightforward process to work backwards to solve the problem:

  • We need force F
  • F = ma
  • m is measurable with a balance.
  • Solving for a requires 3 variables. 1) Initial speed in the cannon is 0; 2) Length of the barrel can be measured with a metre stick; 3) Final speed is the speed of the ball as it leaves the cannon.
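Run with the numbers from the textbook version of the question, the backward chain checks out. Since the initial speed is zero, the kinematics equation v² = v₀² + 2ad reduces to a = v²/2d:

```python
v_final = 25.0  # m/s, muzzle speed from the textbook version of the question
d = 1.0         # m, barrel length
m = 0.200       # kg, projectile mass (200 g)

# v^2 = v0^2 + 2ad with v0 = 0, so a = v^2 / (2d)
a = v_final**2 / (2 * d)
F = m * a
print(a, F)  # 312.5 m/s^2 and 62.5 N
```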

So the entire problem can be solved once we know the speed of the ball. But how do you measure that?

At lower speeds, video analysis works reasonably well. But as the speed increases, the video frames become increasingly blurred and harder to measure accurately. And who wants to stay at low speeds? After some experimentation, we eventually settled on using a strobe light at 100 flashes/s recorded on video. We then extracted the few frames that captured the event – each with 3-4 strobed images – and stacked them in Photoshop. Here is a sample result:

With a metre stick in the frame, we could determine the distance between images of the ball, each of which was 0.01 s apart. Measurements could be made manually, or with the assistance of Tracker. Manually, students can use a ruler to measure the distance between ball images and compare that to the metre stick scale – and this can be done on paper, or even on a computer screen. Using Tracker, the scaling can be done automatically once the metre scale is set (Tracker will assign 10 frames to a single still image – for more frames, the same image can be loaded multiple times, one for each frame needed).
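Once the positions are read off the stacked image, the speed calculation itself is just distance over the strobe interval. A sketch with illustrative positions (these sample numbers are mine, not the actual lab data):

```python
# Ball positions read off the stacked strobe image against the metre stick.
# These sample positions are illustrative, not measured values.
flash_interval = 0.01  # s, strobe at 100 flashes per second
positions_m = [0.12, 0.33, 0.54]  # successive ball images along the metre stick

# Average speed over the gaps between consecutive strobe images:
gaps = [b - a for a, b in zip(positions_m, positions_m[1:])]
speed = sum(gaps) / len(gaps) / flash_interval
print(speed)  # ≈ 21 m/s for these sample positions
```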

Once the launch speed is determined (and this itself is a touch tricky – the ping pong ball experiences quite a bit of air drag), we just follow the steps of the GAP method back to the answer.

Interestingly, the first time I gave my students a still image problem like this with a metre stick in the image for scale, they were completely baffled. Even though scaling images is something they have done in both Geography and Math, they seemed at a loss applying those skills in Physics. They kept asking for the formula to scale the image. That tells me I need to do a lot more of this sort of thing!

Tracker is Awesome.

I have written before about the importance of measurement, and the importance of authenticity. Of course, these raise the question of how one can produce accurate measurements of real-life events in order to analyze them.

I have used probeware – I still have five sets of probes (with awkward serial interfaces) in my classroom, but I rarely use them anymore, at least for any experiments involving motion. Probes have the advantage that they provide immediate graphical representation of events, but they don’t require the thinking that goes on with measurement.

These days, my tool of choice is Tracker, created by Douglas Brown, a retired Physics instructor at Cabrillo College in California. Tracker, as its name suggests, is video analysis software. But it is much more: it can model dynamic systems and superimpose these models on a video, it can plot vectors, track rotation as well as x and y displacement, track objects in a moving frame (i.e., for hand-held or panning videos), track intensity changes in space and time to get brightness curves, and export individual frames from a video. It also has a built-in, comprehensive set of graphing and analysis tools, or the data can be exported to a spreadsheet. And here’s the best part: Tracker is free (under GPL), and multi-platform. In short, Tracker is my new BFL (Best Friend in the Laboratory).

Only a few short years ago, video analysis software was costly – at least for a site license – and it was awkward to capture and download video. Now, most students have a phone or iPod that will capture video that can be downloaded and analyzed immediately. Which means students can set up an experiment, video it, and analyze it immediately.

Tracker will track a point automatically if it is sufficiently high contrast, otherwise one must track the points manually – which has the advantage of having to decide exactly which points to track, how many frames per data point, and a variety of other variables that require thinking and experimenting. It thus provides a free, easy to use, rich analysis tool that still requires critical thinking and student involvement. You can see why I like it so much!