Thursday, January 27, 2011

How do I plan for success?

Before I was a teacher, I was a computer programmer. It wasn’t a long career. An unsatisfying summer internship saw to that. But to this day, I am amazed at how much my programming experience impacts my teaching – especially when it comes to planning units.

My computer classes at Pepperdine University taught me to write programs using an approach called Top-Down Design. I start by developing a clear picture of what the program is meant to accomplish. Next, I put into place an output protocol that allows me to ensure that the program is working correctly. This is quality assurance. With this end in mind, I begin the process of writing code that meets the desired outcome.

Depending on the size of the program, this can be an arduous chore. Therefore, the Top-Down Design approach suggests thinking of the program as a series of smaller tasks. I simply write “black box” procedures or functions for these tasks and fill them in later. Each of these “black boxes” includes some output protocol that allows me to monitor the progress of my program and make adjustments as needed.

Now I am ready to write the code for the “black boxes.” As I make my way through writing the program, I execute test runs during which I check the outputs I put in to monitor the program’s progress. That way, when problems arise I can address them immediately. If I wait until the end to see if the program works, it is often too late or too unmanageable to make necessary corrections. (This reminds me of Tom Wujec's TED Talk.)

Once the program is complete it is time to put it to the test with real, messy data. Chances are there will be some bugs that I didn’t anticipate. Fortunately, if I wrote the monitoring outputs correctly, then I have feedback regarding where the problem is and what I need to do about it. More often than not, this results in a successful program that accomplishes its goals.
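The process above can be sketched in a few lines of Python. This is my own toy illustration, not code from those Pepperdine classes: the overall goal is expressed first, each sub-task is a small "black box" function, and every box prints what it did so a test run reveals exactly where a bug lives.

```python
# A minimal sketch of Top-Down Design. The task, function names,
# and data here are illustrative assumptions, not from the original.

def clean(record):
    # Black box 1: normalize one raw record.
    result = record.strip().lower()
    print(f"[clean] {record!r} -> {result!r}")   # output protocol
    return result

def score(record):
    # Black box 2: score a cleaned record. This might start life as a
    # stub (e.g., just returning 0) and be filled in later, once
    # clean() checks out.
    result = len(record)
    print(f"[score] {record!r} -> {result}")     # output protocol
    return result

def run(records):
    # Top level: the whole program, written first as a chain of
    # smaller tasks.
    return [score(clean(r)) for r in records]

# A test run with real, messy data. If something goes wrong, the
# printed outputs show which box misbehaved.
totals = run(["  Apple ", "BANANA"])
print(totals)  # [5, 6]
```

Once everything works, the monitoring prints can be removed or routed to a log, but during development they are the equivalent of checking the program's progress at every step.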

What does this have to do with unit planning? Those of you familiar with Understanding by Design probably see the connection between the Top-Down Design approach to programming and the essential elements of the unit planning design developed by Wiggins and McTighe. Anyone struggling with the connection between programming and unit planning might replace “program” with “unit,” “output” with “assessment,” “black box” with “lesson,” and “real, messy data” with “learners.”

If not, maybe this diagram will help. The summative assessment is the goal of the program. Formative assessments represent the output protocols that I will use to monitor learners' progress toward the goal. And the experiences are the lessons that I will develop to support this progress.

I learned to write unit plans after I learned to write programs, but I initially did not see the connection. I planned my units like I was planning a parade - one lesson after another until it was time for a test. And then the test only checked to see if the learners had paid attention to the parade. Planning with the end in mind from the start makes so much more sense and results in a much more coherent curriculum.

I hope you plan for and find success.

Thursday, January 20, 2011

Are We There Yet?

Assessment and evaluation are often used interchangeably, but the framework that I use when thinking about the Teaching-Learning Cycle sees them as two separate phases. Assessment is data gathering. Evaluation is analysis of data.

I was first introduced to this framework at a Learning Network Conference. The distinction made between assessment and evaluation was new to me and it took a while to get used to it. The intentionality of the split makes sense, however, as it allows me to concentrate on the task at hand. Once data are collected, then I can evaluate what they mean.

The Teaching-Learning Cycle’s perspective on evaluation also required a change in my thinking. As a math teacher, my typical evaluation approach was to judge things as right or wrong.  The Teaching-Learning Cycle reflects a more Vygotskian approach through a series of questions:
  • What can the learner do?
  • What is the learner trying to do?
  • What does the learner need to do next?
I see the first question as representing Vygotsky’s zone of actual development. The second and third address his zone of proximal development.

I came to appreciate this view of evaluation. I think it fits nicely into an analogy for making and using rubrics that I share with preservice and inservice teachers. The idea is that evaluation is like monitoring one’s progress on a trip from Grand Rapids, Michigan to Detroit. A version of the analogy rubric is shown below.



  • Grand Rapids: Because of a lack of understanding or effort, you spun your wheels and never really left Grand Rapids.
  • Where are we? You made some movement out of Grand Rapids, but it is unclear where you are headed.
  • It’s obvious you put a lot of effort into this trip, but you went in the wrong direction. You did not meet most of the intended goals.
  • You’re headed in the right direction, but there is still a lot to do before you reach your destination.
  • Detroit (but how): You made it, but I’m not sure how. There are gaps that make it difficult to follow your path.
  • Whoa! You overshot the goal. An occasional side trip is to be expected (encouraged even), but be aware.
  • You made it! You provide a unique, efficient, and insightful route from Grand Rapids to Detroit.
  • Not only did you make it to Detroit, but you were able to connect to ideas from class in order to make the trip memorable.

As the analogy suggests, any rubric ought to represent a taxonomy that supports the evaluator (teacher or learner) in determining where the learner is (what he or she can do and is trying to do) and in describing a path toward success (what comes next).

My wife, Kathy, uses a simpler but no less elegant rubric she developed with her first grade learners to support their efforts to engage in independent reading. The learners use the rubric to self-assess and self-correct during reading time. They want to be successful readers and now they have a road map to show them the way.

I think this is true of all learners. Unfortunately, in the past my evaluation methods reflected more of a sorting mentality than a supporting one. My rubrics measured product instead of progress and did not foster a growth mindset.

I have come a long way since then but there is still a ways to go. I would say I am in Lansing. At least now I am heading in the right direction.

Thursday, January 13, 2011

How's my driving?

A couple of years ago Kathy and I got our first hybrid car. When I sat in the driver’s seat for the first time I was mesmerized by all the electronic gauges in front of me. Two gauges drew my attention. The first was a visual display representing my current miles per gallon (MPG) while the other provided a constantly updating MPG tank average. In no time, these gauges influenced my driving as I tried to keep the tank average above 40 MPG.

The first chance I got, I shared with my colleague John Golden my enthusiasm for monitoring the MPG gauges and how they were affecting my driving. As I recall, he wondered aloud how the current MPG gauge worked. It was interesting to think about but I did not spend much time on it. I trusted it. After all, I was using less gas.

As so often happens, the conversation morphed into us talking about teaching. I saw the gauges as a metaphor for assessing – gathering data about learning. I said something like, “Wouldn’t it be wonderful if our learners came with displays that would give us immediate feedback on their level of interest or understanding?” I do not remember where the conversation went, but the mental picture of my learners with engagement or understanding gauges prominently displayed stuck. While they are not gauges, I have found several approaches to serve as substitutes.

I do not remember where I got the idea for using green, yellow, and red cups as a way to monitor learners’ progress but a presentation by my colleague Jacque Melin comes to mind. Learners use the cups to represent their current level of progress – green means go (“I have no questions”), yellow means caution (“I have questions but I’m still moving forward”), and red means stopped (“I have questions that have stalled my progress”). During activities, I watch (and listen) for the changing cups and know when I need to offer additional support.

Fisher and Frey (in Checking for Understanding) describe a method where learners hold up fingers, from zero to five, to communicate their current understanding of the topic being discussed. I occasionally use this approach when I want finer detail than the cups can offer. “Great, you are at green. But do you really understand what you are doing?” Using this assessment, I can gather information about whether or not I need to adjust the lesson to meet my learners’ needs.

I rarely use this final approach, a modification of Fisher and Frey’s, but I find it very powerful. Last semester, I used it in an introductory education course.  The learners were not engaging with some of the content and I wanted to find out why. So one day I introduced an Engagement Monitoring Sheet. After each workshop, they reflected on their level of engagement during the different phases of the workshop. At the end of the class, they looked for any patterns in their engagement and wrote a reflection.

I collected their Engagement Monitoring Sheets and found two interesting things that changed the way I taught the course. First, many learners struggled to engage after taking a break. We talked about this and I offered strategies they might use to re-engage. Throughout the remainder of the class, I noticed many learners using these strategies coming out of break. The second observation was that it was difficult for them to engage if they did not know what to do. In other words, if I was too vague then they disengaged. From then on, I concentrated on the clarity of my instructions and asked others to repeat them to ensure that they had understood. By altering my teaching, learners’ level of engagement improved.

Yes, I know that these approaches rely on the learners’ ability to self-assess their own understanding or engagement and then report it honestly. Learners may need help in calibrating their gauges and feeling safe sharing their assessments. But if I can trust my car, then I can trust my learners. Anyway, the proof is in the results. And I do like it when, at the end of a trip, my car evaluates my driving as EXCELLENT!

Thursday, January 6, 2011

What's in a Name?

Learning, thinking, knowing and understanding are significantly enhanced when one is provided with opportunities for 'talking one's way to meaning'.
Brian Cambourne

I decided to blog after a discussion with a colleague in GVSU's College of Education. He explained that blogging gave him a platform to share those ideas that he found himself repeating in class after class. I could relate since I, too, have my favorite "stories" and "sayings" that I share every semester. Being a mathematician, I liked the efficiency associated with this approach to blogging. As I thought more about it, however, I realized that it offered more than that. A blog would provide me the chance to write my way to understanding.

That is where Scape comes from in my blog's name. I want to try to understand the educational landscape. I agree with Cambourne that trying to share my perspective with others consolidates my thinking and enhances my understanding. This blog is not intended to change the reader's thinking, just the writer's. (Here is an example.)

And that is where Delta comes in. Delta is the mathematical symbol for change. (It is also the Greek d, as in David - some things aren't that deep.) I am in the habit of beginning each class by apologizing to my learners. I'll teach the class better next time because of what I learn from my interactions with them and from their feedback. I remind them that they are free to take the class next year - when it is improved. No one takes me up on that, but it sets the tone that I expect to grow as an educator.

So that is why I chose Delta Scape as the name for my blog. Writing it will hopefully sharpen (and change) my perspective of teaching and learning. Feel free to peek in to see what I've learned. Or wait until next year, when I'm sure it will be better.