Saturday, January 13, 2007

Professional Development Rubrics

There seems to be some conversation about the right term for these (rubrics, scoring guides, continuums, etc.), but I'm sure we are all picturing the same table of headings describing a scale from not-good to great.

In the business world, and somewhat in education, they are also called Behaviorally Anchored Rating Scales (BARS). I'm adding one word to that, making it Data-based Behaviorally Anchored Rating Scales (D-BARS). If you ever see that somewhere else, you can say you know where it started.

If you're adopting, amending, or writing your own D-BARS, there are some errors to avoid lest the outcome be less than helpful to the observer and the observed.

A very common error has to do with creating a continuum of behavior indicators. Across the top of these documents is a scale that progresses from one extreme to the other, conceptually with no gaps or overlaps between the divisions. The behavior indicators (the physical, observable behaviors that exemplify each division) should form the same kind of continuum. As I look at these documents from across the country and the world, one of the most common errors is that the actual behavior used as an indicator changes from one division/cell to the next. It shouldn't. What should be described is a single behavior across the continuum, poor to great. An example:

The target standard/behavior is "Teachers involve and guide all students in assessing their own learning." The category headings are Unsatisfactory, Emerging, Basic, Proficient, and Distinguished.

The behavior indicator for the Unsatisfactory level is "Students do not assess their own learning." That's a clear statement, but there is more to the unsatisfactory level than no assessment at all. Students might be assessing themselves once a year, unguided, inaccurately, using the wrong criteria, and so on. The descriptor for Unsatisfactory should describe that whole range of indicators, all of which are unsatisfactory.

The next level, Emerging, has this as a behavior indicator: "Teacher checks student work and communicates progress through the report card." This indicator is unrelated to students assessing their own learning, and it doesn't give whoever uses the D-BARS the guidance needed to clearly distinguish Unsatisfactory from Emerging. The statement might fit well in a target standard related to communicating progress to students, and within that standard it might well belong in the Emerging category.

Perhaps (and this is brainstorming; collaborative discussion is needed)...

Emerging could be "Students are asked to state/guess what their grade on an assignment will be" or "Students are asked to grade each other's papers without the use of a scoring guide." [Students are assessing their work in relation to grades, and with little guidance.]

The Basic category could be something akin to "Students are assessed by the teacher according to a scoring guide and asked to describe why they agree or disagree with the grade." [Students are asked to apply the scoring guide in their reflection, but do not actually self-assess.]

A descriptor for the Proficient level might be "Using a teacher-provided scoring guide, students are asked to assess their work before they hand it in to the teacher." [Students assess their own work according to a scoring guide.]

And finally, the Distinguished level could read "Using collaboratively developed (teacher and student) scoring guides, students are engaged in self- and peer assessment of progress toward meeting the standards." [Students are engaged and guided in the process of creating the criteria, and then applying those criteria to themselves and others.]

I hope there will be discussion about my choices and wording; this example is meant only to illustrate the need for a continuum in the described behavior indicators.

The next step might be to think about which keystone observable behaviors should be tracked to gather data on "involving and guiding students in assessing their own learning." Is it the amount of time students are engaged in assessing learning? The number of references to standards made by the teacher? The number and/or level of questions asked by students related to assessment and standards? Every observer who makes a determination of level does so on the basis of something they see. We need to come to consensus about what's valid and reliable.
