A new study released last week by the Bill & Melinda Gates Foundation found that rating teachers by watching them teach is possibly the best way to help them improve, the Los Angeles Times reported this week. It also found that observing teachers infrequently (once a year or less) is insufficient, and that the observers, generally school administrators, often don't know what to look for.
Both findings should come as no surprise to teachers (except for the seeming about-face by the Gates Foundation, which has been at the forefront of merit pay, value-added assessment, and other attempts to undermine teachers’ job security). Not surprisingly, Gates is still calling for the use of student standardized test scores to evaluate teachers, along with a host of other measures.
The Times article highlighted Memphis, which uses a new evaluation system (supported by Gates to the tune of $90 million) in which teachers are observed four to six times annually by more than one evaluator, each of whom must pass a certification program. Teachers supposedly get detailed feedback on observations within seven days. However, only 40% of their evaluation is based on observations, while 35% is based on a value-added formula, 15% on other measures of achievement, 5% on student surveys and 5% on the teacher's demonstrated content knowledge. Thus, contrary to his Foundation’s own findings, Gates is funding a program that weights the best method for evaluating teachers—direct observations (40%)—lower than student achievement (35% + 15%).
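The arithmetic behind that objection can be made concrete. A minimal sketch of the weighting scheme described above (only the weights come from the article; the component names and sample scores are invented for illustration):

```python
# Weights from the Memphis evaluation system as reported by the Times.
weights = {
    "observations": 0.40,        # classroom observations
    "value_added": 0.35,         # value-added test-score formula
    "other_achievement": 0.15,   # other measures of student achievement
    "student_surveys": 0.05,
    "content_knowledge": 0.05,
}

# Hypothetical component scores on a 0-100 scale (invented for illustration):
# a teacher rated highly by observers but poorly by the test-score formulas.
scores = {
    "observations": 90,
    "value_added": 60,
    "other_achievement": 65,
    "student_surveys": 85,
    "content_knowledge": 95,
}

overall = sum(weights[k] * scores[k] for k in weights)
print(overall)  # 75.75
```

Note that the two test-score components together (0.35 + 0.15 = 0.50) outweigh the observation component (0.40), so a teacher's score is driven more by student achievement measures than by what evaluators actually see in the classroom.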
Observing teachers (or most other workers, for that matter) is often the best way to determine how well they do their jobs. Ensuring that the evaluator is well trained, knows what to look for (and how to observe it), and has ample time to make several observations also seems pretty obvious. However, the evaluators’ objectivity is seldom questioned, and this, too, has considerable influence on the accuracy of the observations and the effectiveness of the evaluations and feedback. Administrators have the power to hire and fire and are therefore biased; they should not be doing the observations, even if well trained. For example, if they want to get rid of a troublesome teacher who is too outspoken or active in the union, but who is otherwise a good teacher, they can still write whatever they like in their evaluations.
Training and hiring sufficient outside evaluators would be expensive and time-consuming and, under current economic conditions, very unlikely to occur, which is one reason for the popularity of value-added assessments. Since districts are mandated to give the high-stakes tests anyway, it costs them little to crunch the numbers and determine whether a teacher’s scores are improving. The problem is that test scores correlate more strongly with students’ socioeconomic backgrounds than with any other factor, including the quality of their teachers. Furthermore, a student’s ability to improve on the tests is also correlated with familial wealth. Thus, teachers in low-income schools are less likely to see substantial improvements in their students’ scores, even if they are exceptional at their jobs. Also, no one has yet come up with a reliable and reproducible value-added formula, with the consequence that some teachers rate well one year and poorly the next (see here, here and here).
The demand for value-added assessments of teachers is also a giveaway to the test manufacturers, the proponents of NCLB and the “Teacher-Effectiveness” Industry (see here, here and here). As long as administrators are required to use student test data to evaluate teachers, students will be required to take high-stakes tests, instructional time will be sacrificed to test prep and testing, and electives and other courses will be dropped to make room for more test prep.