United Teachers of Los Angeles (UTLA), which represents teachers at the nation’s second-largest school district, recently signed what some are calling a landmark agreement to use student test scores to evaluate teachers. Sixty-six percent of the 16,892 members who voted approved the agreement, the Los Angeles Times reported, bringing L.A. in line with Chicago, New York and many smaller cities in using student test data in teacher evaluations.
The Times is
calling it a victory for the union, because the agreement limits the use of
value-added measures (VAM), which supposedly measure how much a teacher
influences students’ gains or losses on test scores from year to year. Instead,
the agreement allows for the evaluation of teachers based on raw state test
scores and district assessments.
In reality,
this can hardly be considered a victory. The reason that VAM should be opposed
is that it cannot accurately or consistently attribute student progress on
tests to anything their teachers may or may not have done. When used correctly
(e.g., taking the average of students’ scores over three consecutive years and
factoring out variables outside the teachers’ control, like poverty), VAM scores
may be consistent for teachers at the extremes (i.e., the very best and very worst), but they are essentially useless for the vast majority of teachers who fall somewhere in between. Yet few school districts average scores over that length of time, particularly those on one-year evaluation cycles; even districts on two-year cycles rarely do.
Even when
VAM is not used, all use of student test data to evaluate teachers suffers from
most of these same problems. There is no scientifically accurate or consistent
way to know why a child’s test scores improve or decline over time. Socioeconomic status has a large influence not only on students’ baseline scores but also on how quickly they improve. School and community cultures and attitudes can affect how seriously students take the tests. Students’ prior teachers, prerequisite knowledge and school readiness influence how much they learn and how successful they are with future teachers.
The new
evaluation plan will also include the use of student and parent feedback and
teachers’ contributions to the school community, both of which are fraught with
potential for bias. Students and parents are not only untrained in evaluating teacher quality; they are not objective, either. They can and do give teachers negative feedback for vindictive and petty reasons, such as a teacher’s refusal to change a grade or insistence on holding a child accountable for school or class policies.
With respect to the “school community,” teachers are already expected to participate in committees and extracurricular activities, and several of the California State Teaching Standards address this. What remains unclear is whether this is just a reiteration of what already exists in the standards (in which case it need not be mentioned in the new agreement) or whether teachers are now expected to contribute even more time outside their teaching responsibilities. If it is the latter, evaluations could be biased in favor of the teachers who kiss up most to administrators or who volunteer most for administrators’ pet projects.
Not surprisingly, there was no consensus among L.A. teachers in favor of this deal. Only about half of the teachers participated in the vote, according to EdSource, with about 66% of those voting in favor, which means only about one-third of the district’s teachers actually approved
the measure. While it is impossible to know why turnout was so low, one can
speculate that many teachers were ambivalent to the point that they were
willing to accept whatever their colleagues decided. Indeed, Cheryl Ortega, the
union’s director of bilingual education, said she wanted to vote no, but voted
yes because she feared the state would mandate something worse.
The decision by UTLA to push this terrible deal likely influenced many members to support it, and it amounts not only to a sellout of its own members but also to a threat to all teachers in the state. That UTLA and LAUSD were at the bargaining table in the first place was the result of a recent court ruling that the state’s Stull Act required the use of student test data in teacher evaluations. However, not only was the court ruling flawed (the Stull Act requires the use of student data, not necessarily the use of high-stakes exams), but it was also an attack on collective bargaining, which is supposed to be between a union and its members’ employer (in this case LAUSD), not the courts or the state legislature.
Yet even if the Stull Act did mandate the use of high-stakes student test data, this would not justify UTLA rolling over and accepting it. The fact remains that student test data is an unreliable, inconsistent and inappropriate basis for evaluating teachers. Its use will likely lead to many good teachers receiving poor reviews and potentially losing their jobs, while doing little to identify, let alone remediate, bad teachers. Thus, it is also potentially detrimental to schools and children. UTLA, like the UFT and CTU in response to similar legislation in New York and Illinois, chose to take the easy, passive, risk-free road of obedience rather than fighting for the interests of its members or the interests of children.
Now that the
three biggest district unions have accepted the use of student test data to
evaluate their teachers without a serious fight, states will be emboldened to
shove it down the throats of other school districts.