Tuesday, October 11, 2011

Educational Pseudoscience and Reform Mumbo Jumbo


In order to make educational reforms seem more compelling, advocates attempt to serve them smothered in data. The problem is that very little educational research is done with double-blind, randomized trials or adequate controls, calling into question the validity of the data collected, as well as the assertions made in their support. Furthermore, many of the “self-evident” and “obvious” assumptions and premises upon which these reforms are based are simply not true.

As a consequence, there are numerous popular reforms being imposed on teachers and children, often at great cost in time, stress, money and resources, without good reason to believe that they will improve learning outcomes.

Fortunately, the scientific community is now jumping into the educational reform fracas, with a number of good critiques of some of this bogus research. Science, one of the world’s preeminent scientific journals, has been particularly good about covering educational reform research, especially since Bruce Alberts took over as editor. (Alberts, by the way, has been a strong supporter of K-12 science education and has initiated or supported a number of great programs, including the Science Education Partnership, which matches SFUSD science teachers with professional scientists for summer internships where they learn new techniques and develop curriculum and lab activities for their students.)

Single-Sex Silliness
A lot has been made of the “benefits” of single-sex (SS) schooling. We have all heard about how boys dominate the discussions in science and math classes or how boys lack the “innate” communication and writing skills of girls. If such assumptions were true, SS schooling might make sense as a way to encourage and promote girls’ science and math skills, or boys’ literacy skills. However, not only is there no validity to the premise that boys and girls have innate cognitive differences, but there is also no evidence that SS schooling improves their learning outcomes.

In “The Pseudoscience of Single-Sex Schooling” (Science, September 23, 2011), Diane Halpern and her colleagues examine the research on single-sex schooling and show that there is no evidence to support SS schools or classrooms. According to Halpern et al., studies in Great Britain, Canada, Australia and New Zealand all found little difference between single-sex and coeducational schooling. They argue that SS schooling has been justified by cherry-picked and misinterpreted data, anecdotes, sample bias and other methodological flaws. More importantly, they provide evidence that sex segregation actually increases gender stereotyping and legitimizes institutional sexism.

One common flaw in studies of SS education is the assumption that students in SS classes are equivalent to those in coeducational classes. However, students in SS schools are often more advanced academically than their peers in coed classes, thus inflating performance outcomes and providing an invalid comparison. Underperforming students also tend to transfer out of SS schools early, further skewing the data.

Bias is Implicit in Most Ed Reform
There are also biases inherent in studies of single-sex schools (as in all reform programs). One example is the tendency of participants to be motivated by the novelty of the program and by their belief in the innovation.

The novelty bias is implicit in virtually all educational reform efforts. This bias is intensified when reforms are co-designed by teachers themselves or when there is a strong consensus in support of initiating the reform. I remember several years ago being involved in the creation and implementation of Smaller Learning Communities (SLCs) at a high school where I was teaching. We put in weeks of full-time work during the summer and worked incredibly long hours during the school year in order to get the program up and running. When we realized that our students were not performing any better, despite the “obvious” and “common sense” reforms of smaller class sizes, increased collaboration, and greater personalization of educational services, we volunteered to do even more extra work during evenings and weekends to help these students succeed and make the program a success. We also tried things we never tried with our students prior to the implementation of SLCs, like allowing all students to retake all exams and rewrite all essays and lab reports for full credit.

All of these “extras” invalidated any meaningful comparison. It was impossible to tell why the students were performing the way they were. Was it the structure of SLCs, the increased personalization, the smaller class sizes? Or was it the extra long hours the teachers put in or the test corrections and rewrites and other “extras” that the SLC students were entitled to, but which were denied to the non-SLC students?

Another confounding factor is that reforms often attract a particular clientele and very rarely enroll a truly random sample of students. When parents hear about smaller class sizes, for example, many jump at the opportunity to enroll their kids. However, affluent parents are more likely to have the time, self-confidence and knowledge of how to “work” the system to go through the process, thus skewing the data.

Common Sense Does Not Equal Truth
Halpern et al. quote a teacher from an SS school who believed that neurological studies have proven that boys and girls learn differently. This is a common misconception among teachers, parents and the public. In reality, neuroscientists have found very few differences between the brains of girls and boys. The differences they have found could be due to upbringing and sex-differentiated life experiences (e.g., being given dolls versus trucks to play with and experiencing gender-targeted advertising). They have not found any significant gender differences that are “hard-wired” into kids’ brains, and certainly nothing that justifies differential teaching methods.

An even more popular misconception is the notion that teachers are the single greatest factor influencing students’ academic success. While this idea seems self-evident and obvious, the actual quote upon which it is based is that teachers are the most important “in-school” factor. In reality, academic success is influenced much more heavily by a student’s socioeconomic background and other out-of-school factors than by teachers.

Improving teacher quality is not a bad thing. It is, however, an extremely inefficient way to improve schools and academic achievement so long as growing numbers of children continue to grow up in poverty. We would likely see much greater improvements in academic achievement, and a much smaller achievement gap, through social reforms that reduce the wealth gap than through any school- or teacher-based reform.

Nevertheless, the overwhelming majority of reforms focus on teaching. One of the most popular current reforms is Value-Added Measures of teacher quality, the idea that teacher quality can be measured by improvements in student achievement, which are presumed to be caused by their teacher(s).

Science published a good review of the research on Value-Added Measures by Douglas N. Harris, in the August 12, 2011 issue. He concludes that the salient research questions about Value-Added cannot be answered by the studies that currently dominate the literature.

Perhaps the most significant variables unaccounted for by value-added measures are the out-of-school factors that influence a child’s capacity to learn, focus, pay attention and follow through on lessons. These variables are highly correlated with ethnicity, and even more so with family income. Burkam and Lee, for example, found that the average cognitive scores of affluent kindergartners were 60% higher than those of the lowest income group. Hart and Risley found similar class-based differences in language development and IQ among children as young as three. Both studies suggest that an achievement gap is already firmly in place well before kids enter school.

This achievement gap can be reduced through programs like preschool, Head Start, and extensive literacy development. However, when kids start school this far behind their peers, they are less likely to develop the self-efficacy and confidence necessary to thrive in school or catch up. Furthermore, the achievement gap can grow over time due to enriching summer, weekend and evening activities that are more accessible to affluent children and that allow them to make gains over their peers even when they aren’t in school.

Harris points out that there is no consensus on how to calculate a value-added score. The correlation across various methods for measuring value-added is as low as 0.27, suggesting that some (or all) are not very good measures of teacher quality. He also notes that there is considerable imprecision in value-added measures due to sampling and measurement errors in students’ test scores.
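To get a feel for what a correlation of 0.27 between methods means in practice, here is a small simulation in Python (a hypothetical sketch of my own, not something drawn from Harris’s paper): it generates two sets of value-added scores for the same imaginary teachers with roughly that correlation and then checks how often the two methods agree about who belongs in the top fifth.

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 1000
target_r = 0.27  # the low end of the correlation reported across value-added methods

# Give the two methods a shared component sized so their expected correlation is target_r.
common = rng.standard_normal(n_teachers)
method_a = np.sqrt(target_r) * common + np.sqrt(1 - target_r) * rng.standard_normal(n_teachers)
method_b = np.sqrt(target_r) * common + np.sqrt(1 - target_r) * rng.standard_normal(n_teachers)

print("observed correlation:", np.corrcoef(method_a, method_b)[0, 1])

# How often do the two methods agree that a teacher is in the top 20%?
top_a = method_a >= np.quantile(method_a, 0.8)
top_b = method_b >= np.quantile(method_b, 0.8)
agreement = (top_a & top_b).sum() / top_a.sum()
print(f"share of method A's top fifth also in method B's top fifth: {agreement:.0%}")
```

In this toy model only around a third of the teachers one method places in its top fifth are also placed there by the other, which is the sense in which methods that disagree this much cannot all be good measures of the same underlying teacher quality.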

As a result, value-added can at best distinguish between teachers at the extremes of the performance distribution, something a good administrator ought to be able to assess through the traditional observation and evaluation system; those in the middle are virtually indistinguishable from one another. Likewise, value-added measures are unstable from year to year, with one study finding that only 28-50% of teachers in the top fifth retained that status the following year.
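That year-to-year instability is also easy to reproduce with a toy model (again a hypothetical sketch, not one of the studies Harris reviews): if each teacher’s measured score is a persistent “true quality” plus a comparable amount of year-specific noise from sampling and measurement error, a large share of one year’s top fifth falls out of it the next year purely by chance.

```python
import numpy as np

rng = np.random.default_rng(1)
n_teachers = 1000

# Assume a stable "true quality" plus year-specific noise of similar magnitude
# (a rough assumption, chosen only for illustration).
true_quality = rng.standard_normal(n_teachers)
year1 = true_quality + rng.standard_normal(n_teachers)
year2 = true_quality + rng.standard_normal(n_teachers)

top1 = year1 >= np.quantile(year1, 0.8)  # top fifth in year 1
top2 = year2 >= np.quantile(year2, 0.8)  # top fifth in year 2

retained = (top1 & top2).sum() / top1.sum()
print(f"top-fifth teachers who stay in the top fifth the next year: {retained:.0%}")
```

Under these assumptions only around 40-50% of the “top” teachers keep that status the following year, in the same ballpark as the empirical figure cited above, even though nothing about the teachers themselves has changed.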

As appealing as it is to have accurate statistical or quantitative measures of teacher performance, this is not currently possible with existing methods or formulae, and the value-added model ought to be thrown into the dustbin of pseudoscience history, along with the discredited notion of SS education.

Rather than continuing to waste billions of dollars and subjecting educators to so much extra work and stress to implement poorly studied reforms with questionable data and analyses, we would likely see much greater educational improvements through social programs that reduce poverty and provide greater funding to schools.
