A great article at The Atlantic.
Nubia Baptiste had spent some 665 days at her Washington, D.C., public school by the time she walked into second period on March 27, 2012. She was an authority on McKinley Technology High School. She knew which security guards to befriend and where to hide out to skip class (try the bleachers). She knew which teachers stayed late to write college recommendation letters for students; she knew which ones patrolled the halls like guards in a prison yard, barking at kids to disperse.
If someone had asked, she could have revealed things about her school that no adult could have known. Once Nubia got talking, she had plenty to say. But until that morning of her senior spring, no one had ever asked.
She sat down at her desk and pulled her long, neat dreadlocks behind her shoulders. Then her teacher passed out a form. Must be another standardized test, Nubia figured, to be finished and forgotten. She picked up her pencil. By senior year, it was a reflex. The only sound was the hum of the air conditioning.
Teachers in the hallway treat me with respect, even if they don’t know me.
Well, this was different. She chose an answer from a list: Sometimes.
This class feels like a happy family.
She arched an eyebrow. Was this a joke? Totally untrue.
In towns around the country this past school year, a quarter-million students took a special survey designed to capture what they thought of their teachers and their classroom culture. Unlike the vast majority of surveys in human history, this one had been carefully field-tested. That research had shown something remarkable: if you asked kids the right questions, they could identify, with uncanny accuracy, their most—and least—effective teachers.
This does not surprise me at all. I know from my own experience that most kids at school absolutely know which teachers inspire you and make you want to learn, and which ones are ineffective. This is not always the same as who the popular ones are. My chemistry teacher was widely mocked as a robot, but everyone said he was a very good teacher.
The point was so obvious, it was almost embarrassing. Kids stared at their teachers for hundreds of hours a year, which might explain their expertise. Their survey answers, it turned out, were more reliable than any other known measure of teacher performance—including classroom observations and student test-score growth. All of which raised an uncomfortable new question: Should teachers be paid, trained, or dismissed based in part on what children say about them?
I wouldn’t go that far, but I think student evaluations should be routine.
So far, this revolution has been loud but unsatisfying. Most teachers do not consider test-score data a fair measure of what students have learned. Complex algorithms that adjust for students’ income and race have made test-score assessments more fair—but are widely resented, contested, or misunderstood by teachers.
So this is what the NZEI and PPTA should propose as an alternative – student evaluations.
A decade ago, a Harvard economist named Ronald Ferguson went to Ohio to help a small school district figure out why black kids did worse on tests than white kids. He did all kinds of things to analyze the schoolchildren in Shaker Heights, a Cleveland suburb. Maybe because he’d grown up in the area, or maybe because he is African American himself, he suspected that important forces were at work in the classroom that teachers could not see.
So eventually Ferguson gave the kids in Shaker Heights a survey—not about their entire school, but about their specific classrooms. The results were counterintuitive. The same group of kids answered differently from one classroom to the next, but the differences didn’t have as much to do with race as he’d expected; in fact, black students and white students largely agreed.
The variance had to do with the teachers. In one classroom, kids said they worked hard, paid attention, and corrected their mistakes; they liked being there, and they believed that the teacher cared about them. In the next classroom, the very same kids reported that the teacher had trouble explaining things and didn’t notice when students failed to understand a lesson.
The Hattie research confirms this also.
But Kane also wanted to include student perceptions. So he thought of Ferguson’s survey, which he’d heard about at Harvard. With Ferguson’s help, Kane and his colleagues gave an abbreviated version of the survey to the tens of thousands of students in the research study—and compared the results with test scores and other measures of effectiveness. The responses did indeed help predict which classes would have the most test-score improvement at the end of the year. In math, for example, the teachers rated most highly by students delivered the equivalent of about six more months of learning than teachers with the lowest ratings. (By comparison, teachers who get a master’s degree—one of the few ways to earn a pay raise in most schools—delivered about one more month of learning per year than teachers without one.)

Students were better than trained adult observers at evaluating teachers. This wasn’t because they were smarter but because they had months to form an opinion, as opposed to 30 minutes. And there were dozens of them, as opposed to a single principal. Even if one kid had a grudge against a teacher or just blew off the survey, his response alone couldn’t sway the average.
Student evaluations shouldn’t be the only data a school collects, but they should be near mandatory.
Of the 36 items included in the Gates Foundation study, the five that most correlated with student learning were very straightforward:
1. Students in this class treat the teacher with respect.
2. My classmates behave the way my teacher wants them to.
3. Our class stays busy and doesn’t waste time.
4. In this class, we learn a lot almost every day.
5. In this class, we learn to correct our mistakes.
When Ferguson and Kane shared these five statements at conferences, teachers were surprised. They had typically thought it most important to care about kids, but what mattered more, according to the study, was whether teachers had control over the classroom and made it a challenging place to be. As most of us remember from our own school days, those two conditions did not always coexist: some teachers had high levels of control, but low levels of rigor.
Again, this meshes with my experience.
No one knows whether the survey data will become less reliable as the stakes rise. (Memphis schools are currently studying their surveys to check for such distortions, with results expected later this year.) Kane thinks surveys should count for 20 to 30 percent of a teacher’s evaluations—enough for teachers and principals to take them seriously, but not enough to motivate teachers to pander to students or to cheat by, say, pressuring students to answer in a certain way.
This would be an excellent Budget 2013 initiative!