I like this profession because it gives me room to fiddle. There are always new projects to try, tools to use, or ways to approach a topic. Each crop of students is fundamental to this pursuit of fiddling. Ideas that sound great in my head can look wildly different in practice, and I want students to feel like they have a bit of a say in what class structure looks like.

Most school districts would agree with this, and if you poke around on your district/campus website I'm sure there's a report you can read with survey results about the things parents/faculty might want to know. For the last two years, our district surveys have been composed of parent interviews, online faculty surveys, online student surveys, and paper student surveys. The paper surveys are administered during normal class time to a random class period. There are over 100 questions, and students are supposed to answer them with respect to whichever teacher handed them the survey. So some students are evaluating their math teachers while others might be sitting in English at the time.

A month or two later we get to see our results compiled into seven categories, with about a dozen specific items called out in each. I find it pretty fascinating; here's what caught my eye:

The Good

Caveat City: This represents answers from only 26 students. School/district numbers were printed for comparison, but I have hidden them. Questions were answered on a 1-5 agree/disagree scale.

This is a small sample of the entire result table:

[Image: results1.png]

Scouring the internet for lessons is paying off. I don't feel that all my lesson intros generate the kind of inquiry I want, but apparently the number isn't zero. It's something I will continue to iterate on. As I scanned the results, the talk about learning from mistakes, correcting mistakes, getting feedback, etc. all screamed SBG validation to me. Assessments are your main vector for offering comments on student work. They have to be valuable. If you aren't offering students a chance to show what they absorbed from the comments, why bother leaving them? Second, how do you think I did so well on knowing what they understand and don't? Testing roughly once a week is ridiculously valuable. I have no idea how you can wait two weeks to find out whether a lesson worked. This year I easily made five or six course corrections because an initial assessment told me that something had gone horribly wrong.

The Hmmm

I was pleased to see that my methods are having an impact. The results were a great thing to read during that slump we all get at the end of the year when exhaustion rules. But there were a couple of head scratchers:

[Image: results2.png]

I have no idea if I'm supposed to be worried about the first one. While school/district numbers are hidden here, I wasn't far off from those averages. I mean, there are some opportunities for them to help me. But designing the activities? Is that something they should do? I always operate under the notion that exploration and presentation methods need to be modeled. Teenagers don't necessarily have an ingrained ability to motivate the study of conic sections. Ironically, this survey was given to a Pre-Cal class about a month before they created their class video, a project driven entirely by student input. I don't know, that question itches for some reason. I have a rant about open-ended projects that this question would complement.

Now the second one, that was scary to see. A huge percentage of the class answered this question neutrally. That tells me either I'm apparently happy without a lot of depth in their answers, or I never stressed explanations enough to skew the results one way or the other. I really have to spend some time thinking about this one. Do I not have the class time for it or something? Are my assessments too skill based? Are the kids expecting me to be the only provider of absolute truth? Do they think explanation is reserved only for lab reports?

These results are just as valuable to me as my annual appraisal. If you don't value the opinions of the thirty people staring back at you, you're doing it wrong.
