In December I measured my Calculus AB group's knowledge of limits, derivatives, and curve sketching concepts. Students were grouped into quarters and given a score report. Recently, we did this again. Students have to sign up for their AP Exams by the end of next week, and one of my stated goals is to make sure they have an informed opinion about what to do.

Design

A public version of the activity can be found here: Calculus Gauntlet 2 Public

I wanted to check a few things. First, could students determine when an integral or derivative was appropriate based on vocabulary clues? Second, could they handle data table, curve sketching, and function behavior free response questions? Third, a raw skills component: could they find a series of derivatives and integrals? I also included a couple of multiple choice questions from practice AP exams. Students were given a review with all of this information.

Here's my planning checklist:

[Image: planning checklist]

There was a lot of reorganization as I built the activity, and some questions got cut for time. In the end I asked 13 vocab questions (choose integral or derivative for a scenario), a multi-part data table question (1 full FRQ), and released MC questions on integral composition on Day 1 (50 minutes). Day 2 (80 minutes) included multiple curve sketching questions (1 full FRQ), a multi-part question where an arbitrary curve and areas are given (1 full FRQ), then a series of skills questions about the product rule and quotient rule, and a selection of derivative and integral exercises completed on a notecard.

Implementation

The end result was a 42-slide activity, plus a 4-slide homework survey. I used Pacing to restrict student access on the various days, and could easily flip it on and off for a few absent students who needed to complete a part they missed. Props to Desmos for including a "randomize choices" option for MC questions. That wasn't available in December.

Having used the structure once before, kids needed no help getting started. With school Google accounts, the student.desmos.com website easily kept track of where they were and got them back in on Day 2 with no issues. To ensure uniformity of results, there was only one version this time. I wanted the data to be as accurate as possible, and didn't want to have to fudge on account of versions. Whatever minor details students passed along as they talked about this in between days were a fine trade-off. There was zero credible evidence that anyone did anything dishonest.

Data

Again, students were ranked and given an overall percentage. This time I improved their data strip to show how they did in each category. I'm hoping to use this as I construct AP Exam review modules, offering students more flexibility in what they choose to review.

[Image: student data strip with category breakdown]
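For anyone curious what a category breakdown like that involves mechanically, here's a minimal sketch. This is not my actual grading spreadsheet or anything exported from Desmos; the category names, point values, and data format are all invented for illustration.

```python
# Hypothetical sketch: tally a per-category percentage breakdown for one student.
# Categories and point values are made up for the example.
from collections import defaultdict

def category_breakdown(results):
    """results: list of (category, points_earned, points_possible) tuples."""
    earned = defaultdict(float)
    possible = defaultdict(float)
    for category, got, out_of in results:
        earned[category] += got
        possible[category] += out_of
    return {c: round(100 * earned[c] / possible[c], 2) for c in possible}

student = [
    ("vocab", 11, 13),
    ("free response", 14, 18),
    ("skills", 6, 10),
]
print(category_breakdown(student))
# {'vocab': 84.62, 'free response': 77.78, 'skills': 60.0}
```

The overall percentage is the same kind of tally taken across all categories at once.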

A couple of questions were dropped for technical reasons (written in a way that deviated substantially from how the material was taught, causing a high incorrect rate). Students who got those questions correct in spite of the error kept the points earned.

With 71 complete results: 60.58% average (top 10 students averaged 89.29%), 1st quarter 75%+, 2nd 59.82%+, 3rd 46.43%+, and 4th was anything below 46.43%. Students were asked to compare their results to a real AP scale (where a 5 is roughly 70%+). Other than dropping a couple of questions, students' percentages reflected their raw performance; no curve was applied.
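The quarter groupings themselves are just a ranking exercise. Here's a rough sketch of how cutoffs like those could be read off a ranked list of overall percentages; the scores below are invented, not the real class data.

```python
# Hypothetical sketch: rank overall percentages and read off quarter cutoffs.
# Scores are invented; the real data set had 71 students.
def quarter_cutoffs(scores):
    ranked = sorted(scores, reverse=True)
    n = len(ranked)
    # The score of the last student in each of the top three quarters
    # marks the minimum needed to land in that quarter.
    return [ranked[(n * q) // 4 - 1] for q in (1, 2, 3)]

scores = [92.5, 88.0, 75.0, 71.4, 64.3, 59.8, 52.7, 46.4, 40.2, 33.9, 28.6, 21.4]
q1, q2, q3 = quarter_cutoffs(scores)
print(f"1st quarter: {q1}+, 2nd: {q2}+, 3rd: {q3}+, 4th: below {q3}")
```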

I have given benchmarks every February for 4 years; here's the historical comparison:

[Image: historical benchmark comparison]

A ridiculous leap in performance that I think I can explain in a bit.

Generally, passing AP scores come from students above the 80% line. I'm hoping we can expand that a bit. Since 2015, I have recommended the AP Exam for any student above the 45% line (2014 is an exception because I had no idea what I was doing). Per College Board policy, all students are free to take or not take the exam regardless of my recommendation.

Finding Promise

Previously, these February benchmarks were composed of questions from released AP Exams. Students struggled mightily with this task. I think time was a factor. Typically I would give a review about a week out; students would work on it in class and on their own. An answer key would be made available, and then they'd take a multi-day assessment that was a shorter collection of released AP Exam questions. Results were poor because I think I had the wrong design goals in mind. AP Exams are a massive undertaking for students new to the material. There's generally a good reason most AP classes (even mine!) spend a month on review. A week just isn't enough time. In the end I think I wound up discouraging kids who probably could've risen to the occasion given appropriate time to prepare.

This year I decided the most important thing was to look for promise. Yes, I would use real AP language and questions, but it would be incredibly focused. I stuck to the scenarios and skills we had covered in class, some of it full blown AP level, some of it not quite. I figure any student who can succeed here can be coached along through exam preparation.

And in this process I saw some great things from the kids. Those at the top worked very hard to stay at the top. Many, many kids in the lower ranks redoubled their efforts to show me that December was a fluke. In making recommendations, students who jumped a quarter (or two, or even three in some cases!) got the benefit of the doubt. If they could improve that much between December and February, what if they were encouraged to keep working until May?

In the end I recommended 55 take the exam without question, 11 consider it, and 8 focus their effort on keeping up with classwork. Next week I'll know their final decisions. With a vast majority gearing up for this thing, I think that will provide the "we're all in this together" momentum that could make a difference.

Cautious optimism, as always.
