I used to work for ACT, Inc., designing and developing student assessments. In my final years there, I was Director of the Writing and Communications Literacies group. In one of my last major projects, I headed the team responsible for the revamped ACT Writing Test, which rolled out in 2015.
That roll-out was famously botched. ACT Test Development neglected the basics of scoring the new test; its partner, Pearson, screwed up the reporting. Students and their families had paid and trusted ACT to send scores in time for college application deadlines. There were serious life consequences for missing those deadlines. As a remedy, ACT ham-handedly told students to take a picture of their paper score reports and send that to their prospective colleges. Needless to say, it was all a big mess.
Let it be known that I had nothing to do with that debacle: I was gone months before. (Changes in leadership. Peter principle. ‘Nuff said.)
My team and I improved the form and content of the writing test significantly, moving it away as best we could from the old binary prompt: “Some people think X about this bland made-up issue that you care nothing about; others think Y. Now pick a side and write something intelligible about it in 30
minutes.”
As hard as I worked to improve things, however, I came to realize that the job was hopeless. ACT would never present test-takers with an authentic writing task, one that would engage them, teach them, prepare them for college-level work, or give them a chance to show what they can really do. An exercise in writing that asks students to respond to a topic they have no interest in; that they’ve
never even thought about before; that constrains them under a strict, arbitrary time limit; that resembles in no way the work they’ll be asked to do in any other context—well, it’s a pointless exercise at best. Worse than pointless, it’s damaging to a student’s understanding of what competent writing is, and why they should care about it.
The whole (supposed) reason for taking the ACT test is to predict college success. But in fact, largely because of the contrived and constrained form of the test itself, ACT is incapable of presenting to colleges an accurate portrait of a student’s abilities and potential.
The longer I worked at ACT, the more disillusioned I became with the organization—and with conventional standardized testing as a force in education. The tests my group was charged with creating were clearly driving how writing was being taught in schools, and it was a very limited model of writing indeed—certainly not one that encouraged the kind of thinking and communication
skills students really need. The thousands upon thousands of student responses we saw each year made this apparent with depressing consistency.
My experience at ACT prompted me to think critically about test development, and two key realizations shook out:
1. For most kids, there is a huge gap between what they learn in high school and what they are expected to do when they get to college.
This is no stunning revelation. There are plenty of studies and statistics that bear out the fact that a huge percentage of high school graduates are underprepared for college. The ramifications are no secret either: not enough kids go to college; too many need remediation once they get there; too few graduate; too few graduate within four years.
Despite all the emphasis on “college readiness” in education circles, something is obviously not working properly. Actually, many things, but the crucial one that jumped out at me was this:
2. Standardized tests, including—perhaps especially including—the ACT and SAT, are part of the problem.
These tests are highly contrived instruments that do not elicit or assess the kind of student performances required in authentic educational environments. Yet teaching and learning must conform to these tests because of their outsized role in key educational decisions. Little wonder then that so many students are unprepared for the demands of real academic work.
My intention for this blog is not merely to complain about testing, but to offer serious critique from the point of view of someone with hands-on experience. Maybe this kind of discussion can be valuable to educators, policymakers, parents, or even test makers who are likewise thinking hard about how standardized tests are affecting education.
My larger goal in writing this blog, as for BetterRhetor, is to help address the readiness gap between high school and college, and contribute to solutions that lead to more success for more kids.
We want to see college prep instruction and readiness assessment move to a higher level of efficacy; we want to see every student move up a level in education and life opportunities.
At the same time, we want to see the college-admissions playing field leveled up as well, so that students aren’t disadvantaged in their access to education and their readiness for college academics because of their income or background.
The current system is not working. Not enough high schoolers are developing the skills they need for success upon entry into college, despite the rise of standards-based education. The 60-year ACT/SAT admissions testing duopoly, which serves as a gatekeeper for so many students wanting into college, disadvantages and distracts instead of helping kids transition from high school to college academics. We need an alternative. (Click here for a discussion of the duopoly and the college readiness gap.)
We need to make available to colleges not faceless collections of scores and data, but rich, textured portraits of students that show their social, personal, and cognitive abilities, and their promise for academic success. Ultimately, we need a better way to connect students with colleges that believe in them.
That’s BetterRhetor’s goal.
© 2016 BetterRhetor Resources LLC