The Grading of the Common Core Tests (from the NY Times)
The new academic standards known as the Common Core emphasize critical thinking, complex problem-solving and writing skills, and put less stock in rote learning and memorization. So the standardized tests given in most states this year included fewer multiple-choice questions and required far more writing, on topics like this one posed to elementary school students: Read a passage from a novel written in the first person, and a poem written in the third person, and describe how the poem might change if it were written in the first person.
But the people judging the results are not necessarily educators.
About 100 temporary employees of the testing giant Pearson worked in diligent silence scoring thousands of short essays written by third- and fifth-grade students from across the country. There was a onetime wedding planner, a retired medical technologist and a former Pearson saleswoman with a master’s degree in marital counseling. Like other scorers nationwide, they needed a four-year college degree with relevant coursework to get the job, but no teaching experience. Pearson, which operates 21 scoring centers around the country, hired 14,500 temporary scorers for the scoring season, which began in April and will continue through July. About three-quarters of the scorers work from home. Pearson recruited them through its own website, personal referrals, job fairs, Internet job search engines, local newspaper classified ads and even Craigslist and Facebook. They earned $12 to $14 an hour, with the possibility of small bonuses if they hit daily quality and volume targets.
Officials from Pearson and Parcc, a nonprofit consortium that has coordinated development of new Common Core tests, say strict training and scoring protocols are intended to ensure consistency, no matter who is marking the tests. Still, educators see a problem if the tests are not primarily scored by teachers. About 12 million students nationwide from third grade through high school took the new tests this year. Parcc, formally known as the Partnership for Assessment of Readiness for College and Careers, and the Smarter Balanced Assessment Consortium, another test development group, along with contractors like Pearson, worked with current classroom teachers and state education officials to develop the questions and set detailed criteria for grading student responses. Some states, including New York, separately developed Common Core tests without either consortium’s involvement.
Parcc said that more than three-quarters of the scorers have at least one year of teaching experience, but that it does not have data on how many are currently working as classroom teachers. Some are retired teachers with extensive classroom experience, but one scorer in San Antonio, for example, had one year of teaching experience, 45 years ago. For exams like the Advanced Placement tests given by the College Board, scorers must be current college professors or high school teachers who have at least three years of experience teaching the subject they are scoring.
“Having classroom teachers engaged in scoring is a tremendous opportunity,” said Tony Alpert, executive director of Smarter Balanced. “But we don’t want to do it at the expense of their real work, which is teaching kids.”
The most important factor in scoring, testing experts say, is to set guidelines clear enough that two different scorers consistently arrive at the same score.
During training sessions of two to five days for the Parcc tests, prospective scorers study the scoring criteria along with examples of student essays that have already been graded by teachers and professors. To monitor workers as they score, Pearson regularly slips previously scored responses into scorers’ computer queues to see whether their scores match those already given by senior supervisors. Scorers who repeatedly fail to match these so-called validity papers are let go.
At the San Antonio center on Friday, the scorers worked on the Parcc test, which was given in 11 states and Washington, D.C.
Still, the new tests are much more complicated and nuanced than previous exams and require more from the scorers, said James W. Pellegrino, a professor of psychology at the University of Illinois at Chicago who serves on advisory boards for Parcc and Smarter Balanced. “You’re asking people still, even with the best of rubrics and evidence and training, to make judgments about complex forms of cognition,” Mr. Pellegrino said. “The more we go towards the kinds of interesting thinking and problems and situations that tend to be more about open-ended answers, the harder it is to get objective agreement in scoring.”
I understand the difficulties and complexities of marking examinations. I did it for all of my professional life. But for testing companies that are making large sums of money not to use trained educators; to pay scorers (according to the article) “$12 to $14 an hour, with the possibility of small bonuses if they hit daily quality and volume targets”; to let them work at home, where nobody validates their involvement or the time it takes to grade; and to tie their bonuses to “volume targets” seems criminal when so much depends on the grading of the papers. Why aren’t Pearson and PARCC using professionals, as the College Board does with the grading of Advanced Placement tests?
Compare this to the grading of tests for doctors or accountants. Would we allow those important tests to be graded by non-professionals with limited experience in the field, working at home, paid $12 to $14 an hour, and given incentive bonuses for meeting volume targets? Just asking!