In recent months, Teach Plus had over 1,000 teachers review sample items from PARCC, one of the two testing consortia trying to create assessments aligned to the Common Core standards.
I say “trying” because in reading, the task is pretty much impossible. The standards specify things students should be able to do, but they contain almost no content. Thankfully, they do call for content-rich curriculum and explain that comprehension depends on broad knowledge, but they don’t provide the content-specificity needed to guide instruction or assessment.
Thousands of different curricula and assessments could be aligned to the standards, which would be fine if teachers were trusted to develop both. But teachers are not allowed to create the assessments—at least the ones that count. So it is entirely possible for a teacher to develop an “aligned” curriculum that does not prepare students for the content that shows up on the “aligned” assessment.
The result is an unfair assessment.
Test developers acknowledge as much, creating guidelines for item development that minimize knowledge as a source of “bias.”
Well, the 1,000 teachers who just reviewed PARCC think the stripping of knowledge did not go far enough:
Nearly all participants found that the PARCC passages were better quality than the passages in state tests, as they are previously published pieces (indicating that they are complex and demonstrate expertise in nonfiction). However, there was some concern students did not have “background knowledge, nor the vocabulary to understand” vocabulary within the texts. Their comments suggest that to assess students as accurately as possible, some portions may need to be edited for diverse learners, or those with limited background knowledge of certain content areas.
I understand why teachers would call for reducing the prior knowledge demands of the test—they are stuck in this crazy world of being measured with content that no one told them to teach. But let’s be honest: reducing the knowledge demand makes the test a little fairer; it does not make the education students are getting any better.
The knowledge bias can’t be avoided with tests that are not explicitly aligned to the curriculum. Without a curriculum that specifies what has been taught—and therefore what it is fair to expect students to know—test writers are reduced to a narrow band of banal topics (but even “Jenny goes to the market” demands some prior, unequally distributed knowledge).
The less the knowledge bias, the less the test reflects real-world comprehension. Outside testlandia, comprehension is not isolated from knowledge. An adult who can’t comprehend a newspaper is not considered literate. Broad knowledge is inherent in literacy. If we care about reading, as opposed to testing, we shouldn’t be creating tests that minimize knowledge demands. We should be developing a coherent instruction, assessment, and accountability system that builds broad knowledge and is fair because it tests what is taught.
Clearly, our nation’s policymakers need a crash course in reading. Once they understand that there is no such thing as general comprehension ability, maybe they’ll stop trying to hold schools accountable for developing it.
Fortunately, a great crash course is now available: Daniel Willingham’s latest book, Raising Kids Who Read: What Parents and Teachers Can Do. If policymakers read between the lines, they’ll see an awful lot they can do too.
As with Willingham’s previous books, this one is engaging, easy to read, and super informative. Here’s just a taste:
Most parents want their children to be solid general readers. They aren’t worried about their kids reading professional journals for butterfly collectors, but they expect their kids to be able to read the New York Times, National Geographic, or other materials written for the thoughtful layperson. A writer for the New York Times will not assume deep knowledge about postage stamps, or African geography, or Elizabethan playwrights—but she will assume some knowledge about each. To be a good general reader, your child needs knowledge of the world that’s a million miles wide and an inch deep—wide enough to recognize the titles The Jew of Malta and The Merchant of Venice, for example, but not that the former may have inspired the latter. Enough to know that rare stamps can be very valuable, but not the going price of the rare Inverted Jenny stamp of 1918.
If being a “good reader” actually means “knowing a little bit about a lot of stuff,” then reading tests don’t work quite the way most people think they do. Reading tests purport to measure a student’s ability to read, and “ability to read” sounds like a general skill. Once I know your ability to read, I ought to be able (roughly) to predict your comprehension of any text I hand you. But I’ve just said that reading comprehension depends heavily on how much you happen to know about the topic of the text, because that determines your ability to make up for the information the writer felt free to omit. Perhaps, then, reading comprehension tests are really knowledge tests in disguise.
There is reason to think that’s true. In one study, researchers measured the reading ability of eleventh graders with a standard reading test and also administered tests of what they called “cultural literacy”—students’ knowledge of mainstream culture. There were tests of the names of artists, entertainers, military leaders, musicians, philosophers, and scientists, as well as separate tests of factual knowledge of science, history, and literature. The researchers found robust correlations between scores on the reading test and scores on the various cultural literacy tests—correlations between 0.55 and 0.90.
If we are to increase reading ability, policymakers will have to accept that it takes many years to develop the breadth of knowledge needed for tests that are not based on a specific curriculum. We shouldn’t be stripping the knowledge demands out of our tests; we should be stripping the unreasonable mandates from our accountability policies. If we all focused on raising readers, we would spend far less time on testing and far more on building broad knowledge.

Here again, the Core Knowledge people are stuck in this whole idea that students in school should acquire knowledge. What a hoary old mistake! We now test what is important—literacy skills, analytic skills, bloviation skills, and like that. Knowledge of subject matter may be nice to have, but it has nothing to do with what should go on in school, right? If we give students knowledge, they might start thinking for themselves with it, and that will play hell with all our efforts at Big Data and its Analysis for Program Planning for Sales in Education! Dontcha Get it!??
My 6th grade daughter came to me last night with an assignment she was working on. Her language arts class should be titled “Examples of White Racism Throughout History” (of course there are no dates, and the issues are presented in no chronological or coherent order).
She was supposed to highlight the main ideas in an essay on a theater in DC that wouldn’t allow a black woman to sing. The essay started off: “The Lincoln Memorial is a monument rich with symbolism.” I asked her what the Lincoln Memorial was. She had no idea. It is so apparent how important background knowledge is in understanding and interpreting essays like this.
The assignment simply amounts to trying to guess what the teacher thinks are the important ideas in the essay and identifying those.
[…] Raising Readers—Not Test Takers « The Core Knowledge Blog. […]
I do an enrichment afterschool class with Gr 4-5 students in a high poverty neighborhood in the Bronx who scored 4s on their ELA exams last year. I recently gave them a PARCC-developed practice extended response task that had them read two passages from fictional narratives. Concomitantly, I read some extended responses these same students had written in class, and was grading yet a third test prep extended response task I had given them the week before.
A pattern emerged from these three assignments that directly bears on what this post addresses. The passages were fine, if relatively low on the interest and prior-knowledge spectrum for these students. But these students are high-achieving and have enough dexterity of thinking to overcome those challenges.
What concerned me was the phrasing of the tasks in all three assignments. In each instance, they were asked to speculate on how the point of view of the author changes and shapes the narrative of the stories. For 4th and 5th graders, this is quite a burden to translate and navigate. Many students wound up overthinking their responses and falling back on ideas that are familiar to them: comparing the messages of the passages, finding common or contrasting morals, that sort of deep reading of the texts.
Asking them to step further back and write about abstractions like author’s point of view is, I believe, a bridge too far. The students are confused and waste valuable time trying desperately to, as was mentioned, figure out what the test writers and their teachers want them to convey in their writing.
I fear the worst if these types of tasks are put before our students, especially those less skilled than mine, to plumb such deep and murky waters next month on the state tests.
John, very illuminating. You are reminding me of my elementary/middle school experience in the 1970s: trying to perform some skill that I only vaguely understood. I would suggest that this is the kind of material that university education departments invent. Teacher education programs are, in my opinion, what is really hurting the entire education field. I survived one, and looking back, I wish I had just gotten a master’s degree in my area of interest: English.
Great post and comments. I wish Dan Willingham and others would retire the term “cultural literacy”. No better way to alienate potential progressive allies, as it conjures up the Culture Wars. Also, the term “cultural” connotes opera, the symphony, Emily Bronte… things that scream “superfluous”. It’s a big mental stretch for lay people to see the connection between this stuff and functional intelligence. “General knowledge,” “world knowledge,” or “foundational starter kit of knowledge about our complex world” seem better to me.
Ponderosa may be on to something. We know that ideology trumps facts every time when it comes to persuading people. As much as I like “Cultural Literacy,” perhaps “Foundational Knowledge” might be more neutral and easier for people to open their minds to. OTOH, as Will Fitzhugh wryly noted, it may be very hard for the education community to give up the ideological bias against knowledge and the focus on mythical “skills.”
I am a first grade elementary teacher, and one of the skills I teach is that we need to “access our background knowledge before we read.” My students are low-income students, so the background knowledge they have comes from TV or me. I think one of the problems I face in teaching lower elementary is that there is not enough time to teach science and social studies (where background knowledge comes from) because we have set times when we must teach reading, writing, and math. There is no flexibility to teach other subjects. Also, children do not read at home. The love of reading starts in the home, when parents read to you.
Mrs. Olsen: Do you have a prep period? If so, what is being taught during these times? At our school, these prep periods are used to teach science. Unfortunately, Social Studies is not taught then, but students are getting an earful of that from the Core Knowledge program. We found that teachers couldn’t give the needed commitment to their L&L domains if they tried to do all of them, so most of the science domains are now taught by cluster teachers, with plenty of time given to each domain twice a week.
As a math teacher, I am teaching with the Common Core learning standards. I feel that the curriculum covers fewer topics than the old curriculum, which is good for students. This way students will get to understand fewer topics in more depth. I feel that the standards are clearly written, and it is easy to know what is expected of our students.
[…] decade under No Child Left Behind has shown that reading tests without a definite curriculum are counterproductive, but here we go […]
[…] Mr. Willingham has argued, all reading comprehension tests are really “knowledge tests in disguise.” Rather than […]