Vol.#43: Literacy Data

Several years ago, an ELA colleague and I were presenting writing strategies to another middle school’s PLTs. The IRT’s office was in the PLT meeting room, and during a break between our sessions she remarked how math teachers always came in to scan the results of their county-required standardized benchmark tests immediately. The language arts teachers, however, she always had to chase down to “make” them scan the bubble cards for the data. They’d given the test as required, just not scanned the cards for the results. She asked us what to do about it, and we sheepishly admitted we were often the same. Amazed, she asked… “Why?”

“Well, that data doesn’t really tell us anything we don’t already know.

Standardized data from the math benchmark practice tests tells our math teammates if students are struggling with decimals, or fractions, or two-step equations. In short, it tells them if students need more help…and if so, with which specific skills.

The truth is…the data on these reading benchmarks tells us that our AIG students score higher, so gifted readers must be better readers, and that our ESL students, who are still learning English, don’t score as well on a test of…reading English.”

Image Credit: Pixabay User Websi

None of that is new information to any literacy teacher, and even if it were, it doesn’t speak to how to shape his or her instruction. We are Data Rich, Information Poor (D.R.I.P.). Analysis of that data does not help us see the path forward clearly for our students. Perhaps worse, it doesn’t necessarily even reflect the quality of instruction they’ve been given.

And while educational titans like Alfie Kohn have already explained the many problems of relying on standardized data for, well, anything, it is my contention that using it to measure English Language Arts, for both teachers and students, is an exceptionally erroneous practice.

Standardized testing by definition is supposed to be an “objective” assessment. However, subjective factors such as beliefs and values shouldn’t be separable from measuring literacy. While math is cut and dried (there is a right answer), interpretation of a literary work is not black and white. The students who can argue support for more than one of the four cookie-cutter answers – and do so in their heads during the test, thereby often choosing the “wrong” one – are likely in reality the best readers. Disagreement on what an author meant by effective figurative language use, or dissension in supporting different possible intended themes, is not to be transcended in the analysis and assessment of literature but embraced.

Am I missing some insight in interpreting formative standardized benchmark data? Is there some value here that I am overlooking? Please let me know in the comments!
