I really love kinetic typography, and if the video is about education, all the better.
So I came upon this video this week:
I sent it to about a half-dozen other educators to see their take, because I really wrestled with the message.
On one hand, I really relate to the message that 17-year-old Suli Breaks passionately delivers, refusing to be reduced to a number on a test. I’ve written in both prose and poetic forms that students are “more than a score”. The insanity over standardized testing was even featured on Last Week Tonight with John Oliver. (You should really watch if you didn’t catch it, and if you’re not offended by some salty language.)
Anyway, back to Mr. Suli Breaks. I found much of what he said to be powerful, relatable, and certainly fair.
And yet…
There seemed to be a hint of devaluing academics in general; a playing down of the importance of one’s education, which made me uncomfortable. Several of the teachers I sent it to felt the same.
I posed this question: How does he show he values education, if he understandably does not value the testing, and that’s all he’s known education to be? How do we expect him to separate the two?
The conversation that ensued had me thinking deeper about this and how it relates to educators. I think it is similar to the crux of the problem those of us opposing the current state of standardized testing face:
How do we demonstrate our willingness for accountability when it has become synonymous with standardized testing?
I read Karl Fisch’s great post over at The Fischbowl about the word “accountability” and how too many in education erroneously equate it with using standardized testing to justify educational actions and decisions.
It got me to thinking how this current phenomenon often has educators, sometimes myself included, pinned in the corner of “all standardized testing is bad.” This is an understandable reaction to the ridiculous, high-stakes, over-emphasized testing of today. When people feel they are under attack, they take a defensive stance. Testing gives a snapshot of a narrow facet of skills, and while it shouldn’t be the focus or the be-all and end-all… it isn’t completely useless.
After writing recently about my frustrations with the frequent pre-screening before the pretesting before the big test, it must sound like I’m completely backtracking. However, it’s the way the data is used that is important to examine.
Testing should be small, incremental, low-stakes, and personalized. If I have a student who is struggling, as a language arts teacher I should be able to request testing that distinguishes issues of fluency from issues of comprehension so I know how best to help him/her. It should be targeted and prescriptive, but this would require trusting the educational decisions of professional educators, which is not what’s happening in the political scope of education right now.
Even the larger tests that level students into achievement ranges could be helpful if they were given early in the year, so teachers could use them to inform their instruction for the year. However, they’re used at the end of the year as a summary of what the student and teacher have “done right.” This, again, is a misuse of the data. It’s an autopsy when only a biopsy can help a teacher help a student. Also, inferences are being drawn from data that does not measure what it’s assumed to measure (i.e., “teacher effectiveness”).
Therefore, high-stakes testing becomes the “goal.” Schools can’t test to see what they need to teach; they are too busy scrambling to teach what’s on the test: a test containing what someone else decided was important, with yet another someone declaring it will carry serious consequences for the student, teacher, and school if some bubbles aren’t colored as well as last year. And consider, just for a moment, what these tests could never measure…
Your doctor does not decide your health on a BMI score or triglyceride reading alone. However, that small piece of data can inform a medical professional if it’s part of a larger picture. The problem is when non-educators in charge of education (which is a problem in and of itself) decide to measure the doctor’s competence by his/her patients’ average BMI (the teacher’s test scores). This is a misuse of the data, and a ridiculous way to measure the doctor.
I have data about literacy – my students’ and my own children’s – coming at me at regular intervals; tidal waves on the beach of what is otherwise a peaceful school experience.
My own son came home with an mClass report with all the little running men at the top of their little green bars – save one – and a lexile level that corresponds to a 3.6 grade level early in his third-grade year. However, another letter says he’s been flagged as a “failing reader” based on the preliminary standardized test given at the beginning of third grade. This would have perplexed me if I didn’t already know how ludicrous it is to assess children’s literacy with these frustrating bubble tests.
For my sixth-grade students, I have access to their standardized test data from the end of fifth grade – the tests with passages that are way too long, assessing way too many standards, and simply expecting way too much of the poor ten-year-old test takers.
We also give our middle schoolers quarterly timed tests on basic skills in reading and math. Based on these results, students are sorted into green, yellow, and red, with intervention plans written for those in the “danger zones”. Also, there are standardized benchmark tests at the end of each quarter to see if they are on track to attain a passing achievement level for the standardized state test at the end of the year.
If anyone counted, that’s seven tests during the year for students, including the “real” test, but not including any tests given by the teacher. (And that’s just for reading; don’t forget to then add in math. And science. And social studies… But I digress.)
I am not naive enough to think I am going to change the path we are going down right now, but I feel strongly that if we are going to make students do all this, I’d better find a way to make all the resulting data helpful to my instruction.
And therein lies another layer of my molten-lava, white-hot fury. What has been sorely missing from the dialogue in all these data sessions is any discussion of next steps. Ok, Sally Sue is “red.” What does she need now? Or, even more frustrating, she passed one test but is “red” on the other. So… now what? What do I DO for her? (You know, that I wasn’t going to do anyway? Like… teach her?)
Perhaps this oversight is because those who pushed this agenda only wanted to sell us all the screening tests, and they don’t actually know what to do next? Or maybe their answer is that they want us to buy their scripted program to “fix it,” but we are all out of money?
At any rate, here’s where I am with this new normal. I need pragmatic (*ahem* free) ways to address all this conflicting data. What follows is a list of strategies I have to that end:
Sort your next Google search by reading level. Catlin Tucker is an amazing ELA techie educator I follow, and she has a great post that shows you how.
Offer the same article in several different lexile levels using Newsela. Some articles have leveled questions as well. (Newsela has a free version and a “pro” version.)
ReadWorks “The Solution to Reading Comprehension” offers both nonfiction and literary passages, questions, and units for free. It includes lexile leveling information.
Use Intervention Central for their free resources, like a list of reading comprehension strategies. Their Maze Passage Generator will level any text according to these scales: FORCAST, Spache, Dale-Chall, Flesch-Kincaid, Coleman-Liau, Automated Readability Index, Flesch Reading Ease, Fog Index, Lix Formula, SMOG Grading. (For the curious, there’s a sketch of one of these formulas just after this list.)
You can also check the reading level of any text or website at read-able.com for free.
Offer clear instructions for how you want students to complete a close reading of a text. Here’s mine. Sorry for the shameless plug. 🙂
Mr. Nussbaum’s webpage has reading comprehension passages and Maze passages that score themselves for free! It only goes up through grade 6, so it would only help students up through about a 960 lexile.
ReadTheory is free, and allows you to create classes and track reading comprehension progress.
There are several reading leveler apps you can pay for, and they are probably fancier, but I’ve found this one handy, both as a mom and as a teacher. For example, I used to have long conversations with students who kept picking up the same kinds of books during DEAR time: not the occasional graphic novel, but always a graphic novel, cartoon book, or picture book… you know the type? Anyway, scanning the book’s barcode and simply telling the student it has a 2.4 grade level has been more effective than the long conversation. 🙂
One on my horizon to try: curriculet.com. It’s free and I’ve heard good things!
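Since several of the tools above lean on the classic readability formulas Intervention Central lists, here is a minimal, illustrative Python sketch of just one of them, the Flesch-Kincaid Grade Level (0.39 times words-per-sentence, plus 11.8 times syllables-per-word, minus 15.59). The crude vowel-group syllable counter is my own simplification for the example, not how any of these sites actually compute it, so treat the output as a rough approximation only.

```python
import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels.
    # Real leveling tools use pronunciation dictionaries instead.
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and count > 1:
        count -= 1  # drop a likely-silent final "e"
    return max(count, 1)

def flesch_kincaid_grade(text):
    # Flesch-Kincaid Grade Level:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

sample = "The cat sat on the mat. It was a very sunny day outside."
print(round(flesch_kincaid_grade(sample), 1))  # roughly a 1st-2nd grade level
```

Even this toy version makes clear why a formula like this can “level” a passage in seconds, and also why it knows nothing about whether a student actually understood what they read.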
I have also found the following conversion chart handy, because of course the data does not always come in the same format:
These have helped me in more than one “What are you doing for my child?” conference and to complete the required intervention plans based on all the data. I don’t know if they have revolutionized me as a literacy teacher, but I suppose time scores will tell.
Have a strategy, tool, or resource for helping your students as readers? Please share in the comments!
The current misguided philosophy is that taxpayers are paying for “results” (i.e., standardized test scores) out of their teachers. Besides the simple fact that standardized tests don’t measure educational quality, it approaches the funding of education in completely the wrong way. You are not buying a result; you are investing in one.
Fiscal conservatives, please listen up: Funding education is an investment that will pay you back in spades. And I don’t mean that hippie-dippy, “the world will just be a better place” crap you may not believe in…you will be better off financially.
And just in case you thought it was “different money”…
“Analysis by the National Association of State Budget Officers shows that elementary and high schools receive 73 percent of their state funding from this discretionary fund; colleges and universities count on the fund for half of their budgets. However, $9 out of every $10 that support imprisonment come from the same pot of money.”
Besides the cost of prison, there’s the fact that educated citizens will be gainfully employed, paying their share of taxes on their higher incomes, happier and more fulfilled… no, I forgot, we aren’t factoring in that last part.
So, the question is: do you want to spend money to educate a citizen, or spend what in my state is over three times as much to imprison one?
In my last post, I argued against the use of the current practices for gathering data for measuring growth and proficiency in literacy.
I suggested that for math, formative standardized test data is a biopsy. For literacy, it’s more like an autopsy.
And while the data indicates strong versus sickly readers, this information is usually no surprise to the professional educator, and more importantly, it offers no treatment plan: no advice on which medicine to administer.
With the release of my state’s scores, re-normed to the Common Core, there’s lots of focus on all the new data. What it all means. Why the scores are lower. How it will be improved.
And while the politics rage on, I have to explain to parents that their child simply went from twelve centimeters to five inches; yes, the number may actually be smaller, but five inches is nearly thirteen centimeters, and I believe it shows growth in his/her reading ability.
And I need to take this new information and figure out how it should inform my instruction. I need the data to indicate a treatment plan for the literacy health of my students.
“One of the problems we have with formative assessment is a paradigm that is often called, “data-driven decision making”. This leads to a focus on the data, rather than on the decisions. So, people collect data, hoping it might come in useful, and then figure out sometime later what kinds of decisions they might use the data to inform. I’m thinking that we ought to perhaps reverse the ideas in data-driven decision-making and instead focus on decision-driven data collection. Let’s first figure out the decisions we need to make, and then figure out the data that would help us make that decision in a smarter way.”
I’ve pondered this at great length. If my goal is decision-driven data collection, what would I want out of a standardized literacy assessment? What do I want the data to tell me?
Several years ago, an ELA colleague and I were presenting writing strategies to another middle school’s PLTs. The IRT’s office was in the PLT meeting room, and during a break between our sessions she remarked how the math teachers always came in to scan the results of their county-required standardized benchmark tests immediately. However, she always had to chase down the language arts teachers to “make” them scan the bubble cards for the data. They’d given the test as required, just not scanned the cards for the results. She asked us what to do about it, and we sheepishly admitted we were often the same. Amazed, she asked… “Why?”
“Well, that data doesn’t really tell us anything we don’t already know.
Standardized data from the math benchmark practice tests tells our math teammates if students are struggling with decimals, or fractions, or two-step equations. In short, if students need more help… and if so, with which specific skills.
The truth is… the data on these reading benchmarks tells us that, since our AIG students score higher, gifted readers must be better readers, and that our ESL students, who are still learning English, don’t score as well on a test of… reading English.”
Image Credit: Pixabay User Websi
None of that is new information to any literacy teacher, and even if it were it doesn’t speak to how to shape his or her instruction. We are Data Rich, Information Poor. (D.R.I.P.) Analysis of that data does not help us see the path forward clearly for our students. Perhaps worse, it doesn’t necessarily even reflect the quality of instruction they’ve been given.
And while educational titans like Alfie Kohn have already explained the many problems of relying on standardized data for, well, anything, it is my contention that using it to measure English Language Arts, for both teachers and students, is an exceptionally erroneous practice.
Standardized testing by definition is supposed to be an “objective” assessment. However, subjective factors such as beliefs and values aren’t, and shouldn’t be, separable from measuring literacy. While math is cut and dried (there is a right answer), the interpretation of a literary work is not black and white. The students who can argue support for more than one of the four cookie-cutter answers – and do so in their heads during the test, thereby often choosing the “wrong” one – are likely, in reality, the best readers. Disagreement on what an author meant by an effective use of figurative language, or dissent in supporting different possible intended themes, is not something to be eliminated in the analysis and assessment of literature, but embraced.
Am I missing some insight in interpreting formative standardized benchmark data? Is there some value here that I am overlooking? Please let me know in the comments!
These opportunities are well-deserved and no one who remains in the classroom could fault anyone for taking them. However, each one is the loss of an educator who daily and directly touched the lives of students. Those of us left in the pragmatic and emotional wake of their departure feel stretched and strained. They each will be missed dearly.
One of these fallen fellow classroom warriors, Trishia Joy Lowe, wrote the following of her classroom departure and has graciously allowed me to share it here with you.
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Today, I leave what I have loved doing for nearly twenty years – teaching, NOT education, TEACHING. I put in my papers and am moving forward to a career in business as a Director in Growth and Public Relations.
It is bittersweet.
I loved the classroom when it was just My students, THEIR love of learning, and ME. That’s REAL, that’s AUTHENTIC, THAT IS ALIVE. I had an obligation to impart a passion for learning, not just grades. I took seriously my responsibility to build skills, ignite curiosity, and grow my students intellectually – to hold my students as accountable to their progress as I held myself – not merely to answer A-B-C-D or None of the Above.
However, too many outside factors have faded that beautiful reality, that “life all its own”, that love of learning in my students and in me. (Yes, I learned so much from those beautiful, honest little people).
Too many influences have robbed us of our ability to share freely, teach openly, assess each other honestly, and grow. Too many factors stand between me and my students as I teach – they have polluted what was once a pure process.
So, I’m waving the “White Flag”. I surrender. I leave.
As I tendered my own resignation, I learned two more outstanding North Carolina teachers are leaving the classroom in my building. How many more teachers need to leave NC schools before parents understand there are highly trained, highly educated, highly intelligent, highly committed professionals who stand before their children each day, pouring everything THEY’VE got into THEIR children?
How many more skilled teachers need to leave before administrators “get it” and allow the truly “best and brightest” the autonomy to teach passionately without fear? To assess honestly for the sake of a child’s REAL growth without questioning from administrators as to our “judgement”?
How many more NC teachers need to leave before legislators just leave the professionals alone to do what they do best—TEACH?
(And by the way: a pay raise commensurate with that professionalism might be nice.)
Teachers have fought, and continue to fight, “the good fight” despite legislators who, in many instances, are less educated and less committed to people than to their own pockets. Teachers’ pockets were emptied long ago, but they continue to teach passionately and courageously while digging deeper into those emptying pockets to buy supplies for their students and their classrooms.
However, the camel’s back is breaking. What happens when the camel finally wanders off for a better oasis?
In April, our school’s benchmark results were emailed to the entire staff. When I analyzed the language arts department’s results, my students’ projected growth – the percentage projected to meet their targets – was abysmal. I mean it. I was dead last.