Category Archives: Educational Data

Vol.#78: Never The Destination

I read Karl Fisch’s great post over at The Fischbowl about the word “accountability” and how too many in education erroneously equate it with using standardized testing to justify educational actions and decisions.

It got me thinking about how this current phenomenon often has educators, sometimes myself included, pinned in the corner of “all standardized testing is bad.” This is an understandable reaction to the ridiculous, high-stakes, over-emphasized testing of today: when people feel they are under attack, they take a defensive stance. Testing gives a snapshot of a narrow facet of skills, and while it shouldn’t be the focus or the be-all, end-all… it isn’t completely useless.

After writing recently about my frustrations with the frequent pre-screening before the pretesting before the big test, this must sound like I’m completely backtracking. However, it’s the way the data is used that is important to examine.

Testing should be small, incremental, low-stakes, and personalized. If I have a student who is struggling, as a language arts teacher I should be able to request testing that distinguishes issues of fluency from issues of comprehension, so I know how best to help him/her. It should be targeted and prescriptive, but this would require trusting the educational decisions of professional educators, which is not what’s happening in the political scope of education right now.

Even the larger tests that level students into achievement ranges could be helpful if they were given early in the year, so teachers could use them to inform their instruction for the year. However, they’re used at the end of the year as a summary of what the student and teacher have “done right.” This, again, is a misuse of the data. It’s an autopsy when only a biopsy can help a teacher help a student. Also, inferences are being drawn from data that does not measure what it is assumed to measure (i.e., “teacher effectiveness”).

Therefore, high-stakes testing becomes the “goal.” Schools can’t test to see what they need to teach; they’re too busy scrambling to teach what’s on a test whose contents someone else decided were important, with serious consequences promised for the student, teacher, and school if some bubbles aren’t colored as well as last year. And consider, for just a moment, what these tests could never measure…

Your doctor does not decide your health on a BMI score or triglyceride reading alone. However, that small piece of data can inform a medical professional if it’s part of a larger picture. The problem is when non-educators in charge of education (a problem in and of itself) decide to measure the doctor’s competence by his/her patients’ average BMI (the teacher’s test scores). This is a misuse of the data, and a ridiculous way to measure the doctor.

TL;DR:

Education Haiku


Vol.#63: Simplifying a Teacher’s Life: Free Technology Tools for Assessment

Last week, I posted my presentation “Every Teacher a Literacy Teacher Using Technology Tools” from what I shared with the 2015 Kenan Fellows at the North Carolina Center for the Advancement of Teaching (NCCAT) in June. As promised, though a little late, I am adding the other presentation, “Simplifying a Teacher’s Life: Free Technology Tools for Assessment,” this week.

The video is long (30 minutes), but as with any flipped lesson, it provides the benefit of being able to pause, skip, or come back to it as needed. Plus, the focus is free technology tools that collect student data so you spend less time grading; in the end, you will get your 30 minutes back, I promise! 🙂

cc-by-nc-sa

  • Care to share your experience or planned use for any of these tools?
  • Have another tool to add?

Please share in the comments!

Vol.#55: Is the NC Goal “First in Teacher Flight”?

It may only be six weeks after New Year’s, but already both the state of North Carolina and Wake County have grave concerns about filling the needed teaching positions for next school year.

And so they should.

North Carolina often fills positions with teachers from states like Ohio and New York, where turnover is low and teachers can’t find positions. However, with no more pay for advanced degrees in NC, most of those candidates likely won’t be coming here anymore.

Besides needing to attract teachers, there’s the issue of teacher turnover. NCDPI was concerned enough about this very issue to send a report to the General Assembly. You can read the whole report here, but I’ve compiled a few highlights:


[Infographic created with easel.ly]

You’ll notice it’s not just that more teachers are leaving, but that more and more tenured, experienced teachers are leaving. The mentors of the beginning teachers. The department chairs. The leadership team members. The teachers upon whom any principal builds a school.

The concerns the data raise are only the tip of the iceberg for what I feel is impending, based on my front-row view from the classroom trenches.

[Image: Teacher Xing]

For example, of significant note but not yet reflected in this report: in Wake County alone, the number of teachers who have left specifically to teach in another state has already doubled this year compared with the same data from last year.

And…it’s still only February.

Also consider what this blog has included so far this year:

None of these facts are reflected in the reported data. Yet.

And then, this week, a raise for new teachers only was proposed. This conversation, which I’ve been given permission to share with you, should give you some insight into the morale and mindset of North Carolina’s teacher leaders:

[Screenshot: Facebook conversation]

These are some of the best educators in North Carolina classrooms, from all over the state. And although I can personally vouch for their exceptionalism as educators, I am certain these sentiments are not exceptional. Conversations like this one are happening on every Facebook wall and in every teacher lounge in the state.

Yes, indeed…they should be gravely concerned about the mass exodus coming North Carolina’s way.

[Image: “First in Flight”]

Vol.#47: Education Infographic Inferences

I learned last week that communicating data via an infographic is a power not to be wielded lightly. The ongoing feedback in the comments had me repeatedly changing and re-editing numbers and phrasing. For example, at the bottom, “more than others” became “more than full-time” because so many commenters seemed to take the original phrasing as a personal judgment on how much they did or did not work in their own jobs.

Back in July, I’d gotten quite a lot of feedback (almost 200 comments) on the infographic about how average teaching salaries changed over a decade, so I wondered: what about an infographic on just the average teacher salary per state? What would that information look like?

The most recent data I found was from 2012, and as I created the high-to-low list, I thought I also saw a pattern in how states voted during that same year’s presidential election.

I color-coded the infographic according to those results:


[Infographic created with easel.ly]

I thought about how the NCAE almost always supports the Democratic candidate, and I found the states that “stood out,” like Alaska and New Mexico, very interesting. Also, I wondered how much of the “average salary” reflected retention of experienced teachers (particularly abysmal in my own state of North Carolina) versus factors outside of education, such as the general cost of living. For example, even though Hawaii is in the top half of the states for a teacher’s average salary, according to at least one source the “comfort index” on that salary is actually the lowest in the nation due to how expensive it is to live there.

What do you infer from this data?

Vol.#44: Literacy Data, Part Deux

In my last post, I argued against the use of the current practices for gathering data for measuring growth and proficiency in literacy.

I suggested that for math, formative standardized test data is a biopsy. For literacy, it’s more like an autopsy.

And while the data indicates strong versus sickly readers, this information is usually no surprise to the professional educator, and more importantly it offers no treatment plan: advice on which medicine to administer.

With the release of my state’s scores, re-normed to the Common Core, there’s lots of focus on all the new data. What it all means. Why the scores are lower. How it will be improved.

And while the politics rage on, I have to explain to parents that their child simply went from twelve centimeters to five inches. Yes, the number may look smaller, but five inches is about 12.7 centimeters, so I believe it shows growth in his/her reading ability.

And I need to take this new information and figure out how it should inform my instruction. I need the data to indicate a treatment plan for the literacy health of my students.

During my participation in a VoiceThread titled “Formative Assessment and Grading” in October 2011, Dylan Wiliam said something that has stuck with me ever since:

“One of the problems we have with formative assessment is a paradigm that is often called, “data-driven decision making”. This leads to a focus on the data, rather than on the decisions. So, people collect data, hoping it might come in useful, and then figure out sometime later what kinds of decisions they might use the data to inform.  I’m thinking that we ought to perhaps reverse the ideas in data-driven decision-making and instead focus on decision-driven data collection. Let’s first figure out the decisions we need to make, and then figure out the data that would help us make that decision in a smarter way.”

~Dylan Wiliam, “Formative Assessment and Grading,” Slide 5 [My emphasis]

I’ve pondered this at great length. If my goal is decision-driven data collection, what would I want out of a standardized literacy assessment? What do I want the data to tell me?

What else? What other information (as a teacher or as a parent) do you believe the data should provide about students’ literacy abilities?

Vol.#43: Literacy Data

Several years ago, an ELA colleague and I were presenting writing strategies to another middle school’s PLTs. The IRT’s office was in the PLT meeting room, and during a break between our sessions she remarked that she always had math teachers coming in to scan the results of their county-required standardized benchmark tests immediately. However, she always had to chase down the language arts teachers to “make” them scan the bubble cards for the data. They’d given the test as required, just not scanned the cards for the results. She asked us what to do about it, and we sheepishly admitted we were often the same. Amazed, she asked… “Why?”

“Well, that data doesn’t really tell us anything we don’t already know.

“Standardized data from the math benchmark practice tests tells our math teammates if students are struggling with decimals, or fractions, or two-step equations. In short, whether students need more help… and if so, with which specific skills.

“The truth is… the data on these reading benchmarks tells us that our AIG students score higher (so gifted readers must be better readers) and that our ESL students, who are still learning English, don’t score as well on a test of… reading English.”

Image Credit: Pixabay User Websi

None of that is new information to any literacy teacher, and even if it were, it wouldn’t speak to how to shape his or her instruction. We are Data Rich, Information Poor (D.R.I.P.). Analysis of that data does not help us see the path forward clearly for our students. Perhaps worse, it doesn’t necessarily even reflect the quality of instruction they’ve been given.

And while educational titans like Alfie Kohn have already explained the many problems of relying on standardized data for, well, anything, it is my contention that using it to measure English Language Arts, for both teachers and students, is an exceptionally erroneous practice.

Standardized testing is, by definition, supposed to be an “objective” assessment. However, subjective factors such as beliefs and values aren’t, and shouldn’t be, separable from measuring literacy. While math is cut and dried (there is a right answer), interpretation of a literary work is not black and white. The students who can argue support for more than one of the four cookie-cutter answers (and do so in their heads during the test, thereby often choosing the “wrong” one) are likely, in reality, the best readers. Disagreement on what an author meant by effective figurative language, or dissent in supporting different possible intended themes, is not something to be suppressed in the analysis and assessment of literature, but embraced.

Am I missing some insight in interpreting formative standardized benchmark data? Is there some value here that I am overlooking? Please let me know in the comments!