Tag Archives: formative assessments

Vol.#75: Mastery vs. Work Behaviors

I apologize for the absence of fresh posts lately. Any teacher knows how second quarter can just get away from you, so I won’t try to explain.

As the quarter closes this week and I enter numbers that turn into the less-specific feedback of letters representing a range of numbers on the report card, I think about what report cards really represent to students and parents.

The last days have been met with so many of the usual questions that teachers get at the end of the quarter:

“Can I have a packet for extra credit?”

  • No. Nor may you eat junk food for months and then eat a salad right before the doctor’s appointment and get the same results as the person eating healthily the entire time.

“What can I do to get an ‘A’?”

  • Um . . . know more and do more to show that you know it?

“If I do XYZ (turn in this missing assignment, retake the low test grade, etc.) is it possible to get an average of blah-blah?”

  • Look, even if you gave me all the exact numbers in your mythical scenario, I am afraid I could not plug them in with your grades – which I don’t know off the top of my head – and compute the weighted average to give you an answer. So…stop.

What’s frustrating is that the focus in all these questions is how to get the (usually lowest number in the arbitrary range of the) letter grade. Not the learning. Nor the work that should have gone into mastery. Nor the opportunities already missed. 

If it’s not on the report card, it does not have meaning or value for parents or students.

Teachers know that work behaviors and effort are very important – probably even more important to a child’s future success than whether s/he can diagram a sentence, or solve for x, or find the capital of Belize on a map. Therefore, teachers have typically included them in a grade to give them meaning and value. Those behaviors might count for 25% of a class’s grade, or 10%, or be “folded in” to each assignment, resulting in some unknown number.

I’ve done the same. Valuing effort is important.

The problem? A grade is compromised as a method of communication to students, parents, universities, and other stakeholders: What does that “B-” mean? A hard worker who doesn’t fully get math – or a lazy but brilliant math student? It could be either – and it often is.

Here’s my proposed solution: We need to report both content mastery and work behaviors.  Equally.


Each class, each reporting term, should have a content mastery grade AND a work behaviors grade.  A student earning an “A/D” knows the material set forth in the standards but does little in the way of these important behaviors, which he will also need in life (i.e., the lazy AIG child who does almost nothing but gets an “A” in mastery anyway).  However, a “C/A” student may struggle with the content, but she works REALLY hard to get that “C” in mastery.

Parents would know an “F/F” on the report card means there’s a reason the child isn’t learning any of the material. An “F/B”, however, represents a student who is mostly trying and still failing to grasp concepts. That’s a very different problem. We as teachers already know the difference; parents should know this about their children too. It should be reported to them. It should be reflected on the report card. It should matter.

Work behaviors need a separate grade on a report card so that they are deemed important but the content mastery is still clear.
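A minimal sketch of what dual reporting could look like in a gradebook system. The letter cutoffs and function names here are my own illustration on a standard ten-point scale, not any real gradebook’s behavior:

```python
# A sketch of dual reporting: each class gets a content-mastery grade AND a
# work-behaviors grade, displayed together as "A/D", "C/A", and so on.
# (Cutoffs and names are illustrative, not from any real gradebook.)

def letter(score):
    """Convert a 0-100 score to a letter on a standard ten-point scale."""
    for cutoff, grade in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return grade
    return "F"

def report_entry(mastery_score, behaviors_score):
    """Report the two dimensions side by side instead of blending them."""
    return f"{letter(mastery_score)}/{letter(behaviors_score)}"

# The "lazy but brilliant" student vs. the hard worker who struggles:
print(report_entry(95, 62))  # A/D
print(report_entry(74, 96))  # C/A
```

The key design point is that the two numbers are never averaged together – each keeps its own meaning on the report card.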

Thoughts? Rebuttal? Hit me up in the comments!


Vol.#72: Data Happens (And What To Do Next)

I have data about literacy – my students’ and my own children’s – coming at me at regular intervals; tidal waves on the beach of what is otherwise a peaceful school experience.

For my own son, he came home with an mClass report with all little running men at the top of their little green bars – save one – and a Lexile level that corresponds with a 3.6 grade level early in his third-grade year.  However, another letter says he’s been flagged as a “failing reader” based on the preliminary standardized test given at the beginning of third grade. This would have perplexed me if I didn’t already know how ludicrous it is to assess children’s literacy with these frustrating bubble tests.

For my sixth-grade students, I have access to their standardized test data from the end of fifth grade – the ones with passages that are way too long assessing way too many standards and simply expecting way too much of the poor ten-year-old test takers.

We also give our middle schoolers quarterly timed tests on basic skills in reading and math. Based on these results, students are sorted into green, yellow, and red, with intervention plans written for those in the “danger zones”. Also, there are standardized benchmark tests at the end of each quarter to see if they are on track to attain a passing achievement level for the standardized state test at the end of the year.
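The sorting described above boils down to a threshold function. The percentile cut scores below are invented for illustration – real screeners publish their own norms:

```python
# A sketch of the green/yellow/red sorting described above.
# Cut scores are invented for illustration; real screeners use their own norms.

def risk_band(percentile):
    """Sort a student's screener percentile into a color band."""
    if percentile >= 40:
        return "green"
    if percentile >= 20:
        return "yellow"
    return "red"

scores = {"Sally Sue": 12, "Tom": 31, "Ana": 75}
bands = {name: risk_band(p) for name, p in scores.items()}

# Intervention plans are written for students in the "danger zone":
needs_plan = [name for name, band in bands.items() if band == "red"]
print(needs_plan)  # ['Sally Sue']
```

Note what the function does NOT tell you: which skills put Sally Sue in the red band, or what to do for her next – which is exactly the complaint below.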

If anyone counted, that’s seven tests during the year for students, including the “real” test – but not including any tests given by the teacher. (And that’s just for reading; don’t forget to then add in math. And science. And social studies… But I digress.)

I am not naive enough to think I am going to change the path we are going down right now, but I feel strongly that if we are going to make students do all this, I’d better find a way to make all the resulting data helpful to my instruction.

And therein lies another layer of my molten lava white-hot fury. What has been sorely missing from the dialogue in all these data-sessions is the next steps. Ok, Sally Sue is “red”.  What does she need now?  Or, even more frustrating, she passed one test, but is “red” on the other. So…now what? What do I DO for her? (You know, that I wasn’t going to do anyway? Like…teach her?)

Perhaps this oversight is because those who pushed this agenda only wanted to sell us all the screening tests so they don’t actually know what to do next? Or, maybe their answer is they want us to buy their scripted program to “fix it”, but we are all out of money?

At any rate, here’s where I am with this new normal. I need pragmatic (*ahem* free) ways to address all this conflicting data. What follows is a list of strategies I have to that end:

  • Offer the same article in several different lexile levels using Newsela. Some articles have leveled questions as well. (Newsela has a free version and a “pro” version.)
  • ReadWorks “The Solution to Reading Comprehension” offers both nonfiction and literary passages, questions, and units for free. It includes lexile leveling information.
  • You can also check the reading level of any text or website at read-able.com for free.
  • Offer clear instructions for how you want students to complete a close reading of a text. Here’s mine. Sorry for the shameless plug. 🙂
  • Mr. Nussbaum’s webpage has reading comprehension passages and Maze passages that score themselves for free! It only goes up through grade 6, so it would only help students up through about a 960 lexile.
  • ReadTheory is free, and allows you to create classes and track reading comprehension progress.
  • There are several reading leveler apps you can pay for, and they are probably fancier, but I’ve found this one handy, both as a mom and as a teacher. For example, I used to have long conversations with students who kept picking the same kinds of books during DEAR time – not an occasional graphic novel, but always a graphic novel, cartoon books, picture books… you know the type? Anyway, scanning the bar code and simply telling them the book has a 2.4 grade level has been more effective than the long conversation. 🙂
  • One on my horizon to try: curriculet.com  It’s free and I’ve heard good things!
  • I have also found the following conversion chart handy, because of course the data does not always come in the same format:

These have helped me in more than one “What are you doing for my child?” conference and in completing the required intervention plans based on all the data. I don’t know if they have revolutionized me as a literacy teacher, but I suppose time – and scores – will tell.
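Under the hood, leveling tools like the ones above compute readability formulas. Here is a sketch of one well-known formula, Flesch-Kincaid grade level, with syllable counts supplied by hand – automated syllable counting is the hard part the tools handle for you:

```python
# Flesch-Kincaid grade level: a standard readability formula based on
# average sentence length and average syllables per word.

def fk_grade(words, sentences, syllables):
    """Estimate the U.S. grade level of a passage."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# e.g. a 100-word, 8-sentence passage with 130 syllables:
print(fk_grade(100, 8, 130))  # roughly 4.6 -- a mid-elementary text
```

Longer sentences and longer words both push the estimated grade level up, which is why graphic novels and picture books level so low.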

Have a strategy, tool, or resource for helping your students as readers? Please share in the comments!

Vol.#63: Simplifying a Teacher’s Life: Free Technology Tools for Assessment

Last week, I posted my presentation “Every Teacher a Literacy Teacher Using Technology Tools” from what I shared with the 2015 Kenan Fellows at the North Carolina Center for the Advancement of Teaching (NCCAT) in June. As promised, though a little late, I am adding the other presentation, “Simplifying a Teacher’s Life: Free Technology Tools for Assessment,” this week.

The video is long (30 minutes), but as with any flipped lesson, it provides the benefit of being able to pause, skip, or come back to it as needed. Plus, the focus is free technology tools to collect student data so you spend less time grading, so in the end you will get your 30 minutes back, I promise! 🙂

cc-by-nc-sa

  • Care to share your experience or planned use for any of these tools?
  • Have another tool to add?

Please share in the comments!

Vol.#59: Four Things I Wish Parents Knew About Grades Online

Schools have been communicating with parents about their child’s success in school since the days of the one-room schoolhouse. I remember getting “progress reports” or “interims” for the first time as a student in the late eighties. In an effort to update parents and students with progress before the end of each quarter, we received written notes or computer printouts mid-quarter. These had all the assignments listed, where report cards simply had an average or letter grade.

However, in the information age, parents and students can now check on a computer or smart phone around the clock and see the status of grades in each class. This is a powerful and relatively new reality in education. Were I able to log on and see all my grades as a student, or were my parents able to, I know many things would have been different.

However, after teaching students with families who have had this capability for several years now, I have found that the level of “resolution” at which some parents wish to keep their child’s grades in focus at all times is a pragmatic impossibility for the teacher.

Here are four things I wish every parent knew:

1. Grading is not immediate. 

Look, I get it. I type in my phone number at Yogurt Mountain for the rewards program (I may have a “sea salt caramel” problem, but I digress) and before I grab a napkin the rewards email comes in and my phone chimes in my pocket. We are in an age of expecting immediate feedback, from our banks to our froyo.

However, a middle school teacher with four classes of thirty students teaches 120 students. If the teacher looks at your child’s assignment for only three minutes, she has six hours of grading to do. Just because the posting is immediate doesn’t mean the process to assess the work is, and it will go a long way with your child’s teachers if you keep that in mind.
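The back-of-the-envelope math above, spelled out (the numbers are the post’s own example, not a claim about any particular workload):

```python
# Four classes of thirty students, three minutes per assignment.
classes = 4
students_per_class = 30
minutes_per_assignment = 3

total_students = classes * students_per_class
grading_hours = total_students * minutes_per_assignment / 60

print(total_students, "students ->", grading_hours, "hours of grading")
# 120 students -> 6.0 hours of grading
```

And that is for a single three-minute glance per student – an essay graded with a rubric takes far longer.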

2. Ask your child about the grade first. Always.

I have entered a grade at 9 a.m. during planning and had an email asking about it within ten minutes. In class, I was handing out the test and reviewing the information, retest procedures, and so on. Were the parent to wait until their child got home, the child should be able to answer the questions.

This is more than just the “you have one of them and I have 120” mentioned above. By asking, the parent reinforces that the student is the one in the driver’s seat of his/her education. By explaining what they learned at school, a student will reinforce those concepts. And absolutely, if your child can’t explain something after you’ve talked with him or her, feel free to follow up with a call or email to the teacher. You’ll know more than you would have, and you’ll have a great starting place.

3. Understand the way in which your child’s grade is calculated.

I have a “formative” category that is weighted zero. These might be pretests, standardized benchmarks, and other grades which provide information about progress but do not factor into the actual average. I say this at Open House. I say this at “Meet the Teacher” night. I say this at Student-Led Conferences. It’s printed on the interims, in comments next to the assignments, and posted on my webpage. This doesn’t stop me from getting emails. Actually, I don’t mind the confused emails as much as the angry ones that accuse me of incorrectly calculating the grade because the parent has added everything up and divided by the number of grades, ignoring the fact that major summative assignments are weighted more heavily than minor ones. So, maybe this tip should just read, “Seek to understand before you attack.”
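The mismatch between the parent’s arithmetic and the gradebook can be shown in a few lines. The category names, weights, and scores below are hypothetical; the point is that a zero-weight formative category plus heavier summative weighting makes “add and divide” give a different number:

```python
# Weighted category averaging vs. the naive "add and divide" calculation.
# Categories, weights, and scores are hypothetical examples.

def category_mean(scores):
    return sum(scores) / len(scores)

def weighted_average(grades, weights):
    """Average of category means, weighted; zero-weight categories are informational only."""
    counted = {c: w for c, w in weights.items() if w > 0}
    total_weight = sum(counted.values())
    return sum(w * category_mean(grades[c]) for c, w in counted.items()) / total_weight

grades = {"summative": [90, 80], "minor": [70, 100, 70], "formative": [40]}
weights = {"summative": 0.6, "minor": 0.4, "formative": 0.0}

gradebook = weighted_average(grades, weights)
naive = sum(sum(g) for g in grades.values()) / sum(len(g) for g in grades.values())

print(round(gradebook, 1))  # 83.0 -- what the teacher's gradebook reports
print(round(naive, 1))      # 75.0 -- what "add and divide" produces
```

Eight points apart on the same six scores – which is exactly the gap that produces the angry emails.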

4. Keep in mind that it is just a snapshot in time.

If you check grades online or the teacher prints them for you to review, keep in mind that, like your bank account, it’s just what’s there at that very moment. Your child’s average is obsolete as soon as another assignment has been collected. Do not panic about the grade that is lower than you’d like, nor “relax” if it’s fine. It’s just that day’s reality, and it will change soon. Your efforts are better spent looking at which types of assignments your child struggles with, whether there are retake or make-up opportunities listed, and whether your child is turning work in on time.

Teachers, what tips for parents would you add?

Parents, what things could a teacher do to help communicate your child’s successes and struggles in online grade reporting?

Volume #44: Literacy Data, Part Deux

In my last post, I argued against the use of the current practices for gathering data for measuring growth and proficiency in literacy.

I suggested that for math, formative standardized test data is a biopsy. For literacy, it’s more like an autopsy.

And while the data indicates strong versus sickly readers, this information is usually no surprise to the professional educator, and more importantly it offers no treatment plan – no advice on which medicine to administer.

With the release of my state’s scores, renormed to the Common Core, there’s lots of focus on all the new data. What it all means. Why the scores are lower. How it will be improved.

And while the politics rage on, I have to explain to parents that their child simply went from twelve centimeters to five inches, and yes the number may actually be smaller, but I believe it to show growth in his/her reading ability.
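That parent conversation, in numbers. The scores here are the post’s centimeters-to-inches metaphor, not a real score scale – the point is that after a renorming, the “smaller” number can still represent growth:

```python
# 12 centimeters vs. 5 inches: the new number is smaller, but the
# measurement it represents is larger.
CM_PER_INCH = 2.54

old_score = 12   # old scale, "centimeters"
new_score = 5    # new scale, "inches"

print(f"{new_score * CM_PER_INCH:.1f} cm")  # 12.7 cm -- more than the old 12
assert new_score * CM_PER_INCH > old_score
```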

And I need to take this new information and figure out how it should inform my instruction. I need the data to indicate a treatment plan for the literacy health of my students.

During my participation in a VoiceThread titled “Formative Assessment and Grading” in October 2011, Dylan Wiliam said something that has really stuck with me:

“One of the problems we have with formative assessment is a paradigm that is often called, “data-driven decision making”. This leads to a focus on the data, rather than on the decisions. So, people collect data, hoping it might come in useful, and then figure out sometime later what kinds of decisions they might use the data to inform.  I’m thinking that we ought to perhaps reverse the ideas in data-driven decision-making and instead focus on decision-driven data collection. Let’s first figure out the decisions we need to make, and then figure out the data that would help us make that decision in a smarter way.”

~Dylan Wiliam   “Formative Assessment and Grading”,  Slide 5   [My emphasis]

I’ve pondered this at great length. If my goal is decision-driven data collection, what would I want out of a standardized literacy assessment? What do I want the data to tell me?

What else? What other information (as a teacher or as a parent) do you believe the data should provide about students’ literacy abilities?

Vol.#43: Literacy Data

countsSeveral years ago, an ELA colleague and I were presenting writing strategies to another middle school’s PLTs. The IRT’s office was in the PLT meeting room, and during a break between our sessions she remarked how she always had math teachers coming in to scan the results of their County required standardized test benchmarks immediately. However, she always had to chase down the language arts teachers to “make” them scan the bubble cards for the data. They’d given the test as required, just not scanned the cards for the results. She asked us what to do about it, and we sheepishly admitted we were often the same. Amazed, she asked… “Why?”

“Well, that data doesn’t really tell us anything we don’t already know.

Standardized data from the math benchmark practice tests tells our math teammates if students are struggling with decimals, or fractions, or two-step equations. In short, if students need more help…and if so, with which specific skills.

The truth is…the data on these reading benchmarks tells us that our AIG students – gifted readers – score higher, and our ESL students – who are still learning English – don’t score as well on a test of…reading English.”

Image Credit: Pixabay User Websi

None of that is new information to any literacy teacher, and even if it were it doesn’t speak to how to shape his or her instruction. We are Data Rich, Information Poor. (D.R.I.P.) Analysis of that data does not help us see the path forward clearly for our students. Perhaps worse, it doesn’t necessarily even reflect the quality of instruction they’ve been given.

And while educational titans like Alfie Kohn have already explained the many problems of relying on standardized data for, well, anything, it is my contention that using it to measure English Language Arts – for both teachers and students – is an exceptionally erroneous practice.

Standardized testing is, by definition, supposed to be an “objective” assessment. However, subjective factors such as beliefs and values shouldn’t be separable from measuring literacy. While math is cut and dried (there is a right answer), interpretation of a literary work is not black and white. The students who can argue support for more than one of the four cookie-cutter answers – and do so in their heads during the test, thereby often choosing the “wrong” one – are likely in reality the best readers. Disagreement on what an author meant by effective figurative language, or dissent in supporting different possible intended themes, is not to be transcended in the analysis and assessment of literature but embraced.

Am I missing some insight in interpreting formative standardized benchmark data? Is there some value here that I am overlooking? Please let me know in the comments!

Vol.#15: The Edmodo Education

I started using Edmodo over the past couple of months. Specifically, I offered it as one option for completing a reading project. I’d thought starting with a smaller group of students would help me ease into it; however, over 75% of my 109 students opted for the Edmodo choice instead of the more traditional alternative.

 
I used the Edmodo quizzes as part of the assessment for the project. Now, having used them, I see the quiz feature as having a likely future in my classroom as formative assessment – such as homework – as opposed to actual “quizzes”. Edmodo does not allow retakes easily, and both the timed feature and occasional glitches in the system make quizzes that “count” stressful. However, the instant feedback it provides would be very valuable in the formative stage and would reduce class time spent reviewing answers on completed assignments, allowing for more time on new, engaging tasks and collaboration.

While pondering the future of this possibility, I had my students complete the following Consens-o-gram.