They say there is no such thing as a bad question, but “Is this a grade?” makes me think otherwise. It is one of my least favorite questions of all time, and students ask teachers it constantly.
It reveals how a student decides whether a learning experience is important and worth their time.
I have tried several approaches to this question. I have tried banning it from the classroom, without success. I have tried consistently giving the vague response, “All things in life are assessed.” They have been undeterred. My students have even gotten savvy enough to ask, “Is this formative or summative?”
I decided I do not want to answer this question again. To that end, I have created a flow chart to post on my wall:
ReadTheory is a literacy tool that adapts to each student’s individual reading performance. It selects a passage and questions for the student at random from the pool of available quizzes at the student’s level:
Students “choose a level to start.” My students were not aware that this meant grade level, and some assumed they would start on “level 1.” After they completed a passage that was entirely too easy, the program quickly adjusted for them.
The video references how it adapts to a student’s performance as they go. Here’s how:
▲ Level up: If a student performs outstandingly on the quiz (a score of 90% or more), the quiz is never shown again and the level increases by one.
► Level unchanged: If the student passes the quiz (a score between 70% and 89%), the quiz is never shown again and the student remains at the same reading level.
▼ Level down: If the student performs poorly on the quiz (a score of 69% or less), the quiz is returned to the pool of available quizzes and the level decreases by one.
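The three rules above amount to a simple adaptive loop. The thresholds are ReadTheory’s as described; the function below is only my own sketch of that logic, not their actual code, and the “never drop below level 1” floor is my assumption:

```python
def next_level(score_percent, level, quiz, pool):
    """Sketch of ReadTheory's adaptive leveling, per the three rules above.

    score_percent: quiz score, 0-100
    level:         student's current reading level
    quiz:          the quiz just completed
    pool:          set of quizzes still available at this level
    """
    if score_percent >= 90:
        pool.discard(quiz)        # level up: quiz retired
        return level + 1
    elif score_percent >= 70:
        pool.discard(quiz)        # pass: quiz retired, level unchanged
        return level
    else:
        pool.add(quiz)            # poor score: quiz returned to the pool
        return max(1, level - 1)  # level down (floor at 1 is my assumption)
```

So a student who scores 95% on a level-4 quiz moves to level 5 and never sees that quiz again, while a 50% sends the quiz back into rotation and drops the student to level 3.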
The teacher receives data charts and progress reports which are interactive and intuitive. The class average, student start level, current level, average level, and number of tests completed are all shown.
I especially like that students get immediate feedback on the questions they get right and wrong, and that for incorrect answers, they can click to see the “explanation behind the answer.”
There are several ways ReadTheory could be improved:
Being able to upload a CSV file would have been really nice, although entering students’ names one at a time didn’t take too long.
I would have really appreciated an easy PDF download per class listing the website, default password, and each student’s username in rows to cut apart for easier distribution.
I have some parents who would love a parent login, similar to what Edmodo, Class Dojo, and Class Charts have, so that they could see their child’s ongoing progress.
Some of my students have noted it’s a lot like Study Island (which many of them had in elementary school) but without the fun gaming/reward part.
Study Island costs money, and I appreciate that ReadTheory is free. It does keep “points” in some fashion, but I’m unclear how they are earned and what they represent. They do not appear to be attached to badges or any type of reward within the web app beyond the statement “You now have X many points.” As I learn more, I may look at how I can reward them “outside the screen” in my classroom.
There’s no “stop.” Our students are trained to look for the stop sign when testing. These passages keep going until a student chooses to stop or a teacher tells them to. Choosing how many passages to do (or a time limit), with a “stop” pop-up, would be a nice option.
These suggested improvements aside, I really like ReadTheory so far for its ease of use, intuitive data, and personalization for students.
It was just after Thanksgiving in 1985. My family had just relocated from Newton, New Hampshire to Hilton Head Island, South Carolina. I was in fourth grade, my younger sister in first.
Our first week there, my mother received phone calls from some horrified, very concerned teachers. Where were our gloves? Scarves? Boots? Winter coats? It was almost December! Of course, temperatures would have been in the 60s, which would have been May weather for us. Coats? You’re lucky we’re not here in shorts.
Here in NC, school has been in session only on the two Mondays of the past two weeks. During these eight snow days, I have seen lots of commentary on social media about the “Southern Snow Day” phenomenon. The Atlantic ran a piece last year with a map, “How Much Snow It Takes to Cancel School in the US,” which made sure to make the point that it’s more about infrastructure than the fortitude of citizens.
Still, comments like, “We’d love that forecast up here in Maine!” or “We have six inches more than you and we’re still going to school here in Massachusetts,” were lobbed at those of us holed up in our homes at the mere whisper of winter weather. “We just have a different mindset up here,” one friend of a friend posted.
No. False. It’s much more than your “mindset”. It’s a result of societal, environmental, and biological differences.
A society decides how to spend its collective revenues. It’s not worth investing in salt trucks, sand trucks, and an arsenal of snowplows to maintain the roads full-time when your state doesn’t see snow for three weeks, let alone three months. Boston and Nashville are similar in population, but it would not make sense for their budgets to allocate similar funds to snow management.
The environment surrounding a snow event is also different here. It gets warm enough during the day to melt the snow; then the snowmelt refreezes at night, leaving a treacherous glaze of black ice. It’s not that “southerners can’t drive on snow”; it’s that the snow quickly becomes sheets of ice. Northerners aren’t going to drive on that either, even with their fancy snow tires. And of course, all it takes is one bus sliding on our ill-prepared roads for a school district to be open to litigation.
Finally, there are the biological differences I experienced firsthand at nine years old. I wore shorts that first winter, but wouldn’t now. Our blood thins or thickens depending on where we live; people adapt to their environment. There are more heat-stroke stories from the north than the south in the summer. Northerners just aren’t as adapted when temperatures spike. That doesn’t mean southerners should take to social media to call them “wimps” for it.
A month into that first school year on Hilton Head Island in 1985, it “snowed” with the lightest dusting. School completely stopped and everyone went outside. The fourth and fifth grades were in mobiles, and I remember so clearly how teachers and students were running around, laughing, delighted….
I had just moved from New Hampshire. One could literally see the ground right through this “snow.” I didn’t get it.
It had not snowed on the island in over a decade. My classmates, unless they had moved as I had, had never seen this stuff fall from the sky in their lives and might not again until their twenties.
I read Karl Fisch’s great post over at The Fischbowl about the word “accountability” and how too many in education erroneously equate it with using standardized testing to justify educational actions and decisions.
It got me thinking how this current phenomenon often leaves educators, sometimes myself included, pinned in the corner of “all standardized testing is bad.” This is an understandable reaction to the ridiculous, high-stakes, over-emphasized testing of today. When one feels under attack, one takes a defensive stance. Testing gives a snapshot of a narrow facet of skills, and while it shouldn’t be the focus nor the be-all and end-all… it isn’t completely useless.
After writing recently about my frustrations with the frequent pre-screening before the pretesting before the big test, it must sound like I’m completely backtracking. However, it’s how the data is used that is important to examine.
Testing should be small, incremental, low-stakes, and personalized. If I have a student who is struggling, as a language arts teacher I should be able to request testing that distinguishes issues of fluency from issues of comprehension, so I know how best to help him or her. It should be targeted and prescriptive, but this would require trusting the educational decisions of professional educators, which is not what’s happening in the political scope of education right now.
Even the larger tests that place students in achievement ranges could be helpful if they were given early in the year, so teachers could use them to inform their instruction for the year. Instead, they’re used at the end of the year as a summary of what the student and teacher have “done right.” This, again, is a misuse of the data. It’s an autopsy when only a biopsy can help a teacher help a student. Inferences are also being drawn from data that does not measure what it’s assumed to measure (i.e., “teacher effectiveness”).
Therefore, high-stakes testing becomes the “goal.” Schools can’t test to see what they need to teach; they are too busy scrambling to teach what’s on a test whose contents someone else decided were important, and which someone else decreed would carry serious consequences for the student, teacher, and school if some bubbles aren’t colored as well as last year. And consider, for just a moment, what these tests could never measure…
Your doctor does not judge your health on a BMI score or triglyceride reading alone. However, that small piece of data can inform a medical professional if it’s part of a larger picture. The problem arises when non-educators in charge of education (a problem in and of itself) decide to measure a doctor’s competence by his or her patients’ average BMI (a teacher’s test scores). This is a misuse of the data, and a ridiculous way to measure the doctor.
One of my earliest posts (Volume #7) was about how to use technology to its maximum advantage in the classroom. I’ve sometimes been referred to as an “early adopter” (one who starts using a product or technology as soon as it becomes available) because I like trying new tools as soon as I hear about them. However, the term “early adoption” seemed antiquated to me when talking about EdTech. I looked it up, and in fact the term originates from the technology adoption life cycle, first published in 1957.
I think a better term might be “early adaptation,” as one is “adapting” to how things will eventually be for all, rather than “adopting” something unusual, different, or foreign. Adoption is a discrete process, where adaptation is ongoing. Am I just debating semantics here, or does someone else see my point?
And what even is technology? Both Alan Kay and Sir Ken Robinson have been quoted as saying that technology is “nothing invented before you were born.”
So, to my current sixth-grade students, that would be nothing invented before 2004. This means they see laptops, hybrid cars, iPods, camera phones, DVRs or TiVo, and the internet as just normal, not as technology. Using them in the classroom is analogous to your teachers using television in the ’70s or ’80s: flashy and fun, but not novel or new.
First of all, I can’t imagine the technology available in 1982 wasn’t more of a handicap than a shortcut. But anyway…
If you asked the students in your classroom about movies and special effects (or FX, as they may spell it), they would think them synonymous with computers, CGI, and so on. There wouldn’t even be a line of distinction.
But besides this fact, every piece of data I’ve read confirms that an unintended shooting of a loved one is statistically more likely than actually protecting your home. It stands to reason that accidents are also more likely than successfully warding off would-be school shooters. Certainly, the teachers who accidentally shot themselves at school in Utah and in Idaho earlier this school year do not bode well for the success of this “arm the teachers” plan.
Simultaneously amuse and horrify yourself, fellow teachers: At your next faculty meeting, when someone is saying something so unbelievably, stupefyingly short-sighted, ill-advised, and/or unintelligent (and you and I both know that s/he will), ponder working up the hall from them… whilst they are armed.
And if you can get past your horror, realize the whole plan is more than a little insulting, given our current circumstances. They can’t pay us a professional wage or give any paid professional development. (I’m paying for conferences out-of-pocket, anyway…you?) But they’ll suggest finding money to arm us? Nice.
Plus, teachers are already so overworked, so overburdened… I mean seriously. During a fire drill I am lucky just to get the little green card in the window. There are days I can’t find the stack of 120 copies I just made, and you want me to be responsible for a 9mm Smith & Wesson? No thanks.
Anyway, whilst surfing the various social interwebs this holiday, I came across the following video. I can understand if some people normally find celebrity “campaigns” annoying, but I think this one is worth the 1:23.
I apologize for the absence of fresh posts lately. Any teacher knows how second quarter can just get away from you, so I won’t try to explain.
As the quarter closes this week and I enter numbers that turn into the less-specific feedback of letters representing a range of numbers on the report card, I think about what report cards really represent to students and parents.
The last days have been met with so many of the usual questions that teachers get at the end of the quarter:
“Can I have a packet for extra credit?”
No. Nor may you eat junk food for months and then eat a salad right before the doctor’s appointment and get the same results as the person eating healthily the entire time.
“What can I do to get an ‘A’?”
Um . . . know more and do more to show that you know it?
“If I do XYZ (turn in this missing assignment, retake the low test grade, etc.) is it possible to get an average of blah-blah?”
Look, even if you had all the exact numbers in your mythical scenario to give me, I am afraid I could not plug them in with your grades – which I don’t know off the top of my head – and compute the weighted average to give you an answer. So… stop.
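For what it’s worth, the arithmetic the student is asking me to do on the spot looks something like this. The categories and weights below are purely hypothetical, just to illustrate why nobody does this in their head in the hallway:

```python
def weighted_average(category_scores, weights):
    """Weighted grade average: each category's mean, weighted by its share.

    category_scores: dict of category -> list of scores (0-100)
    weights:         dict of category -> weight (should sum to 1.0)
    """
    total = 0.0
    for category, scores in category_scores.items():
        category_mean = sum(scores) / len(scores)  # average within the category
        total += weights[category] * category_mean  # weight by its share
    return total

# Hypothetical gradebook, for illustration only
scores = {"tests": [88, 72], "homework": [100, 95, 90]}
weights = {"tests": 0.6, "homework": 0.4}
print(round(weighted_average(scores, weights), 1))  # 86.0
```

Note that the “what if I retake the low test grade” scenario means re-running this whole computation with one number swapped, for every hypothetical the student dreams up.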
What’s frustrating is that the focus in all these questions is how to get the (usually lowest number in the arbitrary range of the) letter grade. Not the learning. Nor the work that should have gone into mastery. Nor the opportunities already missed.
If it’s not on the report card, it does not have meaning or value for parents or students.
Teachers know that work behaviors and effort are very important, probably even more important to a child’s future success than whether s/he can diagram a sentence, solve for x, or find the capital of Belize on a map. Therefore, teachers have typically included them in a grade to give them meaning and value. Those behaviors might count for 25% of a class’s grade, or 10%, or be “folded in” to each assignment, resulting in some unknown number.
I’ve done the same. Valuing effort is important.
The problem? A grade as a method of communicating to students, parents, universities, and other stakeholders is compromised: What does that “B-” mean? A hard worker who doesn’t fully get math, or a lazy but brilliant math student? It could be either, and it often is.
Here’s my proposed solution: We need to report both content mastery and work behaviors. Equally.
Each class, each reporting term, should have a content mastery grade AND a work behaviors grade. A student earning an “A/D” knows the material set forth in the standards but does little in the way of these important behaviors, which he will also need in life (e.g., the lazy AIG child who does almost nothing but earns an “A” in mastery anyway). A “C/A” student, however, may struggle with the content but work REALLY hard to earn that “C” in mastery.
Parents would know an “F/F” on the report card means there’s a reason the child isn’t learning any of the material. An “F/B” however represents a student mostly trying and still failing to grasp concepts. That’s a very different problem. We already know the difference as teachers: parents should know this about their children too. It should be reported to them. It should be reflected on the report card. It should matter.
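The point of the dual grade is that the two letters are read together as two independent signals. A toy sketch of that reading, where the plain-language interpretations are just my shorthand for the examples above (A/D, C/A, F/F, F/B):

```python
def interpret(mastery, behaviors):
    """Read a dual report-card grade as two independent signals.

    mastery:   letter grade for content mastery (A-F)
    behaviors: letter grade for work behaviors (A-F)
    """
    knows_content = mastery in ("A", "B")   # rough cut, for illustration
    works_hard = behaviors in ("A", "B")
    if knows_content and not works_hard:
        return "knows the material, but shows weak work behaviors"
    if not knows_content and works_hard:
        return "works hard, but struggles with the content"
    if not knows_content and not works_hard:
        return "not working, and not learning the material"
    return "knows the material and works hard"

print(interpret("A", "D"))  # the lazy-but-brilliant student
print(interpret("F", "B"))  # trying hard and still failing to grasp concepts
```

A single blended “B-” collapses these four very different students into one symbol; the pair keeps them distinguishable.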
Work behaviors need a separate grade on a report card so that they are deemed important but the content mastery is still clear.