12 Winters Blog

The myth of ‘best practices’ in education

Posted in August 2017, Uncategorized by Ted Morrissey on August 20, 2017

Last Wednesday I began my thirty-fourth year as a schoolteacher. To be sure, teaching has changed in those years, and kids have, too — although neither as much as one might think. There is one thing, however, that has been amazingly consistent: the number of people who, year upon year, insist that my peers and I adopt a method which they bill as a “best practice” — some technique that they know will improve my teaching because, well, how could it not? It’s a best practice.

Not once — in all those innumerable workshops, inservices and presentations — has a purveyor of a best practice offered a shred of evidence that what they’re promoting will actually lead to better (let alone, the best) teaching. It’s always offered under the implied guise of common sense. It’s the epitome of the logical fallacy of begging the question: Dear Teacher, accept the fact that what you’ve been doing (whatever it may be) hasn’t been as effective as what I’m about to tell you to do. Trust me — I’m a presenter.

And teaching is, allegedly, an evidence-based profession. Schools claim that what they’re doing is “evidence-based,” but oftentimes, if there is something like evidence out there, it’s contrary to what’s being prescribed. On the one hand, I don’t really blame folks for not presenting the evidence to support their claims of the effectiveness of the practice they’re advocating, because (as I’ve written about before) testing in education is fraught with problems. It’s extremely difficult, if not impossible, to generate data which can be reliably analyzed. In any given testing situation, there are simply too many variables to control, and many of them are literally beyond the control of educators. Students are not rats confined to the tiny world of a lab where researchers can effect whatever conditions they’re studying. Imagine scientists sending their rats home each night and asking them to return the next morning for continued research; and periodically the group of rats they’ve been studying are replaced by a whole new group of rats whose histories are a total mystery. (Apologies for comparing students to rats — for what it’s worth, I like rats … and students.)

All right, so I don’t blame purveyors of best practices for not presenting their (nonexistent) evidence; however, I do blame them for suggesting, implicitly, that evidence does exist. It must, right? Otherwise how could they say some technique, some approach is “best” (or at least “better”)?

The reality is, best practices are a myth. Forget good, better, best; let’s turn, instead, to effective versus ineffective (and even that paradigm is nebulous). Effectiveness must be considered on a case-by-case basis. That is, we want all students to benefit as a result of our efforts, but what works for Bobby versus what works for Suzie on any given day at any given moment, for any given skill or knowledge acquisition, may constitute completely opposite approaches; and tomorrow the reverse may be true. And quite honestly, whether an approach is effective or ineffective may be unknowable, in the moment and even in the long term. The learning takes place in the student’s mind, and the mind is a murky, complicated place. Hopefully the skill or knowledge is identifiable and assessable (via a quiz or test or paper or project), but it may not be, especially in the humanities, which are more concerned with creative and critical applications than the sciences or the vocational areas are, where right-or-wrong, black-or-white distinctions are the rule rather than the exception.

Generally the purveyor of a best practice is able to communicate the technique in a few bullet points on a handout or a PowerPoint, but the differences — the vast differences — between grade levels, subject matters, demographics of students, backgrounds and knowledge-levels of teachers, etc., etc., etc. make such simplistic declarations ridiculous. Imagine going to an agricultural convention and telling an assembled group of farmers that you have for them a best practice, and here it is in six bullet points. You’re welcome. No matter what they’re growing, where they’re growing it, what sorts of equipment they have at their disposal, what the climate models are suggesting, how the markets are trending — This is it, brother: Just follow these six steps and your yields will be out of this world. Trust me — I’m a presenter.

The farmers would be nonplussed, to put it mildly. Plug in professionals from any other arena — business owners, attorneys, medical doctors, engineers — and the ridiculousness of it (that a single set of practices will improve what they’re doing, regardless of individual situations) becomes clear. It’s so clear, in fact, I can’t imagine any presenter doing it — telling a room full of surgeons, for instance, to do this one simple procedure all the time, no matter the patient’s history, no matter their lab work, no matter how they’re responding on the table — and yet it happens to educators all the time.

Almost without fail, techniques that are presented as best practices are observable. It’s about what you say to students or what they say to you; what you write on the chalkboard; what you write in lesson plans or curricular outlines. It simplifies the process of evaluating teachers’ performances if the evaluator can look for a few concrete actions from every teacher, from kindergarten teacher to calculus teacher, from welding teacher to reading teacher; from the teacher of gifted students to the teacher of exceptional students. It makes assessment so much simpler if everyone is singing from the same hymnal.

I deliberately used the word performances in the previous paragraph because so often that’s what evaluation boils down to: a performance for the audience-of-one, the evaluator. We often hear the term “high-stakes testing” in the media (that is, standardized tests whose results have significant consequences for test-takers and their schools), but we have also entered into a time of “high-stakes evaluating” for teachers, performance assessments which directly impact their job security. Teachers quickly learn that if their evaluator claims x, y and z are best practices, they’d better demonstrate x, y and z when they’re being observed — but quite possibly only when they’re being observed, because in truth they don’t believe in the validity or the practicality of x, y and z as a rule.

In such cases, teachers are not trying to be insubordinate, or mocking, or rebellious; they’re trying to teach their charges in the most effective ways they know how (based on the training of their individual disciplines and their years of experience in the classroom), and they disagree with the practices which are being thrust upon them. Teachers do not take an oath equivalent to doctors’ Hippocratic oath, but conscientious teachers have, in essence, taken a personal and professional vow to do no harm to their students; thus they find themselves in a conundrum when their judgments about what’s effective and what isn’t are in conflict with the best practices by which they’re being evaluated. For teachers who care about how well they’re teaching — and that’s just about every teacher I’ve had the privilege to know in the last thirty-four years — it’s a source of stress and anxiety and even depression. More and more teachers every year find that the only way to alleviate that stress in their lives is to leave the profession.

Again, much of the problem derives from the need for observable behaviors. I like to think my interactions with students in the classroom are positive and effective, but, as a teacher of literature and especially as a teacher of writing, I know my most important and most valuable work is all but invisible. My greatest strengths, I believe, are in developing questions and writing prompts that navigate students’ interactions with a text, and (even more so) in responding to the students’ work. When a student hands in an essay based on a prompt I’ve given them about a text, it is essentially a diagram of how their mind worked as they read and analyzed the text (a novel, or story, or poem, or film) — a kind of CAT scan, if you will. My task is to interpret the workings of their mind (in what ways did their mind work well, and in what ways did it veer off the path somewhat); and then, once I’ve interpreted their mind-at-work, I have to provide them comments which explain my interpretations and (here’s the really, really hard part) also comments which will alter their mental processes so that next time they’ll write a more effective essay. In short, I’m trying to get them to think better and to express their thoughts better. (I should point out that to do all of this, I also have to possess a thorough understanding of the text under consideration — a text perhaps by Homer or Shakespeare or Keats or James or Joyce or Morrison.)

It’s the most important thing I do, and no one observing me in the classroom will ever see it. If my students improve in their reading and thinking and writing and speaking, it will largely be because of my skill in interacting with them productively, brain to brain, on the page. The process is both invisible and essential. This is what teaching English is; this is what English teachers do. And we are not unique, by any means, in the profession. Yet our value — our very job security — is based on behaviors that are secondary or even tangential to the most profound sorts of interactions we have with our students.

I know that purveyors of best practices mean well (for-profit educational consultants aside). They are good, smart people who sincerely believe in what they’re advocating, and frequently a kernel or two of meaningful advice can be derived from the presentation, but we need to stop pretending that there’s one method that will improve all teaching, regardless of the myriad factors which come into play every time a teacher engages a group of students. It makes teaching seem simple, and teaching is many, many, many things but simple isn’t one of them.


The paradox of uniformity

Posted in April 2017, Uncategorized by Ted Morrissey on April 13, 2017

Nearly a year ago I posted “Danielson Framework criticized by Charlotte Danielson” and it has generated far more interest than I would have anticipated. As of this writing, it has been viewed more than 130,000 times. It has been shared across various platforms of social media, and cited in other people’s blogs. The post has generated copious comments, and I’ve received dozens of emails from educators — mostly from North America, but from beyond, too. Some educators have contacted me for advice (I have little to offer), some merely to share their frustration (I can relate), others to thank me for speaking up (the wisdom of which remains dubious). To be fair, not everyone has been enthusiastic. There have been comments from administrators who feel that Charlotte Danielson (and I) threw them under the school bus. Many administrators are not devotees of the Framework either, and they are doing their best with a legislatively mandated instrument.

Before this much-read post, I’d been commenting on Danielson and related issues for a while, and those posts have received a fair amount of attention also. Literally every day since I posted about Danielson criticizing the use of her own Framework, the article has been read by at least a few people. The hits slowed down over the summer months, understandably; then picked up again in the fall — no doubt when teachers were confronted with the fact that it’s their evaluation year (generally every other year for tenured teachers). Once people were in the throes of the school year, hits declined. However, beginning in February, the number of readers spiked again and has remained consistently high for weeks. Teachers, I suspect, are getting back their evaluations, and are Googling for information and solace after receiving their infuriating and disheartening Danielson-based critiques. (One teacher wrote to me and said that he was graded down because he didn’t produce documentation that his colleagues think of him as an expert in the field. He didn’t know what that documentation would even look like — testimonials solicited in the work room? — nor did I.)

It can tear the guts out of you and slacken your sails right when you need that energy and enthusiasm to finish the school year strong: get through student testing (e.g., PARCC), stroke for home on myriad learning outcomes, prepare students for advancing to the next year, and document, document, document — all while kids grow squirrelier by the minute with the advance of spring, warmer weather, and the large looming of year’s end.

But this post isn’t about any of that, at least not directly. The Danielson Framework and its unique failures are really part of a much larger issue in education, from pre-K to graduate school: something which I’ll call the drive for uniformity. I blame Business’s infiltration and parasitic takeover of Education. It’s difficult to say exactly when the parasite broke the skin and began its pernicious spread. I’ve been teaching (gulp) since 1984 (yes, English teachers were goofy with glee at the prospect of teaching Nineteen Eighty-Four in 1984, just as I was in 2001 to teach 2001 — we’re weird like that), and even then, in ’84, I was given three curriculum guides with precisely 180 pages in each; I was teaching three different courses, and each guide had a page/lesson for each day of the school year. Everyone who was teaching a particular course was expected to be doing the same thing (teaching the same concept, handing out the same handout, proctoring the same test) on the same day.

Not every school system was quite so prescriptive. I moved to another district, and, thankfully, its curriculum was much less regimented. Nevertheless, it was at that school that I vividly recall sitting in a faculty meeting and the superintendent uttering the precept “We shall do more with less.” The School Board, with his encouragement, was simultaneously cutting staff while increasing curricular requirements. English teachers, for example, were going to be required to assign twelve essays per semester (with the understanding that these would be thoroughly read, commented on, and graded in a timely fashion). At the time I had around 150 students per day. With the cuts to staff, I eventually had nearly 200 students per day. This was the mid 1990s.

The point is, that phrase — We shall do more with less — comes right out of the business world. It’s rooted in the idea that more isn’t being achieved (greater productivity, greater profits) because of superfluous workers on the factory floor. We need to cut the slackers and force everyone else to work harder, faster — and when they drop dead from exhaustion, no problem: there are all those unemployed workers who will be champing at the bit to get their old job back (with less pay and more expectations). CEOs in the business world claimed that schools were not doing their jobs. The employees they were hiring, they said, couldn’t do math, couldn’t write, had aversions to hard work and good attendance. It must be the fault of lazy teachers, the unproductive slackers on the factory floor, so to speak.

Unions stood in the way of the mass clearing of house, so the war on unions was initiated in earnest. Conservative politicians, allied with business leaders, have been chipping away at unions (education and otherwise) wherever they can, under the euphemism of “Right to Work,” implying that unions are preventing good workers from working, and securing in their places lazy ne’er-do-wells. The strategy has been effective. Little by little, state by state, protections like tenure and seniority have been removed or severely weakened. Mandates have increased, while funds have been decreased or (as in Illinois) outright withheld, starving public schools to death. The frustrations of stagnant wages, depleted pensions, and weakened job security have been compounded by unfair evaluation instruments like the Danielson Framework.

A telltale sign of business’s influence is the drive for uniformity. One of the selling points of the Danielson Framework was that it can be applied to all teachers, pre-K through 12th grade, and even professionals outside the classroom, like librarians and nurses. Its one-size-fits-all approach sounds efficient and therefore appeals to legislators. Danielson is just one example, however. We see it everywhere. Teaching consultants who offer a magic bullet that will guarantee all students will learn, no matter the subject, grade level, or ability. Because, of course, teaching kindergarteners shapes is the same as teaching high school students calculus. Special education and physical education … practically the same thing (they sound alike, after all). Art and band … peas in a pod (I mean, playing music is a fine art, isn’t it? Duh.).

And the drive for uniformity has not been limited to K-12 education. Universities have been infected, too. All first-year writing students must have the same experience (or so it seems): write the same essays, read the same chapters in the same textbook, have their work evaluated according to the same rubric, etc., etc. Even syllabi have to be uniform: they have to contain the same elements, in the same order, reproduce the same university policies, even across departments. The syllabus for a university course is oftentimes dozens of pages long, and only a very small part of it is devoted to informing the students what they need to do from week to week. The rest is for accreditation purposes, apparently. And the uniformity in requirements and approaches helps to generate data (which outcomes are being achieved, which are not, that kind of thing).

It all looks quite scientific. You can generate spreadsheets and bar graphs, showing where students are on this outcome versus that outcome; how this group of students compares to last year’s group; make predictions; justify (hopefully) expenditures. It’s the equivalent of the much-publicized K-12 zeal for standardized testing, which gives birth to mountains of data — just about all of which is ignored once produced, which is just as well because it’s all but meaningless. People ignore the data because they’re too busy teaching just about every minute of every day to sift through the voluminous numbers; and the numbers are all but meaningless because they only look scientific, when in fact they aren’t scientific at all. (I’ve written about this, too, in my post “The fallacy of testing in education.”)

But this post isn’t about any of those things either.

It’s about the irony of uniformity, or the paradox of it, as I call it in my title. Concurrent with the business-based drive for uniformity has been the alleged drive for higher standards: more critical thinking, increased expectations, a faster track to skill achievement. Yet uniformity is the antithesis of higher standards. We’re supposed to have more rigor in our curricula, but coddle our charges in every other way.

We can’t expect students to deal with teachers who have varying classroom methods. We can’t expect them to adjust to different ways of grading. We can’t expect them to navigate differences in syllabi construction, teacher webpage design, or even the use of their classroom’s whiteboard. We can’t expect students to understand synonyms in directions, thus teachers must confine themselves to a limited collection of verbs and nouns when writing assignments and tests (for instance, we must all say “analyze” in lieu of “examine” or “consider” — all those different terms confuse the poor darlings). This is a true story: A consultant who came to speak to us about the increased rigor of the PARCC exam also advised us to stop telling our students to “check the box” on a test, because it’s actually a “square” and some students may be confused by looking for the three-dimensional “box” on the page. What?

But are these not real-world critical-thinking situations? Asking students to adapt to one teacher’s methodology versus another? Requiring students to follow the logic of an assignment written in this style versus that (or that … or that)? Having students adjust their schoolwork schedules to take into account different rhythms of due dates from teacher to teacher?

How often in our post-education lives are we guaranteed uniformity? There is much talk about getting students “career-ready” (another business world contribution to education), yet in our professional careers how much uniformity is there? If we’re dealing with various customers or clients, are they clones? Or are we expected to adjust to their personalities, their needs, their pocketbooks? For that matter, how uniform are our superiors? Perhaps we’re dealing with several managers or owners or execs. I’ll bet they’d love to hear how we prefer the way someone else in the organization does such and such, and wouldn’t they please adjust their approach to fit our preferences? That would no doubt turn into a lovely day at work.

I’ve been teaching for 33 years, and over that time I’ve worked under, let’s see, seven building principals (not to mention different superintendents and other administrators). Not once has it seemed like a good idea to let my current principal know how one of his predecessors handled a given situation in the spirit of encouraging his further reflection on the matter. Clearly I am the one who must adapt to the new style, the new approach, the new philosophy.

These are just a few examples of course. How much non-uniformity do we deal with every day, professionally and personally? An infinite amount is the correct answer. So, how precisely are we better preparing our students for life after formal education by making sure our delivery systems are consistently cookie-cutter? We aren’t is the correct answer. (Be sure to check the corresponding squares.)

Education has made the mistake of allowing Business to infect it to the core (to the Common Core, as a matter of fact). Now Business has taken over the White House, and it’s taken over bigly.

But this blog post isn’t about that.

The fallacy of testing in education

Posted in October 2015 by Ted Morrissey on October 18, 2015

For the last several years education reformers have been preaching the religion of testing as the lynchpin to improving education (meanwhile offering no meaningful evidence that education is failing in the first place). Last year, the PARCC test (Partnership for Assessment of Readiness for College and Careers) made its maiden voyage in Illinois. Now teachers and school districts are scrambling to implement phase II of the overhaul of the teacher evaluation system begun two years before by incorporating student testing results into the assessment of teachers’ effectiveness (see the Guidebook on Student Learning Objectives for Type III Assessments). Essentially, school districts have to develop tests, kindergarten through twelfth grade, that will provide data which will be used as a significant part of a teacher’s evaluation (possibly constituting up to 50 percent of the overall rating).

To the public at large — that is, to non-educators — this emphasis on results may seem reasonable. Teachers are paid to teach kids, so what’s wrong with seeing if taxpayers are getting their money’s worth by administering a series of tests at every grade level? Moreover, if these tests reveal that a teacher isn’t teaching effectively, then what’s wrong with using recently weakened tenure and seniority laws to remove “bad teachers” from the classroom?

Again, on the surface, it all sounds reasonable.

But here’s the rub: The data generated by PARCC — and every other assessment — is all but pointless. To begin with, the public at large makes certain tacit assumptions: (1) The tests are valid assessments of the skills and knowledge they claim to measure; (2) the testing circumstances are ideal; and (3) students always take the tests seriously and try to do their best.


But none of these assumptions are true most of the time — and I would go so far as to say that all of them being true for every student, for every test practically never happens. In other words, when an assessment is given either the assessment itself is invalid, and/or the testing circumstances are less than ideal, and/or nothing is at stake for students so they don’t try their best (in fact, it’s not unusual for students to deliberately sabotage their results).

For simplicity’s sake, let’s look at the PARCC test (primarily) in terms of these three assumptions; and let’s restrict our discussion to validity (mainly). There have been numerous critiques of the test itself that point out its many flaws (see, for example here; or here; or here). But let’s just assume PARCC is beautifully designed and actually measures the things it claims to measure. There are still major problems with its data’s validity. Chief among the problems is the fact that there are too many factors beyond a district’s and — especially — a classroom teacher’s control to render the data meaningful.

For the results of a test — any test — to be meaningful, the test’s administrator must be able to control the testing circumstances to eliminate (or at least greatly reduce) factors which could influence and hence skew the results. Think about when you need to have your blood or urine tested — to check things like blood sugar or cholesterol levels — and you’re required to fast for several hours beforehand to help ensure accurate results. Even a cup of tea or a glass of orange juice could throw off the process.

That’s an example that most people can relate to. If you’ve had any experience with scientific testing, you know what lengths have to be gone to in hopes of garnering unsullied results, including establishing a control group — that is, a group that isn’t subjected to whatever is being studied, to see how it fares in comparison to the group receiving whatever is being studied. In drug trials, for instance, one group will receive the drug being tested, while the control group receives a placebo.

Educational tests rarely have control groups — a group of children from whom instruction or a type of instruction is withheld to see how they do compared to a group that’s received the instructional practices intended to improve their knowledge and skills. But the lack of a control group is only the beginning of testing’s problems. School is a wild and woolly place filled with human beings who have complicated lives, and countless needs and desires. Stuff happens every day, all the time, that affects learning. Class size affects learning, class make-up (who’s in the class) affects learning, the caprices of technology affect learning, the physical health of the student affects learning, the mental health of the student affects learning, the health of the teacher affects learning (and in upper grades, each child has several teachers), the health and circumstances of the student’s parents and siblings affect learning, weather affects learning (think “snow days” and natural disasters); sports affects learning (athletes can miss a lot of school, and try teaching when the school’s football or basketball team is advancing toward the state championship); ____________ affects learning (feel free to fill in the blank because this is only a very partial list).


And let me say what no one ever seems to want to say: Some kids are just plain brighter than other kids. We would never assume a child whose DNA renders them five-foot-two could be taught to play in the NBA; or one whose DNA makes them six-foot-five and 300 pounds could learn to jockey a horse to the Triple Crown. Those statements are, well, no-brainers. Yet society seems to believe that every child can be taught to write a beautifully crafted research paper, or solve calculus problems, or comprehend the principles of physics, or grasp the metaphors of Shakespeare. And if a child can’t, then it must be the lazy teacher’s fault.

What is more, let’s look at that previous sentence: the lazy teacher’s fault. Therein lies another problem with the reformers’ argument. The idea is that if a student underachieves on an exam, it must be the fault of the one teacher who was teaching that subject matter most recently (i.e., that school year). But learning is a synergistic process. Every teacher who has taught that child previously has contributed to their learning, as have their parents, presumably, and the other people in their lives, and the media, and on and on. But let’s just stay within the framework of school. What if a teacher receives a crop of students who’d been taught the previous year by a first-year teacher (or a student teacher, or a substitute teacher who was standing in for someone on maternity or extended-illness leave), versus a crop of students who were taught by a master teacher with an advanced degree in their subject area?

Surely — if we accept that teaching experience and education contribute to teacher effectiveness — we would expect the students taught by a master teacher to have a leg up on the students who happened to get a newer, less seasoned, less educated teacher. So, from the teacher’s perspective, students are entering their class more or less adept in the subject depending on the teacher(s) they’ve had before. When I taught in southern Illinois, I was in a high school that received students from thirteen separate, curricularly disconnected districts, some small and rural, some larger and more urban — so the freshman teachers, especially, had an extremely diverse group, in terms of past educational experiences, on their hands.

For several years I’ve been an adjunct lecturer at University of Illinois Springfield, teaching in the first-year writing program. UIS attracts students from all over the state, including from places like Chicago and Peoria, in addition to students from nearby rural schools, and everything in between (plus a significant number of international students, especially from India and China). In the first class session I have students write a little about themselves — just answer a few questions on an index card. Leafing through those cards I can quickly get a sense of the quality of their educational backgrounds. Some students are coming from schools with smaller classes and more rigorous writing instruction, some from schools with larger classes and perhaps no writing instruction. The differences are obvious. Yet the expectation is that I will guide them all to be competent college-level writers by the end of the semester.

The point here, of course, is that when one administers a test, the results can provide a snapshot of the student’s abilities — but it’s a snapshot of abilities shaped by uncountable and largely uncontrollable factors. How, then, does it make sense (or, how, then, is it fair) to hang the results around an individual teacher’s neck — whether Olympic-medal-like or albatross-like, depending?

As I mentioned earlier, validity is only one issue. Others include the circumstances of the test, and the student’s motivation to do well (or their motivation to do poorly, which is sometimes the case). I don’t want to turn this into the War and Peace of blog posts, but I think one can see how the setting of the exam (the time of day, the physical space, the comfort level of the room, the noise around the test-taker, the performance of the technology [if it’s a computer-based exam like the PARCC is supposed to be]) can impact the results. Then toss in the fact that most of the many exams kids are (now) subjected to have no bearing on their lives — and you have a recipe for data that has little to do with how effectively students have been taught.

So, are all assessments completely worthless? Of course not — but their results have to be examined within the complex context in which they were produced. I give my students assessments all the time (papers, projects, tests, quizzes), but I know how I’ve taught them, and how the assessment was intended to work, and what the circumstances were during the assessment, and to some degree what’s been going on in the lives of the test-takers. I can look at their results within this web of complexities, and draw some working hypotheses about what’s going on in their brains — then adjust my teaching accordingly, from day to day, or semester to semester, or year to year. Some adjustments seem to work fairly well for most students, some not — but everything is within a context. I know to take some results seriously, and I know to disregard some altogether.

Mass testing doesn’t take these contexts into account. Even tests like the ACT and SAT, which have been administered for decades, are considered only one piece of the whole picture when colleges evaluate a student for admission. Other factors are weighed too, like GPA, class rank, teacher recommendations, portfolios, interviews, and so on.

What does all this mean? One of the things it means is that teachers and administrators are frustrated with having to spend more and more time testing, and more and more time prepping their students for the tests — and less and less time actually teaching. It’s no exaggeration to say that several weeks per year, depending on the grade level and an individual school’s zeal for results, are devoted to assessment.

The goal of assessment is purported to be to improve education, but the true goals are to make school reform big business for exploitative companies like Pearson, and for the consultants who latch onto the movement remora-like, for example, Charlotte Danielson and the Danielson Group; and to implement the self-fulfilling prophecy of school and teacher failure.

(Note that I have sacrificed grammatical correctness in favor of non-gendered pronouns.)

Here’s my beef with PARCC and the Common Core

Posted in August 2014, Uncategorized by Ted Morrissey on August 9, 2014

Beginning this school year students in Illinois will be taking the new assessment known as PARCC (Partnership for Assessment of Readiness for College and Careers), which is also an accountability measure — meaning that it will be used to identify the schools (and therefore teachers) who are doing well and the ones who are not, based on their students’ scores. In this post I will be drawing from a document released this month by the Illinois State Board of Education, “The top 10 things teachers need to know about the new Illinois assessments.” PARCC is intended to align with the Common Core, which around here has been rebranded as the New Illinois Learning Standards Incorporating the Common Core (clearly a Madison Avenue PR firm wasn’t involved in selecting that name — though I’m surprised funds weren’t allocated for it).

This could be a very long post, but I’ll limit myself to my main issues with PARCC and the Common Core. The introduction to “The top 10 things” document raises some of the most fundamental problems with the revised approach. It begins, “Illinois has implemented new, higher standards for student learning in all schools across the state.” Let’s stop right there. I’m dubious that rewording the standards makes them “higher,” and from an English/language arts teacher perspective, the Common Core standards aren’t asking us to do anything different from what we’ve been doing since I started teaching in 1984. There’s an implied indictment in the opening sentence, suggesting that until now, the Common Core era, teachers haven’t been holding students to particularly high standards. I mean, logically, if there was space into which the standards could be raised, then they had to be lower before Common Core. It’s yet another iteration of the war-cry: Teachers, lazy dogs that they are, have been sandbagging all these years, and now they’re going to have to up their game — finally!

Then there’s the phrase “in all schools across the state,” that is, from the wealthiest Chicago suburb to the poorest downstate school district, and this idea gets at one of the biggest problems — if not the biggest — in education: grossly inequitable funding. We know that kids from well-to-do homes attending well-to-do schools do significantly better in school — and on assessments! — than kids who are battling poverty and all of its ill-effects. Teachers associations (aka, unions) have been among the many groups advocating to equalize school funding via changes to the tax code and other laws, but money buys power and powerful interests block funding reform again and again. So until the money being spent on every student’s education is the same, no assessment can hope to provide data that isn’t more about economic circumstances than student ability.

As if this disparity in funding weren’t problematic enough, school districts have been suffering cutbacks in state funding year after year, resulting in growing deficits, teacher layoffs (or non-replacement of retirees), and other direct hits to instruction.

According to the “The top 10 things” document, “[a] large number of Illinois educators have been involved in the development of the assessment.” I have no idea how large a “large number” is, but I know there’s a big difference between involvement and influence. From my experience over the last 31 years, it’s quite common for people to present proposals to school boards and the public clothed in the mantle of “teacher input,” but they fail to mention that the input was diametrically opposed to the proposal.

The very fact that the document says in talking point #1 that a large number of educators (who, by the way, are not necessarily the same as teachers) were involved in PARCC’s development tells us that PARCC was not developed by educators, and particularly not by classroom teachers. In other words, this reform movement was neither initiated nor orchestrated by educators. Some undefined number of undefined “educators” were brought on board, but there’s no guarantee that they had any substantive input into the assessment’s final form, or even endorsed it. I would hope that the teachers who were involved were vocal about the pointlessness of a revised assessment when the core problems (pun intended), like inadequate funding, are not being addressed. At all.

“The top 10 things” introduction ends with “Because teachers are at the center of these changes and directly contribute to student success, the Illinois State Board of Education has compiled a list of the ten most important things for teachers to know about the new tests.” In a better world, the sentence would be Because teachers are at the center of these changes and directly contribute to student success … the Illinois State Board of Education has tasked teachers with determining the best way to assess student performance. Instead, teachers are being given a two-page handout, heavy on snazzy graphics, two to three weeks before the start of the school year. In my district, we’ve had several inservices over the past two years regarding Common Core and PARCC, but our presenters had practically no concrete information to share with us because everything was in such a state of flux; as a consequence, we left meeting after meeting no better informed than we were after the previous one. Often the new possible developments revised or even replaced the old possible developments.

The second paragraph of the introduction claims that PARCC will “provide educators with reliable data that will help guide instruction … [more so] than the current tests required by the state.” I’ve already spoken to that so-called reliable data above, but a larger issue is that this statement assumes teachers are able to analyze all that data provided by previous tests in an attempt to guide instruction. It happens, and perhaps it happens in younger grades more so than in junior high and high school, but by and large teachers are so overwhelmed with the day-to-day — minute-to-minute! — demands of the job that there’s hardly time to pore through stacks of data and develop strategies based on what they appear to be saying about each student. Teachers generally have one prep or planning period per day, less than an hour in length. The rest of the time they’re up to their dry-erase boards in kids (25 to 30 or more per class is common). In that meager prep time and whatever time they can manage beyond that, they’re writing lesson plans; grading papers; developing worksheets, activities, tests, etc.; photocopying worksheets, activities, tests, etc.; contacting or responding to parents or administrators; filling out paperwork for students with IEPs or 504s; accommodating students’ individual needs, those with documented needs and those with undocumented ones; entering grades and updating their school websites; supervising hallways, cafeterias and parking lots; coaching, advising, sponsoring, chaperoning. . . .

Don’t get me wrong. I’m a scholar as well as a teacher. I believe in analyzing data. I’d love to have a better handle on what my students’ specific abilities are and how I might best deliver instruction to meet their needs. But the reality is that that isn’t a reasonable expectation given the traditional educational model — and it’s only getting worse in terms of time demands on teachers, with larger class sizes, ever-changing technology, and — now — allegedly higher standards.

Educational reformers are so light on classroom experience they haven’t a clue how demanding a teacher’s job is at its most fundamental level. In this regard I think education suffers from the fact that so many of its practitioners are so masterful at their job that their students and parents and board members and even administrators get the impression that it must be easy. Anyone who is excellent at what she or he does makes it look easy to the uninitiated observer.

I touched on ever-changing technology a moment ago; let me return to it. PARCC is intended to be an online assessment, but, as the document points out, having it online in all schools is unrealistic, and that “goal will take a few more years, as schools continue to update their equipment and infrastructure.” The goal of its being online is highly questionable in the first place. The more complicated one makes the assessment tool, the less cognitive processing space the student has to devote to the given question or task. Remember when you started driving a car? Just keeping the darn thing on the road was more than enough to think about. In those first few hours it was difficult to imagine that driving would become so effortless that one day you’d be able to drive, eat a cheeseburger, sing along with your favorite song, and argue with your cousin in the backseat, all simultaneously. At first, the demands of driving the car dominated your cognitive processing space. When students have to use an unfamiliar online environment to demonstrate their abilities to read, write, calculate and so on, how much will the online environment itself compromise the cognitive space they can devote to the reading, writing and calculating processes?

What is more, PARCC implies that schools, which are already financially strapped and overspending on technology (technology that has never been shown to improve student learning and may very well impede it), must channel dwindling resources — whether local, state or federal — to “update their equipment and infrastructure.” These are resources that could, if allowed, be used to lower class sizes, re-staff libraries and learning centers, and offer more diverse educational experiences to students via the fine arts and other non-core components of the curriculum. While PARCC may not require, per se, schools to spend money they don’t have on technology, it certainly encourages it.

What is more still, the online nature of PARCC introduces all kinds of variables into the testing situation that are greatly minimized by the paper-and-pencil tests it is supplanting. Students will need to take the test in computer labs, classrooms and other environments that may or may not be isolated and insulated from other parts of the school, or in off-site settings. Granted, the sites of traditional testing have varied somewhat — you can’t make every setting precisely equal to every other setting — but it’s much, much easier to come much, much closer on paper than when trying to do the test online. Desktop versus laptop computers (in myriad models), proximity to Wi-Fi, speed of connection (which may vary from minute to minute), how much physical space can be inserted between test-takers — all of these are issues specific to online assessments, and they all will affect the results.

So my beef comes down to this about PARCC and the Common Core: Hundreds of millions of dollars have been spent rewording standards and developing a new assessment that won’t actually help improve education. Here’s what would help teachers teach kids:

1. Equalize funding and increase it.

2. Lower class sizes, kindergarten through 12th grade, significantly — maximum fifteen per class, except for subjects that benefit from larger classes, like music courses.

3. Treat teachers better. Stop gunning for their jobs. Stop dismantling their unions. Stop driving them from the profession with onerous evaluation tools, low pay and benefits, underfunded pensions, more students than they can possibly teach well, and ridiculous mandates that make it harder to educate kids. Just stop it.

But these common sense suggestions will never fly because no one will make any money off of them, let alone get filthy rich, and education reform is big business — the test developers, textbook companies, technology companies, and high-priced consultants will make sure the gravy train of “reform” never gets derailed. In fact, the more they can make it look like kids are underachieving and teachers are underperforming, the more secure and more lucrative their scam is.

Thus PARCC and Common Core … let the good times roll.