12 Winters Blog

The myth of ‘best practices’ in education

Posted in August 2017, Uncategorized by Ted Morrissey on August 20, 2017

Last Wednesday I began my thirty-fourth year as a schoolteacher. To be sure, teaching has changed in those years, and kids have, too — although neither as much as one might think. There is one thing, however, that has been amazingly consistent: the number of people who, year upon year, insist that I and my peers adopt a method which they bill as a “best practice” — some technique that they know will improve my teaching because, well, how could it not? It’s a best practice.

Not once — in all those innumerable workshops, inservices and presentations — has a purveyor of a best practice offered a shred of evidence that what they’re promoting will actually lead to better (let alone, the best) teaching. It’s always offered under the implied guise of common sense. It’s the epitome of the logical fallacy of begging the question: Dear Teacher, accept the fact that what you’ve been doing (whatever it may be) hasn’t been as effective as what I’m about to tell you to do. Trust me — I’m a presenter.

And teaching is, allegedly, an evidence-based profession. Schools claim that what they’re doing is “evidence-based,” but oftentimes, if there is something like evidence out there, it’s contrary to what’s being prescribed. On the one hand, I don’t really blame folks for not presenting the evidence to support their claims of the effectiveness of the practice they’re advocating, because (as I’ve written about before) testing in education is fraught with problems. It’s extremely difficult, if not impossible, to generate data which can be reliably analyzed. In any given testing situation, there are simply too many variables to control, and many of them are literally beyond the control of educators. Students are not rats confined to the tiny world of a lab where researchers can effect whatever conditions they’re studying. Imagine scientists sending their rats home each night and asking them to return the next morning for continued research; and periodically the group of rats they’ve been studying is replaced by a whole new group of rats whose histories are a total mystery. (Apologies for comparing students to rats — for what it’s worth, I like rats … and students.)

All right, so I don’t blame purveyors of best practices for not presenting their (nonexistent) evidence; however, I do blame them for suggesting, implicitly, that evidence does exist. It must, right? Otherwise how could they say some technique, some approach is “best” (or at least “better”)?

The reality is, best practices are a myth. Forget good, better, best; let’s turn, instead, to effective versus ineffective (and even that paradigm is nebulous). Effectiveness must be considered on a case-by-case basis. That is, we want all students to benefit as a result of our efforts, but what works for Bobby versus what works for Suzie on any given day at any given moment, for any given skill or knowledge acquisition, may constitute completely opposite approaches; and tomorrow the reverse may be true. And quite honestly, whether an approach is effective or ineffective may be unknowable, in the moment and even in the long term. The learning takes place in the student’s mind, and the mind is a murky, complicated place. Hopefully the skill or knowledge is identifiable and assessable (via a quiz or test or paper or project), but it may not be, especially in the humanities, which are more concerned with creative and critical applications than are the sciences or the vocational areas, where right-or-wrong, black-or-white distinctions are the rule rather than the exception.

Generally the purveyor of a best practice is able to communicate the technique in a few bullet points on a handout or a PowerPoint, but the differences — the vast differences — between grade levels, subject matters, demographics of students, backgrounds and knowledge-levels of teachers, etc., etc., etc. make such simplistic declarations ridiculous. Imagine going to an agricultural convention and telling an assembled group of farmers that you have for them a best practice, and here it is in six bullet points. You’re welcome. No matter what they’re growing, where they’re growing it, what sorts of equipment they have at their disposal, what the climate models are suggesting, how the markets are trending — This is it, brother: Just follow these six steps and your yields will be out of this world. Trust me — I’m a presenter.

The farmers would be nonplussed, to put it mildly. Plug in professionals from any other arena — business owners, attorneys, medical doctors, engineers — and the ridiculousness of it (that a single set of practices will improve what they’re doing, regardless of individual situations) becomes clear. It’s so clear, in fact, I can’t imagine any presenter doing it — telling a room full of surgeons, for instance, to do this one simple procedure all the time, no matter the patient’s history, no matter their lab work, no matter how they’re responding on the table — and yet it happens to educators all the time.

Almost without fail, techniques that are presented as best practices are observable. It’s about what you say to students or what they say to you; what you write on the chalkboard; what you write in lesson plans or curricular outlines. It simplifies the process of evaluating teachers’ performances if the evaluator can look for a few concrete actions from every teacher, from kindergarten teacher to calculus teacher, from welding teacher to reading teacher; from the teacher of gifted students to the teacher of exceptional students. It makes assessment so much simpler if everyone is singing from the same hymnal.

I deliberately used the word performances in the previous paragraph because so often that’s what evaluation boils down to: a performance for the audience-of-one, the evaluator. We often hear the term “high-stakes testing” in the media (that is, standardized tests whose results have significant consequences for test-takers and their schools), but we have also entered into a time of “high-stakes evaluating” for teachers, performance assessments which impact their literal job security. Teachers quickly learn that if their evaluator claims x, y and z are best practices, they’d better demonstrate x, y and z when they’re being observed — but quite possibly only when they’re being observed because in truth they don’t believe in the validity or the practicality of x, y and z as a rule.

In such cases, teachers are not trying to be insubordinate, or mocking, or rebellious; they’re trying to teach their charges in the most effective ways they know how (based on the training of their individual disciplines and their years of experience in the classroom), and they disagree with the practices which are being thrust upon them. Teachers do not take an oath equivalent to doctors’ Hippocratic oath, but conscientious teachers have, in essence, taken a personal and professional vow to do no harm to their students; thus they find themselves in a conundrum when their judgments about what’s effective and what isn’t are in conflict with the best practices by which they’re being evaluated. For teachers who care about how well they’re teaching — and that’s just about every teacher I’ve had the privilege to know in the last thirty-four years — it’s a source of stress and anxiety and even depression. More and more teachers every year find that the only way to alleviate that stress in their lives is to leave the profession.

Again, much of the problem is derived from the need for observable behaviors. I like to think my interactions with students in the classroom are positive and effective, but, as a teacher of literature and especially as a teacher of writing, I know my most important and most valuable work is all but invisible. My greatest strengths, I believe, are in developing questions and writing prompts that navigate students’ interactions with a text, and (even more so) in responding to the students’ work. When a student hands in an essay based on a prompt I’ve given them about a text, it is essentially a diagram of how their mind worked as they read and analyzed the text (a novel, or story, or poem, or film) — a kind of CAT scan, if you will. My task is to interpret the workings of their mind (in what ways did it work well, and in what ways did it veer off the path somewhat), and then, once I’ve interpreted their mind-at-work, to provide them comments which explain my interpretations and (here’s the really, really hard part) comments which will alter their mental processes so that next time they’ll write a more effective essay. In short, I’m trying to get them to think better and to express their thoughts better. (I should point out that to do all of this, I also have to possess a thorough understanding of the text under consideration — a text perhaps by Homer or Shakespeare or Keats or James or Joyce or Morrison.)

It’s the most important thing I do, and no one observing me in the classroom will ever see it. If my students improve in their reading and thinking and writing and speaking, it will largely be because of my ability to interact with them productively, brain to brain, on the page. The process is both invisible and essential. This is what teaching English is; this is what English teachers do. And we are not unique, by any means, in the profession. Yet our value — our very job security — is based on behaviors that are secondary or even tangential to the most profound sorts of interactions we have with our students.

I know that purveyors of best practices mean well (for-profit educational consultants aside). They are good, smart people who sincerely believe in what they’re advocating, and frequently a kernel or two of meaningful advice can be derived from the presentation, but we need to stop pretending that there’s one method that will improve all teaching, regardless of the myriad factors which come into play every time a teacher engages a group of students. It makes teaching seem simple, and teaching is many, many, many things but simple isn’t one of them.

The paradox of uniformity

Posted in April 2017, Uncategorized by Ted Morrissey on April 13, 2017

Nearly a year ago I posted “Danielson Framework criticized by Charlotte Danielson” and it has generated far more interest than I would have anticipated. As of this writing, it has been viewed more than 130,000 times. It has been shared across various platforms of social media, and cited in other people’s blogs. The post has generated copious comments, and I’ve received dozens of emails from educators — mostly from North America but beyond too. Some educators have contacted me for advice (I have little to offer), some merely to share their frustration (I can relate), others to thank me for speaking up (the wisdom of which remains dubious). To be fair, not everyone has been enthusiastic. There have been comments from administrators who feel that Charlotte Danielson (and I) threw them under the school bus. Many administrators are not devotees of the Framework either, and they are doing their best with a legislatively mandated instrument.

Before this much-read post, I’d been commenting on Danielson and related issues for a while, and those posts have received a fair amount of attention also. Literally every day since I posted about Danielson criticizing the use of her own Framework, the article has been read by at least a few people. The hits slowed down over the summer months, understandably; then picked up again in the fall — no doubt when teachers were confronted with the fact it’s their evaluation year (generally every other year for tenured teachers). Once people were in the throes of the school year, hits declined. However, beginning in February, the number of readers spiked again and has remained consistently high for weeks. Teachers, I suspect, are getting back their evaluations, and are Googling for information and solace after receiving their infuriating and disheartening Danielson-based critique. (One teacher wrote to me and said that he was graded down because he didn’t produce documentation that his colleagues think of him as an expert in the field. He didn’t know what that documentation would even look like — testimonials solicited in the work room? — and neither did I.)

It can tear the guts out of you and slacken your sails right when you need that energy and enthusiasm to finish the school year strong: get through student testing (e.g. PARCC), stroke for home on myriad learning outcomes, prepare students for advancing to the next year, and document, document, document — all while kids grow squirrelier by the minute with the advance of spring, warmer weather, and the large looming of year’s end.

But this post isn’t about any of that, at least not directly. The Danielson Framework and its unique failures are really part of a much larger issue in education, from pre-K to graduate school: something which I’ll call the drive for uniformity. I blame Business’s infiltration and parasitic takeover of Education. It’s difficult to say exactly when the parasite broke the skin and began its pernicious spread. I’ve been teaching (gulp) since 1984 (yes, English teachers were goofy with glee at the prospect of teaching Nineteen Eighty-Four in 1984, just as I was in 2001 to teach 2001 — we’re weird like that), and even then, in ’84, I was given three curriculum guides with precisely 180 pages in each; I was teaching three different courses, and each guide had a page/lesson for each day of the school year. Everyone who was teaching a particular course was expected to be doing the same thing (teaching the same concept, handing out the same handout, proctoring the same test) on the same day.

Not every school system was quite so prescriptive. I moved to another district, and, thankfully, its curriculum was much less regimented. Nevertheless, it was at that school that I vividly recall sitting in a faculty meeting and the superintendent uttering the precept “We shall do more with less.” The School Board, with his encouragement, was simultaneously cutting staff while increasing curricular requirements. English teachers, for example, were going to be required to assign twelve essays per semester (with the understanding that these would be thoroughly read, commented on, and graded in a timely fashion). At the time I had around 150 students per day. With the cuts to staff, I eventually had nearly 200 students per day. This was the mid-1990s.

The point is, that phrase — We shall do more with less — comes right out of the business world. It’s rooted in the idea that more isn’t being achieved (greater productivity, greater profits) because of superfluous workers on the factory floor. We need to cut the slackers and force everyone else to work harder, faster — and when they drop dead from exhaustion, no problem: there are all those unemployed workers who will be chomping at the bit to get their old job back (with less pay and more expectations). CEOs in the business world claimed that schools were not doing their jobs. The employees they were hiring, they said, couldn’t do math, couldn’t write, had aversions to hard work and good attendance. It must be the fault of lazy teachers, the unproductive slackers on the factory floor so to speak.

Unions stood in the way of the mass clearing of house, so the war on unions was initiated in earnest. Conservative politicians, allied with business leaders, have been chipping away at unions (education and otherwise) wherever they can, under the euphemism of “Right to Work,” implying that unions are preventing good workers from working, and securing in their places lazy ne’er-do-wells. The strategy has been effective. Little by little, state by state, protections like tenure and seniority have been removed or severely weakened. Mandates have increased, while funds have been decreased or (as in Illinois) outright withheld, starving public schools to death. The frustrations of stagnant wages, depleted pensions, and weakened job security have been compounded by unfair evaluation instruments like the Danielson Framework.

A telltale sign of business’s influence is the drive for uniformity. One of the selling points of the Danielson Framework was that it can be applied to all teachers, pre-K through 12th grade, and even professionals outside the classroom, like librarians and nurses. Its one-size-fits-all approach sounds efficient and therefore appeals to legislators. Danielson is just one example, however. We see it everywhere. Teaching consultants who offer a magic bullet that will guarantee all students will learn, no matter the subject, grade level, or ability. Because, of course, teaching kindergarteners shapes is the same as teaching high school students calculus. Special education and physical education … practically the same thing (they sound alike, after all). Art and band … peas in a pod (I mean, playing music is a fine art, isn’t it? Duh.).

And the drive for uniformity has not been limited to K-12 education. Universities have been infected, too. All first-year writing students must have the same experience (or so it seems): write the same essays, read the same chapters in the same textbook, have their work evaluated according to the same rubric, etc., etc. Even syllabi have to be uniform: they have to contain the same elements, in the same order, reproduce the same university policies, even across departments. The syllabus for a university course is oftentimes dozens of pages long, and only a very small part of it is devoted to informing the students what they need to do from week to week. The rest is for accreditation purposes, apparently. And the uniformity in requirements and approaches helps to generate data (which outcomes are being achieved, which are not, that kind of thing).

It all looks quite scientific. You can generate spreadsheets and bar graphs, showing where students are on this outcome versus that outcome; how this group of students compares to last year’s group; make predictions; justify (hopefully) expenditures. It’s the equivalent of the much-publicized K-12 zeal for standardized testing, which gives birth to mountains of data — just about all of which is ignored once produced, which is just as well because it’s all but meaningless. People ignore the data because they’re too busy teaching just about every minute of every day to sift through the voluminous numbers; and the numbers are all but meaningless because they only look scientific, when in fact they aren’t scientific at all. (I’ve written about this, too, in my post “The fallacy of testing in education.”)

But this post isn’t about any of those things either.

It’s about the irony of uniformity, or the paradox of it, as I call it in my title. Concurrent with the business-based drive for uniformity has been the alleged drive for higher standards: more critical thinking, increased expectations, a faster track to skill achievement. Yet uniformity is the antithesis of higher standards. We’re supposed to have more rigor in our curricula, but coddle our charges in every other way.

We can’t expect students to deal with teachers who have varying classroom methods. We can’t expect them to adjust to different ways of grading. We can’t expect them to navigate differences in syllabi construction, teacher webpage design, or even the use of their classroom’s whiteboard. We can’t expect students to understand synonyms in directions, thus teachers must confine themselves to a limited collection of verbs and nouns when writing assignments and tests (for instance, we must all say “analyze” in lieu of “examine” or “consider” — all those different terms confuse the poor darlings). This is a true story: A consultant who came to speak to us about the increased rigor of the PARCC exam also advised us to stop telling our students to “check the box” on a test, because it’s actually a “square” and some students may be confused by looking for the three-dimensional “box” on the page. What?

But are these not real-world critical-thinking situations? Asking students to adapt to one teacher’s methodology versus another? Requiring students to follow the logic of an assignment written in this style versus that (or that … or that)? Having students adjust their schoolwork schedules to take into account different rhythms of due dates from teacher to teacher?

How often in our post-education lives are we guaranteed uniformity? There is much talk about getting students “career-ready” (another business world contribution to education), yet in our professional careers how much uniformity is there? If we’re dealing with various customers or clients, are they clones? Or are we expected to adjust to their personalities, their needs, their pocketbooks? For that matter, how uniform are our superiors? Perhaps we’re dealing with several managers or owners or execs. I’ll bet they’d love to hear how we prefer the way someone else in the organization does such and such, and wouldn’t they please adjust their approach to fit our preferences? That would no doubt turn into a lovely day at work.

I’ve been teaching for 33 years, and over that time I’ve worked under, let’s see, seven building principals (not to mention different superintendents and other administrators). Not once has it seemed like a good idea to let my current principal know how one of his predecessors handled a given situation in the spirit of encouraging his further reflection on the matter. Clearly I am the one who must adapt to the new style, the new approach, the new philosophy.

These are just a few examples of course. How much non-uniformity do we deal with every day, professionally and personally? An infinite amount is the correct answer. So, how precisely are we better preparing our students for life after formal education by making sure our delivery systems are consistently cookie-cutter? We aren’t is the correct answer. (Be sure to check the corresponding squares.)

Education has made the mistake of allowing Business to infect it to the core (to the Common Core, as a matter of fact). Now Business has taken over the White House, and it’s taken over bigly.

But this blog post isn’t about that.

Interview with Megan Sullivan: Clarissa’s Disappointment

Posted in February 2017, Uncategorized by Ted Morrissey on February 12, 2017

My wife Melissa and I launched Shining Hall, an imprint of Twelve Winters Press, in 2015 in large part because we know how important children’s literature can be in helping children achieve sound emotional health. Many children struggle with issues of depression and anxiety that impact their developing self-esteem. As a society we’re aware of the angst of teenagers, but we rarely pay much attention to younger children and the emotional struggles they may be facing every day. In an effort to expand Shining Hall’s list, we established the Larry D. Underwood Prize for Children’s Literature (named for Melissa’s father who was an educator and a prolific author).

We received several terrific entries, but one stood out above the rest: Clarissa’s Disappointment, with Resources for Families, Teachers and Counselors of Children of Incarcerated Parents, by Megan Sullivan. Melissa, the contest judge, loved the book. She said, “Clarissa’s Disappointment is exactly the kind of writing I want Shining Hall to publish. It perfectly captures an issue that affects so many children and families, yet largely remains unaddressed in our educational system. Incarceration does nothing to help individuals and contributes to the destruction of families. I chose this wonderful book hoping that it will be read and used by adults and children to begin the healing process at some level. My father, Larry Underwood, dedicated his life to children and would have loved to meet Meg, read her book, and share in the transformation that is Shining Hall.”

When Melissa shared the manuscript with me, I especially loved the fact it was in essence two books in one: a unique children’s story and a resource for adults who are trying to help children deal with having a parent in prison. Twelve Winters has specialized in hard-to-pigeonhole books that draw from multiple genres — a characteristic which may make them unacceptable to other publishers, especially commercial publishers.

In the spring of 2016 we sent Meg the good news that her book had been chosen for the Prize and we would be publishing it in print and digital formats. The only obstacle was that the book needed illustrations. Luckily, Meg thought she knew of someone who would be perfect for the job: Daniel Jay. Dan was interested, and throughout the summer he worked on illustrations for the book. Then in the fall and winter, Meg and I collaborated on editing and producing Clarissa’s Disappointment.

I’m pleased to announce that Clarissa’s Disappointment was published February 6, 2017, and will be available everywhere. It’s become something of a tradition that when the Press releases a new book, I interview the author via email and publish it here on my 12 Winters blog. Thus I sent Meg some questions, and what follows are her unedited responses.

CGS Prof. Megan Sullivan

What was your motivation for writing Clarissa’s Disappointment?

My motivation for writing Clarissa’s Disappointment was at least threefold. First, I believe such a book would have helped me when my father was incarcerated. I recall that when I was a middle-schooler, I read a book where the main protagonist, a boy, had a father in prison. I nearly gobbled that book up, because it felt to me that someone understood my predicament. I wrote Clarissa’s Disappointment in part because I wanted to offer that solace to others. I also wrote it because there are not many children’s books that focus on incarceration and none that I know of that feature what is called the “reentry period,” or that period of time when a formerly incarcerated person returns home to his community and family. It bothered me that the 2.7 million minor children who currently have parents in prison or jail, as well as the untold number whose parents have been incarcerated in the United States, might not be seeing their lives in print. Finally, I wrote the book because I could not get the voice of Clarissa out of my head.

(Front cover of Clarissa’s Disappointment)

You’ve mainly done academic writing. How easy or difficult was it to transition into writing children’s literature?

It didn’t feel like much of a transition for me. Perhaps this is because around the same time I began conducting research on children with incarcerated parents, I also started writing what would become Clarissa’s Disappointment. It could also be that it didn’t feel like much of a transition because I see the primary purpose of all writing as about being the best writer one can be. I tend to think less about genres and more about doing the best I can for the kind of writing I’m doing.

Up to the very last, you were tinkering with the text to get Clarissa’s narrative voice just right. Tell us about that process, of creating the voice of this little girl.

Yes, I so wanted to get Clarissa’s voice right. The tricky thing was that because the book is both a fictional story and a resource for others, it was sometimes hard to separate the voice of the child from the voice of the adult. When I was writing I literally had Clarissa’s voice in my head. I imagined what she looked like and how she spoke. I imagined how she moved and thought and wrote, and I tried to convey all of this. Because Clarissa’s story is informed by my own, I was also conscious not to conflate Clarissa’s voice with my voice.

In addition to Clarissa’s story, you’ve included resources for families, teachers and counselors of children of incarcerated parents. Where did you draw from for these resources? Why did you think it was important to create a book that is essentially two books in one?

A huge shout out to Twelve Winters Press! Who else would have taken on this challenge of two books in one? I couldn’t be more pleased. I also feel incredibly honored and humbled that Melissa Morrissey chose the book for the Underwood Prize. This award is special to me in part because Melissa is a teacher; that she “gets it” is a huge vote of confidence.

Often those who are tasked with or have the potential to talk to children whose parents are incarcerated know too little about the topic to be helpful. A school counselor might be sympathetic to the plight of a child whose parents are no longer living together, but will he/she know how to respond to questions about visiting a prison? Families might know how they feel about a loved one who is in prison or jail, but do they know the best way to discuss this with children? Teachers and school librarians want to help children find that “just right” book, but maybe they too would like to know more about how to choose a book with the needs of children whose parents are incarcerated in mind. Furthermore, there is professional literature out there for counselors, teachers and others, and there are some books about incarceration for children, but I felt that combining the two would bring children and adults together in a way that could be especially powerful.

How did you find the illustrator, Daniel Jay? Describe that collaborative process.

I have long been enamored of Dan Jay’s work, especially his urban street and market scenes. I also appreciate that Dan is a scientist by training (he runs a lab at Tufts University), and has spent much of his career teaching others about the connections between art and science. He understands deeply the relationships between art and science, writing and life, teaching and reading. Dan and I are also friends, and even though some might caution against working with a friend, we didn’t have any problems.

Creating the illustrations forced you (all of us) to commit to Clarissa’s ethnicity, and you hesitated somewhat (if I recall) to make Clarissa African-American but ultimately decided to. Tell us about that thought process and why you decided to make the Pettigrews a black family.

I have always imagined Clarissa as an African-American or bi-racial child. I also know African-American children are disproportionately affected by parental incarceration. For these and other reasons I couldn’t imagine yet another book that failed to showcase children of color as the center of their universe. And yet as a white woman I did not want to appropriate another person’s experience; nor did I want to perpetuate a stereotype about children of color (i.e. that their parents are the only people in prison or jail). Ultimately I think I did the right thing, because Clarissa is the character I imagined, and I feel like I remained true to her. Yet I think I was correct to at least consider the tension, and it helped me to talk about this with you, Ted. I think writers are correct to acknowledge the tension.

You seem to have great respect for reading and writing (perhaps, especially, reading and writing poetry) for their therapeutic value. Is that true, and if so, where does that respect stem from?

I do respect the potential therapeutic benefits of reading and writing. I’m sure that partly this is because both are therapeutic for me and always have been, though I’ve never been much of a diary-keeper. I think the written word endures because it has something to tell us as readers, and I know writing helps us think about what we believe and how we feel. Maybe this is particularly true in the case of children’s books. I can remember being both transported and grounded by books as a child, and I think it would be wonderful if we could offer others the same opportunity.

What are your hopes for Clarissa’s Disappointment and its resources? How do you hope it will be used? How important will networking be in getting it into the hands of both children who may enjoy it and benefit from reading it, and also the adult professional audience that you’re targeting?

My dream is that Clarissa’s Disappointment will be in as many school and classroom libraries as possible. I also hope families and counselors and organizations that work with children will buy the book to have on hand. I think I will have to be a huge networker to make this happen, and luckily I’m up for the challenge. I feel like I’ve got this thing that I believe in without reservation and that I feel nearly as zealous about as one might a religion! I’m hoping to visit schools and do readings and talk about the book to anyone who will listen, and maybe even those who don’t want to listen!

My wife and I recently watched the documentary 13th. It wasn’t, of course, totally new information, but the scope of the problem is astonishing, depressing, rage-provoking. I presume you’re familiar with the film. What is your reaction to it, especially in terms of what it means about the number of children who are dealing with having one or both parents in prison?

13th is rage-provoking, and you are correct that it brings to mind the sheer number of children who are affected. We know that currently there are 2.7 million minor children who have an incarcerated parent in the United States, and we know that millions more have experienced parental incarceration. And yet I think what 13th should also make us ponder is that all our children have been impacted by incarceration. What today we call mass incarceration has hurt all our families and communities.

What are some other projects you have in the works? Other children’s stories? Academic projects?

My next book will be about the Irish writer Maeve Brennan. In 1934 Brennan’s father was the first Irish minister to the United States. When the family returned to Ireland, Maeve stayed and made her career as a journalist and fiction writer. She wrote for The New Yorker from the 1950s through about 1980. The New Yorker published many of her short stories, and two collections of her writing were published while she was alive; more of her work was published after her death. Brennan is often remembered for how she died (i.e. penniless and mentally ill), but her prose is among the finest of twentieth-century women writers, and I want to celebrate that.



Megan Sullivan is co-editor of Parental Incarceration: Personal Accounts and Developmental Impact and the author of many essays and articles. She was awarded the Anthony Award in Prose from Between the Lines Literary Review for her essay “My Father’s Prison.” She is an associate dean and associate professor at Boston University. Megan was ten years old when her father was incarcerated. (Author photo copyright © 2009 Boston University Photo Services)

Daniel Jay is an adjunct professor at the School of the Museum of Fine Arts and is a professor at Tufts University School of Medicine. He is a nationally recognized artist whose mission is to inspire where art and science meet. He has had a number of solo shows, including the Boston Convention Centre and the French Cultural Center. (Illustrator photo copyright © 2014 Kelvin Ma)

Danielson Framework criticized by Charlotte Danielson

Posted in April 2016, Uncategorized by Ted Morrissey on April 27, 2016

I’ve been writing about the Danielson Framework for Teacher Evaluation for a couple of years, and in fact my “Fatal Flaws of the Danielson Framework” has been my most read and most commented on post, with over 5,000 hits to date. I’ve also been outspoken about how administrators have been misusing the Framework, resulting in demoralized teachers and unimproved (if not diminished) performance in the classroom. (See in particular “Principals unwitting soldiers in Campbell Brown’s army” and “Lowered teacher evaluations require special training.”) At present, teachers are preparing — at great time and expense — to embark on the final leg of the revamped teacher evaluation method with the addition of student performance into the mix (see ISBE’s “Implementing the Student Growth Component in Teacher and Principal Evaluation”). I’ve also written about this wrongheaded development: “The fallacy of testing in education.”

Imagine my surprise when I discovered an unlikely ally in my criticism of Charlotte Danielson’s much lauded approach: Charlotte Danielson herself. The founder of the Danielson Framework published an article in Education Week (April 18 online) that called for the “Rethinking of Teacher Evaluation,” and I found myself agreeing with almost all of it — or, more accurately and more egocentrically, I found Charlotte Danielson agreeing with me, for she is the one who has changed her tune.

My sense is that Ms. Danielson is reacting to widespread dissatisfaction among teachers and principals with the evaluation process that has been put in place which is based on her Danielson Framework. Her article appeared concurrently with a report from The Network for Public Education based on a survey of nearly 3,000 educators in 48 states which is highly critical of changes in teacher evaluation and cites said changes as a primary reason for teachers exiting the profession in droves and for young people choosing not to go into education in the first place. For example, the report states, “Evaluations based on frameworks and rubrics, such as those created by Danielson and Marzano, have resulted in wasting far too much time. This is damaging the very work that evaluation is supposed to improve . . .” (p. 2).

Ms. Danielson does not, however, place blame on her Framework, at least not directly. She does state what practically all experienced teachers have known all along when she writes, “I’m deeply troubled by the transformation of teaching from a complex profession requiring nuanced judgment to the performance of certain behaviors that can be ticked off a checklist.” This is a change from her earlier comments, when she said that good teaching could be easily defined and identified. In a 2012 interview, Ms. Danielson said that her assessment techniques are “not like rocket science,” whereas “[t]eaching is rocket science. Teaching is really hard work. But doing that [describing what teaching “looks like in words”] isn’t that big a deal. Honestly, it’s not. But nobody had done it.”

Instead of her Framework, then, Ms. Danielson places the lion’s share of the blame with state legislators who oversimplified her techniques via their adoptions, and — especially — with administrators who are not capable of using the Framework as it was intended. She writes, “[F]ew jurisdictions require their evaluators to actually demonstrate skill in making accurate judgments. But since evaluators must assign a score, teaching is distilled to numbers, ratings, and rankings, conveying a reductive nature to educators’ worth and undermining their overall confidence in the system.”

Amen, Sister Charlotte! Testify, girlfriend!

Ms. Danielson’s critique of administrators is a valid one, especially considering that evaluators were programmed, during their Danielson training, to view virtually every teacher as less than excellent, which put even the best-intentioned evaluators in a nitpicking mode, looking for any reason, no matter how immaterial to effective teaching, to find a teacher lacking and score them “proficient” instead of “excellent.” In her criticism of administrators Ms. Danielson has touched upon what is, in fact, a major shortcoming of our education system: The road to becoming an administrator is not an especially rigorous one — especially when it comes to academic rigor — and once someone has achieved administrative status, there tends to be no apparatus in place to evaluate their performance, including (as Ms. Danielson points out) their performance in evaluating their teachers.

Provided that administrators can keep their immediate superior (if any) content, as well as the seven members of the school board (who are almost never educators themselves), they can appear to be effective. That is, as long as administrators do not violate the terms of the contract, and as long as they are not engaging in some form of obvious harassment, teachers have no way of lodging a complaint or even offering constructive criticism. Therefore, if administrators are using the Danielson Framework as a way of punishing teachers — giving them undeservedly reduced evaluations and thus exposing them to the harms that can befall them, including losing their job regardless of seniority —  there is no way for teachers to protect themselves. They cannot appeal an evaluation. They can write a letter to be placed alongside the evaluation explaining why the evaluation is unfair or invalid, but their complaint does not trigger a review of the evaluation. The evaluator’s word is final.

According to the law of averages, not all administrators are excellent; and not all administrators use the evaluation instrument (Danielson or otherwise) excellently. Some administrators are average; some are poor. Some use the evaluation instrument in a mediocre way; some use it poorly. Hence you can quite easily have an entire staff of teachers whose value to the profession is completely distorted by a principal who is, to put it bluntly, bad at evaluating. And there’s not a thing anyone can do about it.

Another crucial point that Charlotte Danielson makes in her Education Week article is that experienced teachers should not be evaluated via the same method as teachers new to the field: “An evaluation policy must be differentiated according to whether teachers are new to the profession or the district, or teach under a continuing contract. . . . Once teachers acquire this status [i.e. tenure], they are full members of the professional community, and their principal professional work consists of ongoing professional learning.” In other words, experienced teachers, with advanced degrees in their content area and a long list of professional accomplishments, shouldn’t be subjected to the same evaluation procedure as someone who is only beginning their career and has much to learn.

In fact, using the same evaluation procedure creates a very odd dynamic: You oftentimes have an administrator who has had only a limited amount of classroom experience (frequently fewer than ten years, and perhaps only two or three) and whose only advanced degree is the one that allows them to be an administrator (for which they mainly study things like school law and school finance), sitting in judgment of a teacher who has spent twenty or thirty years honing their teaching skills and who has an advanced degree in their subject area. What can the evaluator possibly say in their critique that is meaningful and appropriate? It is commonplace to find this sort of situation: A principal who was a physical education or driver’s education teacher, for perhaps five years, is now sitting in an Advanced Placement Chemistry classroom evaluating a twenty-year veteran with a master’s degree or perhaps even a Ph.D. in chemistry. The principal feels compelled to find something critical to say, so all they can do is nitpick. They can’t speak to anything of substance.

What merit can there be in a system that makes evaluators omnipotent judges of teachers in subject areas that the evaluators themselves literally are not qualified to teach? It isn’t that veteran teachers don’t have anything to learn. Far from it. Teaching is a highly dynamic, highly challenging occupation; and the successful teacher is constantly learning, growing, self-reflecting, and networking with professional peers. The successful principal makes space for the teacher to teach and for the student to learn, and they protect that space from encroachment by anyone whose design is to impede that critical exchange.

Ms. Danielson offers this alternative to the current approach to evaluation: “An essential step in the system should be the movement from probationary to continuing status. This is the most important contribution of evaluation to the quality of teaching. Beyond that, the emphasis should be on professional learning, within a culture of trust and inquiry. . . . Experienced teachers in good standing should be eligible to apply for teacher-leadership positions, such as mentor, instructional coach, or team leader.”

Ironically, what Ms. Danielson is advocating is a return to evaluation as most teachers knew it prior to adoption of the Danielson Framework.

(Grammar alert: I have opted to use the gender-neutral pronouns they and their etc. even when they don’t agree in number with their antecedents.)

The fallacy of testing in education

Posted in October 2015 by Ted Morrissey on October 18, 2015

For the last several years education reformers have been preaching the religion of testing as the lynchpin to improving education (meanwhile offering no meaningful evidence that education is failing in the first place). Last year, the PARCC test (Partnership for Assessment of Readiness for College and Careers) made its maiden voyage in Illinois. Now teachers and school districts are scrambling to implement phase II of the overhaul of the teacher evaluation system begun two years before by incorporating student testing results into the assessment of teachers’ effectiveness (see the Guidebook on Student Learning Objectives for Type III Assessments). Essentially, school districts have to develop tests, kindergarten through twelfth grade, that will provide data which will be used as a significant part of a teacher’s evaluation (possibly constituting up to 50 percent of the overall rating).

To the public at large — that is, to non-educators — this emphasis on results may seem reasonable. Teachers are paid to teach kids, so what’s wrong with seeing if taxpayers are getting their money’s worth by administering a series of tests at every grade level? Moreover, if these tests reveal that a teacher isn’t teaching effectively, then what’s wrong with using recently weakened tenure and seniority laws to remove “bad teachers” from the classroom?

Again, on the surface, it all sounds reasonable.

But here’s the rub: The data generated by PARCC — and every other assessment — is all but pointless. To begin with, the public at large makes certain tacit assumptions: (1) The tests are valid assessments of the skills and knowledge they claim to measure; (2) the testing circumstances are ideal; and (3) students always take the tests seriously and try to do their best.

But none of these assumptions are true most of the time — and I would go so far as to say that all of them being true for every student, for every test, practically never happens. In other words, when an assessment is given, either the assessment itself is invalid, and/or the testing circumstances are less than ideal, and/or nothing is at stake for students so they don’t try their best (in fact, it’s not unusual for students to deliberately sabotage their results).

For simplicity’s sake, let’s look at the PARCC test (primarily) in terms of these three assumptions; and let’s restrict our discussion to validity (mainly). There have been numerous critiques of the test itself that point out its many flaws (see, for example here; or here; or here). But let’s just assume PARCC is beautifully designed and actually measures the things it claims to measure. There are still major problems with its data’s validity. Chief among the problems is the fact that there are too many factors beyond a district’s and — especially — a classroom teacher’s control to render the data meaningful.

For the results of a test — any test — to be meaningful, the test’s administrator must be able to control the testing circumstances to eliminate (or at least greatly reduce) factors which could influence and hence skew the results. Think about when you need to have your blood or urine tested — to check things like blood sugar or cholesterol levels — and you’re required to fast for several hours beforehand to help ensure accurate results. Even a cup of tea or a glass of orange juice could throw off the process.

That’s an example that most people can relate to. If you’ve had any experience with scientific testing, you know what lengths have to be gone to in hopes of garnering unsullied results, including establishing a control group — that is, a group that isn’t subjected to whatever is being studied, to see how it fares in comparison to the group receiving whatever is being studied. In drug trials, for instance, one group will receive the drug being tested, while the control group receives a placebo.

Educational tests rarely have control groups — a group of children from whom instruction or a type of instruction is withheld to see how they do compared to a group that’s received the instructional practices intended to improve their knowledge and skills. But the lack of a control group is only the beginning of testing’s problems. School is a wild and woolly place filled with human beings who have complicated lives, and countless needs and desires. Stuff happens every day, all the time, that affects learning. Class size affects learning, class make-up (who’s in the class) affects learning, the caprices of technology affect learning, the physical health of the student affects learning, the mental health of the student affects learning, the health of the teacher affects learning (and in upper grades, each child has several teachers), the health and circumstances of the student’s parents and siblings affect learning, weather affects learning (think “snow days” and natural disasters); sports affects learning (athletes can miss a lot of school, and try teaching when the school’s football or basketball team is advancing toward the state championship); ____________ affects learning (feel free to fill in the blank because this is only a very partial list).

And let me say what no one ever seems to want to say: Some kids are just plain brighter than other kids. We would never assume a child whose DNA renders them five-foot-two could be taught to play in the NBA; or one whose DNA makes them six-foot-five and 300 pounds could learn to jockey a horse to the Triple Crown. Those statements are, well, no-brainers. Yet society seems to believe that every child can be taught to write a beautifully crafted research paper, or solve calculus problems, or comprehend the principles of physics, or grasp the metaphors of Shakespeare. And if a child can’t, then it must be the lazy teacher’s fault.

What is more, let’s look at that previous sentence: the lazy teacher’s fault. Therein lies another problem with the reformers’ argument for reform. The idea is that if a student underachieves on an exam, it must be the fault of the one teacher who was teaching that subject matter most recently (i.e., that school year). But learning is a synergistic effect. Every teacher who has taught that child previously has contributed to their learning, as have their parents, presumably, and the other people in their lives, and the media, and on and on. But let’s just stay within the framework of school. What if a teacher receives a crop of students who’d been taught the previous year by a first-year teacher (or a student teacher, or a substitute teacher who was standing in for someone on maternity or extended-illness leave), versus a crop of students who were taught by a master teacher with an advanced degree in their subject area?

Surely — if we accept that teaching experience and education contribute to teacher effectiveness — we would expect the students taught by a master teacher to have a leg up on the students who happened to get a newer, less seasoned, less educated teacher. So, from the teacher’s perspective, students are entering their class more or less adept in the subject depending on the teacher(s) they’ve had before. When I taught in southern Illinois, I was in a high school that received students from thirteen separate, curricularly disconnected districts, some small and rural, some larger and more urban — so the freshman teachers, especially, had an extremely diverse group, in terms of past educational experiences, on their hands.

For several years I’ve been an adjunct lecturer at University of Illinois Springfield, teaching in the first-year writing program. UIS attracts students from all over the state, including from places like Chicago and Peoria, in addition to students from nearby rural schools, and everything in between (plus a significant number of international students, especially from India and China). In the first class session I have students write a little about themselves — just answer a few questions on an index card. Leafing through those cards I can quickly get a sense of the quality of their educational backgrounds. Some students are coming from schools with smaller classes and more rigorous writing instruction, some from schools with larger classes and perhaps no writing instruction. The differences are obvious. Yet the expectation is that I will guide them all to be competent college-level writers by the end of the semester.

The point here, of course, is that when one administers a test, the results can provide a snapshot of the student’s abilities — but it’s providing a snapshot of abilities that were shaped by uncountable and largely uncontrollable factors. How, then, does it make sense (or, how, then, is it fair) to hang the results around an individual teacher’s neck — either Olympic-medal-like or albatross-like, depending?

As I mentioned earlier, validity is only one issue. Others include the circumstances of the test, and the student’s motivation to do well (or their motivation to do poorly, which is sometimes the case). I don’t want to turn this into the War and Peace of blog posts, but I think one can see how the setting of the exam (the time of day, the physical space, the comfort level of the room, the noise around the test-taker, the performance of the technology [if it’s a computer-based exam like the PARCC is supposed to be]) can impact the results. Then toss in the fact that most of the many exams kids are (now) subjected to have no bearing on their lives — and you have a recipe for data that has little to do with how effectively students have been taught.

So, are all assessments completely worthless? Of course not — but their results have to be examined within the complex context they were produced. I give my students assessments all the time (papers, projects, tests, quizzes), but I know how I’ve taught them, and how the assessment was intended to work, and what the circumstances were during the assessment, and to some degree what’s been going on in the lives of the test-takers. I can look at their results within this web of complexities, and draw some working hypotheses about what’s going on in their brains — then adjust my teaching accordingly, from day to day, or semester to semester, or year to year. Some adjustments seem to work fairly well for most students, some not — but everything is within a context. I know to take some results seriously, and I know to disregard some altogether.

Mass testing doesn’t take into account these contexts. Even tests like the ACT and SAT, which have been administered for decades, are only considered as a piece of the whole picture when colleges are evaluating a student’s possible acceptance. Other factors are weighed too, like GPA, class rank, teacher recommendations, portfolios, interviews, and so on.

What does all this mean? One of the things it means is that teachers and administrators are frustrated with having to spend more and more time testing, and more and more time prepping their students for the tests — and less and less time actually teaching. It’s no exaggeration to say that several weeks per year, depending on the grade level and an individual school’s zeal for results, are devoted to assessment.

The goal of assessment is purported to be to improve education, but the true goals are to make school reform big business for exploitative companies like Pearson, and for the consultants who latch onto the movement remora-like, for example, Charlotte Danielson and the Danielson Group; and to implement the self-fulfilling prophecy of school and teacher failure.

(Note that I have sacrificed grammatical correctness in favor of non-gendered pronouns.)

Here’s my beef with PARCC and the Common Core

Posted in August 2014, Uncategorized by Ted Morrissey on August 9, 2014

Beginning this school year students in Illinois will be taking the new assessment known as PARCC (Partnership for Assessment of Readiness for College and Careers), which is also an accountability measure — meaning that it will be used to identify the schools (and therefore teachers) who are doing well and the ones who are not, based on their students’ scores. In this post I will be drawing from a document released this month by the Illinois State Board of Education, “The top 10 things teachers need to know about the new Illinois assessments.” PARCC is intended to align with the Common Core, which around here has been rebranded as the New Illinois Learning Standards Incorporating the Common Core (clearly a Madison Avenue PR firm wasn’t involved in selecting that name — though I’m surprised funds weren’t allocated for it).

This could be a very long post, but I’ll limit myself to my main issues with PARCC and the Common Core. The introduction to “The top 10 things” document raises some of the most fundamental problems with the revised approach. It begins, “Illinois has implemented new, higher standards for student learning in all schools across the state.” Let’s stop right there. I’m dubious that rewording the standards makes them “higher,” and from an English/language arts teacher perspective, the Common Core standards aren’t asking us to do anything different from what we’ve been doing since I started teaching in 1984. There’s an implied indictment in the opening sentence, suggesting that until now, the Common Core era, teachers haven’t been holding students to particularly high standards. I mean, logically, if there was space into which the standards could be raised, then they had to be lower before Common Core. It’s yet another iteration of the war-cry: Teachers, lazy dogs that they are, have been sandbagging all these years, and now they’re going to have to up their game — finally!

Then there’s the phrase “in all schools across the state,” that is, from the wealthiest Chicago suburb to the poorest downstate school district, and this idea gets at one of the biggest problems — if not the biggest — in education: grossly inequitable funding. We know that kids from well-to-do homes attending well-to-do schools do significantly better in school — and on assessments! — than kids who are battling poverty and all of its ill-effects. Teachers associations (aka, unions) have been among the many groups advocating to equalize school funding via changes to the tax code and other laws, but money buys power and powerful interests block funding reform again and again. So until the money being spent on every student’s education is the same, no assessment can hope to provide data that isn’t more about economic circumstances than student ability.

As if this disparity in funding weren’t problematic enough, school districts have been suffering cutbacks in state funding year after year, resulting in growing deficits, teacher layoffs (or non-replacement of retirees), and other direct hits to instruction.

According to the “The top 10 things” document, “[a] large number of Illinois educators have been involved in the development of the assessment.” I have no idea how large a “large number” is, but I know there’s a big difference between involvement and influence. From my experience over the last 31 years, it’s quite common for people to present proposals to school boards and the public clothed in the mantle of “teacher input,” but they fail to mention that the input was diametrically opposed to the proposal.

The very fact that the document says in talking point #1 that a large number of educators (who, by the way, are not necessarily the same as teachers) were involved in PARCC’s development tells us that PARCC was not developed by educators, and particularly not by classroom teachers. In other words, this reform movement was neither initiated nor orchestrated by educators. Some undefined number of undefined “educators” were brought on board, but there’s no guarantee that they had any substantive input into the assessment’s final form, or even endorsed it. I would hope that the teachers who were involved were vocal about the pointlessness of a revised assessment when the core problems (pun intended), like inadequate funding, are not being addressed. At all.

“The top 10 things” introduction ends with “Because teachers are at the center of these changes and directly contribute to student success, the Illinois State Board of Education has compiled a list of the ten most important things for teachers to know about the new tests.” In a better world, the sentence would be Because teachers are at the center of these changes and directly contribute to student success … the Illinois State Board of Education has tasked teachers with determining the best way to assess student performance. Instead, teachers are being given a two-page handout, heavy on snazzy graphics, two to three weeks before the start of the school year. In my district, we’ve had several inservices over the past two years regarding Common Core and PARCC, but our presenters had practically no concrete information to share with us because everything was in such a state of flux; as a consequence, we left meeting after meeting no better informed than we were after the previous one. Often the new possible developments revised or even replaced the old possible developments.

The second paragraph of the introduction claims that PARCC will “provide educators with reliable data that will help guide instruction … [more so] than the current tests required by the state.” I’ve already spoken to that so-called reliable data above, but a larger issue is that this statement assumes teachers are able to analyze all that data provided by previous tests in an attempt to guide instruction. It happens, and perhaps it happens in younger grades more so than in junior high and high school, but by and large teachers are so overwhelmed with the day-to-day — minute-to-minute! — demands of the job that there’s hardly time to pore through stacks of data and develop strategies based on what they appear to be saying about each student. Teachers generally have one prep or planning period per day, less than an hour in length. The rest of the time they’re up to their dry-erase boards in kids (25 to 30 or more per class is common). In that meager prep time and whatever time they can manage beyond that, they’re writing lesson plans; grading papers; developing worksheets, activities, tests, etc.; photocopying worksheets, activities, tests, etc.; contacting or responding to parents or administrators; filling out paperwork for students with IEPs or 504s; accommodating students’ individual needs, those with documented needs and those with undocumented ones; entering grades and updating their school websites; supervising hallways, cafeterias and parking lots; coaching, advising, sponsoring, chaperoning. . . .

Don’t get me wrong. I’m a scholar as well as a teacher. I believe in analyzing data. I’d love to have a better handle on what my students’ specific abilities are and how I might best deliver instruction to meet their needs. But the reality is that that isn’t a reasonable expectation given the traditional educational model — and it’s only getting worse in terms of time demands on teachers, with larger class sizes, ever-changing technology, and — now — allegedly higher standards.

Educational reformers are so light on classroom experience they haven’t a clue how demanding a teacher’s job is at its most fundamental level. In this regard I think education suffers from the fact that so many of its practitioners are so masterful at their job that their students and parents and board members and even administrators get the impression that it must be easy. Anyone who is excellent at what she or he does makes it look easy to the uninitiated observer.

I touched on ever-changing technology a moment ago; let me return to it. PARCC is intended to be an online assessment, but, as the document points out, having it online in all schools is unrealistic, and that “goal will take a few more years, as schools continue to update their equipment and infrastructure.” The goal of its being online is highly questionable in the first place. The more complicated one makes the assessment tool, the less cognitive processing space the student has to devote to the given question or task. Remember when you started driving a car? Just keeping the darn thing on the road was more than enough to think about. In those first few hours it was difficult to imagine that driving would become so effortless that one day you’d be able to drive, eat a cheeseburger, sing along with your favorite song, and argue with your cousin in the backseat, all simultaneously. At first, the demands of driving the car dominated your cognitive processing space. When students have to use an unfamiliar online environment to demonstrate their abilities to read, write, calculate and so on, how much will the online environment itself compromise the cognitive space they can devote to the reading, writing and calculating processes?

What is more, PARCC implies that schools, which are already financially strapped and overspending on technology (technology that has never been shown to improve student learning and may very well impede it), must channel dwindling resources — whether local, state or federal — to “update their equipment and infrastructure.” These are resources that could, if allowed, be used to lower class sizes, re-staff libraries and learning centers, and offer more diverse educational experiences to students via the fine arts and other non-core components of the curriculum. While PARCC may not require, per se, schools to spend money they don’t have on technology, it certainly encourages it.

What is even more, the online nature of PARCC introduces all kinds of variables into the testing situation that are greatly minimized by the paper-and-pencil tests it is supplanting. Students will need to take the test in computer labs, classrooms and other environments that may or may not be isolated and insulated from other parts of the school, or even in off-site settings. Granted, the sites of traditional testing have varied somewhat — you can’t make every setting precisely equal to every other setting — but it’s much, much easier to come much, much closer than when trying to do the test online. Desktop versus laptop computers (in myriad models), proximity to Wi-Fi, speed of connection (which may vary minute to minute), how much physical space can be inserted between test-takers — all of these are issues specific to online assessments, and they all will affect the results of the assessment.

So my beef comes down to this about PARCC and the Common Core: Hundreds of millions of dollars have been spent rewording standards and developing a new assessment that won’t actually help improve education. Here’s what would help teachers teach kids:

1. Equalize funding and increase it.

2. Lower class sizes, kindergarten through 12th grade, significantly — maximum fifteen per class, except for subjects that benefit from larger classes, like music courses.

3. Treat teachers better. Stop gunning for their jobs. Stop dismantling their unions. Stop driving them from the profession with onerous evaluation tools, low pay and benefits, underfunded pensions, too many students to teach to do their job well, and ridiculous mandates that make it harder to educate kids. Just stop it.

But these common sense suggestions will never fly because no one will make any money off of them, let alone get filthy rich, and education reform is big business — the test developers, textbook companies, technology companies, and high-priced consultants will make sure the gravy train of “reform” never gets derailed. In fact, the more they can make it look like kids are underachieving and teachers are underperforming, the more secure and more lucrative their scam is.

Thus PARCC and Common Core … let the good times roll.