12 Winters Blog

Critical thinking, conservatism and a personal conundrum

Posted in October 2017, Uncategorized by Ted Morrissey on October 22, 2017

I have a confession: I’ve been feeling anxious since the start of the school year. I haven’t slept especially well. I’ve had digestive issues. I developed a case of shingles. I’ve had trouble concentrating, and I’ve experienced some uncharacteristic lethargy (which I attribute to a mild bout of depression). Here’s the problem, I think: I’m a schoolteacher and I’m being evaluated this year. I don’t blame the Danielson Framework directly, but I do blame it for contributing to my anxiety.

This is my thirty-fourth year in the classroom, teaching mainly senior English classes (meanwhile I’ve also spent about twenty years teaching undergraduate and graduate courses in literature and writing — I have an MA and a Ph.D. in my subject area). Pre-Danielson, evaluations were kind of a nuisance, but all in all a positive experience. They would end with me sitting in my evaluator’s office discussing teaching strategies, underscoring things that seemed to work well and identifying an area or two where some tinkering may be in order. For twenty-plus years, I’d leave the office with an “excellent” rating, some food for thought (largely generated by my own self-reflection), and a sense of well-being because I was perceived as a valuable part of the school community. In short, I believed my evaluator was glad I was in the classroom.

Then came Charlotte Danielson and the Danielson Framework. Profit-driven school reformers and the legislators in their pockets embraced the Framework because of its proclivity to find fault with teachers. It was originally designed, after all, to be used with first-year teachers, so of course finding fault (that is, finding areas that need improvement) was one of its chief goals. It is rife with hairsplitting adjectives, adverbs and verbs that invite evaluators to select between categories (“distinguished” versus “proficient” for instance) that are separated by a razor’s edge. For example, right off the bat, in Domain One, “Demonstrating Knowledge of Content and Pedagogy,” evaluators are tasked with differentiating between a teacher who “displays extensive knowledge of the important concepts in the discipline and how these relate to one another and to other disciplines” (Distinguished) and a teacher who “displays solid knowledge of the important concepts in the discipline and how these relate to one another” (merely Proficient).

How does one quantitatively distinguish “extensive” from “solid” knowledge? How many whats are in an extensive understanding, and how many whats are in a solid understanding? Both teachers must show how these bits of knowledge relate to one another, but the distinguished teacher also shows how these bits relate to other disciplines. As an English teacher, I’m not sure what is meant by “other disciplines.” Under the umbrella of English are slightly smaller umbrella areas like literature, composition, and linguistics; and under each of these are still smaller umbrellas: American literature versus British literature versus world literature; then we have Colonial and Native American literature, nineteenth-century literature, twentieth-century literature, and so on. Or does “other disciplines” strictly mean, from an English perspective, things like history, biology, psychology, and physical education? If one discusses character motivation in a piece of literature, is that not touching on psychology? If one discusses setting, could that not touch on history?

Then there’s the whole issue of explicit versus implicit display. How obvious must the relationship be in order to count as a connection? And wait a second — isn’t the whole idea for the students to make the connections themselves? Is the teacher who draws the connections explicitly doing the intellectual work for the students? Isn’t it better to lead the students to the point where they can make the connections on their own? How exactly will the evaluator be able to determine who among a hundred different souls made (or will someday make) connections thanks to a particular teacher’s efforts? Perhaps, then, the teacher who isn’t demonstrating connections is the more distinguished teacher. Maybe Sister Charlotte has it all bass ackwards. Right? (After all, she has extremely limited classroom experience.)

Let’s toss into the chaotic mix the fact that the evaluators tasked with making these Solomon-like decisions almost certainly, statistically speaking, aren’t qualified to teach the subject themselves (they were, say, a driver’s education teacher and now they’re evaluating an Advanced Placement chemistry teacher, or they were a choir teacher and now they’re evaluating an art teacher). Also, even with pop-in visits to the teacher’s classroom, they still observe teachers for less than 1% of the time those teachers spend with students over the course of the school year.

Wait, you argue, teachers being evaluated under Danielson also have to provide documentation, that is, “artifacts” that demonstrate their abilities in the various Domains. When Danielson first came along six years ago (as far as my world is concerned), teachers would overwhelm their evaluators with hundreds of pages of artifacts, which still only told a tiny sliver of their story in the classroom. Understandably, evaluators weren’t able to wade through all the paperwork — to say nothing of their ability to understand it in any meaningful sort of way. (I certainly couldn’t look at a six-inch stack of handouts from the chemistry teacher or physics teacher or French teacher or P.E. teacher and be able to determine if it all meant they were Distinguished versus Proficient [versus Basic versus Unsatisfactory].)

After that initial round of Danielson-style evaluations, a lot of districts went to a slimmed-down approach whereby teachers would only have to give their evaluator the bare minimum of artifactual evidence of their teaching ability. Great. But, hold on, isn’t the idea of providing artifacts designed to compensate for the copious gaps left by evaluators observing less than 1% of the time teachers spend with students? The ridiculously thick binders of documentation only told a tiny portion of the teacher’s professional story, and now the big improvement is that teachers are allowed to provide a tiny portion of the tiny portion. Granted, the amount of material is much more manageable, but does it give a greater or lesser insight into the teacher’s professional skills? Yes, reading only the first few pages of James Joyce’s Ulysses is a more manageable task than reading the whole 650-page novel of dense, experimental prose — but should one then be in a position of authoritatively passing judgment on the book? (Side note: Censors used to think so.)

Thousands of teachers find themselves in the anxiety-producing situation of having their livelihoods depend on the assessment of an evaluator who isn’t qualified in their subject area and who has significantly less classroom experience, who’s using an instrument designed by someone with even fewer qualifications and even less experience, mandated by legislators who have no qualifications and no experience. It’s a wonder any of us can eat or sleep at all.

Two years ago, I found myself in fairly serious trouble with my superior. The incident happened just after my evaluation was completed. I received an “excellent” (our version of “distinguished”), but it was no sure thing; and with the shadow of that incident still stretching its gloom over my teaching life, I have no idea what to expect this time around. It’s a complicated story and it’d probably be unwise to get into the details, but I believe it all boils down to the fact that my overarching goal as a teacher has always been to coax my students into being critical thinkers. Every day, sometimes by microscopic degrees, I’m trying to prod my students toward becoming critical thinkers, or better and better critical thinkers.

To think critically one must at one’s core question literally everything. Nothing can be sacred; that is, no subject, no person, no movement — nothing — can be beyond critical analysis. With the rise of the Alt-right and Trumpism, we have seen the most extreme conservative elements in our society emboldened. The media cover the most eye-catching examples: dramatic rallies, violent attacks, policy shifts at the state and federal levels, and so on.

But the rise of extreme conservatism filters into our everyday lives, and conservatism is antithetical to critical thinking. For conservatives, there are sacred subjects: God and guns, for example, the concept of American exceptionalism, and, perhaps most sacred of all, conservatism itself. Throughout my career I have encouraged my students to question everything — all ideas, liberal and conservative, all people and their most heartfelt opinions, including me and mine. Extreme conservatives don’t want that sort of academic environment for their children. They don’t want their children critically analyzing conservatives’ sacred subjects — and teachers who encourage such analyses are considered antagonists.

I’m sure extreme conservatives in our communities have always felt this way, but from my perspective it’s only been since the rise of the Alt-right and Trumpism that they’ve been emboldened to attack individual teachers whom they see as part of some ill-defined liberal conspiracy to indoctrinate their children with unwholesome, impure and downright dangerous thoughts. My methods, however, are not designed to imprint certain kinds of thoughts on students’ brains, liberal or otherwise, but rather to enable students to develop their own ideas based on legitimately generated data — thoughts which may run contrary to my own way of viewing the world, and that’s just fine with me. Nothing brightens my day more than a student showing me a new way of seeing things.

I am not someone who seeks out and enjoys confrontation — most teachers, I would say, are not. But I find myself in a professional and personal conundrum: Do I remain true to my overarching mission of fashioning my students into lean, mean critical-thinking machines, or do I avoid conflict by kowtowing and treating certain topics as untouchable because conservatives consider them sacred? Once those walls of untouchability are erected, their confinements spread like a cancer through the anatomy of critical thinking. In fact, critical thinking ceases to exist.

What is more, teachers in the humanities, and especially teachers of older students in the humanities, are unfairly at risk of coming under attack by conservatives. Teachers in the sciences and vocational areas are not duty-bound to engage with controversial subjects. Conservatives don’t concern themselves with the way geometry theorems are taught, or which method of accounting the business teacher advocates, or the proper way to apply lacquer to a freshly constructed cabinet.

Life, on the other hand, is different for English teachers. How does one teach To Kill a Mockingbird without entering into discussions of racism? Or Heart of Darkness and considerations of colonialism? Macbeth and ill-gotten political power run amok? How does one teach logical fallacies and propaganda techniques, and avoid contemporary examples related to “fake news” and “alternative facts”?

My seniors graduate to schools like Cornell, Notre Dame, DePaul, Northwestern, and the University of Illinois, to name a few. They are considering careers in medicine, the law, engineering, psychology. As undergraduates and graduate students they will be in direct competition with peers who have come out of academic environments immune to conservative meddling. My students’ critical-thinking skills must be as finely tuned as I am capable of making them, but in recent years I have been hamstrung by the knowledge that bringing up the wrong topic in class or allowing students to pursue certain lines of inquiry could jeopardize my career. For the material we’re studying, I think of apt comparisons to current events but hold my tongue. Once, a lively and thought-provoking discussion would have ensued; now we quietly move on to the next page of text.

Compounding the problem is that the complexly nebulous Danielson Framework can be manipulated to find teachers to be whatever evaluators want them to be: from rock star to ne’er-do-well — it all depends on what boxes an evaluator feels like checking: Does a teacher demonstrate solid or extensive knowledge of concepts? To be clear, it’s not simply a matter of ego. What difference does it make, one might ask, whether a teacher is judged this versus that according to the Danielson rubric?

Here’s the answer: Republican legislators have been chipping away at tenure and seniority laws and at teacher unions, and they’ve been successful in Illinois and elsewhere at weakening the webwork of laws to the point where a veteran teacher could be terminated in favor of a less-experienced one if their evaluation shows them to be lacking. It’s all under the pretense of giving school boards the ability to replace old, underachieving teachers with young go-getters. But it could easily be used to replace an expensive teacher with a cheaper one, a trouble-making teacher with a more docile one, a liberally minded teacher with a more conservative one — or a gay teacher with a straight one, a teacher of color with a white one, a female with a male, a Muslim with a Christian, an agnostic with a believer.

Charlotte Danielson herself noted that the biggest problem with her own Framework is the misdirected way evaluators are applying it to their teaching staff. In fact, she recommends that her Framework not be used once a teacher has achieved a particular professional status (tenure perhaps?).

The Danielson Framework, combined with the rise of extreme conservatism, has opened the door to a world where ability, experience, dedication and old-fashioned hard work can be rendered moot by a series of checks on a computer screen. This new reality is what’s been weighing on me since the start of the school year, and I know I’m not alone. My posts about the shortcomings of the Danielson Framework and how it is being used in education have attracted around 200,000 readers and hundreds of comments (practically all of them in support of my views) — some posted to my blog, but others sent to me via email or Messenger, or spoken in person, because many, many teachers want to keep their opinions out of public view. They are afraid of reprisals.

This has become the world in which we teach.

I have reached the end of this post. My finger, in essence, hovers over the “Publish” button. My anxiety spikes. My gut takes a turn or two. Will posting this help anyone or anything, or is it merely adding another nail to my coffin?


The paradox of uniformity

Posted in April 2017, Uncategorized by Ted Morrissey on April 13, 2017

Nearly a year ago I posted “Danielson Framework criticized by Charlotte Danielson” and it has generated far more interest than I would have anticipated. As of this writing, it has been viewed more than 130,000 times. It has been shared across various platforms of social media, and cited in other people’s blogs. The post has generated copious comments, and I’ve received dozens of emails from educators — mostly from North America, but from beyond as well. Some educators have contacted me for advice (I have little to offer), some merely to share their frustration (I can relate), others to thank me for speaking up (the wisdom of which remains dubious). To be fair, not everyone has been enthusiastic. There have been comments from administrators who feel that Charlotte Danielson (and I) threw them under the school bus. Many administrators are not devotees of the Framework either, and they are doing their best with a legislatively mandated instrument.

Before this much-read post, I’d been commenting on Danielson and related issues for a while, and those posts have received a fair amount of attention also. Literally every day since I posted about Danielson criticizing the use of her own Framework, the article has been read by at least a few people. The hits slowed down over the summer months, understandably; then picked up again in the fall — no doubt when teachers were confronted with the fact that it was their evaluation year (generally every other year for tenured teachers). Once people were in the throes of the school year, hits declined. However, beginning in February, the number of readers spiked again and has remained consistently high for weeks. Teachers, I suspect, are getting back their evaluations, and are Googling for information and solace after receiving their infuriating and disheartening Danielson-based critiques. (One teacher wrote to me and said that he was graded down because he didn’t produce documentation that his colleagues think of him as an expert in the field. He didn’t know what that documentation would even look like — testimonials solicited in the work room? — and neither did I.)

It can tear the guts out of you and slacken your sails right when you need that energy and enthusiasm to finish the school year strong: get through student testing (e.g., PARCC), stroke for home on myriad learning outcomes, prepare students for advancing to the next year, and document, document, document — all while kids grow squirrelier by the minute with the advance of spring, warmer weather, and the large looming of year’s end.

But this post isn’t about any of that, at least not directly. The Danielson Framework and its unique failures are really part of a much larger issue in education, from pre-K to graduate school: something I’ll call the drive for uniformity. I blame Business’s infiltration and parasitic takeover of Education. It’s difficult to say exactly when the parasite broke the skin and began its pernicious spread. I’ve been teaching (gulp) since 1984 (yes, English teachers were goofy with glee at the prospect of teaching Nineteen Eighty-Four in 1984, just as I was in 2001 to teach 2001 — we’re weird like that), and even then, in ’84, I was given three curriculum guides with precisely 180 pages in each; I was teaching three different courses, and each guide had a page/lesson for each day of the school year. Everyone who was teaching a particular course was expected to be doing the same thing (teaching the same concept, handing out the same handout, proctoring the same test) on the same day.

Not every school system was quite so prescriptive. I moved to another district, and, thankfully, its curriculum was much less regimented. Nevertheless, it was at that school that I vividly recall sitting in a faculty meeting and the superintendent uttering the precept “We shall do more with less.” The School Board, with his encouragement, was simultaneously cutting staff while increasing curricular requirements. English teachers, for example, were going to be required to assign twelve essays per semester (with the understanding that these would be thoroughly read, commented on, and graded in a timely fashion). At the time I had around 150 students per day. With the cuts to staff, I eventually had nearly 200 students per day. This was the mid-1990s.

The point is, that phrase — We shall do more with less — comes right out of the business world. It’s rooted in the idea that more isn’t being achieved (greater productivity, greater profits) because of superfluous workers on the factory floor. We need to cut the slackers and force everyone else to work harder, faster — and when they drop dead from exhaustion, no problem: there are all those unemployed workers who will be chomping at the bit to get their old job back (with less pay and more expectations). CEOs in the business world claimed that schools were not doing their jobs. The employees they were hiring, they said, couldn’t do math, couldn’t write, had aversions to hard work and good attendance. It must be the fault of lazy teachers, the unproductive slackers on the factory floor so to speak.

Unions stood in the way of the mass clearing of house, so the war on unions was initiated in earnest. Conservative politicians, allied with business leaders, have been chipping away at unions (education and otherwise) wherever they can, under the euphemism of “Right to Work,” implying that unions are preventing good workers from working, and securing in their places lazy ne’er-do-wells. The strategy has been effective. Little by little, state by state, protections like tenure and seniority have been removed or severely weakened. Mandates have increased, while funds have been decreased or (as in Illinois) outright withheld, starving public schools to death. The frustrations of stagnant wages, depleted pensions, and weakened job security have been compounded by unfair evaluation instruments like the Danielson Framework.

A telltale sign of business’s influence is the drive for uniformity. One of the selling points of the Danielson Framework was that it can be applied to all teachers, pre-K through 12th grade, and even to professionals outside the classroom, like librarians and nurses. Its one-size-fits-all approach sounds efficient and therefore appeals to legislators. Danielson is just one example, however. We see it everywhere. Teaching consultants offer a magic bullet that will guarantee all students will learn, no matter the subject, grade level, or ability. Because, of course, teaching kindergarteners shapes is the same as teaching high school students calculus. Special education and physical education … practically the same thing (they sound alike, after all). Art and band … peas in a pod (I mean, playing music is a fine art, isn’t it? Duh.).

And the drive for uniformity has not been limited to K-12 education. Universities have been infected, too. All first-year writing students must have the same experience (or so it seems): write the same essays, read the same chapters in the same textbook, have their work evaluated according to the same rubric, etc., etc. Even syllabi have to be uniform: they have to contain the same elements, in the same order, reproduce the same university policies, even across departments. The syllabus for a university course is oftentimes dozens of pages long, and only a very small part of it is devoted to informing the students what they need to do from week to week. The rest is for accreditation purposes, apparently. And the uniformity in requirements and approaches helps to generate data (which outcomes are being achieved, which are not, that kind of thing).

It all looks quite scientific. You can generate spreadsheets and bar graphs, showing where students are on this outcome versus that outcome; how this group of students compares to last year’s group; make predictions; justify (hopefully) expenditures. It’s the equivalent of the much-publicized K-12 zeal for standardized testing, which gives birth to mountains of data — just about all of which is ignored once produced, which is just as well because it’s all but meaningless. People ignore the data because they’re too busy teaching just about every minute of every day to sift through the voluminous numbers; and the numbers are all but meaningless because they only look scientific, when in fact they aren’t scientific at all. (I’ve written about this, too, in my post “The fallacy of testing in education.”)

But this post isn’t about any of those things either.

It’s about the irony of uniformity, or the paradox of it, as I call it in my title. Concurrent with the business-based drive for uniformity has been the alleged drive for higher standards: more critical thinking, increased expectations, a faster track to skill achievement. Yet uniformity is the antithesis of higher standards. We’re supposed to have more rigor in our curricula, but coddle our charges in every other way.

We can’t expect students to deal with teachers who have varying classroom methods. We can’t expect them to adjust to different ways of grading. We can’t expect them to navigate differences in syllabi construction, teacher webpage design, or even the use of their classroom’s whiteboard. We can’t expect students to understand synonyms in directions, thus teachers must confine themselves to a limited collection of verbs and nouns when writing assignments and tests (for instance, we must all say “analyze” in lieu of “examine” or “consider” — all those different terms confuse the poor darlings). This is a true story: A consultant who came to speak to us about the increased rigor of the PARCC exam also advised us to stop telling our students to “check the box” on a test, because it’s actually a “square” and some students may be confused by looking for the three-dimensional “box” on the page. What?

But are these not real-world critical-thinking situations? Asking students to adapt to one teacher’s methodology versus another? Requiring students to follow the logic of an assignment written in this style versus that (or that … or that)? Having students adjust their schoolwork schedules to take into account different rhythms of due dates from teacher to teacher?

How often in our post-education lives are we guaranteed uniformity? There is much talk about getting students “career-ready” (another business world contribution to education), yet in our professional careers how much uniformity is there? If we’re dealing with various customers or clients, are they clones? Or are we expected to adjust to their personalities, their needs, their pocketbooks? For that matter, how uniform are our superiors? Perhaps we’re dealing with several managers or owners or execs. I’ll bet they’d love to hear how we prefer the way someone else in the organization does such and such, and wouldn’t they please adjust their approach to fit our preferences? That would no doubt turn into a lovely day at work.

I’ve been teaching for 33 years, and over that time I’ve worked under, let’s see, seven building principals (not to mention different superintendents and other administrators). Not once has it seemed like a good idea to let my current principal know how one of his predecessors handled a given situation in the spirit of encouraging his further reflection on the matter. Clearly I am the one who must adapt to the new style, the new approach, the new philosophy.

These are just a few examples of course. How much non-uniformity do we deal with every day, professionally and personally? An infinite amount is the correct answer. So, how precisely are we better preparing our students for life after formal education by making sure our delivery systems are consistently cookie-cutter? We aren’t is the correct answer. (Be sure to check the corresponding squares.)

Education has made the mistake of allowing Business to infect it to the core (to the Common Core, as a matter of fact). Now Business has taken over the White House, and it’s taken over bigly.

But this blog post isn’t about that.

Danielson Framework criticized by Charlotte Danielson

Posted in April 2016, Uncategorized by Ted Morrissey on April 27, 2016

I’ve been writing about the Danielson Framework for Teacher Evaluation for a couple of years, and in fact my “Fatal Flaws of the Danielson Framework” has been my most read and most commented on post, with over 5,000 hits to date. I’ve also been outspoken about how administrators have been misusing the Framework, resulting in demoralized teachers and unimproved (if not diminished) performance in the classroom. (See in particular “Principals unwitting soldiers in Campbell Brown’s army” and “Lowered teacher evaluations require special training.”) At present, teachers are preparing — at great time and expense — to embark on the final leg of the revamped teacher evaluation method with the addition of student performance into the mix (see ISBE’s “Implementing the Student Growth Component in Teacher and Principal Evaluation”). I’ve also written about this wrongheaded development: “The fallacy of testing in education.”

Imagine my surprise when I discovered an unlikely ally in my criticism of Charlotte Danielson’s much lauded approach: Charlotte Danielson herself. The founder of the Danielson Framework published an article in Education Week (April 18 online) that called for the “Rethinking of Teacher Evaluation,” and I found myself agreeing with almost all of it — or, more accurately and more egocentrically, I found Charlotte Danielson agreeing with me, for she is the one who has changed her tune.

My sense is that Ms. Danielson is reacting to widespread dissatisfaction among teachers and principals with the evaluation process that has been put in place which is based on her Danielson Framework. Her article appeared concurrently with a report from The Network for Public Education based on a survey of nearly 3,000 educators in 48 states which is highly critical of changes in teacher evaluation and cites said changes as a primary reason for teachers exiting the profession in droves and for young people choosing not to go into education in the first place. For example, the report states, “Evaluations based on frameworks and rubrics, such as those created by Danielson and Marzano, have resulted in wasting far too much time. This is damaging the very work that evaluation is supposed to improve . . .” (p. 2).

Ms. Danielson does not, however, place the blame on her Framework, at least not directly. She does state what practically all experienced teachers have known all along when she writes, “I’m deeply troubled by the transformation of teaching from a complex profession requiring nuanced judgment to the performance of certain behaviors that can be ticked off a checklist.” Her opinion is a change from earlier comments, when she said that good teaching could be easily defined and identified. In a 2012 interview, Ms. Danielson said that her assessment techniques are “not like rocket science,” whereas “[t]eaching is rocket science. Teaching is really hard work. But doing that [describing what teaching “looks like in words”] isn’t that big a deal. Honestly, it’s not. But nobody had done it.”

Instead of her Framework, then, Ms. Danielson places the lion’s share of the blame with state legislators who oversimplified her techniques via their adoptions, and — especially — with administrators who are not capable of using the Framework as it was intended. She writes, “[F]ew jurisdictions require their evaluators to actually demonstrate skill in making accurate judgments. But since evaluators must assign a score, teaching is distilled to numbers, ratings, and rankings, conveying a reductive nature to educators’ worth and undermining their overall confidence in the system.”

Amen, Sister Charlotte! Testify, girlfriend!


Ms. Danielson’s critique of administrators is a valid one, especially considering that evaluators were programmed, during their Danielson training, to view virtually every teacher as less than excellent, which put even the best-intentioned evaluators in a nitpicking mode, looking for any reason, no matter how immaterial to effective teaching, to find a teacher lacking and score them “proficient” instead of “excellent.” In her criticism of administrators Ms. Danielson has touched upon what is, in fact, a major shortcoming of our education system: The road to becoming an administrator is not an especially rigorous one — especially when it comes to academic rigor — and once someone has achieved administrative status, there tends to be no apparatus in place to evaluate their performance, including (as Ms. Danielson points out) their performance in evaluating their teachers.

Provided that administrators can keep their immediate superior (if any) content, as well as the seven members of the school board (who are almost never educators themselves), they can appear to be effective. That is, as long as administrators do not violate the terms of the contract, and as long as they are not engaging in some form of obvious harassment, teachers have no way of lodging a complaint or even offering constructive criticism. Therefore, if administrators are using the Danielson Framework as a way of punishing teachers — giving them undeservedly reduced evaluations and thus exposing them to the harms that can befall them, including losing their job regardless of seniority —  there is no way for teachers to protect themselves. They cannot appeal an evaluation. They can write a letter to be placed alongside the evaluation explaining why the evaluation is unfair or invalid, but their complaint does not trigger a review of the evaluation. The evaluator’s word is final.

Danielson quote 2

According to the law of averages, not all administrators are excellent; and not all administrators use the evaluation instrument (Danielson or otherwise) excellently. Some administrators are average; some are poor. Some use the evaluation instrument in a mediocre way; some use it poorly. Hence you can quite easily have an entire staff of teachers whose value to the profession is completely distorted by a principal who is, to put it bluntly, bad at evaluating. And there’s not a thing anyone can do about it.

Another crucial point that Charlotte Danielson makes in her Education Week article is that experienced teachers should not be evaluated via the same method as teachers new to the field: “An evaluation policy must be differentiated according to whether teachers are new to the profession or the district, or teach under a continuing contract. . . . Once teachers acquire this status [i.e. tenure], they are full members of the professional community, and their principal professional work consists of ongoing professional learning.” In other words, experienced teachers, with advanced degrees in their content area and a long list of professional accomplishments, shouldn’t be subjected to the same evaluation procedure as someone who is only beginning their career and has much to learn.

In fact, using the same evaluation procedure creates a very odd dynamic: You oftentimes have an administrator who has had only a limited amount of classroom experience (frequently fewer than ten years, and perhaps only two or three) and whose only advanced degree is the one that allows them to be an administrator (whereby they mainly study things like school law and school finance), sitting in judgment of a teacher who has spent twenty or thirty years honing their teaching skills and who has an advanced degree in their subject area. What can the evaluator possibly say in their critique that is meaningful and appropriate? It is commonplace to find this sort of situation: A principal who was a physical education or driver’s education teacher, for perhaps five years, is now sitting in an Advanced Placement Chemistry classroom evaluating a twenty-year veteran with a master’s degree or perhaps even a Ph.D. in chemistry. The principal feels compelled to find something critical to say, so all they can do is nitpick. They can’t speak to anything of substance.

Danielson quote 3

What merit can there be in a system that makes evaluators omnipotent judges of teachers in subject areas that the evaluators themselves literally are not qualified to teach? It isn’t that veteran teachers don’t have anything to learn. Far from it. Teaching is a highly dynamic, highly challenging occupation; and the successful teacher is constantly learning, growing, self-reflecting, and networking with professional peers. The successful principal makes space for the teacher to teach and for the student to learn, and they protect that space from encroachment by anyone whose design is to impede that critical exchange.

Ms. Danielson offers this alternative to the current approach to evaluation: “An essential step in the system should be the movement from probationary to continuing status. This is the most important contribution of evaluation to the quality of teaching. Beyond that, the emphasis should be on professional learning, within a culture of trust and inquiry. . . . Experienced teachers in good standing should be eligible to apply for teacher-leadership positions, such as mentor, instructional coach, or team leader.”

Ironically, what Ms. Danielson is advocating is a return to evaluation as most teachers knew it prior to adoption of the Danielson Framework.

(Grammar alert: I have opted to use the gender-neutral pronouns they and their etc. even when they don’t agree in number with their antecedents.)


The fallacy of testing in education

Posted in October 2015 by Ted Morrissey on October 18, 2015

For the last several years education reformers have been preaching the religion of testing as the linchpin of improving education (meanwhile offering no meaningful evidence that education is failing in the first place). Last year, the PARCC test (Partnership for Assessment of Readiness for College and Careers) made its maiden voyage in Illinois. Now teachers and school districts are scrambling to implement phase II of the overhaul of the teacher evaluation system begun two years before by incorporating student testing results into the assessment of teachers’ effectiveness (see the Guidebook on Student Learning Objectives for Type III Assessments). Essentially, school districts have to develop tests, kindergarten through twelfth grade, that will provide data which will be used as a significant part of a teacher’s evaluation (possibly constituting up to 50 percent of the overall rating).

To the public at large — that is, to non-educators — this emphasis on results may seem reasonable. Teachers are paid to teach kids, so what’s wrong with seeing if taxpayers are getting their money’s worth by administering a series of tests at every grade level? Moreover, if these tests reveal that a teacher isn’t teaching effectively, then what’s wrong with using recently weakened tenure and seniority laws to remove “bad teachers” from the classroom?

Again, on the surface, it all sounds reasonable.

But here’s the rub: The data generated by PARCC — and every other assessment — is all but pointless. To begin with, the public at large makes certain tacit assumptions: (1) The tests are valid assessments of the skills and knowledge they claim to measure; (2) the testing circumstances are ideal; and (3) students always take the tests seriously and try to do their best.

assessment blog quote 1

But none of these assumptions are true most of the time — and I would go so far as to say that all of them being true for every student, for every test, practically never happens. In other words, when an assessment is given, either the assessment itself is invalid, and/or the testing circumstances are less than ideal, and/or nothing is at stake for students, so they don’t try their best (in fact, it’s not unusual for students to deliberately sabotage their results).

For simplicity’s sake, let’s look at the PARCC test (primarily) in terms of these three assumptions; and let’s restrict our discussion to validity (mainly). There have been numerous critiques of the test itself that point out its many flaws (see, for example here; or here; or here). But let’s just assume PARCC is beautifully designed and actually measures the things it claims to measure. There are still major problems with its data’s validity. Chief among the problems is the fact that there are too many factors beyond a district’s and — especially — a classroom teacher’s control to render the data meaningful.

For the results of a test — any test — to be meaningful, the test’s administrator must be able to control the testing circumstances to eliminate (or at least greatly reduce) factors which could influence and hence skew the results. Think about when you need to have your blood or urine tested — to check things like blood sugar or cholesterol levels — and you’re required to fast for several hours beforehand to help ensure accurate results. Even a cup of tea or a glass of orange juice could throw off the process.

That’s an example that most people can relate to. If you’ve had any experience with scientific testing, you know what lengths have to be gone to in hopes of garnering unsullied results, including establishing a control group — that is, a group that isn’t subjected to whatever is being studied, to see how it fares in comparison to the group receiving whatever is being studied. In drug trials, for instance, one group will receive the drug being tested, while the control group receives a placebo.

Educational tests rarely have control groups — a group of children from whom instruction or a type of instruction is withheld to see how they do compared to a group that’s received the instructional practices intended to improve their knowledge and skills. But the lack of a control group is only the beginning of testing’s problems. School is a wild and woolly place filled with human beings who have complicated lives, and countless needs and desires. Stuff happens every day, all the time, that affects learning. Class size affects learning, class make-up (who’s in the class) affects learning, the caprices of technology affect learning, the physical health of the student affects learning, the mental health of the student affects learning, the health of the teacher affects learning (and in upper grades, each child has several teachers), the health and circumstances of the student’s parents and siblings affect learning, weather affects learning (think “snow days” and natural disasters); sports affects learning (athletes can miss a lot of school, and try teaching when the school’s football or basketball team is advancing toward the state championship); ____________ affects learning (feel free to fill in the blank because this is only a very partial list).

assessment blog quote 2

And let me say what no one ever seems to want to say: Some kids are just plain brighter than other kids. We would never assume a child whose DNA renders them five-foot-two could be taught to play in the NBA; or one whose DNA makes them six-foot-five and 300 pounds could learn to jockey a horse to the Triple Crown. Those statements are, well, no-brainers. Yet society seems to believe that every child can be taught to write a beautifully crafted research paper, or solve calculus problems, or comprehend the principles of physics, or grasp the metaphors of Shakespeare. And if a child can’t, then it must be the lazy teacher’s fault.

What is more, let’s look at that previous sentence: the lazy teacher’s fault. Therein lies another problem with the reformers’ argument for reform. The idea is that if a student underachieves on an exam, it must be the fault of the one teacher who was teaching that subject matter most recently (i.e., that school year). But learning is a synergistic process. Every teacher who has taught that child previously has contributed to their learning, as have their parents, presumably, and the other people in their lives, and the media, and on and on. But let’s just stay within the framework of school. What if a teacher receives a crop of students who’d been taught the previous year by a first-year teacher (or a student teacher, or a substitute teacher who was standing in for someone on maternity or extended-illness leave), versus a crop of students who were taught by a master teacher with an advanced degree in their subject area?

Surely — if we accept that teaching experience and education contribute to teacher effectiveness — we would expect the students taught by a master teacher to have a leg up on the students who happened to get a newer, less seasoned, less educated teacher. So, from the teacher’s perspective, students are entering their class more or less adept in the subject depending on the teacher(s) they’ve had before. When I taught in southern Illinois, I was in a high school that received students from thirteen separate, curricularly disconnected districts, some small and rural, some larger and more urban — so the freshman teachers, especially, had an extremely diverse group, in terms of past educational experiences, on their hands.

For several years I’ve been an adjunct lecturer at University of Illinois Springfield, teaching in the first-year writing program. UIS attracts students from all over the state, including from places like Chicago and Peoria, in addition to students from nearby rural schools, and everything in between (plus a significant number of international students, especially from India and China). In the first class session I have students write a little about themselves — just answer a few questions on an index card. Leafing through those cards I can quickly get a sense of the quality of their educational backgrounds. Some students are coming from schools with smaller classes and more rigorous writing instruction, some from schools with larger classes and perhaps no writing instruction. The differences are obvious. Yet the expectation is that I will guide them all to be competent college-level writers by the end of the semester.

The point here, of course, is that when one administers a test, the results can provide a snapshot of the student’s abilities — but it’s a snapshot of abilities that were shaped by countless and largely uncontrollable factors. How, then, does it make sense (or, how, then, is it fair) to hang the results around an individual teacher’s neck — whether like an Olympic medal or like an albatross?

As I mentioned earlier, validity is only one issue. Others include the circumstances of the test, and the student’s motivation to do well (or their motivation to do poorly, which is sometimes the case). I don’t want to turn this into the War and Peace of blog posts, but I think one can see how the setting of the exam (the time of day, the physical space, the comfort level of the room, the noise around the test-taker, the performance of the technology [if it’s a computer-based exam like the PARCC is supposed to be]) can impact the results. Then toss in the fact that most of the many exams kids are (now) subjected to have no bearing on their lives — and you have a recipe for data that has little to do with how effectively students have been taught.

So, are all assessments completely worthless? Of course not — but their results have to be examined within the complex context in which they were produced. I give my students assessments all the time (papers, projects, tests, quizzes), but I know how I’ve taught them, and how the assessment was intended to work, and what the circumstances were during the assessment, and to some degree what’s been going on in the lives of the test-takers. I can look at their results within this web of complexities, and draw some working hypotheses about what’s going on in their brains — then adjust my teaching accordingly, from day to day, or semester to semester, or year to year. Some adjustments seem to work fairly well for most students, some not — but everything is within a context. I know to take some results seriously, and I know to disregard some altogether.

assessment blog quote 3

Mass testing doesn’t take these contexts into account. Even tests like the ACT and SAT, which have been administered for decades, are considered only one piece of the whole picture when colleges evaluate a student for possible acceptance. Other factors are weighed too, like GPA, class rank, teacher recommendations, portfolios, interviews, and so on.

What does all this mean? One of the things it means is that teachers and administrators are frustrated with having to spend more and more time testing, and more and more time prepping their students for the tests — and less and less time actually teaching. It’s no exaggeration to say that several weeks per year, depending on the grade level and an individual school’s zeal for results, are devoted to assessment.

The goal of assessment is purported to be to improve education, but the true goals are to make school reform big business for exploitative companies like Pearson, and for the consultants who latch onto the movement remora-like, for example, Charlotte Danielson and the Danielson Group; and to implement the self-fulfilling prophecy of school and teacher failure.

(Note that I have sacrificed grammatical correctness in favor of non-gendered pronouns.)

Destroying Public Education for Dummies

Posted in April 2015, Uncategorized by Ted Morrissey on March 28, 2015

“I’m as mad as hell, and I’m not gonna take this anymore!”

It’s the iconic line from the 1976 film Network in which news anchor Howard Beale (Peter Finch) is pushed beyond the breaking point and implores his viewers to get mad, go to their windows, open them and shout: “I’m as mad as hell, and I’m not gonna take this anymore!” — and people do . . . by the thousands.

This is essentially the message of Williamsville (Illinois) school superintendent David Root in the District Dispatch he sent out yesterday in which he writes: “So, want to destroy public education and prevent people from wanting to teach? Not a problem. It’s actually pretty simple.”

Superintendent David Root

 

Root uses the metaphor of the how-to books “for Dummies” to say that the dummies in charge of state government — recently elected governor Bruce Rauner and the General Assembly as a whole — have managed, without breaking a sweat, to destroy public education and the morale of educators by slashing funds, mandating a litany of pointless tests, and demonizing and demoralizing teachers. One of the points I especially appreciate alludes to the Danielson Framework for Teacher Evaluation and how its adoption by the state is part of a scheme to make teachers in Illinois look ineffective (and thus, I say, pave the way for the lucrative privatization of schools) — an argument I’ve been making for months, especially in my August 17, 2014, post “Principals unwitting soldiers in Campbell Brown’s army.”

Please read Superintendent Root’s superb jeremiad in its entirety here. (Or you can access it via the district’s webpage here.)

Some people were surprised at Root’s vitriol, even though it’s been building for some time, and suggested that perhaps Mr. Root should have held off sending it out until he’d calmed down a bit. But I unequivocally disagree: I say we are long past the point of civility. We need more — all! — administrators, teachers, school board members, parents and students to raise their windows and shout: “I’m as mad as hell, and I’m not gonna take this anymore!”

And we shouldn’t stop our raging against the “education reform” machine until public schools and public educators receive the support and the respect they deserve. Because, ultimately, our students deserve no less.

Bravo, superintendent Root! I too am as mad as hell!

Principals unwitting soldiers in Campbell Brown’s army

Posted in August 2014, Uncategorized by Ted Morrissey on August 17, 2014

(This is a long post — and for that, my apologies. But it’s important, and I encourage you to take your time and read it thoroughly.)

Because of my interest in the subject (as demonstrated in my blog posts over the past few months), I was invited to participate in a video roundtable via Skype with administrators from several schools about implementing the Danielson Framework for Teacher Evaluation, and I found many of the comments, well, bewildering. Even though it was a select group, I strongly suspect that their attitudes and approaches are representative of administrators not only in Illinois but across the country — as the Danielson Framework has been adopted by numerous states. Before I go any further I must stress that these are all good people who are trying to do their job as they understand it from the State Board of Education, their own local school boards and the public at large. Around the video table were a superintendent of a K-12 district, building principals of elementary, middle, junior high and high schools, and even a K-12 curriculum director, along with three teachers — elementary, junior high and high school (yours truly). I’m going to try to represent their words accurately, but without attribution since their comments were not on the record. In fact, as the two-hour video chat became more heated, several people were speaking with a good deal of candor, and clearly their remarks were not intended for all ears. (By the way, kudos to the tech folks who brought us all together — it worked far better than I would have suspected.)

I considered not writing about the video conference at all, but ultimately felt that I owe it to the profession to which I’ve devoted my adult life (as I enter my 31st year in the classroom), a profession that has been beleaguered in recent years by powerful forces on every side — forces attacking teachers’ integrity, our skills, our associations, our job security, our pensions. We feel we have so many enemies that we don’t even know where to focus our attention.

What is more, most teachers are afraid to speak candidly with their own administrators, and they’re especially afraid to speak out about what’s going on in their buildings. In spite of education reformers blanketing the media with the myth of “powerful teachers unions,” the truth is that associations like the National Education Association and American Federation of Teachers aren’t all that powerful — if they were, would teachers be in the plight we are now? — and individual teachers are very vulnerable. Nontenured teachers can be terminated without cause, and tenured teachers can be legally harassed right out of the profession. In fact, it happens all the time. Moreover, teachers tend to be naturally non-confrontational, which is why many chose teaching in the first place. People with more aggressive personalities tend to seek other kinds of professions. As a result, we’ve been lambs to the slaughter at the hands of reformers, legislators, school board members, administrators … at the hands of anyone who wants to take a whack at us. Rather than fight back, it’s easier to keep quiet and bear it, or to move on.

I’ve been writing about educational issues for the past several months — the unfair termination of young teachers, the inherent flaws of the Danielson Framework, the way the Framework affects teachers, and my issues with PARCC and the Common Core. My posts have been garnering hundreds of hits, and a few online likes, but many, many private, under-the-radar thumbs-ups and thank-yous. Teachers appreciate that someone is speaking out, but they’re not only afraid to speak out themselves, they’re even afraid to be seen agreeing with my point of view. If this isn’t evidence of the precariousness of being a teacher and the overall weakness of “teachers unions,” I don’t know what is.

Public Opinion and the Rarefied Air of Excellence

Much of the round-table discussion had to do with the Framework’s insistence that very, very few teachers rank in the top category (identified as “Excellent” in many districts’ plans). Before Danielson, districts tended to have three-tier evaluation instruments, which were often labeled as “Excellent,” “Satisfactory” and “Unsatisfactory.” Danielson adds a tier between “Excellent” and “Satisfactory”: “Proficient.” Many veteran teachers who had consistently received an excellent rating under the previous model were downgraded to merely proficient under Danielson. This downgrading was predicted as early as two years ago when the new instrument emerged on the educational horizon.  I didn’t want to believe it would be that severe, but it has been this past year, the year of implementation, with very few teachers being rated as excellent. For the record, I was rated as proficient — not as excellent for the first time since I was a nontenured teacher, more than 25 years ago.

In fact, as I wrote in a previous post, the downgrading was so pervasive across the state that the Illinois Administrators Academy offered a special workshop this past summer to train administrators how to deliver the unpleasant news that a veteran teacher has been downgraded to proficient. The Framework was originally developed by Charlotte Danielson in 1996 as a way to evaluate first-year teachers, so it made perfect sense that only a single-digit percentage would be deemed excellent. The Framework has undergone three revisions since then and now purports to be an instrument that can assess every teacher, K-12, in every subject, and even nonclassroom professionals like librarians and school nurses. Nevertheless, the notion that very, very few teachers will rate as excellent has clung tenaciously to the Framework through each revision.

I asked the administrators why that aspect of the Framework remains even though the Framework’s purpose has been expanded dramatically since it was conceived in the mid-1990s. I was told by the K-12 superintendent that the Framework has gained such wide acceptance in large part because of that very aspect. Under previous evaluation instruments, 90% of teachers were judged to be excellent, and the public doesn’t accept that as true. In fact, the public believes (and therefore school boards, too, since they, like the public at large, are almost always noneducators with no classroom experience) that the traditional bell curve should apply to teachers. The bell curve, or Gaussian function, is of course the statistical representation that says the fewest examples of anything, qualitatively speaking, are at either extreme of the gathered samples, and the vast majority (let’s say 80%) fall somewhere in the middle, from below average to above average.
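To see what that bell-curve intuition actually asserts, here is a small, purely illustrative Python sketch. The 68% and 80% figures are standard properties of the normal distribution, not numbers taken from Danielson or the superintendent:

```python
# Illustrative sketch of the "bell curve" (Gaussian) claim:
# most samples cluster in the middle, few sit at the extremes.
from statistics import NormalDist

bell = NormalDist(mu=0, sigma=1)  # the standard normal curve

# Share of a normally distributed population within one standard
# deviation of the mean -- the "vast majority in the middle."
within_one_sigma = bell.cdf(1) - bell.cdf(-1)

# Widening the band to about 1.28 standard deviations captures
# roughly 80%, the figure used loosely in the paragraph above.
within_80_band = bell.cdf(1.2816) - bell.cdf(-1.2816)

print(f"within 1 sigma:    {within_one_sigma:.0%}")   # about 68%
print(f"within 1.28 sigma: {within_80_band:.0%}")     # about 80%
```

The point of the sketch is only that a bell curve, by construction, forces most of any population into the middle categories; whether career teachers are such a population is exactly what this post disputes.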

According to the superintendent, then, the public believes that the bell curve should apply to experienced, career teachers as well — that only a small percentage are truly excellent, and the vast majority fall somewhere in the middle (to use Danielson terms, in the satisfactory to proficient range). First of all, who cares what the uninformed public thinks? In our country we have a fascination with asking pedestrians on the street what they think of global warming, heightened military involvement in the Middle East, and allowing Ebola victims to enter the country. John Oliver of “Last Week Tonight” did a segment on this phenomenon that went viral on social media.

Assuming this is true — that the public believes only a small percentage of teachers are excellent based, unconsciously, on the principle of the bell curve (and I’m willing to believe that it is true) — the belief yet again speaks to the ignorance of the “man on the street.” In this instance, the bell curve is being fallaciously applied. If you take a random sampling of people (let’s say, you go to the mall at Christmas time and throw a net around a random group of shoppers) and task them with teaching some random topic to a random group of students, then, yes, the bell curve is likely to be on target. In that group of shoppers, lo and behold, you netted a couple of professional teachers, so they’re able to teach the material pretty effectively; another much larger group of shoppers who are decently educated and reasonably articulate could do a passable job imparting the information; and a smaller group on the other extreme would really make a botch of it.

But career teachers are not a random sampling of shoppers at the mall. They’re highly educated professionals who have devoted their lives to teaching, who have constantly worked to improve their craft, and who have honed their skills via thousands of contact hours with students. It stands to reason, in fact, that career teachers should be excellent at what they do after all that training and experience. No one, I suspect, would have an issue with the statement that all Major League baseball players are excellent at baseball — some may be bound for Cooperstown and some may go back to the minors or to some other career altogether after a season or two, but they’re all really, really good at playing baseball compared to the average person. Why is it so hard to believe that 90% of career teachers are excellent at what they do?

The Fallacy of the Bell Curve and Nontenured Teachers

Unfortunately, the acceptance of the bell-curve fallacy has an even more devastating impact when applied to teachers in the beginning of their careers. One administrator shared that her board expects a few nontenured teachers to be terminated every spring, that the board implies the administrators aren’t doing their jobs if every nontenured teacher is retained. I was dumbfounded by this statement. It’s barely a figurative comparison to say that it’s like having to sacrifice a virgin or two to appease the gods at the vernal equinox. It’s no wonder that many young teachers feel as if they’re performing their highly complex duties with a Damoclesian Sword poised above their tender necks. I know firsthand one young teacher who resigned last spring after two years in the classroom to pursue another career option because she’d seen the way other young teachers were treated and had already experienced some administrative harassment. And this was a teacher who by all accounts was doing well in the classroom (in a specialized area in which there aren’t a lot of qualified candidates). She didn’t even know what she wanted to do for a living, but it will have to be better (and professionally safer) than teaching, she believed. I have to believe she’s right.

But, again, in the case of young teachers, the bell curve is being applied erroneously.  Generally speaking, when teachers are hired, administrators are drawing from an applicant pool in the hundreds. They’re college educated, trained in their field, and they’ve passed their professional exams. They often have to go through multiple rounds of interviewing before being offered a position. Of course, even after all of this, there can be young teachers who have chosen their profession poorly and in fact they’re not cut out for teaching — but school board members shouldn’t just assume a certain number should be cut from the herd to make room for potentially more effective young professionals — and if that sort of pressure is being applied to administrators, to be the bearers of the bloody hatchet every spring, that is grossly unfair, too.

The Danielson Group’s Indoctrination

The evaluation training that administrators have to undergo, all forty hours of it, indoctrinates them to the Danielson Framework’s ethos that excellent is all but unattainable, and it has led to all kinds of leaping logic and gymnastic semantics. An idea that was expressed multiple times in various ways during the roundtable was that proficient really means excellent, and a rating of excellent really means something beyond excellent — what precisely is unclear, but it has to do with teachers going above and beyond (above what? beyond where? … no one seems to know or be able to articulate). The Framework was often referred to as “fact-based” and “objective,” yet administrator after administrator couldn’t put into words what distinguishes a “proficient” teacher from an “excellent” one. It’s just a certain feeling — which is the very definition of subjectivity. The Framework for Teacher Evaluation approach is fact-laden, but it is far from fact-based.

The Danielson model is supposed to be an improvement over previous ones in part because it requires evaluators to observe teachers more than in the past. In the old system, typically, tenured teachers were observed one class period every other year. Now they’re observed one class period plus several pop-in visits, which may last only a few minutes, every other year. The Framework recommends numerous visits, even for veteran teachers, but in practicality evaluators are doing well to pop in a half dozen times or so because they have so many teachers to evaluate. Nevertheless, the increased frequency seems to give administrators the sense that they have a secure hold on the behaviors of their teachers and know with confidence what they’re doing in their classes. This confidence, frankly, is troubling. Let’s be generous and say that a principal can observe a teacher for a total of three class periods (one full period, plus bits of four or five other ones). Meanwhile, the typical teacher teaches, say, six periods per day for 180 days, which equals 1,080 periods. Three class periods represent less than one percent (0.3 percent, rounding up) of that teacher’s time with students during the year. How in the world can an evaluator say with confidence Teacher A is excellent and Teacher B is really close, but definitely only proficient based on seeing them teach less than one percent of the time?
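The arithmetic in the paragraph above is easy to check. This small Python sketch uses the same assumed numbers (six periods a day, 180 school days, three observed periods):

```python
# Back-of-the-envelope check: what fraction of a teacher's year
# does an evaluator actually observe? (Numbers assumed in the post.)
periods_per_day = 6
school_days = 180
total_periods = periods_per_day * school_days        # 1,080 periods

observed = 3  # one full period plus several brief pop-in visits
fraction = observed / total_periods

print(f"{total_periods} periods per year")
print(f"observed: {fraction:.2%} of instructional time")  # about 0.28%
```

Roughly 0.28 percent, which rounds to the 0.3 percent figure cited above.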

Yet one principal said with confidence, bravado even, that he could observe two high-performing teachers who had always been rated excellent in the past and, based on his Danielson-style observations, differentiate the excellent high-performing teacher from the proficient high-performing teacher — because, he said, the excellent teacher was doing something consistently, whereas the proficient teacher was doing that something only some of the time. What that something is was left undefined. If a writer submitted an academic article to a peer-reviewed journal drawing rock-solid conclusions based on observing anything 0.3 percent of the time … well, let us say that acceptance for publication would be unlikely.

The same standards of logic should be applied to judging teachers’ careers and assessing their worth to the profession. Period.

The Portfolio Conundrum

The confident administrator may point to another component of the Danielson model that is supposed to be an improvement over the previous approach: a portfolio prepared by the teacher. Teachers are supposed to provide their evaluator with evidence regarding their training and professionalism (especially for Danielson domains 1 and 4, “Planning and Preparation” and “Professional Responsibilities”), but there are some inherent problems with this approach and a lot of confusion. As for the confusion, principals seem to be in disagreement about how much material teachers should provide. Some suggest only a few representative items, but the whole idea is for the portfolio to fill in the blanks for the evaluator — to make the evaluator aware of professional behaviors and activities that he or she can’t observe in the classroom (especially when they’re observing a teacher less than one percent of their time with students!). However, if teachers hand in thick portfolios filled with evidence, the overburdened principal (and I’m not being sarcastic here) hardly has time to pore through dozens of portfolios that look like they were prepared by James Michener (I debated between Michener and Tolstoy) — which leaves teachers in a conundrum: Do they turn in a modest amount of evidence, thereby selling themselves short, or do they submit copious amounts of evidence that won’t be read and considered by their evaluator anyway?

And it’s a moot question, of course, since nearly all teachers are going to be lumped into the proficient category to satisfy the public’s erroneous bell-curve expectations.

The Undervaluing of Content

I’ll add one bit more from the conversation because it leads to another important point — perhaps the most important — and that is one principal’s statement that he mainly focuses on a teacher’s delivery of the material and not the validity of the content because he usually doesn’t have the background in the subject area. In larger school districts, there may be department chairs who are at least in part responsible for evaluating teachers in their department (so an English teacher evaluates an English teacher, or a math teacher, a math teacher, etc.), but the vast majority of evaluations, for tenured and nontenured teachers alike, are performed by administrators outside of the content area. This, frankly, has always been a problem and largely invalidates the entire teacher evaluation system, but when the system was mainly benign, no one fussed too much about it (not even me). Now, however, when tenure and seniority laws have been weakened, and principals are programmed to be niggardly with excellent ratings, the fact that evaluators oftentimes have no idea if the teacher is dispensing valid knowledge or not undermines the whole approach.

Not to mention, the Danielson Framework claims to place about fifty percent of a teacher’s effectiveness on his or her knowledge of the subject. The portfolios are supposed to help with this dilemma (the same portfolios that aren’t being read with any sort of care because of time constraints). I’m dubious, though, that this is a legitimate concern of the framers of the Danielson Framework, because they clearly privilege an approach to teaching that places the burden of knowledge production with the students. That is, ideally teachers facilitate their students’ acquisition of knowledge through self-discovery rather than imparting that knowledge directly. Indeed, in this view, excellent teachers do very little direct teaching at all.

This devaluation of content-area knowledge has been a growing trend for several years, and it’s not surprising that administrators are easily swayed toward this mindset. After all, teachers who go into administration have chosen to pursue advanced study outside their subject-area field. Very, very few administrators hold a master’s degree in their original content area in addition to their administrative degrees and certificates. In theory, they may accept the idea that broader and deeper knowledge in one’s subject area is important, but they can’t truly understand just how valuable (even invaluable) it is, since they never taught as someone with an advanced degree in their field. They’re only human, after all, and none of us can truly relate to an experience we haven’t had ourselves.

Campbell Brown and Her Unwitting Campbell Brown-shirts

We didn’t talk about this during the video round-table, but it seems clear to me that none of the administrators had any sense of the role they’re playing in the larger scheme of things. The players are too numerous and the campaign too complex to get into here in any depth, but there’s unquestionably a movement afoot to privatize education — that is, to take education out of the hands of trained professionals and put it in the hands of underpaid managers so that corporations can reap obscene profits, and turn traditional public schools into barely funded welfare institutions. The well-to-do will be able to send their sons and daughters to these corporate-backed charter schools, and middle-class parents can dig their infinite hole of financial debt even deeper in an effort to keep up and send their children to the private, corporate schools as well.

Campbell Brown and the Partnership for Educational Justice were behind the lawsuit that made teacher tenure unconstitutional in California (the Vergara decision), and they’re at it again in New York (Wright v. New York). The Danielson Framework, wielded by brainwashed administrators, is laying the groundwork for Vergara-like lawsuits across the land. Imagine how much easier it will be for Brown and partners in “reform” like David Boies to make the case that public schools are failing because, see, only a handful of teachers are performing at the top of their field. The rest, 90-something percent, are varying shades of mediocre, with powerful teachers unions shielding their mediocrity from public view.

Superintendents and principals have drunk the Campbell Brown-colored Kool-Aid. In this instance the metaphor is especially apropos because there are already movements underway to dismiss traditionally trained administrators as underqualified. In Illinois, the State Board of Education is changing from certificates to licenses and in the process requiring additional training to become an administrator. It is a recent change, but already there are insinuations that administrators who received the traditional training are going to be underqualified compared to their newly licensed colleagues.

Moreover, what does it say about a principal as a recruiter of young talent when a significant number of his new hires have to be terminated year after year? What a waste of money and resources, and what a disservice to children! And what does it say about a principal as the educational leader of his building when he can’t even shape the majority of his veteran teachers into excellent practitioners? Clearly, he’s not especially excellent either. And all those well-paid superintendents who hired all those lackluster principals, well … And all those publicly elected boards of education who hired all those lackluster superintendents, well … the gross mismanagement of taxpayer dollars borders on criminal fraud.

As I see it, the Partnership for Educational Justice’s grand scheme is to have principals help them dismantle professional associations like the NEA and AFT via their use of the Danielson Framework, state by state. Then they’ll systematically replace public schools with corporate-backed charter schools which will be staffed by undertrained, low-paid “teachers,” and instead of principals, each school/franchise will be overseen by a manager — just as it works in the corporate world now. Instead of boards of education who answer to taxpayers there will be boards of directors who answer to shareholders. Brilliant.

So every time principals sign an evaluation that undervalues their teachers, they’re also signing their own resignation letter. It’s all right: they’ll look quite fetching in their Brown-shirts as they wait in the unemployment line.

tedmorrissey.com

Lowered teacher evaluations under the Danielson Framework require special training

Posted in June 2014, Uncategorized by Ted Morrissey on June 12, 2014

In an earlier post I analyzed the “Danielson Framework for Teacher Evaluation,” which has become the adopted model in numerous states, including Illinois, and I pointed out some of its many flaws. One of the aspects of Danielson that has been troubling to teachers from the beginning is its insistence that virtually no teacher is excellent (distinguished, outstanding). When the Framework was designed in 1996 it was intended to rate first-year teachers, so it made sense that very, very few would be rated in the top category. The Framework was revised three times (2007, 2011 and 2013) in an effort to be an evaluation tool for all educators and even non-classroom professionals (like librarians and school nurses). Nevertheless, the idea that virtually no teacher is capable of achieving the top echelon (however it may be labeled in a district’s specific evaluation instrument) has clung to the Framework.

In my district, we were told of the Danielson Framework a full two years before it was implemented, and from the start we were informed that it was all but impossible to achieve an “excellent” rating, even for teachers who had consistently been rated at the top level for several evaluation cycles (pre-Danielson era). After a full year of its being used, it seems that administrators’ predictions came true (or were made to come true), and almost no one (or literally no one) received an excellent rating. We were encouraged to compile a substantial portfolio of evidence, or artifacts, to help ensure that our assessment would be more comprehensive than under the previous evaluation approach. I foolishly (in retrospect) spent approximately six hours pulling together my portfolio and writing a narrative to accompany it — a portfolio, as it turned out, that we never discussed and that could only have been glanced at, given the timing of its retrieval and the appointed hour of my conference.

As predicted, I was deemed “proficient.” It was a nearly surreal experience to be complimented again and again only to be informed at the end that I didn’t rate as “excellent” because the Danielson Framework makes it exceptionally difficult for a teacher to receive a top rating. There were literally no weaknesses noted — well, there were comments in the “weakness” areas of the domains, but they were phrased as “continue to …” In other words, I should improve by continuing to do what I’ve been doing all along. In fairness, I should note that the evaluator had numerous teachers to evaluate, therefore observations to record, portfolios to read, summative evaluations to write — so I’m certain the pressure of deadlines figured into the process. Nevertheless, it’s the system that’s in place, and my rating stands as a reflection of my merits as a teacher and my value to the district and the profession — there’s no recourse for appeal, nor, I suppose, purpose in it.

I was feeling a lot of things when I left my evaluation conference: angry, humiliated, defeated, underappreciated, naive, deceived (to list a few). And, moreover, I had zero respect for the Danielson Framework and (to be honest) little remained for my evaluator — though it seems that from the very beginning evaluators are trained (programmed) to give “proficient” as the top mark. After a year of pop-in observations in addition to the scheduled observation, the preparation of a portfolio based on the four domains, a conference, and the delivery of my official evaluation, I literally have no idea how to be a better teacher. Apparently, according to the Framework, I’m not excellent, and entering my fourth decade in the classroom I’m clueless how to be excellent in the World According to Charlotte Danielson (who, by the way, has very little classroom experience).

If the psychological strategy at work is that by denying veteran teachers a top rating, they will strive even harder to achieve the top next time around, it’s an inherently flawed concept, especially when there are no concrete directions for doing things differently. As I said in my previous post on Danielson, it would be like teachers telling their students that they should all strive for an “A” and do “A”-quality work — even though in the end the best they can get on their report card is a “B.” Or business owners telling their salespeople to strive for through-the-roof commissions, even though no matter how many sales they make, they’re all going to get the same modest paycheck. In the classroom, students would quickly realize that the person doing slightly above average work and the person doing exceptional work are both going to get a “B” … so there’s no point in doing exceptional work. On the job, salespeople would opt for the easiest path to the same result.

Under Danielson, it will take great personal and professional integrity to resist the common-sense urge to be the teacher that one’s evaluation says one is — to resist being merely proficient if that, in practice, is the best rating available.

My experience regarding the Danielson Framework is not unique in my school, and clearly it’s not unique in Illinois as a whole. Each year administrators must participate in an Administrators Academy workshop, and one workshop being offered by the Sangamon County Regional Office of Education caught my eye in particular: “Communicating with Staff Regarding Performance Assessment,” presented by Dr. Susan Baker and Anita Plautz. The workshop description says,

“My rating has always been “excellent” [sic] and now it’s “basic”. [sic] Why are you doing this to me?” When a subordinate’s performance rating declines from the previous year, how do you prepare to deliver that difficult message? How do you effectively respond to a negative reaction from a staff member when they [sic] receive a lower performance rating? This course takes proven ideas from research and weaves them into practical activities that provide administrators with the tools needed to successfully communicate with others in difficult situations. (Sangamon Schools’ News, 11.3, spring 2014, p. 11; see here to download)

Apparently, then, school administrators are giving so many reduced ratings to teachers that they could benefit from special coaching on how to deliver the bad news so that the teacher doesn’t go postal right there in their office (I was tempted). In other words, the problem isn’t an instrument and an approach that consistently undervalues and humiliates experienced staff members; the problem, rather, is rhetorical — how do you structure the message to make it as palatable as possible?

While I’m at it, I have to point out the fallacious saw of citing “research,” and in this description even “proven ideas,” which is so common in education. The situation that this workshop speaks to, with its myriad dynamics, is unique and only recently a pervasive phenomenon. Therefore, if there have been studies that attempt to replicate the situation created by the Danielson Framework, they must be recent ones and could at best suggest some preliminary findings — they certainly couldn’t prove anything. If the research is older, it must concern some other communication situation from which the workshop presenters are extrapolating strategies for the Danielson situation, and they shouldn’t pass it off as proof. As a literature person, I’m also amused by the word “weaves” in the description, as it is often a metaphor for fanciful storytelling — and the contents of the alluded-to research must be fanciful indeed. (By the way, I don’t mean to imply that Dr. Baker and Ms. Plautz are deliberately trying to mislead — they no doubt intend to offer a valuable experience to their participants.)

What is more, a lowered evaluation is not just a matter of hurting one’s pride. With recent changes in tenure and seniority laws in Illinois (and likely other states), evaluations could be manipulated to supersede seniority and remove more experienced teachers in favor of less experienced ones — which is why speaking out carries a certain amount of professional risk even for seasoned teachers.

My belief is that the Danielson Framework and the way that it’s being used are part of a calculated effort to cast teachers as expendable cogs in a broken wheel. Education reform is a billions-of-dollars-a-year industry — between textbook publishers, software and hardware developers, testing companies, and high-priced consultants (like Charlotte Danielson) — and how can cash-strapped states justify spending all those tax dollars on reform products if teachers are doing a damn fine job in the first place? It would make no sense.

It would make no sense.

tedmorrissey.com

Not speaking about Danielson Framework per se, but

Posted in April 2014, Uncategorized by Ted Morrissey on April 3, 2014

Sir Ken Robinson has several TED Talks regarding education, and his “How to Escape Education’s Death Valley” is an especially appropriate follow-up to my last post about the Danielson Group’s Framework for Teaching Evaluation Instrument. Robinson, who is very funny and engaging, doesn’t reference Charlotte Danielson and her group per se, but he may as well. The Danielson Group’s Framework, which has been adopted as a teacher evaluation instrument in numerous states, including Illinois, is emblematic — in fact, the veritable flagship — of everything that’s wrong with education in America, according to Robinson.

Treat yourself to twenty minutes of Robinson’s wit and wisdom:

Fatal flaws of the Danielson Framework

Posted in March 2014, Uncategorized by Ted Morrissey on March 23, 2014

The Danielson Group’s “Framework for Teaching Evaluation Instrument” has been sweeping the nation, including my home state of Illinois, in spite of the fact that the problems with the Group, the Framework, the Instrument, and even Ms. Danielson herself are as obvious as a Cardinals fan in the Wrigley Field bleachers. There have already been some thorough critiques of the Danielson Group, its figurehead, the Framework, and how it’s being used destructively rather than constructively. For example, Alan Singer’s article at the Huffington Post details some of the most glaring problems. I encourage you to read the article, but here are some of the highlights:

[N]obody … [has] demonstrated any positive correlation between teacher assessments based on the Danielson rubrics, good teaching, and the implementation of new higher academic standards for students under Common Core. A case demonstrating the relationship could have been made, if it actually exists.

[I]n a pretty comprehensive search on the Internet, I have had difficulty discovering who Charlotte Danielson really is and what her qualifications are for developing a teacher evaluation system … I can find no formal academic resume online … I am still not convinced she really exists as more than a front for the Danielson Group that is selling its teacher evaluation product. [In an article archived at the Danielson Group site, it describes the “crooked road” of her career, and I have little doubt that she’d be an interesting person with whom to have lunch — but in terms of practical classroom experience as a teacher, her CV, like most educational reformers’, is scant of information.]

The group’s services come at a cost, which is not a surprise, although you have to apply for their services to get an actual price quote. [Prices appear to range from $599 per person to attend a three-day workshop, $1,809 per person to participate in a companion four-week online class. For a Danielson Group consultant, the fee appears to be $4,000 per consultant/per day when three or more days are scheduled, and $4,500 per consultant/per day for one- to two-day consultations (plus travel, food and lodging costs). There are fees for keynote addresses, and several books are available for purchase.]

As I’ve stated, you should read Mr. Singer’s article in its entirety, and look into the Danielson Group and Charlotte Danielson yourself. The snake-oil core of their lucrative operation quickly becomes apparent. One of the chief purposes of the Danielson Framework, which allegedly works in conjunction with the Common Core State Standards, is to turn students into critical readers who are able to dissect text, comprehending both its explicit and implicit meanings. What follows is my own dissection of the “Framework for Teaching Evaluation Instrument” (2013 edition). For now, I’m limiting my analysis to the not-quite-four-page Introduction, which, sadly, is the least problematic part of the Framework. The difficulties only increase as one reads farther into the four Domains. (My citations refer to the PDF that is available at DanielsonGroup.org.)

First of all, the wrongheadedness of teacher evaluation

Before beginning my dissection in earnest, I should say that, rubrics aside, the basic idea of teacher evaluation is ludicrous: sporadic observations, very often by superiors who aren’t themselves qualified to teach your subject, yield nothing especially accurate or useful. As I’ve blogged before, other professionals — physicians, attorneys, businesspeople, and so on — would never allow themselves to be assessed as teachers are. For one thing (and this is a good lead-in to my analysis), there are as many styles of teaching as there are of learning. There is no “best way” to teach, just as there is no “best way” to learn. Teachers have individual styles, just as tennis players do, and effective ones know how to adjust their style depending on their students’ needs.

But let us not sell learners short: adjusting to a teacher’s method of delivery is a human attribute — the one that allowed us to do things like wander away from the savanna, learn to catch and eat meat, and survive the advance of glaciers — and it is well worth fine-tuning before graduating from high school. I didn’t attend any college classes or hold any jobs where the professor or the employer adjusted to fit me, at least not in any significant ways. Being successful in life (no matter how one chooses to define success) almost always depends on one’s ability to adjust to changing circumstances.

In essence, forcing teachers to adopt a very particular method of teaching tends to inhibit their natural pedagogical talents, and it’s also biased toward students who do, in fact, like the Danielsonesque approach, which places much of the responsibility for learning in students’ laps. Worse than that, however, a homogeneous approach — of any sort — gives students a very skewed sense of the world in which they’re expected to excel beyond graduation.

In fairness, “The Framework for Teaching Evaluation Instrument” begins with a quiet little disclaimer, saying in the second sentence, “While the Framework is not the only possible description of practice, these responsibilities seek to define what teachers should know and be able to do in the exercise of their profession” (3). That is, there are other ways to skin the pedagogical cat. It’s also worth noting that the Danielson Group is seek[ing] to define — it doesn’t claim to have found The Way, at least not explicitly. Nevertheless, that is how untold numbers of legislators, reformers, consultants and administrators have chosen to interpret the Framework. As the Introduction goes on to say, “The Framework quickly found wide acceptance by teachers, administrators, policymakers, and academics as a comprehensive description of good teaching …” (3).

Teachers, well, maybe … though I know very, very few who didn’t recognize it as baloney from the start. Administrators, well, maybe a few more of these, but I didn’t hear any who were loudly singing its praises once it appeared on the Prairie’s horizon. Academics … that’s pretty hard to imagine, too. I’ve been teaching high-school English for 31 years, and I’ve been an adjunct at both private and public universities for 18 years — and I can’t think of very many college folk who would embrace the Danielson Framework’s tactics. Policymakers (and the privateer consultants and the techno-industrialists who follow remora-like in their wake) … yes, the Framework fits snugly into their worldview.

Thus, the Group doesn’t claim the Framework is comprehensive, but they seem to be all right with others’ deluding themselves into believing it is.

The Framework in the beginning

The Introduction begins by explaining each incarnation of the Framework, starting with its 1996 inception as “an observation-based evaluation of first-year teachers used for the purpose of licensing” (3). The original 1996 edition, based on research compiled by the Educational Testing Service (ETS), coined the performance-level labels of “unsatisfactory,” “basic,” “proficient,” and “distinguished” — labels which have clung tenaciously to the Framework through successive editions and adoptions by numerous state legislatures. In Illinois, the Danielson Group’s Framework for Teaching is the default evaluation instrument if school districts don’t modify it. Mine has … a little. The state mandates a four-part labeling structure, and evaluators have been trained (brainwashed?) to believe that “distinguished” teachers are as rare as four-leaf clovers … that have been hand-plucked and delivered to your doorstep by leprechauns.

In my school, it is virtually (if not literally) impossible to receive a “distinguished” rating, which leads to comments from evaluators like “I think you’re one of the best teachers in the state, but according to the rubric I can only give you a ‘proficient.’” It is the equivalent of teachers telling their students that they’re using the standard A-B-C-D scale and want them to do A-quality work and strive for an A in the course, but, alas, virtually none of them are going to be found worthy, and they will have to settle for a B (“proficient”): Better luck next time, kids. Given the original purpose of the Framework — to evaluate first-year teachers — it made perfect sense to cast the top level of “distinguished” as all but unattainable, but it makes no sense to place that level beyond the reach of high-performing, experienced educators. Quite honestly, it’s demeaning and demoralizing — it erodes morale as well as respect for the legitimacy of both the evaluator and the evaluation process.

Then came (some) differentiation

The 2007 edition of the Framework, according to the Introduction, was improved by providing modified evaluation instruments for “non-classroom specialist positions, such as school librarians, nurses, and counselors,” that is, people who “have very different responsibilities from those of classroom teachers”; as such, “they need their own frameworks, tailored to the details of their work” (3). There is no question that the differentiation is important. The problem, however, is that it implies “classroom teacher” is a monolithic position, and nothing could be further from the truth. Thus, having one instrument that is to be used across grade levels and ability levels, not to mention for vocational, academic and fine arts courses, is, simply, wrongheaded.

As any experienced teacher will tell you, each class (each gathering of students) has a personality of its own. On paper, you may have three sections of a given course, all with the same sort of students as far as age and ability; yet, in reality, each group is unique, and the lesson that works wonderfully for your 8 a.m. group may be doomed to fail with your 11 a.m. class, right before lunch, or your 1 p.m. after-lunch bunch — and on and on and on. So the Danielson-style approach, which is heavily student directed, may be quite workable for your early group, whereas something more teacher directed may be necessary at 11:00.

Therefore, according to the Danielson Group, I may be “distinguished” in the morning but merely “proficient” by the middle of the day (and let us not speak of the last period). The evaluator can easily become like the blind man feeling the elephant: depending on which piece he experiences, he can have very different impressions about what sort of thing, what sort of teacher, he has before him. Throw into the mix that evaluators, due to their training, have taken “distinguished” off the table from the start, and we have a very wobbly Framework indeed.

Enter Bill and Melinda Gates

The 2011 edition reflected revisions based on the Group’s 2009 encounter with the Bill and Melinda Gates Foundation and its Measures of Effective Teaching (MET) research project, which attempted “to determine which aspects of a teacher’s practice were most highly correlated with high levels of student progress” (4). Accordingly, the Danielson Group added more “[p]ossible examples for each level of performance for each component.” They make it clear, though, that “they should be regarded for what they are: possible examples. They are not intended to describe all the possible ways in which a certain level of performance might be demonstrated in the classroom.” Indeed, the “examples simply serve to illustrate what practice might look like in a range of settings” (4).

I would applaud this caveat if not for the fact that it’s embedded within an instrument whose overarching purpose is to make evaluation of a teacher appear easy. Regarding the 2011 revisions, the Group writes, “Practitioners found that the enhancements not only made it easier to determine the level of performance reflected in a classroom … but also contributed to judgments that are more accurate and more worthy of confidence” (4-5). Moreover, the Group says that changes in the rubric’s language helped to simplify the process:  “While providing less detail, the component-level rubrics capture all the essential information from those at the element level and are far easier to use in evaluation than are those at the element level” (4).

I suspect it’s this ease-of-use selling point that has made the Framework so popular among policymakers, who are clueless as to the complexities of teaching and who want a nice, tidy way to assess teachers (especially one designed to find fault with educators and rate them as average to slightly above average). But it is disingenuous, on the part of Charlotte Danielson and the Group, to maintain that a highly complex and difficult activity can be easily evaluated and quantified. In a 2012 interview, Ms. Danielson said that her assessment techniques are “not like rocket science,” whereas “[t]eaching is rocket science. Teaching is really hard work. But doing that [describing what teaching “looks like in words”] isn’t that big a deal. Honestly, it’s not. But nobody had done it.”

It’s downright naive — or patently deceptive — to say that a highly complex process (and highly complex is a gross understatement) can be easily and simply evaluated — well, it can be done, but not with any accuracy or legitimacy.

Classic fallacy of begging the question

I want to touch on one other inherent flaw (or facet of deception) in the Danielson Framework: its bias toward “active, rather than passive, learning by students” (5). Speaking of the Framework’s alignment with the Common Core, the Group writes, “In all areas, they [CCSS] place a premium on deep conceptual understanding, thinking and reasoning, and the skill of argumentation (students taking a position and supporting it with logic and evidence).” On the one hand, I concur that these are worthy goals — ones I’ve had as an educator for more than three decades — but I don’t concur that they can be observed by someone popping into your classroom every so often, perhaps skimming through some bits of documentary evidence (so-called artifacts), and I certainly don’t concur that it can be done easily.

The Group’s notion of active learning, judging by the Domains themselves, seems to equate learning with students being visibly active (working in small groups, for example, or leading a class discussion). But learning happens in the brain, and signs of it are rarely visible. Not to stray too far afield, but at this point the Framework intersects with introverted versus extroverted learning behaviors. Evaluators, perhaps reflecting a cultural bias, prefer extroverted learners because they can see them doing things, whereas introverted learners may very well be engaged in far deeper thinking, comprehension and analysis, deeper work that is, in fact, facilitated by their physical inactivity.

And speaking of “evidence”: the Introduction refers to “empirical research and theoretical research” (3), to “analyses” and “stud[ies]” (4), and to “educational research” that “was fully described” in the appendix of the 2007 edition (3). Beyond this vague allusion (to data that must be approaching a decade old), there are no citations whatsoever. In other words, the Danielson Group is making all sorts of fantastic claims void of any evidence, which I find the very definition of “unsatisfactory.” This tactic of claiming that practices and policies are based on research (“Research shows …”) is common in education; yet citations, even vague ones, rarely follow, and when they do, the sources and/or methodologies are dubious, to put it politely.

I plan to look at the Danielson Framework Domains in subsequent posts, and I’m also planning a book about what’s really wrong in education, from a classroom teacher’s perspective.

tedmorrissey.com