Learning Bets
Chapter: Brutal Facts

Yes, We Have a Problem: Five Foundational Realizations

When Teach For All convened world-class experts with leaders from Teach For All organizations to explore the question “How Do We Best Grow Great Teachers?”, a handful of concerning realities surfaced that serve as a helpful, if sobering, starting place for this inquiry:

  1. Lack of a clear, intentional theory of teacher development is a widespread problem.

  2. We too often conflate the “what” and the “how” of growing great teachers.

  3. Our “bets” about how teachers grow tend to be implicit and/or hidden, making learning from each other difficult.

  4. We tend to try to do a little bit of everything instead of a few things well.

  5. The research is unhelpful.

Lack of a clear, intentional theory of teacher development is a widespread problem.

We are not the only ones struggling to find breakthrough strategies for training and supporting teachers. Teacher preparation efforts across the global education landscape—from university-based training models to school-based professional development—are straining to see even small aggregate changes in the performance of their teachers.

So while we might acknowledge that we are not alone in this challenge, we also have to recognize that we have few models to draw from for solving it.

Tim Daly, former head of the influential U.S. organization TNTP (The New Teacher Project), reflected on what he sees across the entire education landscape:

This is deeply humbling work. It is so painful to see how much we are struggling and that the challenge of supporting teachers to better practices is so brutally slow and difficult, and is more about the failures we’re reflecting on and learning from than the shining successes we can build on. We see this landscape where info is constantly bent or obscured, and it leads to bad decisions.

At the Roundtable, we heard similarly pained descriptions of the teacher-growth landscape from countries around the world, including Australia, Peru, the UK, and China.

Tim went on to give us a review of the just-released TNTP report The Mirage: Confronting the Hard Truth About Our Quest For Teacher Development. This report is a must-read provocation for all of us who think we might have some idea about how to grow teachers.

By studying and interviewing 10,000 teachers, 500 school leaders, and 100 teacher developers, and by trying to link investments in professional development with actual outcomes for children, TNTP revealed that almost everything we think we know about teacher development is . . . a mirage.

Consider some of TNTP’s concerning findings from its research in the U.S.:

  • In the U.S., the studied districts spend approximately $18,000 per teacher and 10% of teachers’ work days on professional development each year, in perpetuity.  And yet, in terms of actual student outcomes, most teachers in those districts are not actually improving year to year, and in fact some are getting worse.
  • Teachers’ own assessments of their strengths and weaknesses rarely align with their actual skill levels, often because the systems they work in have suggested they are all great and have little room to improve. More than 60% of low-rated teachers gave themselves high performance ratings.
  • As Tim Daly explained, this gap between reality and perception undermines the heart of professional development:

It’s consequential because it impedes this idea of disconfirmation – when I know that my actions are not getting me the results that I want, which is one of the most powerful things that starts learning. When do I decide I should learn something different? We see most info that teachers receive and most of the things they believe impede that process from happening rather than facilitating it in a healthy way.

  • Studying the few teachers who are improving reveals no actionable patterns in the type, amount, or substance of the professional development that led to their improvement. Most of us insist that we know what works, and that if we just put that in an intense enough form and with an intense enough dosage, we will see teachers grow. Tim described this study’s findings bluntly:

Form and dosage have basically no relationship at all to teacher learning.  This is enormously dispiriting and important.

Tim brings the findings of The Mirage home with a stunning data point:

In one of our sites, if you were to play forward [the teacher growth seen in these studies], at what point will the average teacher in this site be highly effective in developing students’ critical thinking skills?  It would be 172 years.

We too often conflate the “what” and the “how” of growing great teachers.

As we engaged with experts on the question “How do we best grow great teachers?”, we repeatedly found that our conversations slipped into discussions of what knowledge, skills, and mindsets teachers need to have. Our partner organization representatives did that. Our experts did that. We did that.

When the “how we grow teachers” question is hard to answer, we have a tendency to retreat to the question of what we want teachers to know and do. This tendency contributes to our lack of clarity, understanding, and explicitness about our theory of development.

At one point, one of the experts at the Roundtable caught this tendency and called it out:

One contributor said we should be careful about understating what we know – for instance, we know a lot about how to teach reading.  It felt to me like [that contributor] was conflating what we know about good teaching, which is quite a bit, with how we help teachers become effective at those things.  Knowing and helping aren’t the same things.  It felt like [we] might be making the leap to say if we know/understand something, we can get teachers to do it.  That’s exactly where I worry that we know less than we think! 

While the “what” and “how” of transformational learning and leadership must be closely related to each other, they are in fact different questions. We could, for example, attempt to build a particular skill in many ways. Should we:

        • have learners read about the skill?
        • have learners watch the skill being performed?
        • have learners practice the skill?
        • have learners break down and practice small parts of the skill?
        • have learners watch themselves attempt the skill and reflect on how their attempt differs from an exemplary model?
        • have learners try the skill with a coach’s guidance?

Which “bet” about how our teachers learn best is our best bet?

That question only gets more complex when, in our context of transformational learning and leadership, we know that mindsets, values, vision, and orientation to our work are critical to the short- and long-term impact we aspire to. Challenging and growing mindsets raises a similarly long and complex set of choices about how we intend to influence our learners’ growth.

The experts at our Roundtable both demonstrated this unhelpful tendency to conflate “what” with “how” and regretted it in themselves and others.

Our almost exclusive focus on what teachers need to do, know, be, and believe leaves many programs without a strong, clear, guiding strategy of development.

Morva MacDonald from the University of Washington did not mince words:

Most places are completely absent, actually, of a theory of learning.  And as a result, they do a set of activities that . . . . very often have little to do with how we actually support people to do the work of teaching. 

And, as Morva reminds us, while we must not hide from the “how” in the “what,” the question of how we grow teachers does need to be inextricably linked to what we aspire for them to know, do, and be:

We deeply believe in a very integrated approach about what somebody is learning and how they are going about learning it.  In separating them, you lose a lot of the potential of developing somebody’s capacity. . . .The challenge is that once you are actually within the moment of teaching there is drift right? Because you are faced with kids’ questions, you’re faced with the limitations of what you understand the content to be which you don’t see, you don’t really get a handle on in a planning moment. . . .We are equally concerned with [what they are learning and] how they are going about learning it because we understand that the biggest challenge of teaching is actually being able to enact what it is you understand.

Our “bets” about how teachers grow tend to be implicit and/or hidden, making learning from each other difficult.

Our experts at the Roundtable shared a concern that most teacher preparation organizations lack even minimal transparency—and many lack fundamental clarity and purposefulness—in their choices and assumptions about how they grow teachers.

What “bets” different organizations make about what works to improve teacher performance are hidden and implicit, making it hard to learn from each other.  And in many cases those learning bets seem to be unexamined and unclear, inhibiting success.

Michael Goldstein, founder of MATCH Education (a combination charter school and graduate school of teacher training) and now working with Bridge International Academies in Kenya, said:

We don’t know who is best. We just don’t. By best I mean teachers who are creating the largest gains for kids. Programs’ “value add” is still so shrouded that even the well-intentioned reformers do not copy the best, nor feel that improvement is an urgent must rather than a nice-to-have.

We tend to try to do a little bit of everything instead of a few things well.

Our quick audit of a range of teacher preparation organizations (inside and outside of Teach For All) reveals that many of us have responded to the frustrating lack of aggregate improvement in teacher performance and student outcomes by adding another, and yet another, and yet another, learning initiative until each of our programs is doing a little bit of a lot of different “bets.”

We have heard and seen that same pattern from university partners, from non-profit teacher support organizations, and from partner organizations including Teach For America.

Tim Daly described this problem as one of the catalysts for TNTP’s overhaul of its teacher training model (called Fast Start). TNTP now focuses on fewer skills, more practice, and a meaningful “deselection” of teacher candidates who are not growing and performing at a pace that will have them minimally ready for the first day.

Tim was careful not to defend those narrowed choices as “right,” but was hopeful that the narrower focus would improve teacher quality: 

I think some of it is just kind of saying “let’s just lay a bet.” It may not be the right bet, but let’s just commit, rather than try to do a bunch of different things in a kind of low-intensity or not very deliberate way. Then it is quite possible that if we explored all the pathways, we might find out that the one we bet on is not the best, but I think we felt like what we had been doing before was trying to do a bit of everything.

For many of our programs, just like at TNTP, fixing the muddled “how” problem is actually going to mean doing LESS, not more. It will mean undoing previous layers of less-than-purposeful teacher development “bets” rather than choosing and focusing on new ones.

The research is unhelpful.

Faced with the realization that our underlying assumptions about teacher growth might be wrong, our natural instinct is to look for the “research” that will help us choose the right assumptions.

Once again, we come face to face with a painful realization.  As Ben Jensen (an expert in teacher learning from Australia) put it:

The research on how teachers grow is abysmally unhelpful. The evidence is disgustingly poor.

Ben described how little we actually know about what sorts of experiences and content most contribute to teacher improvement.  He emphasized that this problem stretches around the world.

Consider this, from a highly respected think tank, the Brookings Institution, on the question “What Do We Know About Professional Development?”:

. . . a study conducted by Instructional Research Group and released last week reviewed the research on professional development in K-12 mathematics.  Good research reviews whittle down an initial pool of studies based on quality of design.  This review found that of 910 PD studies identified in a search of the relevant literature, only thirty-two employed a research design for assessing the effectiveness of PD programs.  Of those, only five met the evidence standards set by What Works Clearinghouse.  Of the five studies, two had positive results, one showed limited effects, and two detected no discernible effects.  Such dismal findings aren’t confined to PD in math. 

All of our experts agreed that the research landscape is problematic. They called out a number of factors that contribute to the “desperate” nature of the research, including education’s history of emphasizing inputs over outcomes.

Mike Goldstein challenged us to recognize the need for a smart, objective, say-it-like-it-is “casino” that watches and monitors the teacher-growth “bets” that different groups are making, and plays that learning forward, from both what is and is not working. (“There are lots of unpublished failed efforts out there too, which makes our learning as a group much harder,” Mike said.)

Several of our guest experts pointed to the way that medical research goes through a process that moves the field forward, and to how that system, culture, and institutional learning are missing across education.

Again, Tim Daly’s experience exploring the landscape of teacher development has given him clear-eyed concern about the state of the field:

The measures that we traditionally use to assess teacher professional learning, which are largely teacher satisfaction with it and whether they think they are growing, have, across this data set, virtually zero relationship to actual improvement. Repeat: the things that we generally make our decisions on are almost completely unrelated across these settings to whether teachers are showing improvement on the “objective” measures.

Tim sums it all up:

There’s no way to overstate this: the research base on teacher improvement is just disturbingly bad and not instructive.


