Making sausage in the Department of Education

Having taught educational policy throughout most of my career as a faculty member, I would often start the course with the old saying, “Policy is like sausage – you don’t want to see either being made.”  Just as most of us would not like to watch the sausage making process, for fear that we would find out what gets ground up and put into it, many of us – even those who consider ourselves “policy wonks” – similarly wish we could close our eyes as we watch the policy making process.  As I write, there is a good example of sausage-making going on at the Department of Education in Washington.

The Department, and Secretary of Education Arne Duncan as its leader, has been pushing for more rigorous assessment of teachers and the programs that train them.  This is a goal that is understandable and laudable; there has been much attention paid lately to the issue of teacher quality and how school districts can determine which are the most effective teachers and which are the least. Mary Kennedy, a faculty member in our Teacher Education program, edited a recent book on the topic – Teacher Assessment and the Quest for Teacher Quality: A Handbook.  You can also read a briefing on the topic written by our Teacher Education program a few years ago.

The Department of Education has been given the authority by Congress to create regulations for evaluating teacher education programs around the country.  To help devise the rules, the Department late last year created a group called a – and this is where it starts to sound like sausage – “negotiated rulemaking panel.”  The purpose of the group is to bring together experts on the topics of teacher assessment and teacher education more broadly to try to agree on what the regulations should look like.  As with most panels of this type, the Department sought to include a wide range of people, and it includes in its membership individuals who are teachers, education school deans, school administrators, state administrators, and representatives from a variety of higher education institutions.

From early in the process, and in fact before the panel was even created, the Obama Administration, through the Department of Education, has focused on using student test score data as a mechanism for evaluating teachers.  One way of doing this is through the use of something called value-added measures, or VAM.  The VAM process is an attempt to measure the learning gains that a classroom of students makes over the course of the year, generally through the use of the state curricular framework tests that are used for compliance with No Child Left Behind.  The theory is that if you measure students’ knowledge of math, science, reading, and the like at the beginning of the year, and then again at the end of the year, you would have a reliable measure of what they actually learned during the year.
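To make the basic idea concrete, here is a deliberately simplified sketch of the gain-score logic behind VAM: compare a classroom’s average end-of-year score with its average beginning-of-year score. The classrooms and scores below are entirely hypothetical, and real VAM systems use far more elaborate statistical models (student covariates, multiple years of data, shrinkage estimates); this is only the core intuition.

```python
# Simplified illustration of the "value added" idea: a classroom's
# average gain from the fall test to the spring test. Hypothetical data;
# real VAM models are considerably more complex.

def value_added(pre_scores, post_scores):
    """Average gain for one classroom: mean(post) - mean(pre)."""
    assert len(pre_scores) == len(post_scores) and pre_scores
    mean_pre = sum(pre_scores) / len(pre_scores)
    mean_post = sum(post_scores) / len(post_scores)
    return mean_post - mean_pre

# Two hypothetical classrooms taking the same fall and spring tests.
classroom_a = value_added([62, 70, 55, 68], [75, 82, 66, 80])  # gain: 12.0
classroom_b = value_added([80, 85, 78, 83], [84, 88, 80, 86])  # gain: 3.0

print(classroom_a, classroom_b)
```

Note that classroom B starts with higher-scoring students and shows a smaller gain, which hints at the interpretive problem discussed below: a raw gain by itself does not tell you how much of the difference is attributable to the teacher.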

While VAM may be a good tool for assessing student gains (depending on how well the tests are constructed), it has yet to be shown as a reliable and valid tool for assessing teacher quality.  And this is where the rulemaking process has gotten very messy.  According to press reports (Inside Higher Ed has a good story about it today) some members of the panel feel strongly that VAM should be used as the sole, or at least dominant, mechanism for teacher assessment.  Others on the panel evidently have grave concerns with the use of VAM for assessing teachers, and even more so, for assessing the programs in which they have been trained.  Proponents of VAM, including Secretary Duncan, want to use the results to make high-stakes decisions regarding teachers (including annual evaluations, merit raises, and even promotion and dismissal) and teacher education programs (for example, whether students in those programs should be eligible for some forms of federal financial aid).

I fall into the camp of those who question the use of VAM for these purposes.  The problems are myriad, and some of my fellow education school deans and I have sent a letter to the Department’s panel outlining our concerns.  A few of the more important issues that have been raised include:

  • Few, if any, of the tests used for assessing student performance in the classroom have been validated for use in assessing teacher performance, never mind that of the programs in which the teachers have been trained
  • Many teachers end up in jobs beyond the borders of the states in which they were trained, and every VAM system that has been proposed or implemented (such as those in Tennessee and Louisiana) is state specific, i.e., it only measures teachers working in that state.  Many of our teacher education graduates at MSU, for example, are taking jobs out of state because few Michigan school districts have been hiring in recent years due to the recession.
  • It is difficult to use VAM to evaluate teachers who do not teach core subjects – such as math, English, and in some states, science – that are tested each year and in each grade.

There is an overarching concern that has been raised with respect to the use of VAM for teacher assessment, and that is the question of whether teachers are the sole, or even the primary, influence on student learning over the course of a year.  While research has shown that teachers are an important influence on student learning, they are not the only one.  What happens to students in the 17 1/2 to 18 hours each weekday and all weekend when they are not in the teacher’s classroom – such as the influence of their home setting, what they do after school (are they in a tutoring or test prep program, or are they working at the local McDonalds?), and their prior academic experiences – also has a large impact on student performance on tests.

The negotiated rulemaking panel is supposed to be wrapping up its work today.  So soon we will learn just what the Department’s teacher assessment “sausage” will look like.  And I am sure some will find it very appetizing, and others much less so.
