Over at eduwonk.com there's a reference to recent discussions on value-added. I won't go into all of that now except to say how much the discussion in this area has changed in the last 15 years. It has gone from "It's all wrong -- the tests, the statistics, everything!" to "Well, sure it adds some value, but..." followed either by a suggestion that a competing approach can do it cheaper than Dr. Sanders' model, or by a statement that it isn't the answer to all questions. Correct. Most data answers some questions, then raises more and better questions. That's the case with value-added.
The sticking point now seems to be the use of value-added in evaluating individual teachers. Here's eduwonk's point:
In my role as a policymaker in VA I would not be willing to go too far on this front because I think these models are still awfully noisy except at the extremes. In other words, they can tell you who really lousy and really great teachers are, but in the vast middle, where most teachers are, they're of less utility.
Ok, ... Wait a minute! It separates the very effective teachers from the middle and from the very ineffective, and you can't find a policy use for that? Let's remember that the real achievement gap is between the results the very effective teachers achieve and those of the middle or the very ineffective. (See here.) Isn't that useful? Doesn't it help you set a goal for moving the middle toward the top and the bottom toward the middle? Standardized test results don't do a great job of separating out kids "in the middle," but we're sure using them for accountability. They can and do give useful information on the big picture (just some information, not all, and I am NOT saying we're using that information correctly!). Value-added does the same.