Friday, December 22, 2006

Value-added or value-maybe

I have a variation on an old saying that reads, "If you have to join 'em, beat 'em."

Ohio is currently rolling out its value-added system. For years, many in education feared the concept of value-added. They feared a system that purported to analyze test results and determine whether teachers and schools added value during the previous school year. The standard for added value is captured in the oft-used phrase "a year for a year": Did the student gain a year of knowledge during the most recent school year?

The year-for-a-year standard has its own set of issues, such as: what is a year of growth in absolute terms, rather than one defined by the historical gains of similarly situated students? For this discussion, I am going to ignore such issues and concentrate solely on the whitewash being used to spin lagging results into congratulatory press releases.

In logic, something is either A or not-A; there is no gray separating the two. In addition, saying something is not-not-A implies that the thing is A; the nots cancel. But in statistics, the nots don't cancel. So, not-not-A does not imply A.

A closer look: My district uses the standard value-added definition for not achieving a year of growth: performance two or more standard deviations [1] below the expected result. The reason for this margin is that the statistics used in value-added analysis cannot provide exact figures. No one can state with certainty that a child achieved growth of, say, 1.23 standard deviations from the expected growth.

We can state with great confidence that the child who achieved at 2.24 standard deviations below expectations did not grow an academic year. Because of the inexactness of the statistics used in value-added analysis, it would not be fair to designate a teacher, school, or district as a poor performer unless the students achieved below the level of confidence: two standard deviations below expectation.

I can accept that. Statistics - an inexact science subject to noise - has to allow for a range of gray before making a positive statement. But the range of possible outcomes above the two-standard-deviations-below-expectation mark accounts for over 97% of all possible outcomes. Or, put differently, a score at the 3rd percentile is considered not-not-A, which is now treated as A: the expected result of a year for a year.
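For readers who want to see the arithmetic, here is a minimal sketch of the cutoff math, assuming (purely for illustration) that gains are normally distributed around the expected result; the actual value-added model's error distribution may differ.

```python
# A minimal sketch of the cutoff arithmetic, assuming (for illustration only)
# that gains are normally distributed around the expected result.
from statistics import NormalDist

expected = NormalDist(mu=0.0, sigma=1.0)   # standardized gain relative to expectation

cutoff = -2.0                              # two standard deviations below expectation
share_flagged = expected.cdf(cutoff)       # flagged as "not a year of growth"
share_not_flagged = 1.0 - share_flagged    # everything counted as "a year for a year"

print(f"Flagged (below cutoff):     {share_flagged:.1%}")      # ~2.3%, roughly the 3rd percentile
print(f"Not flagged (above cutoff): {share_not_flagged:.1%}")  # ~97.7%, the "over 97%" above
```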

The sleight of hand: The Olentangy School District has reported to our community that its students are gaining a year for a year simply because they are achieving above the 3rd percentile. The statement that district students saw a year of growth simply means that the students were not flagged as not seeing a year of growth. But that does not mean that district students saw a year of growth. Yet the not-not-A has become A due to a sleight of hand. What is true in logic is not true in statistics, but it is good PR nonetheless.

As a recent board member (I resigned three months ago), I saw the data from two years ago - poor performance - but was never able to get copies of last year's data (the latest reported data) since the administration was concerned about the results. I don't know for certain, but in my experience possible good results were easy to obtain, while possible poor results were always a struggle, unless the results were bound for public release through the state or local press.

Now, when I say that two years ago the district showed poor performance, I really mean that most combinations of school, grade level, and subject revealed achievement below expectations, but few fell below the two-standard-error mark of failure.
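As a toy illustration of that gray zone (the figures below are hypothetical, not district data), a combination can sit below expectations without crossing the two-standard-error failure mark:

```python
# Toy classification of hypothetical (school, grade, subject) gains, expressed
# in standard-error units relative to expectation. Not actual district data.
hypothetical_gains = {
    ("Elementary A", 4, "Math"): -0.8,
    ("Middle School", 7, "Reading"): -1.5,
    ("High School", 10, "Science"): -2.3,
}

for combo, gain in hypothetical_gains.items():
    if gain <= -2.0:
        label = "flagged: did not achieve a year of growth"
    elif gain < 0.0:
        label = "below expectation, but not flagged (the gray zone)"
    else:
        label = "at or above expectation"
    print(combo, "->", label)
```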

The system that feared value-added has simply spun its results into a story worthy of congratulations. The 3rd percentile: a very low standard indeed, but the new standard nonetheless.

"If you have to join 'em, beat 'em". They won!

Note:

[1] It's actually two standard errors, but, for a population of one, the standard error is the same as the standard deviation. Remember, the district is stating that all students will grow one academic year per school year. That means that the focus is on individual students, not some aggregate class, school, etc.

That said, the district may be using two standard errors of measurement, as stated at the board meeting. If that is indeed the case, the district has created its own system, one different from the now-standard Sanders value-added analysis. Sanders uses two standard errors, not standard errors of measurement.

The district's system would put the cutoff at roughly the 25th percentile, give or take, depending on the test. Still a very low standard of achievement.
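A hedged sketch of that arithmetic: if the cutoff is two standard errors of measurement (SEM) below expectation, and the SEM is some fraction of the score's standard deviation, the cutoff lands much higher in the distribution. The SEM-to-SD ratio below is an assumption chosen only to illustrate the rough 25th-percentile figure, not a district number.

```python
# Where a "two SEM below expectation" cutoff lands, assuming (for illustration)
# an SEM of roughly one third of the score's standard deviation.
from statistics import NormalDist

expected = NormalDist(mu=0.0, sigma=1.0)

sem_as_fraction_of_sd = 0.34                       # assumed ratio, not a district figure
cutoff_in_sd_units = -2.0 * sem_as_fraction_of_sd  # about -0.68 standard deviations

print(f"Cutoff percentile: {expected.cdf(cutoff_in_sd_units):.0%}")  # about 25%
```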

11 comments:

Anonymous said...

A well-written post. It seems as though school districts often work harder at public perception than academic integrity. Do we pay taxes for skewed reporting or quality education? Thanks for your insights.

Chuck Blythe said...

Very well said. It is refreshing to have someone so close, albeit formerly, to a school system who understands both logic and statistics. (Not to mention Mises and, obviously, Rand.) Too bad for the general population in Delaware County that you stepped down.

Anonymous said...

i disagree with your understanding of this issue. first of all, you preface with "In logic, something is either A or not-A; there is no gray separating the two. In addition, saying something is not-not-A implies that the thing is A; the nots cancel. But in statistics, the nots don't cancel. So, not-not-A does not imply A."

logic looks so cut and dried because it is unencumbered by the noise and uncertainty of data in the real world. so this point is not helpful at all; formal logic does not really enter into it.

i don't think you really understand statistics. you state "Statistics - an inexact science subject to noise - has to allow for a range of gray before making a positive statement."

statistics is not a science, and it is not subject to noise. statistics is the antidote to the noise in data that come from the real world. statistics allows us to quantify and account for the noise (error) in samples of data. positive statements are not made by statistics; they are interpretations made by humans. statistics just affords us a principled approach to making positive statements so that we don't overshoot.

Jim Fedako said...

Before I debate anonymous regarding all of the errors in his post, I suggest that he refer to the current source of fact, www.wikipedia.com, before he states that statistics is not a science. Wikipedia refers to statistics as a "mathematical science." http://en.wikipedia.org/wiki/Statistics

He may want to review his post before questioning mine.

Anonymous said...

i don't need to review my posts for something so elementary that it is taught in every introductory stats class. so wikipedia is your source of truth? no wonder you don't understand it.

so substantively, you think that statistics has to "allow for a range of gray before making a positive statement" - but your intellect doesn't? so statistics is less powerful a tool? exactly how deluded are you about your own intellectual power? i'll say it again, statistics is the antidote to the noise in data that come from the real world. ignoring noise is not the answer. if you check your watch and it says 3:12, and it's the only clock you have, you're pretty sure it's 3:12. if you have fifty other clocks that give a variety of different answers, you suddenly become less sure even though your watch hasn't changed at all.

to come back to your original topic, it looks to me like the amount of variation in the sample, and the unreliability of the tests, made the standard errors very large, so the confidence intervals were also large, so the scores are uninterpretable. this doesn't prove that value-added models are bad. i agree with you that spinning this positively has no merit. but your post makes it sound like testing or statistics are somehow inadequate for measuring this kind of progress. absolutely not.

Jim Fedako said...

As anonymous should know: Wikipedia is the current source of fact, not truth.

So, as I understand you, Wikipedia is incorrect yet you supply no reference to support your opinion. Why would I, or my readers, select you over Wikipedia based on nothing more than your unsupported statements?

Regarding Sanders, et al., anonymous must realize that the system is built on an incorrect use of statistics. Richard von Mises - someone with a great intellect - proved over 50 years ago that you can't use statistics when you know more about the sample than that it is simply a random draw from the population. Districts, schools, and classes are not random draws by any means.

This alone destroys any real statistical value to value-added. Yet, I do say that value-added can provide a service to those who want to see inside the schools - you need to read the additional posts I wrote on this topic. But, like all tools, value-added can be - and is being - corrupted by the schools to fit their own purposes. That is the whole point of my post.

Also, keep in mind that you are reading a page by an anti-positivist, not a positivist/empiricist such as yourself. So, don't expect me to hold statistics as being able to really explain truth.

Anonymous said...

well you're asking me to prove a negative, that statistics textbooks don't say that it's a science. i've just looked through the introductory pages of a dozen stats textbooks in my library and not one said that it was a science. but i will provide you with quotations from thompson, b. (2006). foundations of behavioral statistics. guilford press: NY.

"Nor is statistics about black and white or universally-right or universally-wrong analytic decisions. Instead, statistics is about being reasonable and reflective. Statistics is about thinking."

"Statistics is about both understanding and communicating the essence of our data."

there is nothing controversial about what i have written in previous posts among statisticians.

statistics is therefore not attempting to explain truth. it is attempting to temper our silly minds' overreaching for truth. in your last post, you wrote, "This alone destroys any real statistical value to value-added." you think that if there is any flaw in the sampling, all inferences based on probability theory must go out the window? preposterous! it has been proved mathematically that knowing more about the sample, as you say, can only reduce error, so inferential statistics based on a random draw can only make more conservative inferences, not less. bayesians would completely agree with the idea that prior information should be incorporated into models. Richard von Mises was, after all, a positivist. if there's information out there, why not try to use it?

you're obviously a philosopher and not a statistician, that's fine. but don't disparage statistical reasoning if some people use general ignorance of it to distort the information they put out. you are certainly not an appropriate arbiter of what is a correct or incorrect use of statistics.

Jim Fedako said...

Prove the negative? No, just show me the reference that states that "statistics is not a science." You made the claim, now back it up.

Just the reference will do. But don't search through esoteric journals for the obscure. Simply back up your statement, that's all I ask.

And, if you can't back it up, be a gentleman and withdraw it. Don't attempt to dance around it; address it and let's be done.

Please refer to fellow statisticians for critical views of Sanders. Many also question his approach.

Sorry. I am a mathematician by degree, statistician by avocation. Since you are a positivist, my statement and degree are sufficient proof of empirical truth, so please don't attempt to refute this statement as it would be very anti-positivist of you.

Yes. Richard was a positivist, while his brother was an ardent anti-positivist. I side with his brother on matters of economics and epistemology. I side with both, as they were in agreement, about useful and appropriate applications of statistics.

Oh, but I am an arbiter of truth. In a free market of ideas, we all are. That is why you can claim that statistics is a science, without reference or proof. See, it works both ways.

Finally, this post was about the machinations that school districts go through in order to spin any story into something positive. anonymous has never challenged this position. I rest my case.

note: in other posts I do claim that value-added has potential benefits - in the right hands. If you followed Sanders' findings, you would assign students to teachers with equivalent intellect. And you would only have homogeneous groupings of students. If indeed Sanders is correct - a tip to statistics and empiricism - but schools do not reorganize, what's the point? In addition, as long as government runs the system, it is bound to fail, Sanders or not.

Jim Fedako said...

Oops. "That is why you can claim that statistics is a science, without reference or proof. See, it works both way." should have read "That is why you can claim that statistics is a not science, without reference or proof. See, it works both way.

Also, read the bizarre trash coming out of the NBER (www.nber.org) to understand the supposed truths that statistics reveals. Hold onto your head ...

Anonymous said...

i will provide my refereed journal reference for the quote "statistics is not a science" when you give your refereed journal reference for "education is not a winged animal revolving around my harpsichord." QED

Jim Fedako said...

Instead of being a gentleman and providing his source, anonymous simply changes the debate once more.

That was the point of the original post: When confronted with a truth they do not like, some - such as anonymous and my local school district - spin the debate in circles.

My district stated that ALL students grew an academic year over the last school year, while anonymous claims that statistics is NOT a science. Neither claim holds, though the spin may look good.

Provide your source or withdraw your claim. Or, why don't you use the tools of your trade (statistics) and correlate "education is not a winged animal revolving around my harpsichord" with your claims? The NBER, a pseudo-governmental agency that peddles this type of truth, might just publish your results.

Note: Statistics at work

Don't believe me? Here is the latest muddle from the NBER: Birds of a Feather - Better Together? Exploring the Optimal Spatial Distribution of Ethnic Inventors
by Ajay Agrawal, Devesh Kapur, John McHale #12823 (PR)
http://papers.nber.org/papers/W12823

Abstract:

We examine how the spatial and social proximity of inventors affects knowledge flows, focusing especially on how the two forms of proximity interact. We develop a knowledge flow production function (KFPF) as a flexible tool for modeling access to knowledge and show that the optimal spatial concentration of socially proximate inventors in a city or nation depends on whether spatial and social proximity are complements or substitutes in facilitating knowledge flows. We employ patent citation data, using same-MSA and co-ethnicity as proxies for spatial and social proximity, respectively, to estimate the key KFPF parameters. Although co-location and co-ethnicity both predict knowledge flows, the marginal benefit of co-location is significantly less for co-ethnic inventors. These results imply that dispersion of socially proximate individuals is optimal from the perspectives of the city and the economy. In contrast, for socially proximate individuals themselves, spatial concentration is preferred - and the only stable equilibrium.