TheCRE.com
PUBLIC COMMENT ON THE CRE DRAFT DATA QUALITY RULE

Commenter: Kenneth Green, Ph.D.

Affiliation: Director of Environmental Program, Reason Public Policy Institute

Comment:

July 7, 2000

I think that this proposed rule is an important element in the never-ending quest for improved policy-making procedures, especially where science and policy intersect, as is increasingly the case.

My comments will mostly address the "data quality" element of the proposed rule. On the question of access, I think you've pretty much covered the ground, though I believe any outside analyst should have access to the data, regardless of whom they work for or what their motivation is; we are all affected by regulation, in one way or another. You might also want to say something about NGOs and other outside policy-analysis organizations, which in many ways galvanized this issue during the 1997 NAAQS consideration process.

Overall, while I applaud the intention of the proposed rule, I think that some work is needed to add a bit more specificity to various components.

1) Quality
The problem here is that most of the terms used to describe data quality are subjective, or fail to show an understanding of some of the complications of scientific research endeavors.

a) "Conformance to fact"

Well, this is everyone's goal, all right, but it's not something that can be guaranteed in most scientific research. Even when the data you gather confirm your research hypothesis at the 95% confidence level (the gold standard of scientific confidence), that same finding could, 5% of the time, be due to purely random error, even when you've done everything right, had state-of-the-art data acquisition, etc. If you do the math, that means roughly one in twenty experiments will produce a result at odds with a hypothesis that accurately describes reality, even if you do everything right. The bottom line here is that you need to pick some standard that you think *means* "accurate" or "valid." I'd suggest that you consult with some statisticians, but you could select certain measures, such as the 95% confidence level, which would rule out a lot of shoddy work while retaining an objective standard of "quality." The same objections apply to "Validity."
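The one-in-twenty figure is easy to check by simulation. The sketch below (my own illustration, not part of the rule or the underlying studies) repeatedly runs a two-sample comparison on data where no real effect exists, so every "significant" result is pure random error; the names and parameters are hypothetical choices for the demonstration.

```python
import random
import statistics

def t_statistic(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

random.seed(42)
trials, n = 2000, 50
critical = 1.984  # two-sided 5% t critical value for ~98 degrees of freedom

false_positives = 0
for _ in range(trials):
    # Both samples come from the SAME distribution, so any "significant"
    # difference between them is a false positive.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    if abs(t_statistic(a, b)) > critical:
        false_positives += 1

rate = false_positives / trials
print(f"false-positive rate: {rate:.3f}")  # expected to land near 0.05
```

Even with flawless data collection, roughly 5% of these null experiments clear the significance bar, which is the author's point: "conformance to fact" cannot be guaranteed, only bounded by an explicit statistical standard.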

b) "Impartiality"

What does this mean?

c) "Representativeness"

Again, that's everyone's goal, but it's not as easy as it might seem. Sometimes one simply can't obtain a representative sample and must make adjustments to account for a non-representative one. A better approach here would be to require something specific, such as "best-practice randomized sampling." Again, I'd suggest talking to a good statistician or psychometrician to help pick some objective standards of "representativeness," or at least standards good enough for government work.
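The difference between a convenience sample and best-practice randomized sampling can be made concrete with a small simulation. This sketch (my own hypothetical example, with made-up subgroup sizes and values) builds a population of two unequal subgroups, then compares a sample that over-draws from the easy-to-reach subgroup against a uniformly random one.

```python
import random
import statistics

random.seed(7)

# Hypothetical population: a large subgroup and a small one with a
# different typical value.
group_a = [random.gauss(10, 2) for _ in range(8000)]
group_b = [random.gauss(20, 2) for _ in range(2000)]
population = group_a + group_b
true_mean = statistics.mean(population)  # roughly 12

# Convenience sample: drawn almost entirely from the easy-to-reach group,
# so the small subgroup is badly under-represented.
convenience = random.sample(group_a, 450) + random.sample(group_b, 50)

# Randomized sample: every member of the population is equally likely
# to be chosen.
randomized = random.sample(population, 500)

print(f"true mean:        {true_mean:.2f}")
print(f"convenience mean: {statistics.mean(convenience):.2f}")
print(f"randomized mean:  {statistics.mean(randomized):.2f}")
```

The randomized sample's mean lands close to the population mean, while the convenience sample's is systematically biased toward the over-sampled subgroup; a rule that names the sampling procedure, rather than the vague goal of "representativeness," guards against exactly this.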

2) Under "Objectivity," I'm not quite sure what section ii is supposed to assure. If I have a hypothesis, and I gather data to test that hypothesis, the data I gather will certainly have some relationship to my expectations. And my expectations are also likely to shape how and where I gather the data.

In conclusion, I think the idea behind the Data Quality rule is a good one, well worthy of pursuit, but the current draft would benefit from some additional specificity.

Dr. Kenneth Green
Director of Environmental Program
Reason Public Policy Institute