I’ve been reading Chris Mooney’s commentary (here, here
and here) about the Office of Management and
Budget’s (OMB) guidelines for implementing the Data Quality Act. He points out that industry has turned the Act into an impediment to environmental regulation. I’ve looked at how this has played out at the U.S. EPA, and as you’ll see below, there are cases that make his point.
At the same time, I’ve been a firm adherent of EPA’s Data Quality Objectives (DQO) process for developing study designs for
investigating hazardous waste sites. Investigations that are planned without
DQOs inevitably resemble very expensive, failed treasure hunts. Does my
insistence that waste site investigations be structured around specific
questions and defined decision points put me on the side of the OMB? Not sure –
but what I do know is that DQO guidance is part of EPA’s long-standing Quality System, which was incorporated into the data quality guidelines EPA developed in 2002 in response to the Data Quality Act. So, I am a user of the Data Quality Act and will say that it can have a role in producing usable data for environmental decision-making. (For readers wondering what a “defined decision point” looks like in practice, there’s a toy example just below.)
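To make “specific questions and defined decision points” concrete, here is a minimal sketch of the kind of statistical decision rule the DQO process produces. The action level, the sample results, and the wording of the rule are all hypothetical, my own illustration rather than anything lifted from EPA guidance:

# A hypothetical DQO-style decision rule (my illustration, not EPA's wording):
# "If the one-sided 95% upper confidence limit (UCL) on the mean soil
#  concentration exceeds the action level, take further action;
#  otherwise, no further action."
import math
from statistics import mean, stdev

action_level = 10.0  # mg/kg -- hypothetical cleanup criterion
samples = [4.2, 7.9, 5.5, 12.1, 6.3, 8.8, 3.7, 9.4]  # made-up sample results

n = len(samples)
xbar, s = mean(samples), stdev(samples)  # sample mean and std deviation
t_95 = 1.895  # one-sided 95% Student's t for n - 1 = 7 degrees of freedom
ucl_95 = xbar + t_95 * s / math.sqrt(n)

print(f"mean = {xbar:.2f} mg/kg, 95% UCL = {ucl_95:.2f} mg/kg")
print("Decision: further action" if ucl_95 > action_level
      else "Decision: no further action")

The point isn’t the arithmetic. The point is that the question (“does the mean exceed the action level?”), the data needed to answer it, and the tolerable error rates all get agreed on before anyone mobilizes a drill rig.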
However, I agree that the system is open to abuse: its provision for challenges to data developed by EPA can be used to bog down regulatory decision making. For example, the Perchlorate Study Group’s (PSG) challenge of EPA’s risk assessment for perchlorate included
requests for high-resolution images of slides of brain tissue from rats dosed
with perchlorate, information on conditions under which the slides were stored
and prepared, down to who sliced the tissue sections and what kind of microtome they used. To the naïve eye, this could look like the height of
obstructionism. But these requests may make some sense. According to the PSG,
“As EPA knows, it is widely believed that the neurodevelopmental effects
observed by EPA were artifacts of laboratory errors. This information is
essential because there is a serious danger that differences in tissue
compression during the histology could have created an apparent perchlorate
effect by artifact alone.”
(Hah – so there is some recognition that there could be a neurodevelopmental
effect from perchlorate exposure!)
Since the animal testing data is important in developing the Reference Dose for perchlorate (the basis for the 1 ppb action level in drinking water proposed in 2002; a back-of-the-envelope sketch of that arithmetic follows below), perhaps chasing down artifacts in pathology is important. However, setting aside how much weight the data from that particular animal study should carry (I won’t debate the issue right this minute), the point remains that no one, no one collects perfect data: not the government, not industry, not academia. Some researchers are better than others, but no one is perfect. Without some role for weight of evidence from multiple studies and some tolerance for making decisions and drawing conclusions under uncertainty, all of science would grind to a halt.
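For those who haven’t seen how the animal data connects to the 1 ppb number: a Reference Dose (RfD) is derived from the study’s point of departure divided by uncertainty factors, and the drinking water level then follows from standard default exposure assumptions. Here is a minimal sketch; the RfD value is my reading of EPA’s 2002 draft assessment, and the 70 kg / 2 L-per-day defaults are EPA’s usual adult exposure factors, so treat the numbers as illustrative:

# Back-of-the-envelope: from a Reference Dose (RfD) to a drinking water level.
# The RfD itself comes from the animal study's point of departure divided by
# uncertainty factors -- which is exactly why the rat histology matters.
rfd = 0.00003        # mg/kg-day, RfD from EPA's 2002 draft assessment
body_weight = 70.0   # kg, default adult body weight
water_intake = 2.0   # L/day, default drinking water intake

# Drinking Water Equivalent Level: the concentration at which the default
# adult's daily water intake delivers exactly the RfD.
dwel_mg_per_l = rfd * body_weight / water_intake
dwel_ppb = dwel_mg_per_l * 1000.0  # mg/L x 1000 = ug/L, i.e. ppb

print(f"DWEL = {dwel_mg_per_l:.5f} mg/L, about {dwel_ppb:.2f} ppb")
# -> roughly 1 ppb, which is where the proposed action level comes from

Change the RfD by a factor of ten (say, because you conclude the neurodevelopmental effect was an artifact) and the drinking water number moves by a factor of ten with it. That’s what is at stake in the microtome questions.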
What’s missing from this debate over data quality in regulatory decision making is a reasonable mechanism for making decisions under uncertainty. I’ve struggled with how to articulate this view, but help
has recently arrived in the form of a review published in Environmental Science and Policy.
I’ve only read the abstract (the full article is available for a fee), but it sounds very promising:
“. . . mischaracterization concerning the use of science in the U.S. policy process has led to unreasonable expectations about the role that scientific information can play in the development of environmental and public health policies. This in turn has led to implementation of misguided and self-defeating policy initiatives designed to ensure the objectivity or ‘soundness’ of scientific inputs to the policy process.”
The authors argue that scientific findings cannot be stripped of their social contexts, so that scientific assessment conducted in support of policy making rarely, if ever, lends itself to being labeled “objective” or “non-objective”. Therefore, policy initiatives such as the Data Quality Act, which are intended to assure objectivity, reliability, transparency, etc. etc., are, in the end, exercises in futility. Their prescription for solving the problem is a bit vague (“scientific findings draw much of their validity through the context of their application in policy narratives”), but I think the diagnosis is sound: we’re expecting too much from science in solving environmental problems.
Thanks to Prometheus for the link to the review.