PUBLIC COMMENT ON THE CRE DRAFT DATA QUALITY RULE

Commenter: Peter Voytek, Ph.D.

Affiliation: Regulatory Science International

Comment:

June 8, 2000

The Center for Regulatory Effectiveness (CRE)
11 Dupont Circle, N.W., Suite 700
Washington, DC 20036

Re: Draft Standards for Agency Regulations Governing Data Quality

While at EPA, I served as the Director of the Agency's Reproductive Effects Assessment Group and Deputy Director of the Office of Health and Environmental Assessment, with responsibility for developing the first Federal guidelines for assessing mutagenic, reproductive, and developmental risks. These guidelines are still in use to establish exposure levels for environmental agents that protect public health. The process used in developing them was quite extensive and involved establishing core panels of scientists with state-of-the-art expertise in the various fields relevant to assessing both qualitative and quantitative toxicity. The draft guidelines were first reviewed by the panel members and later by additional scientists in academia, industry, and government. The guidelines were also reviewed internally by EPA's scientists, by EPA's Science Advisory Board, and by the public through publication in the Federal Register before being finalized. These guidelines were published in 1986.

Although scientifically sound testing protocols, procedures, and practices, coupled with scientifically sound assessment guidelines, are essential, it is equally important to assure the quality of the test data, the interpretation of the results from the test data, and the use of the test data. To have scientific credibility, any test data intended for regulatory use need critical review and comment, with the reviews made available to the public. Otherwise, the data may be misinterpreted and lead to incorrect conclusions. I have provided two examples below to illustrate how deficiencies in the interpretation of data used in regulatory risk assessments can lead to erroneous decisions and how to avoid these errors through appropriate quality control procedures.

As a first example, an in vitro mammalian mutagenicity screening test on a halogenated alkane was recently submitted to EPA as a Toxic Substances Control Act Section 8(e) notice. The test protocol, procedures, and laboratory practices were carried out correctly, and the results were presented as positive, meaning that the halogenated alkane showed mutagenic activity in mammalian cells. Without additional analysis, one would conclude that the chemical was a potential carcinogen and consequently a potential health risk to humans. However, the positive findings were very weak and occurred only at high levels of toxicity to the test cells. Furthermore, the mutations scored were the same as spontaneous mutations, another indication that cellular toxicity, not the direct interaction of the chemical or its metabolites with DNA, caused the small increase in spontaneous mutation frequency. In addition, mutation studies from other test systems were reported as negative. Without more extensive analysis, a reader would draw an incorrect, misleading conclusion from the report; in this particular case, a false indication that human exposure to the chemical might lead to cancer. To avoid such misinterpretation, data submitted to regulatory agencies should be accompanied not only by a scientifically sound assessment of the protocols and procedures used, but also by an interpretation of the findings' relevance to human health risk.

As a second example, on January 10, 1997, the Occupational Safety and Health Administration (OSHA) published its standard for methylene chloride. To OSHA's credit, it allowed extensions of deadlines to obtain and evaluate new data, and it held public meetings at which interested parties could submit new findings and comment on new studies. However, OSHA set the standard internally, using unsound biological assumptions and quantitative risk assessment procedures that were never peer reviewed. The biological assumptions and quantitative risk assessment procedures were developed by contractors to OSHA and completed at least a year before the final standard was published in the Federal Register. The contractor reports included hundreds of pages of statistical calculations and assumptions, which were made available to the public through the OSHA docket only after the final rule was published. OSHA claimed that its risk assessment procedure had been peer reviewed by other agencies within the government. However, this review entailed only a presentation of an hour or less by OSHA staff to unidentified scientists from other regulatory agencies. The "peer review" was not open to the public. Given the nature of the risk assessment and the extensive statistical manipulations of the data, no serious peer review could have taken place after an hour of oral presentation; certainly, the statistical procedures could not have been validated. Furthermore, there is no evidence that the persons present were given copies of the risk assessment for analysis, nor did OSHA ever obtain any written peer review comments as a result of the presentation, since none were included in the public docket either before or after publication of the final standard.

In developing the final methylene chloride standard, OSHA elected to rely on experts with statistical skills but without the appropriate biological expertise, and with potential scientific biases. Their work was shielded from open peer review by recognized risk assessment experts and from scrutiny by the public. OSHA's methylene chloride standard thus provides an example of a failure to assure and validate the quality of data as interpreted, in this case using dubious biological assumptions and risk assessment procedures.

As knowledge of molecular biology advances, applying it to develop more precise estimates of human health risk becomes more demanding. Teams of experts with diverse expertise, rather than individual risk assessors, should be used in assessing risk. It is unlikely that agencies like OSHA have individuals possessing the different but necessary kinds of expertise; agencies may even lack the ability to identify appropriate contractors to assist them in conducting a risk assessment. Therefore, as part of CRE's draft rule, each agency might form a diversified data interpretation panel consisting of experts from academia with the appropriate capabilities. Similarly, when data will have important regulatory consequences, a circumstance CRE's draft rule already recognizes, the process should solicit public input and give stakeholder communities the opportunity to comment on agency interpretations of data during open meetings before a final rule is issued. These additional procedures would recognize that data interpretation is part of data quality; they would help agencies avoid internal biases and would result in regulations with greater scientific credibility.

Alternatively, an appeals process could be established under OMB, through which the interpretation of data, as part of data quality, could be challenged. The differences between data and summary statistical interpretations of data are often subtle and poorly understood by the regulated public. Such an appeals process would involve the stakeholder(s) criticizing the interpretation of data, as part of data quality, with the regulatory agency defending the quality of its interpretation; OMB would receive these arguments and render a decision. Yet an OMB appeals process suffers from several drawbacks. If OMB found the quality of an agency's data inadequate, the regulatory process would call for additional studies or reinterpretation of the existing studies; the resources used to base a standard on poor-quality data would have been wasted, and new costs for generating the appropriate data would be incurred. In addition, OMB officials might lack the appropriate expertise and could be perceived as having political biases. To address this, OMB might establish an expert panel, similar to the one suggested above, to review appeals and the comments of regulatory agency technical staff. The panel could then present recommendations to OMB for final action. The use of a panel would provide needed scientific credibility and would help eliminate biases introduced by both regulatory and stakeholder scientists.

Peter Voytek, Ph.D.
Regulatory Science International