“A plea for consistency, transparency, and reproducibility in risk assessment effect models”
Valery E. Forbes, Amelie Schmolke, Chiara Accolla, and Volker Grimm authored the above-captioned article, which was published in Environmental Toxicology and Chemistry. It reads in part as follows:
“Ecological risk assessments (ERAs) are moving toward using populations and ecosystem services as explicit protection goals, and impacts on these goals are difficult, if not impossible, to measure empirically. Mechanistic effect models are recognized as necessary tools for ERA that complement empirical data (National Research Council 2013; European Food Safety Authority 2014), but we need a strategy to make them consistent, transparent, and reproducible following principles similar to those used to develop standardized experimental designs for empirical tests, while recognizing that the models should be allowed to evolve as understanding, data availability, and ERA questions change over time.
Since the early 2000s there have been multiple initiatives with the aim of increasing the use of mechanistic effect models in ERA (Grimm et al. 2009; Thorbek et al. 2010; Hommen et al. 2015; Forbes et al. 2017). In 2013, the US National Academy of Sciences recommended mechanistic (population) models to assess the risks of pesticides to threatened and endangered species listed under the US Endangered Species Act (National Research Council 2013), and in 2014, the European Food Safety Authority published guidance for good modeling practice for pesticide risk assessment in the European Union (European Food Safety Authority 2014). Numerous publications based on these initiatives have compiled information on available modeling tools and provided recommendations on model development, documentation, and evaluation.
Despite this progress, the use of mechanistic effect models in ERA remains rare. Although some general guidance exists, the ERA community lacks a coherent strategy for model design and implementation. Models continue to be developed on an ad hoc basis, often from a single sector, without deep engagement of the broader community. This applies both to academics producing models without consulting regulators and to regulators launching calls for models without consultation across the modeling community (European Food Safety Authority 2016).
In addition, model design is often more determined by a modeler’s experience, skills, and preferences than by the questions to be addressed, the data needed, and the role of the model in the regulatory framework. An important consequence is that evaluation of the models is beyond the available time, expertise, and resources of regulatory authorities. What is needed are consistent standards for model design that reflect the consensus of the regulators, the regulated, and the model developers and that make the models feasible to evaluate. By analogy, most chemical toxicity tests follow nationally or internationally accepted standards that define how the tests should be conducted, what variables to control, and what criteria to fulfill for the tests to be considered valid. In contrast, there are no widely accepted criteria for model design, complexity, or validity. Some models take many years and much effort to develop, but are never accepted for regulatory use because of lack of transparency. Others are considered to be oversimplified and not used for this reason. Given this lack of consistency and transparency, regulators are understandably hesitant to consider models as a main source of evidence on which to base ERA decisions. The lack of standardization clearly impedes the extent to which mechanistic models are used, wastes valuable resources, and reduces the degree to which risk management decisions are informed by mechanistic and quantitative understanding of key biological relationships.
The need for robust mechanistic models is becoming ever more pressing as efforts are increasing to explicitly extrapolate effects of chemicals and other stressors measured in laboratory or mesocosm settings to consequences for populations in the field and ecosystem services. Moreover, there is growing focus in ERA on using data produced from high‐throughput techniques that are even further removed from ecological protection goals than traditional organism‐level responses. For such tools to ever be of practical use for ERA, robust mechanistic models will be needed to quantitatively link them to relevant ecological endpoints.
In our view, what is needed now is a strategy for the consistent and transparent design of mechanistic effect models for ERA. The strategy needs to be compatible with different legislative needs, recognize limitations in data and resources, and involve all stakeholder groups to ensure buy‐in (Figure 1). We suggest that a key step would be the creation of a classification, or taxonomy, of models that is related to specific regulatory needs. The primary feature of this taxonomy would be categorization of the models’ specific purpose within the regulatory context, which would include the modeled species and ecosystem interactions, the endpoints to be assessed, spatial and temporal scales, geographical region, and the required level of evaluation and validation. The different classes of models could be described using both a standardized written format similar to the overview part of the widely used Overview, Design Concepts, and Details protocol for agent‐based models (Grimm et al. 2010), and graphical representations of the models’ structure, causal relationships (influence diagrams), and scheduling of processes.”
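To make the proposal more concrete, one way to picture an entry in the taxonomy the authors describe is as a structured record whose fields correspond to the categorization dimensions they list (regulatory purpose, modeled species and interactions, endpoints, scales, region, and required level of evaluation). The following Python sketch is purely illustrative: the class name, field names, and example values are assumptions made here for clarity, not terms defined in the article.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only. Field names are hypothetical and simply mirror the
# categorization dimensions listed by Forbes et al. for a model taxonomy.
@dataclass
class ModelTaxonomyEntry:
    regulatory_purpose: str            # specific question the model addresses in the regulatory context
    species: List[str]                 # modeled species
    ecosystem_interactions: List[str]  # interactions represented (e.g., competition, predation)
    endpoints: List[str]               # endpoints to be assessed (e.g., population abundance)
    spatial_scale: str                 # e.g., "field margin", "landscape"
    temporal_scale: str                # e.g., "one growing season", "20 years"
    geographic_region: str             # region to which the parameterization applies
    evaluation_level: str              # required level of evaluation and validation
    documentation: List[str] = field(default_factory=list)  # e.g., ODD overview, influence diagrams

# Hypothetical example entry; all values are invented for illustration.
example = ModelTaxonomyEntry(
    regulatory_purpose="Pesticide risk to a listed bird population",
    species=["hypothetical grassland passerine"],
    ecosystem_interactions=["food availability", "habitat use"],
    endpoints=["population growth rate", "time to recovery"],
    spatial_scale="agricultural landscape",
    temporal_scale="20 years",
    geographic_region="upper midwestern United States",
    evaluation_level="validation against independent field data",
)
print(example.regulatory_purpose)
```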
Click here to read the entire article, including figures.