Abstract
Computer simulations are widely used to explore options and quantify outcomes in agriculture. For example, mathematical models have been used to estimate how much of the nitrogen (N) and phosphorus (P) delivered to the Gulf of Mexico comes from agricultural sources (Alexander et al. 2008); to calculate changes in grain yields as management practices are varied (Chen et al. 2014); and to investigate the effects of different precipitation regimes on soil erosion (Nearing et al. 2005). Physical models can be linked with economic and behavioral data to calculate potential costs, such as those of treating or replacing household well water contaminated by nitrates (NO3) as grassland is converted to row crops (Keeler and Polasky 2014). Models can also estimate risk, such as that of crop failure in various climate change scenarios (Challinor et al. 2010). A wide variety of scales and systems can be simulated, from assessing farmers’ behavioral responses to climate change-related policies on a patchwork of single farms (Berger and Troost 2014), to predicting global losses of staple crops due to pest damage in a warmer world (Deutsch et al. 2018).
In fact, in many situations models are the main tools available for answering important questions. While real-world experiments and measurements are immensely valuable (Jenkinson 1991; Parmesan et al. 2018), they often involve high implementation costs and challenging (or impossible) logistics, and future conditions may vary beyond those experienced during the period of experimentation. Instead, models can be used to simulate conditions across time and space, and to examine how different assumptions may lead to different outcomes (Deutsch et al. 2018).
At the same time, the modeling process comes with its own challenges. To produce reliable results, model parameters should be defined and adjusted using information gathered from observations or experiments. Given the difficulties involved with collecting data, such model inputs are often limited. The models themselves are simplified representations of a complex reality, and their output becomes increasingly uncertain the further the model extrapolates beyond existing conditions. Still, models allow investigation of research questions and future scenarios that cannot be addressed by experiment or measurement alone.
Prompted by the widespread reliance on models in many agricultural contexts, combined with our own recent experiences of the challenges and complexities of simulating agricultural systems, in this paper we suggest a set of talking points for discussions between researchers who employ models and those—particularly policymakers (PMs)—who may wish to make use of model-based results.
MODEL USE IN POLICYMAKING
The wide availability of model results means that policy, regulatory, and legal decisions related to agriculture are often informed by work carried out by modeling teams. In Vermont, for example, where New England’s largest freshwater lake regularly experiences harmful algal blooms, US Environmental Protection Agency regulations regarding total maximum daily P loads are based on models of P transport from farms, forest, and urban areas to the lake (Smeltzer 2016; USEPA 2016). Similarly, protocols for reducing nitrous oxide (N2O) emissions from US agriculture rely heavily on modeling (Niles et al. 2019).
Modeling can therefore supply essential information to help craft policies and design incentive structures. At the same time, making good use of academic research (of any kind) in the policy process is not always straightforward. Building upon foundational work by Innvær et al. (2002), Oliver et al. (2014) reviewed 145 studies of factors that facilitate or discourage the use of research results by PMs, mostly in health-related fields. They identified several factors that PMs perceive as contributing to successful use of research evidence in policymaking, including the availability of clearly presented, relevant research, good relationships and communication between researchers and PMs, and the PMs’ own research abilities. In the “agri-food public health” sector, Young et al. (2014) described a gap between the ability of researchers to communicate results and PMs’ abilities to evaluate and interpret research, particularly where uncertainties are concerned.
Specifically relating to modeling, White et al. (2010) presented an early version of a water management simulation tool to an audience consisting of data analysts, consultants, and PMs. They discovered that although the PMs were fairly positive about the model’s credibility, they generally had neutral or critical opinions about its relevance to their decision-making. Inviting opinions at this early stage of the model’s development allowed the programmers to redesign the tool to address the stakeholders’ concerns. More generally, models have been criticized for restricting the range of potential solutions to those that are tractable by modeling, for becoming too complex to properly interpret, and for portraying results as far more certain than they can possibly be, among other charges (Saltelli and Funtowicz 2014; Saltelli and Giampietro 2017; Pindyck 2013). In addition, Tonitto et al. (2018) argue that models are increasingly being applied by users who may not fully appreciate their complexities.
There is certainly some truth to those criticisms. Nonetheless, as models allow the exploration of problems and scenarios that are difficult to tackle in other ways, it is worth understanding how to use them appropriately and effectively in policymaking. White et al. (2010) showed that involving the end users early in the modeling exercise can yield good results, but there is room for improving communication between researchers and PMs, and for better conveying the scope, limitations, and uncertainties associated with modeling, at later stages as well. In the hope of making a contribution in this area, we would like to share some of our own experiences in simulating agricultural systems and obtaining results that are potentially relevant to policy.
In a recent paper (Mason et al. 2021), we modeled runoff, erosion, crop yields, and nutrient (N and P) losses from two dairy farms in a handful of hypothetical future climates. Using that work as a case study, here we briefly describe some of the difficulties and limitations encountered along the way, and the possible implications of the work for local agricultural/environmental policy. To help facilitate dialogue between modelers and PMs in future work, we then present a list of topics that we suggest both parties work through together, either during the design phase of a policy-relevant modeling project, or as a PM considers whether and how to incorporate existing model results in their decision-making process.
MODELING DAIRY FARMS IN VERMONT
The modeling in Mason et al. (2021) was carried out using the Agricultural Policy/Environmental eXtender (APEX) model. As its name suggests, APEX is, among other things, explicitly a policy support tool. According to Gassman et al. (2010), APEX was initially developed to support the Livestock and Environment National Pilot Project, which aimed to determine “technologies, management methods, policies, and institutions” related to the environmental impacts of animal production (Jones et al. 1993). Essentially, the model takes information about weather, site, and soil, performs user-specified farm operations, and calculates how water and nutrients move and how various soil and crop properties change over time and as farm activities occur. Depending on the inputs and operations specified by the user, APEX outputs can include quantities such as the amount of soil eroded by wind and water, the amount of N and P exiting the simulation area in various forms, and annual yields of one or more crops.
We used APEX to simulate runoff, sediment, N and P losses, and crop yields for two small watersheds on Vermont dairy farms growing silage corn (Zea mays L.) in four future climate scenarios. The steps involved in creating the models, and the limitations and sources of error introduced at each step, are summarized in table 1. To give a sense of the “behind-the-scenes” workings of the modeling process, in the following paragraphs we explain a few of the steps in more detail. Full, technical explanations can be found in companion publications (Mason 2019; Mason et al. 2020).
It has been argued that APEX produces reliable results only when calibrated using suitable field data (Baffaut et al. 2017; Ramirez-Avila et al. 2017), so we decided to model farms that had taken part in a recent water quality monitoring project (Braun et al. 2016). That project provided a rich set of soil test data, farm management records, and runoff, soil loss, and nutrient export measurements. This information allowed us to mimic the actual farm management and tune the model’s parameters so that realistic values of runoff and other outputs could be obtained. However, collecting data in the field is difficult, and Braun et al. (2016) note a number of issues ranging from icing in the flumes that collected the runoff to miscommunications with the farmers involved in the study. We calibrated APEX using the best available data, but the best available data are not perfect.
The calibration process itself can be carried out manually or automatically (Wang et al. 2014). The advantage of automatic calibration is that it allows a much wider search of parameter space. (APEX has hundreds of parameters that govern everything from how fast nutrients become available in the soil to how efficiently plants use sunlight.) It may be possible to find a model that performs better than the best one found in a limited manual search, and it may also be possible to use the models run during the automatic calibration process to estimate uncertainties by establishing how the choice of parameter values affects the model results. Automatic calibration software is now available for APEX (Wang et al. 2014), but it requires field data obtained on daily, monthly, or annual timescales, which was not the case with our on-farm data. Because of this, we performed a careful manual calibration, testing a much smaller number of parameter combinations.
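The contrast between a limited manual search and an automatic one can be sketched with a minimal random-search calibration loop. The toy rainfall-runoff model, the parameter names (`curve`, `losses`), their bounds, and the sample data below are all invented for illustration; they are stand-ins for APEX's hundreds of parameters, and a real automatic calibration would drive the model executable itself and score candidates with standard goodness-of-fit metrics rather than a raw sum of squared errors.

```python
import random

# Hypothetical paired field data: monthly rainfall and observed runoff (mm).
RAIN = [20.0, 40.0, 13.0, 60.0, 27.0]
OBSERVED = [14.0, 28.0, 9.0, 41.0, 19.0]

def toy_model(rain, curve, losses):
    """Stand-in for a process model: runoff scales with rainfall via a
    'curve' parameter, minus a fixed 'losses' term (floored at zero)."""
    return [max(r * curve - losses, 0.0) for r in rain]

def sum_sq_error(obs, sim):
    """Simple objective: sum of squared differences, observed vs. simulated."""
    return sum((o - s) ** 2 for o, s in zip(obs, sim))

def random_search(n_trials=5000, seed=42):
    """Sample parameter sets within plausible bounds and keep the best.
    A manual calibration would test only a handful of such combinations;
    an automatic search can try thousands."""
    rng = random.Random(seed)
    best_err, best_params = float("inf"), None
    for _ in range(n_trials):
        params = {"curve": rng.uniform(0.1, 1.0),
                  "losses": rng.uniform(0.0, 5.0)}
        err = sum_sq_error(OBSERVED, toy_model(RAIN, **params))
        if err < best_err:
            best_err, best_params = err, params
    return best_err, best_params

err, params = random_search()
print(f"best error {err:.2f} with parameters {params}")
```

Because every trial's parameters and error are retained (or could be), the same sampled ensemble that locates the best-fitting parameter set can also be reused to examine how sensitive the outputs are to the parameter choices, which is the uncertainty-estimation idea mentioned above.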
In common with many other APEX studies (Ramirez-Avila et al. 2017; Gassman et al. 2010), the calibrated models performed relatively well for runoff, somewhat less so for erosion/sediment, and had mixed results for N and P losses. Guided by recommendations in the modeling literature (Moriasi et al. 2015), we used two statistical measures, percentage bias (PBIAS) and the Nash-Sutcliffe efficiency (NSE), to assess the performance of APEX at each calibration step. These metrics allowed us to evaluate our models relative to other published models, but they say little about whether any model is accurate enough for its intended purpose (it will often be acceptable for a model used for an initial exploration of scenarios to be less accurate than one aimed at resolving a legal matter, for example). Errors in model results arise from a number of sources including inaccuracies in the data (such as rainfall numbers) fed to the model, mismatches between real physical processes and the equations used to represent them, and inappropriate values of parameters used in the model (Guzman et al. 2015). Tools and methods for estimating uncertainties for APEX and other models are being developed (Wang et al. 2015) but are not always readily accessible to the general user.
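Both goodness-of-fit measures are straightforward to compute from paired observed and simulated series, following their standard definitions (e.g., as tabulated by Moriasi et al. 2015). The sketch below is illustrative only; the sample runoff numbers are invented and do not come from the study.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the
    model predicts no better than the mean of the observations."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(observed, simulated):
    """Percentage bias: 0 is ideal; with this sign convention, positive
    values indicate the model underpredicts on average."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# Invented example: monthly runoff (mm), observed vs. simulated.
obs = [12.0, 30.5, 8.2, 45.1, 20.0]
sim = [10.5, 33.0, 9.0, 40.2, 22.5]
print(f"NSE = {nse(obs, sim):.3f}, PBIAS = {pbias(obs, sim):.1f}%")
```

Whether a given NSE or PBIAS value counts as "good" is a separate judgment that depends on the output variable, the timescale, and, as noted above, the intended use of the model.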
In this study we placed more emphasis on careful calibration than on expanding the scope of the modeling. This meant that we simulated only two farms with rather similar soils, management practices, etc. Perhaps encouragingly, the model results for both farms were also fairly similar. On the other hand, it is not certain that the results can be extrapolated to other crops and farming systems (e.g., hay, rotational grazing) on land with other characteristics (e.g., soil types). Similarly, we modeled just four possible future climates. While it is quite certain that temperatures will continue to rise in the next few decades, the amount, timing, and intensity of future precipitation are much less well known, and modeling other possible future climates is an exercise for future work.
In the end, we found that most of the temperature and precipitation scenarios selected for modeling had fairly modest effects on agricultural outcomes. The exception was the scenario with more intense precipitation, where the models suggested more, larger runoff/erosion/nutrient loss events, although with less pronounced effects on corn yields. Best management practices like cover cropping and reduced tillage can in principle reduce runoff, erosion, etc., but it is possible that they may be overwhelmed by the large events that our models suggest will occur if rainfall intensity continues to increase. A policy implication of this work, then, could be that resources should be invested in researching how best management practices can be implemented in order to withstand extreme events, and in encouraging their adoption. However, given the limitations discussed above, we feel that this modeling work should not be used in isolation but as one piece of information in conjunction with other supporting evidence.
DISCUSSION POINTS FOR MODELERS AND POLICYMAKERS
Based on the experiences described in the previous section, we offer a set of talking points for conversations between modelers and PMs. Table 2 is worded from the perspective of a PM reviewing existing model results, but its contents could be adapted for use earlier in the modeling process. We appreciate that PMs do not need or want to know all the details behind model results, and that researchers may not always be able to fully answer all the questions in the table. In addition, some of the questions are quite subjective, and judgment must be exercised when interpreting the answers. Nevertheless, bearing these issues in mind may assist researchers and PMs in coming to a common understanding of how (or whether) a given set of model results can be used to inform a policy or regulation, or in adapting a model or piece of model-based research to meet PMs’ needs.
In summary, models are increasingly common in agricultural research, yet they have limitations that need to be understood by PMs and acknowledged by researchers. The topics of creating better models, improving communication of scientific results, and incorporating scientific evidence in policymaking are active fields of research in their own right. In the meantime, we hope that the experiences we describe in this paper, and the talking points we suggest, serve as a useful and accessible complement to more formal and detailed guidance for researchers and PMs.
ACKNOWLEDGEMENTS
We would like to thank Stephen Posner for helpful comments during the development of this paper. The authors gratefully acknowledge support from the Gund Institute for Environment, Extension Center for Sustainable Agriculture, and College of Agriculture and Life Sciences at the University of Vermont. This project was also funded by National Institute of Food and Agriculture Award #: 2015-67020-23180.
Received September 17, 2019.

© 2021 by the Soil and Water Conservation Society