In the United States, there is growing interest in the participatory development of agricultural and natural resource–focused decision support tools (DSTs). To provide greater insight for practitioners developing these DSTs, we conducted a review of manuscripts (n = 23) that describe DSTs in US agricultural and forestry sectors, both those designed through participatory processes and otherwise. Our work operationalizes a novel conceptual framework developed to support participatory DST development, as recent scholarship suggests participatory processes lead to better adoption and use of DSTs. Our analysis suggests that tool developers should, in reporting on their efforts, more clearly articulate the ways decision makers are included in DST development, from problem identification through evaluation. Failure to do so limits our collective understanding of the utility of these tools. Following our review, we present recommendations for DST developers and other practitioners who want to support effective and transparent development of stakeholder-driven DSTs. We propose practitioners (1) implement complete assessments of the relevant stakeholder network(s) that might use new DSTs; (2) engage stakeholders iteratively throughout the development process; (3) improve evaluation of DSTs, including an assessment of the usability, usefulness, and usage of tools across their life cycle; and (4) describe the process of stakeholder engagement in published work on these tools. These recommendations are designed to empower future DST developers to leverage the power of participation and, by extension, improve land management decision making and resource conservation.
THE NEED FOR DECISION SUPPORT TOOLS
In the United States, DSTs available to aid land managers’ and landowners’ decision making have proliferated since the early 2000s (Moser 2009). DSTs are intended to help decision makers explore various “scenarios and available options and anticipate the potential risks and gains associated with them” (Roncoli et al. 2006). Tools are typically geared toward improving social, economic, and ecological management outcomes and are designed primarily by university-based researchers, federal and state management agencies, and private companies. While there is general interest in developing tools that provide meaningful, accessible, and effective decision support for various stakeholders, the processes by which effective agricultural and natural resource management tools are developed and deployed are poorly understood. As Cabrera et al. argue, “many models never become tools used by stakeholders because they do not adequately meet their felt needs and because they are not user friendly” (2008). We argue that greater stakeholder involvement in both the research and outreach stages of tool development can improve the use and effectiveness of DSTs.
This recent proliferation in DSTs, particularly those supported by USDA and other federal agencies, is due in part to the recognition that land managers, including farmers, ranchers, and foresters, face many decisions in the context of managing for productivity and other sustainability goals. Our team initiated this effort to review and analyze DSTs while working as fellows with the USDA Climate Hubs, where we observed the need for resources to support the development of DSTs, and for agency personnel to better assess the potential efficacy and utility of existing and proposed tools. In the following section, we outline our novel conceptual framework that explores an iterative participatory approach for DST development, including recommended key activities for practitioners. We subsequently share an analysis that operationalizes the conceptual framework with relevant literature on DSTs. Finally, we provide a set of overarching recommendations and guiding questions that practitioners can use in future DST development and assessment.
DEVELOPING DECISION SUPPORT TOOLS WITH STAKEHOLDERS: A CONCEPTUAL FRAMEWORK
For the purposes of this analysis, we synthesized existing literature to construct a conceptual framework of principles and best practices in developing DSTs focused on four major components, or phases, of design: (1) stakeholder identification and assessment, (2) problem identification, (3) design and deployment, and (4) evaluation and reflection (figure 1). We propose that stakeholder engagement occurs throughout the tool development process and thus is a component of all four phases. The following sections provide a short definition, the role of stakeholder engagement in each phase, and key activities that should be undertaken during that phase.
STAKEHOLDER IDENTIFICATION AND ASSESSMENT
Definition. Stakeholder identification and assessment is the process of developing an understanding of those who are affected by an issue (Scheffran 2006) and involves differentiating between and categorizing stakeholders as well as understanding relationships between them (Reed et al. 2009). This process is related to, but distinct from, “stakeholder engagement,” which is ongoing throughout participatory design: identification and assessment is a discrete activity that maps the constellation of both known and unknown stakeholders who might be interested in and affected by a shared problem. In the context of our review, stakeholders may include individual landowners, industry, and advocacy organizations; they may act at local, regional, national, or international scales. A stakeholder assessment process may be formal (e.g., a full empirical analysis that identifies people’s interests and how they interact) or informal (e.g., learning about needs, views, and experiences by talking to potential users of a new tool at a booth at a conference).
Stakeholder Engagement. Understanding the needs and concerns of people with a diverse range of viewpoints strengthens the capacity of a tool to inform a wider audience and reduces the chance of perpetuating biases through tool design. However, when stakeholders are identified on an ad hoc basis, there is a risk that the process of stakeholder engagement can marginalize potential user groups and limit the success of the project in the long term (Reed et al. 2014). Multiple forms of collaborative engagement in research projects, of which DST development might be just one aspect, exist along a continuum of involvement and integration. According to Meadow et al. (2015), this continuum ranges from “no engagement,” to “contractual” engagement, where information flows unidirectionally from researcher to stakeholder, to “consultant” engagement, where involvement is limited to certain phases or points of the project. The final two stages are “collaborative,” where stakeholders work in partnership with researchers but may have limited involvement in the scientific process, and “collegial,” where the process is stakeholder-driven and incorporates multiple evidence-based approaches to knowledge generation, including indigenous, local, and scientific knowledge systems.
Key Activities. We encourage practitioners to conduct a formal stakeholder assessment and to integrate social science expertise. Assessment can include qualitative and/or quantitative data collection that highlights the perspectives of many stakeholders and the network connections between them. Data collection methods could include inviting groups to a public comment session, distributing mailings and community surveys, hosting a booth at a relevant community event, or contacting stakeholders for short interviews. The choice of methods should be culturally relevant and respectful of local contexts while reflecting current research best practices.
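Where a quantitative assessment is undertaken, even a simple summary of who works with whom can help flag central and peripheral stakeholder groups. The sketch below is a minimal, hypothetical illustration of this idea; the stakeholder names and ties are invented for the example and are not drawn from any tool reviewed here.

```python
# Minimal, hypothetical sketch of a quantitative stakeholder network assessment.
# Stakeholder names and ties are invented for illustration only.
import networkx as nx

# Each edge represents a reported working relationship between two stakeholder groups.
ties = [
    ("extension_agent", "row_crop_farmers"),
    ("extension_agent", "county_conservation_district"),
    ("row_crop_farmers", "crop_consultants"),
    ("county_conservation_district", "state_forestry_agency"),
    ("state_forestry_agency", "private_forest_landowners"),
    ("crop_consultants", "agribusiness_retailers"),
]

G = nx.Graph()
G.add_edges_from(ties)

# Degree centrality flags well-connected groups; betweenness flags potential brokers
# whose buy-in may be critical for a tool to reach more peripheral users.
degree = nx.degree_centrality(G)
broker = nx.betweenness_centrality(G)

for node in sorted(G.nodes, key=lambda n: -degree[n]):
    print(f"{node:32s} degree={degree[node]:.2f} betweenness={broker[node]:.2f}")
```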
PROBLEM IDENTIFICATION
Definition. Problem identification is the process of carefully selecting which point(s) of view a DST will address and what management problem (e.g., reducing pesticide drift or improving nutrient management) it will seek to solve. This process should consider the risks and limitations of the decision, the spatial and temporal context of the decision, organizational decision-making roles, the extent of the problem, and what potential conflict exists as a result of the problem/solution (MacEachren and Brewer 2004).
Stakeholder Engagement. Stakeholder engagement in problem identification requires facilitating a meaningful feedback loop between stakeholders and DST designers. “Participation of potential users in the assessment of the tool [even at early stages] enables researchers to enrich the models that inform the DSTs by including subjective sources of knowledge in addition to the objective knowledge derived from theories and empirical studies” (Cabrera et al. 2008).
Key Activities. During this stage, the tool development team should consider competing perspectives on the problem and use observations, data review, and public and key stakeholder input to clearly define the motivations for tool development and the specific decision(s) the tool will inform for users. For example, we suggest hosting listening sessions or informally gathering input from key stakeholders at community meetings. Conducting a more formal problem identification effort using the Delphi method, which is used to arrive at a group consensus or opinion on a core issue (Landeta 2006), might be valuable if there is a great deal of controversy or debate regarding what the problem is, its origins, and whose responsibility it is to address it (e.g., point source versus nonpoint source water pollution control measures).
DESIGN AND DEPLOYMENT
Definition. The design and deployment stage of the framework encompasses both technical software design as well as operational considerations such as funding, staffing, maintenance, and training. Many scientists may consider this component of DST development to be the most critical aspect of the process (Stone and Hochman 2004) and the step that creates a functional product for decision makers to engage with. However, it is common for stakeholders to be left out of this part of the DST development process. This can lead to tools that are mismatched to their intended audience, either in terms of the technical skills needed to use the tool or other design features that limit adoption.
Stakeholder Engagement. Deploying tools involves more than developing a user-friendly interface. Deployment is often more successful when developers include purposeful workshops that facilitate social learning, through which “participants are led to an improved understanding of a problem and its context through interactions and shared learning” (Lacoste and Powles 2016). Prototyping is also critical in the design and deployment phases (Breuer et al. 2008). Prototyping activities allow tool designers to understand nuances in how users approach a tool interface or workflow before a product is finalized.
Key Activities. Best practices in human-centered software design emphasize the importance of iteratively engaging stakeholders throughout design and deployment, and sometimes during redesign (Lacoste and Powles 2016; Prokopy et al. 2017) to ensure that a tool is usable from both a functional and problem-solving perspective. This can be done through beta testing, focus groups, or other virtual or in-person prototyping events where “end users” get to interface with a tool and troubleshoot problems and/or provide substantive feedback regarding the utility and usability of the tool.
EVALUATION AND REFLECTION
Definition. While evaluations might focus on any aspect of tool design, they often encompass three primary types of assessment: (1) the usability of the tool, or how easily users can accomplish the task(s) for which the tool was designed (such as navigating to find specific information); (2) the usefulness of the tool, or how well the tool addresses the real-world decision challenges users face; and (3) the usage of the tool, or the extent to which the tool is used by intended stakeholders (Tsakonas and Papatheodorou 2006). Any of these goals, or many others, can be addressed through formative assessment, conducted during the learning process, or summative assessment, conducted after the learning process has occurred following deployment of the DST.
Stakeholder Engagement. Stakeholder engagement processes are a critical element of evaluation. Assessing the usability and usefulness of a tool is integral to the process of successful knowledge production and behavioral modification, which ideally requires an iterative knowledge exchange among scientists, tool developers, and users (Dilling and Lemos 2011). In short, the effectiveness of decision support should be assessed by how well it is able to increase the probability that decision-relevant information supports and facilitates decision making (NRC 2008).
Key Activities. We encourage practitioners to develop an evaluation plan at the outset of a project, which might include hiring internal or external evaluators who can help design evaluation metrics around the stated goals of the evaluation (e.g., evaluations can include usability, usefulness, and usage metrics such as the number of unique users, the number of hits on the relevant host website, or the number of shares on social media). Research teams, end users, and/or outside evaluators may be involved in evaluations at various points in time and in various capacities, collecting and evaluating data and/or applying lessons from assessment to tool design. Data sources may include primary data from pre- and post-surveys, user feedback questionnaires, interviews or focus groups, in-depth case studies, and team reflexive practice.
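As a simple illustration of the usage side of such a plan, the sketch below computes basic usage metrics (unique users and page views per month) from a hypothetical web analytics export; the file name and column names are assumptions for the example, not a prescribed format, and usability and usefulness would still require direct user feedback.

```python
# Minimal sketch of computing usage metrics from a hypothetical analytics export.
# The file name and columns (user_id, timestamp, page) are assumptions for illustration.
import pandas as pd

logs = pd.read_csv("dst_usage_log.csv", parse_dates=["timestamp"])

monthly = logs.groupby(logs["timestamp"].dt.to_period("M")).agg(
    unique_users=("user_id", "nunique"),  # usage: how many distinct users each month
    page_views=("page", "count"),         # usage: how often tool pages are opened
)
print(monthly)

# Usability and usefulness generally require direct user feedback (surveys,
# interviews, focus groups) rather than log data alone.
```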
EXPLORING THE DECISION SUPPORT TOOL LANDSCAPE
To better understand the process and prevalence of participatory DST development, we operationalized the conceptual framework described above through an assessment of peer-reviewed literature. Specifically, we reviewed scholarly manuscripts published between 2008 and 2018 that addressed DSTs in the context of US agriculture and forestry. First, we developed a list of search terms to identify DSTs designed for the agricultural (including livestock and grazing land) and forestry sectors. We used the Web of Science search engine because it provides sufficiently comprehensive coverage of the topics of interest and supports machine-readable export of search results for analysis. Given the diversity of fields that use DSTs (e.g., health and manufacturing), the majority of articles returned discussed topics outside our areas of interest or were removed under our exclusion criteria. We ultimately included 23 DST papers relevant to our geographic and topical focus (see supplemental table 1). It should be noted that this review is not intended as a comprehensive treatment of the literature. We used the conceptual framework described in the previous section to guide our coding protocol, describing each paper’s methodology as well as how the authors addressed, or failed to address, key aspects of the framework (stakeholder assessment and engagement, problem identification, design and deployment, and evaluation).
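A first-pass screening step of this kind can be scripted for transparency. The sketch below is a hypothetical example of filtering a tab-delimited bibliographic export by publication year and topic keywords before manual coding; the field tags (TI, AB, PY) follow a typical Web of Science export, but the keyword list is illustrative and is not our exact search string.

```python
# Hypothetical sketch of a first-pass screen of exported bibliographic records.
# Field tags (TI = title, AB = abstract, PY = publication year) follow a typical
# Web of Science tab-delimited export; keywords are illustrative only.
import pandas as pd

records = pd.read_csv("wos_export.txt", sep="\t", dtype=str)
records["PY"] = pd.to_numeric(records["PY"], errors="coerce")

topic_terms = ["decision support", "agricultur", "forest", "grazing", "livestock"]
text = (records["TI"].fillna("") + " " + records["AB"].fillna("")).str.lower()

in_window = records["PY"].between(2008, 2018)          # publication window
on_topic = text.apply(lambda t: any(term in t for term in topic_terms))

candidates = records[in_window & on_topic]
candidates.to_csv("candidates_for_manual_coding.csv", index=False)
print(f"{len(candidates)} records retained for manual screening and coding")
```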
How Are Stakeholders Being Identified and Their Needs Assessed? We analyzed the selected papers to determine whether DSTs were requested by potential end users and whether the authors described the network of stakeholders who were users or potential users of their tool. Nearly half of the papers justified the tool by describing the problem it was designed to address rather than by explicitly describing stakeholder demand for the tool. While this does not exclude the possibility that end users contributed to problem identification, it was not evident from the text that this was the case. In contrast, several publications describing tools designed to support fire management decision making, primarily for the US Forest Service (Calkin et al. 2011a, 2011b; Drury et al. 2016; Ryan and Opperman 2013; Thompson et al. 2015), were often explicit in their description of the need for the tools as articulated by the end user. Perhaps because many forest management DST developers were agency employees themselves, they were able to articulate end users’ needs more effectively than others.
All but one paper indicated the audience for whom the DST was intended, and just over two-thirds described stakeholder engagement at some point in the development process. We coded articles along the continuum of engagement described above, from no engagement to contractual, consultant, collaborative, and collegial (Meadow et al. 2015). Only one article was coded as “contractual,” around a quarter each were coded as “consultant” and “collaborative,” and three were coded as “collegial.” The remaining articles, roughly 30% of the total, did not describe stakeholder engagement and were categorized as “unclear” by our team owing to the lack of available information.
There was a great deal of inconsistency in how authors across all articles described their DST development and stakeholder engagement methods. In many cases, authors implied that they had engaged stakeholders, but did not provide further information (Calkin et al. 2011a, 2011b; Hunt et al. 2016; Ryan and Opperman 2013; Thompson et al. 2015). In other cases, more details were provided. For instance, Breuer et al. (2008) and Templeton et al. (2014) both describe the suite of AgroClimate Tools developed by the University of Florida. In these papers, the authors clearly described multiple modes of iterative engagement with target end-users in the development of the tools, including methods such as Sondeo surveys, focus groups, and regional workshops with relevant stakeholders who were connected to the University of Florida Cooperative Extension Service.
How Are Problems Identified? All articles we analyzed identified a main problem that their DST was designed to address. Tools were designed to address different types of natural resource problems, from mitigating dairy waste in Florida (see DynoFlo in Cabrera et al. [2008]) to reducing fungicide applications in strawberry (Fragaria × ananassa) production (see Strawberry Advisory System in Pavan et al. [2011]). Many of these tools were related to farming and ranching and were designed to deal with the types of complex decisions faced by producers (e.g., what crop should be planted under certain weather conditions or when to apply manure to reduce runoff risk). The majority of tools related to forestry were designed for forest industry professionals and wildfire and fuels managers who work with, or in partnership with, the US Forest Service.
In assessing how the problem was identified, we explored whether authors articulated how they defined the network of relevant stakeholders and whether they described stakeholder needs or perspectives relative to the problem. Most papers clearly identified the potential stakeholders who might find value in using their tools. Most also articulated how stakeholders understood the problem, or associated problems, that a tool might help them address, either through formal assessment or through a review of the general background on the problem. However, 56% of the papers did not describe their methods for assessing whether stakeholders had actually requested a DST to aid in their management of that problem.
How Are Decision Support Tools Designed and Deployed? By analyzing how the authors described beta testing, we sought to understand whether there was iterative or regular engagement with users throughout the design and deployment process. Sixteen of the 23 papers described some process for engaging stakeholders in this way. Again, this was not described with equal clarity or detail across papers. The process for prototyping and refining DSTs varied, from statements such as “Eighteen extension agents, researchers, consultants, and farmers provided feedback about the decisions support tool that utilize such forecasts during focus groups” (Templeton et al. 2014) to the relatively vague description in Easton et al., which simply says, “each of the tools described here was developed in response to specific users’ needs” (2017). The latter statement implied relevance to both problem identification and prototyping. Given this variability, it was not always possible to assess from a manuscript alone how engaged stakeholders were in the design, deployment, and subsequent improvement of the tools.
How Are Decision Support Tools Evaluated? An evaluation process was described in 52% of the articles, but in only 30% of articles was this evaluation considered purposeful (i.e., the authors articulated clear reasoning for why and how they implemented an evaluation). The methods for evaluation included surveys (22%), focus groups (9%), workshops and meetings (9%), and interviews (4%). In several cases, the evaluation was informal or the methodology was unclear. Many authors suggested that their tools were critical for addressing a specific problem and well designed to help end users improve their decision making, while providing little evidence of the evaluation methods that supported that conclusion. For example, in Calkin et al., the authors state that “WFDSS has provided valuable real-time decision support to improve strategic decision making and communication by fire managers…and the development and application of WFDSS has helped the US Forest Service establish commitment to efficient and effective fire management with a strong focus on wildfire cost containment during a period of unprecedented fire activity” (2011a). While we have no reason to dispute this statement, the reader is given little evidence of how the authors arrived at this conclusion.
We also assessed whether authors evaluated the usefulness, usability, and usage of their tool. When evaluation was discussed in the manuscripts, authors most often described evaluation in terms of usefulness (39%), followed by usage (26%) and usability (17%). Few articles described more than one of these modes of evaluation. One exception was Jones et al. (2010), in which the authors describe a survey conducted to assess users’ perceptions of the usefulness and usability of their Decision Aid System, as well as database tracking of tool usage. In most cases, however, authors focused on one aspect of evaluating a tool. For instance, Pavan et al. (2011) assessed the usefulness of the Strawberry Advisory System by working closely with three large commercial strawberry farms in Florida that provided iterative input on the development of the tool. For the purposes of this paper, we did not seek to evaluate the methodological rigor of a particular evaluation method (e.g., the use of a survey versus a focus group) but rather sought to note whether or how the methods of evaluation were described.
TOWARD A MORE EFFECTIVE DECISION SUPPORT TOOL
As a result of the construction of our conceptual model and subsequent analysis, we propose four recommendations for DST developers and other practitioners who want to support the effective and transparent development of stakeholder-driven DSTs to better support US agriculture and natural resource management decision making. We propose that practitioners (i.e., DST developers) (1) implement a complete assessment of the relevant stakeholder network(s); (2) engage stakeholders iteratively throughout the development process; (3) improve evaluation of DSTs, including an assessment of the usability, usefulness, and usage of tools across their life cycle; and (4) describe the process of stakeholder engagement in published work on these tools.
To support these recommendations, we provide some guiding questions that DST developers might explore as they develop, deploy, and evaluate their tools:
Who has been included in the conceptualization of the problem? What stakeholder groups might be missing?
How many opportunities are there for decision makers to provide feedback at different stages of tool development?
Is stakeholder feedback integrated into the tool meaningfully?
What evaluation strategy is feasible and appropriate?
What evaluation methods (e.g., survey, interviews, focus groups, etc.) will be employed, and how will the results of the evaluation be used?
What are you trying to evaluate (i.e., usefulness, usability, usage)?
Do the tool developers use social science best practices for engaging stakeholders using both qualitative and quantitative methods? (For example, what expertise are they bringing to the development of survey methods, exit evaluations, interviews, focus groups, etc.?)
We suggest that if researchers and DST developers more purposefully explore these questions, they will be more successful in fostering meaningful engagement with their tools over time. By extension, the development of better DSTs has the potential to help land managers make better decisions, meet production and conservation goals, and ensure the long-term sustainability of natural resources.
SUPPLEMENTAL MATERIAL
The supplementary material for this article is available in the online journal at https://doi.org/10.2489/jswc.2021.0618A.
ACKNOWLEDGEMENTS
Thank you to the USDA Climate Hub Fellows program that gave this author team a forum for sharing the ideas that gave rise to this paper. Thank you to Amanda Cravens, US Geological Survey, Fort Collins, Colorado, and Hailey Wilmer, USDA Agricultural Research Service, Dubois, Idaho, who provided early input on the conceptual framework.