Journal of Soil and Water Conservation

Research Article | A Section

Supporting practitioners in developing effective decision support tools for natural resource managers

Gabrielle E. Roesch-McNally, Sarah Wiener, Julian Reyes, Caitlin M. Rottler, Jennifer Balachowski and Rachel E. Schattman
Journal of Soil and Water Conservation July 2021, 76 (4) 69A-74A; DOI: https://doi.org/10.2489/jswc.2021.0618A

In the United States, there is a growing interest in the participatory development of agricultural and natural resource–focused decision support tools (DSTs). To provide greater insight for practitioners developing these DSTs, we conducted a review of manuscripts (n = 23) that describe DSTs in US agricultural and forestry sectors, both those designed through participatory processes and otherwise. Our work operationalizes a novel conceptual framework developed to support participatory DST development, as recent scholarship suggests participatory processes lead to better adoption and use of DSTs. Our analysis suggests that tool developers should, in reporting on their efforts, more clearly articulate the ways decision makers are included in DST development, from problem identification through evaluation. Failure to do so limits our collective understanding of the utility of these tools. Following our review, we present recommendations for DST developers and other practitioners who want to support effective and transparent development of stakeholder-driven DSTs. We propose practitioners (1) implement complete assessments of relevant stakeholder network(s) that might use new DSTs; (2) engage stakeholders iteratively throughout the development process; (3) improve evaluation of DSTs, including an assessment of the usability, usefulness, and usage of tools across their life cycle; and (4) describe the process of stakeholder engagement in published work on these tools. These recommendations are designed to empower future DST developers to leverage the power of participation, and by extension improve land management decision making and resource conservation.

THE NEED FOR DECISION SUPPORT TOOLS

In the United States, since the early 2000s, DSTs available to aid land managers’ and landowners’ decision-making have proliferated (Moser 2009). DSTs are intended to assist decision makers’ exploration of various “scenarios and available options and anticipate the potential risks and gains associated with them” (Roncoli et al. 2006). Tools are typically geared toward improving social, economic, and ecological management outcomes and designed primarily by university-based researchers, federal and state management agencies, and private companies. While there is a general interest in developing tools that provide meaningful, accessible, and effective decision support for various stakeholders, the processes by which effective agricultural and natural resource management tools are developed and deployed are poorly understood. As Cabrera et al. argue, “many models never become tools used by stakeholders because they do not adequately meet their felt needs and because they are not user friendly” (2008). We argue that greater stakeholder involvement in both the research and outreach stages of tool development can improve the use and effectiveness of DSTs.

This recent proliferation in DSTs, particularly those supported by USDA and other federal agencies, is due in part to the recognition that land managers, including farmers, ranchers, and foresters, face many decisions in the context of managing for productivity and other sustainability goals. Our team initiated this effort to review and analyze DSTs while working as fellows with the USDA Climate Hubs, where we observed the need for resources to support the development of DSTs, and for agency personnel to better assess the potential efficacy and utility of existing and proposed tools. In the following section, we outline our novel conceptual framework that explores an iterative participatory approach for DST development, including recommended key activities for practitioners. We subsequently share an analysis that operationalizes the conceptual framework with relevant literature on DSTs. Finally, we provide a set of overarching recommendations and guiding questions that practitioners can use in future DST development and assessment.

DEVELOPING DECISION SUPPORT TOOLS WITH STAKEHOLDERS: A CONCEPTUAL FRAMEWORK

For the purposes of this analysis, we synthesized existing literature to construct a conceptual framework of principles and best practices in developing DSTs focused on four major components, or phases, of design: (1) stakeholder identification and assessment, (2) problem identification, (3) design and deployment, and (4) evaluation and reflection (figure 1). We propose that stakeholder engagement occurs throughout the tool development process and thus is a component of all four phases. The following sections provide a short definition, the role of stakeholder engagement in each phase, and key activities that should be undertaken during that phase.

Figure 1

The Participatory Development Framework for Decision Support Tools was designed to guide participatory development of decision support tools and illustrates their iterative and interconnected components.

STAKEHOLDER IDENTIFICATION AND ASSESSMENT

Definition. Stakeholder identification and assessment is the process of developing an understanding of those who are affected by an issue (Scheffran 2006) and involves differentiating between and categorizing stakeholders as well as understanding relationships between them (Reed et al. 2009). Note that this process is related to, but distinct from, “stakeholder engagement” (which is ongoing during participatory design) in that it is a distinct activity that assesses the constellation of both known and unknown stakeholders who might be interested in and affected by a shared problem. In the context of our review, stakeholders may include individual landowners, industry, and advocacy organizations; they may act at local, regional, national, or international scales. A stakeholder assessment process may be formal (e.g., a full empirical analysis that identifies people’s interests and how they interact) or informal (e.g., learning about needs, views, and experiences, such as talking to potential users of a new tool at a booth at a conference).

Stakeholder Engagement. Understanding the needs and concerns of people with a diverse range of viewpoints strengthens the capacity of a tool to inform a wider audience and reduces the chance of perpetuating biases through tool design. However, when stakeholders are identified on an ad hoc basis, there is a risk that the process of stakeholder engagement can marginalize potential user groups and limit the success of the project in the long term (Reed et al. 2014). Multiple forms of collaborative engagement in research projects, of which DST development might be just one aspect, exist along a continuum of involvement and integration. According to Meadow et al. (2015), this continuum runs from “no engagement” to “contractual” engagement, where information flows unidirectionally from researcher to stakeholder; to “consultant” engagement, where involvement is limited to certain phases or points of the project; to “collaborative” engagement, where stakeholders work in partnership with researchers but may have limited involvement in the scientific process; and finally to “collegial” engagement, where the process is stakeholder-driven and incorporates multiple evidence-based approaches to knowledge generation, including indigenous, local, and scientific systems.

Key Activities. We encourage practitioners to conduct a formal stakeholder assessment and integrate social science expertise. Assessment can include qualitative and/or quantitative data collection that highlights the perspectives of many stakeholders and the network connections between them. Data collection methods could include inviting groups to a public comment session, distributing mailings and community surveys, hosting a booth at a relevant community event, or contacting stakeholders for short interviews. Methodological selection should be culturally relevant and respectful of local contexts while recognizing and adjusting to current research best practices.

PROBLEM IDENTIFICATION

Definition. Problem identification is the process of carefully selecting which point(s) of view a DST will address and what management problem (e.g., reducing pesticide drift or improving nutrient management) it will seek to solve. This process should consider the risks and limitations of the decision, the spatial and temporal context of the decision, organizational decision-making roles, the extent of the problem, and what potential conflict exists as a result of the problem/solution (MacEachren and Brewer 2004).

Stakeholder Engagement. Stakeholder engagement in problem identification requires facilitating a meaningful feedback loop between stakeholders and DST designers. “Participation of potential users in the assessment of the tool [even at early stages] enables researchers to enrich the models that inform the DSTs by including subjective sources of knowledge in addition to the objective knowledge derived from theories and empirical studies” (Cabrera et al. 2008).

Key Activities. During this stage, the tool development team should consider competing perspectives on the problem and use observations, data review, and public and key stakeholder input to clearly define the motivations for tool development and the specific decision(s) the tool will inform for users. For example, we suggest hosting listening sessions or informally gathering input among key stakeholders at community meetings. Conducting a more formal problem identification effort using the Delphi method, used to arrive at a group consensus/opinion on a core issue (Landeta 2006), might be valuable if there is a great deal of controversy or debate regarding what the problem is, its origins, and whose responsibility it is to address (e.g., point source versus nonpoint source water pollution control measures).

DESIGN AND DEPLOYMENT

Definition. The design and deployment stage of the framework encompasses both technical software design as well as operational considerations such as funding, staffing, maintenance, and training. Many scientists may consider this component of DST development to be the most critical aspect of the process (Stone and Hochman 2004) and the step that creates a functional product for decision makers to engage with. However, it is common for stakeholders to be left out of this part of the DST development process. This can lead to tools that are mismatched to their intended audience, either in terms of the technical skills needed to use the tool or other design features that limit adoption.

Stakeholder Engagement. Deploying tools involves more than developing a tool with a user-friendly interface. Often if developers can include purposeful workshops, which facilitate social learning by which “participants are led to an improved understanding of a problem and its context through interactions and shared learning” (Lacoste and Powles 2016), the deployment process can be more successful. Therefore, prototyping is critical in design and deployment phases (Breuer et al. 2008). Prototyping activities allow tool designers to understand nuances in how users approach a tool interface or workflow before a product is finalized.

Key Activities. Best practices in human-centered software design emphasize the importance of iteratively engaging stakeholders throughout design and deployment, and sometimes during redesign (Lacoste and Powles 2016; Prokopy et al. 2017) to ensure that a tool is usable from both a functional and problem-solving perspective. This can be done through beta testing, focus groups, or other virtual or in-person prototyping events where “end users” get to interface with a tool and troubleshoot problems and/or provide substantive feedback regarding the utility and usability of the tool.

EVALUATION AND REFLECTION

Definition. While evaluations might focus on any aspect of tool design, they often encompass three primary types of assessment: (1) the usability of the tool (how easily users can accomplish the task(s) for which the tool was designed, such as navigating to find specific information); (2) the usefulness of the tool (how well the tool addresses the real-world decision challenges users face); and (3) the usage of the tool (the extent to which the tool is used by intended stakeholders) (Tsakonas and Papatheodorou 2006). Any of these goals, or many others, can be addressed through formative assessment, conducted during tool development, or summative assessment, conducted after the DST has been deployed.

Stakeholder Engagement. Stakeholder engagement processes are a critical element of evaluation. Assessing the usability and usefulness of a tool is integral to the process of successful knowledge production and behavioral modification, which ideally requires an iterative knowledge exchange among scientists, tool developers, and users (Dilling and Lemos 2011). In short, the effectiveness of decision support should be assessed by how well it is able to increase the probability that decision-relevant information supports and facilitates decision making (NRC 2008).

Key Activities. We encourage practitioners to develop an evaluation plan at the outset of a project, which might include hiring internal or external evaluators who can help design evaluation metrics around stated goals of the evaluation (e.g., evaluations can include usability, usefulness, and usage metrics such as number of unique users, number of hits on relevant host website, or number of shares on social media). Research teams, end-users, and/or outside evaluators may be involved in evaluations at various points in time and in various capacities collecting and evaluating data and/or applying lessons from assessment to tool design. Data sources may include primary data from pre- and post-surveys, user feedback questionnaires, interviews or focus groups, or in-depth case studies and team reflexive practice.
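The usage metrics named above (unique users, hits on the host website, social media shares) are the most mechanical part of an evaluation plan to operationalize. The sketch below shows one minimal way to aggregate such metrics from access-log records; the record structure, page paths, and user IDs are hypothetical examples, not data from the article or any particular DST.

```python
from collections import Counter

# Hypothetical access-log records for a web-hosted DST:
# (user_id, page_path). Field names and values are illustrative only.
log = [
    ("u1", "/forecast"), ("u2", "/forecast"), ("u1", "/forecast"),
    ("u3", "/nutrient-planner"), ("u2", "/nutrient-planner"),
]

# Usage metric 1: reach — how many distinct users touched the tool.
unique_users = len({user for user, _ in log})

# Usage metric 2: intensity — hits per page of the tool.
hits_per_page = Counter(page for _, page in log)

print(unique_users)                 # 3
print(hits_per_page["/forecast"])   # 3
```

Counts like these answer only the “usage” question; pairing them with survey or interview data is still needed to assess usefulness and usability.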

EXPLORING THE DECISION SUPPORT TOOL LANDSCAPE

To better understand the process and prevalence of participatory DST development, we operationalized the conceptual framework described above via an assessment of peer-reviewed literature. Specifically, we reviewed scholarly manuscripts published between 2008 and 2018 that addressed DSTs in the context of US agriculture and forestry. First, we developed a list of search terms to identify DSTs designed for the agricultural (including livestock and grazing land) and forestry sectors. We used the Web of Science search engine because it is sufficiently comprehensive of the topics of interest and has the machine-readable functionality to export and analyze search results. Given the diversity of fields that use DSTs (e.g., health, manufacturing, etc.), the majority of articles discussed topics outside our areas of interest or were removed due to our exclusion criteria. We ultimately included 23 DST papers relevant to our geographic and topical focus (see supplemental table 1). It should be noted that this review is not intended as a comprehensive treatment of the literature. We utilized the conceptual framework described in the previous section to guide our coding protocol, describing each paper’s methodology as well as how the authors addressed or failed to address key aspects of our conceptual framework (stakeholder assessment and engagement, problem identification, design and deployment, and evaluation).
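The review workflow described above (search, apply exclusion criteria, then code each retained paper against the four framework components) can be sketched as a simple filter-and-template pipeline. Everything here is a hypothetical illustration: the example records, the `included` criteria, and the field names are ours, not the article’s actual protocol or data.

```python
# Hypothetical search-result records; fields are illustrative only.
papers = [
    {"title": "Irrigation DST for the US Southeast",
     "sector": "agriculture", "region": "US"},
    {"title": "Hospital triage decision support",
     "sector": "health", "region": "US"},
]

def included(p):
    # Exclusion criteria: keep only US agriculture/forestry DSTs.
    return p["region"] == "US" and p["sector"] in {"agriculture", "forestry"}

# The four components of the conceptual framework used as the coding protocol.
COMPONENTS = ["stakeholder_assessment", "problem_identification",
              "design_deployment", "evaluation"]

# Coding template: each component starts unscored, to be filled by reviewers.
coded = [{"title": p["title"], **{c: None for c in COMPONENTS}}
         for p in papers if included(p)]
print(len(coded))  # 1
```

Structuring the protocol this way makes the exclusion criteria explicit and ensures every retained paper is assessed against the same four components.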

How Are Stakeholders Being Identified and Their Needs Assessed? We analyzed the selected papers to determine if DSTs were requested by potential end-users, and if the authors described the network of stakeholders who were users or potential users of their tool. Nearly half of the papers used the problem that their tool is designed to fix as the justification for the tool, rather than explicitly describing the stakeholder demand for the tool. While this does not exclude the possibility that end users contributed to problem identification, it was not transparently obvious that this was the case. In contrast, several publications describing tools designed to support fire management decision making, primarily for the US Forest Service (Calkin et al. 2011a, 2011b; Drury et al. 2016; Ryan and Opperman 2013; Thompson et al. 2015), were often explicit in their description of the need for the tools as articulated by the end user. Perhaps because many forest management DST developers were agency employees themselves, they were able to articulate the end-users’ needs more effectively than others.

All but one paper indicated the audience for whom their DST was intended, and just over two-thirds described stakeholder engagement processes at some point in the development process. Articles were coded along the continuum of engagement, from no engagement to contractual, consultant, collaborative, and collegial (see description above and Meadow et al. [2015]). Only one article was coded as “contractual,” around a quarter were coded as “consultant” and “collaborative,” respectively, and three were coded as “collegial.” The articles that did not describe stakeholder engagement, 30% of the total, were categorized as “unclear” by our team owing to the lack of available information.
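The reported shares follow directly from per-category counts over the 23 reviewed papers. The counts below are our reconstruction, chosen only to be consistent with the shares reported in the text (one contractual, around a quarter each consultant and collaborative, three collegial, 30% unclear); they are not taken from the article’s supplemental data.

```python
from collections import Counter

# Illustrative counts consistent with the reported shares (n = 23);
# the exact per-category counts are assumed, not from the article.
codes = (["contractual"] * 1 + ["consultant"] * 6 +
         ["collaborative"] * 6 + ["collegial"] * 3 + ["unclear"] * 7)

tally = Counter(codes)
shares = {k: round(100 * v / len(codes)) for k, v in tally.items()}

print(shares["unclear"])     # 30 -> the "30% of all the articles"
print(shares["consultant"])  # 26 -> "around a quarter"
```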

There was a great deal of inconsistency in how authors across all articles described their DST development and stakeholder engagement methods. In many cases, authors implied that they had engaged stakeholders, but did not provide further information (Calkin et al. 2011a, 2011b; Hunt et al. 2016; Ryan and Opperman 2013; Thompson et al. 2015). In other cases, more details were provided. For instance, Breuer et al. (2008) and Templeton et al. (2014) both describe the suite of AgroClimate Tools developed by the University of Florida. In these papers, the authors clearly described multiple modes of iterative engagement with target end-users in the development of the tools, including methods such as Sondeo surveys, focus groups, and regional workshops with relevant stakeholders who were connected to the University of Florida Cooperative Extension Service.

How Are Problems Identified? All articles we analyzed identified a main problem that their DST was designed to address. Tools were designed to address different types of natural resource problems, from mitigating dairy waste in Florida (see DynoFlo in Cabrera et al. [2008]) to reducing fungicide applications in strawberry (Fragaria × ananassa) production (see Strawberry Advisory System in Pavan et al. [2011]). Many of these tools were related to farming and ranching and were designed to deal with the types of complex decisions faced by producers (e.g., what crop should be planted under certain weather conditions or when to apply manure to reduce runoff risk). The majority of tools related to forestry were designed for forest industry professionals and wildfire and fuels managers who work with, or in partnership with, the US Forest Service.

In assessing how the problem was identified, we explored whether authors articulated how they defined the network of relevant stakeholders and whether they described the stakeholder needs or perspectives relative to the problem. Most papers clearly identified the potential stakeholders that might find value in using their tools. Most articulated how stakeholders understood the problem or associated problems that a tool might help them address. This was done either through formal assessment or through a review of the general background on the problem. However, 56% of the papers did not describe their methods for assessing whether stakeholders had actually requested a DST to aid in their management of said problem.

How Are Decision Support Tools Designed and Deployed? Through analyzing how the authors describe beta testing, we sought to understand whether there was an iterative or regular engagement with users throughout the design and deployment process. Sixteen out of the 23 papers described some process for engaging stakeholders in this way. Again, this was not described with equal clarity or detail across papers. The process for prototyping and refining DSTs varied, from statements such as “Eighteen extension agents, researchers, consultants, and farmers provided feedback about the decisions support tool that utilize such forecasts during focus groups” (Templeton et al. 2014) to the relatively vague description in Easton et al. that simply says, “each of the tools described here was developed in response to specific users’ needs” (2017). The latter statement implied relevance to both problem identification and prototyping. Given this variability, it was not always possible to assess how engaged stakeholders were in the design, deployment, and subsequent improvements made to the tools themselves by reviewing a manuscript alone.

How Are Decision Support Tools Evaluated? An evaluation process was described in 52% of the articles, but in only 30% of articles was this evaluation considered purposeful (i.e., authors articulated a clear reasoning for why and how they implemented an evaluation). The methods for evaluation included surveys (22%), focus groups (9%), workshops and meetings (9%), and interviews (4%). In several cases, the evaluation was informal, or the methodology was unclear. Many authors suggested that their tools are critical for addressing a specific problem and were well-designed to help end-users improve their decision-making, while providing little evidence of what evaluation methods supported that conclusion. For example, in Calkin et al., authors state that “WFDSS has provided valuable real-time decision support to improve strategic decision making and communication by fire managers…and the development and application of WFDSS has helped the US Forest Service establish commitment to efficient and effective fire management with a strong focus on wildfire cost containment during a period of unprecedented fire activity” (2011a). While we have no reason to dispute this statement, the reader is provided little evidence for how authors arrived at this conclusion.

We also assessed whether authors evaluated the usefulness, usability, and usage of their tool. When evaluation was discussed in the manuscripts, authors described evaluation in terms of usefulness (39%), followed by usage (26%) and usability (17%). Few articles described more than one of these modes of evaluation. One exception was Jones et al. (2010), in which the authors describe a survey conducted to assess users’ perceptions of the usefulness and usability of their Decision Aid System as well as database tracking of tool usage. In most cases, however, authors focused on one aspect of evaluating a tool. For instance, in Pavan et al. (2011), the authors assessed the usefulness of the Strawberry Advisory System by working closely with three large commercial strawberry farms in Florida to provide iterative input on the development of the tool. For the purposes of this paper, we did not seek to evaluate the methodological rigor of a particular evaluation method (e.g., the use of a survey versus focus group) but rather sought to note whether or how the methods of evaluation were described.

TOWARD A MORE EFFECTIVE DECISION SUPPORT TOOL

As a result of the construction of our conceptual model and subsequent analysis, we propose four recommendations for DST developers and other practitioners who want to support effective and transparent development of stakeholder-driven DSTs to better support US agriculture and natural resource management decision making. We propose that practitioners (i.e., DST developers) (1) implement a complete assessment of the relevant stakeholder network(s); (2) engage stakeholders iteratively throughout the development process; (3) improve evaluation of DSTs, including an assessment of the usability, usefulness, and usage of tools across their life cycle; and (4) describe the process of stakeholder engagement in published work on these tools.

To support these recommendations, we provide some guiding questions that DST developers might explore as they develop, deploy, and evaluate their tools:

  • Who has been included in the conceptualization of the problem? What stakeholder groups might be missing?

  • How many opportunities are there for decision makers to provide feedback at different stages of tool development?

  • Is stakeholder feedback integrated into the tool meaningfully?

  • What evaluation strategy is feasible and appropriate?

  • What evaluation methods (e.g., survey, interviews, focus groups, etc.) will be employed, and how will the results of the evaluation be used?

  • What are you trying to evaluate (i.e., usefulness, usability, usage)?

  • Do the tool developers use social science best practices for engaging stakeholders using both qualitative and quantitative methods? (For example, what expertise are they bringing to the development of survey methods, exit evaluations, interviews, focus groups, etc.?)

We suggest that if researchers and DST developers more purposefully explore these questions, they will be more successful in fostering meaningful engagement with their tools over time. By extension, development of better DSTs has the potential to assist land managers to make better decisions, meet production and conservation goals, and ensure long-term sustainability of natural resources.

SUPPLEMENTAL MATERIAL

The supplementary material for this article is available in the online journal at https://doi.org/10.2489/jswc.2021.0618A.

ACKNOWLEDGEMENTS

Thank you to the USDA Climate Hub Fellows program that gave this author team a forum for sharing the ideas that gave rise to this paper. Thank you to Amanda Cravens, US Geological Survey, Fort Collins, Colorado, and Hailey Wilmer, USDA Agricultural Research Service, Dubois, Idaho, who provided early input on the conceptual framework.

  • Received June 18, 2021.
  • © 2021 by the Soil and Water Conservation Society

REFERENCES

Breuer, N.E., V.E. Cabrera, K.T. Ingram, K. Broad, and P.E. Hildebrand. 2008. AgClimate: A case study in participatory decision support system development. Climatic Change 87(3):385–403. https://doi.org/10.1007/s10584-007-9323-7.

Cabrera, V.E., N.E. Breuer, and P.E. Hildebrand. 2008. Participatory modeling in dairy farm systems: A method for building consensual environmental sustainability using seasonal climate forecasts. Climatic Change 89(3):395–409. https://doi.org/10.1007/s10584-007-9371-z.

Calkin, D.C., M.A. Finney, A.A. Ager, M.P. Thompson, and K.M. Gebert. 2011a. Progress towards and barriers to implementation of a risk framework for US federal wildland fire policy and decision making. Forest Policy and Economics 13(5):378–389. https://doi.org/10.1016/j.forpol.2011.02.007.

Calkin, D.E., M.P. Thompson, M.A. Finney, and K.D. Hyde. 2011b. A real-time risk assessment tool supporting wildland fire decisionmaking. Journal of Forestry 109(5):274–280.

Dilling, L., and M.C. Lemos. 2011. Creating usable science: Opportunities and constraints for climate knowledge use and their implications for science policy. Global Environmental Change 21(2):680–689.

Drury, S.A., H.M. Rauscher, E. Banwell, S. Huang, and T.L. Lavezzo. 2016. The Interagency Fuels Treatment Decision Support System: Functionality for fuels treatment planning. Fire Ecology 12(1):103–123. https://doi.org/10.4996/fireecology.1201103.

Easton, Z.M., P.J.A. Kleinman, A.R. Buda, D. Goering, N. Emberston, S. Reed, P.J. Drohan, et al. 2017. Short-term forecasting tools for agricultural nutrient management. Journal of Environmental Quality 46(6):1257–1269. https://doi.org/10.2134/jeq2016.09.0377.

Hunt, V.M., S.K. Jacobi, J.J. Gannon, J.E. Zorn, C.T. Moore, and E.V. Lonsdorf. 2016. A decision support tool for adaptive management of native prairie ecosystems. INFORMS Journal on Applied Analytics 46(4):334–344. https://doi.org/10.1287/inte.2015.0822.

Jones, V.P., J.F. Brunner, G.G. Grove, B. Petit, G.V. Tangren, and W.E. Jones. 2010. A web-based decision support system to enhance IPM programs in Washington tree fruit. Pest Management Science 66(6):587–595. https://doi.org/10.1002/ps.1913.

Lacoste, M., and S. Powles. 2016. Beyond modelling: Considering user-centred and post-development aspects to ensure the success of a decision support system. Computers and Electronics in Agriculture 121:260–268.

Landeta, J. 2006. Current validity of the Delphi method in social sciences. Technological Forecasting and Social Change 73(5):467–482.

MacEachren, A.M., and I. Brewer. 2004. Developing a conceptual framework for visually-enabled geocollaboration. International Journal of Geographical Information Science 18(1):1–34.

Meadow, A.M., D.B. Ferguson, Z. Guido, A. Horangic, G. Owen, and T. Wall. 2015. Moving toward the deliberate coproduction of climate science knowledge. Weather, Climate, and Society 7(2):179–191.

Moser, S. 2009. Making a difference on the ground: The challenge of demonstrating the effectiveness of decision support. Climatic Change 95(1–2):11.

NRC (National Research Council). 2000. From Research to Operations in Weather Satellites and Numerical Weather Prediction: Crossing the Valley of Death. Washington, DC: National Academies Press.

Pavan, W., C.W. Fraisse, and N.A. Peres. 2011. Development of a web-based disease forecasting system for strawberries. Computers and Electronics in Agriculture 75(1):169–175. https://doi.org/10.1016/j.compag.2010.10.013.

Prokopy, L.S., J.S. Carlton, T. Haigh, M.C. Lemos, A.S. Mase, and M. Widhalm. 2017. Useful to usable: Developing usable climate science for agriculture. Climate Risk Management 15:1–7. https://doi.org/10.1016/j.crm.2016.10.004.

Reed, M.S., A. Graves, N. Dandy, H. Posthumus, K. Hubacek, J. Morris, C. Prell, C.H. Quinn, and L.C. Stringer. 2009. Who’s in and why? A typology of stakeholder analysis methods for natural resource management. Journal of Environmental Management 90(5):1933–1949.

Reed, M.S., L.C. Stringer, I. Fazey, A.C. Evely, and J.H.J. Kruijsen. 2014. Five principles for the practice of knowledge exchange in environmental management. Journal of Environmental Management 146:337–345.

Roncoli, C., J. Paz, N. Breuer, K. Ingram, G. Hoogenboom, and K. Broad. 2006. Understanding farming decisions and potential applications of climate forecasts in South Georgia. Technical Report 06-006. Gainesville, FL: Southeast Climate Consortium.

Ryan, K.C., and T.S. Opperman. 2013. LANDFIRE – A national vegetation/fuels data base for use in fuels treatment, restoration, and suppression planning. Special issue: The Mega-Fire Reality. Forest Ecology and Management 294(April):208–216. https://doi.org/10.1016/j.foreco.2012.11.003.
    OpenUrl
  20. ↵
    1. Scheffran, J.
    2006. Tools for stakeholder assessment and interaction. In Stakeholder Dialogues in Natural Resources Management, p. 153-185. Berlin, Heidelberg: Springer.
  21. ↵
    1. T. Fisher
    1. Stone, P., and
    2. Hochman,
    3. Z.
    2004. If interactive decision support systems are the answer, have we been asking the right questions? In New Directions for a Diverse Planet: Proceedings of the 4th International Crop Science Congress, ed. T. Fisher, September 26-October 1, 2004. Brisbane: The Regional Institute, Ltd.
  22. ↵
    1. Templeton, S.R.,
    2. M.S. Perkins,
    3. H.D. Aldridge,
    4. W.C. Bridges, and
    5. B.R. Lassiter
    . 2014. Usefulness and uses of climate forecasts for agricultural extension in South Carolina, USA. Regional Environmental Change 14(2):645–655. https://doi.org/10.1007/s10113-013-0522-7.
    OpenUrl
  23. ↵
    1. Thompson, M.P.,
    2. J.R. Haas,
    3. J.W. Gilbertson-Day,
    4. J.H. Scott,
    5. P. Langowski,
    6. E. Bowne, and
    7. D.E. Calkin
    . 2015. Development and application of a geospatial wildfire exposure and risk calculation tool. Environmental Modelling & Software 63:61–72. https://doi.org/10.1016/j.envsoft.2014.09.018.
    OpenUrl
  24. ↵
    1. Tsakonas, G., and
    2. C. Papatheodorou
    . 2006. Analysing and evaluating usefulness and usability in electronic information services. Journal of Information Science 32(5):400–419. https://doi.org/10.1177/0165551506065934.
    OpenUrlCrossRef
Roesch-McNally, G.E., S. Wiener, J. Reyes, C.M. Rottler, J. Balachowski, and R.E. Schattman. 2021. Supporting practitioners in developing effective decision support tools for natural resource managers. Journal of Soil and Water Conservation 76(4):69A–74A. https://doi.org/10.2489/jswc.2021.0618A.