Abstract
The Conservation Effects Assessment Project (CEAP) was established to develop a scientific understanding and methodology for estimating environmental benefits and effects of conservation practices on agricultural landscapes at watershed, regional, and national scales. CEAP's goal is to improve the effectiveness of conservation practices and programs by quantifying conservation effects and providing the science and education needed to enrich conservation planning, implementation, management decisions, and policy. Field observations and computer-based simulation of the effects of agricultural conservation are important components of CEAP. This research supports the CEAP effort by utilizing well-documented field data in conjunction with the Agricultural Policy/Environmental eXtender (APEX) model to quantify long-term conservation effects. In this study, field data collected at a research site located near Tifton, Georgia, were used to evaluate APEX and to quantify long-term benefits of implementing conservation tillage in the Atlantic Coastal Plain region of the United States. The objectives of this research were to (1) quantify differences in crop yield, hydrology, and sediment transport for conventional and conservation tillage systems in the Atlantic Coastal Plain; (2) develop a calibrated APEX simulation for these tillage systems; and (3) quantify the performance of the APEX model with respect to crop yields, hydrology, and sediment yield. Fourteen years of crop yield, surface runoff, subsurface flow, and sediment transport data were quantified comparing conventional tillage (CT) to strip tillage (ST). No treatment differences were found for either cotton (Gossypium hirsutum L.) or peanut (Arachis hypogaea L.) yields. Surface runoff from the CT was found to be 1.7 times that of the ST, while subsurface flow from the ST was found to be 1.7 times that of CT. Total water loss (surface and subsurface) was nearly equivalent for the two systems: 30% of annual rainfall. Satisfactory model performance was found for APEX surface runoff simulations. Mixed results were found for model performance of crop yield, while model simulations of subsurface flow and sediment yield were less than satisfactory. The APEX modeling framework provides a useful tool for assessing crop yield and hydrologic differences between tillage management systems in the Atlantic Coastal Plain. Additional refinement of modeling approaches may be necessary to adequately represent subsurface flow and sediment transport in these same systems.
Introduction
Agriculture is a key component of the US economy. The share of US agricultural production exported is more than double that of any other US industry (OECD 2011). The associated demand to increase production across productive agricultural lands in the United States has had the adverse impact of altering hydrologic flows and increasing nonpoint source (NPS) pollution. The 1987 amendment to the 1972 Clean Water Act specifically addressed NPS pollution and elevated the importance of best management practices aimed at NPS reductions. Since that time, the United States has struggled to develop regulatory and voluntary approaches to managing NPS pollution. USDA conservation programs are voluntary. The voluntary approach relies on incentives, reliable information, and education to persuade farm managers to implement best management practices.
The Conservation Effects Assessment Project (CEAP) was initiated by the USDA Natural Resources Conservation Service (NRCS), Agricultural Research Service (ARS), and Cooperative State Research, Education, and Extension Service (CSREES) in response to a call for better accountability of how society would benefit from the 2002 farm bill's substantial increase in conservation program funding (Mausbach and Dedrick 2004). The original goals of CEAP were to establish the scientific understanding of the effects of conservation practices at the watershed scale and to estimate conservation impacts and benefits for reporting at the national and regional levels. One of the primary components of CEAP is to provide in-depth quantification of water quantity and water and soil quality impacts of conservation practices at the local level and to provide insight on what practices are needed to meet local environmental goals (Duriancik et al. 2008). Reliable field data, which quantify the impacts of conservation practices, are critical to this process. Edge-of-field studies examine and quantify critical cause-and-effect relationships between implemented conservation practices and offsite water quality impacts. Controlled field studies, where practices can be implemented and their effects measured, are fundamental to the CEAP program.
Natural resource models have been a key component of CEAP since its inception (Duriancik et al. 2008). The use of these models allows the examination of climatic and management combinations that would not be possible through field implementation. One of the core CEAP models is the Agricultural Policy/Environmental eXtender (APEX) model (Williams and Izaurralde 2006). APEX is a daily time-step model that can simulate plant growth, water movement, and the fate and transport of sediment, nutrients, and pesticides. APEX is capable of simulating management and land use impacts for whole farms and small watersheds. Individual fields can be simulated as linked subareas, and model output can be examined at the field and watershed levels. APEX can perform long-term continuous simulations and can be used to simulate the impacts of different nutrient management, tillage, conservation, and cropping practices. The model quantifies changes in hydrology and NPS transport associated with these practices. APEX is a well-documented and tested model (Gassman et al. 2004).
The objectives of this research were to (1) quantify differences in crop yield, hydrology, and sediment transport for conventional and conservation tillage systems in the Atlantic Coastal Plain; (2) develop a calibrated APEX simulation for these tillage systems; and (3) quantify the performance of the APEX model with respect to crop yields, hydrology, and sediment yield. Here we utilize observed crop, hydrologic, and sediment data collected from fields in the Coastal Plain region of Georgia, United States, from 1999 to 2012. Along with comparisons to the observed data, APEX simulations were used to summarize differences associated with the treatments. The primary focus of calibration was the hydrologic budget, with secondary focus on crop yields and sediment transport.
Materials and Methods
Site Description. Field data collected at the University of Georgia Gibbs Farm located in Tift County, Georgia, United States (N 31°26’13”, W 83°35’18”), were used for this research. Detailed descriptions are available from Bosch et al. (2005, 2012, 2015), Plotkin et al. (2013), Endale et al. (2014, 2017), and Potter et al. (2015). Research began at the site in 1999 with the goal of examining long-term impacts of conservation tillage on infiltration, subsurface flow, soil conditions, and environmental quality. A 1.9 ha parcel was divided into two paired 0.6 ha blocks running up and down the prevailing slope with a 0.4 ha field at the top of the hillslope set aside for companion rainfall simulation studies. The blocks were established on a naturally occurring hillslope to characterize surface and subsurface water loss from a typical Coastal Plain landscape. The two 0.6 ha blocks were divided into six approximately 0.2 ha fields, paired by slope position (upslope, midslope, and downslope) (figure 1).
Figure 1. Topographic map of the research fields.
Over the period from 1999 through 2012 the fields were used for several studies. From 1999 through 2009, the north block consisting of fields 1, 3, and 5 was randomly assigned to conventional tillage (CT) while the south block consisting of fields 2, 4, and 6 was assigned to strip tillage (ST) (figure 1). Beginning in October of 2009, all fields were planted without tillage to increase soil organic carbon (C). Slopes of fields 1 through 6 were 3.0%, 2.2%, 2.7%, 2.4%, 2.7%, and 2.6%, respectively. Data from 1999 through 2012 were used for this study.
Hydrologic data collection (surface runoff and subsurface tile flow) began on March 18, 1999. Bosch et al. (2012) published 11 years of hydrologic data (1999 to 2009), reporting surface runoff losses of 22% of annual precipitation from the CT and 13% from the ST. Subsurface losses were 10% of annual precipitation from the CT and 19% from the ST (Bosch et al. 2012). Total water losses from the two systems were similar. Endale et al. (2014) reported on 11 years of sediment transport data (1999 to 2009), finding that sediment losses from the CT were eight times those observed from the ST. Bosch et al. (2015) reported on five years of nutrient transport measurements (2004 to 2008), finding that total nitrogen (N) losses from the ST were two times those from the CT, with the majority of the losses occurring via lateral subsurface flow in both systems.
Hydrologic Monitoring. During site development, 0.6 m high earthen berms were established to direct surface runoff to the northwest corner of each 0.2 ha field (figure 1). Metal 0.46 m H flumes captured surface runoff from each field (Brakensiek et al. 1979). A 15 cm (inside diameter [i.d.]) tile drain was installed at 1.2 m depth across the slope between the lower boundary of field 7 and the upslope berm of fields 1 and 2 (figure 1) to intercept lateral subsurface flow originating upslope of field 7, directing subsurface flow away from the fields lower in the landscape. Two loops of 15 cm (i.d.) tile drain were also installed at 1.2 m depth at the bottom of the hillslope to capture lateral subsurface flow originating in each tillage block (figure 1). These drains were installed at the bottom of the hillslope rather than at the bottom of each field to provide an uninterrupted subsurface flowpath, with flow accumulating naturally downgradient. Metal 0.24 m H flumes were installed on the outlets of the tile drains from the two tillage blocks to measure subsurface flow. Additional details are provided by Bosch et al. (2012).
Sediment Monitoring and Analysis. Automated ISCO samplers (Teledyne ISCO, Lincoln, Nebraska) integrated with data loggers were programmed to collect 50 mL of runoff water for every 566 L of runoff (Potter et al. 2004). Autosampler intakes with strainers were mounted to the floor of the approach section of each H flume. Samples collected by the autosamplers were used for water quality (Potter et al. 2004; Bosch et al. 2015) and sediment analysis (Endale et al. 2014). As indicated by Potter et al. (2004) and Endale et al. (2014), physical limitations of the sample intake prevented sample collection from flow depths less than 2 cm. Few events (<0.7%) exceeded the upper bound of the sample collection system (9 L), but many produced runoff with insufficient depth for sampling. In addition, some samples were missed due to equipment and instrument issues. Each runoff sample was used for analysis of pesticides, nutrients, and sediment concentration, in that order of priority. In some cases, insufficient sample volume was available for complete analysis. For samples with no visual evidence of sediment, the concentration was assumed to be negligible and no analysis was conducted. In total, water samples were collected during field runoff that represented 88% to 95% of the total flow (Endale et al. 2014). The majority of the nonsampled events were assumed to be small. Sediment mass was determined gravimetrically by vacuum filtration of runoff samples using 0.7 μm nominal pore size GFF filters (Whatman, Maidstone, United Kingdom), followed by drying overnight at 105°C. Sediment concentration was determined from the sediment mass and the filtered runoff volume (Endale et al. 2014).
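The flow-proportional sampling scheme amounts to triggering a fixed 50 mL aliquot each time an additional 566 L of runoff passes the flume, up to the 9 L composite capacity. The following is a minimal sketch of that trigger logic, assuming runoff volume increments are available at the logger time step; the function and variable names are illustrative and do not reproduce the actual ISCO/data logger programming used at the site.

```python
# Minimal sketch of flow-proportional sampling logic (illustrative only;
# not the actual ISCO/data logger program used at the site).
ALIQUOT_ML = 50          # aliquot volume collected per trigger (mL)
TRIGGER_L = 566          # runoff volume between triggers (L)
MAX_COMPOSITE_L = 9      # capacity of the composite sample container (L)

def flow_proportional_triggers(volume_increments_L):
    """Return the cumulative runoff volumes (L) at which aliquots would be taken."""
    triggers = []
    cumulative = 0.0
    since_last = 0.0
    composite_mL = 0.0
    for dv in volume_increments_L:   # runoff volume per logging interval (L)
        cumulative += dv
        since_last += dv
        while since_last >= TRIGGER_L and composite_mL + ALIQUOT_ML <= MAX_COMPOSITE_L * 1000:
            since_last -= TRIGGER_L
            composite_mL += ALIQUOT_ML
            triggers.append(cumulative)
        # If the composite container fills, the remainder of the event goes unsampled.
    return triggers

# Example: 3,000 L of runoff arriving in 500 L increments yields five aliquots.
print(flow_proportional_triggers([500.0] * 6))
```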
Daily sediment loss was determined by multiplying the runoff volume for each day by the sediment concentration for that day. Measured sediment concentration was assigned by runoff event. In some cases, samples collected after periods when personnel were not available for immediate sample collection were used to represent multiple events. This typically was used to assign concentrations to events occurring on sequential days. In total, sediment concentrations were assigned to 60% to 80% of the flow events. No attempt was made to estimate the sediment load for unsampled flow events or for sampled flow events that were not analyzed for sediment concentration.
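As a concrete illustration of the arithmetic described above, the sketch below converts a gravimetric sediment mass and filtered sample volume to a concentration and then to a daily, per-area load; the numeric values are hypothetical, and the only substantive step is the unit conversion (1 mm of runoff at 1 mg L−1 equals 0.01 kg ha−1).

```python
# Illustrative sketch of the sediment concentration and daily load arithmetic
# (hypothetical values; not site data).

def sediment_concentration_mg_L(dry_sediment_mg, filtered_volume_L):
    """Concentration from gravimetrically determined mass and filtered runoff volume."""
    return dry_sediment_mg / filtered_volume_L

def daily_sediment_load_kg_ha(daily_runoff_mm, concentration_mg_L):
    """Daily load per unit area: 1 mm of runoff at 1 mg/L equals 0.01 kg/ha."""
    return daily_runoff_mm * concentration_mg_L * 0.01

conc = sediment_concentration_mg_L(dry_sediment_mg=42.0, filtered_volume_L=0.25)  # 168 mg/L
print(daily_sediment_load_kg_ha(daily_runoff_mm=12.0, concentration_mg_L=conc))   # ~20.2 kg/ha
```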
Management. Fields were managed in a cotton (Gossypium hirsutum L.) and peanut (Arachis hypogaea L.) rotation from 1999 through 2009 (table 1). Approximately three weeks prior to planting the first cotton crop in May of 1999, fields 1, 3, and 5 were chisel plowed to 20 cm, followed by disk harrowing to 8 cm to form beds for planting. An in-row shank subsoiler was used on the ST fields (2, 4, and 6) to create 15 cm wide strips for planting, with tillage to 20 cm. All fields were converted to no-till following the 2009 growing season. Pearl millet (Pennisetum glaucum [L.] R.Br.) was planted in 2010 and 2011, and a sorghum (Sorghum bicolor [L.] Moench) was grown for biomass in 2012. In December of 2012, gypsum was applied to fields 2, 4, and 6 in the south block to test the impacts of high rates of gypsum on root development and soil aggregation.
All crops were planted in early May of each year and harvested in September or October. Yield data were collected from the fields in all years except 2010 and 2011 when pearl millet was grown. Following cotton harvest, all stalks were mowed to 5 cm. Peanuts were harvested conventionally, with some soil disturbance caused through mechanical digging. The pearl millet was mowed to 10 cm and the residue left in the field. All biomass from the 2012 sorghum crop was harvested and the crop was mowed to 10 cm. All fields were planted with a rye (Secale cereale L.) grain cover crop without tillage each fall (ca. November 1) at rates varying from 63 to 125 kg ha−1. The rye was seeded with crimson clover (Trifolium incarnatum L.) in 2004 at a rate of 11 kg ha−1 and with Austrian winter pea (Pisum sativum L.) at 34 kg ha−1 in 2009 through 2011. All cover crops were terminated by glyphosate application about four weeks prior to planting peanut or cotton (ca. April 1). To reduce soil compaction, all fields were paratilled with an in-row shank subsoiler to approximately 40 cm in 2002, 2011, and 2012. In addition, only the strip-till fields were paratilled in 2004 and 2007.
Planting, fertilization, and pesticide treatment on all fields were identical. Fertilizer and pesticide applications and crop management practices were in accordance with University of Georgia recommendations and soil testing. All fields received 4.5 Mg ha−1 of poultry litter one month prior to planting in 1999 through 2002, 2005, 2007, and 2010. Poultry litter was not applied in the other years based on soil tests and extension service recommendations. Inorganic fertilizer was applied before planting and side-dressed four to six weeks after planting when cotton was grown. A solid-set irrigation system was used to meet plant-water needs not met by precipitation. Irrigation amounts were measured directly in the fields using stationary rain gauges.
Climate Data. Precipitation data were collected at a station located 10 m north of field 1. Precipitation data were collected with a TE525 tipping bucket rain gauge (Texas Electronics, Inc., Dallas, Texas) from January 1, 1999, through October 22, 2007. The TE525 rain gauge has a reported accuracy of ±3%. On October 23, 2007, the TE525 was replaced with a TB3 rain gauge (Hydrological Services Pty Ltd, Liverpool, Australia). The reported accuracy of the TB3 is ±2% at low rainfall intensity (0 to 250 mm h−1) and ±3% at high rainfall intensity (250 to 500 mm h−1). Both instruments were calibrated twice a year during the study. One-minute precipitation data were collected during all rainfall events.
Additional climate instruments were co-located with the rain gauge on May 18, 2005. Other climatic data collected included five-minute wind speed and direction, relative humidity (RH), air temperature, vapor pressure, and solar flux. Climate data for this study prior to May 18, 2005, were obtained from the University of Georgia climate station at the University of Georgia Tifton Animal and Dairy Science site (N 31°29’39”, W 83°31’35”) located 8.5 km northeast of the Gibbs site. Data after May 17, 2005, were obtained from the USDA ARS Southeast Watershed Research Laboratory (SEWRL) Gibbs Farm climate station (N 31°26’16”, W 83°35’16”). Daily data were used as input to the model. The University of Georgia data included daily average RH, whereas the Gibbs Farm station only included daily maximum and minimum RH. For the period where the Gibbs Farm climate data were used, the daily average RH was calculated as the mean of the daily maximum and minimum RH. Some outliers in temperature and solar radiation were removed from the data set. Less than 0.1% of the data points were modified.
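A minimal sketch of the RH averaging and outlier screening steps described above, assuming the daily records are held in a pandas DataFrame; the column names and screening limits are illustrative and are not the quality-control procedure actually applied to the station data.

```python
import pandas as pd

# Sketch of daily RH averaging and simple range screening (illustrative only).
def prepare_daily_climate(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Daily average RH approximated as the mean of the daily maximum and minimum RH.
    out["rh_avg"] = (out["rh_max"] + out["rh_min"]) / 2.0
    # Mask physically implausible values; masked days would be gap-filled or dropped.
    out["tmax_c"] = out["tmax_c"].where(out["tmax_c"].between(-20.0, 45.0))
    out["srad_mj_m2"] = out["srad_mj_m2"].where(out["srad_mj_m2"].between(0.0, 35.0))
    return out
```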
Table 1. Site management.
APEX Model. APEX input includes physical characteristics of each field, management information, physical and chemical soil characterization, and daily weather data. Model inputs include control parameters that select methods used to simulate specific processes as well as global parameters that define thresholds and rate coefficients for selected processes. Physical field characteristics and climatic data collected at the site were used as model inputs. For this study, the APEX1501 version was used to simulate the production effects of the CT and ST fields separately. Information pertaining to agricultural management (tillage type and dates, fertilization, and planting and harvest dates) was derived from site management records (Bosch et al. 2005, 2012, 2015; Plotkin et al. 2013; Endale et al. 2014, 2017; Potter et al. 2015). Surface runoff was examined by individual field while subsurface flow was summed by treatment block. Soil and geophysical inputs were specific to each field (Bosch et al. 2012; Plotkin et al. 2013). Crop-specific parameters were kept the same across all fields. Crop management inputs varied by treatment (table 1).
APEX was set up to run the CT and ST fields separately, with hydrologic connections among the CT fields (1, 3, and 5) and among the ST fields (2, 4, and 6). The hydrologic connections and the characteristics of each field were described in subarea input files for CT and ST, respectively. The subarea files assign each field its soil input file and its associated field operation management file, reflecting the CT and ST tillage, planting, harvesting, fertilization, irrigation, and pesticide application operations. APEX simulated the CT and ST conditions based on the information provided in the field operation files. The field management described in the Management section was simulated using different combinations and numbers of tillage operations for CT and ST. Moreover, for some key field operations, the curve number (CN) estimated by the model at the time of the tillage operation was manually overwritten through the management file to reflect the impact of the different tillage operations on runoff. The application of glyphosate to terminate the cover crop was simulated using the operation that forces the model to stop simulating plant growth, thus converting the plant biomass to residue.
The Hargreaves and Samani (1985) evapotranspiration estimation method has been found to perform well for temperate climates (Nair et al. 2011; Mudgal et al. 2010) and was used for these simulations (table 1). The modified rational equation (Kuichling 1889) was selected to calculate the peak runoff rate. The CN method (USDA NRCS 2004) was selected to estimate the surface runoff. The stochastic CN estimator and the variable daily CN soil moisture index (Wang et al. 2012) were selected to estimate daily CN adjustments. The MUST formulation was selected for sediment transport (Williams 1995). Crop growth was simulated using pre-existing crop characteristics contained in the model crop parameter file.
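For reference, the published forms of two of the selected methods are given below. This is a sketch of the standard equations, not the APEX source code; within APEX the Hargreaves exponent is the adjustable parameter P34 discussed later, and daily CN values are further adjusted for soil moisture.

```latex
% Hargreaves and Samani (1985) reference evapotranspiration (mm d^{-1}),
% with R_a the extraterrestrial radiation expressed as equivalent evaporation:
ET_0 = 0.0023 \, R_a \, (T_{\max} - T_{\min})^{0.5} \, (T_{mean} + 17.8)

% NRCS curve number runoff (mm), applied when P > 0.2S:
Q = \frac{(P - 0.2S)^2}{P + 0.8S}, \qquad S = \frac{25400}{CN} - 254
```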
In this study we utilize observed crop yields, surface and subsurface flow, and sediment transport data collected at the site from 1999 to 2012. Yield comparisons were based upon 1999 to 2009 cotton and peanut yields. Surface runoff, subsurface flow, and sediment yield data from 1999 to 2012 were used. Along with the observed data, APEX simulations were used to summarize differences associated with the treatments. The APEX model was calibrated using data collected from CT field 1 and ST field 2 and validated using data from CT field 3 and ST field 4. Yields, and indirectly surface runoff, from fields 5 and 6 were highly impacted by nematode and fertility issues related to the sandier soil texture of those fields, so those fields were not used for model evaluation. Only a calibration period (1999 to 2012) was simulated for the subsurface flow due to a lack of replication of these data. The primary focus of calibration was the hydrologic budget, with secondary focus on crop yields and sediment transport. To accomplish this, parameters that impacted evapotranspiration were first determined through calibration. These parameters were then fixed, and additional parameters that further impacted surface and subsurface runoff and crop yield were determined. Lastly, parameters that impacted sediment transport were determined. Calibration parameters were selected according to a combination of professional judgment and literature values (table 2). A Sobol sensitivity analysis (Sobol 1993) was used to identify the 13 most sensitive parameters. These parameters were then concurrently evaluated by manual adjustments and auto-calibration (APEX-CUTE) to produce the optimum fit for the calibration. Fields 1 and 2 were calibrated concurrently, obtaining the optimum fit for both fields with a single set of calibration parameters.
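The sensitivity screening step can be illustrated with first-order and total-order Sobol indices. The sketch below uses the open-source SALib package in place of the APEX-CUTE workflow actually used; the three parameters shown, their bounds, and the run_apex wrapper are hypothetical placeholders for full APEX runs returning an objective value such as the NSE of daily runoff.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical subset of APEX parameters and bounds (illustrative only; the study
# screened a larger set and combined manual adjustment with APEX-CUTE auto-calibration).
problem = {
    "num_vars": 3,
    "names": ["P34", "P92", "APM"],   # Hargreaves exponent, CN/SM coefficient, peak rate factor
    "bounds": [[0.5, 0.6], [0.5, 2.0], [0.5, 1.5]],
}

def run_apex(params):
    """Placeholder for a wrapper that writes APEX inputs, runs the model, and
    returns an objective value (e.g., NSE of daily surface runoff)."""
    p34, p92, apm = params
    # Synthetic response surface so the sketch runs end to end.
    return -((p34 - 0.55) ** 2 + 0.5 * (p92 - 1.0) ** 2 + 0.1 * (apm - 1.0) ** 2)

X = saltelli.sample(problem, 256)     # 256 * (2 * 3 + 2) parameter sets
Y = np.array([run_apex(x) for x in X])
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order = {s1:.2f}, total-order = {st:.2f}")
```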
Plotkin et al. (2013) conducted a simulation tracking pesticide losses at the site with the APEX model. APEX was calibrated and validated using an 11-year record (1999 to 2009) of crop yield, surface runoff, and subsurface flow and an eight-year record (1999 to 2006) of soluble pendimethalin and fluometuron herbicide losses. Monthly runoff data produced r2 values between 0.62 and 0.82 when comparing observed and simulated surface runoff, Nash-Sutcliffe efficiency (NSE) values between 0.62 and 0.80, and percentage bias (PBIAS) values within ±19% during the calibration and validation periods. Monthly subsurface flow data produced r2 values between 0.19 and 0.51 when comparing observed and simulated subsurface flow, NSE values between 0.14 and 0.46, and PBIAS values within ±27% during the calibration and validation periods. Measured and predicted crop yield met satisfactory statistical criteria. In that study, the weighted runoff, crop yield, and pesticide losses across all the CT fields and all the ST fields, respectively, were used for calibration and validation.
Data Analysis and Model Performance. Precipitation, runoff, and subsurface flow were reduced to daily, monthly, and annual totals. Flow rates were multiplied by the collection time interval to determine volumes and summed over the observation interval. Model performance was assessed based upon comparisons between observed and simulated crop yield, surface runoff, subsurface flow, and sediment yield. NSE, PBIAS, the coefficient of determination (r2), and root mean square error (RMSE) were calculated using the R package hydroGOF (Zambrano-Bigiarini 2017) and used to assess model performance. NSE is a normalized statistic that determines the relative magnitude of the residual variance compared to the measured data variance (Nash and Sutcliffe 1970). An NSE of 1.0 is the optimal value. A negative NSE indicates that the mean of the measured data would be a more accurate predictor than the simulation. An RMSE of 0 indicates a perfect fit. PBIAS measures the average tendency of the simulated data to be larger or smaller than their observed counterparts (Gupta et al. 1999). The optimal value of PBIAS is 0, with low absolute magnitude values indicating accurate simulation results. A positive PBIAS indicates an underestimation of the observed values whereas a negative PBIAS indicates an overestimation.
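For clarity, the statistics listed above are defined (with O_i the observed values, S_i the simulated values, Ō the mean of the observations, and n the number of pairs, following the PBIAS sign convention stated here) as:

```latex
NSE = 1 - \frac{\sum_{i=1}^{n} (O_i - S_i)^2}{\sum_{i=1}^{n} (O_i - \bar{O})^2}, \qquad
PBIAS = 100 \cdot \frac{\sum_{i=1}^{n} (O_i - S_i)}{\sum_{i=1}^{n} O_i}, \qquad
RMSE = \sqrt{\tfrac{1}{n} \sum_{i=1}^{n} (O_i - S_i)^2}
```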
These performance statistics are widely used measures of model performance (Moriasi et al. 2007; Wang et al. 2012; Baffaut et al. 2017). Following Baffaut et al. (2017) and Wang et al. (2012), acceptable performance evaluation criteria (PEC) for the APEX simulations were established (table 3). PEC for daily surface runoff were set at r2 ≥ 0.5, NSE ≥ 0.30, and |PBIAS| ≤ 25% based upon recommendations of Baffaut et al. (2017). Monthly comparisons were used for subsurface flow and sediment yield, while crop yields were evaluated annually (table 3). No established PEC exist for subsurface flow. In general, simulation of subsurface flow would be expected to be more difficult than simulation of surface runoff and similar in difficulty to simulating sediment transport. Subsurface flow can be affected by conditions that develop outside of the simulated area, such as lateral flow entering from outside the simulation area. In addition, there is often greater uncertainty associated with estimation of soil characteristics in the subsurface. Because of this, the PEC for comparisons of monthly subsurface flow were set at r2 ≥ 0.5, NSE ≥ 0.3, and |PBIAS| ≤ 60%. For annual crop yields, PEC were set at r2 ≥ 0.6 and |PBIAS| ≤ 25% as suggested by Wang et al. (2012). The NSE PEC for annual crop yield was set at ≥0.3.
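A minimal sketch of applying the daily surface runoff PEC quoted above (r2 ≥ 0.5, NSE ≥ 0.30, |PBIAS| ≤ 25%) to paired observed and simulated series; this mirrors, rather than reproduces, the hydroGOF-based evaluation used in the study, and the example series are hypothetical.

```python
import numpy as np

def performance_stats(obs, sim):
    """r2, NSE, PBIAS (%), and RMSE for paired observed/simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    pbias = 100.0 * np.sum(obs - sim) / np.sum(obs)   # positive = underestimation
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return {"r2": r2, "NSE": nse, "PBIAS": pbias, "RMSE": rmse}

def meets_daily_runoff_pec(stats):
    """Daily surface runoff criteria quoted in the text (after Baffaut et al. 2017)."""
    return stats["r2"] >= 0.5 and stats["NSE"] >= 0.30 and abs(stats["PBIAS"]) <= 25.0

obs = [0.0, 2.1, 0.0, 14.5, 3.2, 0.0, 7.8]   # hypothetical daily runoff (mm)
sim = [0.1, 1.8, 0.0, 12.9, 4.0, 0.2, 8.5]
stats = performance_stats(obs, sim)
print(stats, meets_daily_runoff_pec(stats))
```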
Table 2. Key APEX parameters determined through calibration for the Gibbs site.
Table 3. Performance evaluation criteria (PEC) used for model evaluation.
Results and Discussion
Calibration Parameters. Key APEX parameters resulting from the calibration are presented in table 2. From the sensitivity analysis, the yield simulations had the greatest sensitivity to P28 (sensitivity index, SI = 0.60), P34 (SI = 0.20), and P92 (SI = 0.15). P28 is the upper N fixation limit. P34 is the exponent in the Hargreaves evapotranspiration equation. P92 is inversely related to the daily CN adjustment based on soil moisture (SM) content. Wang et al. (2012) reported that APEX crop yield is typically sensitive to P34. For hydrology, the greatest sensitivity was observed for P92 (SI = 1.0). The initial curve number for antecedent moisture condition 2 (CN2) is impacted by the CN index coefficient (P42) if the variable daily CN SM index method is used, or by P92 if the variable daily CN nonlinear CN/SM with depth weighting method is used. The variable daily CN nonlinear CN/SM with depth weighting method was used for this study. P92 and P34 have been reported to be influential for runoff and water-related outputs (Wang et al. 2014). Since CN2 is fixed by land use (crop) type, conservation practice, and hydrologic soil group, daily CN sensitivity is reflected in changes in P92. For sediment, the greatest sensitivity was observed for P92 (SI = 0.70) and APM (SI = 0.60). APM is the peak runoff rate-rainfall energy adjustment factor. Slope length was set to 5 m to account for the crop rows running perpendicular to the general slope of the fields and to reduce predicted sediment yields.
Precipitation and Irrigation. The average annual precipitation from 2000 to 2012 was 1,107 mm, with a maximum of 1,488 mm in 2005 and a minimum of 773 mm in 2011 (table 4). Irrigation varied from a minimum of 25 mm in 2003 to a maximum of 302 mm in 2007 (table 4). Extended periods of reduced precipitation were observed from 1999 to 2001 and 2010 to 2011. The maximum observed single event occurred on March 28, 2009, and was 124 mm.
Crop Yields. Yield data were available for cotton and peanut production years from 1999 to 2009. Observed cotton lint yields varied from 0.60 Mg ha−1 to 1.80 Mg ha−1 while observed peanut yields varied from 3.0 Mg ha−1 to 5.5 Mg ha−1. Treatment yields were not statistically different (alpha = 0.05) for either the cotton or the peanut crops. Observed cotton yields were above the study average in 2001 and 2009 (figure 2). Precipitation in 2001 was below average, but the crop received greater irrigation that year (table 4). Precipitation in 2009 was above average (table 4). Cotton yields were below the study average in 2000 (figure 2). Precipitation totals were slightly below average in 2000 (table 4). Peanut yields were not as variable as the cotton yields (figure 3) (coefficient of variation of 32% for cotton versus 20% for peanut). Both the north and south blocks were in no-till after the 2009 growing season. The dry matter yield of energy sorghum harvested from the north (former CT) fields in 2012 was 14.8 Mg ha−1, while it was 16.6 Mg ha−1 from the south (former ST) fields.
Table 4. Annual observed precipitation, irrigation, field and treatment surface runoff, and treatment subsurface flow for the study period.
Figure 2. APEX (a and c) calibration and (b and d) validation results for annual cotton lint yields under (a and b) conventional tillage (CT) and (c and d) strip tillage (ST).
Cotton and peanut crop yields from 1999 through 2009 were simulated. A comparison between APEX simulated and observed crop yields indicates good overall predictability for cotton lint (figure 2) and for peanut (figure 3) for both the calibration and the validation fields. Most cotton lint yield estimates were within ±20% of the observed values. One exception was 2009, a year with above average annual precipitation, when the model underestimated cotton lint yields for all fields (figure 2). Most peanut yield estimates were within ±10%. Summary statistics for the calibration and the validation periods indicated mixed simulation success for both cotton lint and peanut yield (table 5). Cotton lint simulations were satisfactory for all but the CT calibration period. While the model tracked trends in cotton lint yield well, deviations from the observed yield were large for some years (figure 2). While APEX tracked trends in the observed peanut yields (figure 3), PEC were only satisfactory for the validation periods for both treatments (table 5). The greater difficulty simulating peanut yields may have been due to the limited number of peanut production years (n = 4). RMSEs for cotton lint were <0.33 Mg ha−1, or within 25% of the observed yield, for all comparisons (table 5). RMSEs for peanut yields were <0.74 Mg ha−1, or within 17% of the observed yield, for all comparisons (table 5). Gassman et al. (2010) summarized comparisons of APEX simulated and observed crop yields for several different crops. Their study reported good agreement for model simulations of cotton, but no results with respect to peanuts (Gassman et al. 2010). Estimates of yield obtained here appear to be in agreement with the ranges presented by Gassman et al. (2010).
Figure 3. APEX (a and c) calibration and (b and d) validation results for annual peanut yields under (a and b) conventional tillage (CT) and (c and d) strip tillage (ST).
Table 5. Calibration and validation statistics for the annual cotton lint and peanut yields by field and treatment.
Surface and Subsurface Hydrology. Observed annual surface runoff and subsurface flow varied considerably from 1999 to 2012 (table 4). The first year of the study, 1999, was not included in the annual comparisons because hydrologic data were not collected for the entire year. The highest annual surface runoff total was observed from CT field 1 in 2003: 718 mm, or 58% of annual rainfall. The 13-year (2000 to 2012) average annual surface runoff was 214 mm for the CT and 125 mm for the ST (table 4); CT surface runoff was 1.7 times that of the ST. For the period from 1999 through 2009, when the tillage treatments were in place, surface runoff from the CT fields was consistently greater than that from the ST fields (table 4). Cumulative annual surface runoff during the 1999 to 2009 period, when the treatments were in place, was significantly different between treatments (alpha = 0.025) (Bosch et al. 2012).
The highest annual subsurface flow was observed from the ST fields in 2005: 509 mm (table 4), or 34% of the annual rainfall. The 2000 to 2012 average annual subsurface flow was 115 mm for the CT and 200 mm for the ST (table 4). Subsurface flow for the period from 2000 to 2012 for the ST was 1.7 times that of the CT. For the period from 1999 through 2009, when the tillage treatments were in place, ST subsurface flow was consistently greater than that from the CT (table 4). Annual subsurface flow during the 1999 to 2009 period, when the treatments were in place, was significantly different between treatments (alpha = 0.025) (Bosch et al. 2012). Average total water loss for the two treatments was nearly equal: 329 mm for the CT and 325 mm for the ST, or approximately 30% of annual rainfall. Following the conversion of the north block CT fields to no-till in 2010, surface runoff from these fields decreased and subsurface flow increased to levels approximately equivalent to those of the south block ST (table 4).
Goodness-of-fit statistics for the daily estimates of surface runoff for the CT and ST are provided in table 6. PEC for the daily surface runoff simulations indicate a satisfactory fit for both the calibration and the validation periods for the CT and ST, apart from a high negative PBIAS for the ST validation period (tables 3 and 6). RMSE for daily surface runoff averaged 2.46 mm across all comparisons. Comparisons between simulated and observed annual totals of surface runoff for the CT treatment indicated an overestimation of runoff for the years with less runoff (<300 mm) (figure 4). Similar results were observed for the ST treatment for the years with low runoff totals (<200 mm) (figure 4). Both treatments had an equal distribution of over- and underprediction for years with greater annual runoff (>200 mm), with the exceptions of large underpredictions for the CT in 2003 and the ST in 2002 (figure 4). For both the 2002 ST and 2003 CT periods, APEX consistently underpredicted surface runoff during the higher runoff producing periods. Examination of the daily runoff estimates indicated an even distribution of estimates above and below the 1:1 line for the CT (figure 5). As noted with the PEC results, there was a tendency to overestimate daily surface runoff for the ST (figure 5). Considerable scatter was observed around the 1:1 line for the small events (<50 mm) (figure 5), indicating difficulty simulating the events that produced less runoff for both treatments.
Table 6. Calibration and validation statistics for daily surface runoff for the calibration and validation fields.
Figure 4. Calibration and validation annual observed and simulated surface runoff for the (a) 2003 conventional tillage (CT) and (b) 2002 strip tillage (ST).
Figure 5. Comparison between observed and simulated daily runoff.
Gassman et al. (2010) present evaluation criteria for several different APEX studies. For the field-scale studies (<100 ha), monthly surface runoff r2 values ranged from 0.54 to 0.91 while NSE values ranged from 0.44 to 0.86. Event-based comparisons between APEX simulated and observed surface runoff for a 0.75 ha field in Georgia reported an NSE of 0.70 and a PBIAS of 2.6% (Ramirez-Avila et al. 2017). Similar model performance was reported by Baffaut et al. (2017). The statistics found here fall within the ranges reported in the literature for surface runoff.
Goodness-of-fit statistics for the monthly estimates of subsurface flow aggregated by treatment are provided in table 7. Only a calibration period was simulated for the subsurface flow due to a lack of replication of these data. PEC statistics for monthly subsurface flow indicated less than satisfactory fits for both treatments (tables 3 and 7). NSE and PBIAS coefficients were satisfactory, but r2 values fell below the established monthly criteria. Comparisons of annual total subsurface flow for the two treatments generally indicated good predictions of annual totals for both the CT and ST (figure 6). Exceptions were observed in 2002 and 2011 for the CT and in 2002 and 2005 for the ST (figure 6). As reported by Bosch et al. (2012), years with high subsurface flow are driven by saturated conditions in the spring and the fall. Underprediction of these large subsurface flow events by the model indicates difficulty representing these conditions. Gassman et al. (2006) found good agreement between average monthly APEX simulated and observed tile flow from several sites in the Midwest, with a reported r2 of 0.70. Prior APEX simulations of the Gibbs site produced monthly subsurface flow statistics of r2 from 0.19 to 0.51, NSE from 0.14 to 0.46, and PBIAS from 7.1% to 27.4% (Plotkin et al. 2013). The results found in this study (table 7) indicate an improvement over the prior simulations. While APEX simulations of annual subsurface flow tracked observed patterns, our results indicate greater difficulty representing lateral subsurface flow in the Coastal Plain landscape than has been reported for simulating tile flow in the midwestern United States.
Figure 6. Annual observed and simulated subsurface flow for the (a) conventional tillage and (b) strip tillage.
Table 7. Calibration statistics for monthly subsurface flow, aggregated by treatment.
Sediment Yields. Sediment yield data for 2000 to 2009 have been presented by Endale et al. (2014) and are included here for completeness. Sediment data for 2010 to 2012 were added to this data set. From 2000 to 2009, sediment yields from the CT were consistently greater than those from the ST (figure 7). Sediment yields from the south block ST, and later from the north block fields that were converted to no-till (2010 to 2012), were consistently <1,000 kg ha−1. Average annual sediment yield from the CT from 2000 to 2009 was 1,823 kg ha−1 while it was 256 kg ha−1 from the ST during the same period. All fields were in no-till from 2010 to 2012, while pearl millet and energy sorghum were grown. As expected, sediment yields dropped considerably from 2010 to 2012 for the fields that had been in CT in the prior years (figure 7). The relatively large sediment yield observed for field 2 in 2012 resulted from a single large event that occurred from August 7 to 8, 2012. During this event, 155 mm of precipitation generated 34 mm of runoff from field 2 and a sediment load of 967 kg ha−1. The relatively large runoff volume for this event led to a large estimate of sediment load.
Figure 7. Annual observed and simulated sediment yield for the (a) conventional tillage (CT) calibration and (b) validation and the (c) strip tillage (ST) calibration and (d) validation.
Goodness-of-fit statistics for the monthly estimates of sediment transport aggregated by treatment are provided in table 8. Predictability of sediment transport was low for both treatments. PEC for monthly sediment yield indicated less than satisfactory fits for both treatments for the calibration and the validation periods. Examination of the annual sediment yield data indicated an equal distribution of over- and underestimation of annual sediment yield for the CT treatment, but an overestimation of annual sediment yield for the ST treatment (figure 7). Predictions of sediment load were particularly high for field 4, as indicated by the large negative monthly PBIAS (table 8) and the annual sediment yield comparison (figure 7). Long-term patterns, examined by summing the cumulative sediment yield over the entire simulation period, showed that APEX tracked long-term behavior well for all fields except field 4 (figure 8). Examination of cumulative sums of surface runoff indicated greater errors in the estimation of surface runoff for field 4 as well (data not shown). As discussed by Bosch et al. (2012), field 4 had higher clay fractions and lower sand fractions in the top 50 cm of the soil profile. Incorporation of this information into the APEX parameterization led to greater simulated surface runoff (table 6) and greater simulated sediment yields (figures 7 and 8) when compared to field 2, which had otherwise similar physical characteristics and identical management.
Prior comparisons between observed and simulated soil loss point to the difficulty of representing the natural variability of soil loss with deterministic models (Nearing 1998). Evaluations of various soil erosion models with large data sets have consistently shown that these models tend to overpredict soil erosion for small measured values and underpredict soil erosion for larger measured values (Nearing 1998). For field-scale studies (<100 ha), Gassman et al. (2010) reported annual sediment yield r2 values ranging from 0.68 to 0.99 and NSE values ranging from 0.60 to 0.99. Baffaut et al. (2017) presented PEC from 12 different sites, reporting event sediment statistics of r2 from 0.25 to 0.80, NSE from −0.26 to 0.51, and PBIAS from 9% to 85%. Senaviratne et al. (2018) simulated event-based sediment transport on three watersheds ranging from 1.54 to 4.44 ha and reported r2 from 0.25 to 0.28, NSE from −0.04 to 0.22, and PBIAS from −35% to 20%. The values reported here are on the low end of the ranges found in the literature, indicating greater difficulty simulating sediment transport at this site.
Table 8. Calibration and validation statistics for monthly sediment yield, aggregated by treatment.
Sediment yields in this study averaged 1,546 kg ha−1 y−1 for the CT and 255 kg ha−1 y−1 for the ST. As noted, it can be difficult to predict low sediment yields (Nearing 1998). The relatively low sediment yields, particularly for the ST, may have adversely affected model performance. In addition, low sediment loads are difficult to measure precisely due to inherent measurement uncertainty (Harmel et al. 2006a; Harmel and Smith 2007). As observed with the August of 2012 event for field 2, annual totals can be dominated by single events in cases where sediment loads are relatively small. Errors in estimating sediment totals can thus be heavily influenced by observations for these events. Harmel et al. (2009) reported that typical uncertainty is about ±16% to ±27% for sediment. Sediment samples, as well as all surface runoff water samples for this study, were collected from a 25 mm diameter intake strainer mounted in the approach section of the H flumes. Automated samplers can disproportionately sample total sediment and sediment particle size distribution depending upon where the sample intake is located (Federal Inter-Agency Sedimentation Project 1940). Intakes that are mounted on the floor of the H flume, as was done in this study, can disproportionately sample sand-rich sediment. Studies examining the placement and type of sample intake indicate that intakes mounted to the bottom of the measurement device tend to overestimate sediment yields by up to 300% due to the distribution of larger particles along the channel floor (Gettel et al. 2011). Studies by Harmel et al. (2006b) indicate that this is the greatest concern for particles exceeding 62 μm in diameter. No data were collected on the particle size distribution of the sediment transported from the fields studied here. However, rainfall simulation studies conducted on 2 m by 3 m plots located near these fields indicate that particles exceeding 53 μm in diameter can make up 20% to 40% of the sediment for CT and 60% to 70% of the sediment for ST (Strickland et al. 2012). Preferential deposition of the larger particles can be anticipated between the plot and the field scales. However, transported particles exceeding 62 μm in diameter could have led to a biased oversampling of total sediment load in this study. Difficulties associated with collecting samples during low flows, equipment failures, and disproportionate settling of larger particles within the sample intake tubing may have reduced this bias. The net impact of sediment sampling uncertainty is unknown but may have affected model performance statistics.
On the modeling side, a stochastic approach was used to estimate rainfall intensity because this information was not provided as input data. Since rainfall intensity has a direct impact on sediment yield, the stochastic component used by the model can lead to unsatisfactory results in one-to-one comparisons even when the simulation of cumulative values is quite accurate. The similar averages calculated for the observed and simulated annual sediment (table 8), and the high r2 values calculated for the sorted observed and simulated cumulative monthly values (0.94 and 0.99 for CT fields 1 and 3, and 0.71 and 0.78 for ST fields 2 and 4), give us some confidence in the ability of the model to simulate long-term sediment yield.
Summary and Conclusions
Fourteen years of crop yield, surface runoff, subsurface flow, and sediment transport data were collected comparing CT to ST in the Georgia Coastal Plain. No treatment differences were found in either the observed cotton lint or the peanut crop yields from 1999 to 2009. Observed surface runoff from CT was found to be 1.7 times that of ST, while observed subsurface flow from ST was found to be 1.7 times that of CT (table 4). Total water loss (surface and subsurface) was nearly equivalent for the two systems, approximately 30% of annual rainfall.
The APEX model was calibrated by adjusting 13 parameters (table 2). Sensitivity analysis found the most sensitive parameters to be P28, the upper N fixation limit; P34, the exponent in the Hargreaves evapotranspiration equation; P92, the coefficient inversely related to the daily CN adjustment; and APM, the peak runoff rate-rainfall energy adjustment factor. While considerable variability existed between the annual (figure 4) and event (figure 5) based observed and simulated surface runoff, results were satisfactory based upon typical PEC used to evaluate surface runoff. For the calibration period, the r2 for the comparison between simulated and observed daily surface runoff was 0.72 for both the CT and ST, and the NSE was 0.70 for the CT and 0.72 for the ST. These values are considered satisfactory based upon established PEC (table 3) (Moriasi et al. 2007). While PBIAS for the ST validation period exceeded the established standards, the other PEC fell within established limits for satisfactory model performance (tables 3 and 6). While examination of the daily runoff estimates indicated an even distribution of estimates above and below the 1:1 line for the CT, considerable scatter was observed around the 1:1 line for the small events (<50 mm) (figure 5). These results indicate difficulty simulating the events that produced less runoff for both treatments.
Mixed results were found for model performance of crop yield, while model simulations of subsurface flow and sediment yield were less than satisfactory. For both the cotton lint and peanut yield simulations, long-term aggregated predictions across all fields were within ±10% of the observations, indicating good long-term model performance. The r2 for the comparison between simulated and observed annual crop yields for the calibration and validation periods varied from a low of 0.24 (calibration period for ST peanut) to a high of 0.84 (validation period for CT peanut). The NSE values for the annual crop yields for the calibration and validation periods varied from a low of −1.04 (calibration period for CT peanut) to a high of 0.72 (validation period for CT peanut). Despite these relatively low performance statistics, the model matched general trends in yield for both crops over the observation period (figures 2 and 3). While the simulations of monthly subsurface flow did not meet the established PEC, predictions of annual totals generally tracked observed trends (figure 6). PEC for monthly sediment yield indicated a less than satisfactory fit for both treatments for the calibration and the validation periods. Examination of the annual sediment yield data indicated an equal distribution of over- and underestimation of annual sediment yield for the CT treatment (figures 7a and 7b). For the ST, where sediment loads were typically low, APEX tended to overestimate annual sediment yield (figures 7c and 7d).
Figure 8. Sum of annual observed and simulated sediment yield for the (a) conventional tillage (CT) calibration and (b) validation and the (c) strip tillage (ST) calibration and (d) validation.
The established APEX simulation presents a good framework for examining crop yield and surface runoff in conventional and conservation tillage systems in the Coastal Plain region. APEX simulations of annual subsurface flow should prove useful for examining long-term trends. Less than satisfactory results for the subsurface flow and sediment simulations indicate that further refinement of the model calibration or the model framework is necessary before it is used as a tool for examining these processes under Coastal Plain conditions such as those examined here. This simulation illustrates the trade-offs associated with multi-objective model applications. The selected parameter set yielded the optimum results for jointly examining surface runoff, subsurface flow, crop yield, and sediment. While better model performance might have been obtained by narrowing the focus, this approach is believed to most accurately describe the many interactions of agricultural production systems.
Acknowledgements
The authors are grateful for support for this research and assessment through the USDA Natural Resources Conservation Service Conservation Effects Assessment Project Watershed Assessment Studies and Agricultural Research Service National Program 211. This research is a contribution of the USDA Agricultural Research Service Gulf Atlantic Coastal Plain Long-Term Agroecosystem Research site. The authors are also grateful for the assistance of the many scientists and field and laboratory technicians who have supported the research.
Disclaimer
The findings and conclusions in this publication are those of the author(s) and should not be construed to represent any official USDA determination or policy. Mention of company or trade names is for description only and does not imply endorsement by the USDA. The USDA is an equal opportunity provider and employer.
© 2020 by the Soil and Water Conservation Society