Rivers are a means of rapid and long-distance transmission of pathogenic microorganisms

In Central and Northern Europe, where the cultural response to environmental stress involved the direct consumption of milk, dairying appears to have buffered that stress and may have fueled the patterns of human growth and body size observed in the late Holocene. The trends observed in this paper may be a direct or indirect consequence of shifts in energy allocation to somatic growth associated with the digestion of lactose, but may also be influenced by patterns of disease load during growth, changes in weaning patterns, population density, migration, or genetic drift. A life-history framework may help to understand how the interaction of such factors influences somatic investment. We note that there are other regions where LP genetic variants are found in high frequencies, including the Mongolian Steppe, and convergent evolution of MCM6 in East Africa, with what may be stronger directional selection among the Maasai. At present, we do not have data of sufficient resolution to investigate whether ancient selection and dairying fueled phenotypic change in these regions. However, our results suggest that the transition to agriculture may have had regionally specific influences on human populations that can be elucidated through analyses of long-term diachronic trends in human-culture-environment interactions. Long-term trends are best investigated through broadscale integration of bio-archaeological and phenotypic data with aDNA, paleoecology, and archaeological data that account for the spatiotemporal complexity of Holocene cultural and dietary transitions.

Waterborne zoonotic pathogens pose a public health risk due to their consistent point and non-point sources, which can significantly impair the ecological quality of aquatic systems. Pathogens enter streams and rivers through a variety of processes, including overland flow and groundwater filtration.

Viruses, bacteria, and parasites persist for varying amounts of time, especially within streambed sediments, leading to long-term disease transmission. Of particular concern is the protozoal parasite Cryptosporidium, which can remain infective for weeks to months under cool and moist conditions, with the infectious state largely resistant to chlorination. The 50% infectious dose of livestock-derived Cryptosporidium, specifically C. parvum, for healthy humans ranges between 10 and 1,000 oocysts. Monitoring programs assess the microbiological quality of waters to minimize the health risk associated with pathogenic microorganisms. However, as it is still unfeasible to experimentally monitor pathogen levels at the high spatiotemporal resolution often needed to assess risk, sampling is often complemented with a model. Both environmental and hydrological processes control the residence time and persistence of pathogens within a stream network. Current models consider stream flow conditions, but it is imperative to incorporate the wide variety of processes that control the transport and retention of Cryptosporidium in a dynamic stream environment. Commonly, hyporheic exchange, the two-way exchange of water with the underlying sediments induced by pressure variations associated with stream flow over stream channel topography, is ignored in surface water modeling of pathogen and microbial transport in streams. However, as the size and specific gravity of Cryptosporidium are low, it is mainly removed from the water column by hyporheic exchange and, to a lesser extent, by sedimentation via association with larger and denser suspended aggregates. Microbial interaction with the streambed and other stream transient storage areas has been greatly underestimated by only assuming gravitational settling without considering the key mechanism of hyporheic exchange.

Hyporheic exchange of particles differs from that of solutes because of strong particle deposition during porewater transport. Even though the settling velocities of fine particles are low, gravitational settling can be more important within porewaters, where porewater velocities are extremely small. Filtration in the bed leads to pathogen immobilization. However, filtered microbes and fine sediment are often remobilized, corresponding to reversible filtration. This slow release of microbes after initial deposition has been observed in streams. If models do not consider hyporheic exchange, then pathogens will not be conceptualized as being in the near-bed or hyporheic region where they can experience these additional immobilization processes, thus underestimating pathogen immobilization and retention in streams. This underestimation of pathogen accumulation during base flow can lead to inaccurate predictions of pathogen resuspension during storm events, when the majority of pathogens are transmitted downstream. The main objectives of this modeling study were to (1) improve storm flow predictions of potentially resuspended Cryptosporidium oocysts by appropriately characterizing the transport and retention of Cryptosporidium during base flow conditions through incorporating hyporheic exchange and immobilization processes, (2) calculate residence times of Cryptosporidium in surface water and accumulation in immobile zones, such as streambed sediments, to estimate long-term persistence, and (3) estimate Cryptosporidium accumulation during base flow conditions that can potentially be resuspended during a storm event. We apply a previously developed mobile-immobile model for microbes to Cryptosporidium, which accounts for hyporheic exchange and transport through pore water, reversible filtration within the streambed, and inactivation of microbes, to accurately predict the long-term persistence of pathogenic microorganisms within stream storage areas.

The mobile-immobile model framework is convenient for river transport because the water column can be considered mobile while material retained in streambed sediments or slow-moving surface waters is comparatively immobile. This model framework, in contrast to previous work, was developed for microbial transport in streams to incorporate detailed measurements of transport and retention processes at multiple scales, which allows lab-scale measurements to parameterize key transport and retention processes and apply them to reach-scale modeling. We used the model to assess the transport, retention, and inactivation of Cryptosporidium within stream environments, specifically under representative conditions of California’s Central Valley, where pathogen exposure risk can be higher due to agricultural and wildlife non-point sources. Comparison of modeling results with and without immobilization processes provided novel insights into the significance of hyporheic exchange and subsequent immobilization processes for pathogen retention and long-term persistence within streams. This study provides new understanding of pathogen transport and retention dynamics in streams to help improve future risk assessment.

The study site is California’s Central Valley and the Sierra Nevada foothills, draining ~23,000 sq. miles of the western slope of the Sierra Nevada mountains down to the floor of the Central Valley. These streams originate in snow-fed lakes and streams surrounded by United States Forest Service and National Park lands where cattle and other livestock are grazed, and descend through rolling foothills to the low-lying Central Valley, with food crop agriculture and animal feeding operations supported by a network of man-made canals. This site thus has potential inputs of fecal organisms from grazed allotments at the higher elevations and from animal feeding operations along the lower reaches and canals, in addition to inputs from wildlife such as striped skunks, coyotes, California ground squirrels, and yellow-bellied marmots. California’s Central Valley has a Mediterranean climate with highly seasonal water inputs, with precipitation primarily in the winter and spring. Following the spring snowmelt period, most of the Central Valley is relatively dry, with occasional summertime precipitation at higher elevations.

Approximately 1.7×10⁵ Cryptosporidium oocysts/animal/day are shed from a California adult beef cow and 6×10⁵ oocysts/animal/day from a California beef calf. An average beef herd comprises approximately 100 adult cows and 150 calves, which equates to 1.1×10⁸ oocysts/day/livestock operation. Cryptosporidium species include C. bovis, C. ryanae, and the more infectious C. parvum.
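As a quick check on the per-operation arithmetic above, a minimal Python sketch (shedding rates and herd sizes taken from the text; variable names are ours) reproduces the ~1.1×10⁸ oocysts/day figure:

```python
# Illustrative check of the herd-level Cryptosporidium shedding estimate,
# using the per-animal rates and herd sizes quoted in the text.
adult_shedding = 1.7e5   # oocysts/animal/day, California adult beef cow
calf_shedding = 6.0e5    # oocysts/animal/day, California beef calf
n_adults = 100           # average adult cows per beef herd
n_calves = 150           # average calves per beef herd

herd_load = n_adults * adult_shedding + n_calves * calf_shedding
print(f"Oocysts shed per operation per day: {herd_load:.2e}")  # ~1.1e8
```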

Oocysts deposited on the terrestrial portion of a watershed by beef cattle only reach streams through overland runoff and direct deposition of feces. Oocysts remain trapped in the fecal matrix or are eluted and reach streams via groundwater or overland flow, with loads dependent on storm intensity, soil structure, and infiltration rates. The highest-risk fecal pats are those deposited directly into the stream, less than 1–5% of total fecal loads, depending on cattle access to the stream via lack of fencing. Thus, assuming that 20 beef lots may impact a single stream, a reasonable estimate of the Cryptosporidium that could potentially reach a stream in the region from a combination of direct deposition, overland flow, and groundwater inputs is approximately 1–5% of the oocysts shed. Dairy lots are assumed to have minimal release of Cryptosporidium oocysts because the operations are confined and runoff into the stream is not permitted. Other non-point sources include wildlife, where the highest loads come from the California ground squirrel, averaging 1.13×10⁵ C. parvum oocysts/animal/day. Ground squirrel populations result in loading rates of 9×10⁵ oocysts/hectare/day for low-density populations with 8 to 94 adults/hectare in California. Therefore, non-point sources could potentially be a large source of Cryptosporidium to streams, even larger than beef herds and dairy lots combined. To account for these variable inputs, the highest figure from beef herds was assumed to represent an upper boundary of the oocyst load, given the uncertainty in the number of oocysts shed by beef lots, the limited oocysts that may be released from dairy lots, and the additional contribution of non-point sources in the watershed from wildlife excretions.

Stochastic theory predicts that the slowest mechanism will control the long-term tailing behavior and model parameterization of a tracer concentration vs. time surface water profile. This concept can link multiple scales of transport, as previously demonstrated by combining lab-scale and field reach-scale studies on solute, particle, and microbial transport and retention in streams. We apply this scaling concept in this study by using the column Cryptosporidium model parameters from a published study to characterize pathogen transport and retention within the immobile zone of the reach-scale mobile-immobile model framework. Specifically, Cryptosporidium breakthrough curves in a sand column showed power-law behavior, with a particle immobilization rate within the immobile zone, Λ_IMM = 0.2 s⁻¹, and a power-law slope of the pathogen residence time distribution within the immobile zone, β_IMM = 0.35. We assume Cryptosporidium release under summer base flow conditions with an average flow of 60 L/s and an average velocity, v, of 5 cm/s. This velocity within an agricultural stream was associated with a dispersion, D, of 0.095 m²/s, an exchange rate between the mobile and immobile zones, Λ, of 6×10⁻² s⁻¹, and a power-law slope within the immobile zone, β = 0.7. These parameters are reasonable and within the range of hydrologic model parameters for solute transport within streams during base flow conditions. Inactivation rates of Cryptosporidium in the mobile zone and immobile zone are estimated for summer conditions, with water temperatures of approximately 20°C, as 0.088/day and 0.011/day, respectively. A summary of all parameters used within the model simulations is shown in Table 1.
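For reference, the base flow parameter values quoted above (and summarized in Table 1) can be gathered in one place, together with the assumed 20-lot, 1–5% delivery bound on the potential stream input. This is an illustrative sketch only; the dictionary keys and structure are ours, not the authors' code:

```python
# Illustrative parameter set for the mobile-immobile model under summer
# base flow, using the values quoted in the text (see Table 1).
params = {
    "Q": 60e-3,           # stream flow, m^3/s (60 L/s)
    "v": 0.05,            # mean velocity, m/s (5 cm/s)
    "D": 0.095,           # dispersion, m^2/s
    "Lambda": 6e-2,       # mobile-immobile exchange rate, 1/s
    "beta": 0.7,          # power-law slope, reach-scale residence times
    "Lambda_IMM": 0.2,    # immobilization rate within the immobile zone, 1/s
    "beta_IMM": 0.35,     # power-law slope within the immobile zone (column scale)
    "k_mobile": 0.088,    # inactivation rate in the mobile zone, 1/day (~20 C)
    "k_immobile": 0.011,  # inactivation rate in the immobile zone, 1/day (~20 C)
}

# Upper bound on oocysts potentially delivered to a stream per day,
# assuming 20 beef lots and a 1-5% delivery fraction (direct deposition,
# overland flow, and groundwater combined), as described in the text.
herd_load = 1.1e8  # oocysts/day per operation
delivered = [20 * herd_load * f for f in (0.01, 0.05)]
print(f"Potential stream input: {delivered[0]:.1e} to {delivered[1]:.1e} oocysts/day")
```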
The model simulations were run with the 1-month scenario detailed in the previous section. The 1-month duration of the release was chosen as an arbitrary reference duration to assess the persistence of oocysts in the stream even after the release has stopped. Total counts of Cryptosporidium oocysts immobilized and inactivated were determined at each sampling distance and at different time points of interest. Model output breakthrough curves with and without inactivation were produced. These model outputs were integrated to the different time points of interest using the trapezoidal method to determine the total number of oocysts that passed the sampling point within the surface water with and without inactivation. The difference between the runs with and without inactivation was used to calculate the number of oocysts inactivated vs. immobilized within the stream at each time point of interest. The number of oocysts immobilized at each downstream distance was estimated as the difference from the previous sampling point. The values for percent Cryptosporidium immobilized were calculated by dividing the total number immobilized by the known model input of Cryptosporidium oocysts.

Model simulations for a 1-month input of Cryptosporidium to an agricultural stream show in-stream pathogen counts at 100, 300, 500, and 700 m downstream of the input. Cryptosporidium transmission is presented under two scenarios: (1) with only hyporheic exchange and inactivation, and (2) with hyporheic exchange, inactivation, and additional immobilization processes in transient storage areas. For scenario 1, a decrease in maximum in-stream concentrations from 2.1×10⁻² #/mL at 100 m to 2.0×10⁻² #/mL at 700 m downstream of the input demonstrates how hyporheic exchange delays downstream transport but does not greatly reduce the maximum in-stream concentration. As described previously, a safe water supply is considered to have less than 10⁻⁵ oocysts/mL. This value assumes a typical human consumption of 2 L/day and a safety/error factor of 300 to 1,000, which is typical for public health standards. In-stream concentrations remained above 10⁻⁵ oocysts/mL for 1,269 and 2,357 hours for sites 100 and 700 m downstream of the input, respectively.
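A minimal sketch of the post-processing described above: breakthrough curves with and without inactivation are integrated by the trapezoidal rule to estimate oocysts passing a sampling point, with the difference attributed to inactivation and the remainder of the input attributed to immobilization. The curves, input count, and time base below are placeholders rather than actual model output:

```python
import numpy as np

# Hypothetical breakthrough curves at one sampling point: concentration
# (oocysts/mL) vs. time (hours), from runs with and without inactivation.
t = np.linspace(0, 2000, 4001)                       # hours
c_no_inact = np.interp(t, [0, 100, 800, 2000], [0.0, 2.1e-2, 5e-4, 1e-6])
c_inact = c_no_inact * np.exp(-0.088 / 24.0 * t)     # crude illustrative decay

Q_mL_per_hr = 60.0 * 1000.0 * 3600.0                 # 60 L/s expressed in mL/hr
N_input = 1.1e8 * 30                                 # illustrative 1-month input, oocysts

# Trapezoidal integration gives total oocysts passing the sampling point.
passed_no_inact = np.trapz(c_no_inact * Q_mL_per_hr, t)
passed_inact = np.trapz(c_inact * Q_mL_per_hr, t)

inactivated = passed_no_inact - passed_inact          # lost to inactivation
immobilized = N_input - passed_no_inact               # retained upstream (no-inactivation run)
pct_immobilized = 100.0 * immobilized / N_input
print(f"% immobilized: {pct_immobilized:.1f}")
```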

The aim of this paper is to close this gap by investigating how rural out-migration impacts rural wages

The model is estimated using the sample of children ages 24 to 59 months living in rural areas. In these and all other models, standard errors are clustered at the treaty basin level. In the next-to-last two rows we report the first-stage F-statistics for the “own dam” coefficient and for the “upstream dam” coefficient. The instruments for irrigation dams appear sufficiently strong to avoid bias from weak instruments. However, the first-stage F-statistics for “own dams” when all dam types are included are small, and the bias that could result from weak instruments should be kept in mind when interpreting the results. I also run the Sargan-Hansen test to check that the instruments are valid. The null hypothesis is that the instruments are valid, i.e., they are uncorrelated with the error term and correctly excluded from the second stage. The reported p-values, both when we estimate the model using irrigation dams only and when including all types of dams, do not reject the validity of the instruments. We first analyze how dams with some irrigation purpose have affected the nutritional status of children in the 6-digit river basin where the dam is located and downstream. The first column of Table 1.4 shows estimates from models which include controls for demographics and rainfall. The IV estimates indicate a large and significant decrease in the nutritional status of children in the 6-digit river basin where a dam is located and an increase in downstream 6-digit river basins. An additional dam reduces child height-for-age z-scores in the 6-digit river basin where it is located and increases the probability of being stunted. In contrast, an additional dam increases height-for-age z-scores in downstream 6-digit river basins and reduces stunting by .03 points, but this last coefficient is small and insignificant.
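The estimation strategy described above (2SLS with standard errors clustered at the basin level, first-stage F-statistics, and an overidentification test) could be sketched as follows; the data file, variable names, and instruments are placeholders, and this is not the chapter's actual specification:

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# Placeholder child-level data set: one row per rural child aged 24-59 months.
# All column names below are hypothetical.
df = pd.read_csv("children_rural.csv").assign(const=1.0)

dep = df["haz"]                                             # height-for-age z-score
exog = df[["const", "female", "twin", "mother_age",
           "mother_educ", "rain_t"]]                        # exogenous controls
endog = df[["own_dam", "upstream_dam"]]                     # dam counts, own / upstream basin
instr = df[["river_gradient", "upstream_gradient",
            "gradient_x_suitability"]]                      # placeholder excluded instruments

res = IV2SLS(dep, exog, endog, instr).fit(
    cov_type="clustered", clusters=df["treaty_basin"]       # cluster at the basin level
)
print(res.first_stage)   # first-stage diagnostics (F-statistics per endogenous regressor)
print(res.sargan)        # overidentification test (requires more instruments than regressors)
print(res.summary)
```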

These effects are large given that average height-for-age and stunting are -1.91 and 47 per cent, respectively. The ratio of the effect of an additional dam in the 6-digit river basin where it is located to the mean is 18 per cent for height-for-age and 34 per cent for stunting. The same ratios for downstream 6-digit river basins are 8 per cent for height-for-age and 6 per cent for stunting. The next columns show estimates conditional on children’s gender, a dummy for twin birth, mother’s age, mother’s years of education, a dummy for a child living in a female-headed household, the number of household members, and the number of children in the household less than 5 years old. The coefficient on “own dam” drops by less than a standard error while the coefficient on “upstream dam” increases slightly. Further columns also control for rainfall during the survey year and the two years prior to the survey year. This has little effect on the coefficient estimates. Next we turn to the impact of all types of dams on weight-for-age and malnourishment. We find that dams with some irrigation purpose have, on average, little effect on weight-for-age in the 6-digit river basin where they are built but increase the incidence of malnourishment by 49 percent. However, an additional irrigation dam increases weight-for-age in downstream 6-digit river basins by .10 to .20 points on average, but this has no effect on the incidence of malnourishment. Lastly we analyze the impact of all types of dams taken together on child nutrition. We find that the impacts of dams on weight-for-age are small and not precisely estimated. However, we find that while an additional dam has no effect on height-for-age on average, it increases the proportion of children who are stunted by .08 points or 17 percent. Moreover, we find that an additional dam increases height-for-age and reduces stunting in downstream 6-digit river basins by .15 and .08 points, respectively. It is important to know the overall net effect of dam construction on the nutritional status of children. We focus on the results for irrigation dams to carry out this analysis; results for all dams show a similar pattern. These impacts are small compared to the standard errors of the point estimates and the sample standard errors, suggesting that dam construction has little aggregate effect on height-for-age and weight-for-age z-scores.

Turning to stunting and malnutrition, we find that dam construction has increased stunting in the average river basin by .025 points and malnutrition by .012, effects that are small compared to the standard errors of the point estimates and the sample standard errors. Taken together, our findings show that while dam construction in Sub-Saharan Africa has had little aggregate effect, it clearly generates losers and winners. This suggests scope for more effective policy making in order to capture the benefits from dam construction while compensating those who may lose. As in most poor regions, rural African economies are characterized by limited access to insurance against risk. The inability to cope with shocks may interact with the increased variance of agricultural production from dam construction to reduce households’ income and food security. We investigate this in Table 1.5 by examining how dam construction reduces or exacerbates the impact of rainfall shocks on child nutrition. We use rainfall deviation from its 1970-2002 mean as a measure of rainfall shock, and we consider two intensities of rainfall shock: the number of instances during the survey year and the two years prior to the survey year when rainfall was at least .3 or .6 of a standard deviation below the mean, as sketched below. These rainfall shock counts can thus take any value between 0 and 3. In the first set of columns the rainfall shock is the number of times rainfall was at least .3 of a standard deviation below the mean, while the remaining columns show results where the rainfall shock is the number of times rainfall was at least .6 of a standard deviation below the mean. We find that dams amplify the impact of rainfall shocks both in the basin where they are built and in downstream river basins. We also find that dams amplify the effect of rainfall shocks more in the 6-digit river basin where they are built than in downstream basins, but these differences are significant only for larger shocks. In this section we analyze whether the effects of dams are more pronounced along certain demographic characteristics.
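The rainfall-shock counts referred to above could be constructed along these lines; the table layout and column names are hypothetical:

```python
import pandas as pd

# Placeholder long-format table: one row per basin and year, with annual rainfall.
rain = pd.read_csv("basin_rainfall_1970_2002.csv")   # columns: basin, year, rainfall

# Standardize rainfall against each basin's 1970-2002 mean and SD.
stats = rain.groupby("basin")["rainfall"].agg(["mean", "std"])
rain = rain.join(stats, on="basin")
rain["z"] = (rain["rainfall"] - rain["mean"]) / rain["std"]

def shock_count(basin, survey_year, threshold):
    """Number of years (0-3) among the survey year and the two prior years
    with rainfall at least `threshold` SDs below the basin's long-run mean."""
    window = rain[(rain["basin"] == basin)
                  & rain["year"].between(survey_year - 2, survey_year)]
    return int((window["z"] <= -threshold).sum())

# Example: shock counts at the .3 and .6 SD thresholds for a hypothetical basin.
print(shock_count("basin_042", 2000, 0.3), shock_count("basin_042", 2000, 0.6))
```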

It is particularly important to know whether the construction of dams more significantly affects vulnerable children and poor households. Because the DHS do not collect measures of income or consumption, we use demographic characteristics to proxy for a household’s likelihood of being poor or vulnerable: a dummy for whether the household is female headed and mother’s years of education. We also investigate whether the impact of dams is different for boys and girls. These regressions have 31,037 observations each. All models control for child gender, a dummy for twin birth, mother’s age, mother’s education, a dummy for female-headed household, the number of household members and children under 5 years, and rainfall during the survey year and the two years prior to the survey year. The results reported in Table 1.6 consistently show that girls, female-headed households, and children of more educated mothers benefit more from a dam in an upstream river basin. For children living in river basins where a dam is located we do not find, across different measures of child nutritional status, any systematic difference in the effect of the dam by demographic characteristics: for height-for-age the “own dam” effects are lower for girls, male-headed households, and less educated mothers, while for weight-for-age these effects are larger. The framework presented in Section 2 highlights improved access to food as an important channel in the causal link between dam construction and the nutritional status of children. The relevance of this channel is confirmed by Strobl and Strobl, who find that large dams in Africa increased crop productivity in downstream regions while cropland within the vicinity of the dam tends to experience productivity losses. However, the construction of a new dam may be accompanied by, or substitute for, the provision of other public goods that may affect children’s nutritional status. We examine this possibility in Table 1.7, which shows estimates of models where the dependent variables are measures of access to health services, tap water, and electricity. The model is estimated following an IV strategy using the same instruments described above. Some columns show that dam construction improves access to health services and electricity in the basin where the dam is built. However, other columns show that access to health services, tap water, and electricity are affected little by the construction of an irrigation dam.

Put together with the findings in Strobl and Strobl, the results in Table 1.7 also point to the importance of improved access to food as the main mechanism between dam construction and the nutritional status of children in Africa.

Rural-urban and rural-rural population movements are central mechanisms in the process of structural transformation and economic development. As the agricultural sector shrinks, workers leave rural areas for manufacturing and services jobs in cities. Moreover, differences in agricultural development and economic activity between rural areas may prompt a process of labor reallocation between these regions. What is the effect of these population flows on rural economic activity and welfare? Despite a large empirical literature on the effects of rural out-migration on individuals and households, little empirical evidence exists on their consequences at a macro or meso level. Exceptions include papers that focus on a specific aspect related to migration, such as remittances, and investigate how they affect village-level investment and economic growth. An under-investigated question in this literature is how rural out-migration influences rural labor market outcomes. This analysis is related to papers that examine the effect of international emigration on domestic labor markets. Lucas shows that migration of mine workers to South Africa increased wages in both Malawi and Mozambique, the two largest sources of foreign mine workers in South Africa, suggesting that out-migration may tighten the labor market at origin. However, Mishra is the first paper to provide evidence of a causal link between emigration and wages in source countries. The analysis presented in this paper is the first attempt to measure the effect of internal migration on wages in sending regions. I focus on a specific internal migration process, rural out-migration, because it accounts for a large share of a country’s internal migration and is particularly relevant for understanding rural economic development. Moreover, given the magnitude of internal labor flows relative to international labor flows, it is surprising that the literature has overlooked the effects of rural out-migration on rural labor markets. By way of comparison, Mishra estimates that, in 2000, the number of Mexican emigrants living in the US was equivalent to about 16% of Mexico’s resident population; in other words, in 2000, for every 100 Mexicans living in Mexico, 16 were residing in the US. I find much larger rural emigration shocks in Brazil. My analysis shows that between 1980 and 2000, for every 100 individuals residing in rural areas, 95 had already migrated out to cities or other rural areas outside their state of original residence. The empirical framework developed in this paper builds on Card and Lemieux, Borjas, and Mishra. As in Mishra, this paper investigates how emigration affects the wages of those who stay behind. The empirical strategy in Mishra follows the framework in Borjas to investigate how an emigration shock of Mexican workers to the US in a specific skill group defined by education and experience affects the wages of workers of that skill group who remained in Mexico. In this paper, I use a cohort analysis similar to the approach in Card and Lemieux to examine how rural out-migration of a given cohort affects the wages of rural workers in that cohort. This cohort analysis is motivated by several considerations. First, rural workers in Brazil present little differentiation by educational attainment.
In 2000, less than 13% of individuals included in my sample – individuals residing in rural areas and aged 20 to 54 years in 1991 – had completed secondary education, with the majority having less than a complete primary education. Second, occupations in rural areas are characterized by many routine activities with a high level of learning-by-doing, which may reward experience more than education.

EPA eliminated virtually all indoor and outdoor residential uses for both chemicals

Growers’ reactions to the updated waiver, especially the 2010 Draft Order, were diverse and abundant. Interestingly, many farmers and agricultural stakeholders highlighted their disappointment in how the negotiations were handled above all else, emphasizing the process itself more than individual mandates. A letter from the Santa Barbara Farm Bureau lamented the new approach, stating that its members supported the 2004 Ag Waiver because it “focused on collaboration” and was “based on a good faith effort from both the agricultural community as well as [the Regional] Board”; however, they were “extremely disappointed” by the stakeholder participation process for the updated waiver, calling it a “failed” attempt due to staff members’ “reluctance to collaborate”. Another stakeholder organization, the Salinas River Channel Coalition (SRCC), shared similar sentiments: “The SRCC have been involved for many years with water quality solutions in the Central Coast. The first Ag Waiver process was about improvement of water quality, but this current process has become nothing more than regulation to develop fines and fees.” The SRCC also added that the new Regional Board staff did not show they wanted to understand the agricultural industry, nor did they have “a desire to continue the proactive cooperation and educational approach which was used to develop the last Agricultural Waiver”. While the Clean Water Act has achieved significant results in water quality standards, and the Federal Insecticide, Fungicide and Rodenticide Act endeavors to prevent chemicals from causing unreasonable harm to the environment and human health, pesticides continue to contaminate America’s waters.

In California’s Central Coast, two pesticides in particular have been identified as the primary sources of water column toxicity and targeted for regulation. Because agricultural operations in the Central Coast have historically relied on diazinon and chlorpyrifos for use on several crops, the region has been a testing ground for important research on the effects of these two organophosphate pesticides. Impacts of chlorpyrifos and diazinon on regional ambient and sediment toxicity are well documented in the literature. However, less researched have been the policy implications of their use and discharge into waterbodies. This chapter fills several critical gaps. Identifying challenges and successes of applied pesticide control policy offers valuable information and recommendations to water quality regulatory agencies charged with controlling agricultural pollution in the region and beyond. Several studies have reviewed policy tools aimed at agricultural non-point source pollution, including a comprehensive policy analysis specific to California’s Central Coast region, yet even the authors of that study cite a dearth of case studies of implemented policy approaches. This case study analyzes several specific pesticide-related provisions of the 2012 Agricultural Waiver. Of particular interest is why and how two pesticides—chlorpyrifos and diazinon—rose to the top of the policy agenda during the recent regulatory process over a long list of other chemicals used in the region, and what intended and unintended consequences have resulted from this regulatory spotlighting. This study utilizes a blend of historical and social scientific methods to comprehensively evaluate rich datasets relevant to issues of agricultural pesticide use, pollution, chemical switching, and environmental governance.

Integrating information from policy documents, meeting minutes, interviews, survey responses, water quality data, monitoring and enforcement data, organic crop production data, and Pesticide Use Records from County Agricultural Commissioner offices and the California Department of Pesticide Regulation, this chapter advances the conversations on pesticide and water quality policy at the regional level and offers insights into larger systemic issues of regulatory spotlighting of a limited number of pesticides.

Chlorpyrifos and diazinon are both broad-spectrum organophosphate insecticides used throughout the U.S. and California for the control of invertebrate pests. Historically, both were widely applied for home pest control. But in 2000, due to mounting evidence of human health risks, the U.S. EPA eliminated virtually all indoor and outdoor residential uses for both chemicals. Consequently, the overall use of diazinon and chlorpyrifos in California urban areas has dramatically declined, and both pesticides are now used almost exclusively for agricultural pest control. In the Central Coast region, chlorpyrifos is primarily used on broccoli and cauliflower to control soil maggots and on wine grapes to target vine mealybug and ants. From 2006 to 2010, the Salinas Valley, Imperial Valley, Santa Maria Valley and Pajaro Valley regions used only 10% of statewide chlorpyrifos, but they had the highest frequencies of chlorpyrifos detections and exceedances. All of these regions except the Imperial Valley are located within the Central Coast. Diazinon is predominantly applied to head lettuce, leaf lettuce and spinach to kill a variety of insect pests, including green peach aphid, potato aphid, pea leafminer, seed corn maggot, springtails and cutworms. In 2001, diazinon was one of the only registered options for these pests. Diazinon use in the Salinas Valley, “the salad bowl of the world,” nearly tripled from 1997 to 2004, before it began its steady decline. Seasonal use of chlorpyrifos and diazinon fluctuates with the cropping cycles. Because two or three vegetable crops per growing season are common in the region for brassicas and leafy greens, chlorpyrifos and diazinon use often peaks several times a year.

Between 2011 and 2014, over 20 waterbodies in the Central Coast Region were listed as impaired for chlorpyrifos and/or diazinon and/or unknown toxicity. These water bodies included the Lower Salinas River Watershed and several more in the Pajaro River Watershed and Santa Maria River Watershed. While the uses and target species vary between chlorpyrifos and diazinon, the mechanisms of toxicity and associated risks of organophosphates are similar. Several studies suggest that even low-level contact with these neurotoxicants can have serious health implications. The EPA determined the amounts of chlorpyrifos and diazinon that can be consumed in drinking water without adverse health impacts for adults to be 0.02 mg/L and 0.0006 mg/L, respectively. Exposure has been associated with neurobehavioral deficiencies, including attention deficit and hyperactivity disorder in children. A study of Latina mothers and newborns conducted in the Salinas Valley found that exposure to the pesticides in utero can cause serious health effects in babies, who are less able to detoxify organophosphates. Another recent study links exposure to organophosphates to lung damage in children. Despite the long list of serious human and environmental health implications posed by diazinon and chlorpyrifos, one advantage of using these pesticides over others is their relatively shorter half-lives. The half-life of chlorpyrifos and diazinon in the water column ranges from 30-138 days depending on field conditions.

In the U.S., a number of major federal and state laws govern pesticides. The Federal Insecticide, Fungicide and Rodenticide Act was passed in 1947 with the original goal of protecting consumers from ineffective products. Through a series of amendments, the Act’s function has evolved to include protecting human health and the environment from unreasonable adverse effects of pesticides. One such amendment that fundamentally changed EPA’s regulation of pesticides towards a health-based focus was the 1996 Food Quality Protection Act (FQPA). The FQPA was the first to mandate the evaluation of a pesticide’s sensitivity to children, infants and fetuses as well as the aggregate risk of multiple exposures. Since its passage, the EPA has taken action under the FQPA targeting chlorpyrifos and diazinon for review due to their potential risk to children. Between 2000-2004, the Agency reviewed the two pesticides through a comprehensive Interim Registration Eligibility Decision (IRED) and Registration Eligibility Decision (RED). During the review, an agreement was made with the technical registrants of chlorpyrifos and diazinon to terminate the registration and begin a phase-out of nearly all residential uses of both chemicals. As an extra measure to mitigate health risks, the EPA also required that all use of chlorpyrifos products be discontinued on tomatoes, and restricted its use on apples, citrus and tree nuts. The diazinon RED required more extensive mitigation measures for diazinon use on agricultural crops, including canceling or restricting agricultural uses for more than 20 crops, eliminating all aerial application except for lettuce crops, and limiting overall use of the chemical. The agreement also began the process of developing special dormant spray label restrictions for diazinon and chlorpyrifos products.
By 2006, product labels were amended to restrict use during the rainy season, increase buffer zones, prohibit certain applications, require recommendations from pest control advisors, and mandate certain best management practices. In 2012, after initiating a new registration review of chlorpyrifos, the EPA expanded the size of required buffers around sensitive sites, like schools.

Chlorpyrifos and diazinon are also regulated under several sections of the 1972 U.S. Clean Water Act (CWA). Water monitoring data collected during the IRED review process highlighted areas where more regulation was needed and where efforts to curb water pollution were already underway. Based on the detection of diazinon and chlorpyrifos in effluent from publicly owned treatment facilities, National Pollutant Discharge Elimination System permits were amended to include more monitoring and, in many cases, effluent restrictions. Many states had already begun listing water bodies impaired by chlorpyrifos and diazinon and had begun the process of setting Total Maximum Daily Loads (TMDLs) for these waters. In California’s Central Coast, over 20 water bodies have been listed as impaired by chlorpyrifos and/or diazinon: the Pajaro River, Pajaro River Estuary, Llagas Creek, Santa Maria Watershed, Lower Salinas River, Arroyo Paredon, Moss Landing Harbor, Old Salinas River, Tembladero Slough, Blanco Drain, Salinas Reclamation Canal, Espinosa Lake, Chualar Creek, Quail Creek, Espinosa Slough, Alisal Slough, Natividad Creek, San Lorenzo River, Zayante Creek, Arana Gulch, Branciforte Creek, and San Antonio Creek. Additionally, the review process brought attention to several cases of toxic amounts of diazinon and chlorpyrifos in drinking water, forcing regulators to take action under the CWA and Safe Drinking Water Act. The two pesticides have also been identified as impacting several endangered species: California’s red-legged frog, Pacific salmon and steelhead species, the Delta smelt, and the tidewater goby. Under the Endangered Species Act, the EPA has assessed the risks of the chemicals to each of these species and mandated specific practices for their protection. Mandates have included designating critical habitats, vegetative buffers, no-spray zones, wind speed restrictions, and fish mortality incident reporting requirements. In addition to federal laws, states may also have their own pesticide and water quality regulation programs. For example, in California, the 1969 Porter-Cologne Act gave all nine Regional Water Quality Control Boards broad authority to grant waste discharge requirements for all dischargers in their jurisdictions, as well as the authority to waive those requirements. However, in 1999, with evidence of increased water pollution, the state repealed the Regional Boards’ authority to issue unconditional waivers, requiring them to, at the very least, attach conditions to waivers and to review these conditions every five years. To comply, each Regional Board has issued individual “Conditional Waivers of Waste Discharge Requirements”. In some cases, like California’s Central Coast, a Conditional Waiver can act as the primary means for achieving TMDL requirements, raising important policy implications since Waivers have not historically allocated numeric loads to dischargers. California is the only state in the country where both a permit and a license are needed to apply pesticides. County Agricultural Commissioner offices collect the licensing information and pesticide use records and report these data to the state regulatory agency, the California Department of Pesticide Regulation (CDPR). In addition to collecting information, the CDPR, as authorized by California’s Food and Agricultural Code, has the power to reduce pesticide use.
In 2015, the CDPR exercised that authority, restricting agricultural uses of chlorpyrifos by requiring applicators to obtain an additional permit from their County Agricultural Commissioner’s office. Yet another means of restricting pesticides is through litigation. For example, in 2015, in response to a lawsuit filed by Earthjustice on behalf of Pesticide Action Network and the Natural Resources Defense Council, the 9th Circuit Court of Appeals ordered the U.S. EPA to file status reports on chlorpyrifos. As discussed in previous chapters, two Conditional Agricultural Waivers have been adopted in the Central Coast Region—one in 2004 and an updated version eight years later, on March 15, 2012. In addition to controlling the entry of farm discharges into waterbodies, one of the major goals of the Agricultural Waiver is to collect monitoring data. Water quality data are used not only to assess the state of the region’s waters, but also to assess and select appropriate BMPs, help characterize agricultural pollution problems, and identify pollution hotspots. In the 2012 Ag Waiver, BMP and monitoring requirements vary by tier.

The E. coli sources of highest concern were from animals passing through crop fields

The first part employs a within-case method of process tracing to assess the factors that acted as drivers of or limitations to the policy process. Part two uses six evaluative criteria to assess the effectiveness of specific outcomes, such as water quality improvements and the value of monitoring data. The 1972 Clean Water Act employs a technology-based standards approach, whereby any discharger must obtain a permit that contains limits on what an individual or industry can discharge into a given water body as well as details of their monitoring and reporting requirements; all these provisions are defined and enforced through the National Pollutant Discharge Elimination System (NPDES) permit system. This approach aims to control pollutants at the point of discharge by setting uniform discharge limitations based on the best available technology for a particular industrial category. The U.S. EPA grants states the primary responsibility for issuing NPDES permits and for monitoring and enforcing performance. When the technology-based approach does not adequately control pollution, an additional control tool, water quality-based standards, is implemented. The EPA and states use a calculation, the Total Maximum Daily Load (TMDL), to determine the maximum amount of a pollutant that a waterbody can receive while still meeting water quality standards. Water quality standards are set by designating a “beneficial use” for each waterbody as well as the criteria to protect the designated use of that water. The TMDL calculation is a multi-step process: first, the state lists each impaired waterbody within its jurisdiction, called the “303(d) list”; second, using the state’s already-established “beneficial use” categories, a numeric TMDL is calculated for each waterbody; finally, a portion of the load is allocated to each discharger.
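To make the allocation step concrete, EPA guidance expresses a TMDL as the sum of wasteload allocations for point sources, load allocations for non-point sources, and a margin of safety. A minimal sketch with hypothetical numbers:

```python
# Illustrative TMDL bookkeeping: TMDL = sum(WLA) + sum(LA) + MOS.
# All numbers are hypothetical, in kg/day of a pollutant.
tmdl = 12.0                                                  # assimilative capacity of the waterbody
wasteload_allocations = {"POTW_outfall": 2.0}                # permitted point sources (WLA)
load_allocations = {"ag_runoff": 6.5, "urban_runoff": 2.0}   # non-point sources (LA)
margin_of_safety = 0.1 * tmdl                                # explicit 10% margin of safety (MOS)

allocated = (sum(wasteload_allocations.values())
             + sum(load_allocations.values())
             + margin_of_safety)
print(f"Allocated {allocated:.1f} of {tmdl:.1f} kg/day "
      f"({'within' if allocated <= tmdl else 'exceeds'} the TMDL)")
```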

The fundamental problem of TMDLs, especially in waters polluted by non-point sources, is that they must be translated into specific numeric discharge limitations for each source of pollution. Because non-point source pollution, such as agricultural runoff, is inherently diffuse, the task of monitoring dispersed and dynamic discharges and connecting them back to their sources, in order to identify which operation is polluting and to what extent, is both expensive and complicated. However, efforts by the EPA are underway to make water quality modeling, specifically targeted at regulators implementing TMDLs and water quality standards, more easily accessible and affordable. Similar to the Clean Water Act, California’s Porter-Cologne Act gives broad authority to nine Regional Water Quality Control Boards to regulate water quality at a sub-state, localized scale. Regional Boards are responsible for water quality protection, permitting, inspection, and enforcement actions. The Regional Board issues permits on the condition that beneficial uses are protected and water quality objectives will be met. The Regional Boards also have the right to waive Waste Discharge Requirements for individuals or groups, including agriculture, if it is in the public interest. For agricultural discharges, Regional Boards have historically granted waivers rather than force growers to comply with WDRs. In October 1999, with water quality high on the political agenda, Senate Bill 390 was passed, mandating that Regional Boards attach conditions to waivers and review them every five years. Such monitoring requirements must be adequate to verify the effectiveness of the Waiver’s conditions. In effect, the Conditional Waivers function similarly to Waste Discharge Requirements: the discharger needs to meet conditions specified in the Waiver/Permit. Each Regional Board has taken a different approach to controlling runoff from agricultural lands within its jurisdiction, but almost all have issued Conditional Waivers. In 2004, the Central Coast Region was the first to adopt a Conditional Agricultural Waiver.

The conditions attached to the 2004 Waiver required growers to enroll in the Agricultural Waiver program, complete 15 hours of water quality education, prepare a farm plan, implement water quality improvement practices, and complete individual or cooperative water quality monitoring. The 2004 Agricultural Waiver expired in July 2009, but the Order was extended five times from 2009 until 2012. After nearly three years of continued negotiation, on March 15, 2012 the Central Coast Regional Board adopted a new Conditional Agricultural Waiver, Order No. R3-2012-0011. The updated 2012 Ag Waiver places farms in one of three tiers based on their risk to water quality. Bigger and more polluting farms are held to tougher standards. For most Tier 1 and 2 farms, the 2012 requirements are similar to those in the 2004 Waiver: water quality education, water quality management plans, implementation of management practices, and either cooperative or independent surface receiving water monitoring and reporting. For Tier 3 farms and a subset of Tier 2 farms, additional conditions are added, including submitting an annual compliance form, conducting individual discharge monitoring and reporting, and implementing vegetative buffers. Soon after the 2012 adoption, the State Board received petitions from five parties, representing both the agricultural community and environmental organizations, requesting a “stay” on specific provisions of the new waiver. The agricultural community argued that the Ag Waiver was too harsh, and environmentalists contended it did not go far enough. The State Board asked the Central Coast Regional Board to review and estimate the costs of the provisions of concern and further explain the environmental and public benefits that compliance with the updated Waiver would accrue. The State Board rewrote sections of the Agricultural Waiver and released a final version in September 2013. Unsatisfied with the State Board’s revisions, a coalition of environmental groups, together with an elderly woman who could not drink water from her tap because it was contaminated with agricultural waste, filed a lawsuit in Sacramento’s Superior Court challenging the 2012 Central Coast Agricultural Waiver and the changes made by the State Board. The coalition claimed the State Board changes “cripple the already weak order” and that, as currently written, the Ag Waiver is “so weak, it did not comply with state law”.

In his ruling on August 11, 2015, Superior Court Judge Frawley agreed that the Central Coast’s Conditional Agricultural Waiver was doing an inadequate job of protecting regional water quality and needed more stringent conditions.

A more contextualized story of adopting the 2004 and 2012 Ag Waivers is laden with complex and contentious trade-offs, negotiations, lobbying efforts, alliance building, scientific findings, and difficult-to-foresee “focusing events”. This study pays special attention to assessing the effectiveness of the monitoring program and the significance of the data collected under the Conditional Agricultural Waiver. Monitoring data are arguably the most pressing concern for non-point source pollution control plans. This Central Coast case illustrates a common trend in non-point source pollution control and what Sunstein would mark as “regulatory failure due to information limitation.” The current monitoring data on agricultural water discharges are inadequate to allocate TMDLs and therefore to implement and enforce water quality standards. In the absence of sufficient data, the Ag Waiver regulatory program cannot comply with state and federal law, and water protections are further delayed. In an attempt to comply with water quality standards, the Central Coast Regional Board has endeavored to ratchet up monitoring efforts. For example, the updated 2012 Agricultural Waiver program modestly expanded the amount of information it requires of Tier 3 growers to include some individual monitoring. Unfortunately, many are skeptical that this more “robust” monitoring program will, in practice, yield much more useful information than the previous monitoring program, especially given the small number of growers in Tier 3. This study fills a gap in research on where monitoring efforts have succeeded and failed in the Central Coast’s agricultural NPS pollution control policies and in reaching TMDL goals. There is also a growing need to identify realistic tools for water quality agencies charged with the difficult task of regulating agricultural NPS pollution. While this study tailors recommendations specifically to the Central Coast Region, other states and localities facing similar difficulties can use its results to better manage agricultural pollution within their jurisdictions.

Though a general causal hypothesis can be made that certain independent variables have a causal effect on policy-making, process tracing allows the researcher to narrow down the list of potential influential causes as well as uncover independent variables that otherwise would have been left out. Process tracing can also identify whether these influential variables have a positive or negative effect on the policy outcome. Such a research design is an iterative, cyclical process—a broad hypothesis can be refined as more data are gathered. King, Keohane, and Verba explain that this type of “exploratory investigation”—selecting on the basis of variance in dependent and independent variables—generates a more precise hypothesis than that which can be made at the beginning. Process tracing requires an in-depth understanding of causal mechanisms in the policy-making process in each case, relying on data from newspapers and magazine articles, websites, meeting minutes, policy documents, government reports, public comments, monitoring and enforcement data, and other archival documents.

Key informants for this part of the research include Regional Water Board staff, university extension specialists, agricultural organizations, growers, water quality agencies, and stakeholders involved in water quality efforts. Interviews were conducted in a semi-structured manner, and key informants were identified using “snowball” sampling—starting with a few identified stakeholders who then shared names of additional significant individuals to interview. In this study, data from interviews are used to help contextualize events, perspectives, language or definitions, and to reaffirm information identified during the document analysis. Just as water quality was rising on the agenda, circumstances changed and priorities shifted. In September 2006, two years after the first Agricultural Waiver passed, an E. coli outbreak traced to the Salinas Valley killed three people and sickened more than 200. Due to public concern, large supermarket chains, including Safeway and Costco Wholesale Corporation, demanded that growers meet more stringent food safety requirements. Subsequently, food safety auditors began requiring a “scorched-earth” policy, including minimizing any vegetative habitat around farms that could attract wildlife. One farmer stated that the “Western Growers Association said they wouldn’t buy anything from farms with vegetative buffer strips.” Because maintaining vegetation on a field’s edge protects water quality by filtering runoff before it discharges into nearby waterbodies, calling for its removal could threaten efforts to address water pollution on the Central Coast. The E. coli “focusing event” forced the Regional Board to rethink this key provision, which was already under discussion in drafts of the updated Agricultural Waiver. Mandating vegetative buffer strips for all farms would, quite literally, compete with food safety requirements, which require farms to clear vegetation. The contradictory food safety and water quality requirements left growers confused about which policies to follow. A representative from the Farm Bureau voiced frustration on behalf of the agricultural community: “ever since E. coli there has been a series of complex overlay of regulations”. Two additional issues related to buffer implementation concerned growers: the cost and the science driving the policy. Growers worried about the price not only of installing, irrigating and maintaining the new vegetation around their farms, but also the lost revenue from taking cropland out of production and replacing it with vegetation. Moreover, some agricultural stakeholders contended that the science driving this mandate was inadequate. The water quality benefits of vegetative buffers, including pollutant, nutrient and sediment retention, infiltration, sediment deposition, and absorption, are well documented in the literature. However, regional agronomic research demonstrating the effectiveness of vegetative buffers is limited to only a few studies, and their results are mixed, especially with regard to the most effective buffer width and vegetation. Buffer width became a cornerstone of the debate since the jury was still out on exactly how wide a buffer should be to improve water quality.
A meta-analysis of over 80 scientific articles on vegetated buffers and sediment-trapping efficacy concluded that while wider buffers provide a longer “residence” time for runoff water, and thus are more effective in reducing sediment, sediment-trapping efficacy does not improve significantly when buffer width is increased beyond 10 meters. In other words, beyond 10 meters the law of diminishing returns takes effect. The analysis by Liu and colleagues also concludes that buffer width alone explains only about one-third of retention effectiveness, and that other factors, such as soil, slope and vegetation, play an equally important role. Because of these competing interests, the vegetative buffer requirement was substantially weakened throughout the Agricultural Waiver deliberation process.

The pamphlet appeared in the months before the Asian Exclusion Act was made permanent

John Eperjesi has shown that this vision espoused by the character Cedarquist is largely based on two contemporary figures, Charles Conant and Albert Beveridge. Although Western writers had dreamed of making a fortune in the China market since the days of Marco Polo, it was Conant who popularized the idea that new overseas markets were the only solution to economic recessions at home. The late nineteenth century saw a fall in profits from manufacturing and a rise in financialization, not unlike the late twentieth century. Thus the closing of Cedarquist’s U.S. factories prompts him to look to China. Like Karl Marx, Conant argued against classical economists that supply and demand could not remain in balance, and that overproduction crises, or recessions, were not an aberration but a structural and repeating feature of capitalist markets. Conant proposed that what he called the problem of “oversaving” on the part of Americans could be counteracted by controlling foreign markets through Imperialism, which would not involve the political difficulties of direct rule as in “Colonialism.” Furthermore, while many celebrated the China market, Senator Albert Beveridge did so with Cedarquist-like rhetorical flourish. In a speech in 1900, while Norris was writing The Octopus, Beveridge argued: “The Pacific is our Ocean. More and more Europe will manufacture the most it needs, secure from its colonies the most it consumes. Where shall we turn for consumers of our surplus? Geography answers the question. China is our natural customer”. The China market has been called a myth not because there was no market, but because the idea of the market involved a complex narrative of world-historical developments in China and the West, and so structured plans in excess of actual conditions.

In Beveridge’s rhetoric, the myth emerges as a historical narrative grounding the United States’ destiny in the inevitable unfolding of natural processes. The problem of overproduction turns out not to be overproduction at all, but the lack of population increase. Food is not to be produced to feed the population; rather, ideally the population would be grown to meet the supply of food. As in many places where the text enters into elevated language such as this, the word wheat is capitalized to indicate its divinity. Wheat is the “concrete example” that stands in for all American goods, due to its seemingly natural production on the farm (growth) and consumption as food (digestion). Food is the key commodity of the coming twentieth century, where population must be made to depend on the global market. Whereas Cedarquist laments that the European population does not increase to meet U.S. supply, a suitable population does exist in China. Together with the standard Malthusian argument of an ever-expanding number of bodies that have overrun a limited food supply, there is also a decline in the quality of Chinese food, and so the danger is to each individual body. The supposed inability of the Chinese to feed themselves, and specifically the deficiency of their rice crops, is a boon to American agribusiness. Norris did not invent this idea; it reflects one competing American view of Chinese agriculture at the time, which will be taken up in detail in the following chapter. Briefly, a common view among nineteenth-century Americans had been that China was the preeminent traditional agrarian society; around the turn of the century, however, as China’s position in the world continued to decline and the U.S.’s continued to rise, the agrarian hierarchy was also reversed, and soon American agricultural experts such as Lossing Buck began traveling to China to teach. Indeed there was a crisis in the Chinese rural economy at the time, though Norris does not find the cause in European or American interventions, nor even in domestic political failings.

He reverses the causality so that Empire will deliver food to Asia rather than famine, and moreover applies the naturalist trope of degeneracy to Chinese agricultural production. Chinese agriculture does not have an economic problem of production or circulation, the two great “watchwords” of American development, but instead a biological problem, the degeneration of the species itself. As we will see below, agriculture is understood in The Octopus to be propelled by a vital force, the nutritive quality perhaps, that is passing away in the Orient, replaced by the younger vigor of the wheat. The sublime hunger of the Chinese can never actually be relieved, so California wheat production can continue to expand indefinitely, without ever again saturating the market. Cedarquist predicts how such market “effects” will continue to help them in Europe, yet somehow not hurt them in China: “When in feeding China you have decreased the European shipments, the effect is instantaneous. Prices go up in Europe without having the least effect upon the prices in China” . China remains insulated, unaffected by changes in the rest of the world. It is simultaneously the key to international business success, and forever outside of the world market, playing a supplementary and ultimately mystical role. As the text continues, the mathematical sublime established in the Chinese population is transferred to American wheat, which itself becomes infinite: “We hold the key, we have the wheat,—infinitely more than we ourselves can eat” . The sublime quality of the wheat has actually begun with its infinite consumption in China, and then has been logically extended to an infinite production in the U.S. Here we see most clearly how the myth of the China market is the condition of possibility for imagining an infinitely expanding agricultural commodity production. This is how the qualities of the hungry Chinese body, discussed in more detail below, play into the transformation of the meaning of the land in the American West at the “close” of the frontier. The key to China’s role in the novel is that it is not simply a new market, but one where the “laws” of markets cease to apply; it is the limit point of capitalism beyond the horizon.

Norris uses the adjective “vague” throughout the novel to indicate a character’s intentions when his or her reason and common sense are overwhelmed by emotional excitation. This can happen either in business dreams, as here, or in feelings of romantic love. The novel continuously oscillates between the precise calculation of grain rates, land values, and train schedules, and the “vague” stirrings of personal ambition, revenge, and love. This language encapsulates the tension between the realistic and the romantic that we saw earlier. In this passage, we see that the same word sums up the Orient, another object of ambition and love. Whereas Cedarquist has just given a pseudo-scientific account of nutrition and population figures to argue for the China trade, it is the fact that he cannot prove any of it that makes it so desirable; it must remain mysterious. By flowing over the water to China, the wheat itself becomes water. By evading the Trust, they become a trust. As many commentators have noted, The Octopus constantly invokes the popular hostility toward the middleman in late nineteenth-century American agrarianism while ultimately suggesting that all capitalist enterprise, even farming, operates on the same principles. This sense of geography as destiny follows from Frederick Jackson Turner’s thesis on the historical end of the frontier in the American West, widely influential from Norris’s time well into the twentieth century. In “The Significance of the Frontier in American History,” a paper delivered to the American Historical Association in 1893, Turner argued that the frontier had been the decisive factor in shaping the course of U.S. history, and that the end of the frontier meant the closing of the first period of that history. The Octopus is addressed to Turner’s thesis in a double sense: the advent of industrial agriculture proves that California is no longer a frontier, while the interest in China expands the westward push beyond the continent. As Turner put it, “Up to our own day American history has been in a large degree the history of the colonization of the Great West. The existence of an area of free land, its continuous recession, and the advance of American settlement westward, explain American development”. Here colonization is understood as overtaking and ruling “free” land. Many commentators at the time and since have seen the frontier thesis as underlining the importance of expansion across the Pacific. What separates the actions around the Spanish-American War, the era in which U.S. politicians openly debated imperialism as a policy, from the earlier settler colonial policies on the continent and Hawaii is first that these areas of Asia are not imagined as empty “free land,” and second that they are occupied for their strategic geographical positions, specifically for access to the China market.

For Frank Norris, moreover, the end of the frontier can be pinpointed to the specific moment of U.S. military action in China, when U.S. troops joined with an international force to put down the Boxer Rebellion in 1900. In a significant essay that has received little attention from critics, “The Frontier Gone at Last,” he wrote that “[u]ntil the day when the first United States marine landed in China we had always imagined that out yonder somewhere in the West was the borderland where civilization disintegrated and merged into the untamed”. Once the marines have landed, that is, Americans can no longer imagine that there is a frontier to the west. The frontier is by definition untamed, uncivilized, whereas China is understood to be a civilization of ancient provenance; in The Octopus, as we have seen, it is in fact the first empire, ancestor of the present U.S. Thus moving into this area is no longer frontier expansion, but meeting, in Turner’s words, “other growing peoples [to be] conquered”. Finally, “the day” when the marines landed and the frontier vanished took place while Norris was writing the novel, which perhaps partly accounts for Cedarquist’s oracular style. Norris’s conception of the U.S. encounter with China as the historical as well as geographical end to the frontier is what links the land dispute plot to the dream of the China market. When the ranchers read the circulars advertising land that is virtually free, they are still operating with a frontier mentality. Their leader, Magnus Derrick, in particular is presented as a veteran of the gold rush, a ’49er, who has shifted to ranching as a new form of prospecting. When the railroad comes to charge the current market value, however, the frontier has been closed. Commentators on the novel have tended to analyze either the environmental meaning of the new agriculture or the representation of China and the Chinese, but not both; the structural connection between these two foci needs to be emphasized. As Cedarquist explains to the group, there can be no going back to the economics of the frontier, but they must compete in an industrial capitalist market. The only way profits can be guaranteed in this new world is through the China market, and in order to secure this ideologically, Asia must be seen as having been America’s destiny all along. Thus, I follow William Conlogue’s argument that The Octopus should not be understood as pastoral, the term that is most often used to describe the representation of the land in American literature. In the U.S. context, the pastoral is used slightly differently from its classical meaning in the Western tradition, in which the countryside is imagined by urban cultural elites. Instead, Leo Marx famously argued that Americans’ attitudes toward rural space betrayed a contradiction, both idealizing the scene of natural purity and simultaneously displaying enthusiasm for industrialization. Literary writers displayed this tension by producing a compromise formation captured in Marx’s paradoxical term “complex pastoral,” which is somewhat analogous to the classical pastoral’s position as the cultivated middle space between the city and the wasteland. The Octopus is one of his key examples, not only because the railroad is the paradigmatic “machine in the garden” but more specifically because of the novel’s vivid depictions of industrial agriculture as the sexual union of machines and the soil.
Whereas Marx establishes the complex pastoral as a trope that repeats across the full range of fiction and nonfiction genres, for Walter Benn Michaels The Octopus represents something more specific, which is the “central problem for naturalism, the irruption in nature of the powerfully unnatural”.

A critical factor known to affect crop yield in a given field is the crop rotational history of that field

Farmers make a wide range of decisions regarding the management of their crops, involving pest management, planting/harvest dates, fertilization, irrigation, and, as we focus on in this study, crop rotation. These decisions, along with external factors outside farmers’ control, such as weather, are likely to affect crop performance and yield substantially. A rigorous quantitative understanding of the factors, including farmer management decisions, that affect crop yield is an essential prerequisite for developing management strategies that maximize yield. There are several possible mechanisms by which the crops previously grown in a field can affect crop yield. First, different crops have different effects on the nutrient composition of the soil, so the identities of crops previously grown in a field can affect nutrient availability and crop yield. For example, nitrogen-limited crops can benefit from rotation with nitrogen-fixing legumes, and phosphorus nutrition in California cotton is shaped by whether or not the previous crop received phosphorus fertilizer. Second, certain crops may increase the local abundance of particular insect pests and pathogens. Since different crops are often susceptible or resistant to different pathogens and pests, the identities of the crops recently grown in a field can affect yield. For example, if one crop increases local abundances of an insect pest that also attacks a second crop, planting the second crop immediately following the first may lead to decreased yield resulting from attack by the built-up local pest population. In contrast, such a yield depression could potentially be averted if the second crop were planted following a crop that does not lead to local accumulation of the pest.

In monocultures of wheat, substantial yield declines have been noted and attributed to the buildup of the soil-borne fungal pathogen Gaeumannomyces graminis. Third, many studies have shown that a field’s crop rotational history can strongly affect weed densities. Numerous other mechanistic explanations for the yield effects of crop rotation have also been suggested. Crop rotation has been practiced for thousands of years; evidence for its inception dates back to ancient Roman and Greek societies. Experimental studies on the effects of crop rotation first appeared in the early 20th century, revealing that growing crops in rotation led to increased crop yields of up to 100% compared to continuous planting of a single crop. Interest in the yield effects of crop rotation waned during the middle of the 20th century, due to the increasing availability of cheap fertilizers, insecticides, and herbicides. However, crop rotation continues to be a relevant and important practice; low-input farming remains desirable due to the costs of fertilizers and pesticides, and fertilizer and pesticide applications often cannot fully compensate for the benefits afforded by crop rotation. In addition, the significant environmental and public health concerns surrounding fertilizer and pesticide use highlight the desirability of increasing crop yield through alternative practices such as crop rotation. The effects of rotational histories on yield are well understood for some crops, such as corn, where rotation is recognized to be critical in avoiding the buildup of corn rootworms. However, for many crops, the direction, magnitude, and mechanism of the effect of crop rotational histories on crop yield remain poorly understood.

Cotton is one such crop. Experimental field studies of the effect of crop rotation on cotton yield have demonstrated increased cotton yield, compared to continuous cultivation of cotton, when cotton is grown in rotation with sorghum, corn, and wheat. Despite these useful results, only a small subset of possible rotations has been studied, experiments have been restricted to plots significantly smaller than typical commercial cotton fields, and mechanisms for these effects remain poorly understood. To help address these limitations, we seek to expand upon this work by exploring the effects of crop rotational histories on yield in commercial cotton fields in California, using an “ecoinformatics” approach capitalizing on existing observational data gathered by growers and professional agricultural pest consultants. In recent years, there has been a surge in research and interest involving the rapidly emerging field of “big data.” The big data movement has been fueled by several developments, including a dramatic increase in the magnitude of data generation, an improved ability to cheaply store, manipulate, and explore massive datasets, and the development of new analytic methods. Most importantly, the movement has been driven by a growing realization that existing data, and data generated as a byproduct of our everyday lives, can be leveraged to explore key questions about nature and human behavior, even if the data were not collected for this purpose. Ecoinformatics is a nascent field focused on harnessing the power of big data to address questions in environmental biology. Ecoinformatics approaches typically involve the analysis of large datasets, the synthesis of diverse data sources, and the analysis of pre-existing, observational datasets. In some commercial agricultural settings, farmers, along with hired consultants, collect a great deal of regular data about their fields that are used to guide real-time crop management decisions, such as the timing of pesticide applications.

By capitalizing on data that are already generated as a byproduct of commercial agriculture, ecoinformatics provides a low-cost means of obtaining a large dataset that can be used to explore key questions in agricultural biology, some of which might be too difficult or too costly to explore experimentally. Furthermore, the large size of datasets created for ecoinformatics can afford greater statistical power than could possibly be generated through experimental work. Experimentally studying the yield effects of crop rotational histories is challenging for several reasons. There is a plethora of possible rotational histories, which means that a large number of treatments would be required to explore the space of possible rotational histories thoroughly. Furthermore, experimentally studying effects of crop rotations requires experiments spanning several growing seasons, which may be logistically challenging. Finally, in order to maintain realism and applicability to commercial fields, which are typically quite large, sizeable experimental plots would be required, especially in light of research suggesting that landscape composition as far as 20 km from a focal field can affect the densities of agricultural pests in that field. While yield effects of non-mobile factors such as soil characteristics may be readily detected through small-plot experimentation, the effects of highly mobile arthropods may only be detected at much larger spatial scales. An ecoinformatics approach offers attractive solutions to these challenges. Since we analyze a large preexisting dataset that includes over a thousand records, a diversity of the possible crop rotational histories already exists in the dataset. In addition, our dataset spans 11 years of data, so the data span the temporal scale necessary to ask questions regarding effects of multi-year rotational histories. And, since the data come from the exact setting where we wish to apply our results, the data are realistic and capture the appropriate spatial scale of commercial agriculture. First, we sought to identify which crop rotational histories are associated with increased and decreased cotton yield, and to quantify these yield effects. We then explored possible explanations for the yield effects identified in the previous step by examining the associations between crop rotational histories and pest abundance.

We employed a hierarchical Bayesian modeling approach, fitting linear mixed models to explore our questions about the effects of crop rotational histories on cotton yield. Mixed models combine the use of random effects and fixed effects, making them ideally suited for analysis of data that are structured, or clustered, in some known way, such that separate observations from within clusters are expected to be similar to one another. When we model a source of clustering using a random effect, we assume that each cluster-specific parameter was drawn from a common distribution, and we estimate the parameters of this distribution from the data. We use this common distribution as the prior when calculating the posterior distribution of each cluster-specific parameter. The parameters of the distribution of cluster-specific parameters have posteriors that are estimated from the data, typically after assuming uninformative priors for the hyperparameters.
Using a common, empirical prior for all cluster-specific parameters allows pooling of information across clusters, so that data from each cluster help inform the estimates for every other cluster. Assuming all clusters are identical introduces high bias and tends to underfit the data, whereas estimating a separate fixed effect for each cluster introduces high variance and tends to overfit the data; using a random effect provides a compromise between these two sources of error.
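In generic notation (ours, not the study's), this partial-pooling structure can be written as

$$ y_{ij} \sim \mathcal{N}(\theta_j,\ \sigma^2), \qquad \theta_j \sim \mathcal{N}(\mu,\ \tau^2), $$

and, conditional on the hyperparameters, the posterior mean of each cluster-specific parameter is a precision-weighted compromise between that cluster's sample mean and the overall mean,

$$ \mathbb{E}\left[\theta_j \mid y, \mu, \sigma^2, \tau^2\right] \;=\; \frac{(n_j/\sigma^2)\,\bar{y}_j + (1/\tau^2)\,\mu}{n_j/\sigma^2 + 1/\tau^2}, $$

so that as $\tau^2 \to 0$ all estimates collapse to the fully pooled mean (complete pooling, high bias), while as $\tau^2 \to \infty$ they approach the unpooled, cluster-specific means (no pooling, high variance).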

In this dataset, there are several plausible sources of clustering. First, we expect the data to be clustered by field, since there likely exist field-specific factors that affect yield, such as soil characteristics, local climate, and grower agronomic and pest management practices. We controlled for variable yield potential between fields by including field identity as a random effect in our models. Random effects allow pooling of information across clusters, so they are particularly useful when there are few observations from some clusters, a situation in which it is difficult to accurately estimate each per-cluster parameter with only the data from that one cluster. Since there are three or fewer records for 78% of the fields in our database, we feel that including field as a random effect was preferable to trying to estimate field-specific fixed effects with very few observations per field. Additionally, including field as a random effect provides a straightforward way to make predictions for fields not represented in our database. Because modeling field as a random effect involves estimating the distribution from which field-specific parameters are drawn, we can sample a field-specific parameter from this distribution if we wish to make predictions about a previously unobserved field. Uncertainty in this field-specific parameter can be propagated by simulating many samples from this distribution, while simultaneously accounting for uncertainty in the parameters of the distribution itself. If we were instead to model field as a fixed effect, we would not estimate a distribution of field-specific parameters; we would only estimate parameters for the specific fields in our database, leaving us with no obvious way to make inferences about new fields. Second, we expect that our data are clustered by year, since there is substantial between-year variability in climate, particularly in the winter and early spring. Climatic variables can affect crop performance, planting date, and insect pest populations, all of which can in turn affect cotton yield. To control for and quantify variation in yield due to year-specific factors, we included year as a random effect in our models. Our reasons for including year as a random effect are the same as those for field: there are few observations from some years, and we may wish to make predictions for future years not covered by the existing database.

All models were fit using the No-U-Turn Sampler variant of Hamiltonian Monte Carlo implemented in Stan version 1.3.0, accessed through the rstan package in R. We ran three chains from random initializations, each with 10,000 samples, and discarded the first 5,000 samples from each chain as burn-in. Inferences were based on the remaining 15,000 samples. We checked convergence by confirming that R̂, an estimate of the potential scale reduction of the posterior if sampling were continued indefinitely, was near 1.

To explore the yield effects of the crop grown in the same field the previous year, we fit a linear mixed model with yield as the response variable. The predictor variable of primary interest was the identity of the crop grown in that field the previous year, which was included as a fixed effect. Given that we are working with an observational dataset, a critical step in making meaningful inferences about the variable of primary interest, the crop grown the year before, was to control, to the extent possible, for potentially confounding variables that could generate spurious correlations and taint the validity of our inferences about crop rotation.
To control for variable yield potential between fields and years, field and year were included in the model as random effects. The field terms control for the possibility that some fields may have higher yield potential due to their location, soil characteristics, or growing practices; the year terms control for the substantial year-to-year variation in cotton yield, which likely results from yearly weather differences. A term indicating cotton species was included in the model to account for yield differences between cotton species.
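A minimal simulation sketch of the model structure just described is given below. The effect sizes, cluster counts, and crop and species labels are hypothetical placeholders; in the actual study these quantities were estimated from the grower dataset with Stan rather than fixed in advance.

```python
import numpy as np

# Illustrative simulation of the data-generating process implied by the model
# described above: yield = overall mean + fixed effect of the previous year's
# crop + fixed effect of cotton species + field random effect + year random
# effect + residual noise. All numbers are hypothetical.
rng = np.random.default_rng(0)

prev_crops = ["cotton", "tomato", "wheat", "alfalfa"]
species = ["speciesA", "speciesB"]
n_fields, n_years, n_obs = 120, 11, 1000

mu = 2.5  # hypothetical overall mean yield (bales/acre)
beta_prev = {"cotton": -0.15, "tomato": 0.20, "wheat": 0.05, "alfalfa": 0.10}
beta_species = {"speciesA": 0.0, "speciesB": -0.10}

# Random effects: each field and year gets a draw from a common distribution
# whose spread would, in the real analysis, be estimated from the data.
field_eff = rng.normal(0.0, 0.3, size=n_fields)
year_eff = rng.normal(0.0, 0.2, size=n_years)

field = rng.integers(0, n_fields, size=n_obs)
year = rng.integers(0, n_years, size=n_obs)
prev = rng.choice(prev_crops, size=n_obs)
spec = rng.choice(species, size=n_obs)

yield_obs = (
    mu
    + np.array([beta_prev[p] for p in prev])
    + np.array([beta_species[s] for s in spec])
    + field_eff[field]
    + year_eff[year]
    + rng.normal(0.0, 0.25, size=n_obs)  # residual noise
)

# Predicting a field not in the dataset: draw a new field effect from the
# field-effect distribution instead of reusing an observed field's parameter.
new_field_eff = rng.normal(0.0, 0.3, size=5000)
print("mean simulated yield after tomato, new field:",
      round(mu + beta_prev["tomato"] + float(new_field_eff.mean()), 2))
```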

We explored the use of location as a proxy variable but the results remained similar

When cohort participants were 6 and 12 months old, most households showed signs of moderate or extensive mold at either visit. At age 7, based on maternal report, the majority of families were living below the Federal Poverty Level, 15.7% of cohort children experienced a runny nose without a cold within the past year, 16.3% displayed asthma symptoms, and 6.1% were currently taking asthma medication. Table 2 shows the distributions of wind-weighted fumigant use within 8 km of CHAMACOS residences during the prenatal and postnatal exposure periods. Methyl bromide and chloropicrin were the most heavily used fumigants during the prenatal period, with mean ± SD wind-adjusted use of 13,380 ± 10,437 and 8,665 ± 6,816 kg, respectively. Reflecting declines in methyl bromide use, the use of chloropicrin was greater than the use of methyl bromide during the postnatal period, with median values of 127,977 and 109,616 kg during the 7 years, respectively. When we examined correlations within each fumigant, use within 3, 5, and 8 km of the home was highly correlated for each fumigant. Fumigant use during the prenatal and postnatal periods was also highly correlated for methyl bromide and chloropicrin, but was not correlated for metam sodium use and was inversely correlated for 1,3-DCP use. We also examined correlations among fumigants and observed high correlations between prenatal methyl bromide and chloropicrin use and between prenatal metam sodium and 1,3-DCP use. Prenatal methyl bromide and chloropicrin use were negatively correlated with prenatal metam sodium and 1,3-DCP use.

Adjusted associations between a 10-fold increase in the amount of fumigants applied within 8 km of the home and the highest lung function measurements are presented in Table 4. We did not observe any significant adverse relationships between prenatal or postnatal fumigant use within 8 km and lung function. A 10-fold increase in wind-adjusted prenatal methyl bromide use within 8 km was associated with higher FEV1 and FEF25–75. Additionally, a 10-fold increase in wind-adjusted prenatal chloropicrin use within 8 km was positively associated with FEF25–75.

Associations between methyl bromide and chloropicrin use and lung function observed in the prenatal exposure period were not observed in the postnatal period. Results were similar, although no longer statistically significant, for prenatal methyl bromide and chloropicrin use within 5 km of residences. There were no associations between fumigant use within 3 km of residences and lung function. We did not observe associations between postnatal fumigant use at any distance and lung function measurements, or between fumigant use during the year prior to the assessment and lung function measurements. In sensitivity analyses using multivariable models that included other pesticide exposures previously related to respiratory symptoms and lung function (childhood urinary DAP metabolites, proximity to agricultural sulfur use during the year prior to lung function assessment, and prenatal DDT/DDE blood concentrations), the results were very similar to those presented in Tables 3 and 4. For example, the relationships between prenatal methyl bromide use within 8 km and FEV1 and FEF25–75 were very similar. Prenatal fumigant use was generally not correlated with other pesticide exposures that we found to be associated with lung function in this cohort, except for weak correlations between agricultural sulfur use within 1 km during the year prior to spirometry and prenatal use of metam sodium and 1,3-DCP (r = 0.14 and r = 0.26, respectively). The results were very similar when we only included children with two acceptable reproducible maneuvers in the analyses. The results were also similar when we excluded those currently using asthma medication, excluded the one outlier from FEV1 models, or used inverse probability weighting to adjust for participation bias. Risk ratios estimated for asthma symptoms and medication using Poisson regression were nearly identical to the ORs presented in Table 3 and Supplemental Table 2. We did not observe effect modification by asthma medication use. Maternal report of child allergies modified the relationship between FEV1 and prenatal proximity to methyl bromide use; we observed higher FEV1 only among children without allergies.

After adjusting for multiple comparisons, none of the associations reached significance at the critical p-value of 0.002 based on the Benjamini-Hochberg false discovery rate. This is the first study to examine lung function or respiratory symptoms in relation to residential proximity to agricultural fumigant use. We found no significant evidence of reductions in lung function or increased odds of respiratory symptoms or use of asthma medication in 7-year-old children with increased use of agricultural fumigants within 3–8 km of their prenatal or postnatal residences. We unexpectedly observed a slight improvement in lung function at 7 years of age with residential proximity to higher methyl bromide and chloropicrin use during the prenatal period, and this improvement was limited to children without allergies. Although these results remained after adjustment for other pesticide exposure measures previously related to respiratory symptoms and lung function in our cohort, they did not remain significant after adjustment for multiple comparisons. There is a strong spatial pattern of methyl bromide and chloropicrin use during the pregnancy period for our study because of heavy use on strawberry fields near the coast in the northern portion of the Salinas Valley. There could be other unmeasured factors confounding the relationship we observed between higher prenatal fumigant use and improved lung function. Previously published studies of prenatal exposure to air pollutants and lung function have generally observed links to alterations in lung development and function and to other negative respiratory conditions in childhood, and plausible mechanisms include changes in maternal physiology and DNA alterations in the fetus. Improved lung function was associated with higher estimates of recent ambient exposure to hydrogen sulfide in a study of adults living in a geothermal area of New Zealand. However, hydrogen sulfide has been shown to be an endogenously produced “gasotransmitter” with anti-inflammatory and cytoprotective functions, and is being explored for protection against ventilator-induced lung injury.
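For reference, the sketch below implements the standard Benjamini-Hochberg step-up procedure referred to above; the p-values are invented for illustration and are not those from this analysis.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of which hypotheses are rejected at FDR level q
    using the standard Benjamini-Hochberg step-up procedure."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                       # sort p-values ascending
    thresholds = q * np.arange(1, m + 1) / m    # BH critical values i*q/m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()          # largest i with p_(i) <= i*q/m
        reject[order[: k + 1]] = True           # reject all hypotheses up to k
    return reject

# Hypothetical p-values standing in for the many fumigant-outcome tests.
pvals = [0.001, 0.004, 0.03, 0.12, 0.20, 0.45, 0.68, 0.91]
print(benjamini_hochberg(pvals, q=0.05))
```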

In previous studies of this cohort, we found increased odds of respiratory symptoms and lower FEV1 and FVC per 10-fold increase in childhood average urinary concentrations of organophosphate pesticide metabolites. Other studies of prenatal pesticide exposure and respiratory health in children have mostly evaluated exposure using cord blood concentrations of DDE, a breakdown product of DDT, and have observed an increased risk of respiratory symptoms and asthma with higher levels of DDE. Most studies of postnatal pesticide exposure and respiratory health in children have utilized self-reported information from mothers to assess pesticide exposure and have observed higher odds of respiratory disease and asthma with reported pesticide exposure. None of the previous studies of pesticide exposure and respiratory health have specifically evaluated fumigants; doing so is a strength of the present study. Another strength is that CHAMACOS is a prospective cohort followed since pregnancy, with extensive data on potential confounders of respiratory health and on other measures of pesticide exposure. Our study also had some limitations. We did not have information on maternal occupational exposure to fumigants or the geographic location of maternal workplaces during pregnancy, and we did not have the location of schools during childhood. These limitations likely resulted in some exposure misclassification during both the prenatal and postnatal periods. An important consideration in this study is that we estimated fumigant exposure using proximity to agricultural fumigant applications reported in the PUR data, which is not a direct measure of exposure. However, the PUR data explain a large amount of the variability in measured fumigant concentrations in outdoor air. In conclusion, we did not observe adverse associations between residential proximity to agricultural fumigant use during pregnancy or childhood and respiratory health in the children through 7 years of age. Although we did not observe adverse effects of fumigants on lung function or respiratory symptoms in this analysis, we have seen adverse associations in previous analyses of the CHAMACOS cohort between residential proximity to higher fumigant use and child development. We observed an association between higher methyl bromide use during the second trimester of pregnancy and lower birthweight and restricted fetal growth. We also observed decreases of ~2.5 points in Full-Scale intelligence quotient at 7 years of age for each 10-fold increase in methyl bromide or chloropicrin use within 8 km of the child’s residences from birth to 7 years of age. Future studies are needed in larger and more diverse populations with a greater range of agricultural fumigant use to further explore the relationship with respiratory function and health.

The fact that the annual water used in growing California agricultural products is far greater than the total urban water use is well known. As pressures on water resources intensify globally, there is a growing interest in evaluating the complex ways in which human activities impact the world’s water resources. Globally, the majority of water consumption is used in the production of agricultural products. As a result, the agriculture industry is by far the most dominant water-using sector. To describe the amount of water used throughout the production and distribution process to produce a final product, researchers have used the term ‘water footprint’. Water footprint assessment has emerged as a tool for quantifying consumption of goods and services in one location and the cumulative water use associated with the production of those goods and services in other, distant locations. Following the introduction of the water footprint concept, various studies quantified global virtual water footprints and assessed virtual water flows between nations. Virtual water flows and water footprint assessments became important elements in evaluating local, national, and global water budgets, as reported by Chen and Chen, Duarte et al., Guan and Hubacek, Hubacek et al., Velazquez, Yang et al., Yu et al., and Zhao et al. Mekonnen and Hoekstra showed that the international virtual water trade in agricultural and industrial products was 2,320 billion cubic meters per year in the period 1996-2005, equivalent to 26% of the global water footprint of 9,087 Gm3. It has also been noted that, although practically every country participates in the global virtual water trade, few governments explicitly consider assessing the virtual water footprint and its impact in their management policies. The majority of water footprint studies have examined international virtual water footprints between nations. A few have also analyzed virtual water footprints at a sub-national or state level, such as regions within Australia, China, India, and Spain. Within the United States, two studies have been conducted. Fulton et al. reported that California imported more than twice as much virtual water as it exported and that more than 90% of its water footprint is associated with agricultural products. Mubako et al. quantified virtual water for California and Illinois, and reported that the two states were net virtual water exporters in agricultural trade. Previous studies on virtual water footprints only aimed to quantify the cumulative water footprint required to produce a final product. No study has focused specifically on quantifying the physical water content contained in agricultural commodities and the associated evapotranspiration being exported. The total exported water in agricultural products is distinctly different from the virtual water footprint in that the former is physically exported outside of a geographical boundary, whereas the majority of the water used in quantifying the virtual water footprint may remain within the local geographical boundary and be absorbed or reused in some way. The exported water content in crops is permanently lost and is no longer available to the natural hydrologic cycle. This research seeks to fill this knowledge gap by quantifying the exported water contained in agricultural products and the associated induced evapotranspiration.
The research also seeks to analyze the energy advantage of applying reclaimed water in crop irrigation, by assessing the carbon footprint reduction and monetary savings for using reclaimed water in arid and semi-arid regions. Fresh water availability has always been the major constraint to growth and development in California.
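A minimal sketch of the accounting distinction drawn above is given below; the crop names, export masses, moisture fractions, and evapotranspiration figures are all hypothetical placeholders rather than values from this research.

```python
# Minimal sketch of the accounting described above, with hypothetical numbers.
# For each exported commodity, the physically exported water is the moisture
# embedded in the shipped product; the induced evapotranspiration (ET) is the
# field water consumed to grow it. Neither figure is taken from the study.
crops = {
    # name: (export mass in tonnes, moisture fraction of product,
    #        crop ET in cubic meters of water per tonne of product)
    "almonds":  (100_000, 0.05, 10_000),
    "alfalfa":  (500_000, 0.10, 900),
    "tomatoes": (200_000, 0.94, 300),
}

for name, (mass_t, moisture, et_m3_per_t) in crops.items():
    embedded_m3 = mass_t * moisture        # 1 tonne of water is roughly 1 m^3
    induced_et_m3 = mass_t * et_m3_per_t
    print(f"{name:9s} exported water: {embedded_m3:>12,.0f} m^3, "
          f"induced ET: {induced_et_m3:>14,.0f} m^3")
```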

Agriculture’s prosperous condition in the 1970s was followed by a recession in the early 1980s

The Salinas River sampling site has been used as a least-impacted reference site in previous toxicity studies and is generally classified as non-toxic, based on acute exposure studies. This increase in potency after a rain event is consistent with an influx of pesticides, and the chemical analyses show higher levels of several pesticides of concern in November at all three sites. Climate change is altering rainfall patterns in many areas of the world, and understanding how these changes may impact sensitive aquatic systems is crucial for monitoring water quality. Surface water exposure caused significant changes in D. magna swimming behavior both before and after a first flush event, even at low concentrations. In September, prior to first flush, we detected strong dose-response patterns in total distance moved and a log-linear dose response in photomotor response. Daphnia magna exposed to all concentrations of surface water in September increased their movement in response to light stimulus, while control groups reduced their activity. This may have implications for survival in natural populations. Individuals that cannot respond to predator cues, or that show impaired or altered responses, may face an increased risk of predation. It is important to note that changes in swimming behavior in organisms exposed to water samples from Alisal Creek in September may have been partially capturing a lethal response. This treatment group had significant mortality in all exposure concentrations, so it is possible these individuals were exhibiting not only sublethal, but also delayed lethal toxic responses. Future studies should consider including recovery periods in their experimental design and analyses to parse out whether behavioral impacts are reversible, indicative of long-term effects, or precursors of subsequent mortality.
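As a sketch of the kind of log-linear dose-response fit described above, the snippet below regresses a behavioral response on the logarithm of exposure concentration; the concentrations and responses are hypothetical, not data from these exposures.

```python
import numpy as np

# Illustrative log-linear dose-response fit: response modeled as a linear
# function of log10(concentration). Values are hypothetical placeholders.
conc = np.array([6.0, 12.5, 25.0, 50.0, 100.0])    # % ambient surface water
response = np.array([4.0, 9.0, 15.0, 22.0, 30.0])  # % change vs. controls

slope, intercept = np.polyfit(np.log10(conc), response, deg=1)
print(f"fitted response ~ {intercept:.1f} + {slope:.1f} * log10(concentration)")
```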

Due to the high mortality observed for Quail Creek in September, we were unable to make any behavioral comparisons. It is notable that the level of methomyl detected at this site was greater than three times the EPA chronic fish exposure level, and it is likely that methomyl represents a main driver of the toxicity for this site. It is possible that additional contaminants are present at this site, which were not included in our analysis. Many pharmaceuticals are known to cause hyperactivity and have been detected in wastewater at other sites in California. Taken together, these findings illustrate the importance of conducting sublethal assessments to link physiological responses to chemical monitoring data. After the first flush , we measured hypoactivity for all sites during at least one light condition, in at least one concentration. Many of the pesticides we detected in surface water samples are known to reduce the swimming speed and distance of D. magna at concentrations relevant to those detected in our samples. We detected changes to the photomotor responses of D. magna exposed to low concentrations of surface water from all three sites when compared with controls, demonstrating biologically relevant impacts. Despite low mortality observed in the Salinas River site during both testing dates, we detected altered behavior even at the highest dilution of 6% ambient water in November. Hypoactivity and altered photomotor responses may reduce the capacity of D. magna to follow normal behaviors, such as patterns of diel vertical migration and horizontal distribution, thus increasing predation risk and reducing overall fitness. We measured significant changes in the swimming behavior of D. magna after acute exposures to CHL and IMI in single and binary chemical exposures, and as components of agricultural surface waters both before and after a first flush event.

Surface waters contained complex mixtures including CHL and IMI, as well as other pesticides of concern, including neonicotinoids, pyrethroids, carbamates, and organophosphates. We determined that swimming behaviors of D. magna are sensitive endpoints for sublethal assessments of the tested pesticides and for surface water exposures. We detected chemical-specific changes in D. magna swimming behavior for both CHL and IMI exposures. Imidacloprid exposure at environmentally relevant concentrations caused hypoactivity for both concentrations tested, across both dark and light conditions, following a dose-response pattern. The increase in activity over the light period represents a return to baseline following a change in light conditions. Our results are consistent with previous findings: IMI negatively impacts nerve conduction and alters swimming behavior in D. magna and is known to inhibit acetylcholinesterase. Past research has shown AChE inhibition is linked to changes in swimming response, and that a 50% decrease in AChE activity can cause enough change in swimming behavior in D. magna to be described as toxic. In a recent study examining the effects of IMI on the amphipod Gammarus fossarum, IMI stimulated locomotor activity at low exposure concentrations and inhibited activity at higher concentrations. Daphnia magna are particularly tolerant to neonicotinoids, illustrating the potential for impacts in other, more sensitive organisms known to inhabit IMI-polluted waterways. We detected significant hypoactivity in individuals exposed to CHL under dark conditions. This is consistent with previous studies on D. magna demonstrating that CHL is a known neurotoxicant for this species, causing changes in muscle contraction via interaction with the ryanodine receptor. Low levels of CHL exposure have been shown to produce dose-dependent inhibition of swimming and decreased responses to light stimulation in a recent study. Another recent study found that single-chemical exposures to CHL and IMI, among other chemicals, at low concentrations affected the total distance moved by D. magna.

We observed hypoactivity under dark conditions and hyperactivity under light conditions for D. magna after exposure to binary mixtures of CHL and IMI. Hyperactivity could suggest a possible disruption of signal transmission in the visual or nervous systems and has been observed for IMI exposures at low exposure levels in other studies. The hyperactivity observed in the low IMI exposure group was notable in that the response was the inverse of that seen in both single-chemical exposures performed at the same concentrations, potentially indicative of an antagonistic response. Our finding is partially consistent with Hussain et al., who also found hyperactivity under light conditions but no significant effect under dark conditions. It is relevant to note that our experimental design differed from that of Hussain et al., who used one exposure vessel containing 50 Daphnids per treatment group, whereas we used fewer Daphnia per exposure vessel, with six exposure vessels per treatment. For future studies, increased replication could improve the ability to determine whether small changes in total distance moved could also be significant. Considering the significance of our other treatments and endpoints, and that our replication exceeded that of many previously published studies, we propose that our experimental design was sufficient to detect many sublethal effects. Sublethal impacts can result in ecologically relevant effects on individual fitness, populations, and communities. In pesticide-contaminated aquatic environments, overall invertebrate biomass and diversity are reduced as sensitive individuals and species decline. With the increasing number of pesticides being detected in waterways worldwide, rapid and standardized testing approaches are urgently needed. For many species and chemicals of interest, biochemical reactions can visually manifest via behavioral changes, making behavior a highly integrative and informative endpoint for exposure. A meta-analysis of behavior in comparison with other toxicological endpoints, such as development, lethality, and reproduction, showed that behavioral analyses are advantageous for assessing the effects of environmental chemicals due to their relative speed and sensitivity. Behavioral assays possess great potential as rapid, high-throughput monitoring tools.

The world now seems prepared to seriously consider agricultural trade liberalization and domestic food and farm policy reform. The economic summits of the major western countries, the Organization for Economic Cooperation and Development, the World Bank, the International Monetary Fund, the General Agreement on Tariffs and Trade, and numerous other international agencies now recognize the necessity of multilateral and phased liberalization. In other words, a dramatic reduction in protection for agriculture throughout the world would appear to be the right answer. Simple economic analysis has demonstrated that, in a world in which pure competition maximizes net economic payoff, the deadweight losses resulting from current policy interventions in food and agriculture are enormous.

Unfortunately, we do not live in such a world: only second-best outcomes are possible, governments do not maximize social welfare, pure nondistortionary (that is, decoupled) transfers do not exist, political and economic markets are not separable, and policies for other sectors, especially general macroeconomic policies, are not perfectly designed and implemented. Simply put, there are many complications in evaluating agricultural and food policy reform. This paper will examine one in particular: the macroeconomic risk nations face in the implementation of food and agricultural policy reform. In all of the recent studies of agricultural trade liberalization and agricultural policy reform, little if any attention has been paid to the macroeconomic environment that might exist during the implementation phase of various proposals. This is indeed surprising because the origins of many farm policies can be traced directly to the macroeconomic environment. Moreover, the dynamic adjustment paths that would evolve following the implementation of particular reform proposals would be heavily dependent upon macroeconomic conditions, such as the level of real interest rates and exchange rates, the nature of monetary and fiscal policies (whether expansionary or deflationary), and so on. This paper focuses on four major themes. First, macroeconomic and international linkages are significant and must be recognized in any framework for policy design and reform. Second, the intercountry linkages of both agricultural and macroeconomic policies are especially important for less-developed countries. Third, political economic markets for policy reform exist, and governments throughout the world have an opportunity to supply reform through the reduction of transaction costs. Transaction costs can be reduced through alternative compensation schemes, which are motivated by behavioral analysis of political economic markets. And fourth, macroeconomic and international linkages are a major component in the design of flexible agricultural policies that can respond to changing conditions. These themes are used to examine agricultural policy reform and trade liberalization in the current environment.

Throughout much of the developed world, macro policies in the two decades following World War II afforded a unique period of macroeconomic stability. As a result, concern regarding the macroeconomic linkages with food and agricultural systems largely disappeared. In the early 1970s, with the major changes in monetary policies and central bank behavior, macroeconomic linkages were once again recognized as prime factors complicating agriculture and food policy. The roller coaster ride that agriculture has experienced over the last two decades has been significantly influenced by macro and international linkages. Recent history stands in sharp contrast to the basic stability of the 1950s and 1960s. This roller coaster is not unprecedented. For example, the period 1900 through 1915 is surprisingly similar to the 1970s, and the late 1920s through 1930s have some of the same characteristics as the 1980s. A longer historical perspective demonstrates that macroeconomic disturbances and their links to agricultural sectors throughout the world were central to the emergence of direct governmental intervention in food and agricultural systems.
For example, in the case of OECD countries, there have been abrupt increases in governmental intervention during periods of macroeconomic contraction accompanying downward movements in agricultural prices. The first major wave of increasing intervention in agriculture occurred during the last quarter of the 19th century, following several decades of trade liberalization. Prior to this, agricultural trade had expanded dramatically due to the removal of tariffs and import quotas and to the increasing availability of low-priced grain from the United States and Europe. The protectionism following this trade expansionary period was motivated by what was then referred to as Europe’s great depression. Policy responses varied across countries. England alone maintained a staunch free trade position, while Germany, France, and Italy restored agricultural tariffs from the mid-1880s onward. In Denmark and the Netherlands, falling grain prices encouraged the expansion of livestock activities. In the United States, despite expanding grain exports, farmers did not ignore depressed prices. The period from 1873 to 1896 witnessed increasing levels of farmer mobilization through the Grange and populist movements. Farmer demands were wide-ranging, but a major objective was a change in banking policy to promote inflationary expansion of money supplies. Lobbying efforts to this end continued into the twentieth century and were partially responsible for the institutional changes that created the Federal Reserve in 1913 and the federal land banks in 1916. The U.S. government’s massive intervention in agriculture in the 1930s followed a farm crisis that had its origins in the macroeconomic adjustments after World War I.

Use of tail water ponds and sediment traps also plays an important role in soil and water quality

California has committed to cutting greenhouse gas emissions to 40% below 1990 levels by 2030. As a sector, agriculture is responsible for 8% of state emissions. Approximately two-thirds of that is from livestock production; 20% from fertilizer use and soil management associated with crop production; and 13% from fuel use associated with agricultural activities. California plays an essential role in the nutritional quality of our national food system, accounting for, by value, roughly two-thirds of U.S. fruit and nut production, half of U.S. vegetable production and 20% of U.S. dairy production. Assembly Bill 32, California’s primary climate policy law, adopted in 2006, has spurred research into practices and technologies that could assist in reducing emissions and sequestering carbon. Here we report on more than 50 California-based studies prompted by this landmark legislation. We note that the California Department of Food and Agriculture, California Air Resources Board, California Energy Commission and California Department of Water Resources have been critical to funding much of the science reviewed here. This article grew out of conversations with state agencies concerning the need for a review of the current evidence base to inform emissions-reduction modeling and revisions to the state Climate Change Scoping Plan, which specifies net emissions reduction targets for each major sector of the California economy. It is important to note that the Scoping Plan states that work will continue through 2017 to estimate the range of potential sequestration benefits from natural and working lands. With over 76,000 farm and ranch operations in California, covering about 30 million acres, there are no one-size-fits-all solutions. But as we outline below, there are numerous opportunities to both reduce GHG emissions and sequester carbon across diverse agricultural operations: small to large, organic and conventional, crop and livestock.

Perhaps most importantly, many of these practices have cobenefits for water conservation, restoration and conservation of natural lands, or farm economics. Since 1984, farming and grazing lands have been converted to urban development at an average rate of 40,000 acres per year . At this rate, and considering the higher rate of emissions from urban versus agricultural land, slowing agricultural land conversion represents one of the largest opportunities for agriculture to contribute to California’s climate plan. Research from one county estimates that GHG emissions associated with urban landscapes are up to 70 times greater per acre than those from irrigated farmland when human emissions related to transportation, electricity, natural gas, and water are accounted for . With continued population growth in the state, policies that promote more energy efficient patterns of urban development are critical to meeting climate targets and preserving irreplaceable farmland. Models show that coupling such urban development policies with farmland conservation could reduce transportation and building related emissions from new residential development by 50% by 2050 under a low-emissions scenario . With 80% of California’s most productive rangeland privately owned, losses are projected at 750,000 acres by 2040 . Conversion of rangeland to urban uses may increase GHG emissions up to 100-fold depending on how the rangeland is managed, and conversion to irrigated agriculture may lead to increases of up to 2.5-fold . Land-use-related policies to reduce GHG emissions in California are still at an early stage. Several new incentive programs warrant future research to optimize their impact. These include the Sustainable Agricultural Lands Conservation Program , for purchase of conservation easements on farmland at risk of suburban sprawl development; the Affordable Housing and Sustainable Communities Program , supporting development of affordable housing within existing urban areas; and the Transformative Climate Communities Program , slated to provide GHG-reducing planning grants to disadvantaged communities beginning in 2017. Together with legislation requiring a regional Sustainable Community Strategy, these can create a land use planning framework in California to preserve farmland, reduce GHG emissions, and achieve other co-benefits such as improved quality of life, public health and social equity.

Soils are complex biological systems that provide ecosystem services and can be managed to store carbon, reduce emissions and provide environmental and economic co-benefits. The diversity of California agriculture requires different management strategies to mitigate GHG emissions or sequester carbon. Soil GHG emissions increase with soil moisture and nutrient availability. Significant reductions in GHG emissions can be achieved by shifting management practices to more efficient irrigation and fertigation systems such as micro-irrigation and subsurface drip. A comparison of subsurface drip versus furrow irrigation showed decreased GHG emissions in the former. While cover crops often increase GHG emissions, integrating more efficient irrigation with cover crop practices decreased nitrous oxide emissions two- to three-fold in California processing tomatoes. In semi-arid regions such as California, the long-term implementation of no-till practices reduced emissions by 14% to 34%, but only after 10 years of continuous management. Under shorter time horizons, emissions increased by up to 38%. Socioeconomic and biophysical limitations unique to California have led to low no-till adoption rates of roughly 2%. Improved nitrogen management provides a high potential for reductions in emissions, including emissions associated with applied fertilizer as well as emissions related to the production and transport of inorganic nitrogen fertilizer. N2O emissions respond linearly to fertilizer application in lettuce, tomato, wine grape and wheat systems in California. However, once fertilizer rate exceeds crop demand, emissions increase at a logarithmic rate. Fertilizer source has been broadly shown to influence N2O emissions. Only a few California studies compare synthetic fertilizer sources. One shows that ammonium sulfate reduced N2O emissions by approximately 0.24 to 2.2 kg N per acre compared to aqua ammonia. Another study comparing fertilizer sources found emissions reductions of up to 34%; however, the results were not statistically significant. Recently, California research has shown that the use of manure and green waste fertilizers can increase emissions when applied to the soil surface, particularly if their use is not timed to crop demand. Fertilizer source and timing, along with the use of nitrification inhibitors, are key areas for future research in the California context. Management practices have the potential to increase total soil carbon, but the magnitude and persistence of sequestration is dependent on inputs and time. In grasslands, pilot studies of carbon sequestration associated with compost application are being conducted to validate early findings throughout the state. For cultivated systems, in two long-term projects at UC Davis, soil carbon increased 1.4 and 2.3 tons per acre in the top 12 inches of soil over 10 years in cover cropped and organically managed soil, respectively. In an ongoing experiment at the UC Agriculture and Natural Resources West Side Research and Extension Center, no-till combined with cover cropping and standard agronomic practice in a tomato-cotton rotation system has increased soil carbon 5.3 tons per acre over 15 years compared to the standard tillage, no cover crop treatment.

In these two long-term studies, the soil carbon increase occurred between 5 and 10 years. However, when cover cropping and compost inputs ceased at the first site, soil carbon was rapidly lost. This shows that soil carbon sequestration is highly dependent on annual carbon inputs; if management changes, soil carbon is prone to return to the atmosphere. Given the reality of inconsistent management, rates of soil carbon sequestration that can be expected in row crop systems in practice are perhaps 10% of the values seen in these long-term research trials, namely in the range of 0.014 to 0.03 tons per acre per year. If soil carbon sequestration and storage are priorities, management plans and incentive structures should account for the wide variability of California soils and the need for consistent management over time. While any single soil and nutrient management practice may have limited impact on GHG emissions, many have well-documented co-benefits, including reductions in erosion, improved air quality, reduced farm machinery fossil fuel use, reduced nitrogen leaching, enhanced water infiltration and reduced soil water evaporation, and increased carbon stocks below the root zone to improve carbon sequestration. Integrated or diversified farming systems are multipurpose operations that may produce several commodities and utilize renewable resources. Examples include integrated crop and livestock systems; organic production; orchard and annual crop intercropping; use of perennial, salt-tolerant grasses irrigated with saline drainage water on otherwise marginal land; and pastures improved by seeding beneficial plants such as legumes. Through reliance on biological processes to build healthy soils and support above- and below-ground biodiversity, diversified systems offer potential GHG emission reductions. Resilience to climate perturbations can also be achieved by spreading economic risks across multiple farm products and by relying on on-farm resources and biodiversity, with less dependence on synthetic fertilizer and pesticides to improve soil and crop health. Other environmental co-benefits can include more efficient use of water, improved water and soil quality, pest reduction or suppression, or enhancement of wildlife habitat and biodiversity. These systems have been shown to reduce soil nitrate and nitrous oxide emissions, and to increase carbon sequestration both in soils and in aboveground biomass. For example, frequent addition of various types of organic inputs increases labile and resistant soil carbon over a period of several years, so that soils exhibit more tightly coupled plant-soil nitrogen cycling; in turn, plant nitrogen demand is adequately met, but losses of nitrate are minimized. In another case, in an organic vegetable production system, annual use of cover crops over 6 years led to greater increases in microbial biomass carbon pools, and compost additions increased the measured soil organic carbon pool and microbial diversity, in comparison to a cover crop grown every fourth year. Many of these studies examined California organic farms where multiple practices are often stacked, such as combining organic soil amendments, integrating cover crops into crop rotation for year-round plant cover and reducing tillage. In addition, farmscaping with perennials on field margins and maintenance of vegetated riparian corridors sequester carbon in the soil and in the woody biomass of trees and shrubs.
Planting native woody species that tolerate drought (for hedgerows) or withstand water flux (in riparian corridors) is a way to ensure adaptation and growth over many decades. Diversified, multipurpose systems provide other co-benefits depending on the set of practices involved. Practices that increase soil carbon also improve soil structure, nitrogen-supplying power and water-holding capacity. For example, a practice like cover cropping can also suppress weeds, influence crop nutrition and quality, especially in perennial systems like wine grapes, and provide habitat for beneficial predators. Filter strips and riparian corridors can reduce soil erosion and thereby diminish contamination of surface water with valuable soil and nutrient resources, and with pathogenic microbes. Hedgerows have been shown to increase pollinators and other beneficial insects in California. Given the promise of multiple co-benefits, more types of California diversified systems deserve study, which would provide a better basis for metrics to evaluate their long-term contributions to climate and other goals. Intensive livestock operations, particularly the state’s large dairy sector, produce two-thirds of California’s agricultural GHG emissions, and thus are a primary target for state climate regulations as well as incentives for emission reduction. At the same time, policies should account for the already high levels of resource efficiency in the California dairy sector. A key climate policy concept is to avoid “leakage,” whereby strict climate policy to reduce emissions in one region causes increases in another. A recent comparison of the dairy sectors of the Netherlands, California and New Zealand documents that California dairies on average produce more milk per cow than dairies in the Netherlands, and more than 2.6 times as much as dairies in New Zealand, while operating under stricter environmental regulations. Currently, the Intergovernmental Panel on Climate Change recommends using a fixed emission factor for dairy operations that is based on gross energy intake, which does not take diet composition into consideration. Calibration of GHG models for California using dietary information will provide a more accurate basis for measuring progress than current IPCC values, and for assessing the potential benefits of different forage and feed practices on emissions.

DYCORS has been shown to perform better than a variety of popular surrogate optimization techniques.

Additionally, biomarkers of proliferation and cell health such as Pax7, MyoD, and Myogenin may be measured to improve the robustness of predictions and correlations across assays. None of these metrics will aid in optimization, however, unless a sufficient model of the relationship between cell growth, media cost, and overall process cost is considered. Therefore, a techno-economic model of the process is needed to tie the large-scale production process to bench-top measurements. Secondly, further “white-box” studies that focus on the metabolomics of the cell lines would be very useful in defining the upper and lower bounds and the important factors for these DOE studies. Developing robust cell lines adapted to serum-free conditions would also open up the design space, because very poorly growing cells are difficult to optimize within a DOE framework. In general, white-box or traditional studies act to constrain the complexity of future DOE studies, so they must be conducted in tandem with DOE. Experimental optimization of physical and biological processes is a difficult task. To address this, sequential surrogate models combined with search algorithms have been employed to solve nonlinear, high-dimensional design problems with expensive objective function evaluations. In this article, a hybrid surrogate framework was built to learn the optimal parameters of a diverse set of simulated design problems meant to represent real-world physical and biological processes in both dimensionality and nonlinearity. The framework uses a hybrid radial basis function/genetic algorithm with dynamic coordinate search over response surfaces, exploiting the strengths of both algorithms. The new hybrid method performs at least as well as its constituent algorithms in 19 of 20 high-dimensional test functions, making it a very practical surrogate framework for a wide variety of optimization design problems.

Experiments also show that the hybrid framework can be improved even more when optimizing processes with simulated noise. The design and optimization of modern engineering systems often requires the use of high-fidelity simulations and/or field experiments. These black-box systems often have nonlinear responses, high dimensionality, and many local optima. This makes them costly and time-consuming to model, understand, and optimize when simulations take hours or experiments performed in the lab require extensive time and resources. The first attempt to improve over experimental optimization methods, such as ‘one-factor-at-a-time’ and random experiments, was through the field of Design of Experiments. Techniques in DOE have been adapted to many computational and experimental fields in order to reduce the number of samples needed for optimization. These methods often involve performing experiments or simulations at the vertices of the design space hypercube. Full-Factorial Designs are arguably the simplest to implement, where data is collected at all potential combinations of the p parameters at each of l levels, requiring l^p samples in total. Even when l = 2, the number of experiments or simulations quickly becomes infeasible, so Fractional-Factorial Designs using l^(p−k) experiments for k ‘generators’ are often used to reduce the burden. While such designs are more efficient, they have lower resolution than full designs and confound potentially important interaction effects. Therefore, DOE techniques are often combined with Response Surface Methodology to iteratively move the sampling location, improve model fidelity as more data is collected, and focus experiments in regions of interest. Stochastic optimization methods such as Genetic Algorithms, Particle Swarm Optimization, and Differential Evolution have also been used to explore design spaces and perform optimization on both simulated and experimental data, often requiring fewer experiments than traditional DOE-RSM techniques. The quickly developing field of surrogate optimization attempts to leverage more robust modeling techniques, such as Kriging/Gaussian Process models, to optimize nonlinear systems.
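
As a minimal, hedged illustration of the sample-count argument above (not taken from the article), the snippet below enumerates a two-level full-factorial design and contrasts the l^p run count with the l^(p−k) runs of a fractional design; the factor count and number of generators are arbitrary examples.

```python
# Minimal sketch: run counts for full vs. fractional factorial designs.
from itertools import product

def full_factorial(levels_per_factor):
    """All combinations of factor levels: l^p runs for p factors at l levels each."""
    return list(product(*levels_per_factor))

p, l = 6, 2                                   # example: 6 factors, 2 coded levels (-1/+1)
design = full_factorial([(-1, +1)] * p)
print(len(design))                            # 2**6 = 64 runs for the full design

k = 2                                         # a fractional design with 2 generators
print(l ** (p - k))                           # 2**4 = 16 runs, at the cost of confounded interactions
```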

They often employ a stochastic, uncertainty-based, or Bayesian search algorithm to intelligently select new sample points to query for experimentation or simulation. Due to the variety of modeling techniques and search algorithms available, hybrid algorithms, which attempt to leverage each method’s strengths, have proliferated. These hybrid approaches usually involve taking ensembles of surrogate models and asking each surrogate for its best set of predicted query points. New queries are then conducted at these points, often weighted in favor of regions/surrogates with low sample variance or optimal response values. The drawback of many of these algorithms is that they are not always generalizable to design problems of diverse dimensionality and nonlinearity. A surrogate optimization algorithm is presented here, which uses an evolving RBF model and a hybrid search algorithm. This search algorithm selects half of its query points using a genetic algorithm truncated by a Euclidean distance metric to provide diversity among the suggested query points; this component is based on a neural network genetic algorithm developed for bio-process optimization, which has been shown to be more efficient than traditional DOE-RSM methods. The other half of the query points are selected using a dynamic coordinate search with response surface models (DYCORS), based on work developed for computationally expensive simulation. The performance of the NNGA-DYCORS hybrid algorithm is tested against NNGA and DYCORS separately. Further evaluation is performed to probe potentially useful extensions of the hybrid algorithm to address simulated experimental noise, to improve algorithm convergence over time, and to address cases in which certain groups of parameters have a greater influence on the response values than others. The NNGA algorithm is based on an RBF-assisted GA: the NNGA uses an RBF model to suggest points that are close to, but not directly on top of, predicted optima, using a truncated genetic algorithm (TGA).
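
The sketch below is one plausible reading of that split, not the authors' implementation: half of a batch of query points comes from a surrogate-ranked global candidate pool filtered by a normalized Euclidean distance threshold (NNGA-like), and half from Gaussian perturbations of the incumbent best point (DYCORS-like). The candidate count, distance threshold, and perturbation width are illustrative assumptions.

```python
# Hedged sketch of one infill step of a hybrid NNGA/DYCORS-style query selector.
import numpy as np

def select_queries(surrogate, X, Y, lo, hi, n_new=8, min_dist=0.1, sigma=0.05, seed=0):
    """surrogate: callable mapping an (n, dim) array to predicted responses."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1]

    # Global half: random candidates ranked by the surrogate, kept only if they
    # sit at least min_dist (in normalized coordinates) from points already chosen.
    cands = rng.uniform(lo, hi, size=(500, dim))
    cands = cands[np.argsort(surrogate(cands))]            # best predicted first
    chosen = []
    for c in cands:
        if all(np.linalg.norm((c - q) / (hi - lo)) > min_dist for q in chosen):
            chosen.append(c)
        if len(chosen) == n_new // 2:
            break
    global_half = np.array(chosen).reshape(-1, dim)

    # Local half: Gaussian perturbations of the best point observed so far.
    x_best = X[np.argmin(Y)]
    local_half = x_best + rng.normal(0.0, sigma * (hi - lo), size=(n_new // 2, dim))
    local_half = np.clip(local_half, lo, hi)

    return np.vstack([global_half, local_half])
```

In the algorithm described here, the global half actually comes from a truncated genetic algorithm and the local half from DYCORS's dynamically adjusted coordinate perturbations; the sketch only mirrors the structure of the split.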

One advantage that GAs have over gradient-based methods is that their randomness allows them to efficiently explore both global and local regions of optimality. This makes them very attractive for an optimization framework attempting to look for global optima while facing the uncertainty associated with a sparsely explored parameter space, and thus untrustworthy RBF models. This framework is shown in Figure 2.1 and the TGA is illustrated in Figure 2.2. First, a database of inputs X and outputs Y of N_0 total queries is collected. An RBF model is constructed using the training regime discussed in Section 2.2.1. Next, a TGA is run using a randomly initialized population of potential query points with the goal of minimizing the RBF-predicted output. In each iteration of the TGA, the queries expected to perform best survive a culling process and have their information propagated into the next iteration by a pairing, crossover and random mutation step. After each iteration, the best predicted query is recorded. When the average normalized Euclidean distance between the TGA’s current predicted best query and its next N − 1 predicted best queries, d_av,norm, is less than or equal to the critical distance parameter CD = 0.2, the TGA is considered to be converged and submits this list of N best points for potential querying. This TGA is run a total of k_max = 4 times, and its query selections from all rounds of the TGA are queried to give the next set of data for simulation or experiments. The NNGA-DYCORS algorithm was tested against its constituent algorithms, NNGA and DYCORS, individually. Examining the performance of the constituent algorithms, the NNGA algorithm consistently works well in high dimensions, while the DYCORS algorithm performs better in low dimensions. This was the case both over time and at the final optimal query points. Given these differences in performance, it stands to reason that a hybrid approach would provide a sensible route to a more robust algorithm that could be used on a wider variety of dimensions. As seen in Figure 2.3, the hybrid NNGA-DYCORS often outperforms or performs similarly to the next best constituent algorithm in each experiment. This is reinforced by the data in Tables A.1 and A.2, where the final optimum of the hybrid NNGA-DYCORS is less than or equal to the final optimum of the next best constituent algorithm in 19 of 20 experiments. An optimum may be considered better if its upper bound is less than the mean of another algorithm’s optimum.
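
As a concrete, hedged reading of the TGA stopping rule described above (the exact normalization is an assumption here), the check below declares convergence once the average normalized Euclidean distance between the current best predicted query and the next N − 1 best queries falls to CD = 0.2 or below.

```python
# Sketch of the d_av,norm convergence test for the truncated GA.
import numpy as np

def tga_converged(best_queries, lower, upper, cd=0.2):
    """best_queries: (N, dim) array sorted from best to worst predicted output."""
    best, rest = best_queries[0], best_queries[1:]
    # Normalize each coordinate by the design-space width so the test is scale-free.
    diffs = (rest - best) / (upper - lower)
    d_av_norm = np.mean(np.linalg.norm(diffs, axis=1))
    return d_av_norm <= cd
```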

While this comparison heuristic is a rough approximation of the algorithms’ relative performance, it strongly indicates that the NNGA-DYCORS is robust on a wide variety of problem sets and dimensions. In intermediate cases, the NNGA-DYCORS continued to outperform or perform as well as its most competitive constituent algorithm, showing its usefulness in design optimization problems where it is not obvious a priori what dimensionality counts as ‘high’ or ‘low’. To test the effect of random noise on the ability of the surrogate optimization algorithms to find optimal parameters, a random noise term e was added to the output of the simulation. It is common practice, especially with noisy, low-data, and data-sparse models, to improve out-of-sample generalizability through model selection procedures such as cross-validation to avoid overfitting. To address this stochasticity in these experiments, a hyperparameter optimization loop for the number of nodes n_nodes in the RBF model was added to the NNGA-DYCORS algorithm, where cross-validation over the database was used to select the optimal n_nodes. In this case we deliberately trade higher bias for lower variance to reduce overfitting. As can be seen in Figure 2.4, application of the node optimization scheme improved the learner’s performance over the regular scheme in nearly all cases. It should be noted that in these experiments the linear tail of the RBF was excluded, so Equation 2.3 was modified to Φλ = Y and solved. There is a seemingly infinite number of modeling techniques, search optimization algorithms, and initialization/infill strategies in the literature to facilitate optimizing expensive objective functions. However, the characteristics of the experimental system and design space are never really known a priori, so having an algorithm that is more efficient than traditional methods and able to work with a wide variety of problems is advantageous. Therefore, the goal of this article was to develop a surrogate optimization framework that could be successfully applied to test problems with a wide range of dimensionality and degrees of nonlinearity. The NNGA-DYCORS algorithm runs two surrogate optimization algorithms in parallel. The NNGA uses a Euclidean distance-based metric to truncate a genetic algorithm, whose best members are distilled by k-means clustering into a final query list. This acts as a global optimization process because the internal genetic algorithm searches over the entire design space. The DYCORS algorithm perturbs the best previous queries using a dynamic Gaussian distribution, where the perturbations are adjusted based on cumulative success and the total number of queries in the database. Thus, DYCORS acts as a local search method in the region defined by a Gaussian centered at its best queries. Both arms of the hybrid algorithm use an RBF for prediction. The result was that the NNGA-DYCORS hybrid algorithm was statistically equal to or outperformed its constituent algorithms in 19 of 20 test problems. This demonstrates the robustness of the NNGA-DYCORS, as it performs as a best-case scenario on a variety of test problem dimensions and shapes. This is important because, in real experimental problems, one does not know the shape of the surface a priori, highlighting the utility of a generalizable optimization framework such as the NNGA-DYCORS.
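
A minimal sketch of the node-selection idea follows; it assumes a Gaussian kernel, centers drawn as a random subset of the database, and a least-squares solve of Φλ = Y with the linear tail omitted, none of which is specified in detail above. It simply cross-validates candidate node counts and keeps the one with the lowest held-out error, trading bias for variance as described.

```python
# Hedged sketch: choosing the number of RBF nodes by cross-validation.
import numpy as np

def fit_rbf(X, Y, centers, eps=1.0):
    """Gaussian RBF fit: solve Phi @ lam = Y in the least-squares sense (no linear tail)."""
    def phi(A):
        return np.exp(-eps * np.linalg.norm(A[:, None, :] - centers[None, :, :], axis=2) ** 2)
    lam, *_ = np.linalg.lstsq(phi(X), Y, rcond=None)
    return lambda Xq: phi(Xq) @ lam

def select_n_nodes(X, Y, candidates=(5, 10, 20, 40), n_folds=5, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, n_folds)
    min_train = len(X) - max(len(f) for f in folds)
    best_n, best_err = None, np.inf
    for n_nodes in candidates:
        if n_nodes > min_train:
            continue                      # not enough training points for this many centers
        errs = []
        for f in folds:
            train = np.setdiff1d(idx, f)
            centers = X[rng.choice(train, size=n_nodes, replace=False)]
            model = fit_rbf(X[train], Y[train], centers)
            errs.append(np.mean((model(X[f]) - Y[f]) ** 2))
        if np.mean(errs) < best_err:      # fewer nodes -> higher bias, lower variance
            best_n, best_err = n_nodes, np.mean(errs)
    return best_n
```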
In addition, it is never clear in practice what constitutes a ‘high’- or ‘low’-dimensionality design problem, so an algorithm that performs well in arbitrary dimensions should have large practical value. The DYCORS algorithm was already shown to be competitive with other heuristics, and the NNGA was demonstrated to be significantly more efficient than traditional experimental optimization methods. It stands to reason that this hybrid framework should extend the usefulness of both algorithms to test problems of arbitrary dimensionality and degree of nonlinearity. Using a node optimization scheme to reduce model variance during query selection improves hybrid algorithm performance, especially for noisy surfaces. Practitioners should therefore consider built-in regularization to avoid overfitting when dealing with expensive, data-sparse and noisy systems. Optimizing the number of nodes was specific to this RBF variant, but the optimization loop in Section 3.2 could be applied to any model hyperparameter. In the next set of experiments, making the NNGA-DYCORS convergence parameters dynamic during query selection did not improve performance. This indicates that it may not be fruitful to pursue extensive algorithm parameter adjustments or heuristics for this algorithm, and that the outcome shows little sensitivity to the selection of algorithm convergence parameters, unlike the results in previous articles on the subject. Finally, to mimic typical engineering scenarios where response sensitivity varies with the inputs, the test functions were scaled with a sensitivity vector.
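
To make that last point concrete, here is a hedged sketch of one simple way to impose a sensitivity vector on a test function (the exact scaling used in the experiments is not given above): each input is multiplied by a fixed sensitivity before evaluation, so some parameter groups dominate the response.

```python
# Sketch: scaling a test function with a sensitivity vector.
import numpy as np

def with_sensitivity(f, sensitivity):
    """Return g(x) = f(s * x) for a fixed sensitivity vector s."""
    s = np.asarray(sensitivity, dtype=float)
    return lambda x: f(s * np.asarray(x, dtype=float))

sphere = lambda x: float(np.sum(np.square(x)))
g = with_sensitivity(sphere, [10.0, 10.0, 0.1, 0.1])
print(g([1.0, 1.0, 1.0, 1.0]))   # 200.02 -- dominated by the high-sensitivity inputs
```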