There are also limitations in the measurement of the variables we used for analysis

The exposure variable as well as the covariates were all measured using self-reported survey data and are subject to recall bias, which has been well described for exposure and disease studies. We limited recall bias in the survey by anchoring the past in memorable events such as recent rainy and dry seasons as well as holidays. While survey respondents sometimes found it difficult to precisely quantify household land area, the evidence for recall bias in agricultural surveys in sub-Saharan Africa is limited. Additionally, infection outcomes are limited by the sensitivity and specificity of available diagnostic methods: urine filtration for S. haematobium and duplicate Kato-Katz examination of two stool samples for S. mansoni. The detection methods used for S. haematobium are more sensitive than those used for S. mansoni, and the low sensitivity of diagnostic techniques used to detect S. mansoni infections, especially low-intensity infections, may have contributed to the inconclusive results we observed for this parasite species. Our findings add a new dimension to the notion that the benefits of water resources development for food security are offset by infectious disease. While we cannot speak to the dam's net impact, we find that schistosomiasis risk may be a result of land use for subsistence livelihoods as well as landscape-level environmental change. Residents of the lower basin of the Senegal River face an unfortunate trade-off where the prevailing economic activity may make them sick.

Every bio-process in which cells are the final product or are used in the production process requires suitable culture conditions for cell growth and product quality. In the rapidly growing cellular agriculture/cultivated meat industry, where cells are grown for consumption to replace carbon-intensive and often unethical animal agriculture, cost-effective media has been identified as the most critical aspect of scale-up and commercialization.

Optimizing these conditions is difficult due to the large number of media components and the nonlinear, interacting effects between cells, medium, matrix material, and reactor environment. Typically, culture media used for processes in cellular agriculture consist of a basal medium of glucose, amino acids, vitamins, and salts supplemented with fetal bovine serum (FBS) for improved cell survival. FBS is an undefined, animal-derived serum consisting of proteins, hormones, and other large-molecular-weight components, and it contributes substantially to the cost of media. Even when enriched with additional growth factors or FBS, media is often far from optimal for all cell types and requires adaptation and/or optimization, which is difficult for media mixtures with >30 components, as is common in cell culture. To manage this complexity, design-of-experiments (DOE) methods are often employed, in which factors are set to user-specified values and outputs are measured. These DOE designs are arranged so that statistically meaningful correlations can be found in fewer experiments than with intuition, "one-factor-at-a-time" sequences, or random designs. A more advanced approach is to use sequential, model-based DOEs, such as a radial basis function or Gaussian process model, combined with an optimizer/sampling policy to automatically select sequences of optimal designs. These approaches are often more efficient than traditional DOE, optimizing systems in fewer experiments, and they allow more natural incorporation of process priors, measurement noise, probabilistic output constraints and constraint learning, and multi-objective, multi-point, and multi-information source designs. Even with these methods available, limitations still exist. In previous work, we applied a machine learning approach to optimize complex media design spaces but had limited success due to the difficulty of measuring cell number for multi-passage growth.
Therefore, in this study, we utilized a multi‐information source Bayesian model to fuse “cheap” measures of cell biomass with more “expensive” but higher quality measurements to predict long‐term medium performance.
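The fusion idea can be sketched with a small Gaussian process that conditions jointly on a densely sampled, biased low-fidelity assay and a few high-fidelity measurements, coupling the two sources through a linear inter-source correlation. The kernel, the correlation strength rho, and the noise level below are illustrative assumptions, not the model actually fitted in this study.

```python
import numpy as np

def se_kernel(x1, x2, lengthscale=0.3, variance=1.0):
    """Squared-exponential kernel between 1-D input arrays."""
    d = np.subtract.outer(x1, x2)
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def multi_is_posterior(x_lo, y_lo, x_hi, y_hi, x_star, rho=0.8, noise=1e-3):
    """Posterior mean of the high-fidelity output at x_star, fusing both IS.

    Uses a simple intrinsic-coregionalization model where
    cov(f_hi, f_lo) = rho * k(x, x'); rho and the hyperparameters are
    illustrative, not the study's fitted values.
    """
    X = np.concatenate([x_lo, x_hi])
    y = np.concatenate([y_lo, y_hi])
    n_lo = len(x_lo)
    # Inter-source correlation: 1 within a source, rho across sources.
    B = np.ones((len(X), len(X)))
    B[:n_lo, n_lo:] = rho
    B[n_lo:, :n_lo] = rho
    K = se_kernel(X, X) * B + noise * np.eye(len(X))
    # Cross-covariance between (high-fidelity) test points and all data.
    b_star = np.ones((len(x_star), len(X)))
    b_star[:, :n_lo] = rho
    k_star = se_kernel(x_star, X) * b_star
    return k_star @ np.linalg.solve(K, y)
```

Because the cheap assay is correlated with, but biased relative to, the expensive one, the posterior follows the sparse high-fidelity data where it exists and borrows the low-fidelity trend elsewhere.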

We refer to the simpler and cheaper assays as "low-fidelity" IS and the more complex and expensive assays as "high-fidelity" IS. While not always predictive of long-term growth, these lower-fidelity assays are at least correlated with cell health and can help identify interesting regions of the design space for further study with the high-fidelity IS. We used this model, with Bayesian optimization (BO) tools, to optimize a cell culture medium with 14 components while minimizing the number of experiments, optimally allocating laboratory resources, and building process knowledge to improve our optimization scheme and model. In Section 2 we discuss the computational and experimental components of this BO method. In Section 3 we present the results of the BO method in comparison to a traditional DOE method, followed by Section 4, where we demonstrate the importance of fusing multiple sources of information to obtain relevant process knowledge and/or optimization results.

The system under consideration was the proliferation of C2C12 cells. These cells are immortalized muscle cells with metabolism and growth characteristics similar to other adherent cell lines useful in the cellular agriculture industry. Cells were stored in a 70% DMEM, 20% FBS, 10% dimethyl sulfoxide freeze medium at −196°C until thawed. Vials were thawed to 25°C and the freezing medium was removed by centrifugation at 1500 g for 5 min. The centrifuged cell pellet was resuspended in 17 ml of DMEM with 10% FBS and placed on 15 cm sterile plastic tissue culture dishes. Cells were incubated in a 37°C and 5% CO2 environment. After 24 h the medium was removed, the culture dish was washed with phosphate-buffered saline (PBS), and fresh DMEM with 10% FBS was introduced. After an additional 24 h, cells were harvested using TrypLE solution, diluted in PBS, and counted using a Countess II with trypan blue exclusion and disposable slides.
The process of removing cells from a plate, counting, and re‐plating them with fresh medium is called sub-culturing or passaging.

How well the C2C12 cells survive and grow after passaging is indicative of their long-term potential in a large cellular agriculture process. The design space comprised the components and minimum/maximum concentrations listed in Table 1. These components were chosen because they are often used to supplement standard DMEM to improve cell growth; this represents a reasonable test case for the industrial application of these multi-IS BO methods to the cellular agriculture industry. The composition of standard DMEM is shown in Table 3 and should not be confused with the base DMEM "supplement," which contains only amino acids, trace metals, salts, and vitamins and none of the other 14 components. pH and osmolarity were not controlled in this study, so they act as latent variables.

Production-scale cellular agriculture processes will require >10 passages of cell growth, so optimizing growth based on single-passage information is not adequate. However, multi-passage growth assays are difficult and expensive to measure, and even more difficult to optimize over many components. We managed this complexity by coupling long-term cell number measurements with simpler but less valuable rapid-growth chemical assays in murine C2C12 cultures as a model system for cellular agriculture applications, capturing a more holistic model of the process. We combined this with an optimization algorithm that efficiently allocates laboratory resources toward solving argmax_x D(x) for a desirability function D(x) that incorporates both cell growth and medium cost. This resulted in a 38% reduction in experimental effort, relative to a comparable DOE method, to find a medium 227% more proliferative than the DMEM control at nearly the same cost.
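The exact form of the desirability function is not reproduced here; as a sketch, a Derringer-style weighted geometric mean of a growth score and a cost score captures the stated trade-off. The weight w and the reference values are hypothetical.

```python
import numpy as np

def desirability(growth, cost, growth_ref=1.0, cost_ref=1.0, w=0.8):
    """Hypothetical desirability D(x) trading proliferation against cost.

    growth: fold-change in cell number relative to the DMEM control.
    cost:   medium cost relative to the DMEM control.
    The geometric-mean form and weight w are illustrative only.
    """
    d_growth = np.clip(growth / growth_ref, 0, None)   # more growth -> higher
    d_cost = np.clip(cost_ref / cost, 0, None)         # cheaper -> higher
    return d_growth ** w * d_cost ** (1 - w)
```

Under this sketch, a medium 2.27 times as proliferative as the control at a similar cost scores well above the control, which is the behavior the optimizer rewards.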
As the longer-term passaging study suggests, our Passage 2 objective function and IS were well calibrated to mimic the complex industrial process of growing large batches of cells over many passages, with Passage 4 cell numbers well predicted by this objective function. The reasons for the success of the BO are myriad. The BO method iteratively refines a single process model to improve certainty in D(x)-optimal regions, whereas the DOE relies on a series of BB designs in which the older data sets are ignored because they fall outside the optimal factor space. The BO also used a variety of IS, whereas the DOE used only a single low-fidelity AlamarBlue metric. Looking at Figure 8c, the AlamarBlue and LIVE measurements tended to cluster around the point y = 1, making it difficult to distinguish between high-quality and low-quality media. This may be due to the deviation from linearity of the %AB and F530 metrics at high biomass. The BO method also refined its multi-IS model over the entire feasible design space, allowing it to take advantage of optimal combinations and concentrations of all 14 components over the entire domain, whereas the DOE needed to reduce the design and factor spaces to limit the number of experiments required, and may have identified the wrong optimal boundary locations, resulting in suboptimal experimental designs. The BO method was also able to leverage information about process uncertainty to improve the model in poorly understood regions of the design space, whereas the steepest-ascent method used by the DOE chased improved D(x) with little regard for overall noise or experimental errors.

This was worsened by the sensitivity of the polynomial model to random inter-batch fluctuations in %AB, which may have driven the DOE to suboptimal media. Note that the success of our BO method should not be taken as generic superiority over all potential instantiations of DOE or over commercial media used for C2C12 growth. While the BO method worked well at solving the experimental optimization problem, the multi-IS GP's accuracy was limited to highly sampled regions of the design space, limiting the efficacy of sensitivity analysis. This was a conscious decision to trade off post hoc analysis for sampling media with high desirability D(x). Accuracy was also limited by the low amount of data N available relative to the large dimensionality p, which is inherently the case in complex biological experiments where each batch of q experiments takes >1 week to evaluate. Finally, the hyperparameters θ* used in the multi-IS squared exponential kernel were deliberately regularized with prior distributions to smooth the posterior of the prediction μ(x). Regularization may have diminished the quality of the inter-IS correlations; the model hyperparameters ignored features where IS differed in favor of a simpler correlative structure to explain the data. This is seen in Figure 8b,c, where the kernel evaluations show nearly equal inter-IS correlative strength for most IS used. This may have "squished" or ignored features that could have provided additional information, but at the cost of sampling the design space too widely, again a deliberate choice of model skepticism toward outliers. Even with these limitations, the BO method clearly performs well on media optimization systems relevant to cellular agriculture, that is, those with multiple and potentially conflicting information sources with varying levels of measurement difficulty. The media resulting from the BO algorithm supported significantly more C2C12 cell growth with only a small increase in cost.
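The hyperparameter regularization described above can be sketched as adding a log-normal prior penalty on the kernel lengthscale to the usual GP negative log marginal likelihood, so that minimization yields MAP rather than maximum-likelihood hyperparameters. The prior location and scale below are illustrative, and a single-output SE kernel stands in for the full multi-IS kernel.

```python
import numpy as np

def se_kernel(X, lengthscale, variance):
    """Squared-exponential kernel matrix for 1-D inputs."""
    d2 = np.subtract.outer(X, X) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def neg_log_posterior(theta, X, y, noise=1e-3,
                      prior_mu=np.log(0.3), prior_sigma=0.5):
    """GP negative log marginal likelihood plus a log-normal prior on the
    lengthscale; minimizing this gives MAP hyperparameters theta*."""
    log_l, log_v = theta
    K = se_kernel(X, np.exp(log_l), np.exp(log_v)) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    nll = (0.5 * y @ alpha + np.log(np.diag(L)).sum()
           + 0.5 * len(X) * np.log(2 * np.pi))
    # The prior shrinks the log-lengthscale toward prior_mu, smoothing
    # the posterior mean at the cost of inter-IS detail.
    penalty = 0.5 * ((log_l - prior_mu) / prior_sigma) ** 2
    return nll + penalty
```

A tighter prior (smaller prior_sigma) penalizes lengthscales far from the prior mode more heavily, which is exactly the smoothing-versus-detail trade-off discussed above.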
This algorithm performs better than traditional DOE in this case, especially in incorporating critical data from growth after multiple passages in an affordable manner. With these results, it should be possible to implement this type of experimental optimization algorithm in other systems of importance to cellular agriculture and cell culture production processes with difficult-to-measure output spaces, including the optimization of serum-free media for cell growth and for differentiation.

Water management is made more challenging by the effects of climate change, population growth, and severe competition for water among the municipal, agricultural, industrial, and energy sectors. Accordingly, integrated water resources management focuses on water demand and supply management to achieve sustainable development. Water is a scarce resource essential for societal survival and functioning. This makes the application of integrated water resources management essential to cope with scarcity and the challenges posed by climate change and increased water demand by expanding economies. A conceptual framework combining integrated landscape management and institutional design principles was applied to analyze cooperation initiatives involving water suppliers and agricultural stakeholders around agricultural wastewater. A national drought risk assessment for agricultural lands, taking into account the complex interaction between different risk components, was presented. The research showed that crop diversification, crop pattern management, and conjunctive water management can be effective in improving agricultural water use.

Each of the four output measures has strengths and weaknesses

Many have predicted that such nations must bear much of the responsibility to produce the varieties that will feed the world in the coming decades. The large number of breeding centers in China, the decentralized nature of its research system, and the great heterogeneity among its centers offer a unique research opportunity to identify the relationship between varietal production, the size of an institute, and the mix of crops in its breeding program. Finally, the results are of interest to China's leaders, who recently announced a new round of research reforms in agriculture. To make our analysis tractable, and because of budgetary constraints, we limited the scope of our study to the breeding institutes of two crops within northern China. We chose crop-breeding institutes because crop breeding has been central to the growth of agricultural productivity in China as well as in the world. In China, crop breeding takes the largest proportion of resources in its agricultural research system. Crop-breeding institutes were chosen also because their research outputs and their consequences can be measured relatively easily, compared with, say, less applied research, research oriented towards natural resources management, or research leading to disembodied technological change. Wheat and maize are two of the most important staple crops in China, ranking second and third, respectively, after rice in terms of sown area and production. Wheat and maize production occupy somewhat overlapping areas, and a large share of China's wheat and maize breeding programs are located in the same institutes and similar regions, which allows us to measure economies of scope.

The small and highly significant coefficients of economies of scale imply a significant cost saving associated with expanding the scale of breeding institutes.
Such results are robust.

Most of the data used in this study were collected by the authors during 12 months of field work in China that began in the summer of 2001. Enumerators assembled panel data from 46 wheat and maize breeding institutes covering the years 1981 to 2000.

The sample institutes include 40 prefectural-level institutes and 6 provincial-run institutes, selected at random from a comprehensive list of prefectural and provincial institutes in seven major wheat and maize provinces in northern China. Thirty-two of the sample institutes produce both wheat and maize varieties. Four institutes specialize in producing wheat varieties. The other ten produce only maize varieties. To collect the data, teams of enumerators visited each institute for periods of up to one week and completed a set of questionnaires filled out by accountants and by enumerators. In general, the data cover four broad categories: income, costs, research output, and other characteristics of the institute. Since the data were not kept by a single department in any of the institutes, a great deal of cross-checking was needed to make the data consistent among the various departments. For example, the research coordination department typically kept information on income and expenditures. Personnel departments provided the data on salaries, educational accomplishments, and other information about current and past staff. Breeders kept the best information on the varieties they produced and the methods that they used in their breeding efforts. To examine cost efficiency, information is needed on two key variables, costs and output, especially since there is an a priori reason to believe that the small scale of many institutes may be an important source of inefficiency. In using our survey data to define measures of these key variables, we had to deal with several methodological issues. As an economic activity, crop breeding has several characteristics that make it relatively hard to measure output and to match measures of output to the costs associated with those outputs.
These characteristics include the long lags between the time when costs are incurred and when the resulting output is realized, uncertainty about what is an appropriate measure of output both conceptually and in practice, and the fact that output itself is uncertain when costs are incurred.

Our measure of the total variable costs of each crop's breeding activities includes the institute's operating expenses, such as salaries, project administration, and other direct operating expenses. For cost categories that cannot be matched directly to a breeding project, we assigned a share of the costs of each category to breeding according to the number of full-time breeding staff. We deflated total variable costs by a provincial consumer price index, putting our cost figures into real 1985 terms. We assume that the products of China's wheat and maize variety "factories" are the varieties that the breeders produce that are adopted by farmers. To measure output, we collected information on the number of varieties produced by the research institutes, the area sown to the varieties, and the trial yields of each variety. With these data, we constructed four measures of research output: the number of varieties released by the institute and sown in the field during a given year; the number of varieties weighted by the trial yields of each variety; the total area sown to all of the institute's varieties during a given year; and the number of varieties weighted by sown area and trial yields.
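The four output measures can be made concrete with a small sketch. The variety records and the reference yield used to normalize the yield weights are made-up illustrations, since the text does not specify the exact weighting formula.

```python
# Hypothetical variety-level records for one institute in one year.
varieties = [
    {"name": "W1", "trial_yield": 4.2, "sown_area": 12.0},  # t/ha, 1000 ha
    {"name": "W2", "trial_yield": 4.8, "sown_area": 30.0},
    {"name": "W3", "trial_yield": 4.5, "sown_area": 5.0},
]

def output_measures(records, ref_yield=4.5):
    """Compute the four research-output measures; each variety's yield
    weight is its trial yield relative to an assumed reference yield."""
    n = len(records)
    yield_weighted = sum(r["trial_yield"] / ref_yield for r in records)
    area_weighted = sum(r["sown_area"] for r in records)
    yield_area_weighted = sum(
        r["trial_yield"] * r["sown_area"] for r in records
    ) / ref_yield
    return {
        "n_varieties": n,
        "yield_weighted": yield_weighted,
        "area_weighted": area_weighted,
        "yield_area_weighted": yield_area_weighted,
    }
```

Because trial yields vary little around the reference, the yield-weighted count stays close to the raw count, while the area-weighted measures reflect farmer adoption.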

Although it is the most readily measured, the obvious flaw with the number of varieties is that it does not take into account any quality characteristics of each variety, either yield or its other characteristics. Yield-weighted output accounts for the relative productivity of a variety in pure output terms. However, such a measure still leaves out all other quality characteristics, which an earlier study shows may be highly valued by farmers. For this reason, our third measure, area-weighted output, should be superior to the other two measures. If farmers value the characteristics in a variety, whether high yields or some other characteristic, they demonstrate their preference by adopting the variety. The last measure, yield-area-weighted output, combines the second and third measures. Since the variation in trial yields is small, the correlations between the third and fourth output measures are high. Hence, we would not expect much difference to result from using one versus the other. One special feature of crop variety production is the significant time lag between when costs are incurred in a breeding research program and when the resulting research output is realized. This issue is commonly discussed in studies of the returns to agricultural R&D, especially in relation to the specification of econometric models relating agricultural productivity to research expenditures. In the present setting, the lag between investment and output has some further implications, akin to those that arise more generally in agricultural production economics, associated with biological lags. In microeconomic theory texts, the firm manager first chooses an output level and then determines the cost-minimizing combination of inputs that will produce that output at minimum cost. The crop breeding institute's director does not have that luxury, because the output from today's investment is uncertain and will not be known for many years.
As an approximation to this problem of decision-making under uncertainty, we might suppose that the director seeks to minimize the institute's cost based on current expectations of the output that will be produced in the future as a result of the current research expenditures. Unfortunately, we cannot observe or measure, ex post, such expectations. One option is to use the output that was actually produced from the expenditures as a proxy for those expectations, but the problem remains of matching actual outputs to particular expenditures.

To deal with this problem empirically, we defined an average research lag to represent the number of years between the time when a breeding project officially begins and the time when a variety is released for commercial extension to farmers' fields. Using this defined lag length, we modeled the cost of variety production as a function of the research output produced after a certain lag. To find the length of the lag, we designed a section of the questionnaire asking breeders in each of the 46 institutes to estimate the average lag length for each crop. Based on the data we collected, the average lag length was 5.3 years for wheat and 4.5 years for maize. In our base model, we used a 5-year lag for both wheat and maize variety production. However, we also tried different lag lengths to check the robustness of our results.

China's agricultural research system has produced a steady flow of crop varieties in the past. On average, in each year during the period 1982-1995, China's farmers grew 200 to 300 wheat varieties and 130 to 180 maize varieties in their fields. However, the number of new varieties being produced by research institutes varied significantly over time and across institutes. Based on our survey, 141 wheat varieties and 155 maize varieties were produced by our sample institutes during the period 1985-2000. Nineteen percent of the wheat varieties were developed by provincial institutes. The rate of production of new wheat and maize varieties increased over time. For example, prefectural maize institutes produced 34 maize varieties during 1985-1990, 47 during 1990-1995, and 74 during 1995-2000. The number of wheat varieties created and commercialized by the sample institutes rose from 31 in 1985-1990 to 55 during each subsequent period. The number of varieties, however, varies sharply among institutes. For example, the Henan provincial wheat institute produced 12 wheat varieties from 1985 to 2000.
The Mianyang prefectural crop breeding institute in Sichuan produced 14 wheat varieties. In contrast, 24 of the 36 sample wheat institutes produced fewer than 5 varieties. In fact, three wheat institutes did not produce a single variety during the entire 15-year sample period. Maize variety production also varies greatly among the sample institutes.

In the same way that output varies across time and space, so does total cost. On average, the annual real total variable cost of the breeding program per institute for our sample of wheat institutes increased from 24,000 yuan to 38,000 yuan between 1981 and 2000. Similarly, the average annual total variable breeding cost for our sample of maize institutes rose from 38,000 to 53,000 yuan. The total cost of wheat and maize breeding, however, varies greatly among institutes. For example, the average provincial institute invested five times more in wheat breeding and about six times more in maize breeding than the average prefectural institute did. When comparing prefectural breeding stations, the total cost of wheat breeding in one institute could be more than three times that of the average prefectural institute. The Dandong prefectural institute in Liaoning spent five times more than the average maize-breeding institute did. The average cost of variety production also varies from institute to institute and can be seen to vary systematically with research output. To compare costs and output, we have to account for the research lag. In the analysis that follows, research output is the annual mean of five years' total research output from one of three five-year periods: 1985-1990, 1991-1995, and 1996-2000. The average annual cost associated with this output is the annual mean of five years' total cost, lagged by five years. Therefore, the corresponding three five-year periods of cost are, respectively, 1980-1985, 1986-1990, and 1990-1995. Unlike total costs, average costs fall as the institutes produce more varieties.
For wheat, the cost per variety falls from 152,000 yuan for breeding institutes that produce only one variety to 60,000 yuan for those that produce more than four varieties. Similar patterns can be seen in the data when using area-weighted output. A plot of the data reveals a distinct L-shaped relationship between average cost and the size of research output. No matter what measure of output is used, or for what crop, as research output increases, the average cost of breeding research falls. The L-shaped relationship also is robust, holding over time and over institutes. The sharp fall in the average cost of breeding as an institute's output rises suggests that China's wheat and maize research institutes are producing in an output range with strong economies of scale, such that efficiency might be increased by expanding the scale of production of China's wheat and maize research institutes.
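The lag accounting and the falling average cost described above can be sketched as follows. The cost and output figures are hypothetical, and the 5-year lag follows the base model in the text.

```python
def align_cost_output(costs_by_year, output_by_year, lag=5):
    """Pair each year's research output with the variable cost incurred
    `lag` years earlier, per the ~5-year breeding lag."""
    pairs = []
    for year in sorted(output_by_year):
        cost_year = year - lag
        if cost_year in costs_by_year:
            pairs.append((year, costs_by_year[cost_year],
                          output_by_year[year]))
    return pairs

def average_costs(pairs):
    """Average cost per unit of (lagged) output for years with output."""
    return {year: cost / output for year, cost, output in pairs if output > 0}

# Hypothetical series (thousand 1985 yuan; varieties released).
costs = {1985: 120.0, 1990: 150.0}
output = {1990: 2, 1995: 5}
avg = average_costs(align_cost_output(costs, output))
```

With these made-up figures, the institute producing more varieties has a lower average cost per variety, mirroring the L-shaped relationship in the data.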

We find striking evidence of strong economies of scale in crop breeding in China

The role and aim of food experiences are, therefore, not necessarily only to provide customers with fresh produce but also to instill and create a sense of community for both the citizen and the consumer, who are no longer viewed as two separate categories. This historical division of consumer and citizen is rather questionable, and in recent literature the meanings of consumption, and thus the use of things, are rightly treated with much seriousness, as the full meanings and uses of objects to the consumer are more fully unfolded and understood. This is done even to the point where the object makes the human actors involved with its transformation its subjects. This can be seen in a historical perspective, or using present research methodologies of nonhuman agency. Event studies have also recently emerged as an original field of investigation separate from tourism and marketing, traditionally the most prominent areas of event research, although celebrations, events, and exhibitions, in all their forms, have been used extensively within anthropology and sociology for multiple purposes. Today, there are entire dissertations focusing on food events, including analyses of different foodscapes. The meanings of the "festive and/or affective foodscape" in these last writings are multiple, and their influence on both individuals and structures should not be underestimated with regard to community building and branding. These works show how certain food festivals thrive in some communities but not in others, and they document the continuing importance of food festivals, both internally and externally, for the hosting communities, as well as their branding of, and economic potential for, the hosting city. Food events are legitimized by both social arguments of community building and economic arguments relating more to branding and economic benefits for participating cities.
But it is not necessarily the events' potential to unite all these well-intended societal/economic intentions, be they based in community, communicative and economic welfare, and/or a reassuring reconciliation between the roles of the consumer and citizen; any surge in planned celebrations and events could also be a sign of what has been labelled a form of neo-localism.

A localism that, paradoxically, might have arisen due to the removal of spatial barriers and the emergence of a globalised society: "The elaboration of place-bound identities has become more rather than less important". Place, and its celebration, is therefore pursued, valued, and marketed more, exactly because its significance, in the traditional sense, is actually vanishing. When you can go anywhere, you want to be somewhere, and somewhere that is perceived as "real." It should be noted that this form of boosterism using events is not a new phenomenon. In an American context, agricultural celebrations and fairs have been used extensively to promote rural places. These events even served as "feeders" of exhibition materials and activities to the World Fairs, which, arguably, together with the Olympics, have been the events to host for any international metropolis, or those seeking to become one, through the late nineteenth century and most of the twentieth century, and to which we will return. There are several explanations and theories as to the rise of the importance of experiences in most developed countries. Pine and Gilmore mention the structural development whereby people, in general, have accumulated more expendable income to spend on experiences overall. Experiences and events can therefore be seen as a transitory symptom/symbol of the move from an industrial society to a knowledge/service society. This development of an experience economy is also linked to the commercialization of traditions and the "glocalization" of culture, coupled with a general broadening in people's cultural orientation, where individuals are not necessarily limited to one way of life but can throughout their lives experience and/or consume many different lifestyles. It is sometimes suggested that social interaction and related experiences are on the decline in contemporary society.
The bedevilled individualization is often accused of being the main perpetrator of this decline, though studies suggest that neither sociability nor its associated experiences and events are under threat. A larger study by Ingen & Dekker makes the case that many perceptions held about the decline of traditional community pursuits and celebrations are not necessarily due to less sociability but can be explained, rather, by an increase in informalization processes.

This means that traditional social activities, be they through formal memberships in clubs, associations, or otherwise, might be replaced, or supplemented, by more informal settings and occasions that do not require memberships or particular spatial environments, for instance. The decline in, say, bowling memberships can thus be offset by other social interaction that is less recordable but not necessarily less sociable. This process of informalization could be viewed as part of a larger social "re-arrangement" where individuals partaking in the "Network Society" find new outlets and forms to interact and express sociability: "Knowledge transfer takes place within defined circuits between different groups and 'scenes' in the creative sector. One of the essential requirements of this system is physical spaces where people can meet and validate new cultural forms, or 'play-grounds of creativity' such as cafes, squares, museum foyers. These are also the new spaces that are often so attractive to tourists". Castells himself is an advocate of networks to further sociability for the individual, though admittedly these expressions will be mostly utilized by already well-connected people, and critical voices are heard as to the nature of such "networking individuals and their sociability": "The mobility and independence of network nomads who swing from contact to contact and project to project, socially and spatially, without insisting on a consistent self image, is now considered the most valuable asset of human capital". Increased individualization, or individual freedom, might actually lead to more sociability as expressed through events and other informal spatial arrangements.
These might differ slightly or significantly from the form and content of their predecessors of yesteryears, but their use as vehicles of expression and meaning to community and beyond surely remains – though, admittedly, community and associated meanings might also be ontologically different than yesteryears’, if we agree that we are moving from industrial to knowledge society in the developed world. Urbanization has become coupled with the legitimization of new economic instruments and parameters and their importance, where events and cultural symbols play a determining role, as metropolises compete globally for attention and money through culture-economic initiatives, while midsized cities compete regionally, and so on.

The particular frequency and intensity of urban competition and development might be particular to contemporary society, but using events and experiences to further urbanization is not new. The Olympic Games, for instance, have been used extensively as a “catalyst of urban change” throughout the twentieth century: “What began simply as a festival of sport has grown into an unusually conspicuous element in urban global competition and, for its host cities, a unique opportunity to attract publicity, bring in investment and modernize their infrastructures and images”. And it might not just be commercialization of traditions taking place but also traditionalization, or “culturalization,” of commerce or the mix of worldly and spiritual spheres. Marxists might interpret this next step as the ultimate alienation of man and nature secured through an advanced form of commodity fetishism as exemplified by Debord’s “Spectacle Society”, but its materiality and presence in everyday life is undeniable, and perhaps most illustratively performed at the celebrations and events of our times. Could such developments be due to the merger between citizen and consumer? The blurred lines between citizen/consumer might work to influence both market and non-market, as the market is attached to traditional non-market values and vice versa, as is evident in the rise of farmers’ markets and food co-ops, for instance. It should be noted, however, that the strong division between that of the consumer and citizen could be somewhat of an historical illusion, as the two categories have always transgressed and shared common areas. Again, such sharp distinctions could be due to an idealization of the past, whose societies so often are narrated as less complex than present society.
Food, and to some degree agriculture, and its associated experiences, carry much significance today, as symbols of the longevity, and dare one say the sustainability, of societies and communities in general, as media and/or vehicles for social change, and as a symbolic and very real mark of distinction and identity. These observations are in themselves nothing new, as previous literature has proficiently shown that food and its associated experiences have been used in multiple ways. But the particular form, expressions and meanings associated with these might be somewhat unique to our present times. It is hard not to recognise the influence of the “risk society” in the themes above and those they invoke, together with the perceived negative separation between man and nature that is reminiscent of earlier literature on the urban/rural continuum, which suggests real differences between urbanity and rurality, dismisses these differences, or suggests that the urban/rural continuum continues to hold real significance as part of an identification of special rural and urban qualities.

The rural/urban continuum illustrates, perhaps better than anything, the condition of post-modernity, where “imagined” and/or perceived differences are as important as any “real” measurable differences – indeed, reality is guided by perceptions of the “other” and associated places. Due to increased mobility, place should, in theory, matter less, but this does not seem to be the case. The importance of place, and its celebration, can be explained by increasing globalization and homogenization – including cultural commodification and standardization – where global structures are often associated with homogenizing markets and local structures with diverse community values, the latter often being perceived as intrinsically better exactly because of their locality or adherence to place. The literature on historical consumption, though, provides ample examples of how global trade interacts with and changes the local, and suggests that viewing the local as a separate entity from the global might be counterproductive and/or naïve. The strongly felt presence of concepts such as authenticity could be explained by the perceived alienation between man and nature/agriculture, generating farmers’ markets, school gardens and the like to cure ills and provide solutions that had barely been articulated before, such as eco-literacy or food literacy. Authenticity can also provide individuals living under the perceived fragmented and illusive condition of post-modernism a re-assurance of their future choices by invoking ties to nature and community, which are seen as good due to their perceived universal/stable/traditional structures. Again, this complexity, and the acknowledgement of its existence, can actually strengthen the desire for authenticity.
Authenticity within food consumption and places can thus be viewed as an elitist concept that, through the appropriation of communal spaces and individual consumption patterns, makes opaque the ‘actual’ societal and political structures that guide our food consumption and understanding. But it could also be seen as the first step toward a new way of thinking about food, social systems and foodscapes that dares to question the productivist rationale of modernism and its spatial expressions, invoking and awakening a new understanding of, and relation between, the local and global – probably it will do both. This conceptual sentiment of doing both is perhaps the “true” lesson of post-modernism, if we accept its premise. Concepts like authenticity have the thematic potential to address conflicting messages and provide contradictory explanations to societal issues. Maybe this ability is also mirrored in our materiality of the twenty-first century and the growing acknowledgement that we can produce more of everything for everybody and still – even if it is “only” on a perceptual level – miss that something of the past, imagined or otherwise. The increased individualization might run parallel to an increased socialization, but both take different forms and hold different aspects and experiences than individuality and sociability thirty or even a hundred years ago. This lends credence to the concept of a universal humanism that is shared, but also one that is given new meanings and forms by its participants, both good and bad, throughout history. Indeed, and this is the paradox, the mastery and control of nature which is the basis of our modern food systems can be perceived as alienating and unnerving.

Concentrations were calculated using an established air-sampling rate

At the 15 targeted nephelometer locations, plus an additional 14 locations near the burns, trained local personnel placed passive samplers to measure particulate matter and naphthalene for 24 to 120 hours and then sent the samplers to our laboratory for analysis. Due to winds shifting from the predicted direction, our samplers were directly downwind only at the Dunham burn. At that burn, although passive samplers were mounted on several telephone poles immediately adjacent to the burned field, only one PM10 nephelometer was successfully deployed. Highly elevated PM10 values were observed at the Dunham downwind monitor: a maximum hourly concentration of 6,500 µg per cubic meter occurred from 1:00 to 2:00 p.m., then a dramatic decline to 4.3 µg per cubic meter by 4:00 p.m. The average 24-hour PM10 concentration at this Dunham location was 276 µg per cubic meter, well above the federal criterion for unhealthy air, 150 µg per cubic meter. Although we only successfully deployed one monitor, the highly elevated concentrations were consistent with PM10 levels reported in another study of a burned field. Photo evidence was also consistent with visibility of less than 1 mile, which is expected at hazardous air levels. As noted, wind speed at this burn was somewhat higher than at the other burns. At several of the other 12 nephelometer locations, much smaller peaks were apparent in PM2.5 and PM10 after the burns were initiated, up to 57 µg per cubic meter of PM10 within the hour. Similar to the E-BAM findings, evening-to-morning peaks in PM2.5 and PM10 were observed. Although all of these peaks were relatively brief, these measurements were collected at places of public access, and even short-term exposures may have health risks.
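The 24-hour average behind that comparison is simple arithmetic over hourly readings. A minimal sketch in Python, using illustrative values rather than the study's actual hourly data:

```python
# Compute a 24-hour mean PM10 concentration from hourly nephelometer
# readings (µg/m³) and compare it against the federal 24-hour PM10
# standard cited in the text. The readings below are illustrative,
# not the Dunham monitor's actual hourly record.

FEDERAL_PM10_24H_LIMIT = 150.0  # µg/m³, 24-hour standard

def mean_pm10(hourly_readings):
    """Return the 24-hour average of a list of 24 hourly PM10 values."""
    if len(hourly_readings) != 24:
        raise ValueError("expected 24 hourly readings")
    return sum(hourly_readings) / 24.0

# A hypothetical day with low background and one extreme burn-plume hour,
# mimicking the pattern described at the downwind monitor:
readings = [10.0] * 24
readings[13] = 6500.0  # peak hour, 1:00-2:00 p.m.
avg = mean_pm10(readings)
print(f"24-h mean: {avg:.0f} µg/m³, exceeds standard: {avg > FEDERAL_PM10_24H_LIMIT}")
```

This illustrates how a single extreme hour can push the daily average well past the 24-hour standard even when every other hour is near background.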
An increase in PM2.5 concentrations in air samples from city centers as low as 10 µg per cubic meter for as little as 2 hours has been associated with increased daily mortality in the surrounding population. At the laboratory, computer-controlled scanning electron microscopy and energy-dispersive X-ray spectroscopy were used to obtain the individual sizes and chemistry of particles collected on the samplers. Then, PM2.5 and PM10–2.5 concentrations and particle size distributions were calculated using assumed particle density and shape factors and a particle deposition velocity model. In samples from the downwind locations at the Dunham burn, concentrations of both PM2.5 and PM10–2.5 were elevated compared to an upwind sample. The fine fraction was primarily carbonaceous with a peak at the sub-micron range, while the coarse fraction had a lower carbonaceous percentage.
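The underlying conversion from a passive sampler's collected mass to an airborne concentration divides by the volume of air effectively sampled. A minimal sketch under a fixed effective sampling rate; the rate and mass values are placeholder assumptions, not the study's calibrated deposition-velocity model:

```python
# Estimate an airborne concentration (µg/m³) from a passive sampler,
# assuming a fixed effective air-sampling rate. Both numbers in the
# example are hypothetical; the study used a particle deposition
# velocity model rather than a single constant rate.

def passive_sampler_concentration(mass_collected_ug, sampling_rate_m3_per_h, duration_h):
    """Concentration = collected mass / (effective sampling rate × exposure time)."""
    sampled_volume_m3 = sampling_rate_m3_per_h * duration_h
    return mass_collected_ug / sampled_volume_m3

# Example: 1.2 µg collected over a 72-hour deployment at 0.05 m³/h:
c = passive_sampler_concentration(1.2, 0.05, 72)
print(f"{c:.2f} µg/m³")
```

Longer deployments (the study's ran 24 to 120 hours) increase the sampled volume, which is what lets passive samplers resolve low ambient concentrations.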

These carbonaceous percentages were higher than those measured upwind for fine and coarse fractions, as well as those reported for fine and coarse fractions in San Joaquin Valley ambient air. The coarse fraction in the downwind sample also had higher percentages of potassium, phosphorus and chlorine. Potassium and chlorine are considered potential indicators of biomass smoke, and phosphorus is found in most plant material. We also analyzed samples of unburned and burned bermudagrass and found that among inorganic elements, they contained similar peaks of potassium, phosphorus and chlorine. Their identification here may assist air pollution researchers attempting to identify sources of particulate matter in air samples. Naphthalene. Samples were analyzed for vapor-phase naphthalene by gas chromatography/mass spectrometry. Naphthalene was occasionally detected at the five targeted burns with levels above the reportable limit at seven of the 23 locations near the burns and at one of the six more-distant locations. The highest level was detected in a sampler placed directly downwind of the Dunham burn. That highest level was lower than a reference level for respiratory effects, but only two samples were collected directly downwind and concentrations elsewhere in the plume could have been higher or lower. To compare, vapor-phase naphthalene measured in a laboratory from directly above the burning of agricultural debris was 60 µg per cubic meter. To assess health educational needs, we interviewed community leaders, community residents, farmers and school representatives from the agricultural area of Imperial County. We used a qualitative method called Key Informant Interviews, which allows for candid and in-depth responses and the characterization of how interviewees discover and act on information. Potential participants were informed that the interview would take 30 to 60 minutes and that responses would be anonymous.
If a respondent declined an interview, no information was recorded. Community leaders. Ten community leaders were interviewed out of 15 contacted. Those interviewed held management positions within either county environmental health agencies, nonprofit agencies that supported agriculture, or environmental organizations that promoted clean air. More than half of the community leaders ranked burning as a medium or high concern for their organization. Respondents representing the agricultural industry considered outreach important because, as one respondent said, “The public’s view of burning is fairly negative.” Suggestions for educational outreach included training for staff on the health impacts of smoke and “simple recommendations, options of actions to take during a burn.” Residents. 

Seven interviews were conducted after we contacted 15 residents who lived either in single-family homes or apartments within 2 miles of fields. Most considered burning a high or medium health concern compared to other community health concerns. One person said, “You’re closing doors and windows, just trying to keep the smoke out.” No respondent had ever called or inquired with government agencies. One respondent explained, “We all have to live with our neighbors. . . it would be difficult to file a complaint or inquiry.” None of the respondents were aware of any educational materials. Farmers. Of 30 farmers that we contacted, three agreed to participate. All three burned bermudagrass or wheat fields, thousands of acres in some years. The farmers discussed the benefits: as one explained, “Burned fields are more profitable.” All had considered disking their fields or using minimum tillage as an alternative to burning, which they had learned about by trial and error. All three discussed a certificate program used by the Air Pollution Control District to accredit and stimulate financial rewards for farmers who do not burn. All three also had voluntarily notified their neighbors about planned field burns. School representatives. Out of 30 contacted, we interviewed five teachers or superintendents who each worked at a separate school or district near historically burned fields. School representatives were concerned about enforcement. Their suggestions included: “Have people call a number if they notice illegal burning or something suspect” and implement “stiff penalties for those who don’t [follow burning rules].” They had ideas about community education, such as public service announcements on television.
Two respondents, who were not enthusiastic about doing outreach, said, “There’s so much that we have to do.” This consideration may have also been part of the reason why the participation rate was low for key informants in this group, and possibly the farmer group. Responses from our key informants indicated that educational messages were needed. We developed two-page fact sheets for three Imperial County audiences — the general public, school representatives and farmers. These covered the reasons for burning, burn regulations, potential health impacts and behavioral recommendations to reduce exposures. In our studies, elevated particulate matter levels and visible drift were observed as far as 500 feet from the edge of burning fields, and wind directions could quickly change. We advised that anyone who could see or smell smoke or was within 300 feet of a burning field should go inside. If people had to be outside near a burning field, we recommended face-piece particulate respirators, which are available at most hardware stores. A worker who must be outdoors and near a burn must be in a respiratory protection program that includes medical evaluations and fit-testing of the respirator’s seal on the worker’s face. A draft of the fact sheet for the general public was tested with community members at a health clinic and shopping center. Although there were complaints about its length, the fact sheet was highly rated for usefulness: all 20 participants gave it either a four or a five on a scale of one to five. The final fact sheets were distributed to local organizations and are available on the Internet. In our studies, agricultural burning created potentially hazardous air levels immediately downwind; during evening-to-morning hours, PM2.5 levels increased 2 to 8 µg per cubic meter.

Many studies have associated total daily human mortality with mean daily particulate matter levels measured in urban centers, and some have observed a relationship at levels as low as 2 µg per cubic meter. In California, increases in children’s total daily hospital admissions for respiratory problems are also associated with increases in daily PM2.5 and potassium air levels, the latter an indicator of biomass smoke. To protect public health and potentially reduce exposures to smoke from agricultural burns, we recommend additional health education, smoke management and air quality research. Health education. Fact sheets are needed for other California counties where agricultural burning takes place, as well as educational materials for outdoor and field workers about respiratory mask protection and smoke visibility guidelines. As interviewees suggested, broader community education could include public service announcements. Smoke management. Currently, CARB declares a permissive-burn day when meteorological conditions ensure the regional dispersion of smoke, for example, a wind speed at 3,000 feet of at least 5 miles per hour. Imperial County’s smoke management plan states that the Air Pollution Control District may put in place additional restrictions based on meteorological and air quality conditions, including strong ground-level or gusty winds. We observed substantial drift at a slightly greater wind speed than that previously suggested for a vertical column of smoke to occur. Local Air Pollution Control Districts could reduce ground-level drift by specifying a ground-level wind speed above which burns should not take place. Additionally, evening-to-morning levels of particulate matter could be reduced if warranted by other restrictions, such as shortening allowable burn hours. Interviewed residents expressed reluctance to report neighbors who might be out of compliance.
Supplemental Imperial County Air Pollution Control District activities could include online instructions about how to make a complaint. In addition, posting visibility guidelines for hazardous drift and a daily listing of the areas in the county where burns were scheduled would improve community notification. Research. Additional air monitoring is needed to further characterize the nature and extent of ground-level plumes and how they are affected by local crop type and conditions. Although crop-specific particulate emission factors from burning bermudagrass stubble have not yet been developed, factors for other grasses, such as Kentucky bluegrass, are about twice those for rice and wheat. The moisture level of burned residue can also significantly affect particulate matter emissions, with a change in moisture from 10% to 25% more than tripling particulate emissions during the burning of rice, wheat and barley straw. Ambient monitoring should also include indoor air, as outdoor PM2.5 may substantially infiltrate buildings, and we observed that outdoor particulate matter increases overnight when people are likely to be inside. Residents may be amenable to researchers installing unobtrusive passive samplers to monitor indoor air. In further studies, methods might be modified to allow the further identification of carbonaceous material, the gaseous component of other PAHs and some of the thousands of other volatile gases found in smoke. Information is also needed on whether residents are following recommendations to reduce their exposure to smoke from agricultural burning. Finally, farmers expressed a willingness to try alternative farming practices, notably tilling. We recommend further study of alternative farming techniques such as conservation tillage, which may reduce the need for burning, conserve water and soil, and reduce air quality impacts.
In addition, integrating livestock grazing with grain and hay farming as a substitute for burning or tilling may reduce pests, herbicide use and erosion and provide additional income. Further study is needed on how farmers could viably integrate alternative techniques into current practices, particularly for local crops such as bermudagrass, and the estimated human health impacts of such changes.

Penal farms were laboratories of economic botany and labor control

The implications of this coincidence in timing were not lost on the USDA officials who rallied behind annexation of the Philippines. OFSPI director David Fairchild thought the USDA should “send an expedition with the invading army to gather together such information and material, plants, seeds, etc., as would give an idea to the resources of the country.” USDA Secretary James Wilson pushed for direct oversight of the archipelago. Both men anticipated opening a tropical research institute in the Philippines modeled on ’s Lands Plantentuin in Buitenzorg, Java—then the premier colonial research institute in equatorial Asia. Fairchild, who had studied at Buitenzorg during his collection expeditions in Southeast Asia, wrote that the institute awakened him to “the possibilities there are in the organizing of such colonies if they are properly managed.” American annexation of the Philippines the next year did not result in formal USDA offices in the colony. Instead, USDA botanists and crop and farm machinery specialists staffed a separate Philippine Bureau of Agriculture that retained close ties with the USDA. Secretary Wilson urged the PBA’s first director, Frank Lamson-Scribner, to work with the needs of an industrializing American economy in mind. “Fibers, coffee, rubber, spices, and such things as we cannot produce should have most attention.” Coconuts, notably, were missing from this list. Coconuts came onto the radar of the Philippine Bureau of Agriculture through the food experiences of soldiers and trans-imperial scientific exchanges facilitated by research centers like Buitenzorg. Wary of canned commissary foods following the “embalmed beef scandal” in which cheap meat tinned in Chicago poisoned soldiers in Cuba, US privates flush with cash turned to the communities they were occupying for food. René Alexander D.
Orquiza’s mining of soldiers’ letters shows how a taste for coconuts and native foods turned soldiers into boosters for the development of Philippine food industries. Private Andrew Pohlman wrote home that, “[w]e learned that the interior of a young coconut tree would furnish a meal which was not complete for heavy marching but it did not make us sick, as some meals in the company mess.”

The PBA sent economic botanists and plant explorers to Java and Ceylon to investigate tropical crops. These travels resulted in the PBA’s first report on the coconut plant and copra trade, authored by William S. Lyon in 1903. Lyon’s report detailed the impressment of the coconut into an emerging military-industrial complex as an oleochemical, a general term for a vegetable fat with industrial applications. “Chemical science,” Lyon wrote, “produced from the cocoanut a series of food products whose manufacture has revolutionized the industry and placed the business of the manufacturer and of the producers upon a plane of prosperity never before enjoyed.” French chemists in Marseilles distilled from copra lauric acid, an essential washing agent, and incorporated coconut oil into oleomargarine, a solid fat composed of beef tallow, water, and a vegetable oil such as coconut valued for its shelf stability. By 1902, four or five large factories in France met the “world’s demand for ‘vegetaline,’ ‘cocoaline,’ or other products with suggestive names, belonging to this infant industry.” The high triglyceride content of coconut oil led British chemists to investigate its potential as a source of nitroglycerin when heated under pressure with an alkali such as lye. According to one mid-twentieth-century account, the “recovery of [nitro]glycerin” from copra was twenty-five to thirty percent higher than that of other high lauric acid vegetable oils. The coconut tree—and by extension its planters, pickers, and Pacific landscapes—was incorporated into an industrial war machine. So valuable were coconuts during the Great War that the British Home Office imposed high duties on copra exports from the colonies. The shredded husks, meanwhile, became gas mask filters protecting soldiers from chemical weapons.
The war that began as a response to the geopolitical rivalries of technological imperialism was fought over and with the biological materials of empire. In addition to detailing the economic potential of copra, Lyon offered a set of prescriptions for the growth of a Philippine copra export industry. This included bioengineering a tree best suited to the needs of a plantation economy. Coastal palm trees fruited infrequently due to the need to expend more energy on root growth in search of subterranean nutrition. Inland trees, by contrast, directed that energy toward trunk growth, which, when paired with top pruning, encouraged greater flowering. Far from the spindly tree of the tropical imagination, the coconut tree of the plantation economy was a short and squat prolific flowerer.

Further travels through the coconut zone endowed PBA botanists with insight into how to manipulate trees into frequent flowering. Thomas P. Hanley, a special agent in charge of farm machinery, reported that a chance train ride from Colombo to Kandy placed him “in the acquaintance of an educated Singalese” who owned a plantation “upon which he made coconut growing a specialty.” The unnamed informant shared that the trees generated best when spaced twenty-five to thirty feet apart, which in turn mandated more acreage for each plantation. The resulting dwarf trees better withstood strong winds and “the fruit can easily be gathered by our native boys, who are accustomed to the work.” The biggest challenge to the potential coconut planter, though, was attracting investment capital while the tree took seven years from planting to maturity. Here, agents of the colonial state filled in, proving the efficacy of large coconut plantations by using forced labor on penal farms. PBA scientists and colonial administrators moved seamlessly between service to the colonial state and their own private agricultural entrepreneurial schemes. Key to this rotation was their access to unfree Filipino laborers and their ability to attract American investors to the archipelago. No figure better embodies this set of relations than Dean Conant Worcester. Worcester, a University of Michigan-trained zoologist, made two late-nineteenth-century collecting expeditions to South America and Southeast Asia along a Brazil-to-the-Philippines route first blazed by British naturalist Alfred Russel Wallace. Worcester may have remained in Michigan had not the American war against Spain generated press and political interest in its largest Pacific colony. Worcester and his former expedition partner, Frank S.
Bourns, published a series of articles detailing the history of Spanish misrule, the archipelago’s untapped natural wealth, and the Philippine “types” too divided to constitute an independent nation. Worcester’s deft pen and expert self-promotion earned him a post on the governing Philippine Commission, a seven-member body appointed by the US president. Worcester served as the commission’s “director of the interior” until 1913, making him one of the longest-serving US administrators in a colonial government known for short tenures. Worcester’s longevity was due, in part, to his portrayal of US rule as a defense of upland “tribal peoples” from more Hispanicized yet vicious lowland “Malays.” The portrayal earned Worcester the enduring ire of the Philippine landowning elite but was nonetheless embedded into the racial geography of empire.

While elite power forced Philippine commissioners to work out power-sharing agreements in the form of an elected Assembly, commissioners and the American military retained direct oversight in areas deemed “non-Christian.” The racial division effectively gave Worcester’s Interior Bureau an open hand to mine Luzon’s upland Cordillera for mineral wealth and to work alongside the US military government in the southern “Moro Province,” a vast area that included the island of Mindanao. Among Worcester’s many initiatives were explorations into gold mining in Benguet, Luzon; the introduction of cattle grazing in Bukidnon, Mindanao; and ample assistance to Bourns’s Philippine Lumber and Development Company, which maintained interests across the islands. Finally, with the PBA under his purview, Worcester was in close touch with Lyon and the chemists who had turned their attention toward copra. Worcester’s 1911 pamphlet, “Coconut Growing in the Philippines,” beckoned investors to the islands. His rhetoric is an exemplar of the strategies agricultural entrepreneurs invoked to draw financial capital to the growing fruit empires in the Pacific and Central America. Agricultural entrepreneurs balanced their praise for the natural capacity of equatorial lands with a condemnation of the “native” practices that failed to develop robust export economies. “The agricultural methods of the natives,” Worcester wrote, “have violated every known rule. Seldom has the ground been really prepared for planting. The trees invariably stand too thickly.
The Filipino cannot rid himself of the idea that the more seed he sows the greater will be his harvest.” Such carelessness produced the dreaded “tall spindling trees” that bore “nuts sparingly.” Yet, despite the waste, “the Philippine Islands produced during the fiscal year ended June 30, 1909, approximately 231,787,050 pounds of copra … This output excels that of Java, of the Straits Settlements, of Ceylon, or of the South Sea Islands, and places the Philippines at the head of the list of coconut growing countries. In fact, during the year mentioned the Philippines produced about one third of the world’s output.” Worcester asked his investor-reader to imagine the potential if Philippine labor could be disciplined to scientific methods. “If this result has been obtained under the haphazard methods in vogue, what may be anticipated when due care is exercised in selecting suitable land, when it is properly cleared and planted, and when suitable cultivation is continued while the young trees are growing and after they begin to produce?” The pamphlet paid immediate dividends. Worcester boasted to the army general Frank McIntyre that he was “glad the publication was insisted upon, because it has already brought a good bit of money out here for investment in coconut growing. There are two men in the islands now hunting land.

One of them has $250,000 available, and the other has $50,000 with the assurance of more as fast as it is needed.” The prison was the institution by which Americans disciplined Philippine labor to copra exports. The declared end of war in 1902 saw the transformation of insurgents from enemies of the state to criminals. The Philippine Constabulary, an archipelago-wide police force composed of American leadership and Philippine recruits, continued the wartime practice of concentrating subversive communities, policed new crimes such as vagrancy, and accompanied US land surveyors and scientific expeditions throughout the islands. The Constabulary’s arrest policies effectively created a pool of laborers to build an extractive infrastructure of roads, plantations, and penal farms. Five hundred “well-behaved” prisoners constructed a road between the Province of Albay’s Tabaco and Ligao municipalities. “In this way,” wrote one commissioner, “one of the most beautiful roads in the archipelago was constructed, and served a most useful purpose, as it tapped a region very productive of Manila hemp.” In southern Luzon’s Laguna province, an additional five hundred prisoners constructed roads to serve the young coconut industry. Laguna Provincial Governor Cailles requested that “Moros, Ilocanos, Bicols, and Visayans, but not Tagalogs” be sent to the Tagalog-speaking province so that the prison laborers would not escape. Cailles ordered each man to wear a “light chain welded around his ankle and fastened to his belt, so that he cannot move without making a slight clanking sound” and displayed the bullet-ridden body of one unfortunate soul who attempted to escape. In Mindanao, military governor Leonard Wood oversaw a prison labor road-building project between Overton and Marahui, an area that American officials hoped to devote to rubber plantations. The two largest penal farms were the San Ramon colony in Zamboanga, Mindanao, and the Iwahig colony on the island of Palawan.
The San Ramon penal farm in Zamboanga, Mindanao, housed Muslim dissidents, and an early order called for the planting of cacao, rubber, hemp, coffee, and a variety of vegetables in addition to coconuts on the prison’s approximately one thousand four hundred and fifteen hectares. The cacao orchard failed and rubber did not take, but coconut thrived. Bureau of Agriculture officials “urgently” recommended that “labor, farming tools, and draft animals be found to ready the ground for an additional 200,000 coconut trees.” By 1915, the colony’s coconut plantation had twenty-five thousand mature trees, nine thousand seedlings awaiting transplanting, and five thousand sprouts in seed beds. The penitentiary also included a state-of-the-art drying house, which sped the drying process by controlling the heat. Officials selected Iwahig for a penal farm due to its proximity to the deep-water port of Puerto Princesa. As “virgin” land, the plantation required vast amounts of prison labor to clear the site’s dense and biodiverse rain forest.

Estimated impacts of such limited delays on crop production should be minimal.

The impact of TOU rates therefore becomes apparent in the years after 2010 in Figure 4. The time period highlighted in yellow indicates the summer peak hours of 12:00 PM to 6:00 PM. In addition to TOU, several utilities offer various DR programs tailored toward agricultural irrigation customers, with a combined load shed magnitude of 0.7 GW dating back to 2004. Although largely successful, challenges faced by agricultural DR programs include unreliable shed rates and low participation rates. Most agricultural irrigation systems operate in a manual or semi-automated fashion that requires long notification periods in order to participate in DR programs. This, along with challenges such as lack of communications, manual controls, and farm operational limitations, has led to low participation in DR programs by agricultural customers. Currently, agricultural irrigation pumping can only participate in traditional DR programs offered through utilities. In the near future, fast-responding DR services that can participate directly in the electricity markets will become more valuable. Automated DR (ADR), another DR strategy in which loads are shed automatically in response to grid control signals unless the customer opts out, allows quicker, more reliable load shedding with less effort required by grid operators and growers alike. ADR has the potential to be used for ancillary services, which are growing in importance due to the load uncertainty and variability caused by the integration of large shares of renewables. Such services are referred to as supply-side DR. In order to provide supply-side DR to the grid, loads should directly interact with the California Independent System Operator.
Besides limited pilot programs such as the Demand Response Auction Mechanism, there are currently no other mechanisms in place that allow pumping loads to directly provide supply-side DR, so agricultural customers can only provide resources to the grid by enrolling in a TOU, DR, or ADR program offered by their local utility or through a third-party aggregator. The examples presented below illuminate the nature of the demand management challenges from the irrigators’ perspective.

Over-voltage incidents can result in significant damage to equipment and disrupt normal operations for extended periods. Therefore, demand management of agricultural loads is not only beneficial to the grid, but it also makes farming operations more resilient. In 2016, the peak demand of California’s electricity grid was 46 GW. In the same year, the peak demand for agricultural irrigation pumping was 1.3 GW. As of 2015, California Investor Owned Utilities’ total DR portfolio was 2.1 GW. Theoretically, 62% of the current IOU DR portfolio could be satisfied through agricultural irrigation DR alone. Agricultural irrigation can help address several challenges highlighted in Figure 2. As shown in Figure 3, agricultural load is highly concentrated in the summer months, coincident with the peak demand of the grid as a whole. In addition, the highest daily demand for agricultural irrigation occurs during hours with the highest levels of evapotranspiration, which are coincident with the highest levels of solar electricity generation. Solar curtailment, whereby solar generators are disconnected from the grid to protect it from being overwhelmed, occurs between the hours of 12 and 6 PM, the hours of peak irrigation demand. A flexible and dynamic irrigation system can absorb excess solar generation by over-irrigating during certain hours of the day, facilitating higher levels of solar integration into the grid and eliminating solar curtailment. In the absence of cost-effective battery storage, irrigation pumping can be a valuable resource for balancing the electricity grid. Time of Use pricing is a cost-effective option for modifying load shapes because there are minimal, if any, site-level technology enablement costs. And while the load reduction at any given site is typically small, the breadth of participation, if the rates are default or mandatory, provides a substantial statewide effect. TOU can contribute substantially to overall DR potential.
The impacts of TOU pricing on agricultural accounts are clearly distinguishable in the average daily demand profiles of agricultural accounts recorded by Pacific Gas and Electric’s Smart Meters, as shown in Figure 4. Mandatory TOU rates were introduced in 2009, and over 75 percent of firms faced their first month of mandatory TOU pricing in the summer of 2010.

This limited overview of demand management for irrigated agriculture in the San Joaquin Valley illustrates the management decisions that must be made. The following examples are based on an actual almond farm located in Turlock, California. The 92-acre farm is supplied by one groundwater pump. The farm received 38.4 inches of water, plus 4 inches of rainfall, in 2017. In all the following examples, irrigation schedules are modified so that the water requirement of 38.4 inches is satisfied. The first case involves shifting time of use for a 92-acre almond orchard with ample delivery system capacity and a readily available water supply. The orchard is irrigated in three sets. Most irrigation events lasted 24 hours or more, so irrigating all three sets spans three days. The actual sequence of irrigation dates and durations in 2017 is indicated by the histogram in Figure 5. The wide spacing between irrigation events indicates ample irrigation system capacity, allowing the farm to easily shift irrigation dates and durations. This represents an ideal opportunity for energy load shifting: it is simple to plan and implement, and presents a clear financial benefit. Energy rates for the farm are $0.195 per kWh for off-peak hours and $0.445 per kWh for 8 peak hours daily. An alternative schedule, indicated in Figure 6, would restrict irrigations to the 16 off-peak hours each day. The Capacity Bidding Program (CBP) and Base Interruptible Program (BIP) are two DR incentive programs offered by Pacific Gas and Electric Company that are most suited for agricultural customers. The incentive program stipulates that interruptions will last no more than four hours, with no more than one interruption per day and no more than ten per month. In this example, we illustrate how participation by the same farm as in “Example 1” in a program similar to BIP would impact its normal operation.
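The TOU arithmetic of the first example can be sketched as follows. The sketch compares the energy cost of a continuous 24-hour irrigation set, which necessarily spans all 8 daily peak hours, against the same pump-hours restricted to off-peak periods, at the rates quoted above. The 75 kW pump demand is a hypothetical assumption for illustration; the text does not give the pump size.

```python
# Hedged sketch of TOU load shifting; the pump size is an assumption.
PUMP_KW = 75.0                  # hypothetical pump demand (not given in the text)
OFF_PEAK, PEAK = 0.195, 0.445   # $/kWh rates quoted for this farm

def pumping_cost(peak_hours, off_peak_hours, kw=PUMP_KW):
    """Energy cost of one irrigation set, split across the two rate periods."""
    return kw * (peak_hours * PEAK + off_peak_hours * OFF_PEAK)

# A continuous 24-hour set spans all 8 daily peak hours.
baseline = pumping_cost(peak_hours=8, off_peak_hours=16)
# The same 24 pump-hours restricted to the 16 off-peak hours per day.
shifted = pumping_cost(peak_hours=0, off_peak_hours=24)
savings = baseline - shifted
```

At these rates, each 24-hour set shifted off-peak avoids 8 peak hours, saving PUMP_KW × 8 × ($0.445 − $0.195), i.e. $150 per set for the assumed 75 kW pump.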
If the farm were also following the TOU schedule as illustrated in Figure 6, it would be operating close to maximum pumping capacity. The analysis begins with the irrigation schedule based on 16-hour sets presented in the previous example. A modified schedule with occasional interruptions generated at random times is overlaid on the TOU management schedule. DR event days are illustrated with lighter bars. Irrigation events on DR event days do not exceed 12 hours, indicating a 4-hour interruption per event.

If an interruption is called when no pumping was planned, it is indicated as a negative four-hour bar. On days when no pumping was planned, additional pumping for 8 to 12 hours can be inserted to compensate for preceding interruptions. It appears from Figure 7 that the irrigator could compensate for most interruptions shown by shifting irrigation dates by a day or two. The same total volume of water was applied in both Figure 6 and Figure 7. This example can also illustrate an important constraint common to DR programs, which is that a farm shall only be compensated for DR participation in months when it would normally be using a significant percentage of pumping capacity. For example, the requirement might stipulate that the pump enrolled in a DR program must operate at least 70% of the time. In this case the seasonal pumping with TOU considerations from May through August would exceed the 70% level. If the financial incentive for participating in the DR program were ~$8 per kW per month and the farm qualifies for four months, the payout would be an additional $2400 per year. However, it is important to note that enrollment and successful participation in such a DR program could entail capital investment for remote system control and variable-speed pumping, which are not considered here. Evapotranspiration (ET) is a widely used irrigation parameter for estimating crop yields and for estimating yield impacts when irrigation is limited. Depending on the crop, some degree of deficit irrigation may actually increase farm profits by reducing costs of water, energy, and other inputs, and by increasing management flexibility. With some crops, deficit irrigation can also improve crop quality if carefully applied at specific growth stages. Modeling of ET and the impacts of ET deficits during the season is, therefore, a central issue for DR management.
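The DR incentive arithmetic quoted above (~$8 per kW per month over four qualifying months) can be made explicit. Note that the $2400 per year figure implies an enrolled pump demand of about 75 kW; that pump size is derived from the quoted numbers, not stated in the text.

```python
# Sketch of the annual DR payout calculation from the example above.
INCENTIVE_PER_KW_MONTH = 8.0   # ~$8 per kW per month, from the example
QUALIFYING_MONTHS = 4          # May through August exceed the 70% usage threshold

def dr_payout(pump_kw):
    """Annual DR incentive for a pump enrolled during qualifying months."""
    return INCENTIVE_PER_KW_MONTH * QUALIFYING_MONTHS * pump_kw

# The $2400/year payout quoted in the text implies a ~75 kW pump:
implied_kw = 2400.0 / (INCENTIVE_PER_KW_MONTH * QUALIFYING_MONTHS)
```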
Figure 8 shows a schedule for another orchard in which a TOU strategy similar to the first example was developed for a maximum of 15 off-peak hours per day. However, in this case the irrigation capacity could not meet scheduled crop water demands on six days in late July and August, indicated by red bars, each representing 15 hours of additional pumping needed to maintain the intended soil moisture pattern. The cumulative irrigation deficit during that interval would be 6% of intended seasonal water use. Scheduling additional irrigations to compensate for the 6% deficit would involve significant rescheduling of water application to the field, and the orchard would not have an opportunity to catch up with lost irrigation until late August. Additional irrigations in late August will not mitigate the impacts to the crop of a month-long period of stress from mid-July to mid-August. Because that deficit is concentrated in a one-month interval and roughly coincides with the onset of harvest, effects on yields could be even more severe. The consequences of such periods of stress will depend on the complex relationships between irrigation timing and amounts, crop water availability, and crop response to available water. The ability of a crop to recover from a delayed or missed irrigation will depend on the stage of growth, the reserves of water in the soil, atmospheric conditions, and the physiology of the crop.

Operating with crop stress as part of an irrigation strategy requires an advanced irrigation management model to estimate the effects of reduced crop water availability on cumulative daily ET. Currently, such advanced irrigation management models are not commercially available. As indicated in the previous sections, irrigation planning to accommodate TOU and DR strategies will need to anticipate occasions of high crop water demand weeks or months ahead of time, especially when allocating water among multiple fields that share a common water supply. If optimal water use involves some degree of deficit irrigation, the planner will need to assess the possible yield impacts incurred by delaying, reducing, or eliminating some irrigations. This requires being able to estimate in advance, and across the whole season, the impact and value of each irrigation and how each irrigation will translate into crop-available water at full or partial ET, particularly at critical growth stages. This requires sophisticated modeling of the relationships between the crop, the soils, the atmosphere, and the irrigation system, combined with site-specific measurements and the irrigator’s management goals. Meeting these challenges requires accurately modeling the disposition and fate of applied water and modeling crop response to available soil moisture not just daily, but looking forward over extended periods of time. Seasonal irrigation strategies and schedules need to be easily and quickly updated to match weather variations, the availability of water, disease problems, and other factors that evolve during the season. And planning needs to account for farm-specific constraints due to contractual arrangements, operating practices, risk tolerance, and other factors that differ from one farm to another. The most effective irrigation management technologies on the market today focus on monitoring daily and weekly estimated ET conditions to provide a limited water balance calculation.
A water balance model compares how much water is applied against ET estimates of how much water is used by the crop. While accurate on a weekly basis, these conventional methods of scientific irrigation scheduling do not provide the forecasting and accurate forward-looking schedules required by the management challenges presented by deficit irrigation. Growers need long-range planning and management of irrigation strategies, including deficit irrigation, to deal with these complex management challenges.
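A minimal daily water-balance bookkeeping of the kind described above might look like the following sketch. The structure, a running root-zone storage updated by applied water, rain, and ET, is standard, but the function and variable names are illustrative, not taken from any particular product.

```python
def water_balance(initial_storage_in, daily_events):
    """Track root-zone water storage (inches) day by day.

    daily_events: iterable of (irrigation_in, rain_in, et_in) tuples.
    Returns the storage trajectory; negative values flag a deficit
    the scheduler would need to anticipate in advance.
    """
    storage = initial_storage_in
    trajectory = []
    for irrigation, rain, et in daily_events:
        storage += irrigation + rain - et
        trajectory.append(round(storage, 3))
    return trajectory

# Three days: a 1.2-inch irrigation, then two hot days with no applied water.
print(water_balance(2.0, [(1.2, 0.0, 0.3), (0.0, 0.0, 0.35), (0.0, 0.0, 0.35)]))
# → [2.9, 2.55, 2.2]
```

As the text notes, this kind of backward-looking accounting is accurate week to week but says nothing about future stress; a forward-looking scheduler would run the same balance against forecast ET.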

Some manufacturers report a canopy temperature reduction of up to 6 °C when using their products.

At current low chill-portion ranges of 55–60, the effect is around 25%, again consistent with the stipulation of Pope et al. that a significant effect threshold would be located there. Considering alternate bearing and other factors contributing to the background fluctuation in yields, it is easy to understand how such effects on relatively small areas within the pistachio-growing counties have not been picked up by researchers so far. Anecdotal yield losses due to low chill have happened on a relatively small scale and passed undetected in the county-level statistics, especially when only one or two chill measures per county were considered. In this case, while the resulting curves are very similar, I find the structural approach more convincing. First, it has a smaller confidence area, and therefore seems more precise. Second, a polynomial of low order will not approximate the process described by agronomists very well. However, estimating higher-order polynomials results in estimates that are not statistically significant. The implications of my estimates for pistachio yields are depicted in the lower half of Figure 3.1. The bottom left panel shows the effects in the 1/4 warmest years in 2000–2018. They are mostly yield declines of 10–20%. These rates are easy to miss due to substantial yield fluctuations in pistachios. What do these estimates mean for the future of California pistachios? Predictions of yield effects for the years 2020–2040 are depicted in the bottom right panel, again for the 1/4 warmest years in 2020–2040. They show substantial yield drops, which could amount to costs in the hundreds of millions of dollars. Chapter 4 in this dissertation explores the potential gains from a technology that could help deal with low chill in pistachios: applying kaolin clay mixtures to dormant trees to block sunlight. The expected net present value of this technology is estimated in the billions of dollars in economic gains.

Considering my results, there may be significant gains from using these technologies even in warmer years today. Concluding this chapter, I want to stress that even in the era of “big data” in agriculture, data availability is still a challenge when estimating yield responses to temperature in some crops, especially perennials and local varieties. Weather information required for assessing potential damages and new technologies might not always be available to a researcher. This chapter develops a methodology to recover this relationship, using local weather data and techniques for dealing with aggregated observations. I use this setup to empirically assess the yield effects of insufficient chill in pistachios, recovering this relationship from commercial yields for the first time in the literature. I then look at the threat of climate change to pistachio production in southern California. As winters get warmer, lowering chill portion levels are predicted to damage pistachio yields and disrupt a multi-billion dollar industry within the next 20 years. These results were made possible by using precise local weather data, applying relevant statistical methods, and using agronomic knowledge in the modeling process. This approach for recovering information from a small yield panel, with limited useful variability at first sight, could be useful for other crops as well. In the introduction chapter, I discuss the nature of the temperature challenges posed by climate change. Rising average temperatures, according to the empirical literature, might not be the major source of potential loss. Rather, it is the elongating and fattening tails of the temperature distribution that would be responsible for much of the damage. Could there be a way for farmers to target these tails directly? If so, such technologies could have potential uses for climate change adaptation.
It so happens that farmers already deal with temperature extremes, and are capable of tweaking the tails of temperature distributions to avoid losses. The introduction already discussed “air disturbance technology”, basically large wind generators, used to deal with some types of frosts. Solutions for right-side temperature tails exist as well.

Of course, shading plants using nets or fabric is an existing practice, but these technologies are costly and not very flexible. However, other products that reflect sunlight and lower plant exposure to excess heat are available on the market. Perhaps the most common ones are based on a fine kaolin clay powder, which is mixed with water and sprayed directly on plants to form a reflective coat, sometimes referred to as a “particle film”. These products have been commercially available since 1999, and have been shown to effectively lower high-temperature damage by literally keeping plants cooler. Spraying this mix requires special rigs and equipment, but the costs are reasonable, and far lower than setting up shading in the form of nets. This technology can be thought of as cheap, disposable shading. Surprisingly, even though kaolin clay has been used by farmers to deal with other problems less related to climate change, I could find no economic literature discussing this technology. As with the case of air disturbance technology, these types of technologies have mostly been ignored by economists. One reason for this gap in the literature could be that economists have not yet realized that these individual products and practices share a common conceptual framework: they tweak the tails of the temperature distribution while leaving the main probability mass untouched. This is an approach I call “Micro-Climate Engineering” (MCE). These are relatively small interventions in temperature distributions, limited in space and time, which aim to avoid the nonlinear effects of the extremes. Farmers know the available technologies for MCE and use them regularly, but their potential applications for climate change have not really been explored. The concept of MCE could be very important for climate change adaptation in agriculture, especially when considering the role of extreme temperatures in predicted future losses.
MCE solutions, where feasible and profitable, could assist in preserving current crop yields and delaying more costly adaptation strategies. This chapter sets out to explore the concept of MCE in general, and to assess the gains from MCE in California pistachios as a case study. Specifically, pistachios are threatened by warming winter days, which could threaten existing acreage within the next twenty years.

This challenge stands out in the existing literature in three ways. First, while much of the climate change literature focuses on annual crops, pistachios are perennial. This means that the opportunity cost of variety switching is higher. Second, the challenge does not occur in the “growing season”, but in the winter months, when trees are dormant and seemingly inactive. This emphasizes the importance of climate change effects year round, rather than just in the spring and summer. Third, the challenge stems from a biological mechanism that is not heat stress. Heat stress is perhaps the most obvious process by which rising temperatures can have adverse effects on yields, and by far the most studied in the economic literature on climate change. However, other biological mechanisms are affected by weather as well, and can cause substantial yield losses. This paper incorporates agronomic knowledge on bloom disruption due to increased winter temperatures, a mechanism that is relatively unexplored in the economic literature. Scientists at the University of California Cooperative Extension have been experimenting with kaolin clay applications on pistachios, and the results seem promising. This could mean a great deal to growers and consumers. This chapter analyzes the potential economic gains from this MCE application in California pistachios. Introduced to California more than 80 years ago, and grown commercially since the mid-1970s, pistachio was the state’s 8th leading agricultural product in gross value in 2016, generating total revenue of $1.82 billion. According to the California Department of Food and Agriculture, California produces virtually all pistachios in the US, and competes internationally with Iran and Turkey. In 2016, five California counties were responsible for 97% of the state’s pistachio crop: Kern, Fresno, Tulare, Madera, and Kings.
Since the year 2000, the total harvested acres in these counties have been increasing by roughly 10% yearly. Each increase represents an investment decision made 6–7 years earlier, as trees need to mature before commercial harvest. The challenge for California pistachios has to do with their winter dormancy and the temperature signals required for spring bloom. I discuss the dormancy challenge and the Chill Portion (CP) metric in Chapter 3. It is worth noting that, for the areas covered in this study, chill portions are strongly correlated with the 90th temperature percentile between November and February, the dormancy season for pistachios.

The correlation is very strong, with a goodness of fit of about 0.91. In essence, insufficient chill is a right-side temperature tail effect, comparable with similar effects in the climate change literature. Chapter 3 estimates the yield response of pistachios to CP. Substantial losses are predicted below 60 CP. Compared to other popular fruit and nut crops in the state, this is a high threshold, putting pistachio on the verge of not attaining its chill requirements in some California counties. In fact, there is evidence of low chill already hurting yields. Declining chill is therefore considered a threat to California pistachios. Chill in most of California has been declining in the past decades, and is predicted to decline further in the future. Luedeling, Zhang, and Girvetz estimate the potential chill drop for the southern part of the San Joaquin Valley, where virtually all California pistachio is currently grown. For the measure of the first decile, i.e., the amount of CP attained in 90% of years, they predict a drop from an estimate of 64.3 chill portions in the year 2000 to estimates ranging between 50.6 and 54.5 in the years 2045–2060. Agronomists and stakeholders in California pistachios recognize this as a threat to this valuable crop. Together with increasing air temperatures, a drastic drop in winter fog incidence in the Central Valley has also been observed. This increases tree bud exposure to direct solar radiation, raising bud temperatures even further. The estimates cited above cover virtually the entire pistachio-growing region, and the first-decile metric is less useful for a thorough analysis of pistachios. I therefore need to create and use a more detailed dataset, in fact the same one described in Chapter 3. Figure 3.1 shows the geographic distribution of chill and potential damage in the 1/4 warmest years of observed climate and predicted climate.
While not very substantial in the past, these losses are predicted to reach up to 50% in some regions in the future. The linear supply curves take weather as given. In an ideal weather season, the supply curve is S0. In a year with a warm winter, the supply curve is multiplied by a coefficient smaller than one, i.e., it shifts left and rotates counter-clockwise, resulting in curve S1. Without MCE, the intersection of demand with S1 determines the market equilibrium. Once that is solved, the welfare outcomes (consumer surplus, grower sector profits, and total welfare) are calculated as the areas above or under the appropriate curves. When MCE technology is available, a modified supply curve starts with a section overlapping S1, and then “bends” right towards S0. If demand is high enough, market equilibrium is attained at this bend. Again, the welfare outcomes with MCE are calculated from the equilibrium price and quantity, together with the demand and SMCE curves. The gains from MCE are the differences between these market outcomes, i.e., the outcomes with MCE minus the outcomes without it. Note that the expansion of supply by MCE is guaranteed to result in positive gains from MCE in terms of total welfare and consumer surplus: the price is lower and the quantity is higher. As for the grower sector, it does enjoy extra profits from being able to produce more, but the resulting lower price also decreases its profits from the output that would have been produced anyway without MCE. Therefore, one cannot tell a priori whether grower profits increase or decrease when MCE is available. The sign and magnitude will need to be determined in the simulations, given the various parameters and functional forms.
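To make the welfare comparison above concrete, here is a numerical sketch with linear demand and supply, where MCE is modeled as a per-unit cost that “bends” the warm-year supply curve back toward S0 once the price covers it. All parameter values are hypothetical and chosen only to illustrate the sign pattern discussed in the text; they do not come from the chapter's simulations.

```python
# Hypothetical linear-market parameters (illustrative only).
A, B = 100.0, 1.0        # demand: Q = A - B*P
C = 1.0                  # normal-year supply S0: Q = C*P
K = 0.5                  # warm-winter yield factor (<1) defining S1
M = 10.0                 # assumed per-unit cost of applying MCE

def demand(p): return max(A - B * p, 0.0)
def s1(p): return K * C * p                              # warm-year supply S1
def s_mce(p): return max(s1(p), C * max(p - M, 0.0))     # bends toward S0 once p covers M

def equilibrium(supply):
    """Bisection for the price where supply meets demand."""
    lo, hi = 0.0, A / B
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if supply(mid) < demand(mid):
            lo = mid
        else:
            hi = mid
    p = (lo + hi) / 2.0
    return p, demand(p)

def surpluses(supply, p_star, q_star, n=200_000):
    cs = 0.5 * (A / B - p_star) * q_star                 # triangle under linear demand
    dp = p_star / n                                      # producer surplus: area left of supply
    ps = sum(supply(i * dp) for i in range(n)) * dp
    return cs, ps

p1, q1 = equilibrium(s1)        # warm year, no MCE
pm, qm = equilibrium(s_mce)     # warm year, MCE available
cs1, ps1 = surpluses(s1, p1, q1)
csm, psm = surpluses(s_mce, pm, qm)
# Consumer surplus and total welfare rise with MCE, but grower profits
# can fall, because the extra output depresses the price.
```

With these particular numbers the MCE equilibrium sits on the bent segment (price 55, quantity 45), consumer surplus and total welfare rise, and grower surplus falls slightly, illustrating why the sign of the grower-sector effect must be determined in simulation rather than a priori.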

Past county yields are from crop reports published by the California Department of Food and Agriculture.

The gains are much higher than the ones found in the 1996 report. This is partly due to increased economic activity in general, but probably has to do with greater adoption of smart irrigation as well. The total yearly gains in agriculture range from $492 million, counting only the intensive-margin effects, up to about $1,982 million considering the extra acres that can be grown with the saved water. A surprisingly large sector using CIMIS is landscaping and golf courses, with yearly monetary savings of at least $201 million for our survey sample alone. Several other user types were included in the survey, indicating a substantial role of CIMIS in areas crucial for California’s economy. Respondents use CIMIS to plan drainage in agricultural and urban settings, taking advantage of CIMIS historic rainfall records. CIMIS is used for water budgeting and even pricing. Researchers in the public and private sectors use CIMIS for diverse purposes, from basic research to calibration and verification of other weather-related products. These are just a few of the many additional uses of CIMIS we know about but do not quantify here, due to the complicated methodological framework required. The economic gains from CIMIS surely surpass the ongoing costs of a system with fewer than a dozen employees. However, could these gains be achieved by the private sector? The decreasing costs of weather sensors mean that growers and other users could potentially access precise data on their own. If we wanted a cheap weather station, costing about $1,000, for every 1,000 acres of drip-irrigated land in California, the total cost would surpass $2.8 million, plus some ongoing costs for maintenance. This, however, would forgo many of the benefits of the centralized aggregation of data and the historical records that are crucial for research and planning, as one could not ensure that aggregation of the data from all these separate private stations would occur.
While several online aggregators of weather information exist, many rely on the public information provided by networks such as CIMIS and other government bodies such as airports and air quality monitoring systems. It is not obvious that private aggregators would be profitable if they had to purchase this information, or what their willingness to pay (WTP) would be.
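The back-of-envelope station cost above works out as follows. The roughly 2.8 million acre figure for drip-irrigated land is implied by the text's quoted total, not stated directly, so treat it as an inference.

```python
# Sketch of the private-station cost arithmetic from the text.
STATION_COST = 1_000.0        # $ per cheap weather station
ACRES_PER_STATION = 1_000.0   # one station per 1,000 acres of drip-irrigated land

def station_network_cost(drip_irrigated_acres):
    """Up-front hardware cost of a private station network at this density."""
    return STATION_COST * (drip_irrigated_acres / ACRES_PER_STATION)

# The text's >$2.8 million total implies roughly 2.8 million drip-irrigated acres.
total_cost = station_network_cost(2_800_000)
```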

Moreover, the ET measurements that many growers use are usually not captured by commercial stations, and there are concerns regarding the reliability of approximating ET from other variables. The development of satellite technology might change these conditions in the future. California pistachios are a high-value crop, with grower revenues of $1.8 billion in 2016. The most common variety is “Kerman”, and almost all the California acreage is planted in five adjacent counties in the southern part of the San Joaquin Valley. In recent years, rising winter daytime temperatures and decreasing fog incidence have lowered winter CP counts. Climatologists have concluded that winter chill counts will continue to dwindle, putting pistachios in danger at their current locations. To better predict the trajectory for this crop and make informed investment and policy decisions, the yield response function to chill must first be assessed. This task has proven quite challenging. The effects of chill thresholds on bloom can be explored in controlled environments, but for various reasons these relationships are not necessarily reflected in commercial yield data. For example, Pope et al. report that the threshold level of CP for successful bud breaking in California pistachios was experimentally assessed at 69, but they could not identify a negative response of commercial yields to chill portions at the same level or even lower. They use a similar yield panel of California counties, but have only one “representative” CP measure per county-year. Using Bayesian methodologies, they fail to find a threshold CP level for pistachios, and reach the conclusion that “Without more data points at the low amounts of chill, it is difficult to estimate the minimum-chill accumulation necessary for average yield.” The statistical problem of low variation in treatment at the growing area, encountered by Pope et al., is very common in published articles on pistachios.
Simply put, pistachios are not planted in areas with adverse climate. Too few “bad” years are therefore available for researchers to work with when trying to estimate commercial yield responses.

An ideal experiment would randomize a chill treatment over entire orchards, but that is not possible. Researchers resort either to small-scale experimental settings, with the limitations mentioned above, or to yield panels, which are usually small in size, length, or both. Zhang and Taylor investigate the effect of chill portions on bloom and yields in two pistachio-growing areas in Australia, growing the “Sirora” variety. Using data from “selected orchards” over five years, they note that in two years where chill was below 59 portions in one of the locations, bloom was uneven. Yields were observed, and while no statistical inference was made on them, the authors noted that “factors other than biennial bearing influence yield”. Elloumi et al. investigate responses to chill in Tunisia, where the “Mateur” variety is grown. They find highly non-linear effects of chill on yields, but this stems from one observation with a very low chill count. Standard errors are not provided, and the threshold and behavior around it are not really identified. Kallsen uses a panel of California orchards, with various temperature measures and other control variables, to find a model that best fits the data. Unfortunately, only 3 orchards are included in this study, and the statistical approach mixes a prediction exercise with the estimation goal, potentially sacrificing the latter for the former. Besides the potential over-fitting of this technique, the explanatory variables in the model are not chill portions but temperature-hour counts with very few degree levels considered, and no confidence interval is presented. Finally, Benmoussa et al. use data collected at an experimental orchard in Tunisia with several pistachio varieties. They reach an estimate for the critical chill for bloom, and find a positive correlation between chill and tree yields, with zero yield following winters with very low chill counts.
However, they also have many observations with zero or near-zero yields above their estimated threshold, and the external validity of findings from an experimental plot to commercial orchards is not obvious.

Pistachio growing areas are identified using USDA satellite data with a pixel size of roughly 30 meters. About 30% of pixels identified as pistachios are singular. As pistachios do not grow in the wild in California, these are probably misidentified pixels. Aggregating to 1 km pixels, I keep those pixels with at least 20 acres of pistachios in them. Looking at the yearly satellite data between 2008-2017, I keep those 1 km pixels with at least six positive pistachio identifications.
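The pixel-screening rule described above (aggregate to 1 km, require at least 20 acres of pistachios and at least six positive yearly identifications) can be sketched as follows; the function, array names, and toy data are illustrative, not the actual dataset:

```python
import numpy as np

def filter_pistachio_pixels(acres_1km, yearly_hits, min_acres=20, min_years=6):
    """Keep 1 km pixels with enough pistachio acreage and enough
    positive yearly satellite identifications (2008-2017).

    acres_1km   : (n_pixels,) pistachio acres aggregated into each 1 km pixel
    yearly_hits : (n_pixels, n_years) boolean, True if the pixel was
                  identified as pistachio in that year
    """
    enough_area = acres_1km >= min_acres
    enough_years = yearly_hits.sum(axis=1) >= min_years
    return enough_area & enough_years

# toy data: 4 pixels, 10 satellite years
acres = np.array([25.0, 5.0, 40.0, 22.0])
hits = np.array([
    [1] * 8 + [0] * 2,   # enough area, 8 positive years -> keep
    [1] * 10,            # enough years but too little area -> drop
    [1] * 3 + [0] * 7,   # enough area but only 3 positive years -> drop
    [1] * 6 + [0] * 4,   # exactly at both thresholds -> keep
], dtype=bool)
mask = filter_pistachio_pixels(acres, hits)
print(mask)  # [ True False False  True]
```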

These 2,165 pixels are the grid on which I do temperature interpolations and calculations. Observed temperatures for 1984-2017 come from the California Irrigation Management Information System, a network of weather stations located in many counties in California, operated by the California Department of Water Resources. A total of 27 stations are located within 50 km of my pistachio pixels. Missing values at these stations are imputed as the temperature at the closest available station plus the average difference between the stations at the week-hour window. Future chill is calculated at the same interpolation points, with data from a CCSM4 model (CEDA). These predictions use an RCP8.5 scenario, which assumes a global mean surface temperature increase of 2 °C between 2046-2065. The data are available with predictions starting in 2006, and include daily maximum and minimum temperatures on a 0.94 degree latitude by 1.25 degree longitude grid. Hourly temperatures are calculated from the predicted daily extremes, using the latitude and date. I then calibrate these future predictions with a quantile calibration procedure, using a week-hour window. Past observed and future predicted hourly temperatures in the dormancy season are interpolated at each of the 2,165 pixels, and chill portions are calculated from these temperatures. Erez and Fishman produced an Excel spreadsheet for chill calculations, which I obtained from the University of California Division of Agriculture and Natural Resources, together with instructions for growers. For speed, I code the calculations in an R function. The data above are used for estimation and later for prediction of future chill effects. For the estimation part, I have a yield panel with 165 county-year observations. For each year in the panel, I calculate the share of county pixels that had each CP level.
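The quantile calibration step can be illustrated with a minimal empirical quantile-mapping sketch (the original work uses R; the function name, variable names, and the toy bias here are invented for illustration). Within one week-hour window, each future modeled temperature is mapped through its quantile in the historical model run onto the observed distribution:

```python
import numpy as np

def quantile_calibrate(model_hist, obs_hist, model_future):
    """Empirical quantile mapping for one week-hour window:
    find each future value's quantile in the historical model run,
    then read off that quantile of the observed distribution."""
    # rank of each future value within the sorted historical model run
    q = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    # corresponding quantile of the observed distribution
    return np.quantile(obs_hist, q)

# toy example: the model runs 2 degrees too warm historically
rng = np.random.default_rng(0)
obs = rng.normal(10, 3, 500)          # observed station temperatures
mod = obs + 2.0                       # biased historical model run
fut = rng.normal(13, 3, 500)          # warmer future, same bias
cal = quantile_calibrate(mod, obs, fut)
print(fut.mean() - cal.mean())        # roughly 2: the bias is removed
```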
For example: in 2016, Fresno county had 0.4% of its pistachio pixels experiencing 61 CP, 1.8% experiencing 62 CP, 12% experiencing 63 CP, and so on.

Figure 3.1 presents chill counts and their estimated effects in percent yield change for two time periods: 2000-2018 and 2020-2040. The top left panel shows the chill counts in the 1/4 warmest years between 2000 and 2018. The top right panel shows the chill counts in the 1/4 warmest years in climate predictions between 2020 and 2040. Chill at the pistachio growing areas is likely to drop substantially within the lifespan of existing trees.

Results from the polynomial regression are presented in Table 3.2. The first coefficient is for an intercept term, and it is a zero with very wide error margins. This makes sense, as centering around the means also removes intercepts. The second coefficient is positive, as we would expect, and statistically significant. The third coefficient is negative, as we would also expect since the returns from chill should decrease at some point, but it is not statistically significant even at the 10% level. However, as dropping it would eliminate the decreasing-returns feature, I keep it at the cost of having a wide confidence area. With the estimated coefficients, I build the polynomial curve that represents the effect of temperatures on yields. It is presented in Figure 3.2 with a bold dashed line. The 90% confidence area boundaries are the dotted lines bounding it above and below. Note that the upper bound of the confidence area does not curve down like the lower one; this is the manifestation of the third coefficient's p-value being greater than 0.1. In both cases, the confidence area was calculated by bootstrapping: the data were resampled and estimated 500 times, producing 500 curves with the resulting parameters. At each CP level, I take the 5th and 95th percentiles of the bootstrapped curve values as the bounds for the confidence area.
This approach also deals with the potential spatial correlation in the error terms. Another minor issue requiring the bootstrap approach is that the implicit potential-yield estimation should change the degrees of freedom in the non-linear regressions when estimating the standard errors. In the lower panel of Figure 3.2, a histogram of positive shares is presented: for each chill portion, the count of panel observations where the share of that chill portion was positive. The actual shares of the very low and very high portions are usually quite low. This shows the relatively small number of observations with low chill counts. The two yield-effect curves look very similar in the relevant chill range. By both estimates, the yield effect is very close to 0 at higher chill portions, and starts declining substantially somewhere in the upper 60's, as the experimental literature would suggest. Interestingly, the polynomial curve does not exceed zero effect, although it is not mechanically bounded from above like the logistic curve. This probably reflects the fact that, historically, average growing conditions have not deviated much from the optimal range; the “within” transformation hence did not move the potential yield far from the optimum in this case. At the currently low chill-portion range of 55-60, the effect is around 25%, again consistent with the stipulation of Pope et al. that a significant effect threshold would be located there. Considering alternate bearing and other factors contributing to the background fluctuation in yields, it is easy to understand how such effects on relatively small areas within the pistachio-growing counties have not been picked up by researchers so far.
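The percentile-bootstrap band described above (resample, refit, take the 5th and 95th percentiles of the curves at each CP level) can be sketched as follows; this is a minimal illustration with a quadratic fit and simulated data, not the paper's actual panel or curve specification:

```python
import numpy as np

def bootstrap_band(X, y, fit, grid, n_boot=500, seed=1):
    """Percentile bootstrap for a fitted yield-effect curve.

    X, y : panel observations (rows are resampled together)
    fit  : function(X, y) -> function(grid) giving the fitted curve
    grid : chill-portion levels at which to evaluate the curve
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    curves = np.empty((n_boot, len(grid)))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)        # resample with replacement
        curves[b] = fit(X[idx], y[idx])(grid)
    lo, hi = np.percentile(curves, [5, 95], axis=0)
    return lo, hi

# toy use: quadratic fit of yield effect on chill portions
def quad_fit(X, y):
    coefs = np.polyfit(X, y, 2)
    return lambda g: np.polyval(coefs, g)

rng = np.random.default_rng(0)
cp = rng.uniform(55, 75, 165)              # 165 county-year observations
eff = -0.002 * (70 - cp) ** 2 + rng.normal(0, 0.05, 165)
lo, hi = bootstrap_band(cp, eff, quad_fit, np.linspace(55, 75, 41))
print((lo <= hi).all())                    # True: a valid band everywhere
```

The actual analysis may resample at a different level (e.g., whole counties) to absorb spatial correlation; the percentile mechanics are the same.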

Plants are fertilized with controlled-release fertilizer applied at plant-out.

The management of the agricultural lands will be guided by an advisory committee, but the overall goal is to develop models, for the greater watershed, of ecologically and economically sustainable methods for crop production. An additional research site is located on the Elkhorn Slough National Estuarine Research Reserve. The site includes a small pond drained by sloping uplands. It is very similar to the three drainages on the Azevedo ranch, with the important exception that it has never been cultivated. Although the pond is larger than any of the Azevedo marshes and is subject to greater flushing, it provides the opportunity to obtain background data on soils, sediments, and biota in the absence of agricultural disturbances. During the first two years of the study we established critical measurements, protocols, and characterizations of these watersheds under standard cultivation practices. These data will serve as a baseline for comparison once the property is converted to low-input sustainable agricultural management and habitat restoration is completed in the wetland buffer. Conversion and restoration will occur in 2 to 4 years, once the land has been fully paid for. The project is guided by a Technical Advisory Committee which meets monthly. Although this report marks the end of Project Number UCAL-WRC-W-801, the project is ongoing. Our long term goal is the investigation of linkages between different farm management practices and health of the adjacent slough, as monitored by sedimentation, input of anthropogenic chemicals, water quality, and the response of wetlands flora and fauna. In the future, we will implement and test alternative farming practices that lessen or eliminate the dependence on synthetic chemical inputs. We will also be able to assess the influence of border zones at the land-water margin as buffers between agricultural uplands and estuarine receiving waters. 
The lead author recently submitted a proposal to the UC Water Resources Center entitled “Evaluating Vegetated Buffer Zones Between Commercial Strawberry Fields and the Elkhorn Slough Estuary.”

Soil water and nitrate movement through the surface soil were studied using porous-cup lysimeters. In the first year, twelve lysimeters were installed in the Central Field and six in the grassland control site at the Elkhorn Slough NERR.

Lysimeters were placed in pairs at one-foot and two-foot depths to sample the root zone and below the principal root zone. In the crop field, three pairs were placed low on the slope and three pairs higher up the slope. In the grassland, all three pairs were placed at a similar slope position. First-year results showed a great deal of variation in nitrate-nitrogen levels in strawberry bed soil water. It was not possible to determine the direction of movement or any strong response to seasonality. Furthermore, we found that surface runoff was extremely significant in nutrient loading into the pocket marshes. We also became interested in focusing on the vegetated borders between the pocket marshes and the cropland as a potential means to reduce the soil-water nutrient content before it entered the marshes. Therefore, in year two, the lysimeters were moved to the uplands/wetlands interface, at the border of the cropped area. Stations consisting of four lysimeters placed at the two-foot depth, one meter apart from one another, were placed at four locations in the field. Two stations were placed at the bottom of roadways, where surface flow is concentrated, and two stations were placed where there were no roads and surface flow was minimized. During sampling times, vacuums were drawn using a hand pump and left overnight. Water samples were collected the next day, and nitrate-N levels were measured the same day in the lab using a selective nitrate ion probe. Subsamples were frozen and later analyzed for nitrate-N using a spectrophotometer. Samples were collected approximately monthly from January to June 1993, with a final sample in August. During 1994, samples were collected twice monthly, from January to April. These 1994 samples were also analyzed for ammonium-nitrogen and phosphorus content. In 1993, grassland soils dried up by mid-June, so no samples could be collected.
Grassland samples were not collected in 1994.

The pocket marshes on the Azevedo property are separated from the slough by a railway dike, and water exchanges between the slough and the pocket marshes through culverts under the dike.

We call the marshes South Marsh, Central Marsh, and North Marsh. The heights of the culverts vary, and there is a gradient of flushing and size, with the North Marsh being the largest and best flushed and the South Marsh being small and having little if any connection to the tidal waters. The Central Marsh is intermediate in size and flushing, with some input from a perched freshwater pond at the upland end of the marsh. The pocket marshes were added as sample sites to a hydrological monitoring program that has been sampling surface water at 21 stations around the slough since 1988. Once each month we monitored water temperature, salinity, oxygen, turbidity (using a nephelometer), pH, phosphate, nitrate, and ammonium. Water in the pocket marshes was sampled from the 0.5 m depth without disturbing sediments. Water chemistry analyses were conducted by the Monterey County Water Resources Agency. Hydrologists developed contour maps of the marshes which established limits of pickle weed and upper limits for the potential restoration of tidal action. Characterization of the vegetation entailed the use of line-intersect, point-intersect, and quadrat methods. The sampling was stratified based on vegetation patterns; vegetation was sampled in the upper mudflat zone and the mid-pickle weed zone. During one-hour observation periods, all waterbirds present were counted using binoculars and a 10x spotting scope. Birds were observed from the railroad berm at five to eight feet above marsh level and 10 to 100 meters distant. Detailed results are reported in a senior thesis by Neuman and Hickey.

Although strawberries are a perennial plant, they are treated as an annual in most coastal California strawberry growing areas. Because of the marginal nature of Elkhorn Slough soils for farming, strawberry culture in the region is generally done on a two-year cycle in order to avoid the expense of replanting every year.
Field preparation begins in September, when irrigation systems are removed and fields are ripped and chiseled in preparation for whole-field fumigation with a mix of methyl bromide and chloropicrin. Fumigation destroys soil pathogens, weed seeds, and most soil biota. After fumigation, beds and furrows are formed and planting in raised beds begins in mid-October or November. Varieties used were Selva and Seascape. Drip irrigation systems are installed and plants are irrigated as necessary. Harvest begins in March/April and continues until fall. Plants left in the ground for a second harvest season yield a berry that is smaller and softer, and these fruits are generally not fresh-marketed but used for processing. The Azevedo Ranch is divided into two agricultural leases. Mr. C. directly leases the South Field, and Mr. S. subleases the Central and North Fields from a shipping firm which also functions as a lender. Both leases were planted in fall 1991, including fumigation, and again in fall 1993. Mr. C. leases 30 acres and has leased that plot for 7 years.

Mr. S. farms a total of 64 acres, with 10 acres planted with cv. Seascape and the remainder in cv. Selva. Mr. S. farms a total of 357 acres of strawberries in the Elkhorn Slough watershed, and this was his first crop on the Azevedo property. Prior to and during the harvest season, applications of insecticides, miticides, and fungicides are made on a regular basis, in a cycle sometimes called pick-and-spray. Harvest crews work from one end of the field to the other, and sometimes they are still finishing up picking when the spray crew starts its work at the opposite end of the field. Mr. C. utilizes a tractor to apply pesticides, which is more typical of practices in the watershed, while Mr. S. uses farm laborers to apply pesticides by hand using hoses connected to a tank truck. The analysis of fertilizer applied on the North and Central Fields at plant-out in 1993 was 18% nitrogen, 8% available phosphorus, and 13% soluble potassium. Mr. S. applies this material by hand to the surface of the beds, down the center, while Mr. C. drills the material into a slot in the center of the beds so that it is buried. In January 1993, Mr. S. applied granular 6-20-20 by hand to the top of the strawberry beds. In January 1994 he applied ammonium sulfate in the same fashion, at a rate of about 80 pounds per acre. Both growers apply nitrogen through the drip irrigation system during the summer.

The Elkhorn Slough area is geologically very active and has been subject to large-scale changes over geologic time. Hydrologically, the area may have been a flood plain for one of the largest rivers in Pleistocene California. More recently, the area has been subject to continued changes in land-use patterns and vegetation cover due to human influences. The Azevedo Ranch lies within the Salinian Composite Terrane, which is bounded by the San Andreas Fault to the east and the Sur-Nacimiento Fault Zone to the west.
It is unrelated to contiguous terranes, suggesting that it has migrated from its presumed origin 1,500 km to the south. The basement rocks consist of quartz diorite-granodiorite, Precambrian to late Mesozoic in age. In the early Pleistocene, the lower reaches of Elkhorn Slough and Elkhorn Valley appear to have been part of a large riverine system draining the Santa Clara Valley and/or the California Central Valley. In the late Pleistocene, the watershed area for the Elkhorn Valley was tectonically truncated, substantially reducing the volume of water moving through to the ocean and limiting the flushing and scouring of Elkhorn Valley. During the most recent glaciation event, 16,000-18,000 years B.P., a channel over 29 m below present-day sea level was cut through the slough. As global temperatures rose and sea level began to rise, this channel was flooded. The sediments in the slough become finer in texture toward the top of the sediment layers, away from the base where non-marine gravels dominate. From about 5,000 years B.P. until 1946, salt marshes developed along the landward margins of the slough. These marshes reduced the energy of the water, allowing further sedimentation and the development of marsh vegetation toward the axis of the slough. This is the process by which the Azevedo Ranch pocket marshes were created. Until 1946 the slough was a shallow, quiet-water embayment with restricted tidal action. In 1947 Moss Landing Harbor was built, opening the Elkhorn Slough to direct tidal action. The effect was rapid and dramatic, and today erosion of wetland habitat in the slough continues to be a major concern.

There are three main land formations from which the major soil types have been derived: aeolian or colluvial Aromas Red Sands, wave-cut terraces, and alluvial sands, silts, and clays. The mapped soils and their classifications are listed in Table 1.
Several soil pits made in the terrace and Central Field suggest much greater diversity in soil types and origins on the Azevedo Ranch. Many seeps have been observed, where groundwater surfaces through soil discontinuities or is forced to the surface by impermeable boundaries. Furthermore, a thick clay layer devoid of sands was found on the slopes of the marshes, suggesting lake-deposited clays in a time of slower-moving water. The discontinuity of the marine terrace sandstone indicates that it has been eroded by water draining from the uplands. The Alviso series is alluvial, consisting of fine texture sizes. These soils have a great deal of organic matter and, unless artificially drained, can be almost completely anaerobic below the soil surface. They are dominated by the wetland plants Salicornia virginica and Distichlis spicata. This soil encompasses the pocket marshes and their margins, including the area presently farmed. These are considered to be relatively young soils. The soil survey map for the Azevedo Ranch shows Arbuckle gravelly loam on the terrace between the North and Central Fields.

Subsequent studies have largely confirmed these initial estimates.

Unlike industrial systems, agricultural systems are subject to the influence of weather patterns, soil type, geography, and management practices. Even the same agricultural product may have drastically different input structures, and hence environmental impacts, in different regions. Therefore, average data with generic descriptions of material and energy fluxes are hardly adequate to capture the high degree of system variability of agricultural products. With the rising interest in bio-fuels as a means to combat climate change across the world, we strongly recommend that future studies in this area take into consideration the spatial variability of biomass growth. Just as technological and environmental variability exists across states, there is probably some variability within a state, too, that may not be precisely captured by state average data. This does not mean, however, that state-level data should be dismissed for the research question at hand, because they are still likely more reflective of local or farm-level practices than national averages. In addition, state average data are especially valuable and representative, more so than farm-level data, in situations in which massive land shift between crops takes place within a state. Nevertheless, we encourage finer-scale, more detailed studies into land shift between cotton and corn and the associated environmental impacts, which could not be conducted in our analysis due to data limitations and resource constraints.

Additional research is needed to paint a more complete picture of the impact of cropland conversion to corn. In 2005, 41 states grew corn and 17 states grew cotton, among which only 19 of the corn-growing states and 7 of the cotton-growing states had data on major inputs that can be used to generate LCIs. Among these states, only three overlap, namely North Carolina, Georgia, and Texas.
Therefore, this study does not quantify the environmental impacts and their trade-offs in other cotton-growing states where conversion to corn might have happened. Nevertheless, the environmental implications of cotton-to-corn land shift in these other states are probably worse than those indicated by Fig. 2.1 and closer to those indicated in Fig. 2.2, because cropland in southern states is generally less suitable for corn growth than the Corn Belt.

Future studies pursuing this line of research may make the effort to quantify the magnitude of land shift in each cotton-growing state when relevant data on agricultural inputs, environmental outputs, and acreage of conversion become available. Furthermore, it is worth noting that spatially detailed data are often unavailable or incomplete, although such data can improve the environmental relevance of an LCA study. In this case, one may rely on assumptions or spatially generic data to fill the gaps, and this may increase the uncertainty of the LCA results. In our study, data on agricultural inputs such as fertilizers and pesticides were available at the state level, but we often relied on spatially generic emission factors to estimate their emissions. Further, the LCA results for corn and cotton were found to be moderately sensitive to the emission factors, which are likely to vary across regions. Future spatially explicit LCAs on agricultural systems may take this into account and direct efforts to estimate spatially differentiated emission factors.

For its potential to mitigate climate change, reduce dependence on oil imports, and invigorate rural economic development, bio-fuel development in the USA has been supported by an array of policy measures. Among them is the federal Renewable Fuel Standard, a mandate that requires 140 billion liters of bio-fuels to be produced annually from different sources by 2022. Corn ethanol is currently the primary bio-fuel and is likely to continue dominating the US bio-fuels market, as cellulosic and other advanced bio-fuels are far from mass production. Driven by favorable policies and high oil prices, corn ethanol production has increased eight-fold since 2000, to the current level of about 50 billion liters per year. Early Life Cycle Assessment research on corn ethanol was largely in support of the policies aiming partly at reducing greenhouse gas emissions.
As is typically done in LCA, these studies quantified the GHG emissions generated at each stage of corn ethanol's life cycle, summed them up, and then compared the results against those of gasoline. Corn ethanol was found to have 10-20 % lower life cycle GHG emissions than gasoline and was therefore concluded to provide a modest carbon benefit in replacing gasoline. However, the conclusion was later called into question when the land use change effects of corn ethanol expansion emerged in the literature. Converting natural vegetation or forestland to corn fields for ethanol production releases a substantial amount of carbon from soil and plant biomass, creating a “carbon debt” that could not be repaid for decades or even a century. Similarly, diversion of existing cropland for ethanol could generate an indirect LUC effect through market-mediated mechanisms. In this scenario, corn ethanol expansion reduces food supply, which could lead to conversion of natural vegetation or forestland elsewhere in the world to compensate for the diverted grains.
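The stage-wise accounting works like a simple ledger: sum emissions over the fuel's life cycle stages, with co-product credits as negative entries, and compare the total with gasoline's life cycle value. A minimal sketch, with placeholder numbers chosen only to land in the 10-20 % range reported by the early studies, not the studies' actual figures:

```python
# Illustrative stage-wise GHG accounting, in g CO2e per MJ of fuel.
# All values are placeholders for the shape of the calculation.
ethanol_stages = {
    "corn farming": 40.0,
    "ethanol conversion": 38.0,
    "transport/distribution": 5.0,
    "co-product credit": -5.0,   # e.g., distillers grains displacing feed
}
gasoline_lifecycle = 94.0

ethanol_total = sum(ethanol_stages.values())
reduction = 1 - ethanol_total / gasoline_lifecycle
print(f"{ethanol_total:.0f} g CO2e/MJ, {reduction:.0%} below gasoline")
```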

While the concept of iLUC has become widely accepted in academic and policy arenas, quantification of iLUC emissions is known to be difficult and highly uncertain. Plevin et al., for example, estimated a range from 10 to 340 CO2e MJ−1 y−1. This wide range is due in large part to a lack of quality data and detailed understanding of how the global agricultural market would respond to bio-fuels expansion. In contrast, direct land use change emissions can be relatively accurately quantified. Previous studies used the concept of carbon payback time to measure the magnitude of the dLUC effect of corn ethanol. While the initial carbon debt due to land conversion may be large, it can be repaid over time by the annual carbon savings corn ethanol yields in displacing gasoline, because corn ethanol has lower life cycle GHG emissions. The first dLUC study estimated that 48 years would be required for corn ethanol to pay back its carbon debt if Conservation Reserve Program land is converted, and 93 years if central grassland is converted.

Gelfand et al. conducted a field experiment on CRP land conversion to measure its carbon loss. They found that approximately 40 years would be required for the use of corn ethanol to pay back this carbon loss with the converted land under no-till management. In another study, Piñeiro et al. arrived at a similar estimate of approximately 40 years for the payback time for CRP land conversion to corn ethanol. However, these studies were based on several oversimplifications that may substantially affect their results. First, these studies assumed that newly converted land has the same yield as existing cornfields, neglecting the potential yield differences of newly converted land. In particular, CRP land is generally less fertile than cornfields that have been in continuous use. Thus, corn ethanol from CRP land generates lower annual carbon savings and hence a longer payback time.
Land with extremely low yield may even fail to provide any carbon savings, in which case the carbon debt due to land conversion is never repaid. Second, the dLUC studies relied primarily on life cycle assessments based on early bio-fuel conversion processes that did not reflect the productivity improvements that have occurred in the past decade due to yield and energy efficiency increases at both the corn growing and ethanol conversion stages. Recent studies have shown that corn ethanol's carbon benefit has increased to up to 50 %, compared with earlier estimates of 10-20 %. The productivity of the gasoline production system over the same period has been fairly steady. The productivity improvements in the corn ethanol system result in greater annual carbon savings that, if considered, would yield a shorter payback time than previously estimated. Finally, the dLUC studies used the 100-year global warming potential to assess the global warming impact of corn ethanol, gasoline, and dLUC emissions. This approach assigns equal weights to GHGs emitted at different times. More recent literature explores the application of different weights to GHG emissions emitted at different times. First, from a scientific point of view, increasing background GHG concentrations in the atmosphere result in a diminishing marginal radiative forcing for a unit GHG emission. The rate at which the relative radiative forcing effect of a unit GHG emission diminishes depends on future atmospheric GHG concentrations.

Reisinger et al., for example, estimated that the 100-year Absolute Global Warming Potential of CO2 from 2000 to 2100 could decrease by 2 to 36 % under various GHG concentration scenarios. Second, a series of articles have attempted to synchronize the temporal system boundary under which life cycle emissions are taken into account with the time horizon under which characterization factors are derived. For example, if GWP100 is to be used, one can set the temporal system boundary to the next 100 years and account for the radiative forcing effects that occur within that time horizon. One of the rationales is that efforts to reduce GHG emissions today are perhaps more valuable than those in the future, because climate change may bring about irreversible damage to the planet. In this class of literature, a simple climate-carbon cycle model like the Bern model, or a simple first-order decay model, is used to calculate the atmospheric load of GHGs over time and the corresponding radiative forcing. Background concentrations of GHGs are, however, generally assumed to be constant in this literature. Third, some argue that future climate change impacts should be discounted at certain rates using the net present value approach. These approaches use different rationales and involve varying degrees of subjectivity in, e.g., the choice of emission scenarios and discount rates. For the sake of simplicity, however, these approaches are collectively referred to as the dynamic characterization method in this paper. The objective of this study is to re-examine corn ethanol's CPT, taking into account the potential yield differences of converted land and technological advances within the corn ethanol system. We also examine how dynamic characterization of GHG emissions changes the CPT, using one particular approach as an example.
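The fixed temporal system boundary idea can be illustrated for a gas with first-order atmospheric decay (CO2 itself requires a climate-carbon cycle model such as the Bern model, as noted above, so this sketch uses a single-lifetime gas; the function name and all numbers are illustrative):

```python
import math

def agwp_pulse(t_emit, horizon=100.0, lifetime=12.0, re=1.0):
    """Warming contribution, within a fixed temporal boundary [0, horizon],
    of a unit pulse emitted at t_emit, for a gas with first-order decay.
    Analytic integral of re * exp(-t / lifetime) from t_emit to horizon."""
    remaining = horizon - t_emit
    if remaining <= 0:
        return 0.0  # pulse falls entirely outside the boundary
    return re * lifetime * (1 - math.exp(-remaining / lifetime))

# A pulse emitted later contributes less within the same 100-year boundary,
# which is how this approach weights emissions by their timing:
early = agwp_pulse(0.0)
late = agwp_pulse(50.0)
print(early > late)  # True
```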
We focus on conversion of CRP land primarily for ease of comparison with previous studies, and also because there is evidence indicating that conversion of CRP land to cornfield has occurred with the expansion of corn production in the past decade. We start by estimating the amount of annual carbon savings that can be generated by corn ethanol from an average cornfield and how that amount changes over time. For this analysis, we use the Bio-fuel Analysis Meta-Model with several modifications. Specifically, because the base year of EBAMM is 2001, we incorporate into the model historical data on the process inputs and outputs of corn growth and ethanol conversion for 2005 and 2010 to reflect the system's productivity improvements over the past decade. We project further productivity improvements to 2020 using projections in the Greenhouse gases, Regulated Emissions and Energy use in Transportation model. We assume that technological advancement stabilizes after 2020. Detailed information is provided in Appendix B. We then incorporate yield differences into the model to approximate the amount of annual carbon savings that the CRP-corn ethanol system provides. The CRP program, established by the Food Security Act of 1985, is intended to retire highly erodible and environmentally sensitive cropland from production. Because highly erodible land is less productive in general, the program indirectly enrolls land with lower productivity. Additionally, due in part to the early payment scheme based on maximum acceptable rental rates, farmers tended to offer their low-quality land for CRP consideration while retaining productive land for continuous cultivation. As a result, CRP land appears less productive than other types of cropland, including land that shifts into or out of cultivated cropland from other, less intensive uses.
Direct measurements of crop yield on CRP land are scarce, but measurements of crop yield on marginal land, including CRP and shifting land, can be used as indications of the relative yield differences between CRP land and average croplands .
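The carbon payback time arithmetic described above reduces to dividing a one-time conversion debt by the annual savings, with the savings scaled down by the converted land's relative yield. A minimal sketch with placeholder magnitudes (the function and numbers are illustrative, not EBAMM outputs):

```python
def payback_time_years(carbon_debt, base_annual_saving, yield_ratio):
    """Years to repay the land-conversion carbon debt, with annual
    savings scaled by the converted land's relative yield.

    carbon_debt        : one-time carbon loss from conversion (Mg C/ha)
    base_annual_saving : annual carbon saving of ethanol from an
                         average cornfield (Mg C/ha/yr)
    yield_ratio        : converted-land yield / average-cornfield yield
    """
    saving = base_annual_saving * yield_ratio
    if saving <= 0:
        return float("inf")  # the debt is never repaid
    return carbon_debt / saving

# placeholder numbers, for the shape of the calculation only
debt, base = 40.0, 1.0
print(payback_time_years(debt, base, 1.00))  # 40.0 years at average yield
print(payback_time_years(debt, base, 0.80))  # 50.0 years on less fertile land
print(payback_time_years(debt, base, 0.00))  # inf: no savings at zero yield
```

This also captures the two corrections the study makes: a higher `base_annual_saving` (productivity improvements) shortens the payback time, while a `yield_ratio` below one (less fertile CRP land) lengthens it.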