There is now strong evidence that the Earth’s climate is changing due to human activities

Modern nanochemistry has developed efficient techniques to manipulate nanoscale objects with a high degree of control. Chemically engineered nanoparticles can be synthesized with a large choice of sizes, shapes, constituent materials, and surface coatings, and can be further assembled into self-assembled structures, either spontaneously or in a directed manner. Advances in particle self-assembly and the quasi-unlimited range of nanostructures with controlled architectures and functions suggest that such assemblies may also provide a simple route to metamaterials at infrared and visible length scales. Indeed, nanochemistry and self-assembly strategies can inexpensively produce materials whose inner structure is natively in the right range of sizes for optical and infrared applications and can provide fully three-dimensional structures, thus opening the way to the fabrication of 3D metamaterial samples of finite volume, of the highest importance to many applications. Such metamaterials may be used, for example, to create 3D homogeneous, isotropic negative-index materials (NIMs) with simultaneously negative permittivity and magnetic permeability, cloaking devices, or light-based circuits that manipulate local optical electric fields rather than the flow of electrons. In this work we investigate certain electromagnetic properties of metamaterials formed by densely arrayed clusters of plasmonic nanoparticles, which will be referred to as nanoclusters. Nanoclusters are formed by a number of metal nanocolloids attached to a dielectric core, as in the examples shown in Fig. 1, and can be readily realized and assembled by current state-of-the-art nanochemistry techniques.

This kind of structure generalizes the concept of nanorings originally proposed in [2] to realize a magnetic medium at visible frequencies and has recently been shown in [3] to have the potential to provide resonant isotropic optical magnetism. An approximate model based on the single-dipole approach, in conjunction with the multipole expansion of the scattered field, is used here to evaluate the electric and magnetic polarizabilities of the nanocluster. The permittivity and permeability of the composite medium are then estimated with the Maxwell Garnett homogenization model. Results obtained by this approximate method are compared with data from full-wave simulations, focusing on the characterization of the nanocluster's resonant isotropic electric and magnetic responses to an incident wave field and on the possibility of realizing an isotropic NIM at optical frequencies.

Extreme droughts are increasing in frequency, severity, and duration in arid and semiarid regions around the world due to climate change. As a result, plant species that are typically capable of withstanding regular drought stress are exposed to conditions outside of their normal range, rendering them susceptible to opportunistic disease-causing agents. Theoretical frameworks describing the roles of environmental and biotic stressors in driving plant mortality are well established. However, there is a lack of empirical data with which to resolve how these factors interact in vivo. Furthermore, studies that document the progression of stress and dieback throughout the course of a multi-year drought event in situ are rare. In this dissertation, I detail a series of studies aimed at understanding mechanisms of dieback and mortality by focusing on a severe canopy dieback event in a classically drought-tolerant chaparral shrub, big berry manzanita (Arctostaphylos glauca), in Santa Barbara, California, during an historic California drought.

I provide strong evidence that dieback is caused by members of the fungal family Botryosphaeriaceae in conjunction with extreme drought, and that dieback is also related to increased drought stress along an elevational gradient. By conducting a field survey, I identify Neofusicoccum australe as the most prevalent and widely distributed fungal pathogen in A. glauca, and show that dieback is strongly correlated with Bot. infection. Using a full-factorial design in a greenhouse experiment, I provide evidence that extreme drought and infection by N. australe can indeed act synergistically, together driving faster and greater mortality in young A. glauca than either factor alone. Lastly, by taking measurements of water availability, dark-adapted leaf fluorescence, and photosynthesis in A. glauca shrubs across an elevational gradient, I provide evidence that landscape-level factors can contribute to localized variability in water stress and canopy dieback severity in A. glauca and may be useful in predicting vulnerabilities during future drought. Remarkably, no new mortality was observed throughout the study, suggesting extreme resiliency in adult shrubs. However, canopy dieback alone can impact wildlife and fuel loads, even when not associated with mortality. Together, these results provide strong evidence that A. glauca dieback was caused by synergistic effects between extreme drought and infection by N. australe, and that lower elevations and exposed slopes may be at greatest risk during future events.

According to the most conservative estimates, global mean annual temperatures are now outside the historic range of the last 1,300 years. Simultaneously, mean annual precipitation has declined in many parts of the Northern Hemisphere, resulting in increased drought events. Extreme climatic shifts are predicted to affect, both directly and indirectly, biogeochemical cycling, energy fluxes, wildlife habitat, and ecosystem goods and services on a global scale. An important component of preparing for the effects of these events is understanding how communities will change in response to them, making this a critical topic for ecological research. For species to survive in dry climates, they must have evolved drought tolerance mechanisms.

However, extreme climate events can expose species that are typically capable of withstanding regular drought stress to conditions outside of their normal range. Furthermore, physiological responses to extreme drought can also have a negative feedback on plants’ defensive abilities, rendering them susceptible to biotic attack, including by insects or disease agents. Consequently, synergies between extreme climatic events and biotic attack will likely lead to more dramatic changes than would otherwise occur in historically “drought tolerant” plant communities. Future climate change is expected to exacerbate these interactions worldwide. Widespread tree mortality from drought has been documented in forested systems around the world, and biotic attack has been associated with many of these events. However, much less focus has historically been given to understanding the consequences of extreme drought on shrubland communities like chaparral, particularly in conjunction with biotic influences. Therefore, as we face predictions of hotter, longer, and more frequent drought, it is becoming increasingly critical to home in on the mechanisms, tipping points, and ecosystem impacts of these events. Furthermore, identifying plant mortality thresholds is of utmost importance for predicting susceptibility to extreme drought events of the future.

California recently experienced a record-breaking, multi-year drought from 2012–2018, estimated to be the most severe event in the last 1,000 years, with the 2013–2014 winter season being one of the driest on record. Drought tolerance has long been considered a common trait of shrub species in California chaparral communities, where hot, rainless summers are the norm. However, in the Santa Ynez mountain range in Santa Barbara County, the dominant and widespread big berry manzanita (A. glauca) exhibited dramatic dieback related to the multi-year drought along with infection by opportunistic fungal pathogens in the Botryosphaeriaceae. These observations indicate that this species may be reaching a threshold in its drought resistance capabilities. Studies have reported Arctostaphylos spp. to exhibit unusual scales of dieback during periods of extreme drought stress; however, this could be the most severe dieback event in recent history, both in terms of scale and severity.

Manzanita are important members of the chaparral ecosystem, providing habitat for wildlife and food through their nectar and berries. Additionally, their structure makes them important components of historical chaparral fire regimes, and their fire-induced germination strategies contribute to post-fire successional trajectories. Large-scale mortality of this species could reduce resource availability for wildlife, as well as alter fuel composition and structure in the region, resulting in an increased risk of more intense, faster-burning fires. Therefore, the potential continued dieback of A. glauca is of great concern for both ecosystem functions and human populations alike.

Significant dieback of A. glauca in Santa Barbara County, California, was first observed in winter 2014. Preliminary observations revealed patterns of dieback occurring along an elevational gradient, with effects most pronounced at lower elevations. It was also observed that dieback was most prevalent in stands located on steep, exposed, southerly-facing slopes. These observations are consistent with findings from previous studies. Since A. glauca is classically drought-tolerant and able to function at very low water potentials, this raises the question: what is driving this extreme dieback event?

Could A. glauca be reaching a tipping point as a result of extreme drought stress, the presence of a fungal pathogen, or both? My dissertation research focuses broadly on the influence of drought and fungal pathogens on this classic, drought-tolerant chaparral shrub species. Through a combination of methods, I explore the individual and interacting roles of water stress and opportunistic fungal pathogens in a major A. glauca dieback event, and track the fate of individual shrubs through the progression of an historic drought. My findings are organized into three chapters based on the following questions: What are the identities and distribution of fungal pathogens associated with A. glauca dieback? How do drought stress and fungal infection interact to promote dieback and mortality in A. glauca? And how does A. glauca dieback progress over time during drought, and how do landscape variables and drought stress correlate with dieback?

In Chapter 2, I identify fungal pathogens in A. glauca and discuss their distribution across the landscape in the Santa Barbara County front country region. Based on preliminary findings showing significant levels of N. australe in the field, I expected to find a high incidence of this opportunistic pathogen in A. glauca across the landscape, suggesting its role in drought-related dieback. The data support this prediction, as over half of the pathogens isolated were members of the Bot. family, and the majority of these were identified as N. australe, a novel pathogen in the region. Furthermore, Bot. infection was highly correlated with dieback severity, which was greatest at lower elevations. Taken together, the results show that opportunistic Bot. pathogens, particularly N. australe, are highly associated with A. glauca dieback across the landscape, and that lower elevations may be particularly vulnerable.

In Chapter 3, I address the hypothesis that extreme drought and N. australe function synergistically to promote faster and greater mortality than either factor alone. I designed a full-factorial greenhouse experiment to identify whether A. glauca dieback is driven by extreme drought, infection by the fungal pathogen, or both. The results of this experiment support my hypothesis. Young A. glauca inoculated with N. australe while simultaneously exposed to extreme water stress exhibited faster stress symptom onset, faster mortality, and overall higher mortality than those subjected to either factor alone. These results provide strong evidence that the severe A. glauca dieback event observed during the 2012–2018 drought was the result of synergistic interactions between extreme drought and opportunistic pathogens, rather than the nature of the drought or particularly virulent pathogens.

In Chapter 4, I explore factors associated with climatic stress in order to draw correlations between A. glauca stress and dieback severity. Identifying such relationships can be useful in making predictions about dieback and mortality across the landscape. By analyzing data on predawn xylem pressure potentials and net photosynthesis in shrubs along an elevational gradient, I found that patterns of water availability and physiological function both varied greatly across the landscape and only weakly correlated with dieback severity, suggesting that factors other than elevation and aspect must also be important in driving plant stress and dieback.
Extreme heterogeneity across this landscape likely confounded my results, yet it may also play an important role in supporting the resiliency of A. glauca populations as a whole. By measuring the progression of dieback in these same shrubs over time, I found that dieback severity throughout the drought increased most at lower elevations compared to higher ones, providing evidence that shrubs at lower elevations may be particularly vulnerable. Unexpectedly, no new mortality was observed in surveyed shrubs as the drought progressed, even though many plants exhibited severe levels of dieback throughout the study. This result shows that high levels of dieback severity do not necessarily predict mortality in A. glauca. In summary, my dissertation provides strong evidence that A. glauca dieback during the recent California drought was caused by synergistic interactions between extreme drought stress and infection by the widely distributed opportunistic fungal pathogen N. australe.

The method and analysis for the study area could be extended to include net metering

Analysis in the following sections expands on previous work by including consideration of recharge water from reservoir reoperation, evaluating recharge water sourcing, cropland characteristics, and groundwater hydrology for a site-specific setting, and demonstrating a hydro-economic optimization approach that simulates separate decisions for land access and water delivery in the performance of Ag-MAR.

The regional-scale analysis is conducted for a semi-arid part of California, USA, with conditions fairly common to many parts of the globe. The two groundwater sub-basins in the study area are part of the much larger Central Valley groundwater system, an interfingered assemblage of alluvial and flood-basin deposits with a local maximum depth exceeding 1,000 ft. Many of the sub-basin boundaries shown in Fig. 2a are arbitrarily based on surface-water features, and the southern boundary has recently been adjusted northward to accommodate governance considerations for current groundwater management efforts. The 525,000-acre study area has a mix of urban, agricultural, wetland, and undeveloped rangeland land uses. Over 90% of the total water use in the study area is supplied by groundwater. Moreover, approximately 41% of the agricultural acreage is planted as vineyards and orchards. This investment in perennial crops hardens water demand and intensifies groundwater extraction during droughts. The spatial distribution of recent water levels indicates localized depressions from extractions far exceeding groundwater recharge. Groundwater levels have dropped as much as 60 ft over the past several decades, so that surface water frequently becomes disconnected from saturated groundwater and drains into the subsurface. The lower reaches of the Cosumnes River, in the central part of the study area, are dry 85% of the time.

New regulations for sustainable groundwater management in California require that this chronic lowering of groundwater levels and depletion of storage be addressed through active measures. While restoration of surface-water base flow in the study area may not be required, because the impact occurred before implementation of the regulations, there is interest in maintaining, and possibly improving, groundwater support of surface-water flows. Consistent with recent analysis, local stakeholders are interested in harvesting runoff from high-precipitation events for recharging groundwater. One option, reoperation of Folsom Reservoir to release extra water in advance of significant rain events, could alone achieve a potentially significant amount of aquifer recharge using some of the 140,000 ac of croplands in the study area. This work presents a planning-level analysis of what might be possible. While infrastructure construction costs are not considered, the results of this work might encourage further evaluation of necessary investments.

A retrospective analysis is conducted to evaluate the range of improvements in groundwater system state that might have occurred in the study area from an Ag-MAR recharge program. Recharge water comes from simulated reoperation of Folsom Reservoir with delivery through the Folsom South Canal, consistent with capacity limitations, over a 20-year period covering water years 1984 through 2003. The timing and amounts of surface water delivered to croplands for recharge application are prescribed by a linear programming model that combines available information regarding surface-water and groundwater hydrology with the spatial distribution of croplands. Groundwater recharge is simulated with a groundwater/surface-water model that incorporates existing land uses, surface-water deliveries, and groundwater demands over the period considered. This analysis applies a formulation of simulation-optimization to MAR.

Previous work includes Mushtaq et al., who simulated unsaturated flow from individual recharge basins and applied nonlinear programming to identify optimal loading schedules for maximizing recharge volume. Marques et al. included decisions for recharge area allocation and water volume application as part of a two-stage quadratic programming analysis that maximized crop profits. Hao et al. used a genetic algorithm to maximize recharge volume while meeting constraints on groundwater elevations. To the best of the authors’ knowledge, the approach presented here is new in that it combines elements of recharge basin and groundwater hydraulics with economic considerations at a regional scale. The foundation of the linear programming approach is based on the study area hydrology, which is adapted to include economic considerations regarding land use. A hydrologic formulation is presented as an explanatory step in developing the full hydro-economic formulation.

The formulation objective, Eq. , maximizes the volume of water recharged over the planning horizon subject to a set of operational constraints. The total volume of water recharged in any period t cannot exceed the water available for recharge (WAR), which is derived from a reoperation of Folsom Reservoir to provide additional water during November through March each year. The reoperation is performed by maximizing reservoir releases during the aforementioned months while maintaining expected levels of service for flood control, water supply, and hydropower generation. The levels of service are maintained with a set of optimization constraints that include downstream requirements for minimum environmental flows and water supply as well as the reservoir operation rule curve. The analysis is based on a perfect-foresight formulation, which provides an upper bound for recharge water available from the reservoir. A static upper bound on the volume of water recharged at a particular location is based on local infiltration capacity and field berm height through an analytical ponding and drainage model described in the Appendix. Additional equations dynamically constrain the magnitude of recharge decisions through a cap on groundwater elevation to avoid water-logging of soil. This constraint is tied to the buildup and redistribution of recharged water as a result of groundwater flow and is described further in the Appendix.
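
Because the equation references above were lost in extraction, the following is a hedged reconstruction of the hydrologic formulation as described in the text, not the authors' exact notation. Symbols are introduced here for illustration: R_{i,t} is the recharge volume applied at cropland location i in period t, WAR_t is the water available for recharge, R_i^max is the static infiltration/berm-height limit, and h_{i,t}(R) is the resulting groundwater elevation with cap h_i^max. The last line corresponds to the statement below that negative recharge decisions are prevented.

```latex
\begin{align*}
\max_{R}\quad & \sum_{t=1}^{T}\sum_{i=1}^{N} R_{i,t} \\
\text{s.t.}\quad & \sum_{i=1}^{N} R_{i,t} \le \mathrm{WAR}_t
    && \forall t \ \text{(water available for recharge)} \\
& R_{i,t} \le R_i^{\max}
    && \forall i,t \ \text{(infiltration capacity and berm height)} \\
& h_{i,t}(R) \le h_i^{\max}
    && \forall i,t \ \text{(cap on groundwater elevation)} \\
& R_{i,t} \ge 0
    && \forall i,t \ \text{(non-negative recharge decisions)}
\end{align*}
```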

Negative recharge decisions are prevented with Eq. .

Cropland area use for recharge as a function of funding is presented in Fig. 10a,b. These are the results of parametric analysis using Eq. . Differences between the results for the hydro-economic analyses and the reference curves occur because, as indicated by the curves for individual crop categories, some of the more expensive land is brought into use before all of the least expensive land has been used. This result is driven by variation in infiltration rate across the study area, which is controlled by the shallow geology and the interconnectedness of high-conductivity sediments at depth used in the ponding model of Eq. . Figure 11 shows the spatial distribution of land use for two different levels of funding. For low amounts of funding, land is brought into use where there is a combination of cheaper land and higher infiltration rates, in an effort to maximize the product of decision variables RA and D. This observation is consistent with the steep slope of recharge volume as a function of funding for land use at low funding levels. The spatial distribution of cumulative recharge water depth per year is presented for the maximum funding and land use in Fig. 13. The values are generally within a reasonable range based on currently available information on crop inundation tolerance; however, constraints could be added to control cumulative water application as necessary. Figure 14 indicates the increase in groundwater storage from recharge using all of the cropland. Recharging over the 20-year planning period used 36% of the WAR. Simulation of the optimal recharge scenario with the groundwater model indicates that most of the water remains in the groundwater system; however, appreciable amounts exit to surface water or flow across sub-basin boundaries. Additionally, the recharge provides enough base flow to support flow in the Cosumnes River throughout the 20-year simulation except during a 5-year drought from 1987 through 1992. Table 2 presents results for a range of recharge funding levels. Volumes discharging to surface water and flowing to other sub-basins increase with the volume recharged, since head buildup from adding water to the system is more pronounced. Comparison of the recharge volume results from the hydro-economic analysis for cost set No. 1 with reference curves from the initial capture analysis indicates the effect of including study area hydrogeology in the analysis. High-infiltration-rate sites are selected preferentially, even when the amount of recharge area is limited by funding, and plot on the left side of the hydro-economic curve. These sites drain quickly, and the results plot above the reference curves. Only a few such sites are within the footprint of the cropland and, when greater amounts of land are used for recharge, the additional sites drain more slowly and plot below one or both of the reference curves. The result is a recharge capture curve for the study area that is shallower in slope than the reference curves. Therefore, the spatial variability in infiltration rate magnifies the diminishing returns to scale already occurring as a result of the temporal variability of the water source. More recharge could be achieved, and the study area capture curve moved higher on the plot, if the berm heights around the cropland were increased.
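
As a rough illustration of the hydro-economic extension and the funding sweep described above, the sketch below solves a toy version of the problem with PuLP: recharge at each site is capped by a hypothetical capacity, land-access cost is charged per unit of recharge, and total spending is limited by a funding level swept over several values. Site names, costs, capacities, and funding levels are invented placeholders, not values from the study.

```python
import pulp

# Hypothetical sites: per-period capacity (ac-ft) and land-access cost ($/ac-ft).
sites = {"cheap_slow": {"cap": 1500, "cost": 50},
         "dear_fast": {"cap": 3000, "cost": 120}}

def max_recharge(funding: float) -> float:
    """Maximize total recharge subject to site capacities and a funding cap."""
    prob = pulp.LpProblem("hydro_economic", pulp.LpMaximize)
    r = {s: pulp.LpVariable(f"r_{s}", lowBound=0, upBound=d["cap"])
         for s, d in sites.items()}
    prob += pulp.lpSum(r.values())                                   # objective
    prob += pulp.lpSum(sites[s]["cost"] * r[s] for s in sites) <= funding
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective)

# Parametric sweep over funding levels, tracing a recharge-vs-funding curve.
for f in (50_000, 150_000, 300_000, 500_000):
    print(f, max_recharge(f))
```

Even in this toy setting the cheaper site is exhausted first and additional funding buys progressively less recharge, mirroring the diminishing returns to scale discussed above.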

The linear programming results obtained can help develop guidance on where such capital investment might be most valuable. Reformulating the Lagrange multiplier for Eq. in terms of the berm height indicates where, and how much, additional water could be recharged over the planning horizon if berms were raised from 1 to 2 ft. This result provides a high estimate of what might be possible, since some perennial crops may be unable to accommodate the increased ponding depth; nevertheless, this information provides guidance for where efforts might be best spent increasing berm heights. The values of the Lagrange multipliers based on increasing berm height by 1 ft are low in the northern portion of the study area because little cropland is present. Given the high infiltration rates of the deeper geology in the north, recharge potential would be much better for a gravel pit, since it would provide additional land area and also penetrate the low-hydraulic-conductivity soil layer included in this analysis. Cropland present in one of the northern model elements with a high infiltration rate was used to simulate the potential effect of repurposing a gravel pit for recharge. A total of 570 ac in crop categories 2, 3, and 4 were used to simulate gravel pits by increasing the hydraulic conductivity of the soil layer to match the underlying geology and increasing the berm height to 20 ft. Figure 16a,b summarizes the results of the gravel pit simulation at the maximum annual funding level. Recharging over the 20-year planning period uses 50% of the WAR. Most of the water remains in the groundwater system, with amounts similar to the previously presented results exiting to surface water and flowing across sub-basin boundaries. Allocation is skewed towards the gravel pits and provides enough base flow to support continuous flow in the Cosumnes River throughout the 20-year simulation, including during the previously mentioned 5-year drought.

Extending the analysis to include net metering could entail representing cropland managers as individual profit-maximizing agents, with the groundwater management agency charging fees for groundwater pumping and providing rebates for recharge. This approach would relax the assumption of uniform land use rents for each crop category and include a more likely dispersion of land use costs across the study area. It is unclear whether the aggregate effect of net metering with modest pumping fees would differ significantly from the work presented here, since the influence on rational profit maximizers of a net rebate, rather than a payment for using land for recharge, may be similar. However, the effect of net metering combined with a cash-flow constraint applied to water management operations could impose limits on a program for improving groundwater system conditions. Given the regulatory requirement for improved groundwater system state, these changes could drive pumping fees higher and influence the behaviors of profit-maximizing land managers. It may also be possible to explore improving groundwater conditions through water banking operations, where capital investments and operations costs would be paid by a client, or clients, external to the sub-basins. Management policy questions would include how much water would be left in place to benefit the groundwater system and the longevity of withdrawal rights. Details of the policy decisions would likely have implications for the amount of infrastructure investment a water banking client might be willing to make.
Either the cash flow or water banking approach might be modified to encourage recharge in areas where it is most needed.

Farmer knowledge accumulation in this study was mostly observational and experiential

Drawing upon Bar-Tal, we further define farmer values as a farmer’s worldview on farming – a set of social values or belief system that a farmer aspires to institute on their farm. In our study, examples of social-ecological mechanisms for farmer knowledge formation among these farmers included direct observation, personal experience, on-farm experimentation, and inherited wisdom from other local farmers. Similar to Boons’ conceptual guide, our results suggest that social-ecological mechanisms may play a central role in producing a farmer’s values and in integrating ecological knowledge into their farm operation. At the same time, the results also highlight that social-ecological mechanisms may contribute to a farmer’s local ecological knowledge base and, importantly, place limits on the incorporation of social values in practice on farms. It is possible that social-ecological mechanisms may also provide the lens through which farmer values and ecological knowledge are reevaluated over time. Moreover, farmer values may also mutually inform ecological knowledge – and vice versa – in a dynamic, dialectical process as individual farmers apply their values or ecological knowledge in practice on their farm. Social-ecological mechanisms may also be key in translating abstract information into concrete knowledge among the farmers interviewed. For example, experimentation may codify direct observations to generate farmer knowledge that is both concrete and transferable; or, to a lesser degree, personal experience may enhance farmer knowledge and may guide the process of experimentation. In general, we found that the farmers interviewed tended to rely less on abstract, “basic” science and more on concrete, “applied” science based on their specific local contexts and environment.

This finding underscores that for these farmers, their theory of farming is embedded in their practice of farming, and that these farmers tend to derive theoretical claims from their land. For example, the farmers who possessed a stewardship ethos viewed themselves as caretakers of their land; one farmer described his role as “a liaison between this piece of land and the human environment.” Farmers that self-identified as stewards or caretakers of their land tended to rely most heavily on direct observation and personal experience to learn about their local ecosystems and develop their local ecological knowledge. This acquired ecological knowledge in turn directly informed how farmers approached management of their farms and the types of management practices and regimes they applied. That said, farmer values from this study did not always align with the farming practices applied day-to-day, due to both social and ecological limits of their environment. For example, one farmer, who considered himself a caretaker of his land, expressed that cover crops were central to his management regime and that “we’ve underestimated how much benefit we can get from cover crops.” This same farmer admitted he had not been able to grow cover crops the last few seasons due to early rains, the heavy clay present in his soil, and the need to have crops ready for early summer markets. In another example, several of the farmers learned about variations in their soil type by directly observing how soil “behaved” using cover crop growth patterns. These farmers discussed that they learned about patchy locations in their fields, including issues with drainage, prior management history, soil type, and other field characteristics, through observation of cover crop growth in their fields.

Repeated observations over space and time helped to transform disparate observations into formalized knowledge. As observations accumulated over space and time, they informed knowledge formation across scales, from specific features of farmers’ fields to larger ecological patterns and phenomena. More broadly, using cover crop growth patterns to assess soil health and productivity allowed several farmers to make key decisions that influenced the long-term resilience of their farm operation. This specific adaptive management technique was developed independently by several farmers over the course of a decade of farming through long-term observation and experimentation – and, at the time, was not codified in mainstream farming guidebooks, policy recommendations, or the scientific literature. For these farmers, growing a cover crop on new land or land with challenging soils is now formally part of their farm management program and central to their soil management. While some of the farmers considered this process “trial and error,” in actuality all farmers in this study engaged in a structured, iterative process of robust decision-making in the face of constant uncertainty, similar to the process of adaptive management in the natural resource literature. This critical link between farmer knowledge formation and adaptive management is important to consider in the broader context of resilience thinking, wherein adaptive management is a tool in the face of shifting climate and changing landscape regimes. The underlying social and ecological mechanisms for farmer knowledge formation discussed here may have a role in informing adaptive management and pathways toward more resilient agriculture. In this sense, farmer knowledge represents an overlooked source for informing innovation in alternative farming.

Farmer knowledge provides an extension to scientific and policy knowledge bases, in that farmers develop new dimensions of knowledge and alternative ways of thinking about aspects of farming previously unexplored in the scientific literature. Farmers offer a key source of, and process for, making abstract knowledge more concrete and better grounded in practice, which is at the heart of agriculture that is resilient to increased planetary uncertainties. Most of the farmers considered themselves separate from scientific knowledge production and, though scientific knowledge did at times inform their own knowledge production, they still ultimately relied on their own direct observation and personal experiences to inform their knowledge base and make decisions. This finding underscores the importance of embedding theory in practice in alternative agriculture. Without grounding theoretical scientific findings or policy recommendations in practice, whether that be day-to-day practices or long-term management, farmers cannot readily incorporate such “outsider” knowledge into their farm operations. Farmers in alternative agriculture thus may provide an important node in the research and policy-making process, whereby they assess whether scientific findings or policy recommendations apply to their specific farming context – through direct observation, personal experience, and experimentation.

Similar to Sūmane et al., we found that the process of farmer knowledge formation, or precisely how farmers learn, is systematic and iterative in approach. In this study, farmer ecological knowledge was developed over time based on continuous systematic observation, personal experiences, and/or experimentation. This systematic approach, which relies on iterative feedback for learning, is akin among these organic farmers to examples of adaptive management in agriculture. As highlighted in the results, it is possible for a farmer to acquire expert knowledge even as a first- or second-generation farmer. Documenting this farmer knowledge within the scientific literature – specifically farmer knowledge in the context of relatively new alternative farmers in the US – represents a key way forward for widening agricultural knowledge both in theory and in practice. This study provides one example of documenting such farmer knowledge in a particularly unique site for alternative agriculture. Future studies may expand on this approach in order to document other sites with recent but practical agricultural knowledge on alternative farms.

Farmers in this study tended to think holistically about their farm management. For example, when the farmers were asked to talk about soil management specifically, several of them struggled with this format of question because they expressed that they do not necessarily think about soil management specifically but tend to manage for multiple aspects of their farm ecosystem simultaneously. This result aligns with similar findings from Sūmane et al. across a case study of 10 different farming contexts in Europe, and suggests that farmers tend to have a bird’s-eye view of their farming systems. Such an approach allows farmers to make connections across diverse and disparate elements of their farm operation and integrate these connections to both widen and deepen their ecological knowledge base.

For most farmers in this study, maintaining ideal soil structure was the foundation for healthy soil.
The farmers emphasized that ideal soil structure was delicately maintained by only working ground within appropriate windows of soil moisture. Determining this window of ideal soil moisture represented a learned skill that each individual farmer developed through an iterative learning process.

This knowledge-making process was informed by both social mechanisms, gained through inherited wisdom and informal conversations, and ecological mechanisms, gained through direct observation, personal experiences, and experimentation. As these farmers developed their ecological knowledge of the appropriate windows of soil moisture, their values around soil management often shifted. In this way, over time, farmers in this study learned that no amount of nutrient addition, reduced tillage, cover cropping, or other inputs could make up for damaged soil structure. Destroying soil structure was relatively easy but had lasting consequences and often took years, in some cases even a decade, to rebuild. This key soil health practice voiced by a majority of farmers interviewed was distinct from messaging about soil health vis-à-vis extension institutions, where soil health principles focus on keeping ground covered, minimizing soil disturbance, maximizing plant diversity, keeping live roots in the soil, and integrating livestock for holistic management. While these five key principles of soil health were mentioned by farmers and were deemed significant, for most farmers interviewed in this study the foundation and starting point for good soil health was maintaining appropriate soil structure. The results of this study emphasize that the most successful entry point for engaging farmers around soil health is context specific, informed directly by local knowledge. Among farmers in Yolo County – a significant geographical node of the organic farming movement – soil structure is a prevalent concept; however, in another farming context, this entry point may diverge significantly for social, ecological, economic, or other reasons. Each farming context therefore necessitates careful inquiry and direct conversation with local farmers to determine this entry point for engagement on soil health. For this reason, in some cases it may be more relevant to tailor soil health outreach to the local context rather than applying a one-size-fits-all model.

The capacity to learn, and to pass on that learning, is essential for farms that practice alternative agriculture to be able to adapt to the ever-changing social and ecological conditions ahead. Across all farmers interviewed, including both first- and second-generation farmers, farmers stressed the steep learning curves associated with learning to farm alternatively and/or organically. While these farmers represent a case study of building a successful organic farm within one generation, the results of this study raise the question: What advancements in farm management and soil management could be possible with multiple generations of farmer knowledge transfer on the same land? Rather than re-learning the ins and outs of farming every generation or two, as new farmers arrive on new land, farmers could have the opportunity to build on existing knowledge from a direct line of farmers before them and, in this way, potentially contribute to breakthroughs in alternative farming. In this sense, moving forward, agriculture in the US has a lot to learn from agroecological farming approaches with a deep multi-generational history. To this end, in most interviews – particularly among older farmers – there was a deep concern over the future of their farm operation beyond their lifetime.
Many farmers lamented that no family member or other individual is slated to take over their farm operation and that all the knowledge they had accumulated would not be passed on; there exists a need to fill this gap in knowledge transfer between shifting generations of farmers, safeguard farmer knowledge, and promote adaptations in alternative agriculture into the future. As Calo and others point out, technical knowledge dissemination alone will not resolve this ongoing challenge of farm succession, as larger structural barriers are also at play – most notably those related to land access, transfer, and tenure.

Studies often speak to the scalability of their approach or the generalizability of the information presented. While aspects of this study are generalizable, particularly to similar farming systems, the farmer knowledge presented here may or may not be generalizable or scalable to other regions in the US. To access farmer knowledge, relationship building with individual farmers leading up to interviews, as well as the in-depth interviews themselves, required considerable time and effort.

A hole board test is used to evaluate spatial reference memory and spatial working memory

There are many possible reasons why these two studies produced different results. First of all, the difference in ages tested could have played a role, as Norman et al. found greater success for 28-29 day old chicks than for 14-15 day old chicks. Additionally, the use of opaque tiers prevented the chick from seeing the reward on the tier in Norman et al. It is possible that the inability to see the reward reduced the birds’ motivation to jump to the raised platform. Most importantly, the different rewards used could also have played a role in the different performances observed. Regardless of the reasons for the difference in results, this test poses issues when used as an evaluation of spatial cognition. The aspect of spatial cognition being measured was not specified and, more importantly, this test confounds spatial cognition with physical ability. The task of reaching the reward increases in difficulty with each trial, which would explain the decrease in the number of birds in both rearing treatments that were able to successfully reach the tier as the difficulty increased. Also, providing perches to pullets increases leg muscle deposition in adult hens, which could explain the different performances among the birds of the two rearing treatments. Gunnarsson et al. defended this test as a valid measure of cognition because there was “no obvious reason to believe that the physical effort required to jump from 40 to 80 cm was substantially different”. Although jumping from the ground to the 40 cm tier may not be more physically taxing than reaching the 80 cm tier from the 40 cm tier, these were not the only heights presented to the birds.

It would be reasonable to presume that reaching the 80 cm tier without the aid of an additional platform would be more difficult than reaching the 40 cm tier. The jump test may be used to evaluate differences in physical ability to reach higher tiers; however, this task cannot separate physical ability from spatial cognition. For these reasons, the jump test may not be an ideal test to determine if spatial cognition is impaired by lack of access to vertical space during rearing.

The hole board test involves presenting a subject with a variety of baited and unbaited holes arranged in a grid. An animal is placed in the arena and is given free choice to visit any or all of the holes. However, revisiting a previously baited hole will not be reinforced, as the food reward will have already been consumed. Working memory keeps a small amount of relevant information readily accessible while an animal is completing a task. Spatial working memory can be evaluated by measuring how often the subjects revisit holes where they have already consumed the reward or holes that have already been found to be unbaited. This is measured by the ratio of rewarded visits to the number of visits to the baited holes. Reference memory, in contrast, is a form of long-term memory. Spatial reference memory can be examined using the hole board apparatus by repeating the same array of baited and unbaited holes over the course of multiple trials to determine whether the animals’ rate of finding baited holes improves. This is determined using the ratio of the number of visits to baited holes to the number of visits to all holes. Such improvement implies the location of the food rewards is being retained in reference memory and successfully retrieved during trials.
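
As a minimal sketch of the two hole-board scores described above (function and variable names are illustrative, not taken from the cited studies):

```python
def working_memory_score(rewarded_visits: int, visits_to_baited: int) -> float:
    """Rewarded visits / total visits to baited holes; revisits lower the score."""
    return rewarded_visits / visits_to_baited if visits_to_baited else 0.0

def reference_memory_score(visits_to_baited: int, visits_to_all: int) -> float:
    """Visits to baited holes / visits to all holes; improves as locations are learned."""
    return visits_to_baited / visits_to_all if visits_to_all else 0.0

# Example trial: 3 baited holes each found once, plus 1 revisit to a baited hole
# and 2 visits to unbaited holes (so 4 baited-hole visits out of 6 total visits).
print(working_memory_score(3, 4))    # 0.75
print(reference_memory_score(4, 6))  # ~0.67
```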

Tahamtani et al. investigated the impacts of rearing environment on navigation and spatial memory in laying hens using a hole board task. Chicks were housed in cages for 4 weeks, after which the aviary-reared birds were released into multi-tiered aviaries while the cage-reared birds remained in cages until 16 weeks, when all birds were placed in group-housed furnished cages. The hole board task consisted of nine chalk circles, each with a blue cup positioned in the center concealing a food reward of live mealworms. The birds were trained to associate the cups with a food reward, and only those that readily explored the test arena and found mealworms were selected. Four phases were used for training and testing: uncued acquisition, cued acquisition, over-training, and a reversal phase. Uncued training and testing involved baiting only three of the nine cups in the same configuration for all exposures, without any specific cues for guidance. This acts as a baseline for the hens’ ability to learn the location of the baited cups without any specific cues. Cued acquisition followed, where hens were trained and tested on the same configuration; however, red boards were placed under the baited cups to serve as an extra cue for the location of the food reward. This presents new information that may improve the birds’ ability to complete the task. For over-training, the cues were removed to reestablish baseline performance. Finally, for the reversal phase, the hens were trained and tested on an alternate configuration of three baited and six unbaited cups. The reversal phase requires flexibility in reference memory, as the birds must override the previously learned configuration in favor of the new information.

It was found that cage-reared birds took longer to complete the hole board task during the reversal phase than the aviary-reared birds, while the aviary birds had a better working memory score during the reversal phase than the cage-reared birds. This impairment of working memory during task reversal may be due to the decreased complexity found in caged systems compared to aviary systems. Multi-tiered aviaries provide a greater amount of variability in terms of social interaction and location of resources than caged systems, potentially aiding in the development of spatial learning and memory. The hole board task is an excellent test for evaluating spatial learning and memory; however, it is not well suited to determining whether these birds are more capable of navigating vertical space, since it occurs on a flat surface, or single geometric plane, and does not take vertical space into account. It offers insight into differences in the working memory of cage-reared and aviary-reared birds, but these results do not suggest that chicks reared with access to vertical space have an enhanced ability to avoid colliding with structures.

A radial maze is a cognitive task that is designed to evaluate spatial working memory. It involves multiple walkways, or arms, radiating out from a central chamber. A food reward is located at the end of each arm and the subject is placed in the center. The subject is then allowed to freely choose to enter the arms of the maze until all food rewards have been found. The optimum strategy for finding the food rewards is to enter each arm only once, as previously visited arms will no longer contain a food reward. In order to efficiently solve the maze, the subject must employ working memory to retain information about which arms have already been visited. The animal does this by noting cues, typically extra-maze cues, to determine which arms it has already entered. Since this maze requires adequate spatial memory, it has been employed as a technique to evaluate spatial cognition in fowl raised in environments of varying complexity.

Wichman used a radial maze to investigate spatial ability in 16-week-old laying hens raised in three different rearing environments. All rearing treatments had access to perches at 20 and 40 cm of height, but the control group had no additional enrichments. The floor enrichment group had the addition of wooden blocks, while the hanging enrichment group had hanging discs and bottles at beak height. At 16 weeks of age, once all birds had been regularly perching, the birds were tested on an eight-arm radial maze. Each arm of the maze was baited at the end with a mealworm, and birds were given 20 minutes to freely explore the maze. In order to simulate the practice of moving hens from their rearing house to the laying house, all birds were moved to larger, more complex pens at 18 weeks of age. These new pens included perches at varying heights, which could be reached from the floor or by jumping from one perch to another. It was found that there was a relationship between performance on the radial maze and propensity to perch. Birds that used perches the most during the two-hour period after being released into the new, complex pens required fewer visits to the arms of the radial maze in order to find all eight of the mealworms.
Wichman suggested that, based on these results, there is a relationship between the two-dimensional spatial ability required for performance on the radial maze and the three-dimensional skills required for perching. However, there were no significant differences between the treatment groups for onset of perching or performance on the radial maze. The author suggested that the low height of the perches might have allowed easy access to vertical space in all treatments. Therefore, there may not have been enough variation in the perching behavior of the chicks to result in clear differences between treatments.

Whiteside et al. used an eight-arm radial maze to investigate the impacts of floor rearing on spatial cognition and survivability of pheasants raised in captivity and released for hunting. Whiteside et al. reared one-day-old chicks until 7 weeks of age in three different environments: standard commercial rearing with no access to perches; access to natural hazel bough perches; and access to artificial perches. There were no significant differences between the natural and artificial perch groups, so these conditions were combined for analysis. At 6 weeks of age, 27 chicks were tested on an eight-arm radial maze to assess their spatial working memory. At the center of the maze there was a circular starting compartment, and at the end of each arm there was a barrier concealing a food reward. Orientation in the maze was possible using extra-maze cues, as the walls of the testing room were all different colors. The birds were first habituated to the arena for four days and were tested on the task on the fourth day. To solve the maze correctly, the birds had to enter each arm only once and consume the food reward. If a bird entered an arm where it had already eaten the food reward, an error was recorded. Those that made the fewest errors were determined to have better spatial memory than those that made more errors. Birds reared without access to perches made significantly more errors in their first eight choices than birds reared with access to perches. This suggests that access to perches at a young age improved the spatial memory of pheasants, potentially aiding in their survival post-release. It is also of interest to note that birds reared with perches roosted at night during the two weeks post-release significantly more than birds reared without perches. These two studies demonstrate that performance on a two-dimensional test of spatial memory may impact future use of vertical space. This could have implications for hens’ ability to recall the location of resources and to navigate complex environments as adults.

In the detour paradigm, the most direct route to a goal is blocked; the animal must walk around a barrier to reach the goal. For the initially visible goal detour task, the goal is visible at the starting point but then becomes occluded by opaque walls as the animal begins to move through the apparatus. When the goal is initially visible, spatial working memory as well as route planning must occur in order to hold the location of the goal in memory and make choices about what route should be taken. Success on this task is often interpreted as the animal forming a mental representation of the location of the non-visible goal and then using this representation to determine the best route. Due to the use of route planning and the need to hold the location of the goal in working memory in order to successfully solve this task, the detour paradigm has been frequently used to evaluate spatial working memory and route planning.

Acreage and experience may alter the environmental impact of growers’ pesticide programs

For organic growers, two active ingredients (AIs), spinosad and pyrethrins, are available to target those physiological functions. The “unknown” category, which is mostly sulfur, accounted for a significant portion of treated acreage in organic agriculture. Insecticides that target the midgut, which include Bacillus thuringiensis and several granulosis viruses, are widely applied in organic fields. Conventional growers rarely use them due to the high cost. In 2015, acreage treated with midgut-targeted insecticides was 1% of total treated acreage in conventional agriculture and 24% in organic agriculture. A detailed discussion of insecticide and fungicide use by mode of action in conventional and organic production is in the appendix.

Insecticides and fungicides in the two pest management programs have different modes of action and pose different levels of environmental impact. Simply comparing treated acreage or the amount of pesticide products used does not identify the differences in environmental impacts. In this context, the PURE index serves as a consistent measure across farming systems. Figure 1.3 plots PURE indices for conventional and organic fields by year. Index values for air and soil are significantly higher than those for the other environmental dimensions in both farming systems, which means that pesticide use in general has greater impacts on air and soil quality than on groundwater, pollinators, and surface water. Risk indices of conventional fields are relatively stable from 1995 to 2015, with no obvious overall changes for air or soil, despite the many changes that occurred during this 20-year period in regulations and grower portfolios. While PURE indices decreased 16% for surface water, 26% for pollinators, and 7% for groundwater over the same time period, these three dimensions were much less impacted by pesticides in 1995, the beginning of the study period.

Despite the numerous regulatory actions designed to reduce environmental impacts over this 20-year period, such as the methyl bromide phase-out, large-scale substitution of pyrethroids for organophosphates, and regulations to reduce VOC emissions from non-fumigant products, the overall environmental impacts of conventional pesticide use show only limited reductions when aggregated across all crops. PURE indices for organic fields are similar to those of conventional fields in that air and soil have significantly higher index values than the other dimensions. However, the aggregate risk indices in all five dimensions are much lower in organic fields. Compared to conventional agriculture, organic agriculture has dramatically lower PURE indices for surface water, groundwater, air, soil, and pollinators. The reduction for air varies greatly across major California crops. Large reductions in the PURE index for air are observed for table grapes, wine grapes, and processing tomatoes, while other crops, such as leaf lettuce and almonds, show relatively small ones. The reduction in the PURE index for soil varies across crops as well, ranging from leaf lettuce to carrots. For surface water, groundwater, and pollinators, the differences between the PURE index in organic and conventional fields are similar across crops. A noticeable spike in PURE indices appeared in 1998 for organic agriculture, caused by a single application of copper sulfate with an application rate of 150 lb/acre, which is ten times larger than the average application rate and clearly a data abnormality.

The PURE index is a measure of environmental impacts on a per-acre basis. One could use the yield difference between conventional and organic agriculture to adjust the values in Figure 1.3 and convert them to a measure of impacts per unit of output. Organic agriculture is found to have 10%-20% lower yields than conventional agriculture. If we use the 15% yield loss as an average to adjust the results for all crops, organic agriculture still reduced the PURE index for surface water, groundwater, air, soil, and pollinators.
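
A minimal sketch of the per-output adjustment described above, assuming a 15% organic yield loss; the index values used are placeholders, not results from the study:

```python
def per_output_reduction(conv_index: float, org_index: float,
                         organic_yield_loss: float = 0.15) -> float:
    """Fractional reduction in impact per unit of output, organic vs conventional.

    Dividing the organic per-acre index by (1 - yield loss) converts it to a
    per-unit-of-output basis before comparing with the conventional index.
    """
    org_per_output = org_index / (1.0 - organic_yield_loss)
    return 1.0 - org_per_output / conv_index

# Example with placeholder indices: a 90% per-acre reduction becomes
# roughly an 88% reduction per unit of output after the 15% yield adjustment.
print(per_output_reduction(conv_index=100.0, org_index=10.0))  # ~0.882
```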

The impact of organic practices on pesticide use is crop specific. This aggregate result is derived from the current crop mix in California. Each crop is susceptible to a different spectrum of pests, which are managed by a distinct pesticide portfolio as part of a broader pest management program. Comparing PURE indices for individual crops shows that the benefit from pesticide use in organic agriculture varies significantly. Based on value, production region, and the acreage share of organic production, four crops are selected to illustrate this point: lettuce, strawberries, wine grapes, and processing tomatoes. Lettuce, strawberries, and wine grapes are the three highest-valued organic crops in California, with organic sales values of $241, $231, and $114 million in 2016, respectively. Production of strawberries and lettuce is concentrated in the Central Coast region. Processing tomatoes are an important crop in the Central Valley. Wine grape production occurs in a number of regions across the state. In 2015, the acreage shares of organic production were 8%, 9%, 4%, and 2% for the selected crops.

For my analysis, the unit of observation is a field-year, defined as a field with one or more pesticide applications in a given calendar year. In total, more than 3 million field-year observations are included in the PUR database from 1995 to 2015. Table 1.1 provides field-year summary statistics for key variables by crop. Overall, 3% of these field-years applied only pesticides approved for organic agriculture. For all crops, conventional farms are significantly larger in size and have higher PURE indices. The average farm size in the PUR is smaller than the average reported in the USDA Census. One potential explanation is that one farm could have fields in different counties and apply for multiple pesticide application permits within each county, which classifies it as multiple “farms” in the PUR. For all crops, lettuce, strawberries, and processing tomatoes, growers who operate conventional farms have significantly more experience, measured by the years they are observed in the PUR. For wine grapes, conventional growers have less experience than organic growers. Ideally, farming experience is measured directly or researchers use age as a proxy. However, the PUR database does not contain any demographic information, which limited my ability to measure experience. The PUR experience is smaller than the farming experience reported in the Census, for several reasons. First, the PUR database I use started in 1995, so any farming experience before 1995 is not recorded. Second, the Census is conducted every 5 years; farms that entered and exited within a 5-year gap are included in the PUR database but not the Census, which reduces the average experience. Conventional strawberries have significantly greater impact on surface water and less impact on groundwater, measured by the PURE indices, compared to other conventional crops. Organic strawberries, on the other hand, had a higher PURE index for air and a lower PURE index for soil than other organic crops. Pesticides used in conventional production of wine grapes have less impact on pollinators than pesticides used in other conventional crops.

To identify the effect of organic agriculture on pesticide use and the associated environmental impacts, I must address the issues of selection bias at both the grower and the field levels.
Compared to growers who utilize conventional practices, growers who adopt organic ones may have different underlying characteristics, such as attitudes toward environmental issues, which can also affect their pesticide use decisions directly. If grower characteristics are time-invariant, an unbiased estimation could be achieved by including a grower fixed effect in the regression.

There is also time-variant heterogeneity associated with individual growers, due to factors such as farm size and experience, that simultaneously influences the adoption of organic production and pesticide use decisions. The identification concern here is that growers with more farming experience or larger farms, including both conventional and organic acreage, are more likely to operate organic fields and use fewer pesticides. Therefore it is not reasonable to compare environmental impacts of pesticide use across growers without considering these characteristics. For each grower, annual total acreage and experience serve as measures of time-variant heterogeneity. As shown in Table 1.1, there is a significant difference in these two variables between conventional and organic growers. There could be field-level heterogeneity as well, due to pest or disease pressure, that undermines my identification strategy. Fields with less pest or disease pressure need fewer pesticides and are, at the same time, more likely to be converted into organic production. Including field fixed effects in the estimation is one approach to address these issues. Organic fields tend to be concentrated spatially to avoid pesticide drift from nearby conventional fields. Spatial relationships are not considered here because the PUR database does not have information on the distance between fields.

For all five PURE dimensions, pesticides used in organic agriculture reduced environmental impact. The reduction, captured by the variable Organic, is significant at the 1% level for all five environmental dimensions. Relative to the intercept, organic practices reduced environmental impacts for surface water by 86%, for groundwater by 93%, for soil by 60%, for air by 53%, and for pollinators by 76% on a per acre basis, holding other variables fixed. The relatively small reduction for air is linked to the fact that natural AIs do not, in general, have lower VOC emissions. Regulations targeting high VOC-emitting pesticide AIs also contribute to this result, partly because they do not affect the two systems evenly. In 2015, the sale and use of 48 pesticide products were restricted due to their VOC emissions, which accounted for 5% of treated acreage in conventional agriculture and 1% of treated acreage in organic agriculture. Although reductions in PURE index values do not translate directly into dollar values or health outcomes, the results in Table 1.2 suggest that pesticide use in organic fields substantially reduced environmental impacts. The coefficient for Organic × t represents the change in the difference between the two farming systems over time and is positive for all environmental dimensions, which supports the hypothesis that, compared with conventional agriculture, the environmental impacts associated with pesticide use in organic agriculture have grown over time. Air has the largest coefficient among the five environmental dimensions, which is consistent with the previous figures showing that environmental impacts increased the most for air across all crops. The variable t is the common time trend for all conventional fields; its coefficient is negative for surface water and groundwater, which means the environmental impacts from pesticide use in conventional agriculture decreased on those dimensions, while the environmental impacts on soil and air increased.
The combination of the variables t and Organic × t gives the time trend for organic fields alone, which is upward sloping for groundwater, soil, air, and pollinators, and downward sloping for surface water. The two variables Acreage and Exp capture time-variant grower heterogeneity. Although the variable Organic dominates the overall effect, the coefficients for both Acreage and Exp influence the environmental impact associated with crop production. For the same grower-crop combination, a larger farm size is associated with pesticide application programs that pose more negative impacts for all five environmental dimensions. Meanwhile, more experience is correlated with lower environmental impacts on soil, air, and pollinators, whereas the PURE indices for surface water and groundwater are positively correlated with experience. This is partially due to the fact that experienced farmers use less organophosphate insecticide per acre; organophosphates are more toxic to earthworms and honeybees than alternative AIs.

The sub-sample estimation yields similar results. Namely, in conventional agriculture the environmental impacts on surface water and groundwater associated with pesticide use decreased over time, pesticides used in organic agriculture significantly reduced the environmental impacts measured by the PURE index, and the difference between conventional and organic pesticide use narrowed. The intercept is occasionally smaller than the coefficient of Organic because the crop and time fixed effects are often positive and significant and the impacts on those dimensions in organic fields are small. For the sub-sample of fields that have transitioned between production systems, total farm acreage is no longer significantly associated with impacts on groundwater, soil, and pollinators, and the environmental impact on surface water is negatively correlated with farm acreage. The main reason for this seemingly dramatic difference, compared to the full-sample estimation, is that there are more wine grape vineyards and fewer almond orchards and alfalfa fields in the sub-sample. Although the organic price premium is limited for wine grapes, organic farming practices are associated with higher grape quality, which encourages growers to adopt organic production.
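
The estimating equation is described above only in words. As a minimal sketch, assuming a linear fixed-effects panel specification consistent with the variables named in the text (Organic, t, Organic × t, Acreage, Exp, plus field and crop fixed effects; the exact functional form used in the analysis may differ), the regression for environmental dimension d could be written as

\[
\text{PURE}^{d}_{gfct} = \alpha + \beta\,\text{Organic}_{gft} + \gamma\,(\text{Organic}_{gft}\times t) + \delta\,t + \theta_{1}\,\text{Acreage}_{gt} + \theta_{2}\,\text{Exp}_{gt} + \mu_{f} + \lambda_{c} + \varepsilon_{gfct},
\]

where g indexes growers, f fields, c crops, and t years; μ_f is a field (or grower) fixed effect and λ_c a crop fixed effect. Under this form, β is the per-acre organic reduction discussed above and γ is the change in that gap over time.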

Woody biomass volumes were measured and used for perennial C estimates

California epitomizes the agriculture-climate challenge, as well as its opportunities. It is the United States’ largest agricultural producing state, and agriculture accounted for approximately 8% of California’s statewide greenhouse gas emissions for the period 2000–2013. At the same time, California is at the forefront of innovative approaches to CSA. Given the state’s Mediterranean climate, part of an integrated CSA strategy will likely include perennial crops, such as winegrapes, that have a high market value and store C long term in woody biomass. Economically, wine production and retail represent an important contribution to California’s economy, generating $61.5 billion in annual economic impact. In terms of land use, 230,000 ha in California are managed for wine production, with 4.2 million tons of wine grapes harvested annually at an approximate $3.2 billion farm gate value. This high level of production has come with some environmental costs, however, including degradation of native habitats, impacts to wildlife, and over-abstraction of water resources. Although many economic and environmental impacts of wine production systems are actively being quantified, and while there is increasing scientific interest in the carbon footprint of vineyard management activities, efforts to quantify C capture and storage in annual and perennial biomass remain less well examined. Studies from Mediterranean climates have focused mostly on C cycle processes in annual agroecosystems or natural systems. Related studies have investigated sources of GHGs, on-site energy balance, water use, and potential impacts of climate change on productivity and the distribution of grape production. The perennial nature and extent of vineyard agroecosystems have brought increasing interest from growers and the public sector to reduce the GHG footprint associated with wine production.

The ongoing development of carbon accounting protocols within the international wine industry reflects the increased attention that industry and consumers are putting on GHG emissions and offsets. In principle, an easy-to-use, wine-industry-specific GHG protocol would measure the carbon footprints of winery and vineyard operations of all sizes. However, such footprint assessment protocols remain poorly parameterized, especially those requiring time-consuming empirical methods. Data collected from the field, such as vine biomass, cover crop biomass, and soil carbon storage capacity, are difficult to obtain and remain sparse, and thus limit the further development of carbon accounting in the wine sector. Simple yet accurate methods are needed to allow vineyard managers to measure C stocks in situ and thereby better parameterize carbon accounting protocols. Not only would removing this data bottleneck encourage broader participation in such activities, it would also provide a reliable means to reward climate smart agriculture.

Building on research that has used empirical data to compare soil and aboveground C stocks in vineyards and adjacent oak woodlands in California, this study sought to estimate the C composition of a vine, including the relative contributions of its component parts. By identifying the allometric relationships among trunk diameter, plant height, and other vine dimensions, growers could utilize a reliable mechanism for translating vine architecture and biomass into C estimates. In both natural and agricultural ecosystems, several studies have used allometric equations to estimate above ground biomass and assess potential for C sequestration. For example, functional relationships between the ground-measured Lorey’s height and above ground biomass were derived from allometric equations in forests throughout the tropics.

Similarly, functional relationships have been found in tropical agriculture for above ground, below ground, and field margin biomass and C. In the vineyard setting, however, horticultural intervention and annual pruning constrain the size and shape of vines, making existing allometric relationships less meaningful, though it is likely that simple physical measurements could readily estimate above ground biomass. To date, most studies on C sequestration in vineyards have focused on soil C as a sink, and some attempts to quantify biomass C stocks have been carried out in both agricultural and natural systems. In vineyards, studies in California in the late 1990s reported net primary productivity or total biomass values between 550 g C m−2 and 1100 g C m−2. In terms of spatial distribution, standing biomass data collected by Kroodsma et al. from companies that remove trees and vines in California yielded values of 1.0–1.3 Mg C ha−1 year−1 of woody C for nut and stone fruit species, and 0.2–0.4 Mg C ha−1 year−1 for vineyards. It has been reported that mature California orchard crops allocate, on average, one third of their NPP to the harvested portion, and mature vines allocate 35–50% of the current year’s production to grape clusters. Pruning weight has also been quantified by two direct measurements, which estimated 2.5 Mg of pruned biomass per ha for both almonds and vineyards. The incorporation of trees or shrubs in agroforestry systems can increase the amount of carbon sequestered compared to a monoculture field of crop plants or pasture. Additional forest planting would be needed to offset the current net annual loss of above ground C, representing an opportunity for viticulture to incorporate the surrounding woodlands into the system. A study assessing C storage in California vineyards found that, on average, surrounding forested wildlands had 12 times more above ground woody C than vineyards, and even the largest vines had only about one-fourth of the woody biomass per ha of the adjacent wooded wildlands.
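
For readers unfamiliar with the approach, allometric models of this kind typically take a power-law form. The following is only an illustrative sketch, not the relationship fitted in this study; the coefficients a and b would be estimated from destructive harvest data, and the carbon fraction f_C ≈ 0.5 of dry woody biomass is a commonly assumed value rather than one measured here:

\[
\text{AGB} = a\,(D^{2}H)^{b}, \qquad C_{\text{vine}} = f_{C}\,\text{AGB},
\]

where AGB is above ground dry biomass per vine, D is trunk diameter, and H is vine height. Vineyard-specific calibration matters because annual pruning decouples vine size from age in ways that natural-forest allometries do not capture.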

The objectives of this study were to: measure standing vine biomass and calculate C stocks in Cabernet Sauvignon vines by field sampling the major biomass fractions; calculate C fractions in berry clusters to assess the C mass that could be returned to the vineyard from the winery in the form of rachis and pomace; determine the proportion of perennially sequestered and annually produced C stocks using easy-to-measure physical vine properties; and develop allometric relationships to provide growers and land managers with a method to rapidly assess vineyard C stocks. Lastly, we validate block-level estimates of C with volumetric measurements of vine biomass generated during vineyard removal.

The study site is located in southern Sacramento County, California, USA, and the vineyard is part of a property annexed into a seasonal floodplain restoration program, which has since removed the levee preventing seasonal flooding. The ensuing vineyard removal allowed destructive sampling for biomass measurements and subsequent C quantification. The vineyard is considered part of the Cosumnes River appellation within the Lodi American Viticultural Area, a region characterized by its Mediterranean climate—cool wet winters and warm dry summers—and by nearby Sacramento-San Joaquin Delta breezes that moderate peak summer temperatures compared to areas north and south of this location. The study site has a mean summer maximum air temperature of 32 °C and an annual average precipitation of 90 mm, typically all received as rain from November to April. During summer, the daily high air temperatures average 24 °C and daily lows average 10 °C. Winter temperatures range from an average low of 5 °C to an average high of 15 °C. Total heating degree days for the site are approximately 3420 and the frost-free season is approximately 360 days annually. Similar to other vineyards in the Lodi region, the site is situated on an extensive alluvial terrace landform formed by Sierra Nevada outwash with a San Joaquin Series soil. This soil-landform relationship is extensive, covering approximately 160,000 ha across the eastern Central Valley, and it is used extensively for winegrape production. The dominant soil texture is clay loam with some sandy clay loam sectors; mean soil C content, based on three characteristic grab samples processed by the UC Davis Analytical Lab, was 1.35% in the upper 8 cm and 1.1% in the lower 8–15 cm. The vineyard plot consisted of 7.5 ha of Cabernet Sauvignon vines, planted in 1996 at a density of 1631 plants ha−1 with flood irrigation during the spring and summer seasons. The vines were trained using a quadrilateral trellis system with two parallel cordons and a modified Double Geneva Curtain structure attached to T-posts. Atypically, these vines were not grafted to rootstock, which is often used in the region to modify vigor or limit disease.

In Sept.–Oct. of 2011, above ground biomass was measured from 72 vines. The vineyard was divided equally into twelve randomly assigned blocks, and six individual vines from each block were processed into the major biomass categories of leaf, fruit, cane, and trunk plus cordon. Grape berry clusters were collected in buckets, with fruit separated and weighed fresh in the field. Leaves and canes were collected separately in burlap sacks, and the trunks and cordons were tagged. Biomass was transported off site to partially air dry on wire racks and then fully dried in large ventilated ovens.
Plant tissues were dried at 60 °C for 48 h and then ground to pass through a 250 μm mesh sieve using a Thomas Wiley® Mini-Mill. Total C in plant tissues was analyzed using a PDZ Europa ANCA-GSL elemental analyzer at the UC Davis Stable Isotope Facility. For cluster and berry C estimations, grape clusters were randomly selected from all repetitions. Berries were removed from the cluster rachis. While the berries were frozen, the seeds and skins were separated from the fruit flesh or “pulp”, which was combined with the juice. The rachis, skins, and seeds were oven dried and weighed. The pulp was separated from the juice + pulp mixture by vacuum filtration using a pre-weighed Q2 filter paper. The filter paper with pulp was oven dried and weighed to obtain the insoluble solid fraction. The largest portion of grape juice soluble solids is sugars. Sugars were measured at 25% using a PAL-1 refractometer.

The C content of sugar was calculated at 42% using the chemical formula of sucrose. Below ground biomass was measured by pneumatically excavating the root system with compressed air applied at 0.7 MPa for three of the 12 sampling blocks, exposing two vines each in 8 m3 pits. The soil was prewetted prior to excavation to facilitate removal and minimize root damage. A root-restricting duripan, common in this soil, limited the effective rooting depth to about 40 cm at this site, with only 5–10 fine and small roots able to penetrate below this depth in each plot. Roots were washed, cut into smaller segments, separated into four size classes, oven-dried at 60 °C for 48 h, and weighed. Larger roots were left in the oven for 4 days. Stumps were considered part of the root system for this analysis.

In vineyard ecosystems, annual C is represented by fruit, leaves, and canes, and is either removed from the system and/or incorporated into the soil C pools, which were not considered further. Structures whose tissues remain in the plant were considered perennial C. Cordon and trunk diameters were measured using a digital caliper at four locations per piece and averaged, and lengths were measured with a calibrated tape. Sixty vines were used for the analysis; twelve vines were omitted due to missing values in one or more vine fractions. All statistical estimates were conducted in R.

An earth-moving machine was used to uproot vines and gather them together to form mounds. Twenty-six mounds consisting of trunks plus cordons and canes were measured across this vineyard block. The mounds represented comparable spatial footprints within the vineyard area. Mound C stocks were estimated from their biomass contribution areas, physical size, density, and either a semi-ovoid or hemispherical model.

The present study provides results for an assessment of vineyard biomass that is comparable with data from previous studies, as well as estimates of below ground biomass that are more precise than previous reports. While most studies on C sequestration in vineyards have focused on soil C, some have quantified above ground biomass and C stocks. For example, a study of grapevines in California found net primary productivity values between 5.5 and 11 Mg C ha−1, figures that are comparable to our mean estimate of 12.4 Mg C ha−1. For pruned biomass, our estimate of 1.1 Mg C ha−1 was comparable to two assessments that estimated 2.5 Mg of pruned biomass ha−1 for both almonds and vineyards. Researchers reported that mature orchard crops in California allocated, on average, one third of their NPP to harvestable biomass, and mature vines allocated 35–50% of that year’s production to grape clusters. Our estimate of 50% of annual biomass C allocated to harvested clusters represents the fraction of the structures grown during the season.
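
The 42% figure follows directly from the elemental composition of sucrose (C12H22O11); as a quick check,

\[
w_{C} = \frac{12\times 12.011}{12\times 12.011 + 22\times 1.008 + 11\times 15.999} = \frac{144.13}{342.30} \approx 0.421,
\]

so roughly 42% of the soluble-sugar mass is carbon, which can then be combined with the 25% soluble-solids reading to estimate the sugar-derived C in the juice.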

Differences in the transcript abundance of NCED and PR proteins were also noted

ABA concentrations may be higher in the BOD berry skins based upon the higher transcript abundance of important ABA signaling and biosynthesis genes encoding ABF2, SnRK2 kinases, and NCED6. We hypothesize that this would be seed-derived ABA, since water deficits were not apparent in BOD given the recent rainfall and high humidity. In contrast, NCED3 and NCED5 had higher transcript abundance in RNO berry skins, which might occur as the result of the very low humidity and large vapor pressure deficit. The lower expression of NCED6 in RNO berry skins may indicate that the seeds in the berry were more immature than in the BOD berries. The higher expression of other seed development and dormancy genes in the berry skins supports the argument that BOD berries matured at a lower sugar level than the RNO berries. The ABA concentrations in the berry skins are a function of biosynthesis, catabolism, conjugation, and transport. ABA in seeds increases as the seed matures, and some of this ABA may be transported to the skin. In fact, a number of ABCG40 genes, which encode ABA transporters, had higher transcript abundance in BOD berry skins than in RNO. Part of the ABA in skins may be transported from the seed and part of it might be derived from biosynthesis in the skins. NCED6 transcript abundance in the skins was higher in BOD berries. Perhaps the transcript abundance of NCED6 in the skin is regulated by the same signals as in the embryo and reflects an increase in seed maturity. AtNCED6 transcript abundance is not responsive to water deficit in Arabidopsis, but AtNCED3 and AtNCED5 are. This is consistent with the higher NCED3, NCED5, and BAM1 transcript abundance in RNO berries. Thus, there are complex responses of ABA metabolism and signaling.

It would appear that there may be two different ABA pathways affecting ABA concentrations and signaling: one involved with embryo development and one involved with the water status in the skins. Auxin is also involved with ABA signaling during the late stages of embryo development in the seeds. Auxin signaling responses are complex. AFB5 is an auxin receptor that mediates degradation of Aux/IAA proteins, which are repressors of ARF transcriptional activity. Thus, a rise in auxin concentration releases Aux/IAA repression of ARF transcription factors, activating auxin signaling. In the berry skins, there was a diversity of transcriptional responses of Aux/IAA and ARF genes in the two locations, some with increased transcript abundance and others with decreased transcript abundance. As with ABA signaling, there may be multiple auxin signaling pathways operating simultaneously. One pathway appears to involve seed dormancy. ARF2 had a higher transcript abundance in BOD berries. ARF2 promotes dormancy through the ABA signaling pathway. This is consistent with the hypothesis that BOD berries reach maturity at a lower sugar level than RNO berries.

Grapevines have very dynamic gene expression responses to pathogens. The top 150 DEGs for BOD berries were highly enriched with biotic stress genes. The BOD vineyard site had higher rainfall and higher relative humidity than RNO, and these conditions are likely to be more suitable for fungal growth. We detected a much higher transcript abundance of powdery mildew-responsive genes in BOD berries, and this may be connected to a higher transcript abundance of ethylene and phenylpropanoid genes as part of a defense response. The transcript abundance profiles of some of these genes are remarkably similar. Increased ethylene signaling in grapevines has been associated with powdery mildew infection and phenylpropanoid metabolism, and appears to provide plant protection against the fungus.

Genes involved with phenylpropanoid metabolism, especially PAL and STS genes, appear to be quite sensitive to multiple stresses in the environment. In Arabidopsis there are four PAL genes, which appear to be involved with flavonoid biosynthesis and pathogen resistance. Ten different PAL1 orthologs and two PAL2 orthologs had higher transcript abundance in BOD berry skins; many STS genes also had a higher transcript abundance in BOD berry skins. Stilbenes are phytoalexins and provide pathogen resistance in grapes, and STS genes are strongly induced by pathogens. Thus, the higher transcript abundance of powdery mildew genes may be associated with the higher transcript abundance of genes in the ethylene and phenylpropanoid pathways.

The transcript abundance of a number of iron homeostasis genes was significantly different in the two locations, and there was a difference in soil available iron concentrations between the two locations. However, iron uptake and transport in plants is complicated and depends on multiple factors, such as pH, soil redox state, organic matter composition, and solubility in the phloem. Thus, it is impossible to predict iron concentrations in the berry without direct measurements. The roles of these genes in iron homeostasis and plant physiological functions are diverse. Iron supply can affect anthocyanin concentrations and the transcript abundance of genes in the phenylpropanoid pathway in Cabernet Sauvignon berry skins. One of the DEGs, SIA1, is located in the chloroplast in Arabidopsis and appears to function in plastoglobule formation and iron homeostasis signaling in concert with ATH13. Another DEG, YSL3, is involved in iron transport. It acts in the SA signaling pathway and appears to be involved in defense responses to pathogens.

It also functions in iron transport into seeds. FER1 is one of a family of ferritins found in Arabidopsis. VIT1 and NRAMP3 are vacuolar iron transporters and are also involved in iron storage in seeds. Other DEGs are also responsive to iron supply. IREG3 appears to be involved in iron transport in plastids; its transcript abundance increases with increasing iron concentrations. ABCI8 is an iron-stimulated ATPase located in the chloroplast that functions in iron homeostasis. It is unclear what specific roles these iron homeostasis genes are playing in grape berry skins, but they appear to be involved in iron storage in seeds and protection against oxidative stress. One possible explanation for the transcript abundance profiles in the BOD and RNO berry skins is that ferritins are known to bind iron and are thought to reduce free iron concentrations in the chloroplast, thereby reducing the ROS production caused by the Fenton reaction. As chloroplasts senesce during berry ripening, iron concentrations may rise as a result of the catabolism of iron-containing proteins in the thylakoid membranes; thus, berry skins may need higher concentrations of ferritins to keep free iron concentrations low. This might explain the increase in ferritin transcript abundance with increasing sugar levels. Most soils contain 2 to 5% iron, including available and unavailable iron; soils with 15 to 25 μg g−1 of available iron are considered moderate for grapevines, but soils with higher concentrations are not considered toxic. Therefore, for both soils in this study, iron concentrations can be considered very high but not toxic. The higher available iron concentrations in the BOD vineyard may be associated with the wetter conditions and the lower soil pH.

Other researchers using omics approaches have identified environmental factors that influence grape berry transcript abundance and metabolites. One study investigated the differences in transcript abundance in berries of Corvina in 11 different vineyards within the same region over 3 years. It determined that approximately 18% of the berry transcript abundance was affected by the environment. Climate had an overwhelming effect, but viticultural practices were also significant. Phenylpropanoid metabolism was very sensitive to the environment, and PAL transcript abundance was associated with STS transcript abundance. In another study, of the white grape cultivar Garganega, berries were analyzed by transcriptomic and metabolomic approaches. Berries were selected from vineyards at different altitudes and with different soil types. Again, phenylpropanoid metabolism was strongly influenced by the environment. Carotenoid and terpenoid metabolism were influenced as well. Two studies investigated grape berry transcriptomes during the ripening phase in two different regions of China, a dry region in western China and a wet region in eastern China. These two locations mirror some of the differences in our study, namely moisture, light, and elevation, although the dry western region of China has higher night temperatures and more rainfall than the very dry RNO location. In the Cabernet Sauvignon study, the authors compared the berry transcriptomes from the two regions at three different stages: pea size, veraison, and maturity. The TSS at maturity was slightly below 20 °Brix.
Similar to our study, the response to stimulus, phenylpropanoid and diterpenoid metabolism GO categories were highly enriched in mature berries between the two locations.

As in our study, the authors associated the transcript abundance of these proteins with the dry and wet locations, respectively. In the second study comparing these two regions in China, the effects of the environment on the metabolome and transcriptome of Muscat Blanc à Petits Grains berries were investigated over two seasons; specifically, terpenoid metabolism was targeted. As in our study, the transcripts involved in terpenoid metabolism were in higher abundance in the wetter location. The transcript abundances were correlated with terpenoid concentrations and a coexpression network was constructed. A specific set of candidate regulatory genes was identified, including some terpene synthases, glycosyl transferases, and 1-hydroxy-2-methyl-2-butenyl 4-diphosphate reductase. We examined the transcript abundance of some of these candidate genes in our own data but did not find significant differences between our two locations. The contrasting results between our study and Wen et al. could arise for a variety of reasons, such as different cultivar responses, berry versus skin samples, or different environmental conditions that affect terpenoid production. Terpenoid metabolism is influenced by the microclimate and is involved in plant defense responses to pathogens and insects. Light exposure of Sauvignon Blanc grapes was manipulated by removing adjacent leaves without any detectable differences in berry temperatures. Increased light exposure increased specific carotenoid and terpene concentrations in the berry. The responses of carotenoid and terpenoid production to temperature are less clear. Some effect of temperature was associated with carotenoid and terpenoid production, but to a lesser extent than light. Higher concentrations of rotundone, a sesquiterpene, have been associated with cooler temperatures. Water deficit can also alter carotenoid and terpenoid metabolism in grapes. Terpenes can act as signals of insect attack and attract insect predators. Thus, terpenoid metabolism is highly sensitive to the environment and influenced by many factors. In contrast to these studies, excess light and heat can affect transcript abundance and damage berry quality. In addition to a higher rate of malate catabolism, anthocyanin concentrations and some of the associated transcript abundances are decreased as well.

BOD berries reached maturity at a lower °Brix level than RNO berries; the cause is likely to be the warmer days and cooler nights in RNO. Higher day temperatures may increase photosynthesis and sugar transport, and cooler night temperatures may reduce fruit respiration. °Brix, or TSS, approximates the percent sugar in a berry and is a reliable marker of berry maturity in any given location; however, TSS is an unreliable marker of berry maturity when comparing grapes from very different climates. The differences in TSS between BOD and RNO are consistent with other studies of temperature effects on berry development. Indirect studies have associated gradual warming over the last century with accelerated phenology and increased sugar concentrations in grape berries. Increasing temperature can accelerate metabolism, including sugar biosynthesis and transport, but the increase in metabolism is not uniform. For example, the increase in anthocyanin concentration during the ripening phase is not affected as much as the increase in sugar concentration. These responses vary with the cultivar, complicating this kind of analysis even further.
Direct studies of temperature effects on Cabernet Sauvignon berry composition are also consistent with our data. In one study, the composition of Cabernet Sauvignon berries was altered substantially for vines grown in phytotrons at 20 or 30 °C. Cooler temperatures promoted anthocyanin development and malate concentrations, and higher temperatures promoted TSS and proline concentrations. In a second study, vines were grown at 20 or 30 °C day temperatures with night temperatures 5 °C cooler than the day. In this study, higher temperatures increased berry volume and veraison started earlier by about 3 to 4 weeks. The authors concluded that warmer temperatures hastened berry development. In a third study, Cabernet Sauvignon berry composition was affected in a similar manner by soil temperatures that differed by 13 °C.

The project uses the Heroku platform to run a Ruby on Rails Web Application

We will next perform long-distance thermal navigation at a height of 150 µm above the surface. Retract 150 µm using axis 3 of the coarse positioners. I’d recommend doing this in one or two big steps, because the coarse positioner can slide in response to small excursions. Verify that you can still see the thermal signal on the SQUID. It is OK if it’s faint or close to the noise floor; it will increase in size, and you know which direction to start travelling. If the resistive encoders are working, then use them to move in 100 µm steps, checking the SQUID signal in between movements. There is no need to ground the SQUID in between coarse positioner steps; there will be crosstalk, but this is not hazardous for the nanoSQUID. If the resistive encoders are not working, click the Step+ button repeatedly until the SQUID signal increases to a maximum. This might take a few minutes of clicking. You can work on a software solution instead if you like, but remember that there is always a simple, safe solution available! Once the signal is at a maximum, take another scan to verify that you’re centered above the device. You should see a local maximum in the temperature in the middle of your scan region. Ground the SQUID. Ramp the current through the device down to zero. Zero and ground any gates you have applied voltages to. Ground the sample. Make sure the SQUID is grounded to the breakout box by a BNC. Hook up the second little red turbo pump to the sample chamber through a plastic clamp and o-ring, and turn it on. Slowly, over 10-20 minutes, open the valve to the sample chamber and pump it out. Make sure the sand buckets for vibration isolation are set up and the bellows aren’t touching the ground. If there are vibration issues you can often feel them on the bellows and on the table with your hand.
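
If you do go the software route, the logic is just "step, read, keep the best value seen so far." The sketch below is a minimal illustration only; step_positioner and read_squid_signal are hypothetical placeholders for whatever instrument-control layer your lab uses, not a real API, and the step budget and patience values are arbitrary.

```python
# Minimal sketch of the "click Step+ until the SQUID signal peaks" procedure.
# step_positioner() and read_squid_signal() are hypothetical placeholders
# supplied by the caller; they stand in for the lab's own instrument drivers.

def find_signal_maximum(step_positioner, read_squid_signal,
                        max_steps=2000, patience=50):
    """Step the coarse positioner until the thermal signal stops increasing."""
    best_signal = read_squid_signal()
    steps_since_improvement = 0
    for _ in range(max_steps):
        step_positioner()                 # equivalent of one Step+ click
        signal = read_squid_signal()      # e.g. lock-in magnitude on the SQUID
        if signal > best_signal:
            best_signal = signal
            steps_since_improvement = 0
        else:
            steps_since_improvement += 1
        if steps_since_improvement >= patience:
            break                         # signal has passed its maximum
    return best_signal
```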

Repeat the setup for approaching to contact, and approach to contact. Definitely watch the first few rounds of this approach! You can even watch the whole thing; it’ll take 30-45 minutes, but if you’ve messed something up then the approach will destroy both the SQUID and the device, because you’ve carefully aligned the SQUID with the device! Once you’ve reached the surface, you will set up the SQUID circuit. Attach the preamplifier to one of the SMA connectors at the top of the insert. Attach its output to the input of the feedback box. This output goes through the ground breaker that is clamped to the table in Andrea’s lab; all of these analog electronic circuits are susceptible to noise and ringing, so I’m sure there will be different idiosyncrasies in other laboratories with other electromagnetic environments. Attach the output of the feedback box to the BNC labelled FEEDBACK. This is the BNC that should get a resistor in series if you want to increase the transfer function. We generally use resistors between 1 kΩ and 10 kΩ for this. To start with, using nothing is fine. Plug the preamp and feedback box into fresh batteries. Turn the preamp on. Turn the feedback OFF. Hook up the SQUID bias wires to SQUID A and SQUID B. You can tell which they are because of the chunky low-pass filters on the end, but of course they are also labelled. Make sure both sides of the SQUID are grounded while hooking it up; there is a BNC T there for a grounding cap for this purpose. Hook up Output 2 of the Zurich to the signal input on the feedback box. Apply 1 V to the signal input. There’s a good chance you just used this same output and cable to apply a voltage to the device, so be careful not to skip this step and apply this voltage to the device itself!

You should see the SQUID array transfer function on the oscilloscope. Turn the rheostat/potentiometer on the preamp until this pattern has maximum amplitude. Turn the Offset rheostat/potentiometer on the feedback box until this passes through zero. There is a more sophisticated procedure for minimizing noise in the SQUID array; this is covered in great detail in documents Martin Huber has provided to the lab. But if you are a beginner this simple procedure will work fine. Flip the On switch on the feedback box, and watch the interference pattern vanish, replaced by a line near V = 0. Turn off the AC voltage going to the signal input. You are now ready to characterize the SQUID, although you’ll need to unground it. That includes removing the BNC grounding caps from the T’s downstream of the SQUID bias filters and also flipping the BNC switch on the top of the rack. Click ‘preliminary sweep’ on the nSOT characterizer window. Sweep from 0 to 0.1. If you see a linear slope, a ton of stuff is working! The SQUID bias circuit, the SQUID array, the feedback electronics, all the cryogenics; that’s a really good sign. If you see no signal, don’t panic. Once again, there’s a lot of stuff involved in this circuit and a ton of mistakes you can make. Go back through the list and check everything, then check to make sure the SQUID bias isn’t grounded somewhere. Increase the sweep range until you see a critical current or you get above 3.3 V, which is where the feedback box will fail. If you don’t see a critical current, you have a SHOVET but not a SQUID. If you see a critical current, close the window, switch to the nSOT characterizer, and characterize the SQUID. At this point, you are at the surface and over the device with a working SQUID, and you can begin your imaging campaign, so what comes next is up to you!

As wireless technology matured, Wireless Sensor Networks began to emerge as an advantageous alternative to their wired counterparts, due in part to easy deployment and scalability. The IEEE 802.15.4 communication standard was developed for use specifically with low-rate wireless personal area networks, with a focus on wireless sensor networks. In the early 2000s, the ZigBee Alliance worked to construct the ZigBee protocols, communication protocols functioning on the 802.15.4 MAC and physical layers. The main advantage of the ZigBee protocols over their competitor Bluetooth was ZigBee’s highly efficient sleep mode; ZigBee devices use a basic master-slave configuration suited for low-frequency data transmission star topologies, and can wake from sleep and transmit a packet in around 15 milliseconds. As a result, ZigBee devices can last for long periods on a single power supply. In recent years, Digi incorporated the 802.15.4 standard and ZigBee protocols into a proprietary RF module known as the Xbee. Xbee devices have modular firmware capable of constructing various network topologies and have been utilized as end devices in wireless sensor network and monitoring applications. However, the Xbee does not contain large processors for signal processing or local data analysis at the End Device. The limited processing capabilities of an Xbee device can be addressed with the implementation of additional hardware for processing support. Current WSN designs utilize an Arduino, a low-cost, reliable microcontroller capable of functioning as a building block for data acquisition or control systems, to augment a sensor node’s processing capabilities.
In addition to the Arduino and Xbee, prototype WSNs routinely incorporate a Raspberry Pi, a small, inexpensive Linux computer. The Raspberry Pi usually serves as a hardware platform for the ZigBee network Coordinator, and is used to direct network communication and control in wireless systems. Additionally, the Raspberry Pi can be used to handle WSN data storage by functioning as a database server. Raspberry Pi, Arduino, and Xbee based WSNs raise two main questions. First, since the ZigBee protocols were developed specifically to facilitate long node lifetimes, how does introducing additional processing hardware in the form of an Arduino impact overall node lifetime? And second, if one reason for the advance of WSNs is their scalability, how do developers address the relatively limited storage capabilities of the IoT devices and their potential inability to scale with increasing WSN traffic?

Both sensors require signal processing to convert their data into human-readable format. The Arduino uses the OneWire and DallasTemperature libraries to read temperature values from the DS18B20 sensor, and the SoftwareSerial and TinyGPS libraries to parse GPS data from the PMB-648 GPS module. The Arduino runs a single loop that manages reading temperature and GPS sensor data and communicating data via Xbee to the ZigBee network Coordinator. Both the Xbee and Arduino have sleep functions that minimize power consumption by periodically stopping unnecessary internal processes when those processes are not needed.

The sleep functions were implemented inside the Arduino main code loop to halt superfluous processes while the node was neither gathering nor transmitting data. In order to assess the impact of the Arduino on End Device lifetimes, End Device average power consumption was compared for a range of transmission frequencies to generate a graphical relationship between transmission frequency and power consumption of an Arduino-Xbee End Device. The ZigBee Coordinator node consists of a Raspberry Pi 2 Model B running Raspbian Jessie and a single Xbee Series 2 loaded with Coordinator firmware via XCTU. The Xbee Coordinator transmission and reception lines are input to the Raspberry Pi via its GPIO pins as a serial communication device. The Raspberry Pi uses the Python serial and Xbee libraries to parse incoming API statements from End Devices. In order to address the limited local storage on the Raspberry Pi ZigBee Coordinator, the device is transformed into an SQLite cloud database client. The Raspberry Pi uses the Python requests library to transmit data packets as URL parameters to a cloud server. The cloud database server handles all WSN data storage, relieving the Raspberry Pi of that responsibility.

Web Application development uses Heroku as a Platform as a Service. Heroku runs a Linux operating system, a Puma web server, and an SQLite database as a framework for development. The Web Application is in charge of managing wireless sensor network data storage in the SQLite database and rendering a useful, human-readable User Interface for data presentation in response to a browser request. The Web Application uses the Rails Model, View, and Controller architecture to pass incoming URL parameters to the SQLite database via Object Relational Mapping. The User Interface uses Gmaps4Rails, a Ruby gem, to superimpose Sensor and Coordinator GPS data as markers on an interactive map using the Google Maps API. The markers display relevant sensor data when clicked by the user, such as the MAC address for the Sensor and Coordinator nodes and the temperature in degrees Celsius for the Sensor node. A full list of the latest received data for each unique Sensor node is displayed in table format underneath the map for easy viewing. Additionally, the cloud database server is designed to be a shared database for multiple wireless sensor networks. A collaborative wireless sensor network cloud database may be useful in monitoring large-scale, geographically separate areas of interest, such as a nationwide average temperature census or large-scale environmental monitoring. Examples are given showing the cloud server functioning as a shared database.

ZigBee is a global open standard for communication using the 802.15.4 protocol. Maintained by the ZigBee Alliance, transceivers communicate over ISM signal bands with intended ranges of 10–100 m. ZigBee device firmware can have one of three functions, which combine to form various network topologies. The three types are: ZigBee Coordinator, ZigBee Router, and ZigBee End Device. ZigBee Coordinators act as the central controller and parent node to both end devices and routers. They are in charge of network management functions, such as storing security keys and network IDs, as well as handling network traffic. They are the most resource-heavy nodes in terms of processing and local memory, and must be active for a network to exist. ZigBee Routers are capable of performing application layer tasks as well as acting as fully functional sensor nodes.
Routers may function as network repeaters, extending network size by relaying information from end devices or other routers out of range of the Coordinator node. Routers are not necessary for a ZigBee network to exist, but are useful in forming sophisticated network structures or when the network contains a large number of nodes.
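
To make the Coordinator's role concrete, the following is a minimal sketch of the Raspberry Pi loop described above, assuming the pyserial and python-xbee libraries and the requests library for the cloud upload. The serial port, the comma-separated payload format, and the Heroku endpoint URL are illustrative placeholders, not values taken from the actual project.

```python
# Minimal sketch of a Raspberry Pi ZigBee Coordinator that parses incoming
# API frames from End Devices and forwards readings to a cloud database
# as URL parameters. Port, payload format, and endpoint are assumptions.
import serial
import requests
from xbee import ZigBee

PORT = "/dev/ttyAMA0"                                         # Pi GPIO UART (assumed)
ENDPOINT = "https://example-wsn-app.herokuapp.com/readings"   # hypothetical URL

ser = serial.Serial(PORT, 9600)
zb = ZigBee(ser, escaped=True)        # Xbee in API mode 2 assumed

try:
    while True:
        frame = zb.wait_read_frame()              # blocking read of one API frame
        if frame.get("id") != "rx":               # ignore non-data frames
            continue
        mac = frame["source_addr_long"].hex()     # End Device MAC address
        # Payload format "temp,lat,lon" is an assumption for illustration.
        temp, lat, lon = frame["rf_data"].decode().split(",")
        # Forward the reading to the cloud database as URL parameters.
        requests.get(ENDPOINT, params={
            "mac": mac, "temperature": temp, "lat": lat, "lon": lon,
        }, timeout=10)
finally:
    ser.close()
```

Offloading storage this way keeps the Coordinator stateless, which is the design choice the text motivates: the SQLite database on Heroku, rather than the Pi's SD card, absorbs growth in WSN traffic.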

This process populates previously empty Bloch states with electrons

We have already mentioned the most important consequence of a finite net Chern number: the presence of chiral edge states in the gap of a magnetic insulator. We have not yet discussed the consequences of this state of affairs, and we will do so next. The quantum states available in the bulk of trivial materials, i.e. Bloch states, are delocalized over the entire crystal, and as a result, when Bloch states are present at the Fermi level, electronic transport between any two points in the crystal can occur through the rapid local occupation and depletion of these quantum states. The edge states that appear in Chern magnets support a lower-dimensional analog of this property: they are delocalized quantum states restricted to the edge of a two dimensional crystal, and as a result they support electronic transport along the edge of the crystal through the rapid local occupation and depletion of these semi-localized quantum states. They do not support electronic transport through the bulk, and edge states that are not simply connected cannot transmit electrons through the bulk region separating them. As mentioned, the Chern number is a signed integer, and we have not yet discussed the physical meaning of the sign of the Chern number. The edge states in Chern magnets are chiral, meaning that electrons populating a particular edge state can only propagate in one direction around the edge of a two-dimensional crystal. The sign of the Chern number determines the direction or chirality with which propagation of the electronic wave function around the crystal occurs. Electronic bands with opposite Chern numbers produce edge states with opposite chiralities. So in summary, a two dimensional crystal that is a Chern magnet supports electronic transport through chiral edge states that live on its boundaries.

These systems remain bulk insulators, and edge states separated by the bulk cannot exchange electrons with each other. The sign of the Chern number is determined by the spin state that is occupied, and thus the chirality of the available edge state is hysteretically switchable, just like the magnetization of the two dimensional magnet. It is important to remember that these quantum states are just as real as Bloch states, and apart from the short list of differences discussed above, they can be analyzed and understood using many of the same tools. For example, in a metallic system, the Fermi level can be raised by exposing a crystal to a large population of free electrons and using an electrostatic gate to draw electrons into the crystal. These Bloch states have a fixed set of allowed momenta associated with their energies, and experiments that probe the momenta of electrons in a crystal will subsequently detect the presence of electrons in newly populated momentum eigenstates. Similarly, attaching a Chern magnet to a reservoir of electrons and using an electrostatic gate to draw electrons into the magnet will populate additional chiral edge states. Properties that depend on the number of electrons occupying these special quantum states will change accordingly. In all of these systems, conductivity strongly depends on the number of quantum states available at the Fermi level. For metallic systems, the number of Bloch states available at any particular energy depends on details of the band structure. The total conductance between any two points within the crystal depends on the relative positions of the two points and the geometry of the crystal.

Thus conductivity is an intrinsic property of a metal, but conductance is an extrinsic property of a metal, and both are challenging to compute precisely from first principles. At finite temperature, electrons occupying Bloch states in metals can dissipate energy by scattering off of phonons, other electrons, or defects into different nearby Bloch states. This is possible because at every position in real space and momentum space there is a near-continuum of quantum states available for an electron to scatter into with arbitrarily similar momentum and energy. This is not the case for electrons in chiral edge states of Chern magnets, which do not have available quantum states in the bulk. As a result, electrons that enter chiral edge state wave functions do not dissipate energy. There is a dissipative cost for getting electrons into these wave functions (this was discussed in the previous paragraph), but this energetic cost is independent of all details of the shape and environment of the chiral edge state, even at finite temperature. This is why the Hall resistance Rxy in a Chern magnet is so precisely quantized; it must take on a value of (1/C) h/e2, and processes that would modify the resistance in other materials are strictly forbidden in Chern magnets. All bands have finite degeneracy; that is, they can only accommodate a certain number of electrons per unit area or volume of crystal. If electrons are forced into a crystal after a particular band is full, they will end up in a different band, generally the band that is next lowest in energy. This degeneracy depends only on the properties of the crystal. Chern bands, however, have electronic degeneracies that change in response to an applied magnetic field; that is to say, when Chern magnets are exposed to an external magnetic field, their electronic bands will change to accommodate more electrons.

Simple theoretical models that produce quantized anomalous Hall effects have been known for decades.
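
In equations, and using the conventions standard in the quantum Hall literature (the signs here are conventional choices), the two statements above read

\[
\sigma_{xy} = C\,\frac{e^{2}}{h}, \qquad R_{xy} = \frac{1}{C}\,\frac{h}{e^{2}} \approx \frac{25.8\ \text{k}\Omega}{|C|}, \qquad \frac{dn}{dB} = C\,\frac{e}{h},
\]

where the last relation (the Streda formula) expresses the field-dependent degeneracy of a Chern band: each additional tesla of applied field lets a |C| = 1 band accommodate roughly 2.4 × 10^10 additional electrons per cm².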

The challenge, then, lay in realizing real materials with all of the ingredients necessary to produce a Chern magnet. These are, in short: high Berry curvature, a two-dimensional or nearly two-dimensional crystal, and an interaction-driven gap coupled to magnetic order. It turns out that a variety of material systems with high Berry curvature are known in three dimensions; three dimensional topological insulators satisfy the first criterion, and are relatively straightforward to produce and deposit in thin film form using molecular beam epitaxy, satisfying the second. These systems do not, however, have magnetic order. Researchers attempted to induce magnetic order in these materials with the addition of magnetic dopants. It was hoped that by peppering the lattice with ions with large magnetic moments and strong exchange interactions, magnetic order could be induced in the band structure of the material, as illustrated in Fig. 3.11. This approach ultimately succeeded in producing the first material ever shown to support a quantized anomalous Hall effect. An image of a film of this material and associated electronic transport data are shown in Fig. 3.12.

We have already discussed the notion of the Curie temperature and its origin. To reiterate, the Curie temperature is a temperature set by the lowest energy scale at which excitations that change the magnetic order can appear. It is worth emphasizing one point in particular: the set of excitations that change the magnetic order includes, but is not limited to, all those that promote an electron from the valence band to the conduction band, i.e. the excitations that support charge transport through the bulk of a magnetic insulator. For this reason, the energy scale of the Curie temperature is generally expected to be lower than the energy scale of thermal activation of electrons into the bulk conduction band of the magnetic insulator.

There are many a priori reasons to suspect that magnetically doped topological insulators might have strong charge disorder. The strongest is the presence of the magnetic dopants themselves: dopants always generate significant charge disorder; in a sense they are by definition a source of disorder. Because their distribution throughout the host crystal is not ordered, dopants can reduce the effective band gap through the mechanism illustrated in Fig. 3.14. It turns out this concern about magnetically doped topological insulators has been borne out in practice; the systems have been improved since their original discovery, but in all known samples the Curie temperatures dramatically exceed the charge gaps. This puts these systems deep in the kBTC > EGap limit. The resolution to this issue has always been clear, if not exactly easy. If a crystal could be realized that had bands with both finite Chern numbers and magnetic interactions strong enough to produce a magnetic insulator, then we could expect such a system to be a clean Chern magnet. Such a system would likely support a QAH effect at much higher temperature than the status quo, since it would not be limited by charge disorder.

Other researchers predicted that breaking inversion symmetry in graphene would open a gap near charge neutrality with strong Berry curvature at the band edges. The graphene heterostructures we make in this field are almost always encapsulated in the two dimensional crystal hBN, which has a lattice constant quite close to that of graphene.
The presence of this two dimensional crystal technically always does break inversion symmetry for graphene crystals, but this effect is averaged out over many graphene unit cells whenever the lattices of hBN and graphene are not aligned with each other.

Therefore the simplest way to break inversion symmetry in graphene systems is to align the graphene lattice with the lattice of one of its encapsulating hBN crystals. Experiments on such a device indeed realized a large valley Hall effect, an analogue for the valley degree of freedom of the spin Hall effect discussed in the previous chapter, a tantalizing clue that the researchers had indeed produced high Berry curvature bands in graphene. Twisted bilayer graphene aligned to hBN thus has all of the ingredients necessary for realizing an intrinsic Chern magnet: it has flat bands for realizing a magnetic insulator, it has strong Berry curvature, and it is highly gate tunable, so we can easily reach the Fermi level at which an interaction-driven gap is realized. Magnetism with a strong anomalous Hall effect was first realized in hBN-aligned twisted bilayer graphene in 2019. Some basic properties of this phase are illustrated in Fig. 4.3. This system was clearly a magnet with strong Berry curvature; it was not gapped and thus did not realize a quantized anomalous Hall effect, but it was unknown whether this was because of disorder or because the system did not have strong enough interactions or small enough bandwidth to realize a gap. The stage was set for the discovery of a quantized anomalous Hall effect in an intrinsic Chern magnet in hBN-aligned twisted bilayer graphene.

We return now to our discussion of twisted bilayer graphene; we will be discussing domain dynamics. To investigate the domain dynamics directly, we compare magnetic structure across different states stabilized in the midst of magnetic field driven reversal. Figure 5.13A shows a schematic depiction of our transport measurement, and Fig. 5.13B shows the resulting Rxy data for both a major hysteresis loop spanning the two fully polarized states at Rxy = ±h/e² and a minor loop that terminates in a mixed polarization state at Rxy ≈ 0. All three states represented by these hysteresis loops can be stabilized at B = 22 mT for T = 2.1 K, where our nanoSQUID has excellent sensitivity, allowing a direct comparison of their respective magnetic structures. Figures 5.13, F and G, show images obtained by subtracting one of the images at full positive or negative polarization from the mixed state, as indicated in the lower left corners of the panels. Applying the same magnetic inversion algorithm used in Fig. 5.1 produces maps of m corresponding to these differences, allowing us to visualize the domain structure generating the intermediate plateau at Rxy ≈ 0 seen in the major hysteresis loop. The domains presented in Figs. 5.13, H and I, are difference images; the domain structures actually realized in experiment are illustrated schematically in Fig. 5.13, J-L. Evidently, the anomalous Hall resistance of the device in this state is dominated by the interplay of two large magnetic domains, each comprising about half of the active area. Armed with knowledge of the domain structure, it is straightforward to understand the behavior of the measured transport in the mixed state imaged in Fig. 5.13D. In particular, the state corresponds to the presence of a single domain wall that crosses the device, separating both the current and the Hall voltage contacts. In the limit in which the chiral edge states at the boundaries of each magnetic domain are in equilibrium, there will be no drop in chemical potential across the domain wall, leading to Rxy = 0. This is very close to the observed value of Rxy = 1.0 kΩ ≈ 0.039 h/e².
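As a quick sanity check on the numbers quoted above (my own arithmetic, nothing more), the snippet below expresses the measured mixed-state Hall resistance in units of h/e², confirming that it is small compared to the fully polarized value.

```python
# Trivial check (illustration only): express the measured mixed-state Hall
# resistance in units of h/e^2, to compare it with the fully polarized value
# Rxy = ±h/e^2 and the ideal equilibrated value Rxy = 0.
from scipy.constants import h, e

R_quantum = h / e**2   # ~25.813 kOhm
R_measured = 1.0e3     # measured mixed-state Rxy in ohms, from the text
print(f"Rxy = {R_measured / R_quantum:.3f} h/e^2")  # ~0.039 h/e^2, close to zero
```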

The second effect cannot be replicated in three dimensional systems with any known technique.

A semiclassical model, in which electrons within the system redistribute themselves in the out-of-plane direction to screen this electric field, does not apply; instead, the wave functions hosted by the two dimensional crystal are themselves deformed in response to the applied electric field. This changes the electronic band structure of the crystal directly, without affecting the electron density. So to summarize, when a two dimensional crystal is encapsulated with gates to produce a three-layer capacitor, researchers can tune both the electron density and the band structure of the crystal at their pleasure. In the first case, this represents a degree of control that would require the creation of many separate samples to replicate in a three dimensional system.

There is a temptation to focus on the exotic phenomena that these techniques for manipulating the electronic structure of two dimensional crystals have allowed us to discover, and there will be plenty of time for that. I’d first like to take a moment to impress upon the reader the remarkable degree of control and extent of theoretical understanding these technologies have allowed us to achieve over those condensed matter systems that are known not to host any new physics. I’ve included several figures from a publication produced by Andrea’s lab with which I was completely uninvolved. It contains precise calculations of the compressibility of a particular allotrope of trilayer graphene as a function of electron density and out-of-plane electric field, based on the band structure of the system.

It also contains a measurement of compressibility as a function of electron density and out-of-plane electric field, performed using the techniques discussed above. The details of the physics discussed in that publication aren’t important for my point here; the observation I’d like to focus on is the fact that, for this particular condensed matter system, quantitatively accurate agreement between the predictions of our models and the real behavior of the system has been achieved. I remember sitting in a group meeting early in my time working with Andrea’s lab, long before I understood much about Chern magnets or any of the other ideas that would come to define my graduate research work, and marvelling at that fact. Experimental condensed matter physics necessarily involves the study of systems with an enormous number of degrees of freedom and countless opportunities for disorder and complexity to contaminate results. Too often, work in this field feels uncomfortably close to gluing wires to rocks and then arguing about how to interpret the results, with no real hope of achieving full understanding, or closure, or even agreement about the conclusions we can extract from our experiments.

Within the field of exfoliated heterostructures, it is now clear that we really can hope to pursue true quantitative accuracy in calculations of the properties of condensed matter systems. Rich datasets like these, with a variety of impactful independent variables, place extremely strong limits on theories. They allow us to be precise in our comparisons of theory to experiment, and as a result they have allowed us to bring models based on band structure theory to new heights of predictive power. But most importantly, under these conditions we can easily identify deviations from our expectations with interesting new phenomena, in particular situations in which electronic interactions produce even subtle deviations from the predictions of single particle band theory.
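To give a flavor of what a band-structure-based compressibility calculation involves, here is a deliberately simplified sketch of my own. It uses a single Dirac cone with an assumed Fermi velocity rather than the trilayer graphene band structure treated in that publication, so it illustrates only the procedure (compute μ(n) from the band structure, then differentiate), not their result.

```python
# Simplified illustration (not the trilayer calculation from the publication):
# for a model band structure one can compute mu(n) and differentiate to get the
# inverse compressibility dmu/dn. The model here is a single Dirac cone band
# (monolayer-graphene-like, spin and valley degenerate) with an assumed Fermi velocity.
import numpy as np
from scipy.constants import hbar, e

v_f = 1.0e6                        # assumed Fermi velocity (m/s)
n = np.linspace(1e14, 5e16, 400)   # electron density (m^-2), ~1e10 to 5e12 cm^-2

# n(mu) = mu^2 / (pi * hbar^2 * v_f^2)  =>  mu(n) = hbar * v_f * sqrt(pi * n)
mu = hbar * v_f * np.sqrt(np.pi * n)   # chemical potential (J)
dmu_dn = np.gradient(mu, n)            # inverse compressibility (J m^2)

print(f"mu at n = 1e12 cm^-2: {hbar*v_f*np.sqrt(np.pi*1e16)/e*1000:.0f} meV")
print(f"dmu/dn spans {dmu_dn.min():.2e} to {dmu_dn.max():.2e} J m^2 over this range")
```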

This is more or less how I would explain the explosion of interest in the physics of two dimensional crystalline systems within experimental condensed matter physics over the past decade. If you ask a theorist whether two dimensional physical systems have any special properties, they will tell you that they do. They might say that the magnetic phase transitions in a Heisenberg model on a two dimensional lattice differ dramatically from those on a three dimensional one. They might say that high Tc superconductivity is apparently a two dimensional phenomenon. They might note that two dimensional electronic systems can support quantum Hall effects and even be Chern magnets, while three dimensional systems cannot. But it is easy to miss the forest for the trees here, and I would argue that interest in these particular physical phenomena is not behind the recent explosion in the popularity of the study of exfoliated two dimensional crystals in condensed matter physics. Instead, much more basic technical considerations are largely responsible: it is simply much easier for us to use charge density and band structure as independent variables in two dimensional crystals than in three dimensional crystals, and that capability has facilitated rapid progress in our understanding of these systems.

The techniques described above still have some limitations, and chief among them is the limited range of electronic densities they can reach. Of course, the gold standard of electron density modulation is the ability to completely fill or deplete an electronic band, which requires about one electron per unit cell in the lattice. Chemical doping can achieve enormous offsets in charge density, sometimes as high as one electron per unit cell.
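For a rough sense of the densities an electrostatic gate can induce by comparison, here is a simple parallel-plate estimate of my own; the dielectric thicknesses, permittivity, and gate voltages below are hypothetical round numbers, and the standard dual-gate expressions for carrier density and displacement field are used.

```python
# Illustration only: simple parallel-plate estimates for a dual-gated 2D crystal.
# All numbers (hBN thicknesses, permittivity, gate voltages) are hypothetical.
from scipy.constants import epsilon_0, e

eps_hbn = 3.5                    # assumed relative permittivity of the hBN dielectric
d_top, d_bot = 30e-9, 40e-9      # assumed gate dielectric thicknesses (m)
V_top, V_bot = 2.0, -1.0         # assumed gate voltages (V)

c_top = epsilon_0 * eps_hbn / d_top   # geometric capacitance per unit area (F/m^2)
c_bot = epsilon_0 * eps_hbn / d_bot

# Induced carrier density (per m^2) and displacement field (sign convention varies).
n = (c_top * V_top + c_bot * V_bot) / e
D = (c_bot * V_bot - c_top * V_top) / 2   # C/m^2
print(f"n ~ {n/1e4:.2e} cm^-2,  D/eps0 ~ {D/epsilon_0*1e-9:.3f} V/nm")

# For scale: one electron per graphene unit cell corresponds to ~1.9e15 cm^-2,
# so gate-induced densities of a few 1e12 cm^-2 fill only a small fraction of a band.
```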

Electrostatic gating of graphene can produce crystals with an extra electron per hundred unit cells at most. This limitation isn’t fundamental, and there are some ideas in the community for ways to improve it, but for now it remains true that electrostatic gates can modify electron densities only slightly relative to the total electron densities of real two dimensional crystals. As it stands, electrostatic gating can only substantially modify the properties of a crystal if the crystal happens to have large variations in the number and nature of available quantum states near charge neutrality. For many crystals this is not the case; thankfully it is for graphene, and for a wide variety of synthetic crystals we will discuss shortly. Electrostatic gating of two dimensional crystals was rapidly becoming a mature technology by the time I started my PhD.

So where does nanoSQUID magnetometry fit into all of this? A variety of other techniques exist for microscopic imaging of magnetic fields; the most capable of these other technologies recently developed the sensitivity and spatial resolution necessary to image stray magnetic fields from a fully polarized two dimensional magnet, with a magnetization of about one electron spin per crystalline unit cell, and this was widely viewed within the community as a remarkable achievement. We will shortly be discussing several ferromagnets composed entirely of electrons we have added to a two dimensional crystal using electrostatic gates. Because of the aforementioned limitations of electrostatic gating as a technology, this necessarily means that these will be extremely low density magnets with vanishingly small magnetizations, at least 100 times smaller than those produced by a fully polarized two dimensional magnet like the one in the reference above. It is difficult to summarize performance metrics for magnetometers, especially those used for microscopy. Many magnetometers are sensitive to magnetic flux, not field, so very high magnetic field sensitivities are achievable by simply sampling a large region, but of course that is not a useful option when imaging microscopic magnetic systems. Suffice it to say that nanoSQUID sensors, which had been invented in 2010 and integrated into a scanning probe microscope by their inventors by 2012, combine high spatial resolution with very high magnetic field sensitivity. This combination of performance metrics was and remains unique in its ability to probe the minute magnetic fields associated with gate-tunable electronic phenomena at the length scales demanded by the size of the devices. Gate-tunable phenomena in exfoliated heterostructures and nanoSQUID microscopy were uniquely well-matched to each other, and although at the time I started my graduate research only a small handful of gate-tunable magnetic phenomena had so far been discovered in exfoliated two dimensional crystals, nanoSQUID microscopy seemed like the perfect tool for investigating them.

So what exactly is nanoSQUID microscopy? We can start by discussing Superconducting Quantum Interference Devices, or SQUIDs. In summary, SQUIDs are electronic devices with properties that strongly depend on the magnetic field to which they are exposed, which makes them useful as magnetometers. I won’t delve into the details of how and why SQUIDs work here, but I will explain briefly how SQUIDs are made, since that will be necessary for understanding how nanoSQUID imaging differs from other SQUID-based imaging technologies.
A SQUID is a pair of superconducting wires in parallel, each with a thin barrier in series. The electronic transport properties of this device depend strongly on the magnetic flux through the region between the wires, i.e. inside the hole in the center of the device in Fig. 1.3.

To be a little bit more precise, superconductors transport current without dissipation so long as the current density stays below a sharp threshold. When this threshold, the critical current, is exceeded, the superconductor reverts to dissipative transport, like a normal metal. In this so-called ‘voltage state,’ electronic transport is dissipative and highly sensitive to the magnetic field B. Any non-superconductor can function as a barrier, including insulators, metals, and superconducting regions thinner than the coherence length.

This is sufficient for many applications, but it presents some issues for producing sensors for scanning probe microscopy. Scanning probe microscopy is a technique through which any sensor can be used to generate images; we simply move the sensor to every point in a grid, perform a measurement, and use those measurements to populate the pixels of a two dimensional array. This can of course be done with a SQUID, and many researchers have used SQUIDs fabricated this way to great effect. But the spatial resolution of a scanning SQUID magnetometry microscope is set by the size of the SQUID, and there are limits to how small SQUIDs can be fabricated using photolithography. It is also challenging to fashion these SQUIDs into probes that can be safely brought close to a surface for scanning; photolithography produces SQUIDs on large, flat silicon substrates, and these must subsequently be cut out and ground down into a sharp cantilever with the SQUID on the apex in order to get the SQUID close enough to a surface for microscopy. In summary, the ideal SQUID sensor for microscopy would be one that was smaller than could be achieved using traditional photolithography and located precisely on the apex of a sharp needle to facilitate scanning.

As is so often the case when developing new technologies, we have to make the best of the tools other clever people have already developed. In the case of nanoSQUID microscopy, the inventors of the technique took advantage of a lot of legwork done by biologists. Long ago, glass blowers found that hollow glass tubes could be heated close to their melting point and drawn out into long cones without crushing their hollow interiors. Chemists used this fact to make pipettes for manipulating small volumes of liquid, and biologists later used the techniques they developed to fashion microscopic hypodermic needles that could be used to inject chemicals into, and monitor the chemical environment inside, individual cells in a process called patch-clamping. A rich array of tools exists for producing these structures, called micropipettes, for chemists and biologists. Eli Zeldov noticed that these structures already had the perfect geometry to serve as substrates for tiny SQUIDs. By depositing superconducting materials onto these substrates from a few different directions, one can produce superconducting contacts and a tiny torus of superconductor on the apex of the micropipette. The same group of researchers successfully integrated these sensors into a scanning probe microscope at cryogenic temperatures. The sizes of these SQUIDs are limited only by how small a micropipette can be made, and since the invention of the technique SQUIDs as small as 30 nm have been realized. We call these sensors nanoSQUIDs, or nanoSQUID-on-tip sensors. A few representative examples of nanoSQUID sensors are shown in Fig. 1.4.
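The raster-scanning procedure described above is conceptually simple, and a minimal sketch of it fits in a few lines. The move_to() and read_sensor() callables below are hypothetical stand-ins for whatever positioner and magnetometer readout a real microscope exposes; nothing here is specific to our instrument.

```python
# Generic raster-scan sketch (illustration only): move a point sensor over a
# grid, record one measurement per pixel, and assemble the result into an image.
import numpy as np

def raster_scan(move_to, read_sensor, xs, ys):
    """Return a 2D image with one sensor reading per (x, y) grid point."""
    image = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            move_to(x, y)                  # position the sensor over this pixel
            image[i, j] = read_sensor()    # e.g. local magnetic field in tesla
    return image

# Example usage with dummy hardware functions:
if __name__ == "__main__":
    xs = np.linspace(0, 1e-6, 64)   # 1 um x 1 um field of view
    ys = np.linspace(0, 1e-6, 64)
    img = raster_scan(lambda x, y: None, lambda: np.random.normal(), xs, ys)
    print(img.shape)
```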
A characterization of the electronic transport properties of such a sensor, and in particular the sensor’s response to an applied magnetic field, is shown in Fig. 1.5. NanoSQUID microscopes share many of the core competencies of more traditional, planar scanning SQUID microscopes. They dissipate little power, and the measurements they generate are quantitative and can be easily calibrated by measuring the period of the SQUID’s electronic response to an applied magnetic field.
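That calibration relies on the fact that a SQUID’s response is periodic in the magnetic flux threading its loop, with period equal to the superconducting flux quantum Φ0 = h/2e. As a rough illustration of my own, the sketch below converts that flux period into a magnetic field period for a small circular loop; the 100 nm effective diameter is a plausible nanoSQUID scale chosen for illustration rather than a value quoted here.

```python
# Illustration only: the field period of a SQUID's response is one flux quantum
# divided by the effective loop area. The ~100 nm diameter below is a plausible
# nanoSQUID scale chosen for illustration, not a value quoted in the text.
import math
from scipy.constants import h, e

phi_0 = h / (2 * e)                     # superconducting flux quantum, ~2.07e-15 Wb
diameter = 100e-9                       # assumed effective loop diameter (m)
area = math.pi * (diameter / 2) ** 2    # effective loop area (m^2)

delta_B = phi_0 / area                  # field change per response period (T)
print(f"one flux quantum corresponds to dB ~ {delta_B:.2f} T for a {diameter*1e9:.0f} nm loop")
```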