A two-compartment model for SA dynamics on human skin was developed and fitted to data

Some parameters are identifiable to a reasonable degree through model fitting, but there is a large degree of uncertainty in the viral transport efficiencies and the AD kinetic parameters. While this could be a consequence of fitting a limited number of data points with several parameters, the viral load at harvest and the risk estimates were well constrained. This combination of large variation in parameters with ‘usefully tight quantitative predictions’ is termed the sloppiness of parameter sensitivities, and has been observed in physics and systems biology. Well-designed experiments may simultaneously reduce uncertainty in the parameters as well as the predictions, thereby increasing confidence in the predictions. One possible experiment to reduce parameter uncertainty is recording the transpiration and growth rate to fit eq. independently and obtain at and bt. An interesting outcome of my analysis is the strong association of risk with plant growth conditions. The health risks from consuming lettuce irrigated with recycled wastewater are highest in hydroponically grown lettuce, followed by soil-grown lettuce under Sc2, and lowest in soil-grown lettuce under Sc1. This difference in risk estimates stems to a large degree from the difference in AD kinetic constants. Increasing katt,s will decrease risk, as more viruses become attached to the growth medium, while increasing kdet,s will have the opposite effect, as more detached viruses are available for uptake by the plant. The combined effect of the AD parameters depends on their magnitudes and is portrayed in Fig. A.4. This result indicates that a better understanding of the virus interaction with the growth environment can lead to an improved understanding of risk. More importantly, this outcome indicates that soil plays a vital role in the removal of viruses from irrigation water through the adsorption of viral particles. An investigation focused on understanding the influence of soil composition on viral attachment would help refine the transport model.
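To make the direction of these AD effects concrete, the following minimal sketch integrates a simplified water–medium exchange and tracks cumulative plant uptake. All rate constants (katt_s, kdet_s, k_uptake) are hypothetical illustration values, not the fitted parameters; only the qualitative trend of Fig. A.4 is intended.

```python
# Minimal sketch of attachment-detachment (AD) kinetics in the growth medium.
# All parameter values are hypothetical, chosen only to illustrate the trend:
# a larger katt_s lowers the free virus available for plant uptake.
import numpy as np
from scipy.integrate import solve_ivp

def ad_model(t, y, katt_s, kdet_s, k_uptake):
    """y[0]: free virus in water; y[1]: virus attached to the medium;
    y[2]: cumulative virus taken up by the plant."""
    free, attached, taken = y
    d_free = -katt_s * free + kdet_s * attached - k_uptake * free
    d_attached = katt_s * free - kdet_s * attached
    d_taken = k_uptake * free
    return [d_free, d_attached, d_taken]

kdet_s, k_uptake = 0.05, 0.10           # 1/day, illustrative
for katt_s in (0.5, 2.0):               # weak vs. strong attachment
    sol = solve_ivp(ad_model, (0, 14), [1e5, 0.0, 0.0],
                    args=(katt_s, kdet_s, k_uptake))
    print(f"katt_s={katt_s}: cumulative uptake at day 14 = {sol.y[2, -1]:.3g}")
```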

The risk predicted by this dynamic transport model is higher than the EPA annual infection risk benchmark as well as the WHO annual disease burden benchmark. The reasons for this outcome are manifold. First, there is significant variability in the reported internalization of viruses in plants. In searching for data to model NoV transport in plants, I filtered the existing data using the following criteria: 1) human NoV was used as the seed agent, and 2) quantitative viral results were reported in the growth medium and at different locations in the plant. Based on these criteria, the data from represent the best available data on viral internalization and transport in lettuce. However, it is also important to note that a similar study by did not observe human NoV internalization in lettuce. This discrepancy could be due to the specific subspecies of the plant and the growth conditions used in the studies. Moreover, minor changes such as damage to roots or a decrease in the humidity of the growing environment can promote pathogen internalization. Alternatively, tracking viral transport through the growth medium and the plant is challenging and may yield false results due to reaction inhibition in genome amplification and poor detection limits. The risk outcome of this study is conservative because it assumes an individual consumes the wastewater-irrigated lettuce daily for an entire year. This assumption and the corresponding higher risk estimates are only applicable to a small portion of consumers, while most consumers in the U.S. are likely to have a more diverse diet. While the model outcomes presented here represent the best attempt given the available data, it is also possible that the internalization observed by is an extreme case and that internalization typically occurs to a lesser extent. As previously discussed by others, risk estimates from different NoV dose-response models differ by orders of magnitude. This study primarily aims to introduce a viral transport model without advocating any one dose-response model. The future refinement of pathogen dose-response models will reduce variability in risk estimates.
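As a concrete illustration of how the conservative daily-consumption assumption propagates into an annual figure, the sketch below converts a daily dose into an annual infection risk. The approximate beta-Poisson parameters and the daily dose are placeholders, not an endorsement of any particular NoV dose-response model.

```python
# Sketch of how daily doses translate into an annual infection risk, assuming
# daily consumption for a full year (the conservative assumption in the text).
# The approximate beta-Poisson parameters below are placeholders.

def p_infection_beta_poisson(dose, alpha=0.1, n50=1000.0):
    """Approximate beta-Poisson dose-response (placeholder parameters)."""
    return 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / n50) ** (-alpha)

daily_dose = 50.0                        # genome copies/day, illustrative
p_daily = p_infection_beta_poisson(daily_dose)
p_annual = 1.0 - (1.0 - p_daily) ** 365  # 365 independent daily exposures
print(f"daily risk {p_daily:.3g}, annual risk {p_annual:.3g} "
      f"(EPA benchmark: 1e-4 per year)")
```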

The risk of consuming lettuce grown in soil as predicted by is higher than my predictions, although the results of were used in both studies. This is a consequence of considering the greater adsorption capability of soil, which is not reflected when assuming a simple input:output ratio. Differences in the inoculating concentrations of NoV and in the body weight and consumption rate distributions also contributed to the difference in the outcomes, but to a lesser extent. In addition to a transport model predicting the NoV load in lettuce, I explored strategies to reduce the risk of NoV gastroenteritis by increasing the holding time of the produce after harvesting or by using larger hydroponic culture volumes. Although neither strategy could significantly alleviate the risks, the process highlights two strengths of modeling: 1) it provides analytical support for arguments that would otherwise be less convincing; 2) it predicts outcomes of experiments without the physical resources required to perform them. For instance, the model can be used to explore alternate irrigation schedules to reduce the NoV internalization risk. Modeling also helps encapsulate our understanding of the system and generate hypotheses. For example, simple first-order decay did not reproduce the trend observed in the water, which suggests that additional mechanisms are at play. I postulated the attachment of virus particles to the walls of the hydroponic system as one possible mechanism and examined the fit of the model. Although viral attachment to glass and other materials has been observed before, here it stands as a hypothesis that can be tested. In addition to generating and testing hypotheses, some of my model assumptions raise broader questions for future research. For example, I assumed that viruses are transported at the transpiration rate from the growth medium to the roots. However, not much is known regarding the role of roots in the internalization of viruses. Investigating the defense mechanisms of plant roots against passive viral transport, e.g., through rhizosphere microbiome interactions, may shed light on the broader understanding of plant and microbe interactions. The question of extending this model to other pathogen and plant systems draws attention to the dearth of data available to enable such efforts. While modeling another virus may not require changes to the model, understanding transport in other plants can be challenging.
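The wall-attachment hypothesis can be stated compactly in code. The sketch below contrasts simple first-order decay with decay plus reversible attachment to the system walls; all rate constants are hypothetical, and the point is only the qualitative shape of the water-phase curve.

```python
# Sketch contrasting simple first-order decay with decay plus reversible
# attachment to the hydroponic-system walls, the hypothesized extra mechanism.
# Rate constants are hypothetical; only the curve shapes matter here.
import numpy as np
from scipy.integrate import solve_ivp

def decay_only(t, y, k_dec):
    return [-k_dec * y[0]]

def decay_plus_walls(t, y, k_dec, k_att, k_det):
    water, wall = y
    return [-k_dec * water - k_att * water + k_det * wall,
            k_att * water - k_det * wall]

t_eval = np.linspace(0, 7, 8)
a = solve_ivp(decay_only, (0, 7), [1e5], args=(0.3,), t_eval=t_eval)
b = solve_ivp(decay_plus_walls, (0, 7), [1e5, 0.0],
              args=(0.3, 0.8, 0.05), t_eval=t_eval)
print("decay only   :", np.round(a.y[0], 1))
print("decay + walls:", np.round(b.y[0], 1))  # faster early drop, then slower decline
```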

The data required include models for growth rate and transpiration, plant growth characteristics such as density and water content, as well as internalization studies to determine transport efficiencies. However, from the perspective of risk management, lettuce may be used as the worst-case estimate of risk in water reuse owing to its high consumption and minimal pathogen inactivation by cooking. This worst-case scenario can be used to set water quality standards for irrigation water used in the production of fresh produce eaten raw. The models can also be extended to include pathogen transport to the plant tissue from manure/biosolids that are used as organic fertilizer.

By assuming that SA transitions from an un-adapted state to an adapted state, the model is grounded in first principles. The stochastic aspect of dose-response emerges naturally from a stochastic simulation of the growth kinetics. In addition, the model predicts carrier outcomes without additional data. Armitage et al. interpret results from several studies to posit that pathogens, including bacteria, show an initial exponential increase in all individuals. We argue that this is not inconsistent with the initial decrease assumption for three reasons. First, the exponential increase is observed in organs like the liver or spleen, and not the whole body or the site of inoculation. This does not refute the possibility of an initial decrease at the inoculation site or in the whole body. Second, the posited decrease is transient, and samples may not have been collected during this window. Third, the magnitude of the decrease is low at higher inocula and consequently less detectable. Further, compared to the initial decrease observed when all bacteria are in the S1 state, one would expect 1) no initial decrease if seeding with bacteria all in the S2 state, and 2) a smaller initial decrease if seeding with a mixture of bacteria in the S1 and S2 states. These trends have been observed when pathogens from in-vivo cultures were used to infect the host. We note that the transition from S1 to S2 is perhaps not instantaneous, and the pathogen population may constitute a continuum of states between S1 and S2. When loads were measured in the whole body, a transient decrease was observed in some cases. Clumping of bacteria was offered as a possible explanation, but this does not rule out an actual reduction in viable counts observed in other systems. Armitage et al. also note that non-responders show a subsequent decrease after the initial exponential increase. These observations were substantiated by measurements from survivors that were killed at later time points. This decrease is probably due to the activation of the adaptive immune response inside the host, which could be incorporated in a within-host variant of the 2C model. Using the concept of IED to evaluate response, I am able to explain the data with a single IED. It has been observed that the toxic dose of a chemical can vary between individual subjects or with the season.
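A minimal Gillespie-style sketch of this two-state idea is given below, assuming hypothetical rates for S1 death, S1-to-S2 adaptation, and S2 division. It reproduces the qualitative expectations stated above: a transient dip in total load for an all-S1 inoculum and no dip for an all-S2 inoculum.

```python
# Minimal Gillespie-style sketch of the two-state (S1 -> S2) idea: un-adapted
# S1 cells die or adapt; adapted S2 cells divide. All rates are hypothetical.
import random

def simulate(s1, s2, t_end=10.0, mu=0.5, a=0.3, g=0.6, seed=1):
    """mu: S1 death rate, a: S1->S2 adaptation rate, g: S2 division rate."""
    rng, t, traj = random.Random(seed), 0.0, [(0.0, s1 + s2)]
    while t < t_end and s1 + s2 > 0:
        rates = [mu * s1, a * s1, g * s2]
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)          # time to next event
        r = rng.uniform(0, total)            # pick which event fires
        if r < rates[0]:
            s1 -= 1                          # S1 death
        elif r < rates[0] + rates[1]:
            s1, s2 = s1 - 1, s2 + 1          # adaptation
        else:
            s2 += 1                          # S2 division
        traj.append((t, s1 + s2))
    return traj

for label, (n1, n2) in {"all-S1": (200, 0), "all-S2": (0, 200)}.items():
    traj = simulate(n1, n2)
    print(label, "minimum total load:", min(n for _, n in traj))
```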

A similar stochasticity may be expected in the IED between individuals, attributable to differences in covariates such as body weight, sex, immune history, and biological noise. However, assuming this was not necessary to produce an acceptable fit. The model was fit to data following a two-step optimization procedure. Direct multi-objective optimization was not pursued since the objective functions were very different from each other. The deterministic ODE model was easy to evaluate, and a global optimization algorithm was employed to guard against local minima while fitting the growth data. Fitting the dose-response data was computationally challenging for three reasons: the objective function is non-smooth, the stochastic simulations have to be repeated many times, and the number of stochastic entities being modeled is not small. Hence, a simple brute-force optimization was adopted. The RH model exhibits a sharp initial decline in SA density and predicts values lower than the observed minimum for each initial load. The 2C model only goes as low as the lowest load observed on the skin. Experiments similar to that of with greater time resolution are necessary to ascertain the time of the true minimal SA density. The stochastic 2C model does not perform as well as the RH model. However, the 2C model fit to the dose-response data improves along the Pareto front. It is possible that exploring solutions with a higher growth objective may yield a solution that fits as well as, if not better than, the RH fit to the dose-response data. Moreover, the proposed approach offers advantages over the existing approach in that 1) it is fully mechanistic, and hence more applicable in other scenarios, and 2) in addition to response outcomes, it also accounts for carrier outcomes. Perhaps the most interesting outcome of this study is the incorporation of quorum sensing in dose-response modeling. The rejection of the model without cooperativity in SA pathogenesis and the adequate fit of the model with cooperativity make a strong case for the cooperativity-in-action hypothesis. Experimental support for this hypothesis includes the well-studied Agr system of quorum sensing. In the words of Le et al., the Agr system “generally enhances pathogenesis by increasing expression of aggressive virulence determinants such as toxins and degradative enzymes”. This system is activated when bacteria reach a certain density, which results in a disease response such as a murine abscess. However, the 2C model posits that quorum sensing enhances the bacterial growth rate, for which I propose two possible explanations. The direct explanation is the existence of an as yet undiscovered signaling mechanism responsible for density-dependent growth enhancement. A second explanation relates to the events initiating response in a host, namely the interaction of the toxins/enzymes produced by SA with the host tissue. The 2C model captures these dynamics at a higher level of abstraction, with the mathematical variable i representing the amount of QS signals and toxins. We can interpret b2 as the rate of enhanced production of these factors.
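The two-step procedure can be sketched as follows, with placeholder data, a toy closed-form growth objective, and a toy single-parameter dose-response error. Only the structure mirrors the text: a global search for the smooth deterministic fit, then brute force for the non-smooth dose-response fit; none of the functions below are the actual 2C model equations.

```python
# Sketch of the two-step fitting procedure: a global optimizer for the smooth
# deterministic growth objective, then a brute-force grid for the non-smooth
# dose-response objective. Data, model forms, and bounds are placeholders.
import numpy as np
from scipy.optimize import differential_evolution

t_obs = np.array([0., 1., 2., 4., 8.])
n_obs = np.array([100., 60., 90., 400., 5000.])   # placeholder growth data

def growth_sse(params):
    """Sum of squared log-errors of a toy decline-then-growth curve."""
    mu, g = params
    pred = n_obs[0] * np.exp(-mu * t_obs) + 0.5 * n_obs[0] * np.exp(g * t_obs)
    return np.sum((np.log(pred) - np.log(n_obs)) ** 2)

# Step 1: a global search guards against local minima in the growth fit.
fit = differential_evolution(growth_sse, bounds=[(0.01, 2.0), (0.01, 2.0)],
                             seed=0, tol=1e-8)

# Step 2: brute-force grid over a toy single-parameter dose-response (IED).
def dose_response_error(ied, doses=(10, 100, 1000), p_obs=(0.1, 0.5, 0.9)):
    p_pred = [1 - np.exp(-d / ied) for d in doses]
    return sum(abs(a - b) for a, b in zip(p_pred, p_obs))

best_ied = min(np.logspace(0, 4, 200), key=dose_response_error)
print("growth params:", fit.x, "best IED:", round(best_ied, 1))
```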

Type-1 crops describe field and row crops that have a period of senescence and defoliation

Recent efforts aimed at establishing standards for data quality indicators and other scoring criteria are driven in part by a desire to properly account for sources of uncertainty in life-cycle assessments. Similar desires have been expressed for water footprint assessments. As described by Hoekstra: “The field has to mature still in terms of calibrating model results against field data, adding uncertainties to estimates and inter-model comparisons as done in the field of climate studies”. Additionally, researchers now rely on computational methods to synthesize the large quantity of environmental data and observations that are characteristic of studies conducted at large temporal or regional scales. It is still uncommon for data and computational methods to be published along with the completed studies, which obstructs the reproducibility of many hydrologic studies. These latter reasons motivated the form of this study: an elementary water footprint analysis decomposed into a reproducible framework. As a case study in resource sustainability, the State of California presents a unique combination of agricultural and economic activities, resource constraints, and environmental monitoring efforts. Among the United States, California has the greatest population and the greatest total farm sales, and if considered separately, it would rank as the fifth-largest economy in the world by gross domestic product. Nine of California’s one hundred million acres contain irrigated agriculture, which requires 30 million acre-feet of irrigation in an average year, accounting for 80% of the state’s water use. This freshwater requirement is met in part by a vast network of water storage and conveyance infrastructure, which transfers water from the northern third of California, where two-thirds of the precipitation and runoff occur, to the southern two-thirds, where three-quarters of the anthropogenic water demands are located.

Management of California’s freshwater resources is constrained by dynamic availability on one side and strong, persistent demands on the other. Seasonal variations in precipitation affect the availability of freshwater resources in California. The state recently endured a five-year drought from 2011 to 2016, marked by a period from 2012 to 2014 that had the worst drought severity in the past millennium. On the other side, California’s water resources underpin its standing as one of the most productive agricultural exporters in the world and as an important component of the nation’s food security. In 2015, California produced more than 99 percent of the United States’ almonds, pistachios, walnuts, grapes, peaches, and pomegranates. In the same year, international exports accounted for approximately 26 percent of the state’s agricultural production by volume, adding up to 44 percent of the total agricultural sales by value. California is the sole national exporter of many valuable commodities, including almonds, walnuts, and pistachios, all of which lie in the top five of the state’s agricultural exports by value. Unpredictable seasonal availability and uncertain international appetite make it difficult to predict the nature of future constraints and pressures on California’s water resources. There is no guarantee that future climatic, economic, or resource environments will accommodate all of the things that societies value: healthy produce, delicious animal foods, verdant natural vistas, thriving native wildlife, and the autonomy that comes from regional food security. The current attention placed on life-cycle sustainability indicators demonstrates an awareness of the desire to maintain environmental, social, and economic systems without limiting the ability of future generations to meet their needs.

When coupled with scenario analysis, these indicators can support strategic decisions to ensure the security of natural resource supplies. Water footprint assessments have been used to quantify the impact of lifestyles on California’s water resources and have been proposed as policy support tools. Additionally, these assessments have been used to describe the effect of California’s water resource challenges on international trade networks. While water footprint assessments align with the resource sustainability challenges of California, water scarcity is a problem shared by many nations globally. Therefore, reproducible sustainability assessments are useful in their ability to be applied and compared between different environmental and economic systems.

This study used the California Irrigation Management Information System (CIMIS) to obtain daily reference evapotranspiration (ET0) observations across the state. Specifically, the Spatial CIMIS data product was used to obtain raster representations of daily ET0 at a 4 km spatial resolution. These data were resampled to 30 meters using bilinear interpolation. The original data are housed and maintained by the California Department of Water Resources (DWR) and can be accessed through the CIMIS web interface. CIMIS comprises a network of over 100 automated weather stations that measure meteorological parameters at urban and rural sites throughout California. The system was originally established as a project of DWR and the University of California, Davis in 1982. Each station is sited away from buildings and trees, on a bed of healthy grass that is “well maintained, properly irrigated and fertilized and mowed or grazed frequently to maintain a height between 10 to 15 centimeters”. Hourly weather observations are transmitted nightly to Sacramento, where the data are used to compute an average daily evapotranspiration of the reference grass surface underneath each station, using a modified version of the 1977 FAO Penman-Monteith ET0 equation.
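A minimal sketch of the bilinear resampling step is shown below, using a tiny synthetic array in place of a statewide 4-km ET0 raster; in practice a geospatial library such as rasterio would also handle the coordinate transforms and georeferencing.

```python
# Sketch of resampling a coarse (4 km) daily ET0 grid to a finer (30 m) grid
# with bilinear interpolation, mirroring the Spatial CIMIS preprocessing step.
# The tiny synthetic array stands in for one day's statewide ET0 raster.
import numpy as np
from scipy.ndimage import zoom

et0_4km = np.array([[4.2, 4.5, 4.9],
                    [4.0, 4.4, 4.8],
                    [3.8, 4.1, 4.6]])        # mm/day, synthetic values

scale = 4000 / 30                             # ratio of cell sizes (~133x)
et0_30m = zoom(et0_4km, zoom=scale, order=1)  # order=1 -> bilinear
print(et0_4km.shape, "->", et0_30m.shape)     # (3, 3) -> (400, 400)
```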

The CIMIS equation differs in its use of a wind function and a method of calculating net radiation from mean hourly solar radiation. The ET0 observations are made publicly available with the primary purpose of helping agricultural growers develop irrigation schedules. While the CIMIS network provides station-specific ET0 calculations, the Spatial CIMIS data product produces a continuous daily ET0 calculation across the entire state. This is accomplished by using raster observations from the National Oceanic and Atmospheric Administration Geostationary Operational Environmental Satellite (GOES) system as inputs to the ASCE Penman-Monteith (ASCE-PM) ET0 equation. Spatial CIMIS also interpolates temperature and wind measurements from CIMIS stations to serve as inputs to the ASCE-PM equation. Radiative inputs to the ASCE-PM equation are derived from a clear sky factor that is directly related to cloud cover, as observed by GOES satellite data. Specifically, Spatial CIMIS uses GOES visible imagery to derive a clearness parameter that is directly related to cloud cover in a given grid cell. This is combined with a clear sky solar radiation model developed for the Heliosat-II model. Heliosat-II is software commissioned by the Solar Radiation Data project for the purpose of converting images acquired by geostationary meteorological satellites into maps of global solar irradiation received at ground level. The model incorporates a seasonal turbidity factor, which describes atmospheric attenuation of light due to aerosols and gases. Additional description of inputs to the Spatial CIMIS implementation of the ASCE-PM equation can be found in the Appendix. Spatial CIMIS has a weakness in estimating solar radiation in scenarios where changes in the surface albedo can be mistaken for cloud cover. This typically occurs in regions that have snowfall and persistent fog, both common winter conditions for some regions in California. Grid cells that contain snow cover and/or fog persisting for more than 14 days lead to an underestimation of cloud cover and an over-prediction of net radiation during cloudy days (Hart et al., 2009). Depending on the location in California, some studies have found good agreement between Spatial CIMIS ET0 and other methods, while others have used Spatial CIMIS only after applying correction factors. This study used crop coefficients (Kc) from Basic Irrigation Scheduling (BIS) to scale Spatial CIMIS ET0 into crop-specific estimations of evapotranspiration (ETc). Kc values for 45 unique crops were selected from the BIS software. These values were supplemented with Kc values from the Consumptive Use Program Plus for garlic and oranges and with values from the University of California Division of Agriculture and Natural Resources for some orchard crops. Kc values for peppermint and unspecified caneberries were selected from the AgriMet crop coefficients, which were assembled by the United States Bureau of Reclamation, Pacific Northwest region. Kc values for unstressed pomegranates were obtained from a study conducted at the Ben-Gurion University of the Negev, Israel. BIS is an application implemented in Microsoft Excel that is used for planning irrigation schedules for crops in California.
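The scaling step itself is a cell-wise multiplication, ETc = Kc × ET0. The sketch below looks Kc up per grid cell from a crop-class raster; the class codes and Kc values are illustrative placeholders, not the BIS values used in this study.

```python
# Sketch of converting reference ET0 into crop-specific ETc by looking up a
# Kc value per grid cell from a crop-class raster (CDL-style class codes).
# Class codes and Kc values below are illustrative placeholders.
import numpy as np

kc_by_class = {1: 1.05,    # e.g., corn at mid-season (placeholder)
               24: 0.40,   # e.g., winter wheat, early season (placeholder)
               75: 0.95}   # e.g., almonds, mid-season (placeholder)

crop_class = np.array([[1, 1, 24],
                       [75, 1, 24]])           # toy crop-class raster
et0 = np.full(crop_class.shape, 5.0)           # mm/day reference ET0

kc = np.vectorize(kc_by_class.get)(crop_class).astype(float)
etc = kc * et0                                 # ETc = Kc * ET0, cell by cell
print(etc)
```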

The software was developed as a collaboration between the University of California, Davis, the California Department of Water Resources, and the University of California Cooperative Extension. The program is currently hosted by the UC Davis Biometeorology Group and can be accessed at the BIS home page. Among other uses, BIS is used to determine irrigation schedules, irrigation timings, and maximum allowable soil water depletion for 66 unique crop types. It accomplishes this by estimating crop evapotranspiration given mean climate data for a particular region. BIS partitions evapotranspiration into the component of water evaporated from soil and plant surfaces and the component transpired by leaves. As the crop matures, the ratio of T to ET increases, until the transpiration component dominates crop ET. To account for the variable ETc, BIS defines Kc values at different stages in a crop’s life cycle, typical planting and harvest days, and the proportion of the growing period dedicated to each growth stage. These coefficients are defined according to the FAO-56 “single crop coefficient” method, which assigns values according to four growth stages of a typical crop: initial growth, crop development, mid-season, and late-season. These growth stages characterize a crop’s daily Kc function, a curve that describes how the values vary as a function of the time in the crop’s growing period. BIS distinguishes between four main crop types. Type-1 crops are characterized by crop coefficients with three inflection points, at 10% ground shading, 75% ground shading, and the onset of senescence (a code sketch of such a curve follows the next paragraph). Some type-1 crops, such as peas and lettuce, are harvested before their period of senescence; they are characterized by two inflection points, at 10% ground shading and 75% ground shading. Type-2 crops have Kc values that are essentially fixed for most of the season. These include alfalfa, pasture, and most types of turfgrass. Shading of soil by dormant grass may cause an over-prediction of soil evaporation and total ETc; however, the error may be slight due to the lower overall ET0 during the cold winter season (Snyder, 2014). Type-3 crops do not have a water requirement prior to shoot and leaf growth in the spring and can be characterized by a Kc curve with two inflection points. Type-4 crops represent orchard crops that have fixed Kc values throughout their growing season, similar to type-2 crops. Type-4 crops include subtropical orchards.

This study assigned Kc values to individual grid cells according to the crop cover, as observed in the Cropland Data Layer (CDL). The United States Department of Agriculture (USDA) National Agricultural Statistics Service (NASS) has produced land cover raster image products for major agricultural regions since 1970, and for the 48 conterminous states since 2009. Annual CDL images can be viewed through CropScape, a web GIS application maintained by USDA-NASS and the Center for Spatial Information Science and Systems at George Mason University. CDL rasters can be downloaded from the CropScape web service or at the Natural Resources Conservation Service Geospatial Data Gateway. The CDL was first created by the USDA NASS Research and Development Division, Geospatial Information Branch, Spatial Analysis Research Section. It was based on an image processing and acreage estimation software named Peditor, written in the 1970s and maintained through 2006.
The stated goal of the NASS CDL program is to provide commodity acreage estimates to the Agricultural Statistics Board and other agricultural stakeholders. CDL rasters use standard land cover categories, with an emphasis on agricultural land covers. Records for the State of California begin in the 2007 calendar year; CDL products have a 56-meter spatial resolution from 2007 to 2009 and a 30-meter spatial resolution from 2009 to present. Currently, the CDL is primarily constructed from the supervised classification of remotely sensed satellite imagery from the Advanced Wide Field Sensor onboard the Indian Remote Sensing satellite RESOURCESAT-1. This is supplemented with imagery from land imaging sensors onboard the United States Geological Survey Landsat satellites and 16-day Normalized Difference Vegetation Index composites from the National Aeronautics and Space Administration Moderate Resolution Imaging Spectroradiometer.
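Returning to the crop coefficients, the FAO-56 single-coefficient Kc curve described above can be sketched as a piecewise-linear function of the day in the growing season. The example below is for a type-1 crop with inflection points at 10% ground shading, 75% ground shading, and the onset of senescence; all Kc values and stage fractions are illustrative, not BIS values.

```python
# Sketch of a FAO-56 "single crop coefficient" daily Kc curve for a type-1
# crop: linear segments between inflection points at 10% ground shading,
# 75% ground shading, and the onset of senescence. Values are illustrative.

def daily_kc(day, season=150, kc_a=0.25, kc_b=1.05, kc_c=0.60,
             f_shade10=0.15, f_shade75=0.45, f_senesce=0.80):
    """Piecewise-linear Kc over a growing season of `season` days."""
    d1, d2, d3 = (f * season for f in (f_shade10, f_shade75, f_senesce))
    if day <= d1:                      # initial growth: low, flat Kc
        return kc_a
    if day <= d2:                      # development: ramp up to peak
        return kc_a + (kc_b - kc_a) * (day - d1) / (d2 - d1)
    if day <= d3:                      # mid-season: peak Kc
        return kc_b
    return kc_b + (kc_c - kc_b) * (day - d3) / (season - d3)  # senescence

kc_curve = [daily_kc(d) for d in range(0, 151, 30)]
print([round(k, 2) for k in kc_curve])  # e.g., [0.25, 0.38, 0.92, 1.05, 1.05, 0.6]
```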

Improving agricultural production and profits is an important component of poverty alleviation

Randomized evaluations of the agronomic productivity gains from new crops or agricultural techniques have been common in the agricultural field for many years. More recent is an approach that aims to conduct ‘effectiveness’ trials, incorporating real-world issues of access and adoption among smallholder farmers, rather than the idealized ‘efficacy’ trials produced using experimental test plots. Tackling the impacts of agricultural interventions outside of the test plot introduces issues at the heart of economics, such as transaction costs, social interactions, marketing, finance, and contracting, as we think carefully about the decision to adopt. Thinking of the smallholder farm as a small business, this decision should be driven by profitability. The core contribution of RCTs is their ability to clearly trace causality between the constraints to agricultural technology adoption, adoption itself, and final outcomes. Randomized experimental evaluations allow researchers to isolate the causal impact of a program from other confounding factors, such as price, weather, or access to credit, which are simultaneously changing over time and across regions. Carefully designed experiments allow us to identify whether specific constraints to adoption are binding, and to measure the impacts of a technology when adopted in farmers’ actual fields. These evaluations speak to the effectiveness of specific approaches to achieving agricultural technology adoption for improved smallholder productivity and welfare. The Agricultural Technology Adoption Initiative (ATAI) was founded in 2009 to increase the quantity and quality of experimental evidence in developing-country agriculture. ATAI aims to serve as a mechanism to generate, aggregate, and summarize research for policy outreach on the adoption of agricultural innovations by smallholders in Sub-Saharan Africa and South Asia.

ATAI exclusively funds randomized controlled trials, along with pilot work that lays the groundwork for future RCTs, and was organized intellectually around understanding how a set of specific constraints holds back technology adoption. Because of this methodological focus, the resulting evidence is primarily on interventions targeted at the individual or household level, although we also report on studies in areas such as input and output markets that attempt to drive outcomes at more aggregated levels. Even within this domain, we have a distribution of studies that is purposive, driven by the questions asked by our affiliated investigators and by the technical feasibility of running randomized trials. We use the structure of the ATAI constraints to adoption to help summarize the experimental evidence, aggregating individual, internally valid studies around these common themes. This produces an evidence base that is far from comprehensive in terms of the important issues in agricultural development, but it is broader than would have been produced by a more tightly structured, replication-focused research initiative, and it does provide a relatively clear guide to what makes specific interventions attractive in terms of evidence-based funding. Throughout the world, 63% of those living under $1.25 per day work in agriculture. Ligon and Sadoulet show the importance of economic growth in the agriculture sector for the livelihoods of the poorest households: a one percent growth in GDP that originates from agriculture correlates with a 5.6 percentage point increase in expenditures among the poorest decile of the population and a 4.45 percentage point increase for the bottom 30%, while “growth from non-agriculture sectors does not appear to have a significant effect on expenditure growth for the poorest 50%.” The Green Revolution of the 1960s saw the spread of agricultural technologies to less industrialized nations and large agricultural productivity gains, particularly in East Asia.

Yet technological innovations have not similarly spread to transform agricultural productivity in Sub-Saharan Africa and parts of South Asia, as is evident in the lagging adoption of modern varieties and a persistent yield gap between regions. Many African countries have rising private sectors developing agricultural technologies, and research and implementation groups, including the CGIAR centers and AGRA, continue to develop improved inputs and interventions designed to improve the resilience, profits, and nutrition of African smallholders in particular. Yet these innovations do not appear to have translated into meaningful improvements in yields at the macro level. FAOSTAT data show a large gap: per-hectare cereal yields in Africa and South Asia are on average roughly one-third of the per-hectare yields in East Asia and OECD countries. Sub-Saharan Africa is particularly lagging behind. In South Asia, land use for cereal production has increased 20% while yields have tripled. In Sub-Saharan Africa, land use for cereal production has more than doubled, while yields have increased by just 80%. The macro picture of fertilizer use over time looks similarly unchanged, with low and stagnant use of fertilizers in mainly rainfed areas like Sub-Saharan Africa. Fertilizer consumption remains extremely low in Sub-Saharan Africa compared to other regions. Roughly 16 kilograms of fertilizer are used per hectare in Sub-Saharan Africa, while among all developing countries the average is 26.75 kg/hectare. This figure is much higher in other regions: 344 kg/hectare in East Asia/Pacific and 159 kg/hectare in South Asia. This clearly demonstrates that the status quo of agricultural production, particularly in Sub-Saharan Africa, remains far below the technological frontier, suggesting missed potential in terms of yields, income, and welfare improvements to food security and nutrition. The specific reasons behind the lagging adoption of productivity-enhancing technological innovations and persistent yield gaps in rainfed Sub-Saharan Africa and South Asia relative to the rest of the world have been a puzzle in need of policy solutions. Field experiments help us move beyond test plots to explain the continuing puzzle of low technology adoption by smallholder farmers in rainfed areas where agriculture is performing well below the technological frontier. Focusing on the micro-economic level of this challenge, we examine technology adoption as an outcome that inherently requires smallholder farmers to change their practices.

Behavior changes can include, for example, the adoption of resilient and high-yielding crop varieties or a shift to high-value crops, the purchase and application of complementary inputs such as fertilizers, and the adjustment of farm labor allocated toward specific agronomic practices. Many smallholder farmers face barriers to adopting effective agricultural technologies. These constraints to adoption may be driven by standard economic factors, or they may be behavioral. Standard economic explanations consider smallholder farmers as economic agents, building from the conception that “in a well-functioning economy where markets perfectly capture all costs and benefits, and individuals are fully informed and unconstrained, farmers will adopt a technology if they make a profit from adopting it”. This is an important distinction from a world where farmers focus their efforts on maximizing their productivity, for example their crop yields, given that increased yields do not necessarily lead to improved welfare. Profitability can be limited by input costs, credit constraints, and market access. Information and labor constraints are also relevant: how well do farmers understand the properties of new technologies, in the absence of opportunities to experiment? What are the additional labor requirements for the use of these new technologies, and how do farmers value their time in input decisions? Jack reviews in detail other dimensions that mediate whether certain technologies “meet the expected profitability condition” for specific farmers. This varies temporally and spatially. It also varies between and within households, particularly when complementary asset or capital investments are needed, or when new technologies challenge individual tastes and preferences. Even where markets are functioning well, accessible and profitable technologies may not be adopted for behavioral reasons, such as risk or uncertainty aversion or procrastination, which challenge decision-making even in the best of circumstances. Smallholders’ decision-making is highly complex and conducted in risky and low-resource environments. Farmers make interconnected choices over long time frames that are characterized by risks and uncertainty. One of many choices is among a range of potential inputs to production, in contexts with highly variable land, wide-ranging and seasonal climatic variation that is growing increasingly extreme given climate change, and unpredictable shocks to their livelihoods. New technologies may change the risk or payoff profiles of farming in ways that require us to incorporate other social science insights, for example expected utility theory and behavioral economics, in order to understand perceived benefits at the farmer level. Motivated by addressing the constraints hindering the adoption of new agricultural technologies, ATAI has worked to fund and structure the experimental evidence base across seven primary market inefficiencies that constrain adoption. These are credit, risk, information, input and output markets, labor and land market inefficiencies, as well as externalities. These may operate through supply or demand channels, for example by limiting the availability of technologies, information, or financing, and/or by dampening demand by lowering expected profits. Lessons from psychology and behavioral economics are considered where they are particularly relevant.
Jack motivates the focus on constraints to adoption, rather than specific technologies, as a framework that helps identify effective strategies to address common inefficiencies and constraints in order to encourage the adoption and use of more than one technology. ATAI uses this conceptual framework of seven constraints to drive its research competitions.

Randomized evaluations are selected for ATAI funding based not only on methodological rigor, logistical viability, and innovation, but also on their potential for both a significant contribution to public knowledge and practical influence and scalability in related contexts. Field experiments require, by their very nature, durable partnerships with real-world implementation groups that are working directly with smallholder farmers in order to randomize interventions and deliver credible results. Partner organizations may work as agro-dealers, contract farming groups, extension agents, financial service providers, technology developers, or otherwise. ATAI views more favorably studies that evaluate questions of key importance to large-scale program and policy partners, particularly those that are difficult to address without causal evidence and those that have received less research attention to date. To meet these criteria, technologies under investigation are those where there is credible field data signaling that adoption would prove neither distasteful nor ineffective in target farmers’ contexts, and that the take-up and use of a technology is likely to prove utility-enhancing, profitable, and welfare-increasing for smallholder farmers and others along agricultural value chains. For such promising under-adopted technologies, ATAI funds social science field experiments to provide evidence on the strategies that work in helping farmers adopt, and ultimately benefit from, these technologies. In the sections that follow, we summarize particular components of the evidence base given the accumulation of ATAI-generated experimental evidence in four areas: credit and savings, risk, information, and input and output market inefficiencies. This does not imply that the latter three constraints to adoption, i.e., externalities and land and labor markets, are excluded from this chapter because they do not bind or do not deserve further investigation. These topics are not covered here simply because there is less rigorous micro-evidence, given the difficulty of examining them through the lens of RCTs. This is not intended to be an exhaustive review. ATAI-funded studies are often presented in greater detail given our familiarity with their contributions. Each section begins by motivating the specific constraint to adoption.

Agricultural income streams are characterized by large cash inflows once or twice a year that do not align well with the specific times when farmers need access to capital to make agricultural investments or, for example, pay school fees. If there is limited access to credit in an area, farmers may not have cash on hand to make agricultural productivity investments unless they are able to save or can afford the potentially high interest rates of informal lending. However, saving can be difficult for farmers given their limited resources, the variety of demands on their money, and the seasonal cycle of production and prices of their agricultural production. Credit and saving products could help farmers make investments in inputs and other technologies by making cash available when needed. Yet many developing countries, and particularly rural areas, have limited access to formal financial services that could provide this liquidity. Credit constraints are reflected in farmers’ self-reports and are associated with less use of productive inputs like high-yielding varieties. On the supply side, formal financial service providers are often unwilling or unable to serve smallholders.