We explored the use of location as a proxy variable, but the results remained similar.

Adjusted associations between a 10-fold increase in the amount of fumigants applied within 8 km of the home and the highest lung function measurements are presented in Table 4. We did not observe any significant adverse relationships between prenatal or postnatal fumigant use within 8 km and lung function. A 10-fold increase in wind-adjusted prenatal methyl bromide use within 8 km was associated with higher FEV1 and FEF25–75. Additionally, a 10-fold increase in wind-adjusted prenatal chloropicrin use within 8 km was positively associated with FEF25–75. Associations between methyl bromide and chloropicrin use and lung function observed in the prenatal exposure period were not observed in the postnatal period. Results were similar, although no longer statistically significant, for prenatal methyl bromide and chloropicrin use within 5 km of residences. There were no associations between fumigant use within 3 km of residences and lung function. We did not observe associations between postnatal fumigant use at any distance and lung function measurements or between fumigant use during the year prior to the assessment and lung function measurements. In sensitivity analyses using multivariable models that included other pesticide exposures previously related to respiratory symptoms and lung function, including childhood urinary DAP metabolites, proximity to agricultural sulfur use during the year prior to lung function assessment, and prenatal DDT/DDE blood concentrations, the results were very similar to those presented in Tables 3 and 4. For example, the relationships between prenatal methyl bromide use within 8 km and lung function were very similar for FEV1 and FEF25–75.

Prenatal fumigant use was generally not correlated with other pesticide exposures that we found to be associated with lung function in this cohort, except for weak correlations between agricultural sulfur use within 1 km during the year prior to spirometry and prenatal use of metam sodium and 1,3-DCP, with r = 0.14 and r = 0.26, respectively. The results were very similar when we only included children with two acceptable reproducible maneuvers in the analyses. The results were also similar when we excluded those currently using asthma medication, excluded the one outlier for FEV1 models, or used inverse probability weighting to adjust for participation bias. Risk ratios estimated for asthma symptoms and medication using Poisson regression were nearly identical to the ORs presented in Table 3 and Supplemental Table 2. We did not observe effect modification by asthma medication use. Maternal report of child allergies modified the relationship between FEV1 and prenatal proximity to methyl bromide use, and we only observed higher FEV1 among children without allergies. After adjusting for multiple comparisons, none of the associations reached significance at the critical p-value of 0.002 based on the Benjamini-Hochberg false discovery rate. This is the first study to examine lung function or respiratory symptoms in relation to residential proximity to agricultural fumigant use. We found no significant evidence of reductions in lung function or increased odds of respiratory symptoms or use of asthma medication in 7-year-old children with increased use of agricultural fumigants within 3–8 km of their prenatal or postnatal residences. We unexpectedly observed a slight improvement in lung function at 7 years of age with residential proximity to higher methyl bromide and chloropicrin use during the prenatal period, and this improvement was limited to children without allergies.
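The multiple-comparison adjustment mentioned above is the Benjamini-Hochberg step-up rule. The snippet below is a minimal, illustrative sketch of that rule only, not the study's analysis code; the number of tests, the p-values, and the FDR level are hypothetical placeholders.

```python
import numpy as np

def benjamini_hochberg_cutoff(p_values, fdr=0.05):
    """Return the largest p-value passing the Benjamini-Hochberg step-up rule
    (the data-dependent 'critical p-value'), or None if nothing passes."""
    p = np.sort(np.asarray(p_values, dtype=float))
    m = p.size
    # Step-up rule: keep p-values with p_(i) <= (i / m) * q
    passing = p[p <= (np.arange(1, m + 1) / m) * fdr]
    return passing.max() if passing.size else None

# Hypothetical example: 25 association tests, none small enough to pass
rng = np.random.default_rng(0)
p_vals = rng.uniform(0.06, 0.9, size=25)
print(benjamini_hochberg_cutoff(p_vals, fdr=0.05))  # None -> no association survives
```

With the study's actual set of tests and raw p-values, the analogous cutoff is the critical p-value of 0.002 referenced above.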

Although these results remained after adjustment for other pesticide exposure measures previously related to respiratory symptoms and lung function in our cohort, they do not remain significant after adjustment for multiple comparisons. There is a strong spatial pattern of methyl bromide and chloropicrin use during the pregnancy period for our study because of heavy use on strawberry fields near the coast at the northern portion of the Salinas Valley. There could be other unmeasured environmental or other factors that are confounding the relationship we observed between higher prenatal fumigant use and improved lung function. Previously published studies of prenatal exposure to air pollutants and lung function have generally observed links to alterations in lung development and function and to other negative respiratory conditions in childhood, and plausible mechanisms include changes in maternal physiology and DNA alterations in the fetus. Improved lung function was associated with higher estimates of recent ambient exposure to hydrogen sulfide in a study of adults living in a geothermal area of New Zealand. However, hydrogen sulfide has been shown to be an endogenously produced “gasotransmitter”, with anti-inflammatory and cytoprotective functions, and is being explored for its use for protection against ventilator-induced lung injury. In previous studies of this cohort, we found increased odds of respiratory symptoms and lower FEV1 and FVC per 10-fold increase of childhood average urinary concentrations of metabolites of organophosphate pesticides. Other studies of prenatal pesticide exposure and respiratory health in children have mostly evaluated exposure using cord blood concentrations of DDE, a breakdown product of DDT, and have observed an increased risk of respiratory symptoms and asthma with higher levels of DDE. Most studies of postnatal pesticide exposure and respiratory health in children have utilized self-reported information from mothers to assess pesticide exposure and have observed higher odds of respiratory disease and asthma with reported pesticide exposure. None of the previous studies of pesticide exposure and respiratory health have specifically evaluated fumigants.

Another strength of the study is that CHAMACOS is a prospective cohort followed since pregnancy with extensive data on potential confounders of respiratory health and other measures of pesticide exposure. Our study also had some limitations. We did not have information on maternal occupational exposure to fumigants or the geographic location of maternal workplaces during pregnancy, and we did not have the location of schools during childhood. These limitations likely resulted in some exposure misclassification during both the prenatal and postnatal periods. An important consideration in this study is that we estimated fumigant exposure using proximity to agricultural fumigant applications reported in the PUR data, which is not a direct measure of exposure. However, the PUR data explain a large amount of the variability of measured fumigant concentrations in outdoor air. In conclusion, we did not observe adverse associations between residential proximity to agricultural fumigant use during pregnancy or childhood and respiratory health in the children through 7 years of age. Although we did not observe adverse effects of fumigants on lung function or respiratory symptoms in this analysis, we have seen adverse associations in previous analyses of the CHAMACOS cohort between residential proximity to higher fumigant use and child development. We observed an association between higher methyl bromide use during the second trimester of pregnancy and lower birthweight and restricted fetal growth. We also observed decreases of ~2.5 points in Full-Scale intelligence quotient at 7 years of age for each 10-fold increase in methyl bromide or chloropicrin use within 8 km of the child's residences from birth to 7 years of age. Future studies are needed in larger and more diverse populations with a greater range of agricultural fumigant use to further explore the relationship with respiratory function and health.

It is typically assumed that output levels and prices in the U.S. food processing sector are directly linked to the availability and prices of the agricultural products or materials used for production. However, the traditional link between farm and food prices and production may be weakening. Adaptations in input costs and food consumption patterns are leading to changes in the production structure and technology of the food processing industries, which in turn affect demand patterns for primary agricultural materials (MA). Such structural changes have been documented not only by anecdotal evidence, but in studies such as Goodwin and Brester, and Morrison and Siegel. In particular, Goodwin and Brester find that value-added by manufacture, both per worker hour and as a percentage of sales, increased in the 1980s in the U.S. food and kindred products industry overall, possibly implying an undermining of MA demand. Various economic and behavioral factors underlie these trends. As noted by Goodwin and Brester, relative prices of inputs important to food manufacturing, such as energy and labor prices relative to those for raw materials, shifted significantly in the past couple of decades.

The business environment also has experienced quite a transformation, including market structure and regulatory changes in the early 1980s. Tax changes have, for example, had a direct impact on relative input prices, by affecting the prices of capital inputs. Perhaps even more important than these alterations in the economic climate facing food processors are adaptations in food demand patterns. The fact that a greater proportion of adults are in the labor force today raises the demand for food products that require little home preparation time; they are at least in part prepared at the processing plant. These modifications in dietary preferences, combined with changes in food technology that allow processors to adapt foods to meet those preferences, could lead to more in-plant processing of agricultural commodities. Other technical changes associated with capital equipment and the quality of agricultural materials could also have an impact on the relative demand for agricultural products. These adaptations in food product costs, demand, and characteristics may mean that food processors are responding by altering their input composition. If they are using more capital, skilled labor, and non-agricultural materials to produce food products than in the past, these factors could become increasingly important elements in processors' costs relative to agricultural commodities. The corresponding decline in agricultural materials input intensity is likely to result in weaker effects of changes in agricultural commodity prices on food prices, which has important impacts on both consumers of the final product and producers of the raw agricultural materials. To address these issues, this study assesses the role of changes in food product demand, input prices, and food processing technology on food processors' costs and output prices, with a particular focus on the use of agricultural commodities as compared to other factor inputs. Our analysis of cost structure and input composition changes in the U.S. food processing industries is based on a cost-function representation of production processes in these industries. In our model we recognize a full range of substitution patterns among capital, labor, energy, agricultural materials, food materials, and “other” materials inputs resulting from input price changes or technological factors. This allows us to explore modifications in input mix, costs, and commodity prices resulting from changing agricultural commodity prices and output demand. It also facilitates consideration of technological factors affecting MA demand and production costs such as the quasi-fixed nature of capital, scale economies, technical change associated with either time trends or capital composition, and agricultural innovations or market power embodied in the MA input price. The model is estimated using data on 4-digit SIC level U.S. food processing industries, and the results are summarized by time period and 3-digit code. The base price and quantity data for output, capital, labor, and materials are from the National Bureau of Economic Research Productivity Database. The materials breakdown was drawn from data in the Census of Manufactures, which are only available at 5-year intervals, from 1972 to 1992.
We therefore have a panel of data for 34 industries and 5 time periods, which are distinguished by fixed effects for estimation. Our empirical results suggest that agricultural materials demand has been affected by various technological and market characteristics of the food processing industry. Although own-price effects have had the potential to limit MA demand, growth in the price of agricultural materials has fallen over time, and the effective price has fallen even further, so this effect was essentially erased, or even reversed direction, by the end of the 1980s. Substitution effects have also contributed to MA demand. Rising capital costs, especially in effective units, and their implied limitations on production flexibility, have particularly enhanced MA substitution. Scale effects have had a somewhat ambiguous effect: MA use has increased slightly more than proportionately with output in effective units, but less than the use of intermediate food products, so MA demand, especially in traditionally measured units, has weakened relative to these substitute inputs. We also, however, find a strong and increasing downward trend in MA demand over time.
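The panel structure just described (34 industries observed in 5 census years, distinguished by fixed effects) can be illustrated with a bare-bones within-estimator. This is only a sketch of the fixed-effects idea under hypothetical variable names and simulated data; the study itself estimates a full cost-function system rather than the single equation shown here.

```python
import numpy as np

n_ind, n_t = 34, 5                      # 34 industries x 5 census years
rng = np.random.default_rng(1)

# Hypothetical regressors (e.g., log input prices, log output) for each industry-year
X = rng.normal(size=(n_ind * n_t, 3))
industry = np.repeat(np.arange(n_ind), n_t)
alpha = rng.normal(size=n_ind)          # industry fixed effects
beta = np.array([0.5, -0.2, 0.8])       # "true" coefficients for the simulation
y = alpha[industry] + X @ beta + rng.normal(scale=0.1, size=n_ind * n_t)

def within_transform(a, groups):
    """Demean each column within each group, absorbing the fixed effects."""
    out = np.asarray(a, dtype=float).copy()
    for g in np.unique(groups):
        idx = groups == g
        out[idx] -= out[idx].mean(axis=0)
    return out

X_w = within_transform(X, industry)
y_w = within_transform(y.reshape(-1, 1), industry).ravel()
beta_hat, *_ = np.linalg.lstsq(X_w, y_w, rcond=None)
print(np.round(beta_hat, 3))            # close to the simulated coefficients
```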

We calculated the frequency of fire detections using a neighborhood search algorithm.

The time interval between fire detections is not considered in this analysis, such that fires on consecutive and nonconsecutive days at the same ground location are treated equally. A 1-km radius is also consistent with fire spread rates of 200–5000 m h⁻¹ for grass, grass/shrub, and deforestation fuel types, such that even slow-moving grassland fires would spread beyond the 1-km search limit on sequential days. Fires which burn on consecutive days at the same ground location can occur where fuel loads are very high, as is the case in deforestation fires when woody fuels that are piled together may smolder for several days. Specifically, the number of days on which fires were detected was determined for each cell of the standard MODIS 250-m grid using a 1-km search radius around the center locations of all high-confidence fire detections for each year. This gridded product of fire days was then used to select those fire detections contributing to high-frequency fire activity and to characterize fire frequency for recent deforestation events. To determine whether active fire detections associated with the conversion of forest to other land uses are unique in terms of fire frequency, we compared active fire detections from recently deforested areas with four additional types of fire management. In the following text, we describe the test datasets used to evaluate patterns in active fire detections for maintenance of cattle pastures, indigenous reserves in Cerrado savanna-woodland land cover, small properties associated with government settlement programs, and sugarcane production regions. We used data on recent deforestation and land use following deforestation to identify and characterize active fire detections associated with forest conversion. Deforestation was mapped using high-resolution Landsat Thematic Mapper or Chinese-Brazilian Environmental Research Satellite data from approximately August of each year 2001–2005.
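A minimal sketch of the neighborhood search described above is given below: for each cell of a 250-m grid, it counts the number of distinct days with a high-confidence fire detection whose center falls within 1 km of the cell center. The grid extent, coordinates, and detection list are hypothetical placeholders, not MODIS data.

```python
import numpy as np

cell = 250.0          # grid spacing (m), matching the MODIS 250-m grid
radius = 1000.0       # 1-km search radius (m)
nx, ny = 200, 200     # grid dimensions for this toy example

# detections: (x_m, y_m, day_of_year) for one dry season, hypothetical values
detections = np.array([
    [10125.0, 20375.0, 150],
    [10300.0, 20300.0, 150],
    [10125.0, 20375.0, 183],
    [40125.0,  5375.0, 201],
])

fire_days = np.zeros((nx, ny), dtype=int)
xc = (np.arange(nx) + 0.5) * cell       # cell-center coordinates
yc = (np.arange(ny) + 0.5) * cell

for i in range(nx):
    for j in range(ny):
        d2 = (detections[:, 0] - xc[i]) ** 2 + (detections[:, 1] - yc[j]) ** 2
        nearby = detections[d2 <= radius ** 2]
        fire_days[i, j] = np.unique(nearby[:, 2]).size

# Cells with fire_days >= 2 flag the "high-frequency" fire activity used in the text
print(fire_days.max())
```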

We developed our approach for identifying deforestation fires with data for Mato Grosso state. For individual deforestation events ≥25 ha in size, we also evaluated differences in patterns of active fire detections for conversion of forest to pasture, forest to mechanized agriculture, and forest conversions not in agricultural production (NIP). The post-clearing land use for each deforestation event was identified previously using phenological information from time series of MODIS data at 250 m resolution. Finally, we examined fire activity in the year before deforestation detection by PRODES, the year of forest clearing, and for as many years post clearing as possible to characterize the nature of fire usage during the conversion process. These comparisons provide the timing, frequency, and degree of repeated burning detected by the MODIS sensors for forest conversion to different land uses. We selected annual deforestation from 2003–2005 to utilize combined Terra and Aqua fire observations. Because few areas are deforested without the use of fire in Amazonia, deforestation events without any MODIS fire detections provide a measure of the extent of omission due to satellite observation and fire characteristics. We utilized data on historic deforestation and recent land use changes to identify maintenance fires on agricultural lands in Mato Grosso state. The dataset is derived from areas that were deforested before the initial year of PRODES digital data, buffered by 1 km from remaining forest edges to exclude fires from new deforestation. Next, we removed areas that underwent conversion from pasture to cropland during 2001–2004 and previously cleared areas that were identified as secondary forest. The resulting dataset isolates old deforestation not associated with forest edges, secondary forest, or recent conversion to cropland. To identify patterns of fire detections for extensive grassland fires in Cerrado regions, we selected 18 indigenous reserves in Mato Grosso and Tocantins states covering more than 42 000 km².

Fire is used during the dry season on some indigenous reserves to facilitate hunting, but extensive land cover change is rare. Small properties are an additional challenge for separating evidence of fire activity in the same location. To test the influence of property size on fire frequency, we considered a subset of the demarcated Instituto Nacional de Colonização e Reforma Agrária land reform settlements in Mato Grosso without large deforestation events in either 2004 or 2005. The typical lot size in these settlements is 100 ha, of which 20–50 ha may be cleared for agricultural use. Although some sugarcane is grown in the Amazon region, the majority of Brazil's sugarcane industry is located in the southern and northeastern regions of the country. São Paulo State had more than 3 million hectares planted in sugarcane in 2005. We evaluated active fire detections in 31 municipalities in São Paulo state with ≥20 000 ha of sugarcane planted in 2005 to calculate the degree of high-frequency fire associated with sugarcane production. High-frequency fire activity is common in areas of recent deforestation but rare for other fire types in Amazonia. Deforestation in Mato Grosso state had more total fire detections than all other fire types in Table 1 combined and seven times the number of fires detected in the same location on 2 or more days during 1 year. High-frequency fire activity accounted for 27% of high-confidence MODIS detections associated with small producers in Mato Grosso in 2004 and 2005, but the total number of detections was small, suggesting that property size is not the main component of the pattern of repeated fire usage associated with deforestation. Fires detected on 2 days at the same location are rare within indigenous reserves and agricultural areas of Mato Grosso state or sugarcane production municipalities in São Paulo state; fires on 3 or more days are almost exclusively linked to deforestation. Mato Grosso had both the highest total fire activity and the greatest fraction of high-frequency fire activity during 2003–2007 of any state in Brazilian Amazonia.

Combined with fires in neighboring Pará and Rondônia states, these three states contributed 83% of the fires that burn on 2 or more days and 74% of the total fire activity in the Brazilian portion of the Amazon Basin during this period. Interannual variability in the total number of fires highlights drought conditions in Roraima state during 2003 and widespread drought in 2005 affecting Rondônia, Acre, and Amazonas states. The fraction of total fire activity from burning on 2 or more days also increased during drought years in these states. Fire detections were highest in 2005 for Pará and Amapá states, although these regions were less affected by drought conditions; the fraction of repeated fire activity did not increase in 2005 compared with other years. After a decrease in the fire activity in Brazilian Amazonia during 2006, fires in 2007 returned to a similar level as seen in 2004 and 2005, led by increased fire activity in southeastern Amazonia. Major contributions to this increase in 2007 were from low-frequency fires in Tocantins and Maranhão states and additional high-frequency fires in Mato Grosso and Pará. Overall, fires on 2 or more days during the same dry season accounted for 36–47% of the annual fire activity in Brazilian Amazonia during 2003–2007, with greater contributions from repeated fires in years with the highest fire activity. At the national scale, fire activity in Brazil and Bolivia accounted for 98% of all fire detections in the Amazon Basin during 2003–2007. High-frequency fires contribute a large fraction of MODIS detections in both countries, with peak repeated fire activity during 2004 in Brazil and 2007 in Bolivia. Small contributions to overall fire activity from other Amazon countries are primarily low-frequency fires, with the notable exceptions of 2004 and 2007 in Colombia, 2003 in Guyana and Suriname, and 2003 and 2007 in Venezuela. Spatial patterns of high-frequency fire activity in 2004 and 2005 highlight active deforestation frontiers in Mato Grosso, Rondônia, and Pará states in Brazil and in southeastern Bolivia. Isolated locations of high-frequency fire activity can also be seen across other portions of the Amazon Basin, but these areas have low total fire detections. Differences in the total fire activity and high-frequency fire detections between 2004 and 2005 highlight the influence of drought conditions in western Amazonia on fire frequency. Total fire detections in central Mato Grosso decreased slightly between 2004 and 2005, while fire detections in drought-stricken northern Rondônia, southern Amazonas, and eastern Acre states in Brazil show higher total fire activity in 2005 than in 2004. The number of 0.25° cells with ≥50% of fire activity occurring on 2 or more days is similar during 2004 and 2005, but the spatial distribution is broader in 2005 than in 2004, as fires associated with deforestation activity in Mato Grosso, Pará, and southern Rondônia spread west into northern Rondônia, Acre, and southern Amazonas states. In addition to deforestation-linked fires, slow-moving forest fires and contagion of other accidental burning events may also have contributed to the higher fraction of repeated fire activity in these regions. Among deforested areas in Mato Grosso, the intensity of fire usage varies according to post-clearing land use.
Forest conversion for cropland exhibits the most frequent fire usage; more than 50% of the 2004 cropland deforestation events had fire detections on 3 or more days during the 2004 dry season and 14% burned on 10 or more days.

Over 70% of the forest clearings with fires on more than 5 days were subsequently used for cropland. Because of more frequent fire usage in preparation for mechanized agriculture, few areas deforested for cropland in 2004 had no high-confidence fire detections during 2004. Deforestation for pasture averaged less than half as many fire days as deforestation for cropland, measured as either the maximum or mean days of fire detection per clearing. Even among very large clearings, fire usage was significantly higher for cropland deforestation than forest clearing for pasture. Only 13% of all deforestation events for pasture >200 ha averaged 3 or more fire days in any year, suggesting that mechanized forest clearing and high-frequency burning are more related to post-clearing land use than clearing size. For both pasture and cropland deforestation, polygons in which the conversion occurs within 1 year have a greater number of fire days in the year that the deforestation was detected than conversions occurring over 2 or more years, consistent with the expectation that higher fire frequency leads to higher combustion completeness. For those areas that showed no clear pasture or cropland phenology in the years following deforestation, fire activity was minimal. Nearly 50% of the areas described as NIP showed no high-confidence fires in 2004, and only 22% of these deforestation events exhibited fires on 2 or more days typical of other deforestation events. The timing of fire use during the dry season also differed for cropland and pasture deforestation. Deforestation fire activity may begin during the late dry season in the year before the deforestation is mapped and continue for several years post clearing as the initial forest biomass is gradually depleted to the desired conditions for cropland or pasture use. September was the most common month of fire activity for all types of deforestation in Mato Grosso in 2004. More than 70% of the fires associated with 2004 deforestation for pasture during 2003–2005 occurred during the late dry season. In contrast, fire activity for conversion to cropland was more evenly distributed through the dry season, with 45% of fire detections occurring in May–July. Burning activities initiated in the early dry season for both pasture and cropland deforestation continue to burn in subsequent months. The highest percentage of fires without detections on additional days occurred during the late dry season; approximately 30% of the fires for conversion to pasture during September and October were the first fire detection for those deforestation events, compared with 11% of all fires for cropland conversion during this period. High-frequency fire activity may last for several years following initial forest clearing, further increasing the expected combustion completeness of the deforestation process. Forty percent of the areas deforested for cropland during 2003–2005 had 2 or more years during 2002–2006 with 3 or more fire days. The duration of clearing for pasture was more variable. Most areas cleared for pasture had 0–1 years of high-frequency fire usage, although a small portion had frequent fire detections over 2–3 years typical of mechanized forest clearing.

Constituent values are reported as mean ± standard deviation unless otherwise indicated.

To isolate the effects of bio-solids and TCS amendments on microbial community composition, the data were analyzed using pCCA, considering TCS and bio-solid amendment as environmental variables and incubation time as a covariable. This confirmed the results of the CCA, indicating that the strongest determinant of microbial community composition was the addition of bio-solids to soil. TCS concentration, on the second axis, described only 3.6% of the variation, showing that TCS effects were overshadowed by the effects of bio-solids amendment. Bio-solid amendments caused an approximately two-fold increase in PLFA biomarkers for Gram-positive bacteria, actinomycetes, and eukaryotes in bio-solid-amended soil (SB) compared to soil samples. Even larger increases were observed in biomarkers for fungi and Gram-negative bacteria, which were up to three times higher in SB than soil. Again, these changes were likely due to increased nutrient availability in the bio-solid-amended samples and/or the biomass added along with the bio-solids, consistent with previous studies that found that the fatty acid 18:2 ω6,9c and monounsaturates were increased by addition of these materials. The effect of TCS on microbial community composition was greater in soil than SB. Spiking with 10 or 50 mg/kg TCS decreased the abundance of Gram-positive and Gram-negative bacteria as well as fungi, with reductions ranging from 14 to 27% by day 30. Additionally, actinomycetes, which are Gram-positive bacteria, were reduced in the 50 mg/kg TCS samples after 30 days of incubation. Eukaryotes were negatively affected after 7 and 30 days of incubation at both concentrations of TCS in soil but not SB samples.

Biomass results for all microbial groups were consistent in suggesting that the presence of bio-solids mitigated the potential toxicity of TCS. It is important to note that the spiking levels used here are similar to levels found in the upper half of U.S. bio-solids, but would be unlikely to be achieved in bio-solid-amended soils even after continued long-term application. Therefore, the effects observed at the 10 or 50 mg/kg spiking levels should be viewed as a conservative upper bound on potential effects expected in the field. In addition, since all of the results in this study are based on an observation period of 30 d, the extent to which the observed effects persist is not known. Future studies should, in particular, investigate longer-term changes in community structure in response to addition of bio-solids both with and without specific contaminants.

There have been many efforts across the world to mitigate wetland habitat loss over the past century. This movement is echoed in California's Central Valley, where stakeholders have established the goal of creating and protecting over 60,000 ha of new wetland habitat in the state. Many of these wetlands are, or will be, ephemeral, flow-through wetlands receiving irrigation return flows during the growing season. Most wetlands in CA are restored with the primary objective of enhancing waterfowl habitat; however, these systems also have the potential to retain and remove nutrient loads that would otherwise be exported directly into major waterways. Therefore, wetland treatment of agricultural return flows is being considered as a beneficial management practice to reduce algal and nutrient loads that contribute to seasonally low dissolved oxygen in the lower San Joaquin River, California. Many studies have demonstrated that natural and constructed wetlands are generally effective at removing nitrogen from municipal and agricultural waste waters. Removal efficiencies as high as 98% have been reported, though other studies report significantly lower N removal rates, typically between 35 and 55%.

A study of three wetlands used to treat subsurface tile drainage water in the Midwestern USA demonstrated NO3 removal rates of 28%. Similarly, high but variable NO3 removal rates have been documented from water seeping through side berms of a constructed wetland in Illinois. Variation in nitrate removal is a result of many factors such as hydraulic residence time, soil properties, vegetation characteristics, variability in input loads, N loading, temperature, dissolved oxygen concentration, climate, and nitrogen form in input waters. Using wetlands as a beneficial management practice to reduce non-point source pollution from agricultural drainage waters may introduce a problem, as these wetlands could leach contaminants such as nitrate directly into the groundwater. This could compound an existing problem in California, where groundwater NO3-N loading rates of 200 Gg per year have been reported in areas of intensive agriculture such as the Salinas Valley and Tulare Lake Basin. Several studies of dairy lagoons summarized in Harter et al. document high seepage rates and elevated groundwater N concentrations beneath lagoons. Similarly, Huffman found NO3-N concentrations exceeding the EPA drinking water standard beneath two-thirds of 34 swine lagoons in North Carolina. More studies of nitrogen fate and transport in wetlands receiving tail water from cropland are needed because the existing literature base for this topic encompasses a wide range of environmental characteristics that govern nitrogen transformations. The primary objectives of this study were to determine the fate of nitrogen in seepage waters of a restored surface flow-through wetland and to determine the importance of hydrologic as well as soil and biogeochemical factors that regulate nitrate removal. We addressed these objectives by: monitoring nitrogen concentration in nested piezometers throughout the wetland and comparing them to surface water; measuring spatial patterns in selected soil and hydrological characteristics; and developing wetland hydrologic and nitrogen mass balances to evaluate the fate of nitrate. The results from this study provide information relevant to the optimization, design, and management of restored wetlands for nitrate removal. Moreover, these findings expand upon the limited number of published studies that document nitrate removal by constructed wetlands receiving nitrate runoff from irrigated agriculture.

The wetland received agricultural return flows during the irrigation season from April to September, with no rainfall occurring during this time. Surface water inflow and outflow volumes were measured at 30-min intervals using v-notch weirs and barometric pressure compensated water level loggers. A digital elevation model was created using a Trimble RTK GPS with 3 cm accuracy. The DEM was used to relate water depth measured at two locations with water depth throughout the wetland, as well as to determine changes in the wetted surface area throughout the irrigation season. Vertical hydraulic gradients were calculated at 12 piezometric monitoring locations in the southern section of the wetland, using biweekly water height measurements at 10- and 100-cm depths. Surface water residence time was calculated using a plug-flow model. Temperature was measured at 15-min intervals near the output. Wetland evapotranspiration was estimated using meteorological data obtained from the California Irrigation Management Information System (CIMIS) Patterson station, approximately 15 km from the study site. ET rates for vegetated upland areas were presumed to approximate the CIMIS values calculated for grass cover. Evaporation for the sparsely vegetated wetland area was assumed to be 1.28 times that of the grass ET value. ET volumes were calculated at 30-min intervals to account for fluctuations in the wetted surface area. A season-long seepage volume was calculated by subtracting total outflow volume from total inflow volume, accounting for water loss due to ET. An independent measurement of the seepage rate for the northern and southern sections of the wetland was determined on 6/4/2007 through 6/9/2007 by preventing all inflow and outflow, and measuring the rate of water level drop over a 120-h period. Seepage volumes were then calculated for each 30-min interval by multiplying the seepage rate by the wetland wetted surface area. Assuming similar seepage rates across the different hydrologic zones, we calculated the percentage of the water surface area covering each hydrologic zone at 30-min increments based on the high-resolution DEM and water height at the output location. The seepage volume was summed for each 30-min increment to obtain a total seepage volume for each hydrologic zone. Pore water was collected from piezometers at 12 locations on a biweekly basis at depths of 10, 50 and 100 cm below the soil surface. Screened sections of the piezometers were surrounded in a layer of pure silica sand and sealed above and below with bentonite clay to prevent water intrusion from adjacent horizons. Prior to sampling, piezometers were purged and allowed to recharge for 1–2 h. Water samples were maintained at 3 °C between the time of collection and analysis. Aliquots of samples were filtered through a prerinsed 0.4-µm polycarbonate membrane filter for quantification of NO3-N, NH4-N, and DOC. Determination of NO3 was made using the vanadium chloride method and NH4 using the Berthelot reaction with a salicylate analog of indophenol blue. DOC was measured using a Dohrmann UV-enhanced persulfate TOC analyzer.
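The season-long water balance described above reduces to bookkeeping over 30-min intervals: seepage is the residual of inflow minus outflow minus ET, with ET converted from a depth to a volume using the wetted area. The sketch below illustrates that arithmetic with hypothetical numbers; it is not the site's actual record, and the drawdown check at the end ignores ET for simplicity.

```python
import numpy as np

n = 48 * 183                               # 30-min intervals over an ~April-September season
rng = np.random.default_rng(2)

inflow_m3 = rng.uniform(20, 60, n)         # inflow volume per interval (m^3), hypothetical
outflow_m3 = rng.uniform(10, 40, n)        # outflow volume per interval (m^3), hypothetical
et_mm = rng.uniform(0.05, 0.25, n)         # ET depth per interval (mm), e.g. grass ET x 1.28
wetted_area_m2 = rng.uniform(3e4, 6e4, n)  # wetted area from the DEM and stage record (m^2)

et_m3 = et_mm / 1000.0 * wetted_area_m2    # convert ET depth to volume
seepage_m3 = inflow_m3.sum() - outflow_m3.sum() - et_m3.sum()
print(f"season-long seepage volume: {seepage_m3:.0f} m^3")

# Simplified version of the independent drawdown check: seepage rate from the
# water-level drop over 120 h with inflow and outflow shut off (hypothetical drop)
level_drop_m, hours = 0.18, 120.0
print(f"drawdown seepage rate: {level_drop_m / (hours / 24.0):.3f} m/day")
```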

A non-filtered sample was used to determine total N following oxidation with 1% persulfate, using the method described above for NO3-N. Surface water samples were collected adjacent to the piezometers and at input and output locations on a weekly basis and were analyzed as described above. Depth splines were used to model nitrate distribution over the 100-cm depth of the piezometer monitoring nests. The segmentation procedure involved fitting an equal-area or mass-preserving quadratic spline across the discrete set of pore water NO3-N sampling depths, producing a continuous depth function segmented at 1-cm intervals. Mean values at each 1-cm depth increment were calculated across all sampling dates and sampling locations within each hydrologic zone. The segmenting algorithm was implemented using the 'GSIF' and 'aqp' packages for R. Inflow and outflow seasonal loads for total nitrogen, nitrate, and ammonium were calculated using the period-weighted approach from weekly constituent concentration and weekly water flux. Nitrate seepage loads for each hydrologic zone were also calculated with the period-weighted approach using average biweekly nitrate concentration at the 100-cm depth and weekly seepage flux. Linear mixed effects models were used to analyze data from water analysis and denitrification potential (DNP) incubations using S-Plus. As samples were taken at the same location several times throughout the season, location was treated as a random effect in the model to account for autocorrelation between measurements at the same site. The NH4-N, NO3-N, DNP and DOC values were log transformed prior to statistical analysis to better approximate a normal distribution. For each analysis, the initial model accounted for main effects, as well as all possible two-way interactions between main effects. Interactions that were not significant were removed from subsequent models to gain sensitivity. Mean separation was determined using a conditional t-test. Raw values are reported in Tables 4 and 5 to reflect measured field conditions. The water sampled from piezometers was termed seepage water. Nitrate concentration was markedly lower in seepage water than in surface waters. Concentrations of NO3-N were significantly lower at the 50-cm depth than the 10-cm depth, but there was not a significant difference in NO3-N concentrations between the 50- and 100-cm depths among the three hydrologic zones. Modeled nitrate removal rates from Fig. 4 in the top 10-cm soil depth relative to the water column were 932, 631 and 143 mg NO3-N m⁻² d⁻¹ in the flowpath, finger, and upland zones, respectively. In the wettest hydrologic zones there was a significant increase in NH4-N concentrations from the surface water to the 10-cm depth. NH4-N concentrations decreased at the 50- and 100-cm depths and were not significantly different from those in the surface waters. DOC concentration in seepage water ranged from 3.2 to 6.0 mg L⁻¹. There were no significant differences in DOC between the surface water, 10-, and 50-cm depths; however, DOC concentration decreased significantly at the 100-cm depth of the upland sites. Among the hydrologic zones, DOC in seepage water was significantly higher in the uplands. Soil texture was generally similar among hydrologic zones and no abrupt changes in texture were observed with depth. Sedimentation was highest in the flowpath zone, totaling over 35 kg m⁻² yr⁻¹, compared to sedimentation rates <5 kg m⁻² yr⁻¹ in the fingers and uplands.
Saturated hydraulic conductivities estimated for these textural classes were similar to measured seepage rates. Average soil organic carbon concentration was relatively low in all hydrologic zones. Organic carbon decreased with depth in all hydrologic zones.
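The period-weighted load calculation referenced above is a simple sum of concentration times the water volume that passed during each sampling period. The sketch below shows the unit bookkeeping with hypothetical concentrations and flows; it is not the study's data.

```python
import numpy as np

# NO3-N concentrations at successive sampling dates (mg/L) and the water volume
# (m^3) attributed to each sampling period -- hypothetical values
conc_mg_per_L = np.array([4.2, 3.8, 5.1, 4.7, 3.3])
flow_m3 = np.array([1.1e4, 0.9e4, 1.4e4, 1.2e4, 0.8e4])

# mg/L is equivalent to g/m^3, so concentration * volume gives grams; divide by 1000 for kg
load_kg = np.sum(conc_mg_per_L * flow_m3) / 1000.0
print(f"seasonal NO3-N load: {load_kg:.1f} kg")
```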

Biodiversity is critical to ecosystem functioning and is threatened by anthropogenic activity.

They allowed for a 1% chance of mutation of each experiment and component to allow for global search. They also discovered that the response space was multi-modal and had interactions between components, which confirmed the need for global optimization of fermentation and bio-processing problems.

Microbes perform key ecosystem functions, necessitating a better understanding of how microbes and microbial diversity respond to environmental stressors. Streams are examples of threatened ecosystems, where watershed modification decreases stream integrity and water quality, altering macro-invertebrate, fish, and microbial diversity. The use of macro-invertebrate and fish indices to assess stream conditions is fundamental to stream ecology and depends on known relationships between stream integrity and community structure. The Benthic Index of Biotic Integrity (B-IBI) is one such index, using the abundance and diversity of stream benthic macro-invertebrates to accurately distinguish degraded streams classified based on stream chemical and physical criteria. Biotic indices are calibrated to specific regions, as the distribution of stream macro-invertebrates is controlled by a combination of dispersal limitation and local environmental conditions. Despite their abundance and ecological importance, natural microbial communities, unlike macro-invertebrates, are not used in stream monitoring programs to assess stream conditions. As with macro-invertebrates, dispersion and environmental selection control the spatial distribution of microbes along stream continuums. Dispersion, or the advection of microbes from the surrounding landscape, impacts headwater stream community composition, and with increasing stream order, environmental sorting becomes more important as stream residence times increase.

Several studies have demonstrated the influence of the surrounding landscape on stream microbes, showing that watershed urbanization leads to shifts in bacterial communities. While alpha diversity generally remains constant, the abundances of taxa associated with anthropogenic activity and high-nutrient conditions increase in urbanized streams. Similar to larger organisms, microbes respond to environmental disturbance and are strongly influenced by watershed land use; therefore, their distribution may be used to further characterize stream conditions. Microbes mediate important stream ecosystem functions, controlling the movement of carbon and nitrogen through freshwater ecosystems. Previous studies demonstrate the effects of urbanization on stream nutrient transformations, such as nitrogen uptake, nitrogen retention, and carbon processing. Community respiration determines the fate of terrestrial carbon in headwater streams, where carbon is either lost as carbon dioxide during respiration or transported farther downstream. Community respiration is often used to assess ecosystem function, as rates are influenced by watershed land use, correlated with stream chemistry, and sensitive to pollutants. The effects of urbanization on stream dissolved organic matter quality and respiration have previously been demonstrated, and stream microbial community structure can potentially be used to monitor these ecosystem functions. In addition to respiration, dissolved organic matter fuels stream denitrification and the microbial reduction of nitrate to nitrous oxide and dinitrogen gases. Denitrification removes nitrogen from streams and is credited as the major source of the greenhouse gas N2O. Watershed land use and anthropogenic nitrogen loading alter rates of stream denitrification, increasing the amount of nitrogen transported downstream and emissions of N2O to the atmosphere. Urbanization has been linked to changes in denitrifier community composition, and a previous study linked changes in denitrifier composition to changes in denitrification potential, and therefore nitrogen loss, in urban streams.

However, it is less clear how changes in microbial community composition in response to land use modification alter N2O production. The goal of this study was to identify stream microbes that respond to watershed urbanization and agricultural development. These anthropogenic factors alter microbial diversity and community structure, which can be used to assess stream conditions and ecosystem functioning. We measured microbial diversity using 16S rRNA gene amplicon sequencing across 82 headwater streams within the Chesapeake Bay watershed in the state of Maryland in the spring and summer for 2 years. Measurements were collected in conjunction with stream physicochemical parameters and a macro-invertebrate indicator of stream health. Additionally, at a subset of streams, water column and sediment community respiration were measured using oxygen consumption methods, and N2O concentrations were measured using gas chromatography. We determined how stream bacteria and archaea are distributed across gradients of watershed land use and stream conditions, assessed how changes in microbial community composition relate to benthic macro-invertebrate diversity and traditional indices of stream conditions, and determined how these changes influence stream function by relating microbial community composition to rates of microbial respiration and concentrations of N2O. The aims of this study were to understand how stream bacteria and archaea are distributed across gradients of watershed land use and water quality, to assess how changes in microbial community composition relate to benthic macro-invertebrate diversity, and to discern how changes in stream conditions alter stream ecosystem processes, as reflected in community respiration and N2O concentrations. Bacterial and archaeal diversity significantly differed across the geographic regions of Maryland, demonstrating the influence of the surrounding landscape on headwater stream microbial communities.
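Two of the community metrics used throughout this section, Shannon alpha diversity and Bray-Curtis dissimilarity, can be computed directly from a taxon count table. The sketch below uses a tiny hypothetical OTU/ASV table (rows are stream samples, columns are taxa) rather than the study's sequencing data.

```python
import numpy as np

counts = np.array([
    [120,  30,  5,  0,  2],   # hypothetical forested stream sample
    [ 80,  60, 10,  4,  1],
    [ 10, 200, 90, 40,  0],   # hypothetical urban stream sample
])

def shannon(row):
    """Shannon diversity H' = -sum(p * ln p) over nonzero taxon proportions."""
    p = row[row > 0] / row.sum()
    return -np.sum(p * np.log(p))

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two count vectors."""
    return np.abs(a - b).sum() / (a + b).sum()

alpha = np.array([shannon(r) for r in counts])
dissim = np.array([[bray_curtis(counts[i], counts[j]) for j in range(len(counts))]
                   for i in range(len(counts))])

print("Shannon diversity:", np.round(alpha, 2))
print("Bray-Curtis matrix:\n", np.round(dissim, 2))
```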

Regional alluvium composition likely influenced stream alpha diversity, causing lower alpha diversity in Coastal Plain streams. Sediments on the Coastal Plain of the eastern United States are composed of gravel, sand, silt, and clay, making streams more embedded. Embeddedness was the environmental factor that most strongly negatively correlated with Shannon diversity, and homogeneous fine sediments have been shown to have lower diversity than that of sites with riffles, shallow turbulent sections. Similarly, community structure varied across the geographic regions and strongly correlated with DOC concentration, pH, and embeddedness, all of which significantly differentiate Coastal Plain streams from the other regions. This finding is in agreement with those from previous studies, demonstrating the strong influence of DOC concentration and pH on freshwater communities. Despite the strong influence of stream chemistry on microbial communities, in this study, geodesic distance explained more of the variation in community composition than environmental distance. Partial Mantel test results indicated that community structure was correlated with geographic distance rather than the measured environmental variables. Geographic distance is likely a strong controlling factor in structuring headwater stream communities because there are regional differences in landscape, and stream microbes are locally seeded from the surrounding soil. Alpha diversity was greatest in spring, when water flow through the landscape is greatest and, therefore, when advection of microbes from the surrounding landscape to headwater streams is greatest. While seasonal changes in microbial diversity during fall and winter are unknown, the higher diversity in spring than in summer was likely due to higher terrestrial inputs in spring, further demonstrating the influence of landscape on stream microbial communities. Distance-decay relationships were also observed between water column and sediment community similarity and geodesic distance, further highlighting the finding that headwater stream microbes display geographic distribution patterns. Alternatively, the distance-decay relationship could be a result of spatial differences in unmeasured environmental variables. Microbial distance-decay relationships have been observed previously in streams. Z-values represent the rate at which species similarity decreases with increasing distance (a minimal example of this slope fit is sketched after this paragraph); in this study, Z-values are similar to microbial values from soil, salt marshes, and lakes but lower than regional differences observed in salt marshes, suggesting different dispersal limitations across regional scales. In contrast to highly urban and agricultural streams, community dissimilarity in highly forested streams did not increase with distance. Neither geographic distance nor environmental distance correlated with community structure, implying that highly forested streams have a similar terrestrial microbial source. Microbial diversity differed in streams in watersheds with high urban, agricultural, and forested land use. In contrast to previous studies, degraded streams had lower alpha diversity than that of forested streams, likely due to elevated pollution and habitat loss. Several abundant and pervasive taxa found in urban and agricultural streams are often associated with high-nutrient and low-oxygen environments.
Members of the order Burkholderiales were abundant in urban streams and correlated strongly with several anthropogenic nutrients.
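As flagged in the preceding paragraph, the distance-decay Z-value is simply the (negated) slope of log10(community similarity) regressed on log10(geographic distance). The sketch below fits that slope to hypothetical distances and Bray-Curtis-based similarities; it is not the study's data.

```python
import numpy as np

distance_km = np.array([2.0, 5.0, 12.0, 30.0, 75.0, 150.0])   # hypothetical pair distances
similarity = np.array([0.62, 0.58, 0.51, 0.47, 0.40, 0.35])   # 1 - Bray-Curtis dissimilarity

slope, intercept = np.polyfit(np.log10(distance_km), np.log10(similarity), 1)
z_value = -slope   # reported as a positive rate of decay
print(f"distance-decay Z-value: {z_value:.3f}")
```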

Comamonadaceae are often associated with high-nutrient conditions and are ubiquitous in many environments, including aquatic, soil, activated sludge, and wastewater. Comamonadaceae have previously been associated with urban streams and have been found to have the highest number of urban-tolerant taxa. Sulfurospirillum spp., in the order Campylobacterales, were abundant in highly agricultural streams and are often associated with microaerophilic polluted habitats, commonly growing on arsenate or selenate using NO3 and sulfur compounds as electron acceptors. In contrast, an unclassified and potentially phototrophic member of Acidobacteria and Hyphomicrobiaceae were more abundant in forested streams. These taxa are often associated with low-nutrient conditions and were previously identified as indicators of forested streams and shown to decrease in abundance with increasing watershed urbanization. Only weak associations were detected between sediment and water microbial community composition and B-IBI scores. This is in contrast to findings of Simonin et al., who found that stream microbial community structure correlates with a macro-invertebrate biotic index in North Carolina. Simonin et al. identified concurrent changes in microbial taxa and environmental conditions associated with the biotic index, finding a higher number of negatively responding taxa than positively responding taxa. Here, only one taxon was more abundant and pervasive in streams in good condition, while several taxa were found to be abundant and pervasive in streams in very poor condition. Hydrogenophaga spp. and an unclassified member of Desulfobacterales, commonly associated with anaerobic, reducing, and contaminated environments, were both more abundant in streams in very poor condition. Forested water communities were more even than agricultural and urban communities, suggesting that certain taxa increase in abundance disproportionately in degraded streams, which is likely why the indicator analysis identified more taxa in streams in very poor condition. The findings here suggest that land use cover and stream chemistry are better predictors of headwater stream microbial community composition than are macro-invertebrate indices of stream conditions. In agreement with the idea that structure determines function, in this study, water community respiration correlated with microbial community composition. Similarly, previous studies report that changes in community metabolism, specifically the degradation of organic matter, are related to shifts in community composition and diversity. In contrast, other studies report that respiration depends on substrate availability rather than community composition due to functional redundancy, finding no connection between stream bacterial diversity and the activity of enzymes associated with carbon cycling. The weaker correlation between community respiration and community composition in sediments compared to water samples could be due to a high level of functional redundancy within sediment communities; if dominated by generalists, shifts in community composition would likely not significantly affect rates of respiration. Degraded streams had lower rates of community respiration than forested streams, as evidenced by the positive correlation between water respiration and forest cover and the negative correlation between respiration and urban cover.
Rates of community respiration also negatively correlated with several physicochemical variables, including conductivity, Cl⁻, Ca, Mg, pH, SO4²⁻, and ANC. ANC, Cl⁻ and Zn concentrations, and pH, all signatures of anthropogenic influence, have previously been found to correlate with benthic stream respiration across the Highlands, Piedmont, and Coastal Plain regions of the eastern United States, and Zn is a common urban pollutant. These results suggest that environmental conditions associated with land use drive differences in community respiration, altering carbon processing in headwater streams. In addition to altering carbon transformations, watershed modification affected stream nitrogen processing. Agricultural streams had higher N2O concentrations than those of forested streams, with higher N2O concentrations being associated with elevated TN, NO2⁻, NO3⁻, and NH4⁺ concentrations. The N2O concentrations measured in this study, at 0.22 to 4.41 g N2O liter⁻¹, were comparable to the values reported for agricultural streams in Illinois but lower than values published for agricultural drainage waters in Scotland, UK, and higher than values reported for other forested and agricultural streams. N2O production is known to vary by land use, with higher production from denitrification in streams in agricultural and urban basins, and changes in community composition have been shown to influence denitrification rates in agricultural and urban streams. However, Audet et al. found no difference in N2O concentrations measured in forested and agricultural streams in Sweden.

Low leptin levels at birth may increase risks of insulin resistance and obesity in later life.

The current work focused on AVHSE as a research tool, but similar techniques could be further developed for the development of grafts. Grafting would require addressing many of the structural and biological limitations noted above, as well as modifications to address host immunity issues. Overall, we have demonstrated AVHSEs as a research platform with regards to photoaging effects, but expansions of this model could be utilized for clinical skin substitutes, personalized medicine, screening of chemicals/cosmetics, drug discovery, wound healing studies, and therapeutic studies.

Perfluoroalkyl substances (PFAS) are stable, water-soluble compounds that persist in the environment and are of major concern for the public. These substances are widely used in surfactants, lubricants, paints, polishes, paper/textile coatings, food packaging, and fire retardants. The most widely studied PFAS chemicals are PFOS and PFOA. These are organic compounds in which all hydrogens on all non-functional-group-associated carbons have been replaced by fluorine; thus, PFOS and PFOA are extremely stable due to their numerous strong carbon-fluorine bonds [100]. Due to their stability and bio-accumulative properties, PFAS are ubiquitous and are found in food and water. PFASs have come under recent scrutiny as bio-accumulating toxins; once they reach a certain concentration in the body, their hydrophobic long-chain structures penetrate cellular lipid bilayers and displace and disrupt membrane structures. Concerningly, in people aged 12 or older, PFAS chemicals including PFOS, PFOA, PFNA, and PFHxS have been detected in more than 95% of participants in a ~7,800-sample study in the U.S. These chemicals have also been found to environmentally bio-accumulate, with detection in several animals including mammals and aquatic species. The primary route of PFAS exposure in adults is through food and water ingestion.

There are also a few studies demonstrating toxicity of PFAS through skin exposure in human tissue-engineered skin and in animals [263–265]. In US children, the exposure from dust ingestion and dietary ingestion is nearly equal [258]. Bio-accumulation of PFAS results in multiple health detriments, with exposure ranging from pre-natal time points into adulthood. Toxicity mechanisms of PFAS within the cell and on tissue development are poorly understood, but implications have been made regarding disruption of viability and proliferative capacity, immune response, pro-estrogenic endocrine processes, lipid profile and fat development, cell membrane, endothelial barrier, gap junctions, and cytoskeleton. Higher PFAS concentrations in adults are associated with greater weight regain. Evidence for non-favorable lipid profiles linked to PFAS plasma concentrations has also been found, including greater total cholesterol and low-density lipid, higher triglycerides, increased very low-density lipoprotein, and increased gamma-glutamyl aminotransferase. Further, hormonal effects have been linked to PFAS compounds in adult subjects. Reproductive hormones such as sex-hormone binding globulin, follicle stimulating hormone, and testosterone concentrations have been found to be inversely related to PFOA and PFOS in people aged 12–30 years. Thyroid stimulating hormones and total T4 have been positively associated with PFAS, while negative associations have been found with kidney function. Children have a higher burden of PFAS partially due to mouthing behaviors, their lower body size to area ratio, and possible exposure via breastfeeding. Studies with concerning outcomes have concluded associations of increased PFAS exposure in utero and during childhood with adolescent/adult obesity, cholesterol, and increased beta cell dysfunction. Childhood obesity and overweight risk is in turn associated with higher adult risk of obesity and multiple chronic diseases, including cardiovascular disease and diabetes. Several PFAS compounds are able to penetrate the placental barrier in pregnant women and reach fetal circulation. Although the mechanism of accumulation is not yet understood, multiple studies have shown that this transfer is preferential, that is, fetal concentration is higher than maternal concentration, possibly due to differences in fetal vs. maternal blood.

Studies have associated higher maternal serum concentrations/exposure with decreased birthweight but increased infant adiposity. A growing number of studies have also linked high maternal serum concentrations of PFAS to increased pediatric/young adult adiposity and changes in lipid profiles. Hypothesized mechanisms of PFAS association with low birthweight and increased infant/childhood adiposity include lipid metabolism changes, reduced food/water intake by the mother, direct fetotoxic effects, endocrine disrupting effects, and other altered hormone levels. Much evidence points toward hormonal effects and their downstream interference with tissue development and function, including body weight and adipose regulation. One direct example of hormone-sensitive developmental effects is the change in anogenital distance observed in female infants. Negative PFAS effects have also been explored with mechanisms acting on the constitutive androstane receptor, pregnane-X receptor, estrogen receptor beta, and the phosphatidylinositol 3-kinase/serine-threonine protein kinase pathway. Hypothesized mechanisms for greater weight regain in adulthood include the possible involvement of PFASs in changing energy metabolism and the homeostasis of thyroid hormones through activation of transcription factors such as the peroxisome proliferator-activated receptors. PPARα and PPARγ are key regulators of fatty acid oxidation, differentiation, adipocyte proliferation and function, glucose breakdown, and lipid and lipoprotein metabolism. In mice, PFOA was shown to affect leptin and adiponectin release during differentiation of fat cells; leptin is a regulator of energy homeostasis while adiponectin is secreted by mature adipocytes and affects insulin responsiveness. At this time, the association between high PFOA/PFOS exposure and decreased birthweight rests on multiple conflicting studies. However, to shed light on the conflicting conclusions, a systematic review of 18 epidemiological and 21 animal studies regarding PFOA toxicity concluded that exposure to PFOA is in fact toxic to human reproduction and development [300].

Conflicting evidence of the associations between PFAS maternal concentrations and childhood adiposity is also present, although a recent meta-analysis of cohort studies has concluded an association between early-life exposure to PFOA and increased risk for childhood obesity. The conflicting evidence of associations between PFAS maternal concentrations and childhood adiposity has been attributed to differences in the PFAS concentrations evaluated in study populations and differences in the method of subject weight measurement. In many studies, PFAS associations with negative health effects were only observed in the highest fraction of serum concentrations studied, while at lower concentrations associations were not present. More evidence is required to fully understand PFAS associations and how they may vary with concentration. It has been suggested that low maternal serum concentrations show positive associations with childhood adiposity while higher serum concentrations show positive, negative, or non-monotonic dose responses with childhood obesity; this is consistent with PFAS being endocrine disrupting chemicals (EDCs) [106], since EDCs have been shown to induce non-monotonic dose responses. Much of this association has been investigated because of the concrete developmental effects PFASs have on rodent birthweight when exposure occurs in utero. Importantly, animals differ from humans in PFAS metabolization and gestational duration, and many animal effects were observed with very high levels of PFAS. Lipid profiles are associated with adult weight and prevalence of obesity. There is much conflicting evidence on whether PFAS plasma concentrations are associated with less favorable or beneficial outcomes on lipid profiles. Several studies have linked PFAS with higher total cholesterol, higher triglyceride levels, and higher low-density lipoprotein, while other studies have linked higher PFAS plasma concentrations with beneficial lipid profile effects. Drawing conclusions based on adiposity effects of PFAS in animals should be carefully considered as well. Notably, mice have differences in adipocyte-derived hormones; for example, resistin is secreted by adipocytes in the mouse but in humans is not expressed in adipocytes and is primarily present in monocytes and macrophages. The conflicting literature on PFAS and its associations with health highlights a need for further understanding, at the level of cells and biological signaling, of PFAS effects on humans. These contradictions are exacerbated by the poor understanding of the toxic mechanisms PFAS elicit. Several of these mechanisms share common modulators: the Hippo signaling pathway and the cell cytoskeleton. Hippo signaling is a crucial cellular pathway involved in organ development, growth, homeostasis, stem cell maintenance, and regeneration. Briefly, the classic Hippo cascade involves the kinase regulators mammalian Ste20-like kinases 1/2 (Mst1/2) and large tumor suppressor 1/2 (Lats1/2), which act to regulate phosphorylation and degradation of Yes-associated protein (YAP) and transcriptional co-activator with PDZ-binding motif (TAZ). As downstream effectors of the Hippo pathway, YAP and its paralog TAZ target genes involved in cell growth, proliferation, differentiation, and development. When Hippo is active, YAP/TAZ are phosphorylated, leading to YAP/TAZ degradation and cytoplasmic retention. In the inactive state, YAP/TAZ accumulate in the nucleus and target proteins of the TEA-domain containing family (TEAD), runt-related transcription factors, and others.

Several factors can mediate activation of the Hippo pathway, including the classic cascade involving Mst1/2 and Lats1/2, mechanical cues, cell polarity and adhesion mechanisms, metabolic pathways, ligand-dependent activation, and hormone/growth factor control. Further, YAP/TAZ are involved in pathway crosstalk as well. YAP and its core upstream regulators Mst1/2 and Lats1/2 have been investigated in several organs, where their developmental dysregulation may be linked to changes in body weight. Many organ systems have been studied to identify developmental problems after knockout of Mst1/2, Lats1/2, or YAP. Through these investigations, it has been shown that changes in the main regulators and the effector YAP have caused lung epithelial defects, faults in kidney structure development, bone-to-fat ratio disruptions, over-proliferation during intestinal development, improper regulation of liver development including epithelial and hepatocyte maturation changes, and pancreas mass/size changes. It is still unclear how PFAS may disrupt YAP regulation and how these mechanisms tie into body weight and organ development/size control. PFAS may regulate the Hippo pathway via mechanosensing of extracellular matrix changes through focal adhesions and the actin cytoskeleton, a known route of Hippo pathway propagation. Because YAP/TAZ regulation through Rho/ROCK activation acts as a control mechanism for transcriptional control of cytoskeleton stability, PFAS may also be affecting cell function and viability through this Hippo pathway cascade. Adipogenic versus osteogenic polarization of mesenchymal stem cells is dependent on YAP/TAZ localization in the Hippo signaling pathway, a relationship linked through mechanical cues and cytoskeletal tension. An increase in nuclear YAP/TAZ localization corresponds to increased osteogenic stem cell differentiation, while increases in cytosolic YAP/TAZ correspond to higher adipogenic differentiation. Mechanical regulation of adipogenesis upon nuclear localization has been suggested to work through the transcription factor β-catenin or SMAD proteins rather than TEAD. Due to the cytoskeletal involvement in adipose tissue, it is plausible that PFAS may dysregulate adipose through perturbation of the cell cytoskeleton. PFAS chemicals have been found to act on the cell cytoskeleton through disruption of F-actin, microtubules, and gap junctions. PFAS has been shown to disrupt and fragment the actin cytoskeleton and tight junctions in mouse Sertoli cells and human microvascular endothelial cells. It is likely that PFAS affects cytoskeletal integrity and could change the balance of osteogenic/adipogenic polarization of mesenchymal stem cells and/or adipose tissue homeostasis. During adipogenesis, cytoskeleton remodeling is a preliminary process, and it has been found that the cytoskeletal components actin, tubulin, vimentin, and septin undergo localization and expression changes. Specifically, actin forms filament bundles in the cytoplasm of pre-adipocytes and short filaments in mature adipocytes, with similar organization in the microtubules, and vimentin regulates lipid droplet accumulation by forming cage structures that surround lipid droplets. Septin has been found to form filaments or rings depending on the timepoint within adipocyte differentiation. These findings support the cytoskeleton's role in regulation of adipogenesis and lipid accumulation.
In a study of rat cardiomyocytes, it was found that the adipokine adiponectin acts on Rho/ROCK, increasing Rho GTPase activity and inducing cytoskeletal remodeling to further regulate glucose uptake and metabolism. Adiponectin's effects were demonstrated by its ability to increase membrane microvillar-like protrusions and to increase actin polymerization to form filamentous actin/actin stress fibers. PFAS chemicals may thus be disrupting the cytoskeleton directly, or indirectly through changes in the adipokine profiles that adipose tissue secretes. Alternatively, as noted above, PFAS may regulate the Hippo pathway via mechanosensing of extracellular matrix changes through focal adhesions and the actin cytoskeleton, affecting cell function and viability through this cascade as well. The mechanical control of adipogenic differentiation of MSCs relies on both the integrity of the actin cytoskeleton itself and tension feedback from the myosin II motor, which acts directly on the Hippo signaling pathway.

There is previous work motivating the use of a nonvolatile cache to increase disk performance

This means that its adaptive size prevents it from negatively impacting performance. Additionally, the thousands of read operations it satisfies enable Anticipatory Spin-Up functionality. These works generally conclude that a small non-volatile memory write-cache can significantly increase performance by reducing disk traffic. With hybrid disks soon to be available, hybrid disk/non-volatile memory file systems, such as Conquest and Hermes, can be evaluated for their effectiveness at increasing file system performance by leveraging on-board non-volatile memory. Previous works have also looked at reducing hard disk power consumption using non-volatile memory. FLASH CACHE proposes to place a small amount of flash directly between main memory and disk, as an additional level in the caching hierarchy, to increase power savings as well as performance. NVCache focuses completely on power management, and therefore has a completely different architecture. Although Anand et al. don't use non-volatile memory, they propose ghost hints to anticipatively spin up a hard disk in a mobile system context while redirecting read I/O to the Internet during disk spin-up. Microsoft proposes to use hybrid disk drives to reduce hard disk power consumption, and to decrease boot time and application launch time in its upcoming Windows Vista operating system. They claim a hybrid disk can be spun down for up to 96% of the time with a 1GB NVCache. Unfortunately, neither algorithms nor workloads are described. Duty cycle is only one metric for disk drive reliability. Disk drive reliability must also factor in duty hours, temperature, workload, and altitude. Mean Time To Failure and Mean Time Between Failures are widely used metrics to express disk drive reliability. However, these metrics must be considered with care, as they are often incorrect. IDEMA has proposed a revised MTBF rating based on disk age.

Most adaptive spin-down algorithms for traditional disks mention that disk reliability decreases when using a spin-down algorithm, but don't quantitatively describe the impact. Greenawalt modeled the effects of different fixed time-out values and their impact on power conservation and disk reliability. They use a Poisson distribution to simulate inter-arrival access patterns and consider duty cycles as detracting X hours from the MTBF rating. Other strategies to save hard disk power involve pushing power management to applications. Weissel et al. propose that energy-aware interfaces should be provided to applications. Such interfaces can be used to convey priority or state information to an operating system. For example, deferrable I/O interfaces can be used by applications to inform the operating system that particular I/O requests may be deferred.

The sediments of coastal marine wetlands in California are inhabited by a variety of algal and bacterial primary producers in addition to the more conspicuous vascular plants that provide most of the physical structure of coastal salt marshes and seagrass meadows. The non-vascular plant flora includes microscopic cyanobacteria, anoxygenic phototrophic bacteria, diatoms, and euglenoids, often collectively known as "microphytobenthos". Larger green algae, red and brown seaweeds, and the macroscopic tribophyte Vaucheria are also residents of these ecosystems. Ecologically, sediment-associated algae and photosynthetic bacteria are key components of wetland food webs. They account for a substantial fraction of ecosystem primary productivity in California and in other regions. While understanding of the ecological roles and spatio-temporal dynamics of these organisms has improved, the diversity and natural history of the micro- and macroalgae of salt marshes and mudflats from the northeastern Pacific, including southern California, are still poorly understood. Lack of a deeper understanding of the diversity of these organisms within and between estuaries and estuarine habitat types impedes efforts to understand how spatio-temporal variation in the composition of benthic assemblages may affect ecosystem functions or how changes in assemblages may relate to anthropogenic impacts to wetlands.

To date, no comprehensive floristic account of wetland algae and photosynthetic bacteria in California has been produced by the phycological community. However, some taxonomic information on these organisms exists in scattered sources. Early research on the taxonomy of wetland algae began with William Setchell, Nathaniel Gardner, and George Hollenberg. These phycologists produced lists and/or descriptions of cyanobacteria and macroalgae from salt marshes and mudflats in several publications, but the accounts principally focused on either rocky shore cyanobacteria or wetland vascular plant floras. Several decades later, Zedler published a list of cyanobacteria, diatoms, and green algae collected from Tijuana Estuary at the southern extreme of the state. She recorded 32 species of diatoms, four cyanobacterial taxa, and the green algal genera Rhizoclonium and Enteromorpha, but noted that her account was not comprehensive. Wilson and Carpelan studied benthic diatoms from Mugu Lagoon and pelagic diatoms in four lagoons in northern San Diego County, respectively. Records of wetland macroalgae have been compiled for Humboldt Bay in northern California, and for Newport Bay in southern California. Stewart's treatment of San Diego County seaweeds also notes wetland occurrences of marine macroalgae. There are formidable obstacles to producing a comprehensive flora of tidal wetland algae for any localized region. First, the phylogenetic breadth of photosynthetic organisms in tidal wetland habitats requires a diversity of specialists, employing an array of tools from electron microscopy to culturing techniques to standard phycological methods for macroalgal identification and preservation. As Sullivan and Currin note, funding for such an endeavor is likely to be difficult to acquire. Moreover, the systematics of many groups of these organisms is in flux. In particular, study of the cyanobacteria is complicated by the existence of competing bacteriological and morphological classification schemes and by widely differing approaches to using morphology to delineate species. An additional consideration is that application of names to microalgal and cyanobacterial taxa for a given locality is at least somewhat dependent on decisions made in other geographic regions or habitats, since detailed taxonomic studies are haphazardly distributed in space and time.

For instance, some important cyanobacterial reference sources either treat distant geographic areas or describe primarily freshwater and terrestrial organisms. Despite these challenges, floristic and systematic work on wetland microalgae and seaweeds provides the foundation for progress in basic biodiversity research. In addition to the possibility that cryptic taxa may be discovered in the flora, algae are excellent systems for investigation of molecular versus morphologically based phylogenies. Better knowledge of the diversity of microproducers present in coastal wetland habitats should also enable a better understanding of ecological interactions between microphytobenthos and other wetland organisms and facilitate the use of biodiversity metrics as a means of assessing ecosystem health and dynamics. In this paper the common benthic cyanobacteria, microalgae, and seaweeds associated with sediments from tidal wetlands in southern California are described and illustrated. The goal is to provide preliminary documentation of the local flora and add to the fragmentary knowledge of these organisms in the region. The paper focuses on new collections made from Mission Bay and Tijuana Estuary in San Diego County, but also includes some records of species previously recorded from wetlands throughout southern California. Organisms included here were assigned tentative genus names based on morphological features visible by eye or by light microscopy. Supporting references pertinent to the identification of taxa, their local distribution, and their natural history are also included. Of the various taxa treated, documentation of the cyanobacteria is most thorough, partly filling the significant gap in information on these common inhabitants of tidal wetlands in the region. Observations and photographic documentation were made on live organisms, or occasionally on organisms grown in culture. Specimens were often kept alive by transferring moist field sediment to incubation in the laboratory. Field sediment and cultures were maintained at about room temperature with illumination. Cultured organisms were grown on sterilized f/2 media prepared in artificial seawater with or without sterilized glass particles as a substrate. Organisms living on field-collected sediment were kept and observed up to about seven months following removal from the field. Photographs were taken with a digital camera through compound microscopes. Diatoms were identified to genus where possible using Round et al. Cyanobacterial taxa were generally identified to genus using the recent taxonomic treatments in Anagnostidis and Komárek, Komárek and Anagnostidis, and Boone and Castenholz. Humm and Wicks, Desikachary, and Setchell and Gardner were also consulted for identification and nomenclatural purposes. Macroalgae attached to sediment-associated substrates or occurring loosely in wetland habitats were pressed fresh on herbarium paper and dried. Identification and current nomenclature of macroalgae follow Abbott and Hollenberg and Gabrielson et al.

Skin is one of the largest organs of the body and has functional roles in immune response, physical protection, and thermal regulation. As aging occurs, skin function and healing capacity are reduced. Skin aging is frequently divided into two related processes: intrinsic and extrinsic aging. Intrinsic aging, also referred to as chronological aging, includes genetic and hormonal changes and the progression from cell maturity to cellular senescence.
Extrinsic aging, also referred to as environmental aging, represents the impact of the environment, including photoaging associated with sun exposure, cigarette smoking, pollution, chemical exposure, and trauma. Due to the different underlying mechanisms, the characteristics of each type of aged skin are different. Chronologically aged skin presents as unblemished, smooth, pale, and dry, with lower elasticity and fine wrinkles, while environmentally aged skin has coarse wrinkling, rough texture, pigmentation changes, and lower elasticity.

Microstructural changes in intrinsically aged skin include decreased dermal vasculature; changes in dermal elasticity and increased collagen disorganization; build-up of advanced glycation end products and changes in glycosaminoglycan and proteoglycan concentrations/organization contributing to stiffening of dermal structure, frailty, and decreased hydration; imbalance of tissue inhibitors of metalloproteinases and matrix metalloproteinases, resulting in an imbalance between collagen deposition and breakdown; and flattening of the dermal-epidermal junction/loss of rete ridges. Aging also contributes to variations in epidermal and dermal thickness and reduced subcutaneous fat volume. There are also many changes related to cell populations in all three main skin compartments, including reduced epidermal cell turnover, a drop in the number of active melanocytes, decreases in dermal fibroblast concentrations, and decreases in immune cells and immune function. Abnormalities of the skin barrier occur during aging and often present as dryness or skin irritation. In aged skin, barrier function has been studied in the context of decreases in filaggrin, increases in pH, altered lipid presence, and changes in cornified envelope arrangement. These changes add to the fragility of older skin and the higher chance of infection; however, it remains unclear exactly how these changes take place and what mechanisms are controlling them. On the molecular scale, expression levels of soluble factors, proteins, and vitamins are both effects of and contributors to aging phenotypes. Examples include upregulation of stress regulatory proteins, increases in AP-1, and declines in vitamin D production by the epidermis. These changes are largely attributed to increases in reactive oxygen species, DNA mutations, telomere shortening, increased cell senescence, and hormonal changes. Changes in skin aging have been associated with fluctuations in expression patterns of integrins, including α6 and β1 integrins. In healthy human skin, α6 and β1 integrin expression is localized on the basal side of basal keratinocytes. Defects in integrin expression are present in human blistering skin diseases, with supporting evidence in knockout mice [34] and also in aged human skin, although further work is necessary to understand how integrin expression changes in aging. Aging in the skin has sex-related differences as well; specifically, males show faster thinning of the dermis and decline in collagen density than females. Males undergo a decline in androgen levels while estradiol levels remain constant; these changes result in a linear decline in skin thickness and collagen content in men [10]. Women experience a linear decline in both androgens and estrogens and an additional post-menopausal estrogen decline, which is linked to lower collagen content, lower skin moisture and capacity to hold water, a lessened wound healing response, thinner skin, and lower skin elasticity. Detailed summary and discussion of sex-related changes in skin aging have been previously reviewed. These intrinsic mechanisms are compounded by environmental skin aging. A key example is the effect of ultraviolet irradiation, which accelerates the telomere shortening and DNA damage present with intrinsic aging. Other extrinsic aging factors and examples of compounding UV effects are discussed in previous literature. Overall, skin aging at the molecular, cellular, and tissue levels continues to be a field of active research.

A decrease in reliability is expected because the spin-down algorithm has become more aggressive

Hybrid disks present an opportunity for spin-down algorithms to further reduce power consumption while minimizing the performance and reliability impact they impose on the media itself. We now describe four spin-down algorithm and I/O subsystem enhancements. Spin-down algorithms controlling traditional disks compute the idle period as current time minus last access time, where last access time is the time of the last disk access. If the idle period exceeds the current time-out, the rotating media is spun down. I/O type, whether read or write, is ignored because any request, regardless of type, will cause the rotating media to spin up. This is not true of hybrid disks, as one of the intents for adding an NVCache to a hard disk is to extend the duration of spun-down periods by servicing I/O to and from the NVCache. Note this assumes the block I/O layer or disk driver is aware of the rotating media's power state, and will redirect I/O while it is at rest. Our previous work describes a mechanism implemented in the block layer to redirect I/O to and from a physically separate flash-based NVCache while the disk is spun down. With such support, NVCache utilization is conveyed to the spin-down algorithm in the context of extended spin-down periods. A hybrid disk-unaware spin-down algorithm will still ignore I/O type because it believes any I/O will cause a spin-up. However, with the above redirection mechanism, such an assumption is false—write requests are actually unlikely to cause a spin-up. Therefore, we present Artificial Idle Periods, a spin-down algorithm modification for a hybrid disk which considers I/O type when computing a disk's idle time, by recording idle time as time since the last read request. When a request occurs for a disk in the active mode, the time-out value is reset only on a read request. The idle period is thus artificially increased to current time minus last read access time.
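To make the idle-period computation concrete, the following minimal sketch shows one way a timer could reset only on reads while letting write-only traffic extend the idle period. The class and method names are illustrative assumptions, not our block-layer implementation.

```python
import time

class ArtificialIdleTimer:
    """Minimal sketch of the Artificial Idle Periods idea (illustrative names).

    With write redirection to the NVCache, only read requests force a spin-up,
    so only reads reset the idle clock; writes leave the idle period growing.
    """

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_read_time = time.monotonic()
        self.spun_down = False

    def on_request(self, is_read):
        now = time.monotonic()
        if is_read:
            self.last_read_time = now   # a read resets the idle period and spins the disk up
            self.spun_down = False
        # writes are absorbed by the NVCache and do not reset the clock

    def poll(self):
        idle = time.monotonic() - self.last_read_time   # current time minus last read time
        if not self.spun_down and idle >= self.timeout_s:
            self.spun_down = True       # issue a standby command to the rotating media
        return self.spun_down
```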

As a result, even if a hybrid disk is actively servicing requests, it can be spun down and remain so, provided I/O consists only of write requests. Such a modification has several implications. First, duty cycles may be consumed faster; idle periods are artificially increased, so a disk will spin down sooner and probably more frequently. Second, I/O performance may degrade with sequential write workloads, as flash sequential write throughput is only a fraction of that of the rotating media, and the NVCache must still be periodically flushed to disk. Finally, the NVCache will endure more erase operations resulting from its increased workload, decreasing its expected lifetime.

As we will show in our evaluation, even with Artificial Idle Periods, typically less than 10% of the NVCache is used per spin-down period to cache writes. Although the NVCache is checked for desired read requests, reads are still typically responsible for initiating spin-ups. NVCache-cached writes are not successful at servicing read requests because the host operating system is likely to be idle and not evicting buffer cache pages quickly. As a result, most read requests will be satisfied by the buffer cache. To ensure that read requests are satisfied by the NVCache, we propose a Read-Miss Cache, an area in the NVCache that is populated with unsatisfied NVCache read requests. The hypothesis is that read misses causing a disk to spin up are likely to cause a disk to spin up again. When an NVCache read miss occurs while the rotating media is spun down, the requested content is read from the newly spun-up disk and returned to the file system. It is stored in the Read-Miss Cache, and subsequent sequential reads are also stored in the Read-Miss Cache, which we refer to as preloading. Preloading stops when a non-sequential read or write request occurs. Only the most frequently preloaded content is stored in the NVCache—preloading data into the NVCache merely updates its frequency count, including for the original read miss.
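A compact sketch of this behaviour is given below. The eviction rule and data structures are our own illustrative choices, intended only to make the frequency-based retention and sequential preloading concrete; they are not the exact policies of the implementation.

```python
from collections import defaultdict

class ReadMissCache:
    """Illustrative sketch of a Read-Miss Cache with sequential preloading.

    Read misses that occur while the disk is spun down are cached along with
    the sequential reads that follow them; only the most frequently preloaded
    blocks are retained when the cache is full.  Policies here are assumptions.
    """

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = {}                  # lba -> data
        self.freq = defaultdict(int)      # lba -> times preloaded (incl. original miss)
        self.next_preload = None          # next LBA expected while preloading

    def on_read_miss(self, lba, data):
        self._store(lba, data)            # the miss itself is cached
        self.next_preload = lba + 1       # begin preloading sequential reads

    def on_read(self, lba, data):
        if self.next_preload is not None and lba == self.next_preload:
            self._store(lba, data)        # sequential read: keep preloading
            self.next_preload = lba + 1
        else:
            self.next_preload = None      # non-sequential read stops preloading

    def on_write(self):
        self.next_preload = None          # any write also stops preloading

    def _store(self, lba, data):
        self.freq[lba] += 1               # re-preloading only bumps the frequency count
        if lba in self.blocks:
            return
        if len(self.blocks) >= self.capacity:
            victim = min(self.blocks, key=lambda b: self.freq[b])
            if self.freq[victim] >= self.freq[lba]:
                return                    # keep the more frequently used content
            del self.blocks[victim]
        self.blocks[lba] = data
```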

The Read-Miss Cache size is dynamic. However, a maximum size constraint can be supplied to bound its growth, represented as a percent of the total NVCache. There is also a static minimum Read-Miss Cache size set to 1%. The Read-Miss Cache grows when a read miss occurs causing the disk to spin up, and shrinks when a write operation cannot be stored in the NVCache because there is no available room.

We implemented hybrid disk functionality in the Linux kernel, mimicking a hybrid disk using flash and a traditional disk by redirecting I/O traffic at the block I/O layer. While the disk is spun down, I/O is intercepted at the block layer and redirected to flash memory. To evenly spread out block erase cycles and reduce page remappings, redirected writes are appended to flash in log order. Each redirected request is prepended with a metadata sector describing the original I/O: starting LBA, length, spin-down interval, etc. The redirected LBA numbers are stored in memory to speed up read requests to flash. When the flash fills up or a read miss to it occurs, the corresponding disk is spun up and the flash sectors are flushed to their respective locations on disk.

We also built a simulator to model a hybrid disk to allow expedient evaluation of the proposed enhancements with several week-long block-level traces. The simulator considers power relative to the given trace for different power states: read/write, seek, idle, standby, and spin-up. For the notebook drive, power state transitions to and from performance, active, and low-power idle are done internally to the drive and are workload dependent. Therefore, we make a worst case assumption and assume the drive is always in low-power idle, providing a lower bound on spin-down algorithm performance, relative to idle power. Read/write I/O power is computed using the disk's maximum specified I/O rate, and seek power is computed with the drive's average seek time rating for non-sequential I/O.
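To make the simulator's power accounting concrete, the sketch below totals energy over a trace from per-state residency times and per-state power draws. The numeric values in the usage example are hypothetical placeholders, not measurements from Table 1.

```python
def trace_energy_joules(seconds_in_state, power_w):
    """Sketch of per-trace energy accounting over the simulated power states.

    seconds_in_state : seconds spent in each state over the trace
    power_w          : power draw of each state in watts (e.g. from a datasheet)
    Both dictionaries are keyed by the state names used in the text.
    """
    states = ("read_write", "seek", "idle", "standby", "spin_up")
    return sum(seconds_in_state[s] * power_w[s] for s in states)

# Hypothetical one-day breakdown for a 2.5 in drive (placeholder values only)
power_w = {"read_write": 2.5, "seek": 3.3, "idle": 0.9, "standby": 0.25, "spin_up": 4.5}
seconds = {"read_write": 1200, "seek": 300, "idle": 18000, "standby": 66540, "spin_up": 360}
print(trace_energy_joules(seconds, power_w))   # total joules consumed over the trace
```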

The spin-down algorithm implemented is the multiple experts spin-down algorithm. It is an adaptive spin-down algorithm developed by Helmbold et al. The spin-down algorithm is based on a machine learning class of algorithms known as multiple experts. In the dynamic spin-down algorithm, each expert has a fixed time-out value and weight associated with it. The time-out value used is the weighted average of each expert's weight and time-out value. It is computed at the end of each idle period. After calculating the next time-out value, each expert's weight is decreased in proportion to the performance of its time-out value (a simplified sketch of this update is given below).

To evaluate the proposed enhancements, we use several different block-level access traces, shown in Table 2. We use four desktop workloads and a personal video recorder workload. Each workload is a trace of disk requests, and every entry is described by: I/O time, sector, sector length, and read or write. The first workload, Eng, is a trace from the root disk of a Linux desktop used for software engineering tasks; the ReiserFS file system resides on the root disk. The trace was extracted by instrumenting the disk driver to record all accesses for the root disk to a memory buffer, and transfer it to userland when it became full. A corresponding userland application appended the memory buffer to a file on a separate disk. The trace, HPLAJW, is from a single-user HP-UX workstation. The next trace, PVR, is from a Windows XP machine used as a Home Theater PC running the personal video recording application, Beyond TV. The WinPC trace is from a Windows XP desktop used mostly for web browsing, electronic mail, and Microsoft Office applications. The block-level traces for both Windows systems were extracted using a filter driver. The final trace, Mac, is from a Macintosh PowerBook running OS X 10.4. The trace was recorded using the Macintosh command line tool, fs_usage, by filtering out file system operations and redirecting disk I/O operations for the root disk to a USB thumb drive. The physical devices we present results for are a Sandisk Ultra II Compact Flash card, a Hitachi Travelstar E7K100 2.5 in drive, and a Hitachi Deskstar 7K500 3.5 in drive. The power consumption for each state is shown in Table 1. Note that in all figures except Figure 9, we show results using the 2.5 in drive. We present results for both a 2.5 in and a 3.5 in drive in Figure 9 to motivate placing an NVCache in a 3.5 in form factor.

Figure 5 shows the results of making the I/O subsystem aware of a hybrid disk's NVCache, such that a write request occurring while the rotating media is at rest is redirected to the NVCache.
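The weight update referenced above can be sketched as follows. The exponential loss weighting, the spin-up cost term, and the learning rate are illustrative stand-ins for the details of the Helmbold et al. share algorithm, not a faithful reproduction of it.

```python
import math

def update_timeout(expert_timeouts, weights, idle_period, spinup_cost_s, eta=0.5):
    """One illustrative multiple-experts update at the end of an idle period.

    expert_timeouts : fixed candidate time-out values (seconds)
    weights         : current expert weights (same length, positive)
    idle_period     : length of the idle period that just ended (seconds)
    spinup_cost_s   : spin-up penalty expressed in equivalent seconds of idle power
    Returns (next_timeout, new_weights).  Loss function and eta are assumptions.
    """
    losses = []
    for t in expert_timeouts:
        if idle_period <= t:
            losses.append(idle_period)            # disk kept spinning the whole period
        else:
            losses.append(t + spinup_cost_s)      # spun down after t, then paid a spin-up
    worst = idle_period + spinup_cost_s
    new_weights = [w * math.exp(-eta * l / worst) for w, l in zip(weights, losses)]
    total = sum(new_weights)
    new_weights = [w / total for w in new_weights]
    next_timeout = sum(t * w for t, w in zip(expert_timeouts, new_weights))
    return next_timeout, new_weights
```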

Figure 5 shows the percentage of time the disk can remain spun down as a function of the NVCache size. The Eng trace benefits the most from the write cache by increasing its spin-down time from 71% to 92%, which translates into an increase of slightly more than one and a half days of spin-down time for the seven-day trace. This is primarily due to the periodicity of writes-while-idle, which occur more frequently than in any other workload. The other workloads also benefit from a write-cache by increasing their spun-down time by 4–10%. Figure 5 also shows the percentage increase in spun-down time by adding Artificial Idle Periods to write-caching. This plot shows that Artificial Idle Periods significantly increase the percentage of time a disk is spun down. The most significant benefit comes when the NVCache is less than 1MB. By adding Artificial Idle Periods to an NVCache with less than 1MB, its utilization increases as I/O redirection to the NVCache occurs sooner and more frequently. With larger NVCache sizes, Artificial Idle Periods is utilized less often, and so its impact is less pronounced. However, the percentage increase by adding Artificial Idle Periods still stabilizes between 3.5% and 5% for all but the PVR workload, which stabilizes at a 27% increase in spun-down time. The PVR workload benefits so much from Artificial Idle Periods because its workload consists of periodic write requests without interleaving read requests. Although write-caching and Artificial Idle Periods are excellent solutions to increase the time a disk spends in standby mode, it is important to recognize the associated reliability impact. Figure 5 also shows the number of expected years to elapse before the 2.5 in disk exceeds the duty cycle rating (a back-of-the-envelope version of this estimate is sketched below). By enabling write-caching while the disk is spun down, reliability increases with respect to utilized NVCache size. As the NVCache exceeds 10MB, reliability stabilizes because it becomes underutilized beyond this point. Figure 5 also shows the expected years before the disk will exceed the duty cycle rating, but with Artificial Idle Periods on. In this figure, we see that reliability decreases relative to write-caching alone. However, the expected years before exceeding the duty cycle rating is still more than two and a half years for the Mac trace, the lowest of the five workloads. Note that without write-caching or Artificial Idle Periods, the Mac workload would exceed the duty cycle rating in seven months.

Figure 6 shows the results for a 256MB NVCache with write-caching, Artificial Idle Periods, and a Read-Miss Cache as a function of the maximum Read-Miss Cache size. Figure 6 shows the number of operations satisfied by the Read-Miss Cache while the rotating media is spun down. This figure shows that with less than half the NVCache enabled for the Read-Miss Cache, the working set of read requests to the NVCache is captured. Figure 6 also shows the average Read-Miss Cache size as a function of the maximum Read-Miss Cache size. The average Read-Miss Cache size grows linearly with respect to the maximum size until 90%, after which it deteriorates quickly, confirming that usually 10% of a 256MB NVCache is used for write-caching and Artificial Idle Periods. The PVR workload is an exception as it stabilizes at 27% because of its high NVCache utilization from television content recording. Figure 6 also shows the percentage increase in spun-down time by adding a Read-Miss Cache to an NVCache with write-caching and Artificial Idle Periods.
Here we see that the Read-Miss Cache only increases the spun-down time percentage by at most 1.5%. Note that with only write-caching and Artificial Idle Periods enabled for a 256MB NVCache, all but the PVR workload are spun down for 90–95% of the time, which leaves little room for improvement. Although the Read-Miss Cache does not increase spun-down time significantly, there is still a consistent increase when the Read-Miss Cache is allowed to use the entire NVCache.
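The duty-cycle reliability figures discussed above amount to projecting the observed spin-down rate forward in time, along the lines of the back-of-the-envelope sketch below. The rating and trace counts shown are hypothetical placeholders, not values from our traces or Table 1.

```python
def years_until_duty_cycle_limit(duty_cycle_rating, spin_downs_in_trace, trace_days):
    """Project how long a drive could sustain the observed spin-down rate.

    Assumes the trace's spin-down behaviour continues unchanged and that each
    spin-down (with its matching spin-up) consumes one duty cycle of the rating.
    """
    spin_downs_per_year = spin_downs_in_trace * (365.0 / trace_days)
    return duty_cycle_rating / spin_downs_per_year

# Hypothetical example: a 300,000 duty-cycle rating and 600 spin-downs in a 7-day trace
print(years_until_duty_cycle_limit(300_000, 600, 7))   # about 9.6 years
```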

Some increases in recharge are indicated in the northeast and in parts of the high Sierra

The traces of this history may remain in plain sight, but reconstructing this history requires more than critical close readings of China-themed films or Chinatown tourism publicity from the period. While close reading methodologies can help us understand the ways in which popular culture structured dominant ideas of race, gender, and nation, these methods tell us little of the motives of the Chinese Americans who helped produce these films and related performances, or of the emotional or social ties that the production of these performances elicited. To understand these performances in their historical context necessitates both close readings of popular racial representations alongside the use of archival documents and oral history interviews produced by community members. This archival work can help us piece together the lives and actions of the Chinese Americans who played such an important role in producing cultural artifacts from this period. This dissertation unites archival and oral history methodologies from social history with close textual analysis from film studies to foreground the ways seemingly everyday Chinese Americans influenced the process of racial formation. In the process, this interdisciplinary methodology produces new ways of understanding Chinatown both as a representation and as an ethnic enclave. As part of this interdisciplinary methodology, this dissertation is closely grounded in a form of social history often referred to in Asian American Studies as community history. As first deployed in the developing field of Asian American studies in the 1970s and 1980s, Asian American community history utilizes community-engaged methods such as oral history and collection of family and community documents to create an archive on which the history of a given place-based, ethnic enclave can be written. If the primary goal of mainstream American history has long been to produce academic interventions in society's knowledge of the past, the goal of Asian American community history, as it was originally developed, was much more politically informed.

Developing out of the political imperatives of the Asian American Movement of the late 1960s, community history methodology sought to address broader historical silences within mainstream narratives of American history while simultaneously documenting, building, and empowering local Asian American communities. Often produced collectively, these community histories originated both out of emerging ethnic studies departments as well as out of the first local Asian American historical associations that were then coming into being. While Asian American community history developed around the same time that "public history" was emerging as an accepted academic field within American history, the audience for these early Asian American community histories was not an unidentified public audience but rather the members of the same ethnic enclaves whose history was being told. This project may be written as a dissertation in the field of Ethnic Studies, but it maintains the ethos of these earlier Asian American community histories. In particular, this dissertation builds on the work of community historians at the Chinese Historical Society of Southern California and the Chinese American Museum of Los Angeles. While other archival institutions were used, the collections of these two institutions form the foundation of the dissertation. Central to this dissertation is the Southern California Chinese American Oral History Project. Produced in the late 1970s and early 1980s as a joint effort between the Chinese Historical Society of Southern California and UCLA's Asian American Studies Center, the project interviewed 165 people about their lives in Los Angeles until 1945. The resulting collection contains four hundred hours of taped interviews and 1,700 pages of summary transcripts. A collective effort that involved volunteers from both UCLA's Asian American Studies Center and the CHSSC, the project remains perhaps the most comprehensive archived Chinese American oral history collection of its type focusing on the pre-war period. I employ these archival and community history methodologies alongside those of cultural studies.

Grounded in the work of scholars like Stuart Hall and Edward Said, this project sees culture as intimately tied to social, economic, and political power. Power relations of a given social structure are encoded in popular representations, and subaltern groups, such as Chinese Americans in the 1930s and 1940s, use culture as a means of engaging the intersecting structures of race, class, gender, and sexuality. This project thus sees films, newspaper articles, and above all Chinatown itself, as texts that can be read to better understand the social structure in a given place and period of time. Reading these cultural texts can lead not only to a better understanding of power relations within a given historical moment, but also to a better understanding of the ways those groups contested their subaltern position within the social structure. Historians are often criticized for their overreliance on the written word as a primary source, and certainly few of the existing studies on Chinese Americans in the first half of the twentieth century have given popular cinematic representations from the 1930s and 1940s the same attention as the written word. In contrast, my dissertation utilizes visual and material culture as "texts" that can be read in a way that will supplement rather than supplant our understanding of community history. As the first dissertation on the history of Los Angeles Chinatown and its relationship to Hollywood film, this project bridges these methodologies from social history and cultural studies to demonstrate the ways in which members of the Chinese American community in Los Angeles shaped dominant ideas of race, gender, and nation. I contend that the same transformations in the urban environment that facilitated the development of film in the late nineteenth century also transformed Chinatown into a similar type of cultural apparatus. Within the field of film studies, scholars such as Tom Gunning, Ben Singer, Lauren Rabinovitz, and Vanessa Schwartz have advanced what has come to be known as the modernity thesis. This modernity thesis posits that urbanization brought about a transformation in the social act of seeing which facilitated the development of new types of visual amusements, key among which was early silent film. A handful of film studies scholars have touched on this visual transformation and its relationship to both New York and San Francisco Chinatowns. My project builds on this earlier scholarship by briefly tracing the shared symbiotic history of Chinatowns and cinema from their roots in Chicago in the 1890s and San Francisco after the 1906 Earthquake. The dissertation then moves on to demonstrate the convergence and development of these two mediums—Chinatown and film—in Los Angeles between the 1910s and the 1940s. Repositioning Chinatown as a medium of cultural production, symbiotically tied to the development of cinema, allows for a more nuanced understanding of the amount of agency Chinese Americans were able to exercise over self-representations of their own community over the course of the first half of the twentieth century. It also allows us to see the myriad ways in which Chinese Americans in Los Angeles challenged, rearticulated, and at times reinforced ideas of American Orientalism.

Edward Said defines the idea of Orientalism as a system of knowledge and power through which the West defines itself against the East. For Said, the Orient is more than simply an idea. Instead it is "a mode of discourse with supporting institutions, vocabulary, scholarship, imagery, doctrines, even colonial bureaucracies and colonial styles." While Said originally advanced the idea of Orientalism in discussing Europe's relationship to the Middle East, a growing number of scholars have examined the way Orientalism functions within the United States. Scholars such as Gordon Chang and John Kuo Wei Tchen have discussed the roles that discourses and popular conceptions of China played historically in constructing the idea of the United States as a modern, progressive nation. At the same time, Mary Ting Yi Lui, Anthony Lee, and Kay Anderson have all discussed various aspects of Chinatown and Orientalism. These scholars have demonstrated the ways in which popular ideas about China and Chinese people defined so many aspects of the way the United States understood itself as a nation in the late nineteenth and early twentieth centuries. While recognizing the lasting permanence of American Orientalism as a foundational ideology of the United States, scholars have also acknowledged an important shift that occurred around the Second World War in the way Chinese Americans were popularly perceived. During the Chinese Exclusion Act period, American Orientalism defined Chinese Americans legally, economically, and culturally as outside the boundaries of the US nation-state. Throughout the Exclusion Act period, the U.S. citizen came to be defined against the Asian immigrant. As such, representations of an American citizen of Chinese descent remained in many ways a cultural impossibility. Beginning around the Second World War, the ideology of racial liberalism took hold within the United States. With racial liberalism, the United States began the process of attempting to incorporate and manage, rather than exclude, a wider range of racial and ethnic groups within the United States. For Chinese Americans this period saw the symbolic end of the Chinese Exclusion Act in 1943 and the increasing acceptance of Chinese Americans into broader society. For the first time, large numbers of Chinese Americans were able to find jobs outside of the nation's Chinatowns. While Orientalist ideas about Asia and Asian people did not disappear, the advent of racial liberalism transformed the ways in which American Orientalism functioned. Scholars have argued that the shift toward racial liberalism in general and the increasing incorporation of Chinese Americans into the nation-state in particular was largely the result of geopolitical factors directly linked to the war itself. In this narrative, the U.S. alliance with China during the Second World War, and the broader need to combat Japanese propaganda that labeled the United States as a racist nation, necessitated a transformation in the way in which the country treated its Chinese American residents.
This accepted historical narrative leaves little room for the agency of Chinese Americans in the shifting notions of race, gender, and nation, and it further demonstrates Karen Leong's observation that too often studies of American Orientalism see only whites as being able to engage these Orientalist discourses. In contrast to most earlier studies, I contend that Chinese American engagement with American Orientalism, through Chinatown performance, helped lay the foundation for the eventual incorporation of Chinese Americans into the nation-state under the logic of racial liberalism during World War II. During the Chinese Exclusion Act era, Chinese Americans were forced to negotiate U.S. citizenship and national belonging through the discourse of American Orientalism. During this period, the question was not whether or not Chinese Americans would be defined as an Other against the US citizen, but rather what form this image of the Asian Other would take within the popular imagination. Therefore, understanding Chinese American self-representations before the Second World War necessitates an acknowledgement of the discursive possibilities and limits under which Chinese Americans operated during this period. While a few Chinese Americans such as Wong Chin Foo did attempt to present cultural representations of Chinese Americans as U.S. citizens, most Chinese Americans utilized a largely different strategy to combat Orientalist depictions of Chinese immigrants as a Yellow Peril. Examining what I call "Chinese American Orientalism" as a challenge to Yellow Peril stereotypes, the project foregrounds the ways that Chinese merchants, actors, and street performers used the medium of Chinatown to advance a vision of their community that at once challenged earlier Yellow Peril depictions while still maintaining some of the underlying assumptions about Chinese people's differences from whites. In the face of Yellow Peril representations that defined Chinatown as an underground den of violent opium-dealing tongs, Chinese American Orientalism cast the nation's Chinatowns as clean, modern commercial areas where whites could shop and eat. These representations remained Orientalist in that they constructed Chinese culture in opposition to that of the West, but this form of Chinese American Orientalism negated rather than perpetuated ideas of Chinese Americans as a threat. This Chinese American Orientalism challenged images of Chinatown as a community of violent, opium-addicted bachelors living in underground dens and presented in its place an image of Chinatown as the modern extension of an ancient Oriental culture and tradition, one that could easily be commodified and sold to white visitors to financially support the needs of an emerging Chinese American middle class.

Training related to care policies and procedures was also provided to this interdisciplinary team

Same with nurses due to their specialty care facilities. So no one ever looks to come here full time. And the chiefs here know that we are posting positions to be filled with [temporary staff]. And so the cycle goes." To overcome the cycle of temporary-staff usage, enable the sustainability of change, and maintain the spirit and knowledge of the implementation through consistent staff, a separate program was developed. Nursing positions had the highest number of vacancies overall and also carried the highest fill rate. An analysis was commissioned to understand the ratio of clinical-position staffing relative to workload. Time-motion studies were carried out and combined with human resources initiatives designed to place employees in "hard-to-fill" posts. The result of these efforts was a program dedicated to nurse staffing designed to work hand in hand with CCM implementation, ultimately enabling sustainability of the overarching transformation effort.

For health care quality improvement efforts to be sustainable in a correctional environment, local physician and nurse champions are required. Also important is an interdisciplinary implementation team involving a variety of health care team members. In the custodial health care setting, correctional officers are key stakeholders. Hence, identifying a local custody champion to be part of the interdisciplinary team was critical. The team of interdisciplinary champions was provided sufficient release time to participate in training and development in the areas of quality improvement, the chronic care model, and clinical diabetes content. Gaining the support of the custodial personnel who are responsible for prisoner control and safety related to health care needs was essential to overcome the institutional barrier. Correctional officers, and more specifically their captains and assistant wardens of health care, were brought into the primary care team meetings to be educated and trained on the methodology and processes. Feedback was received from team members on how to improve existing processes. The CCI implementation team decided that, as cultural change agents, they ought to be provided time to plan, implement, and disseminate change after inculcating an understanding of what the change meant for them, their particular departments, and their coworkers.

In this custodial environment, a reliance on one's teammates and coworkers for success and safety was paramount, much as in work environments with nuclear reactors. In the receivership experience, champions were selected based on their expressed interest in serving as a catalyst for change in their pilot site; they were also identified as excellent clinicians who were respected by their peers. Physician involvement is critical to a successful change effort and program implementation within a health care delivery setting. Among the champions, the physician leaders' commitment to the model and the change it represents must be asserted from the beginning. In order to achieve the goal of developing workforce competencies after receivership, several local institutions' chief medical officers were recruited as the core project team's clinical leaders. Practicing physicians and nurse consultants were recruited as quality improvement advisors.

Health care delivery system design for prisons is a significant challenge. Contrary to models of care delivery external to the correctional environment, the primary mission of the institution from the custody perspective is security; health care delivery is procedurally treated as secondary. Typically, prisons are constructed with little or no space for clinics or medical supply storage. To implement a new CCM, the fundamental delivery system design had to start with the basics of creating adequate space for exam rooms so that the interdisciplinary team could provide integrated patient care. Once the issue of clinic space was addressed, the delivery system was changed to shift from a siloed, single-provider approach to a patient-centered team model. The Chronic Disease Management Program's pilot prison sites adopted a managed-care-based primary care model and redefined the care team's roles and definitions from a traditional solo provider medical model to an interdisciplinary team model. Treatment of inmate-patients was thus transformed to enable a more comprehensive level of care with each visit, with the goal of increasing quality of care and reducing the need for future medical visits.

Given the state government's bureaucratic structure and a heavily unionized workforce, it was necessary to create new job classifications in the organizational structure as permanent positions—for example, nurse executive, nurse clinical care coordinator, nurse case manager, and medical assistant. In California, state employees are unionized, and as a result managing labor relations proactively during the delivery system design was critical to minimize employee grievances, union resistance, or both. Transparency and a proactive, collaborative approach were the keys to minimizing resistance from the unions. Planned group visits or health education classes proved to be a strong component of the delivery-system design in the correctional system.

At the time of the program's design, the California prisons did not have enterprise-wide information technology connectivity and most clinic areas had no access to computers. The pilot prison sites therefore used the Chronic Disease Electronic Management System's Disease Registry for asthma and diabetes as a temporary solution. As additional staffers were needed to perform data-entry functions and there was limited physical clinic space, the adoption was challenging. Some pilot prisons employed a low-tech, manual, tickler-file approach of tracking inmate-patients with chronic conditions. To overcome the identified institutionalized barrier of using memorized manual processes and tools to perform work, the benefits of the new systems were discussed during learning collaborative sessions. Additionally, continuing education unit needs were identified and promised for delivery through new automated solutions. Communications were coordinated with other departments concerning other aspects of computerization to be implemented within the facilities, as some of these programs were generally well accepted by staff. Staff also saw self-management support as reducing workload, and this further contributed to their acceptance of the changes.

Confinement for high-security inmates is the primary obstacle to implementing self-management support. Custodial concerns and rules greatly inhibit the inmate-patients' ability to perform aspects of self-management of routine care. For example, some prisons prohibit the use of a medical device known as a drug delivery spacer because it is physically sharp, with the potential to be converted into a weapon.

Serum glucose monitoring and insulin injections are typically performed by a licensed vocational nurse instead of by the patient. Despite the custodial constraints on self-management, peer education was successfully utilized as a prominent strategy for promoting health education and compliance among inmate-patients at lower security levels. While clinical and custodial staff had an institutionalized aversion to dealing with inmates on an educational level, the cultural process by which inmates tend to mentor other inmates was strong. As described by an associate warden of health care who was involved in the pilot program, “they, particularly the older ones or the ones trying to get themselves together—like for parole or if they found God—are all about educating each other or at least other inmates who they talk to. I knew back then I wasn’t going to tell my officers to deal with making sure they got their information on how to control their disease, but I knew they’d be passing back and forth any sort of knowledge that worked for them that they got in triage or somewhere else.” The custodial supervisor quoted above was describing exactly the institutional obstacle of organizational process that had been discussed at the outset of the planning process. Custodial personnel were concerned with one issue: custody. Nothing else mattered. Altering this highly institutionalized way of thinking and pattern of organizational behavior was not the point of the CCM program, nor was it even considered feasible. A train-the-trainer approach that worked through another highly institutionalized process, inmate-to-inmate communication channels, was instead the preferred route to successful implementation of the model and improved treatment outcomes. In order to make best use of the inmate communication channels to enable self-management, and to better understand the obstacles to self-management within this setting, a rudimentary analysis was performed. Two primary factors were identified during this analysis: self-efficacy and health literacy. From discussion sessions with the mental health clinicians, it was known that self-efficacy was a limiting factor for improving health in this population. Self-efficacy is generally defined as a person’s perception that she or he has the intrinsic capability to attain a goal. The second limiting factor identified was health literacy, which, given the average educational level of seventh grade, was considered an impediment to self-management. Health literacy does not specify a particular level of understanding about one’s health at a given point in time; rather, it concerns the inmate-patient’s ability to understand and follow a clinician’s general instructions. A study of diabetic patients of various ethnic backgrounds found evidence that improvements in self-efficacy were associated with improvements in diabetes care outcomes. After controlling for ethnicity and health literacy levels, researchers found that increasing self-efficacy was related to patient self-monitoring. On the basis of this and related research, the implementation’s management team concluded that nothing needed to be done immediately to improve health literacy or self-efficacy. It was felt that, as treatment outcomes improved over time, these related concerns would be naturally addressed and could be revisited as the program evolved.
Patient education would be enhanced at the treatment encounter, and as treatment outcomes and the inmate-patient experience improved through better coordination of care, overall health literacy would improve through the natural inmate communication channels.

The community at large typically marginalizes the incarcerated. Due to the high rate of recidivism, parolees become part of the broader community when released and then return to the prison environment, potentially repeating this cycle several times. If not treated in the prisons, chronic conditions and communicable diseases eventually become public health problems. While under the custodial confines of the state, the incarcerated also access specialty services and acute care from community providers when health treatment considerations warrant such visits. Hence, care coordination, case management, and discharge planning are critical functions connecting inmate-patients with their communities. Clinical staff within CDCR perform these specialty care visits. Community resources for the newly paroled, however, were and continue to be scarce. Community-level integration efforts with CDCR were not viewed as a priority by agency staff. While such efforts could have been integrated into CCM planning, they were not, perhaps because of agency staff’s overwhelming workload. While the literature does not currently identify community-prison integration as a primary aspect of successful CCM implementation, it is here argued to be significant because of recidivism. Unfortunately, however, it was not taken up as an element of overriding concern. The prison health care reform effort in California did not address the linkage with the community through provider-network development and community partnership. However, preliminary steps were taken to establish public-private partnerships with the local public health agencies. Discharge planning for parolees was also deemed critical, as it helped ensure continuity of care and avoided burdening the emergency departments in the community.

This chapter has reviewed the process by which the new structure and processes proposed by the private-sector chronic care model were implemented within the public correctional setting. The challenges to implementation were met by carefully modifying the technical details of the program to fit the institutional context of the environment and the people who operate within it. The program level of analysis was introduced as a concept helpful for understanding the nature of departmental behavior, because this project required collaboration and motivation at the program level, not at the overall organizational-mission level. This is an important concept that will carry over to the next chapter, where management behavior is explored. The actions considered by managers are reviewed at the program level of analysis in order to better understand both the motivation of this employee level and how their actions can best be guided to enable program implementation success.

The ability of managers to transform organizations is an often-visited topic, debated by various disciplines within both academic and practitioner settings. Successful implementation of a program is a learning opportunity for scholars of many domains because numerous associations between the variables of performance and outcomes can be drawn. The literature specific to implementations within the public sector, particularly those studies examining the leaders expected to manage the change, provides many examples of failure. Peeking through the fog of these tales of derailment are the few stories of hope: implementations that provide evidence of success despite the odds.

CFA contains inactivated Mycobacterium tuberculosis in mineral oil and is unsuitable for human use.

Vaccination with CNS antigens can induce autoreactive T cell responses that home to sites of injury in the CNS and can inhibit neuronal degeneration in different models of neurological disease and injury, including spinal cord and head injury, Parkinson’s disease, Alzheimer’s disease, amyotrophic lateral sclerosis, glutamate toxicity, and glaucoma. It is thought that immunization activates CNS-reactive T cells that enter the CNS, secrete neurosupportive factors, and shift the phenotype of resident microglia to one that is more neurosupportive. The initial studies of this therapeutic strategy immunized animals with oligodendrocyte antigens such as myelin basic protein (MBP). Since autoimmunity to oligodendrocyte antigens can lead to a multiple sclerosis (MS)-like disease in experimental animals, subsequent studies of neuroprotective vaccines have focused on vaccinating with Copaxone®. Copaxone® is a mixture of synthetic polypeptides composed of four amino acids in random sequence dissolved in an aqueous solution. Frequent Copaxone® injection induces regulatory T cell responses that have partial cross-reactivity with myelin antigens, and this treatment has been approved as a therapy for relapsing-type MS. The vast majority of neuroprotective vaccine studies in animal models of neuropathological disorders have, however, administered myelin antigens or Copaxone® in complete Freund’s adjuvant (CFA), an adjuvant that is unsuited for human use. There have been few reports of adjuvant-free Copaxone® having beneficial effects in animal models of neuropathological disorders other than MS. Our previous studies of antigen-based vaccine therapies for inhibiting autoimmune disease have shown that the ability of a vaccine to induce protective T cell responses depends critically on which self-antigen is administered. This is because each self-antigen has a unique expression pattern and impact on T cell self-tolerance induction. Accordingly, self-antigens have different immunogenicities and should vary in their ability to induce neuroprotective T cell responses.

Random copolymers such as Copaxone® may not be optimal immunogens for inducing neuroprotective T cell responses, since only a small portion of the induced T cell response may be capable of cross-reacting with CNS antigens. Hence, further studies are needed to examine how the nature of the antigen used in neuroprotective vaccines affects the efficacy of the treatment. Current treatments for Parkinson’s disease temporarily ameliorate its symptoms but do not slow the progressive loss of dopaminergic neurons. Accordingly, new approaches to slow the degeneration of the nigrostriatal dopaminergic system are urgently needed. It is thought that oxidative stress, protein nitration, and activated microglia contribute to the loss of dopaminergic function in human PD. Additionally, there is a growing appreciation that CD8+ and CD4+ T cells significantly infiltrate the substantia nigra (SN) of patients with PD. All of these potentially pathogenic factors are elicited by treatment with the neurotoxin MPTP. The MPTP mouse model of PD has therefore been extensively used to assess neuroprotective strategies. Several studies have shown that vaccination with oligodendrocyte antigens or Copaxone® in CFA preserves dopaminergic neurons in MPTP-treated mice. These studies, however, did not determine whether the vaccine-induced immune responses limited the initial nigrostriatal dopamine system damage and/or promoted long-term neurorestoration. We began our studies by asking whether vaccination with tyrosine hydroxylase (TH), a neuronal protein involved in dopamine synthesis, could protect striatal dopaminergic neurons to a greater extent than Copaxone® in the MPTP model of PD in mice. Contrary to our expectations, we observed that immune stimulation by the CFA adjuvant, regardless of the emulsified antigen, appeared to be the major neuroprotective factor.

The BCG vaccine developed against childhood tuberculosis contains live attenuated Mycobacterium bovis, which is closely related to Mycobacterium tuberculosis, and has been administered safely to billions of individuals since the 1920s [29,30]. We describe the neuroprotective effects of BCG vaccination in the MPTP mouse model and discuss possible underlying mechanisms.

Our results suggest that general immune stimulation in the periphery may provide a new strategy to help slow disease progression in some neurodegenerative diseases.

Studies of neuroprotective vaccines have focused on using Copaxone®, since it induces protective immune responses that cross-react with myelin antigens and because it is in clinical use for treating MS. We wanted to test whether immunization with a dopaminergic neuron antigen might have a more beneficial effect in the MPTP mouse model of PD, since this should direct vaccine-induced T cells to the brain areas that were damaged by MPTP treatment and that slowly degenerate in human PD. We chose tyrosine hydroxylase as a test antigen because it is involved in dopamine synthesis and is predominantly expressed in striatal dopaminergic neurons. We isolated TH from recombinant E. coli inclusion bodies and purified it using affinity chromatography and preparative SDS-PAGE as described in Materials and Methods. Gel analysis of the purified TH is shown in Supplemental Figure S1. Since it takes 10–14 days for vaccine-induced immune responses to peak, and MPTP has an immediate toxic effect, we immunized mice with TH or Copaxone® in CFA 10 days before MPTP treatment. A group of control mice received only saline. The animals were sacrificed 21 days after the last MPTP treatment, a relatively long time point for such studies, because we wanted to test for potential neurorestorative effects of vaccination. As an initial read-out of the vaccine’s ability to preserve dopaminergic system integrity, we measured [3H]WIN 35,428 binding to the dopamine transporter (DAT) in mouse striatal homogenates. We found that mean DAT WIN binding levels were higher in striata from MPTP-treated mice that received CFA, regardless of whether they received CFA alone, TH/CFA, or Copaxone®/CFA, compared to unvaccinated MPTP-treated mice. Specifically, compared to unvaccinated MPTP-treated mice, the levels of striatal WIN binding were 43% higher in MPTP-treated mice that received CFA alone and 34% higher in mice that received Copaxone®/CFA. The level of striatal WIN binding was 17% higher in MPTP-treated mice that received TH/CFA, but this difference was not statistically significant. These results argue against our initial hypothesis that a neuronal self-antigen would provide a more efficacious neuroprotective vaccine. Rather, they suggest that peripheral immunostimulation by CFA was the major beneficial factor.

It is possible that immune responses elicited by CFA vaccination limited MPTP’s direct effects or promoted the subsequent restoration of dopaminergic neuron integrity. We therefore performed a more detailed study of the effects of CFA immunization on DAT levels 4 and 21 days after the last MPTP treatment. Groups of mice were vaccinated with CFA or saline, and 10 days later were given MPTP for 5 consecutive days.
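Throughout these comparisons, the reported percent differences are relative to the unvaccinated MPTP-treated group. Written formulaically (our notation, since only the percent differences and not the underlying group means are quoted here), each comparison has the form

\[
\Delta_{\%} \;=\; \frac{\bar{B}_{\mathrm{vac}} - \bar{B}_{\mathrm{MPTP}}}{\bar{B}_{\mathrm{MPTP}}} \times 100,
\]

where \(\bar{B}_{\mathrm{vac}}\) denotes the mean striatal [3H]WIN 35,428 binding in a vaccinated, MPTP-treated group and \(\bar{B}_{\mathrm{MPTP}}\) the mean binding in the unvaccinated MPTP-treated group; the 43% figure for CFA alone, for instance, corresponds to \(\bar{B}_{\mathrm{vac}} \approx 1.43\,\bar{B}_{\mathrm{MPTP}}\).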

Four days after the last MPTP treatment, the mean levels of striatal WIN binding were 18% higher in the CFA-treated group than in unvaccinated MPTP-treated mice, but this increase was not statistically significant. This suggests that CFA vaccination did not differentially affect the uptake, distribution, or metabolism of MPTP and that CFA-induced immune responses have little or no ability to limit the acute toxicity of MPTP. We also examined similarly treated mice 21 days after the last MPTP treatment. We found that the mean level of striatal DAT WIN binding was 29% higher in CFA-vaccinated mice that received MPTP, compared to unvaccinated MPTP-treated mice. These data again demonstrate the beneficial effects of CFA treatment in our model. Additionally, the increase in striatal WIN binding observed in vaccinated versus unvaccinated MPTP-treated mice from 4 to 21 days post-treatment suggests that CFA treatment promoted a greater rate of neurorestoration. Indeed, 21 days after MPTP treatment, only CFA-treated mice displayed a significant increase in striatal DAT WIN binding compared to levels 4 days post-treatment, suggestive of a neurorestorative effect.

CFA is unsuitable for human use, but its main immunogenic component, inactivated Mycobacterium tuberculosis, is closely related to the live attenuated Mycobacterium bovis used in the BCG vaccine against childhood tuberculosis. We hypothesized that the peripheral immune responses induced by BCG immunization might also be neuroprotective. C57BL/6 mice were immunized with BCG, and 10 days later they, along with a control group of unvaccinated mice, received MPTP. Twenty-one days later, their striatal WIN binding levels were measured. Mice vaccinated with BCG had significantly higher levels of WIN binding than MPTP controls. In addition, striata from mice vaccinated with BCG also had significantly higher dopamine (DA) content. Thus, BCG vaccination had a significant beneficial effect on both striatal DA content and DAT ligand binding levels.

Previous studies have shown that the number of microglia in the striatum increases rapidly after MPTP treatment and that these cells play an active role in MPTP-induced nigrostriatal system damage. Inflammatory-type microglia are considered detrimental to neuron survival after a neurotoxin insult, and blockade of microglial activation was neuroprotective in the MPTP mouse model of PD. To examine whether BCG vaccination also affected the microglial reaction to MPTP toxicity, we treated other groups of mice with BCG or saline prior to MPTP treatment and counted the number of microglia in their midbrains three days post-MPTP treatment. We found that the number of Iba1+ microglial cells was significantly greater in animals that received MPTP than in mice that received only saline, as also reported by others. In contrast, the Iba1+ cell number in the substantia nigra pars compacta (SNc) of BCG-vaccinated mice that received MPTP was similar to that in mice that received only saline. We also observed that the Iba1+ cells in the unvaccinated MPTP-treated mice had large cell bodies with only a few short, thick processes, a morphology associated with microglial activation. In contrast, the Iba1+ cells in mice that received BCG before MPTP treatment had small cell bodies with long, fine processes similar to those in saline-treated control mice, suggesting a resting state.
Thus, BCG vaccination prevented the MPTP-induced increase in the number of activated microglia in the SNc, suggesting that general immune stimulation in the periphery can limit CNS microglial responses to a neuronal insult. At 21 days post-MPTP, stereological analysis revealed that the number of TH+ cells in the SNc of animals that received BCG was on average 6% greater than that in mice that received only MPTP, although this difference was not statistically significant.

Vaccination with CNS antigens has beneficial effects in a number of different animal models of neurological disease and injury. This strategy is based on inducing CNS-reactive T cells that home to areas of damage and exert beneficial effects locally, in a process termed “protective autoimmunity”. Early studies of neuroprotective vaccines administered myelin antigens, which raised safety concerns because of their potential for inducing an MS-like disease. Subsequent studies used Copaxone®, which has some resemblance to MBP and, in its aqueous form, is approved for MS treatment. Almost all of these studies, however, used CFA as an adjuvant and often did not report on the effects of CFA alone. The studies that did examine CFA often reported that it had some beneficial effect, although of lower magnitude than the myelin antigen or Copaxone® in CFA. Contrary to our initial expectations, we found that immunization with a dopaminergic neuron antigen did not provide a greater beneficial effect. Rather, CFA itself appeared to be the main factor associated with higher levels of striatal WIN binding in vaccinated MPTP-treated mice. CFA treatment did not significantly alter the level of striatal DAT WIN binding 4 days after MPTP treatment, suggesting that CFA-induced immune responses cannot limit the acute toxicity of MPTP. Twenty-one days post-MPTP treatment, however, the average level of striatal DAT WIN binding in CFA-treated, MPTP-treated mice was significantly greater than that in unvaccinated MPTP-treated mice. The ratio of striatal WIN binding in vaccinated versus unvaccinated MPTP-treated mice increased from 4 to 21 days, suggesting that the CFA-induced responses promoted a greater rate of neurorestoration. CFA-treated mice, but not unvaccinated mice, had significantly higher striatal WIN binding 21 days versus 4 days after MPTP treatment, indicating a neurorestorative effect. Based on the neuroprotective effects of CFA, we turned to testing BCG vaccination. Potential advantages of BCG vaccination include not only its established safety record over many decades of worldwide use in humans, but also the fact that the attenuated BCG replicates slowly in the vaccinated individual, inducing immune responses over many months. Accordingly, BCG vaccination could provide a long-term source of neurosupportive immune responses. We observed that BCG vaccination significantly preserved striatal DAT WIN binding and DA content compared to unvaccinated MPTP-treated mice.
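In the same notation (an illustrative reading of the reported percentages rather than of the underlying group means), the ratio argument can be written as

\[
R(t) \;=\; \frac{\bar{B}_{\mathrm{CFA}}(t)}{\bar{B}_{\mathrm{MPTP}}(t)}, \qquad R(4\ \mathrm{days}) \approx 1.18 \;<\; R(21\ \mathrm{days}) \approx 1.29,
\]

so the relative advantage in striatal WIN binding of the CFA-vaccinated group over the unvaccinated group widened between 4 and 21 days after MPTP, which is what motivates the inference of a faster rate of neurorestoration in the vaccinated animals.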