The same distance measurement module is used in meta-learning and testing.

Five adaptation configurations are proposed using Mini-ImageNet, three PV settings, and AFD. As shown in Figure 4, S1 uses a general dataset in base-training and meta-learning, then uses the target dataset in testing, which is the adaptation from one domain to another, denoted in Formula 3. S2 uses a general dataset in base-training and the target dataset in meta-learning and testing, which is denoted in Formula 4. S3 uses the target dataset in all three stages, which is denoted in Formula 5. S4 uses the general dataset in base-training, a similar-target dataset in meta-learning, and the target dataset in testing, which is denoted in Formula 6. When AFD is used in testing, PV is considered a similar domain to the target domain, because both are associated with leaf diseases of plants. S5 uses the similar-target dataset in base-training and meta-learning, and the target-domain dataset in testing, which is denoted in Formula 7. S1, S4, and S5 are cross-domain, while S2 and S3 are intra-domain. According to the definitions of SD and TD, e2, e3, e5, e6, e8, and e9 are intra-domain experiments, because the data used in meta-learning and testing come from the same dataset. The results are shown in Table 4 and Figure 5A. In PV-Split-2, the accuracy of e5 is better than that of e4 and e6. In PV-Split-3, the accuracy of e8 is better than that of e7 and e9. What the two settings have in common is that the disease classes belong to different plants. For these diverse-species cases, S2 is better than S1 and S3, and the more species there are, the more obvious the superiority of S2 becomes. As listed, e6 comes close to e5, but e8 is much better than e9, which means that the general dataset provides better support when the testing data are more diverse. Broad prior knowledge is very useful for adapting to a diverse target. However, in PV-Split-1, e3, which uses S3, is the best because the testing data belong to the same plant, so the features of the testing data are concentrated and the general data used in base-training are not helpful.
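To keep the five strategies straight, the sketch below restates them as dataset assignments for the three stages. It is only an illustration: the dictionary keys and the "general"/"similar"/"target" labels are placeholders we introduce here, not identifiers from the experiments.

```python
# Hypothetical restatement of the five training strategies as dataset assignments.
# Assumed mapping for illustration: "general" = Mini-ImageNet, "similar" = PV,
# "target" = AFD; the actual experiment identifiers may differ.
STRATEGIES = {
    "S1": {"base_training": "general", "meta_learning": "general", "test": "target"},
    "S2": {"base_training": "general", "meta_learning": "target",  "test": "target"},
    "S3": {"base_training": "target",  "meta_learning": "target",  "test": "target"},
    "S4": {"base_training": "general", "meta_learning": "similar", "test": "target"},
    "S5": {"base_training": "similar", "meta_learning": "similar", "test": "target"},
}

def is_cross_domain(strategy: str) -> bool:
    """A strategy is cross-domain when meta-learning and testing use different datasets."""
    s = STRATEGIES[strategy]
    return s["meta_learning"] != s["test"]

if __name__ == "__main__":
    for name in STRATEGIES:
        kind = "cross-domain" if is_cross_domain(name) else "intra-domain"
        print(name, kind)  # S1, S4, S5 -> cross-domain; S2, S3 -> intra-domain
```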

Conversely, data belonging to the same dataset are easier to adapt to. In short, for intra-domain cases, if the testing classes are super-classes, S2 is the best strategy; if the testing classes are sub-classes, S3 is the best strategy. Experiments e1, e4, e7, e10, e11, and e12 are cross-domain cases. e1, e4, e7, and e10, which use S1, have the worst results in their respective data settings due to the large gap between the general domain and the target domain. Comparing e10, e11, and e12, e11 has the highest accuracy by using S4, as shown in Table 4 and Figure 5B. e12 is not as good as e11 because overly concentrated features extracted from monotonous samples lead to weaker adaptation. S4 is the best training strategy for cross-domain cases: it uses the general dataset in base-training to learn prior knowledge over a wide range, and uses the similar-target dataset in meta-learning to adapt to the new domain smoothly.

Ablation experiments e13–e22 are conducted to show the positive effects of the CMSFF module and the CA module, respectively. The results are listed in Table 5. Under four data configurations, PV-Setting-1, PV-Setting-2, PV-Setting-3, and AFD, we execute 8 experiments. The training settings are as follows: Mini-ImageNet is used in base-training, the backbone network is Resnet12, the distance metric is cosine similarity, and the training strategies are S2 and S4. Taking e2, e5, e8, and e11 as the baseline, the CMSFF module is added, and the results of e13, e15, e19, and e21 show the improvement brought by CMSFF. e14, e18, e20, and e22 indicate that CA further improves performance on top of CMSFF. e15 and e17 are used to compare the PMSFF module with the CMSFF module, and the results show that CMSFF outperforms PMSFF.

Sub-classes are defined as classes belonging to the same entry-level class. PV-Setting-1 and AFD are examples of sub-class classification. Sub-class classification is also known as fine-grained visual categorization, which aims to distinguish subordinate categories within entry-level categories. Because samples belonging to the same super-class are similar to each other, sub-class classification is a challenging problem. In Table 4, PV-Setting-1 is the group with the lowest accuracy among the three PV settings, as the samples all belong to tomato and are hard to distinguish.

The results of the AFD group are worse than those of PV-Setting-1, not only because of the sub-class problem but also because of the cross-domain and in-the-wild setting of the images. Even though the AFD images are already pre-processed, their backgrounds still differ from those of PV. The illumination conditions, resolution, and photography devices are also all different. Intuitively, the feature gap from SD to TD causes the accuracy to decline.

N-way and K-shot are the configurations of a task that indicate its difficulty. Given a fixed K, the accuracy decreases as N increases. The result of PV-Split-1 with N-way, 10-shot tasks is shown in Figure 5C: the accuracy drops from 85.39% to 64.35% as N increases from 3 to 10. All experiments listed in Table 4 are executed with a fixed 5-way setting, and regardless of the data configuration, they follow the same trend: accuracy increases with the number of shots. Accuracy increases sharply from 1-shot to 5-shot, tends to stabilize beyond 10 shots, and grows only slightly beyond 20 shots. From 1-shot to 50-shot, the increase in accuracy ranges from at least 10% to a maximum of 32%. The results show that accuracy increases with the number of shots and decreases with the number of ways: more ways mean higher complexity, and more shots mean more supporting information. In existing research, N is generally set to 5. In application scenarios, N is determined by the number of target categories and should not be limited to 5. For example, a plant may have more than five diseases, in which case the number of ways should equal the number of diseases that may occur in the specific scenario. N-way and K-shot have a trade-off relationship: when expanding to novel classes, we can increase the number of shots as compensation to maintain accuracy. For a new class to be identified, it is acceptable to collect 10 to 50 samples as its support set. However, the positive relationship between shots and accuracy is not linear; the gain from increasing K has a ceiling, and when K is larger than 30 the accuracy still grows, but very slowly.

In this work, we compared three distance metrics: dot product, cosine similarity, and Euclidean distance. The choice matters because, even though this module has no trainable parameters, the losses calculated from the distance measurements still affect the parameter updates during the iterations.
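To make the task configuration and the metric choice concrete, the following sketch builds one N-way, K-shot episode from pre-computed embeddings and scores a query against class prototypes with each of the three metrics. It is a minimal illustration under assumed data layouts and names, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_episode(features: dict, n_way: int = 5, k_shot: int = 10, n_query: int = 15):
    """Sample an N-way, K-shot episode from pre-computed embeddings.

    `features` maps a class name to an array of shape (num_images, dim);
    this layout is a hypothetical stand-in for the encoder's output.
    """
    classes = rng.choice(list(features), size=n_way, replace=False)
    support, query, labels = [], [], []
    for label, cls in enumerate(classes):
        idx = rng.permutation(len(features[cls]))[: k_shot + n_query]
        support.append(features[cls][idx[:k_shot]])
        query.append(features[cls][idx[k_shot:]])
        labels += [label] * n_query
    return np.stack(support), np.concatenate(query), np.array(labels)

def scores(query_vec, prototypes, metric="cosine"):
    """Similarity of one query embedding to each class prototype."""
    if metric == "dot":
        return prototypes @ query_vec
    if metric == "cosine":
        q = query_vec / np.linalg.norm(query_vec)
        p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
        return p @ q
    if metric == "euclidean":
        return -np.linalg.norm(prototypes - query_vec, axis=1)  # negate: larger = closer
    raise ValueError(metric)

# Toy run: 8 classes, 30 embeddings each, 512-dimensional.
feats = {f"disease_{i}": rng.normal(size=(30, 512)) for i in range(8)}
support, queries, labels = make_episode(feats)
prototypes = support.mean(axis=1)  # one mean embedding per class
preds = np.array([np.argmax(scores(q, prototypes, "cosine")) for q in queries])
print("episode accuracy:", float((preds == labels).mean()))
```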

An appropriate distance metric significantly improves the performance of classification, clustering, and related processes. Cosine similarity achieves the best performance, as shown in Table 6 and Figure 5D. The reason is that the vectors obtained from the encoder are high-dimensional. Cosine similarity has often been used to counteract the problems of Euclidean distance in high-dimensional space, and the normalization in cosine similarity also has a positive effect.

In this work, we compared different backbone networks: Convnet4, AlexNet, Resnet12, Resnet18, Resnet50, Resnet101, DenseNet, and MobileNet-V2. Convnet4 is the classical architecture used in FSL, which stacks four convolutional blocks. Different networks have different numbers of trainable parameters. There are more trainable parameters in base-training than in meta-learning because the base-training classifier is removed in meta-learning. The number of trainable parameters, learning rate, training time, and epochs in the two training stages are listed in Table 7. e25–e31 are conducted with the following configuration: Mini-ImageNet is used in base-training and PV-2-22 is used in meta-learning. The different numbers of iterations are due to the different convergence speeds in meta-learning. The performance of the backbone networks is listed in Table 8. Resnet12 and Resnet50 outperform the other networks, with Resnet12 being more efficient. In base-training and meta-learning, we use the validation data to test the accuracy of 5-way, 1-shot tasks, which is shown in Figure 6. The black numbers on the black lines are the best accuracy in base-training, and the black numbers on the red lines are the best accuracy in meta-learning; the improvements in accuracy achieved in meta-learning are marked in red. The figure shows that the model trained in the base-training stage already has some ability to identify classes from few shots, even without task-based training in meta-learning. However, in base-training the model converges by training on image-wise data, and the task-testing accuracy no longer increases, even though the model still has room to improve. On this basis, in meta-learning, training with task-wise data further raises the accuracy by around 20% to 30%.

In recent years, network architectures have grown deeper and deeper, and some researchers have asked whether we really need such deep networks. Our results show that a medium-sized network outperforms the other networks in this task. We summarize two reasons. First, in CNNs, simpler and more basic features are learned in shallower layers, while more abstract and complex features are learned in deeper layers: from shallower layers to deeper layers, the features transition from edges, lines, and colors, to textures and patterns, to complex shapes and even specific objects. For our specific task, even humans rely mainly on color, shape, and texture for disease identification. Hence, very deep networks may not be critically important.
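As a rough illustration of how such parameter counts are obtained, the sketch below counts trainable parameters for a few off-the-shelf torchvision backbones. Resnet12 and the exact classifier heads used in base-training are not part of torchvision, so these models are stand-ins rather than the networks of Table 7.

```python
import torch
from torchvision import models

def count_trainable(model: torch.nn.Module) -> int:
    """Number of trainable parameters, i.e. those updated by back-propagation."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Untrained torchvision backbones as stand-ins for the paper's backbones.
for name, ctor in [("resnet18", models.resnet18),
                   ("resnet50", models.resnet50),
                   ("mobilenet_v2", models.mobilenet_v2)]:
    net = ctor()  # default constructor: randomly initialized weights
    print(f"{name}: {count_trainable(net) / 1e6:.1f} M trainable parameters")
```

Deeper backbones report far larger counts, which connects directly to the data-limitation argument below.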

Second, FSL is a learning task with a limited data scale. A deeper network always has a large number of parameters to update, and under such data limitations an overly deep network may suffer from insufficient parameter updates in back-propagation because the back-propagation path is too long. In parameter updating, shallower networks are more flexible, while deeper networks are unwieldy. In short, deeper networks do not always outperform shallower networks; the size of the network should match the specific task and the available data resources.

To show the superiority of our method, we conducted several experiments to compare it with recent related research. Argüeso et al. used a Siamese Network and a Triplet Network, with PV as their experimental material. They used a different data split: 32 classes for training and the remaining six classes for testing. They reported results for three methods: transfer learning, the Siamese Network, and the Triplet Network, with Inception-V3 as the backbone network. To be comparable, we executed experiments with the same data setting as their work: Mini-ImageNet is used in base-training, 32 classes of PV are used in meta-learning, and the remaining six classes are used in testing. The results of e32–e34 are shown in Table 9. We also compared with Li and Chao, who proposed a semi-supervised FSL approach. Their baseline is a typical fine-tuning model; Single SS adds a semi-supervised step on top of the baseline, and Iterative SS adds one more semi-supervised step on top of Single SS. PV was also used as their experimental material and divided into three splits, each with 28 classes for training and the remaining 10 classes for testing. They also compared with Argüeso et al. We conducted experiments with our method using the same data settings as Li and Chao; the results of e35–e43 are shown in Table 9. All the comparison results are shown in Figure 7. Although the data settings of the two references differ from ours, the results indicate that our method outperforms the existing works under all data settings, which suggests that our method is superior and robust.

A method that learns from few samples is very promising for plant disease recognition, which has a wide range of potential application scenarios because of the data cost it saves. When the range of application is expanded, a well-established FSL model can generalize easily to novel species or diseases without retraining or providing large-scale training data. However, some existing limitations of FSL itself and of the specific application areas need to be considered.

Non-significant interactions were removed and a new reduced model was produced when necessary.

To our knowledge, however, gibbons have never been tested with these quantities. Nevertheless, in our scenario gibbons did not necessarily need to discern between these two quantities because the two amounts were never presented at the same time. Thus, although it is possible that this difference could have played a role in their performance, it is more parsimonious to think that their motivation to pull in direct food test trials was due to the high probability of eating the extra reward while pulling the handle. Future studies may use different reward constellations varying in quantity and/or quality to continue shedding light on gibbon socio-cognitive performance. Finally, given the quasi-experimental nature of our task, we did not always capture the social dilemma scenario we envisioned. Future tasks should implement designs in which cooperative acts are clearly costly for those individuals willing to volunteer. In addition, given our restricted sample size, we could not test species differences or the presence of individual biases. The present study advances our understanding of how tolerance may allow primates to solve potential conflict over food rewards. In our study, gibbons exhibit high degrees of social tolerance: passive partners tolerate actors obtaining higher benefits in a majority of trials, while actors often actively forgo opportunities to maximize rewards. Relatedly, gibbons engaged in cofeeding events relatively often. One possibility is that such a high degree of social tolerance towards conspecifics results from gibbons' unique pair-living social system compared to other great apes, although future studies should inspect this relationship in more detail. Overall, the inclusion of gibbons in studies exploring the nature of primate socio-cognitive abilities is critical.

It will help to elucidate the nature of our prosocial motivations and their relationship to specific socio-ecological pressures, and ultimately to understand how they have evolved since the last common ancestor we share with all living apes.

One experimenter interacted with the apes during a test session while a second experimenter recorded the session and scored the subjects' behavior. Each experimenter tested half of the dyads. We used high-quality rewards that would be easily visible to the subjects. Blueberries were not part of their daily diet but were sometimes presented as enrichment in puzzle feeders and were highly desirable for all gibbons housed at the GCC. The apparatus was composed of a plastic folding table with a square wooden plank clamped to the top. At one end of the plank a transparent plastic bin was taped so that it could be lifted up or hang down. The bin, at rest, would hang down and remain unmoved on top of a wooden ramp. A hole big enough to fit blueberries was drilled in the back side of the bin so that, when the bin was at rest on the ramp, the experimenter could place five blueberries into it. A thin purple rope was tied to the far end of the plastic bin and was routed back to the opposite end of the wooden plank. This was set up so that pulling on the purple rope would reliably lift the plastic bin, so blueberries could fall down the wooden ramp and be easily accessible for subjects to obtain. The other end of the rope was attached to the mesh of the enclosure. To allow reaching and pulling the rope, we attached a small, handheld, opaque white handle. At the right tension, pulling on the handle would reliably lift the plastic bin. The handle could contain a single blueberry inside, depending on the condition presented. We used two handles of the same dimensions and appearance to avoid contamination with blueberry leftovers after a trial. The table with the wooden plank was set up at a distance so that it could not be grabbed by subjects, and the ramp was placed underneath so that blueberries would roll down and land in front of the enclosure gate.

E2 would then distract the two subjects to an opposite or adjacent side of the subjects' enclosure with a handful of cereal pieces while E1 tied the end of the purple rope with the handle onto the mesh gate of the enclosure, roughly at experimenter height, approximately 2 m to the right or left. The distance and location of the rope were kept constant for all trials of each dyad; however, because the enclosures differed in layout, the rope would go to the most convenient side. This way, we ensured that the rope had the proper tension to be pulled by gibbons and lift the plastic bin, and that it was distant enough from the ramp that a subject could not easily pull on the rope and obtain food from the ramp at the same time.

Individual solo pre-testing of the mechanism of the apparatus was not possible because separation of the dyads was prohibited. However, gibbons had had experience with ropes before as part of their enrichment, and several individuals had participated in pilot sessions where they had to pull on different ropes and handles. Three conditions were tested: a direct food test condition, an indirect food test condition, and a no-food control condition. In the direct food test condition, the following procedure was performed. E1 would place five blueberries in the plastic bin on the apparatus. To gain the attention of the subjects, E1 would call the subjects' names and show the food if they were not already focused on the food/experimenter. Once both subjects had observed the five blueberries placed in the plastic bin, E1 would squeeze a single blueberry onto the top of the handle so that the blueberry would be clearly visible. The rope and handle would be set up so that the handle was just far enough from the enclosure that subjects needed to pull on the rope to obtain access to the handle and blueberry. Consequently, pulling the rope would also lift the plastic bin and drop five blueberries down the ramp, accessible to subjects. The experimenter would also call the names of the subjects when placing the single blueberry in the handle. A choice was recorded when one of the subjects pulled the rope. If no subject pulled the rope within 90 s, the trial ended and was recorded as no pull. If an experimenter error was made, up to three repetitions of the trial would be completed.

Environmental conditions such as rain would also end test sessions, which were then continued the next day. In the indirect food test condition, there was no single blueberry placed in the handle. To make the conditions comparable, we followed the same procedure as in the direct food test condition; instead of inserting a blueberry inside the handle, we approached it with the fist closed and then touched it with the fingers. In the no-food control condition, no blueberries were used in the trial. In order to control for time and actions, we used the same procedure of calling the subjects and touching both the box and the handle.

Two cameras on tripods recorded footage concurrently. One was placed to the side of the experimenter in order to capture a wide view of the trials, specifically to show the positions of the subjects, their choices, and whether they obtained blueberries. The other was placed close to the ramp to accurately count the quantity of blueberries obtained by each subject. For all trials we coded the act of pulling or not pulling and the ID of the puller and non-puller. We also coded the number of blueberries each subject ate and whether the actor ate the blueberry from the handle. Next, we coded whether a passive subject was present in front of the ramp, or within one meter of it, at the moment the plastic bin was lifted and at the moment the actor arrived at the release location. Additionally, we coded instances of cofeeding and displacements. Cofeeding was coded when individuals fed within a distance of 1 m of one another. Displacements occurred when an individual left her spot due to the partner's arrival. Additionally, we calculated the latency to pull, from the start of the trial until the moment of release.

All analyses were conducted in R. We used Generalized Linear Mixed Models to investigate gibbons' choices. Covariates were z-transformed. Every full model was compared to a null model excluding the test variables. We controlled for session and trial number in all our models. We controlled for the length of the dyad in models 1 to 3, given the larger dataset compared to models 4 to 6. In addition, in model 3 we included individuals' age and sex as control predictors. When the comparison between the full and the null model was significant, we further investigated the significance of the test variables and/or their interactions. We used the "drop1" function of the lme4 package to test the significance of each variable, including interactions between test predictors. A likelihood ratio test with significance set at p < 0.05 was used to compare models and to test the significance of the individual fixed effects. We ruled out collinearity by checking Variance Inflation Factors (VIF). All VIF values were close to 1 except for age and length of dyad in model 3; these two variables were slightly collinear. For every model we assessed its stability by comparing the estimates derived from a model based on all data with those obtained from models with the levels of the random effects excluded one at a time. All models were stable. We also fitted a mixed-effects Cox proportional hazards model to analyze gibbons' latencies to act, using the "coxme" function from the coxme package. The results of Model 2 are reported as hazard ratios (HR).

An HR greater than one indicates an increased likelihood of acting, and an HR smaller than one indicates a decreased hazard of acting. In addition, to obtain the p-values for the individual fixed effects, we conducted likelihood-ratio tests.

The human brain requires a constant movement of blood through a network of cerebral arteries and veins to deliver oxygen, glucose, and other essential nutrients, but also to remove carbon dioxide, lactic acid, and other metabolic products. CBF in adults represents approximately 15% of total cardiac output, while the brain accounts for only 2% of total body weight. Regional blood flow, which is tightly regulated to meet the metabolic demands of the brain, varies significantly between gray and white matter and among different gray matter regions. After adolescence, cerebral blood flow stays relatively stable for a long period, after which it steadily declines. In fact, in middle-aged and elderly adults, aging accounts for a decrease of approximately 0.45% to 0.50% in global CBF per year. It has likewise been shown that perfusion decreases with age in both cortical regions of the cerebral cortex, especially the frontal, temporal, and parietal lobes, and in subcortical regions. Aging is also a main risk factor for cognitive impairment and dementia. In elderly subjects, regional CBF in the superior temporal gyrus was positively associated with global cognitive performance. Furthermore, lifestyle factors increased global and regional CBF, and these lifestyle-induced changes in cerebral perfusion may improve cognitive functioning. These relationships are schematically depicted in Figure 1. Differences in CBF between elderly subjects are related to vascular risk factors as well as risk factors for dementia. Bangen and colleagues have observed that the presence of multiple vascular risk factors may add to the already diminished cerebral perfusion that results from aging. It has also been shown that mean gray matter CBF was 15% lower in late middle-aged subjects suffering from metabolic syndrome than in age-matched healthy subjects, and was associated with lower cognitive function. Moreover, in the elderly, reduced cerebral perfusion correlated with the volume of white matter hyperintensities and with cortical microbleeds, which are established risk factors for dementia. Neurovascular coupling is another critical component that affects CBF and, consequently, cognitive function. This phenomenon refers to the close temporal and regional relationship between neural activity, elicited for example by a cognitive task, and subsequent changes detected in cerebral perfusion. In particular, aging impairs the mechanisms that match oxygen and nutrient delivery with the increased metabolic demands of active brain regions.

Blood pressure values did not show any significant changes.

It is possible there are rare but highly efficient pollinators that were rarely observed during the sampling period, or that were lumped together with a more frequently observed morphotype. An alternative explanation for the lack of an association between floral visitation and seed set is that higher plant diversity in urban and agricultural areas may decrease pollinator efficiency. Previous research has shown that invasive alien plants can have a negative effect on native plant communities by acting as attractors for pollinators, or by decreasing pollinator efficiency by providing a wider range of resources for pollinators to visit, with the consequence that visitors transfer pollen from non-conspecifics, potentially clogging stigmas and reducing pollination success. In this case, our target plant, yellow starthistle, is itself considered an invasive alien plant, but its presence in a novel, diverse community could lead to a similar effect on the frequency and quality of the pollination services it receives. At sites where there are many other potential plants to visit, and where the accompanying decrease in floral fidelity leads to diverse pollen loads, one predicts decreased pollinator efficiency. Abundant sources of exotic plant pollen could occur in areas where there is a greater diversity of nearby plants for pollinators to visit. This explanation might account for the observation that shield-tipped small dark bees were negatively correlated with seed set. We selected yellow starthistle as the target plant for this study because of its ubiquitous distribution, reliance on pollination, and attraction of a wide set of visitors; it is also a highly invasive and undesirable plant.

Previous research on yellow starthistle has found that its invasion can be facilitated by other non-native pollinator species such as the honey bee, Apis mellifera, and the starthistle bee, Megachile apicalis, which is included in the medium striped hairy belly bee morphotype. However, the abundance of bees in both of these two morphotypes was most closely associated with agricultural areas, which did not have the highest rates of seed set, as would be predicted by visitation alone. Our results indicate clearly that bee visitation in human-altered landscapes can be higher than that in comparable natural areas, especially towards the end of the flowering season when there are few resources available in natural landscapes. Because the response of bee visitors to land use change depends on species-specific requirements, and these pollinators also have variable effects on plants, understanding the effect of land use change on pollination services requires knowledge not only of which pollinator groups shift to human-altered landscapes, but also of the rate of pollination that those groups provide to the plant species in those landscapes. Future research will benefit from looking at a wider range of plants with a different range of target pollinators, and at plants that flower earlier in the year, to better tease apart these hypotheses. If the patterns of bee visitation and seed set that we observed are indeed consistent across other plant species, the novel plant communities created in these human-altered landscapes and the generalist bee species that are favored in such landscapes will lead to a reduction in overall pollination services.

There is increasing evidence that alterations in the energy metabolism of cyst-lining cells, especially increased glucose dependency and defects in fatty acid oxidation, may underlie the pathogenesis of autosomal dominant polycystic kidney disease.

Dietary interventions have been shown to be surprisingly effective in several polycystic kidney disease animal models, where they lead to a strong decrease in cyst growth. The positive effects of mild food reduction were most likely mediated by ketosis, as a ketogenic state, regardless of whether it was induced by a time-restricted diet, a ketogenic diet, or a short-term water fast, resulted in significantly inhibited cyst growth, fibrosis, and PKD-associated signaling pathways in different animal models, even when the state of ketosis was induced only for a short period of time. Ketogenic dietary interventions (KDIs) are high-fat, low-carbohydrate, and moderate-protein diets that mimic a fasting state. KDIs have been used as an effective tool for the treatment of obesity and childhood epilepsy and could potentially be beneficial in several other diseases. A recent retrospective case series indicated, for the first time, the safety, feasibility, and positive effects of KDIs in patients with ADPKD. Most recently, the results of a 1-year behavioral weight loss study in obese ADPKD patients supported the therapeutic feasibility of weight loss interventions and hinted at possible positive effects such as a slowing of kidney growth. However, no trials investigating the effects of ketosis per se in patients with ADPKD have been performed. Therefore, this proof-of-principle trial aimed to provide prospectively collected data on the short-term effects of KDIs in ADPKD patients.

This trial was designed and conducted as a non-randomized, non-blinded, single-center study at the University Hospital Cologne. Patients were recruited from the German ADPKD cohort or through the patients' advocacy organization "Familiäre Zystennieren e.V." If assessed as eligible at the screening visit based on medical history, physical examination, and laboratory parameters, participants were enrolled after written informed consent had been obtained.

The study was conducted in accordance with the Declaration of Helsinki and the good clinical practice guidelines of the International Conference on Harmonization. The study protocol will be provided upon request.

Four study visits were conducted as part of the study. Each visit included blood and urine tests, bio-impedance measurements, measurements of anthropometric parameters, measurements of ketosis in finger-stick blood, urine, and breath, and an abdominal MRI for kidney and liver volumetry. Between V1 and V2, patients continued to eat according to their usual dietary habits, i.e. a high-CHO diet, for a minimum of 13 and a maximum of 28 days. After V2, the KDI was started within 7 days and finished by V3. Patients could choose whether they wanted to achieve the ketogenic metabolic state by WF for 3 days or by a KD for 14 days. V3 was conducted within a maximum of 72 h of termination of the diet. Between V3 and V4, participants switched back to their standard ad libitum diet for a minimum of 20 and a maximum of 43 days. During I1 and I3, patients measured the extent of ketosis in breath, urine, and fingertip blood at least three times, while in I2, during the KDIs, ketosis measurements were performed twice a day. Patients were provided with a diet diary for daily documentation of hunger, well-being, and any additional foods consumed, as well as the results of the daily ketosis measurements. After completion of the KDIs, a dedicated questionnaire was used to assess the feasibility and tolerability of the KDIs.

Patients in the WF study arm limited oral intake to an ad libitum amount of water and a low-salt broth once a day for a period of 3 days; in the KD study arm, patients consumed a very high-fat, low-CHO diet for a period of 14 days according to individual dietary plans. The KD was based on a fat:protein:CHO ratio of 10:4:1, and calorie requirements were calculated individually for each patient with the Mifflin-St. Jeor equation. Ten percent of the fat calories were provided as medium-chain triglycerides. Patients were supplied with the required food items. Patients in the KD group were advised to consume at least 20 kcal/kg body weight, but preferably 25 kcal/kg body weight, daily (see the sketch below for an illustrative calculation). In addition, regular phone calls ascertained patients' well-being and monitored adherence to the protocol. Patients in the KD arm were instructed to refrain from additional intake of non-ketogenic foods during the intervention. Both groups of patients received ketogenic snacks in case they desired food between scheduled meals or experienced undesired symptoms during WF, respectively. Patients were advised to eat blueberries in case of blood beta-hydroxybutyrate levels >3.5 mmol/L or breath acetone levels >40 p.p.m. and/or malaise or symptoms of ketosis.

Baseline demographic and clinical data, including comorbidities and medication, were assessed at the screening visit. Anthropometric data were recorded at all study visits. Vital parameters were assessed at each study visit. Acetone concentrations were measured using a portable breath analyzer. BHB measurements in finger-stick blood were performed with a portable ketone meter. Urine ketones were measured using urine dipsticks. The MRIs were performed by the in-house Department of Radiology on a 1.5-T system.
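As an illustration of the calorie planning described above, the sketch below combines the Mifflin-St. Jeor estimate with the 10:4:1 fat:protein:CHO ratio. The patient values are invented, and the assumption that the ratio is defined by weight in grams is ours, not stated in the protocol.

```python
def mifflin_st_jeor_bmr(weight_kg: float, height_cm: float, age_yr: float, sex: str) -> float:
    """Resting energy expenditure (kcal/day) from the Mifflin-St. Jeor equation."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age_yr
    return base + (5 if sex == "male" else -161)

def ketogenic_macros(target_kcal: float, ratio=(10, 4, 1), kcal_per_g=(9, 4, 4)):
    """Daily fat/protein/CHO grams for an energy target, assuming the 10:4:1
    ratio is by weight (an assumption; the protocol may define it differently)."""
    kcal_per_unit = sum(r * k for r, k in zip(ratio, kcal_per_g))  # 110 kcal per 15 g "unit"
    units = target_kcal / kcal_per_unit
    return tuple(round(units * r, 1) for r in ratio)

# Hypothetical example: 80 kg, 175 cm, 45-year-old male, fed at 25 kcal/kg body weight.
bmr = mifflin_st_jeor_bmr(80, 175, 45, "male")
target_kcal = 25 * 80
fat_g, protein_g, cho_g = ketogenic_macros(target_kcal)
print(round(bmr), target_kcal, fat_g, protein_g, cho_g)
```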

For assessment of total kidney volume (TKV) and total liver volume (TLV), the kidney and liver boundaries of each patient were manually traced in axial T2 SPIR scans by a radiologist using Intellispace Discovery. The renal hilum was excluded from the kidney outline, while the gallbladder and the main portal vein were excluded from the liver outline. A second reader additionally segmented the kidneys and liver of each patient to estimate inter-reader variability in TKV and TLV. Both readers were blinded to patient information and previous tracing results. To evaluate the cyst fraction in both organs, the T2 map was overlaid with each volume. Using Intellispace Discovery, each voxel within each volume was classified as either cystic or non-cystic based on a cut-off value of 250 ms. This cut-off sufficiently differentiated tissue by its water content in a previous study and allows the cyst fraction to be approximated. The T1 mDIXON sequence, which is included in our research protocol, facilitates the assessment of the fat content of liver tissue using chemical shift imaging; this allowed us to check for potential fatty liver disease such as non-alcoholic fatty liver disease. Additionally, morphologic sequences were used to look for potential fibrotic or cirrhotic changes of the liver tissue. Clinical chemistry measurements on blood and urine samples were performed by the in-house central laboratory. Body composition was evaluated using a Tanita BC 418 MA scale.

RESET-PKD was designed as a pilot trial analyzing pre-defined exploratory endpoints on the feasibility and safety of short-term KDIs as well as their impact on TKV and TLV. Regarding TKV, we focused on the relative change in TKV between V2 and V3. This analysis was complemented by the absolute and relative differences of TKV/height-adjusted TKV between study visits; the same time points were compared regarding TLV, anthropometric parameters, and blood pressure. To allow the detection of potential safety signals, a panel of blood and urine values including kidney function, lipids, and liver values was examined, and feelings of hunger, discomfort, and problems with general well-being were analyzed from the diet diary. The biochemical efficacy of the KDIs was assessed using the absolute and relative differences in self-measurements of the ketosis parameters. Most investigators agree that normal values for BHB on standard Western diets are 0.1–0.5 mmol/L. A non-linear relationship between acetone in breath and BHB in plasma has been described in adults. Therefore, we defined the cutoffs for the metabolic endpoints as follows: for the KD group, an acetone level ≥10 p.p.m. or a BHB level ≥0.8 mmol/L in ≥75% of home measurements; for the WF group, an acetone level ≥10 p.p.m. or a BHB level ≥0.8 mmol/L in ≥75% of home measurements, or alternatively, a ketogenic state in at least one measurement by either method on 2 of the 3 days of the WF. Feasibility was assessed using a questionnaire that contained 17 questions directly related to the KDI. Patients could rate each question on a scale from −4 to +4, with −4 representing low and +4 high feasibility. An average score of ≥0 was required to consider the KDI feasible. Both the metabolic endpoint and the feasibility endpoint had to be reached to meet the combined feasibility endpoint. The self-reported feeling of hunger was recorded regularly in the study diary and linked to numbers from 1 to 4.
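The voxel-wise classification behind the cyst fraction can be expressed in a few lines. The following NumPy sketch applies the 250 ms T2 cut-off to a masked volume; the array shapes and values are chosen purely for illustration and are not taken from the actual Intellispace workflow.

```python
import numpy as np

def cyst_fraction(t2_map_ms: np.ndarray, organ_mask: np.ndarray, cutoff_ms: float = 250.0) -> float:
    """Fraction of organ voxels classified as cystic.

    Voxels inside `organ_mask` whose T2 relaxation time exceeds `cutoff_ms`
    are treated as cystic, following the 250 ms threshold described above.
    """
    organ_voxels = t2_map_ms[organ_mask]
    if organ_voxels.size == 0:
        return 0.0
    return float((organ_voxels > cutoff_ms).mean())

# Toy example: a random T2 map and a mask covering a central block of voxels.
rng = np.random.default_rng(1)
t2 = rng.uniform(50, 600, size=(64, 64, 32))
mask = np.zeros_like(t2, dtype=bool)
mask[16:48, 16:48, 8:24] = True
print(f"cyst fraction: {cyst_fraction(t2, mask):.2%}")
```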
Stool, urine, and blood samples were frozen for future analyses after patients had given specific informed consent for bio-banking.

Both KDIs induced a significant weight loss, to which loss of body water and loss of fat mass contributed equally. However, two patients in the KD group reported lower blood pressure values in their home measurements; in one of those patients, antihypertensive medication had to be paused upon the start of the ketogenic diet due to orthostasis. All anthropometric parameters are provided in Supplementary data, Table S4.

RESET-PKD is the first prospective interventional trial to combine exploratory analyses of the metabolic efficacy, feasibility, and efficacy of short-term controlled ketogenic metabolism in patients with ADPKD. Following the promising data on the beneficial effects of ketosis in PKD animal models, the present study was designed to include the patients who could benefit most, i.e. rapid progressors.

Gene-by-diet interactions have the potential to exert a tremendous impact on human health.

Significant strides in science and technology made over the past decade will be instrumental in our efforts to understand precision nutrition. In this next section, we discuss contributors to individual variability and precision nutrition, including the roles of metabolism, genotypes, and the gut microbiome.

Variability in responses to nutritional interventions in healthy humans is well known and suggests that individuals may benefit from more personalized dietary regimens to improve or maintain health. Although the physiologic/genetic underpinnings of these phenotypes and their responsiveness to changes in nutritional status largely remain to be explored, tools to efficiently identify nutritionally responsive phenotypes are emerging. Variable responses to dietary omega-3 FAs are among the better characterized nutritionally responsive phenotypes, and this research highlights the complexity and nuance needed to fully appreciate physiologically relevant responses. For instance, in a secondary analysis of a randomized, double-blind, placebo-controlled trial of short-term fish oil supplementation in 83 individuals of African ancestry, a two-thirds to one-third split between 'high responders' and 'low responders' was reported with respect to the intervention's effect on red blood cell long-chain ω-3 FA enrichment, reduction in plasma triglyceride concentrations, and stimulated monocyte inflammatory responses. Although an individual's adiposity, baseline ω-3 FA status, consumed dose, and the ingested ω-3 form contribute to the ω-3 response, this variance may also be influenced by an individual's background diet. In particular, the consumption of less than one-third of a cup of dark-green and orange vegetables and legumes, and the health effects of their accompanying nutrients, was associated with the low response in a secondary analysis of the aforementioned intervention study.

Another experimental approach to examining inter-individual variability in response to a nutritional intervention is the mixed meal/macronutrient challenge test. Analogous to oral glucose tolerance tests, in which the metabolic response to a standardized carbohydrate challenge is investigated, a mixed macronutrient challenge can be used to probe the metabolic response to a complex meal. Using standard clinical measurements, such a challenge can be used to simultaneously assess insulin sensitivity and fat tolerance. However, by expanding the experimental end points to include both physiologic and broad metabolic responses using modern metabolomic technologies, the potential for phenotypic profiling of an individual's response to such a standardized meal is extraordinary. For instance, an individual's metabolic flexibility, their metabolic health, and the potential of their response to interventions can all be assessed. Another powerful application of metabolomic phenotyping in nutritional research is its application to twin studies. By employing sets of both dizygotic and monozygotic twins, these approaches have demonstrated the power to segregate and quantify the genetic and environmental factors driving covariance between physiologic and metabolic traits and health outcomes. In summary, characterizing the range and nature of both fasting and postprandial nutritional phenotypes based on differences in metabolism in healthy populations offers novel approaches to identify individuals who may benefit from more individualized nutritional guidance to improve and/or maintain their health. Moreover, the tools to begin this task exist today. The application of these tools in well-designed clinical trials will be critical to effectively demonstrate their value in aligning nutritional guidance and/or interventions with metabolic phenotypes.

Throughout history, as humans moved across the globe they adapted evolutionarily to their local environments, including to their changing diets. However, transitions to the modern Western diet over the last 75 y have resulted in maladaptations leading to a high prevalence of chronic diseases, including obesity, cancer, and cardiometabolic diseases, that disproportionately affect certain populations and create ethnic health disparities. For example, the adoption of the Western diet brought about a dramatic increase in the intake of PUFAs, specifically dietary ω-6 PUFAs. This shift was initiated by an American Heart Association recommendation in 1961 to replace dietary SFAs with PUFAs. Evidence supporting the recommendation included randomized controlled trials and cohort studies conducted in non-Hispanic White populations showing benefits of increasing ω-6 PUFAs on levels of serum lipids and lipoproteins. It was also assumed that only a small proportion of these ω-6 PUFAs could be converted to proinflammatory/prothrombotic long-chain ω-6 PUFAs, such as arachidonic acid, so that adding 5%–10% of energy as ω-6 PUFAs would have limited detrimental inflammatory/thrombotic effects due to saturation of the biosynthetic pathway. However, studies began to emerge a decade ago showing that genetic ancestry plays a critical role in determining the metabolic capacity of the long-chain ω-6 PUFA biosynthetic pathway. Specifically, several studies revealed that populations with African ancestry have much higher frequencies of genetic variants in the FA desaturase (FADS) cluster on chromosome 11 that markedly enhance the conversion of dietary ω-6 PUFAs to the long-chain ω-6 PUFA arachidonic acid and to proinflammatory/prothrombotic oxylipin and endocannabinoid metabolites.

This underlying pathogenetic mechanism potentially results in a higher risk of chronic disease in those of African ancestry compared with those of European ancestry. With few exceptions, ω-6 long-chain PUFAs, such as arachidonic acid, are proinflammatory/prothrombotic, and ω-3 long-chain PUFAs, such as EPA and DHA, are anti-inflammatory/antithrombotic. Given that a much higher proportion of populations of African ancestry has the capacity to form higher levels of arachidonic acid and its metabolites from dietary ω-6 PUFAs, it might be expected that ω-3 long-chain PUFAs would have a greater capacity to balance the impact of high dietary ω-6 PUFAs in these populations. Among the clinical trials carried out to date, the VITamin D and omegA-3 TriaL (VITAL) is of particular interest when considering African ancestry, as it included n = 5106 African-American participants out of n = 25,871 total participants. Overall, supplementation with marine ω-3 long-chain PUFAs failed to prevent CVD or cancer events among healthy middle-aged men and women over 5 y of follow-up. Although ω-3 long-chain PUFA supplementation failed to prevent CVD in the full-group analysis, in a follow-up subgroup analysis, Manson et al. demonstrated robust risk reductions in AfAm participants. Similarly, a subgroup reanalysis of the VITAL study data based on the FADS framework compared the Kaplan–Meier curves for the MI end point, faceted by fish consumption and the number of CVD risk factors, for both European American and AfAm participants. This reanalysis revealed a marked ~80% reduction in MI associated with ω-3 long-chain PUFA supplementation in AfAm participants with baseline CVD risk who did not consume fish. By contrast, and in accord with our FADS framework and the mixed distribution of FADS haplotypes in European American populations, European American participants failed to benefit similarly, regardless of baseline fish intake or baseline CVD risk. Collectively, these data suggest that AfAm populations may benefit from ω-3 long-chain PUFA supplementation, and that both ancestry and FADS variability should be factored into future clinical trial designs. Such heterogeneity in the FADS cluster and other genes should inform the design of future clinical trials and may offer the opportunity to personalize recommendations of long-chain ω-3 PUFA supplementation to individuals of different ethnicities.

The human gut responds rapidly to significant changes in the diet, and long-term dietary habits can exert strong effects. Dietary components have a long history of influence on gut health and the maintenance of high gut microbial diversity. However, human gut microbiomes are highly diverse and variable among individuals. Moreover, the influence of specific dietary components on gut microbiome community structure and microbial metabolic function may vary among individual microbiomes. Thus, diet–microbiome interactions are highly individual and idiosyncratic, especially over one's lifetime. Myriad dietary compounds are known to modulate human gut microbiome structure and function, with impacts on disease; among these, dietary fibers were the first established for their protective effects against chronic disease at population scales, effects which are widely believed to be largely mediated by the microbiome. Although dietary fiber intake is widely associated with positive health outcomes, persistent public health and nutrition messaging in many such nations has made only modest gains in increasing consumption.
Thus, dietary fibers remain, to date, the only microbiome-focused nutrient with established dietary guidelines for population-scale health. If populations are recalcitrant to increasing their overall fiber intake, dietary fiber-based strategies to improve health must seek to identify the fiber types most active in stimulating the appropriate microbiome responses to benefit host physiology. This is not trivial in that 1) as a category, "fiber" simply means the non-human-digestible plant components and includes a vast array of molecular structures, both soluble and insoluble; and 2) the mechanisms by which these divergent structures alter the structure and function of gut microbiota, thereby influencing health, are poorly understood. Coupled with the fact that many fiber intervention studies do not specify or characterize the fiber structures employed, it is very challenging to discern which structural variables influence the responses of gut microbiota, both in vitro and in vivo.

Consequently, the ways in which fiber structures differentially influence gut ecology and metabolic function suggest that specific fibers can be targeted to desirable microbial consumers, thereby potentially benefiting health at much smaller daily doses and at population scales. Fiber polysaccharide structures contain a dizzying array of linkages among glycosyl residues that, in turn, generate strong differences in the higher-order structure of these substrates. Because microbial carbohydrate-active enzymes are highly specific to the bonds they hydrolyze, differences in the genome content or regulation of these carbohydrate-active enzymes can drive a division of labor in polysaccharide degradation and consumption. The Lindemann laboratory at Purdue University has demonstrated that 1) metabolism of fibers is emergent across individuals, but structural differences select for similar microbiota across donors; and 2) polysaccharides can structure communities and maintain diversity against high dilution pressure. These data strongly suggest that fiber fine structures are highly selective for consortia of fermenting microbes and sustain them in diverse communities, potentially serving as a basis for targeting these microbiota in the midst of complex and idiosyncratic human gut communities. The hypothesis is that there are general ecologic strategies that microbes use to gain advantage with respect to fiber fermentation and possible downstream health benefits. It is believed that these strategies are genetically encoded; thus, they provide a foundation for engineering fibers that will allow the gut microbiome to be manipulated for predictable outcomes across disparate individuals. To test the hypothesis that subtle differences in polysaccharide structure select for distinct microbial communities, two subtly different model polysaccharides, red and white sorghum arabinoxylans (RSAX and WSAX), were fermented with identical microbiota. RSAX was slightly more complex at the level of branching diversity than WSAX and maintained a more diverse microbiome in which members of Bacteroides spp., especially B. ovatus, were dominant. In contrast, WSAX promoted the growth of communities dominated by Agathobacter rectalis and Bifidobacterium longum. Interestingly, these polysaccharides selected for genomically identical strains across three unrelated donors. Alongside the differences in community structure, RSAX and WSAX were fermented to different metabolic outcomes. Further, when fed to mice, WSAX and a human-derived microbial consortium adapted to its use modified the cecal metabolism of the mice in sex-specific ways. Interestingly, the effects of transient human microbes could be seen in metabolite profiles and in post-antibiotic community resilience in the mice. Our data suggest that 1) polysaccharide fine structure deterministically selects for fermenting communities; 2) fine polysaccharide variants often target largely the same microbes across individuals; and 3) in turn, these differences lead to divergent metabolic outcomes, which are potentially impactful on host physiology and resilience to stress. Together, these results suggest that well-characterized fiber structures may be used to influence human health at population scales and at relatively small doses.

Tree death is a natural part of forest dynamics, but increasing rates of mortality can result when climatic conditions exceed a species' physiological threshold.
Although directional climate change has historically resulted in shifts in the distributions of species and ecosystems, comparatively rapid shifts in tree distributions attributed to anthropogenic climate change have been documented on all six plant-covered continents. Recent research has focused predominantly on the causal mechanisms of tree death, feedbacks to the climate system, and predictive modeling. Ecologists generally agree that trees and forests in temperate regions will shift to higher latitudes and upward in elevation due to warming trends. However, understanding of how forests will behave at the "trailing ends" is limited. Stand development patterns following forest mortality events are of considerable interest because they indicate the future structure and composition of affected forests, and the ability of these forests to maintain biodiversity and other ecosystem services. Although widespread mortality events can have negative impacts on ecosystem services, there may also be benefits that are important for adaptation in the human dimension.

The production of water for agriculture requires an enormous amount of energy.

Without strong a priori hypotheses about region-specific effects of polyphenols, we corrected for multiple comparisons across the entire brain using FSL's FEAT, correcting at the cluster level. FEAT uses FWE cluster-based thresholding based on random field theory to correct for multiple comparisons across space. Thresholding defines contiguous clusters; each cluster's estimated significance level from Gaussian random field theory is compared with the cluster probability threshold. First, for both the verbal and visual memory tasks, we contrasted task versus baseline for each subject at each time point. To minimize the number of comparisons, we restricted our analysis to learning versus baseline and recall versus baseline for the verbal memory task, and task versus baseline for each of the three visual memory conditions. Next, for each task, the individual subject data were entered into a second-level analysis of within-group means for each contrast. Finally, for each task, the individual subject data were entered into additional analyses of within-group, between-time-point effects and between-group, within-time-point effects.

The carbon footprint is defined as a measurement of the total amount of carbon dioxide emission that is directly and indirectly caused by an activity or accumulated over the lifetime of a product. Because of its relevance to environmental issues such as global warming, the carbon footprint is a prominent topic in environmental science. Virtual water trade refers to the hidden flow of water when food or other agricultural products are traded from one place to another. At the same time, virtual water is related to the carbon footprint both directly and indirectly. Some studies have focused on virtual water trade with the aim of conserving water in crop production by increasing product exports to areas with lower water needs. In this effort, research on the virtual water of agricultural products has the potential to reduce economic costs, since water withdrawals may have greater impacts on water-scarce regions than on water-abundant regions.

However, few studies have analyzed the internal virtual water flow dynamics of the U.S. at a state or regional scale, and fewer still have focused on the associated carbon footprint at that scale. In this study, we calculated the carbon footprint of California's exported agricultural products at their destinations by first estimating the products' water footprint. Previous virtual water quantification studies have identified the U.S. as the leading global virtual water exporter, and close examination indicates that California is the largest agricultural producer. We therefore hypothesize that California is the largest virtual-water-exporting state in the country. Accordingly, we also hypothesize that California releases a large amount of carbon dioxide related to the water embedded in its agricultural products. In this research, we focus on the carbon footprint associated with the energy cost of the water embedded in agricultural products exported from California to other regions of the world.

Nowadays, carbon emissions are a worldwide concern that constrains development in many sectors of human life. Every year, the United Nations regulates the carbon budget for most countries, and how to use the carbon budget efficiently is a pressing issue. At the same time, water shortage is becoming an urgent problem all over the world, and energy deficiency is an equally critical problem. California is facing an unprecedented water crisis in its history; water treatment is the largest energy use in the state, taking up approximately 19 percent of total annual electricity consumption. Keeping water supplies sufficient for the next several decades will require significant financial investment. New regulations and court decisions have resulted in reductions of water delivery from the Sacramento-San Joaquin Delta. In some areas of the state, underground water and surface water supplies are decreasing rapidly.
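The calculation we perform can be summarized as a simple chain from exported crop mass to embedded water, to the energy used to supply that water, to the resulting emissions. The sketch below shows the structure of that chain with placeholder numbers, not measured California data.

```python
def embedded_water_carbon(crop_tonnes: float,
                          water_m3_per_tonne: float,
                          energy_kwh_per_m3: float,
                          grid_kg_co2_per_kwh: float) -> float:
    """Carbon footprint (kg CO2) of the energy used to supply the water
    embedded in an exported crop shipment.

    footprint = exported mass x virtual water per tonne x energy per m3 of
    water supplied x grid emission factor. All figures passed in below are
    illustrative assumptions, not measured values.
    """
    water_m3 = crop_tonnes * water_m3_per_tonne
    energy_kwh = water_m3 * energy_kwh_per_m3
    return energy_kwh * grid_kg_co2_per_kwh

# Illustrative run: 1,000 t of an export crop, 500 m3 of irrigation water per
# tonne, 0.5 kWh to pump/treat each m3, 0.3 kg CO2 per kWh of electricity.
print(f"{embedded_water_carbon(1000, 500, 0.5, 0.3):,.0f} kg CO2")
```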

The energy-water relationship is particularly inseparable in the Southwestern arid and semi-arid regions of the United States, where significant amounts of energy are used to import water. California is exceptionally vulnerable because its water sector is the largest energy user in the state, estimated to account for 19 percent of the total electricity consumed annually. Another fact is that the annual volume of water used to grow agricultural products in California is much greater than the total amount used by other sectors such as commercial and industrial applications. Less well known is the amount of water embedded, or embodied, in agricultural products that are exported to other states and countries. For certain kinds of agricultural products, such as almonds, grapes, strawberries, processed tomatoes, and lemons, California dominates the supply of the whole U.S. market. The possible presence of life elsewhere in our universe is a subject of investigation still riddled with speculation. Although prebiotic chemistry has developed drastically over the last 60 years and evolved into the current field of astrobiology, there is still not a thorough understanding of the series of chemical reactions that first created living entities, or even of the most probable location for them to have occurred. Early theories about the origin of life can be traced back to Oparin and Haldane and have since been referred to as the Oparin-Haldane hypothesis. They proposed that the origin of life necessarily involved a rich broth of bio-molecules that proceeded to form life as we know it after a series of chemical reactions. Many experiments have supported this early theory; however, debate still exists about the most plausible time and location for life's origin on Earth. Stanley Miller's empirical synthesis of amino acids from water, hydrogen, methane, and ammonia helped begin a new field of origin-of-life chemistry. His experiments demonstrated that the most plausible model for the synthesis of bio-molecules was under reducing atmospheric conditions on the early Earth. Miller's experiments validated the early 20th century theories of Oparin and Haldane. Similar experiments have more recently shown that these syntheses are also successful in neutral atmospheres, although with lower yields. Submarine hydrothermal vent systems have also been proposed as the location for the origin of life, and this theory persists despite little empirical evidence. Such debates over the central questions of life's origin are not surprising.

There are vast unknowns about the early Earth during the prebiotic epoch approximately 3.5 billion years ago, and a slim geological record from this epoch makes it inherently difficult to study. Among the major uncertainties are the composition of the atmosphere and the chemistry of the early oceans. The only principles that the scientific community seems to agree upon are that water and organic compounds were essential for the origin of life. The synthesis of organic compounds has thus become a central theme of origin-of-life chemistry, and the search for life on other planets has focused on detecting these necessary ingredients for life's formation. 'Follow the water' is a moniker used by NASA for extraterrestrial exploration, which reflects how important the presence of water is deemed to be in the search for extraterrestrial life. These broad groupings represent the central disciplines of astrobiology, an inherently multidisciplinary science which combines the expertise of many fields. Astrobiology provides the forum to address questions about the origin of life and to assess the probability of life having arisen on other planets within our solar system and universe. The more we learn about the surface and subsurface chemistry of other planets, the more it seems that life may be much more widespread than previously thought. This increased knowledge of our solar system, coupled with the expanding limits of habitability in extreme environments, continues to show the adaptability and tolerance of terrestrial life. It is now widely believed that microbiological life in the deep ocean and deep biosphere represents a far greater reservoir of carbon than all terrestrial life combined. Included in this large reservoir are all of the sea dwellers and microbial life that inhabit the seafloor, making their living in the oceanic crust or at hydrothermal vent systems. As life is recognized to be more and more ubiquitous on our planet, it seems increasingly improbable that Earth is the only planet on which life originated. The Mars exploration program has been successful of late, with a number of important achievements including the robotic exploration of Mars' surface by the twin landed Mars Exploration Rovers, Spirit and Opportunity. These two Mars rovers have achieved great success in confirming an aqueous history of the planet through the detection of minerals deposited by standing water bodies on Mars. Although the timescales of these deposits are not fully known, the detection of evaporitic mineral assemblages confirms that Mars was once a wet planet, similar to the Earth in many respects, most likely very early in the planet's history. Deposits of ice still remain in the polar regions of Mars and within the deep subsurface where water is stable; however, the most accessible regions of the planet appear to be very inhospitable not only to life but also to the preservation of any bio-signatures from the past. In the next few years, robotic missions to Mars will include instruments specifically designed to detect organic compounds and evidence of life on our neighboring planet. These planetary life detection missions have the potential not only to find out whether there was ever life on Mars, but also to help answer some of the fundamental unknowns associated with the origin of terrestrial life. For instance, if evidence of extinct or extant microbiological life on Mars were detected, and it was determined to have a similar biochemistry to terrestrial life, this could be interpreted in many ways.
It could be evidence of similar but independent chemical processes that led to independent origins on neighboring planets, or it could imply that an exchange of organic compounds or other material between planets helped spread a common origin of life. Regardless, these are the questions that should be anticipated if we are successful in detecting life on Mars in the next 10 years.

If the Mars community focuses on the detection of biomolecules that offer unequivocal evidence of life, then we may be successful in detecting traces of life that once existed on Mars. Any success in this field through in situ studies via robotic exploration, future sample return missions, or far-distant crewed missions to Mars must target environments that offer high degrees of preservation of organics within the harsh and extreme Martian surface. Bio-signatures are defined as any type of physical or chemical record that shows evidence of the presence of extinct or extant life. These can be remnants of a microbial community that existed in the planet's early history or the detection of active microbial life. The best bio-signatures to target are bio-molecules that are ubiquitous components of microbial life, constitute a significant portion of their cellular mass, offer good preservation over geological time, and can be detected at trace levels with current technologies. The two largest classes of bio-molecules are nucleic acids and proteins. Proteins, composed of individual amino acid residues linked by peptide bonds, comprise ~55% of the mass of bacterial cells and have a mean length of ~315 residues. Only 20 amino acids are utilized in terrestrial proteins, except for the rare cases of selenocysteine and pyrrolysine; because these amino acids occur so rarely, they are not considered important here. Total amino acids within environmental samples can be used to estimate bio-densities in microbial communities associated with extinct or extant life. The total hydrolyzed amino acids give an estimate of the mass of total protein and can be extrapolated to equivalent cell counts by comparing them with the protein dry-weight composition of prokaryotes. Chapter II below discusses this method of bacterial cell enumeration. There is no reason to expect that extraterrestrial life utilizes a completely different biochemistry from that on Earth. The best chance of detecting evidence of life on Mars is to focus on the major terrestrial bio-molecular classes such as amino acids, which have been defined as prime targets in the search for bio-signatures on Mars. The drawback of amino acids is that they might be degraded on Mars if inadequately protected from harsh surface conditions such as ionizing radiation from space. However, certain secondary minerals that sequester organics could allow for some degree of protection from these extreme conditions. One fundamental property of amino acids other than glycine is their chirality.
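The extrapolation from total hydrolyzed amino acids to equivalent cell counts described above is a simple back-calculation. The sketch below illustrates it in Python under two stated assumptions: the ~55% protein fraction of cell dry mass quoted in the text, and a per-cell dry mass chosen here as a hypothetical placeholder (roughly the size of a typical prokaryote), not a value from this document.

```python
# Assumed constants for the back-calculation (both are illustrative):
PROTEIN_FRACTION_OF_DRY_MASS = 0.55   # ~55% of bacterial cell mass is protein (from the text)
CELL_DRY_MASS_G = 3e-13               # hypothetical dry mass of one prokaryotic cell, in grams


def equivalent_cell_count(total_hydrolyzed_aa_g: float) -> float:
    """Convert a total hydrolyzed amino acid mass (grams) into an equivalent cell count."""
    protein_per_cell_g = PROTEIN_FRACTION_OF_DRY_MASS * CELL_DRY_MASS_G
    return total_hydrolyzed_aa_g / protein_per_cell_g


# Example: 1 microgram of hydrolyzed amino acids recovered from a sample.
print(f"{equivalent_cell_count(1e-6):.2e} cell equivalents")
```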

MDS is a common analytical technique for data from sorting tasks

A theoretical example of one of these binary matrices is depicted in Table 1. From the individual sorting data collected, a symmetrical proximity matrix with sums of counts of how many times the attributes appeared together in "sibling" relationships or "parent-child" relationships was compiled, similar to that in Table 2 but on a larger scale. To ensure that all data could be used to create the flavor wheel, first, the 2 groups were compared. Two separate similarity matrices were created, one for UC Davis participants and one for industry participants. The scaled matrices were used to run 2 separate 5-dimensional multidimensional scaling analyses. The results of the 5D-MDS analyses were used to run a multiple factor analysis (MFA), a technique for comparing multiple datasets that in sensory science is typically applied to compare sensory profiles, also in XLSTAT 2015. No significant difference was found between the 2 groups, so the data for all 72 participants were used for further analysis. To determine the clusters and levels of the flavor wheel, AHC was conducted on the similarity proximity matrix of co-occurrence values using the unweighted pair group average linkage agglomeration method. Hierarchical clustering is a statistical technique that can be applied to sorting data to group the attributes into different categories and subcategories on different levels in the form of a dendrogram. At the beginning of the analysis, every individual object starts as a single "cluster," and then the unweighted pair group average linkage links the attributes together, one pair at a time, from the bottom to the top. In each successive linkage, it merges the most similar pair of items. Upon observation of the dendrogram, truncation was set to specify 9 main categories. In addition to the analysis in XLStat, other similarity methods were tested in R, such as Euclidean, maximum, and Manhattan. Other agglomeration techniques were tested in R as well, such as Ward's, complete, single, and average.
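As a concrete illustration of the clustering step, the sketch below builds a dissimilarity matrix from a co-occurrence matrix and runs unweighted pair group average (UPGMA) linkage, then truncates the tree into a chosen number of categories. It is written in Python with SciPy rather than XLStat or R, and the four attributes and counts are hypothetical stand-ins, not data from the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical co-occurrence counts: how often each pair of attributes was grouped
# together by the 72 participants (diagonal = total number of participants).
n_participants = 72
labels = ["cherry", "berry", "smoky", "ashy"]
cooc = np.array([
    [72, 60,  5,  3],
    [60, 72,  8,  2],
    [ 5,  8, 72, 55],
    [ 3,  2, 55, 72],
], dtype=float)

# Convert similarity counts to dissimilarities in [0, 1].
dist = 1.0 - cooc / n_participants
np.fill_diagonal(dist, 0.0)

# Unweighted pair group average linkage (UPGMA), as used for the flavor wheel.
Z = linkage(squareform(dist, checks=False), method="average")

# Truncate the dendrogram into a fixed number of main categories (9 in the study; 2 here).
clusters = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(labels, clusters)))
```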

In R, the Euclidean distance method with the unweighted average linkage method was determined to be the combination with the most distinct clusters without being biased by outliers or the size of clusters, but even this still split the Fruity group into 2. Otherwise, this combination was very similar to the XLStat result, confirming the hierarchical structure in 2 different software programs. Thus, the XLStat dendrogram with unweighted pair group average linkage, which kept the Fruity group intact, was selected. Finally, MDS analysis was performed to represent all 99 attributes in a 2-dimensional space, a visual aid to see where the attributes fell in proximity to one another. Nonmetric MDS was performed on the proximity matrix of Euclidean distance values using Kruskal's stress values, meaning the order of the "distances" in the resemblance matrix matched the ranking of the distances in the representation space. MDS was performed to supplement the AHC data and to guide the positioning of the main classes around the new Coffee Taster's Flavor Wheel. Since the similarity values were obtained from frequency counts for every pair of attributes, the data were considered nonmetric; that is to say, the differences or ratios between the values held no meaning. Higher values were considered more similar and lower values were considered less similar. Kruskal's stress values, testing both Minkowski's distance values and the Euclidean distances commonly used in nonmetric MDS in the stress function, were used to obtain the 2-dimensional coordinates that most closely adhere to the ranking of those similarity values. Other methods of MDS were tested in R, both metric and nonmetric. In those results, the 9 main categories were positioned in the 2-dimensional space in a similar order, but the plots were more sensitive to outliers, meaning some points were far from the origin of the MDS plot and the majority of points were clustered near the origin.
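A minimal nonmetric MDS run on a precomputed dissimilarity matrix looks like the sketch below; it uses Python's scikit-learn instead of XLStat or R, and the 4x4 matrix is a hypothetical toy example. Note that the `stress_` reported by scikit-learn is only an approximation of Kruskal's stress, and how it is normalized depends on the library version.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical precomputed dissimilarities (e.g., 1 - co-occurrence proportion).
D = np.array([
    [0.00, 0.20, 0.90, 0.95],
    [0.20, 0.00, 0.85, 0.90],
    [0.90, 0.85, 0.00, 0.25],
    [0.95, 0.90, 0.25, 0.00],
])

# metric=False requests nonmetric MDS: only the rank order of the
# dissimilarities is preserved in the 2-dimensional configuration.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          n_init=10, random_state=0)
coords = mds.fit_transform(D)

print(coords)          # 2-D coordinates for each attribute
print(mds.stress_)     # stress of the final configuration
```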

As the purpose of this analysis was to obtain the positioning of the main categories around the flavor wheel, the nonmetric MDS in XLStat was ultimately selected as the option that was less sensitive to outliers and most clearly separated the data points. The MFA comparison of the similarity matrices from the UC Davis panelist group and the coffee industry panelist group revealed that there was no significant difference between the 2 groups. The RV coefficients were much greater than 0.70, meaning the 2 groups were related and came from the same population. An attribute-by-attribute comparison was also plotted from the MFA, showing the degree of similarity in sorting between the 2 groups for each attribute. For all participants together, AHC was truncated at 9 main classes, shown in 9 different colors. The MDS plot for the scaled data of all 72 participants is depicted in Figure 4. Using the dendrogram, the 9 main classes were named. Because the lexicon was used to provide the attributes to be sorted, some of the main categories found did not have an "umbrella" term in the lexicon, that is, a general word that encompasses and describes the category. In order to fit the AHC and MDS results onto a flavor wheel, a few modifications had to be made by SCAA and the researchers. Unfortunately, due to the nature of this project, it was impossible to know exactly which of these "umbrella" terms would be needed or how many, so a few of the terms were moved or added to the lexicon to create the final organization. This issue is further elaborated on in the Suggestions section. These 9 main categories are labeled in Figures 3 and 4. Attributes that are similar are found in the same categories and subcategories in the dendrogram; similar attributes also fall close to one another on the MDS plot, while less similar attributes lie further apart. Additionally, as mentioned earlier, the WCR Sensory Lexicon is a living document, so a few terms were added to it after the sorting exercise was complete and as the lexicon was being finalized, based on the expert opinion of the scientists and panelists at Kansas State Univ.
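The RV coefficient used above to compare the two panelist groups is a matrix correlation between two configurations of the same objects; values near 1 indicate closely related configurations. A small Python sketch of the standard formula is given below; the two random configurations are hypothetical stand-ins for the groups' MDS solutions.

```python
import numpy as np

def rv_coefficient(X: np.ndarray, Y: np.ndarray) -> float:
    """RV coefficient between two data matrices with matched rows (objects)."""
    Xc = X - X.mean(axis=0)          # column-center each configuration
    Yc = Y - Y.mean(axis=0)
    Sx = Xc @ Xc.T                   # object-by-object cross-product matrices
    Sy = Yc @ Yc.T
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

# Hypothetical: two 5-dimensional configurations of the same 99 attributes,
# the second being a noisy copy of the first.
rng = np.random.default_rng(0)
group_a = rng.normal(size=(99, 5))
group_b = group_a + rng.normal(scale=0.2, size=(99, 5))
print(round(rv_coefficient(group_a, group_b), 3))   # close to 1 for related configurations
```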

Finally, with the unweighted pair group average linkage, there is a different similarity level for every single pair, and only 3 levels were needed for this flavor wheel. Thus, the dendrogram was interpreted by SCAA and the researchers to create a 2nd and 3rd tier of subcategories for each of the 9 main categories. To determine the positioning around the wheel, the MDS plot with category labels was used. Therefore, not only are the more similar attributes placed together in the same categories and subcategories, but the 9 main classes are placed around the flavor wheel based on similarity. The hierarchy used for the flavor wheel is the interpretation of the 9-class dendrogram in Figure 3 with these modifications incorporated. The final wheel, translated from Table 4, is depicted in Figure 5. The flavor wheel construction techniques used in this method created a suitable, intuitive flavor wheel to complement the Sensory Lexicon for the specialty coffee industry. However, there are ways to improve the process from the beginning if these methods are to be adopted for the construction of flavor wheels for other products. If the researchers know that a product lexicon will be used to develop a wheel or other visual containing multiple categories and tiers, then these projects could be coordinated to improve the process. To begin with, the initial lexicon should contain only vocabulary from the most specific attributes. The study subjects would then be able to use a free sorting exercise similar to that performed in the study, but the exercise would not involve multiple levels. The subjects would simply sort the words into as many groups or clusters as they deem necessary. Also, when the descriptors are presented to the subjects to be sorted, it would be best to randomize them for each individual, rather than presenting the same unorganized lexicon to each subject. In this way, both research projects would inform each other as they progressed. After the initial sorting exercise, a cluster analysis and an MDS analysis could be performed to determine the number of groups for the 2nd tier and the positioning of the words around the wheel, respectively. These 2nd-tier clusters would then be appropriately named by the subjects or descriptive panel in a consensus exercise. Next, the sorting exercise would be repeated with only the 2nd-tier vocabulary, to sort those descriptors into clusters. Finally, the 1st-tier groups would be named, with input from the descriptive panel. To summarize, to use this improved flavor wheel construction technique, researchers would develop the lexicon and wheel simultaneously. Only the most specific vocabulary words should be present in the initial lexicon, and then the more general descriptors, or so-called "umbrella" terms, would be added in later, with help from the descriptive panelists, for as many iterations or levels as deemed necessary. There are multiple ways in which removal of infected host plant tissue can be employed as an element of disease management. These include removal of reservoir hosts to limit pathogen spillover onto a focal host, roguing of infected focal hosts to limit secondary spread, and removal of localized infections within hosts to limit further infection or to retrain an unproductive plant. Studies of bacterial pathogens in perennial crops have evaluated the utility of pruning as a disease management tool, with mixed results.
The removal of infected plant tissues is analogous to measures used for the management of trunk diseases, often referred to as "remedial surgery," as an alternative to replacing infected plants. In this study, we investigated whether severe pruning of Xylella fastidiosa-infected grapevines in commercial vineyards could clear vines of existing infections. Pierce's disease (PD) is a lethal vector-borne disease of grapevines caused by the bacterium X. fastidiosa. After susceptible plants are inoculated with X. fastidiosa, pathogen populations multiply and move through the xylem network, leading to symptoms of reduced water flow, including leaf scorch, cluster desiccation, vine dieback, and eventually death.

There is no cure for grapevines infected with this bacterium; current strategies for management of PD in California vineyards involve limiting pathogen spread to uninfected vines by controlling vector populations, disrupting transmission opportunities, and eliminating pathogen sources in the surrounding landscape. PD is notable for the numerous sources of variability in infection levels and symptom severity in plants. X. fastidiosa infection levels vary among plant species, grapevine cultivars, and seasons, and as a function of temperature. Like other bacterial plant pathogens, X. fastidiosa is often irregularly distributed within individual hosts. For example, X. fastidiosa infection levels in grapevines may vary by more than 10-fold between grapevine petioles and stems; in other hosts, infection levels may vary by more than 100-fold between basal and apical sections of shoots. This within-host heterogeneity may be epidemiologically significant if it affects pathogen acquisition efficiency. Moreover, if such variation is associated with protracted localized infection near inoculation points, this heterogeneity may facilitate other disease management tactics. In addition to grapevines, other plant species that are susceptible to X. fastidiosa infection include citrus in South America. Management of the resulting disease in C. sinensis relies on clean nursery stock, vector control, and pruning infected plant tissue from established trees or roguing young plants. The concept of pruning infected plant material is based on the fact that, in established trees, tissue with early symptoms of infection can be pruned ~1 m proximal to the most symptomatic basal leaf, effectively eliminating infections, as the remaining tissue is free of X. fastidiosa. However, pruning is not adequate for young trees or for removing bacterial infections if any symptoms are present in fruit. X. fastidiosa multiplies and spreads through the xylem vessels, reaching the roots of perennial hosts such as citrus, peach, alfalfa, and blueberry. Nonetheless, under field conditions, chronic infection of grapevines is temperature- and season-dependent. In regions with freezing winter temperatures, infected plants can recover in winter, curing previously infected and symptomatic grapevines.

Mass-flowering crops can exert strong effects on pollinator populations

Toddlers' diarrhea is a well-known and benign condition that often responds to simply removing excess juice from the diet of 1- to 4-year-olds. However, malabsorption of carbohydrate in juice, especially when consumed in excessive amounts, can result in chronic diarrhea, flatulence, bloating, and abdominal pain. Fructose and sorbitol have been implicated most commonly, but the ratios of specific carbohydrates may also be important. The malabsorption of carbohydrate that can result from large intakes of juice is the basis for some health care providers to recommend juice for the treatment of constipation, particularly in infants. The North American Society of Pediatric Gastroenterology, Hepatology, and Nutrition constipation guideline suggests taking advantage of the sorbitol and other carbohydrates contained in some juices, such as prune, pear, and apple juices, to help increase the frequency and water content of stools for infants with constipation. A basic premise of the Dietary Guidelines for Americans, the most recent version of which was published in 2015, is to focus on nutrient-dense foods. Fruit is 1 of the key focus foods in the dietary guidelines. Fruit, along with vegetables, is recommended to provide necessary vitamins and minerals, reduce the risk of cardiovascular disease, potentially protect against cancer, and curb excessive caloric intake. For example, children consuming approximately 1000 kcal/day should have ∼1 cup of fruit per day, whereas those consuming approximately 2000 kcal/day should consume ∼2 cups of fruit per day. Although whole fruit is to be encouraged, up to half of the servings can be provided in the form of 100% fruit juice. A 6-ounce glass of fruit juice equals 1 fruit serving. Fruit juice offers no nutritional advantage over whole fruit. A disadvantage of fruit juice is that it lacks the fiber of whole fruit. Kilocalorie for kilocalorie, fruit juice can be consumed more quickly than whole fruit.

Reliance on fruit juice instead of whole fruit to provide the recommended daily intake of fruit does not promote eating behaviors associated with the consumption of whole fruit. Because recent studies suggest that pure orange juice consumption has health benefits in adults, further research is needed to determine whether children and adolescents may derive similar benefits. Pediatricians play a central role in children's health and nutrition by providing guidance to pediatric patients and their parents. Pediatricians can also advocate for changes in public policy, especially in schools, where improved fruit and vegetable intake has been associated with policies promoting healthier dietary choices. Open assessment and recommendations for appropriate dietary habits, including consuming whole fruit rather than fruit juice, can help encourage parental support of healthy rates of weight gain. Although other risk factors associated with obesity may be important to consider, a recent study suggests that special attention may be indicated for infants and children of women who are overweight before bearing children. Parents need to be informed that unpasteurized juice products may contain pathogens, such as Escherichia coli, Salmonella species, and Cryptosporidium species, which may be harmful to children. These organisms are associated with serious diseases, such as hemolytic-uremic syndrome. If parents choose to give their children unpasteurized juice products, they should do so with caution and be advised that this is an unsafe practice. Commercially prepared unpasteurized juice must contain a warning on the label that the product may contain harmful bacteria. This guidance does not apply to certain modes of sale, but families should remain vigilant when providing unpasteurized juice products to children. Pasteurized fruit juices are free of microorganisms and are safe for infants, children, and adolescents. The American Academy of Pediatrics recommends that human milk be the only nutrient fed to infants until approximately 6 months of age.

For mothers who cannot breastfeed or who choose not to breastfeed, a prepared infant formula can be used as a complete source of nutrition. No additional nutrients are needed. There is no nutritional indication to give fruit juice to infants younger than 6 months. Offering juice before solid foods are introduced into the diet could risk having juice replace human milk or infant formula in the diet, which can result in reduced intakes of protein, fat, vitamins, and minerals such as iron, calcium, and zinc. Malnutrition and short stature in children have been associated with excessive consumption of juice. It is optimal to completely avoid the use of juice in infants before 1 year of age. When juice is medically indicated for an infant older than 6 months, it is prudent to give the juice to the infant in a cup. Dental caries have also been associated with juice consumption. Prolonged exposure of the teeth to the sugars in juice is a major contributing factor to dental caries. Recommendations from the AAP and the American Academy of Pediatric Dentistry state that juice should be offered to toddlers in a cup, not a bottle, and that infants not be put to bed with a bottle in their mouth. The practice of allowing children to carry a bottle, easily transportable covered cup, open cup, or box of juice around throughout the day leads to excessive exposure of the teeth to carbohydrate, which promotes the development of dental caries. Infants can be encouraged to consume whole fruit that is mashed or pureed. After 1 year of age, fruit juice may be used as part of a meal or snack. It should not be sipped throughout the day or used as a means to calm an upset child. Because infants consume <1600 kcal/day, 4 ounces of juice per day, representing half of the recommended daily serving of fruit, is more than adequate. The AAP practice parameter on the management of acute gastroenteritis in young children recommended that only oral electrolyte solutions be used to rehydrate infants and young children and that a normal diet be continued throughout an episode of gastroenteritis.

Surveys show that many health care providers do not follow the recommended procedures for the management of diarrhea. The high carbohydrate content of juice, compared with oral electrolyte solutions, may exceed the intestine's ability to absorb carbohydrate, resulting in carbohydrate malabsorption. Carbohydrate malabsorption causes osmotic diarrhea, increasing the severity of the diarrhea already present. Fruit juice is low in electrolytes; the sodium concentration is 1 to 3 mEq/L, whereas the stool sodium concentration in children with acute diarrhea is 20 to 40 mEq/L and oral electrolyte solutions contain 40 to 45 mEq sodium/L. As a replacement for fluid losses, juice may predispose infants to the development of hyponatremia. Concern has been raised that infants exposed to orange juice had an increased likelihood of developing an allergy to it. The development of a perioral rash in some infants after being fed freshly squeezed citrus juice is most likely attributable to the chemical irritant effects of acid. Diarrhea and other gastrointestinal symptoms observed in some infants were most likely attributable to carbohydrate malabsorption. Although allergies to fruit may develop early in life, they are uncommon. Most issues relevant to juice intake for infants are also relevant for toddlers and young children. Fruit juice and fruit drinks are easily overconsumed by toddlers and young children because they taste good. In addition, they are conveniently packaged or can be placed in a bottle or transportable covered cup and carried around during the day. Because juice is viewed as nutritious, limits on consumption are not usually set by parents. Toddlers and young children can be encouraged to consume whole fruit instead of juice. Like soda, juice can contribute to energy imbalance. Pediatricians should support policies that seek to reduce the consumption of fruit juice and promote the consumption of whole fruit by toddlers and young children already exposed to juices. This support should include policies of the Special Supplemental Nutrition Program for Women, Infants, and Children, provided that those policies do not have negative nutritional consequences for children without access to fresh fruit. In addition, high intakes of juice can contribute to diarrhea, overnutrition or undernutrition, and the development of dental caries. The dilution of juice with water does not necessarily decrease the dental health risks. Juice consumption presents fewer nutritional issues for older children and adolescents because they consume less of these beverages. Nevertheless, juice intake should be limited to 8 ounces/day, half of the recommended daily fruit servings. It is important to encourage the consumption of whole fruit for the benefit of fiber intake and the longer time needed to consume the same kilocalories. Excessive juice consumption and the resultant increase in energy intake may contribute to the development of obesity. One study found a link between juice intake in excess of 12 ounces/day and obesity.

Other studies, however, found that children who consumed greater amounts of juice were taller and had a lower BMI than those who consumed less juice, or found no relationship between juice intake and growth variables. A more recent study suggested that varying intakes of 100% juice were not associated with obesity. More research is required to better define this relationship. Global production of pollinator-dependent crops has increased by 300% in the past 50 years. At the same time, managed honey bee populations are declining due to a complex of factors including novel diseases, pesticides and habitat change. Pollinator deficiencies may precipitate significant yield reductions and increased food prices, ultimately jeopardizing food security. Unmanaged bees are highly effective pollinators of a variety of crops and act as insurance against loss of pollination function due to honey bee deficits. While proximity to natural habitat increases populations of such alternate pollinators, intensive agricultural landscapes often contain little remnant habitat. As a result, re-diversification of agricultural areas has been proposed as a means of bolstering pollination services from these alternate pollinators. Diversification of agricultural landscapes can take place at many scales, including within fields, along field edges, or bordering landscape features. One benefit of field edge techniques is that they create habitat without sacrificing arable land, and field edges comprise a large portion of the non-cropped area in farming regions globally. Farm bill conservation programs in the United States and agri-environmental schemes in the European Union prioritize on-farm habitat creation projects that target pollinators, providing incentives through cost-share programs. Despite the prominence of these programs, there is little information as to the effectiveness of field-margin diversification techniques and, specifically, whether they can bolster pollinator services and affect yields to the same levels documented in patches of natural habitat while simultaneously conserving pollinator species. One common field edge diversification technique, hedgerow restoration, has been found to increase pollinator richness within field edges and up to 100 m into nearby crop fields. Additionally, hedgerows show potential for increasing pollination function within adjacent fields. Using sentinel canola plants, Morandin, Long and Kremen found that wild bees enhanced seed set, once the contribution from managed honey bees was accounted for. However, the canola plants provided a highly attractive resource within an unattractive crop matrix of processing tomato, which provides few nectar rewards and requires buzz-pollination to release pollen stores. These conditions are not reflective of the field conditions created by monoculture plantings of pollinator-dependent, mass-flowering crops (MFCs), which generate hundreds of thousands of synchronous, though short-lived, blooms within a single field. Pulses of highly attractive floral resources can create dilution effects, drawing species away from adjacent seminatural habitat and reducing pollination services there. Yet in spite of the attractiveness of MFC fields, wild bee abundance and richness have been found to be higher in habitats, including hedgerows, in closer proximity to MFC fields. The effects of MFCs may be species-specific, with some species exhibiting higher preference for MFCs over other resources.
Specialist pollinators, such as the squash bee, seek out fields of their host plant, cultivated squash, in the landscape. While the influence of MFCs on pollinator populations and services has been well studied, whether the presence of field-scale restorations can augment pollinator populations and pollination services within MFC fields remains an open question. We examine the ability of hedgerows to enhance pollination services in a simplified agricultural landscape when adjacent to a mass-flowering, pollinator-dependent crop, cultivated sunflower. We ask whether the pollinator species found within hedgerows during the crop bloom period are the same as those found within adjacent sunflower fields. Then, using an independent data set, we determine whether the most abundant wild sunflower visitors, sunflower specialist bees, also utilize hedgerow plantings in our study landscape.

The egg density effect on the percentage of eggs developed to adults in cherry was analyzed using linear regression

Native to East Asia, Drosophila suzukii has been a major invasive pest of soft- and thin-skinned fruits since it was first detected in 2008 in North America and Europe, and it has recently been found in South America. Drosophila suzukii is highly polyphagous, being able to oviposit and/or reproduce in various cultivated and wild fruits. Its fast development and high reproductive potential can lead to explosive population increases and significant economic losses to crops. Though various management strategies, including behavioral, biological, chemical and cultural approaches, have been implemented to suppress D. suzukii populations and reduce crop damage, current control programs rely heavily on insecticides that target adult flies in commercial crops. Because non-crop habitats can act as a reservoir for the fly's reinvasion into treated crops, area-wide Integrated Pest Management strategies that reduce population densities at the landscape level need to be developed for such a highly mobile and polyphagous pest. To develop area-wide programs, it is critical to understand how D. suzukii populations persist and disperse in the landscape as the season progresses. Many environmental factors, such as local climatic and landscape traits, may trigger the dispersal of D. suzukii populations to escape resource-poor habitats or unfavorable weather conditions. Landscape elements surrounding cultivated crops, such as forests and shrub vegetation, could act as sinks, sources, shelters or overwintering sites for the fly populations. For this reason, the availability of alternative hosts could play an important role in sustaining fly populations and dictating their local movement patterns when favorable hosts are not available. Researchers have provided a better understanding of local D. suzukii population dynamics. Still, there are gaps that limit our understanding of the relative importance of different hosts for D. suzukii within some geographical regions.

For example, the seasonal periods of host utilization and the importance of non-crop hosts within the agricultural landscape need to be understood to develop area-wide programs. In this framework, this study aimed to illustrate the temporal dynamics of host use by D. suzukii in California's San Joaquin Valley, one of the world's major fruit growing regions. Drosophila suzukii was first detected in California when it was found infesting strawberries and cranberries in Santa Cruz County in 2008. Since then, damaging populations have been recorded from cherries, cranberries, mulberries, raspberries and strawberries, mainly in the coastal or northern California fruit growing regions with relatively mild summers. In comparison, California's interior San Joaquin Valley has hotter summers and colder winters, and while D. suzukii is collected in cherry, citrus, fig, grape, kiwi, mulberry, nectarine, peach, persimmon, plum and pomegranate, as well as in non-crop habitats surrounding the orchards, reported crop damage has been mainly on cherries. Adult fly captures show two main periods of activity, spring and fall, and low captures in winter and summer. Fly captures were positively correlated between pairs of sampled sites according to their proximity, but negatively related to differences in fruit ripening periods among crops, suggesting that fly populations might move among crop and/or non-crop habitats during the year. Though adult flies are captured in various orchard crops, it is not clear whether these fruits are vulnerable and serve as hosts. For example, the potential impact of D. suzukii on wine grapes in Italy was discussed by Ioriatti et al., who observed D. suzukii oviposition in soft-skinned berries, and, in Japan, some grape cultivars were reported as hosts for D. suzukii. In Oregon, Lee et al. found that D. suzukii was able to successfully oviposit in some wine grape cultivars but that offspring survival was low, whereas other studies observed no or low levels of infestation of intact grapes in the field or laboratory.

Some of the initial work in Japan reported that D. suzukii emerged only from fallen and damaged apple, apricot, loquat, peach, pear, persimmon and plum, but Sasaki and Sato reported that healthy peach fruit can be infested. However, in California, Stewart et al. reported that intact peach fruit are unlikely hosts. No doubt, many fruits with hard or hairy skin can be colonized if wounds are available to allow flies to oviposit in the pulp. In this study, we document the temporal patterns of host use by D. suzukii in California's San Joaquin Valley by sampling intact and damaged fruits of various crop and non-crop plants throughout the fruiting season. We evaluated the suitability of key fruits, including several previously unreported ornamental and wild fruits, as hosts for the fly, focusing in particular on the host status of grapes (considered a non-preferred host) and cherries (considered a preferred host). Wine grapes can contain uniquely high levels of organic acids that are important for producing wines less susceptible to microbial and oxidative damage and with more vibrant color. The levels of acidity decrease as the fruit ripens but remain relatively high throughout the ripening process. For this reason, we also examined the impact of tartaric acid concentrations on the fly's fitness. For cherries, we examined the effects of cultivar and fruit size on the fly's performance. We additionally monitored adult fly populations at different elevations, from the Valley floor east to the foothills and Sierra mountains, to determine whether the fly is active at higher elevations during the hot summer, when fly populations were extremely low in the Valley's agricultural areas. We discuss the implications of this information for area-wide management in the San Joaquin Valley. A total of 17 common fruits were sampled in a temporal sequence of fruit ripening, including twelve important crops, three ornamentals, and two wild host plants. Samples were taken from 2013 to 2015 at the University of California's Kearney Agricultural Research and Extension Center, near Parlier, California, and near Brentwood, California.

Ornamental fruits were also collected in riparian areas surrounding agricultural crops near Brentwood. Bitter cherry, Prunus emarginata Eaton, and Cascara buckthorn, Frangula purshiana, are endemic to western North America; these fruits were collected at higher elevations (1683 m) near Shaver Lake, California. For all species, both intact fruit and damaged fruit were collected as available, as the fruit reached a ripening stage susceptible to D. suzukii oviposition. A total of 30–50 fruit were collected at a susceptible ripening stage for each species, although the number of intact ornamental and wild fruits varied depending on availability. Collected fruits were placed individually or in groups of 10–50 in deli cups and held under controlled conditions at the University of California's Kearney Agricultural Research and Extension Center. Deli cups were covered with fine organdy cloth and fitted with a raised metal grid on the bottom to suppress mold growth. A piece of tissue paper was placed underneath the fruit to absorb any liquid accumulation. Emerged flies were collected every 2–3 d, frozen, and then identified as either D. suzukii or other drosophilids. Only those flies that emerged within 2 weeks following field collection were counted, to exclude the possibility of second-generation flies. All laboratory studies were conducted under controlled conditions, as described above. A laboratory colony of D. suzukii was established from field collections of infested cherries at Kearney. The fly larvae were maintained on a standard cornmeal-based artificial diet using methods described by Dalton et al., and adult flies were held in Bug Dorm2 cages supplied with a 10% honey–water solution and petri dishes containing standard cornmeal medium sprinkled with brewer's yeast for feeding and oviposition. Field-collected D. suzukii were introduced into the colony yearly to maintain its vigor. All tests used 1–2-week-old adult female flies that had been housed with males since emergence. To determine whether D. suzukii can oviposit within and develop from damaged or rotting navel oranges, a single adult female D. suzukii was exposed to a whole fresh fruit, halved fresh fruit, rotting whole fruit, or halved rotting fruit for 24 h in an acrylic cage. To simulate the natural decay process of a fallen orange, fresh oranges were placed individually on wet sandy soil in deli cups until the fruit started to rot. The halved fruit were allowed the same amount of time as the whole fruit but were cut into halves just prior to the test. On average, rotted fruit had 42.3 ± 7.3% of their surface covered by mold growth. Following exposure, the number of eggs laid was counted, and the fruit was then held in the cage until the emergence of adult flies.

Each treatment started with 25 replicates; however, a few replicates were discarded because of contamination by other drosophilids, which likely occurred during the regular examination of the decay status of the fruit. A sub-sample of 10 fruit was measured to determine the Brix levels of fresh and rotting fruits. To determine the possible effect of tartaric acid on D. suzukii survival and development, seven different concentrations of tartaric acid were mixed with a standard artificial diet. The powdered tartaric acid was purchased from a wine and beer brewing store in Fresno, CA, USA, and mixed with the diet just before the diet solidified. The content of tartaric acid in grapes can vary depending on cultivar, ripeness, and environmental conditions; for example, Kliewer et al. reported a tartaric acid content ranging from 3.7 to 13.2 g/L in different cultivars and from 3.4 to 9.2 g/L in early- vs. late-harvested cultivars. The doses used here covered these reported ranges. Each treatment had 20–22 replicates, and each replicate started with 10 D. suzukii eggs from the laboratory culture that were placed in Drosophila vials over the diet. The number of developed adults was recorded. A sub-sample of 25 pupae from each treatment was measured for pupal length and width, and the volume of each pupa was estimated from these dimensions. Apple cider vinegar traps were used to monitor fly populations at four different elevations from the Valley's low agricultural areas to the Sierra Nevada: Kearney, lower foothills, higher foothills, and Sierra mountains. Traps at Kearney were placed in a mixed stone fruit orchard; traps at the three higher elevations were along Highway 168, with the foothill sites in residential yards with fruit trees and the Sierra site at the forest's edge in bitter cherry bushes. Three traps were placed at each location, approximately 200 m apart. Collection methods were similar to Wang et al. Briefly, traps were constructed of plastic containers filled with apple cider vinegar and a small amount of Bon-Ami Free and Clear® unscented soap to serve as a surfactant. Traps were hung on tree branches at head height and then checked and replaced weekly from June to November 2017. Captured arthropods were placed into 95% ethanol in small glass bottles and later examined under a dissecting microscope to count the number of D. suzukii. Counts of emerged adult D. suzukii were based on total fruit samples. Laboratory data on fruit size preference, the citrus test, and the grape acidity effect were presented as treatment means, and treatment effects were compared with analysis of variance (ANOVA). For host suitability tests, since fruit varied in weight, the percentage of D. suzukii eggs that successfully developed to adults was calculated based on eggs per gram of fruit to standardize the comparison among treatments. The percentage of eggs that successfully developed to adults on different fruit species or different cherry cultivars was subjected to further analysis with a generalized linear model with a binomial distribution and log-link function, considering the effects of fruit species or cultivar and egg density per gram of fruit, as well as the interaction of these two factors. To separate the means among treatments, the percentage data were also arcsine transformed as needed to normalize the variance and analyzed using ANOVA.
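A minimal sketch of this kind of egg-to-adult development model is shown below in Python with statsmodels, assuming hypothetical per-sample data; it uses the default logit link, with a comment indicating where the log link named in the text could be substituted. It illustrates the modeling approach only and is not the authors' analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from patsy import dmatrix

# Hypothetical per-sample data: adults emerged, eggs laid, fruit species,
# and egg density (eggs per gram of fruit).
df = pd.DataFrame({
    "adults":      [18, 22, 5, 30, 3, 12],
    "eggs":        [25, 30, 20, 40, 15, 22],
    "species":     ["cherry", "cherry", "grape", "cherry", "grape", "grape"],
    "egg_density": [1.2, 1.5, 2.0, 2.1, 1.8, 1.1],
})

# Binomial response expressed as (successes, failures); predictors include the
# species-by-density interaction described in the text.
endog = np.column_stack([df["adults"], df["eggs"] - df["adults"]])
exog = dmatrix("C(species) * egg_density", df, return_type="dataframe")

# Default logit link shown here; the study specifies a log link, which could be
# requested with sm.families.Binomial(link=sm.families.links.Log()).
model = sm.GLM(endog, exog, family=sm.families.Binomial())
print(model.fit().summary())
```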
A separate analysis with 10 different cherry cultivars did not yield a significant effect of Brix on the percentage of eggs that successfully developed to adults, although we could not rule out the possibility that Brix and other chemical properties affect other fitness parameters of the developed flies. Many of the differences in chemical traits among fruits could be attributed to geographic location and differences in environmental and cultivation conditions rather than inherent varietal properties, as in cherries. In the current study, chemical differences were controlled to some extent, as the cherry cultivars used were grown in the same plot under the same fertilization and irrigation regimes.

Each cell was carefully removed daily and the filter paper doused with water to prevent leaf desiccation

Currently, there are no integrated pest management plans available for control of citrus thrips in blueberry. This is primarily due to the recent nature of this crop-pest association. Avocado thrips, Scirtothrips perseae Nakahara, is a relatively new pest of avocados in California. It appeared in the state in 1996 and, at the time, was a species new to science. By 1998, crop damage had reduced industry revenues by 12%. Avocado thrips adults can feed on over 11 plant species; however, larvae have been found only on avocados in the field in both California and Mexico, suggesting that S. perseae has a highly restricted host range. Although it has little effect on tree health, avocado thrips feed directly on immature fruit, and obvious feeding scars cause severe downgrading and culling of damaged fruit. With a limited number of pesticides available for thrips control and the propensity with which economically important thrips develop insecticide resistance, it is wise to monitor population levels carefully, limit treatments to population levels of economic concern, and time treatments optimally. Appropriate cultural practices and conservation of natural enemies should be practiced in concert with the use of pesticides only on an as-needed basis. Thus, continuing the search for effective biological and chemical controls useful in citrus and avocado thrips management is important. For both species of thrips, some pupation occurs on the tree in cracks and crevices; however, the majority of both species drop as late second instars from trees to pupate in the upper layer of the leaf litter under trees. Propupae and pupae are rarely seen, move only if disturbed, and do not feed. Thus, pupation in the upper layers of the soil surface may create the ideal interface for control using the entomopathogenic fungus Beauveria bassiana. Coarse organic mulch beneath trees and the maintenance of a mulch layer, a common practice by many growers as a method of Phytophthora spp. management in avocados, may reduce survival of thrips that drop from trees to pupate below the tree.

The effectiveness of mulching for thrips control is uncertain, and the labor costs required to add mulch may not be justified solely for thrips control. There is increasing pressure in the U.S. to move away from broad-spectrum insecticides and focus on alternative methods of control, e.g., genetically modified crop plants expressing Bacillus thuringiensis toxins, use of entomopathogens, and similar approaches. Applications of B. bassiana have been reported to decrease populations of thrips in greenhouse cucumbers, chrysanthemums, gerbera daisies, roses, and carnations. Microbial insecticides containing δ-endotoxins from Bt have been used as alternatives to conventional chemical insecticides for almost 70 years. Bt produces insecticidal proteins during the sporulation phase as parasporal crystals. These crystals are primarily composed of one or more proteins, i.e., Crystal (Cry) and Cytolytic (Cyt) toxins, also called δ-endotoxins. From a practical perspective, Cry proteins are parasporal inclusion proteins from Bt that exhibit experimentally verifiable toxic effects on a target organism or have significant sequence similarity to a known Cry protein. Similarly, Cyt proteins are parasporal inclusion proteins from Bt that exhibit hemolytic activity or have obvious sequence similarity to a known Cyt protein. These toxins are highly specific to their target insects, are innocuous to humans, vertebrates and plants, are regarded as environmentally friendly, are completely biodegradable, and show little adverse effect on non-target species. The Cyt proteins are significantly different from the Cry proteins both in their structure and in their biological activities. However, Cyt proteins have shown toxicity to non-dipterous insects. In fact, Cyt proteins in some cases can extend the activity of other Bacillus spp. toxins to mosquitoes that lack the proper receptor. Many studies with thrips involving Bt proteins have typically evaluated Cry toxins in transgenic crops targeted mainly toward lepidopterous pests, and there are no published studies that we know of examining the impact of Cyt proteins on thrips. Due to the synergism seen between these two Bt proteins and the method of thrips feeding, commonly described as 'punch and suck', whereby leaf tissue is macerated prior to ingestion, we hypothesized that Cry or Cyt proteins could potentially be useful against thrips pests.

The goal of this investigation was to determine whether Cry or Cyt proteins or B. bassiana could be used effectively to manage citrus and avocado thrips. Field management of both thrips species is the ultimate goal with these biopesticides, but field studies are laborious and expensive. Thus, we evaluated these materials in the laboratory to determine which were sufficiently efficacious to warrant follow-up field studies. Leaves of both avocado and citrus for all bioassays were chosen in observably identical states; young and soft but fully expanded leaves were used, as these are the type on which both species of thrips prefer to feed, and large leaves were needed to fit in the Munger cell bioassay units that confined the thrips on treated leaves. Briefly, Munger cells were constructed as a Plexiglas sandwich; the middle cell layer was drilled with a 3.2-cm diameter bit to provide a circular test arena. The upper and lower parts of the Plexiglas sandwich were solid, and between the lower base and the test arena a piece of filter paper was placed to allow moisture exchange and to extend the life of the leaf during the bioassay. Airflow through the test arena was provided through two holes drilled through the center cell layer directly opposite one another, with fine-mesh screening melted onto the interior of the test arena to prevent escape. The Plexiglas sandwich was held together with four binder clips positioned such that the airflow holes were not covered. Once dry, the leaves were placed on the filter paper in Munger cells and the respective thrips species was added. The lid was placed on the cell while leaving the cell arena exposed, so that once the thrips were added, the cells could be closed quickly. Female and late second-instar avocado thrips and citrus thrips were then placed on treated leaves of their respective host plants inside the Munger cell. Control leaves for both species were treated with a mixture of the same suspension ingredients minus the protein. Bioassays were conducted concurrently in the following manner for both species: adult female thrips were placed on leaves coated with activated or inactivated forms of both Cyt1A and Cry11A, immature thrips were also placed on leaves coated with activated or inactivated forms of both Cyt1A and Cry11A, and all combinations for adults and immature thrips were carried out along with the corresponding control cells.

The Munger cells were closed and placed in an environmental chamber at 28°C, 55% RH, and long daylight conditions. The bioassay was replicated on two separate dates. A minimum of 10 individuals was placed into each Munger cell, and thrips were checked daily for eight days to assess mortality. After seven days the integrity of the leaves was questionable, and in all but one bioassay mortality was observed before seven days; thus, data were analyzed using day 7 mortality. Mortality was determined by lack of movement after gently probing each thrips with a small brush. Six strains of B. bassiana were obtained from the USDA-ARS Western Integrated Cropping Systems Research Unit located in Shafter, CA. GHA is the commercially available strain found in the field formulation of B. bassiana, Mycotrol O, and the greenhouse formulation BotaniGard ES; each of the other five strains was obtained via isolation from soils in Kern County by USDA-ARS collaborators in 2000. They were stored at −80°C. Culture methods for the thrips experiments were similar to those described previously for Lygus hesperus Knight bioassays and were conducted by collaborators from USDA-ARS, Shafter, CA. Briefly, isolates were grown on SDAY medium, or Sabouraud's dextrose agar plus yeast extract. The conidia were harvested from culture plates after 10–14 days of incubation by scraping with a sterile rubber policeman into a 0.01% solution of Silwet L-77. The conidia were then enumerated with a hemocytometer. For preservation and storage, glycerol was added to the conidial suspension, which was stored in aliquots of 2 × 10⁸ conidia in a 2 ml solution at −80°C until needed for bioassays. Conidial viability was assessed following incubation for 16 h in potato dextrose broth just prior to use in experiments. Viability was determined by adding a sample of approximately 10⁷ conidia to 20 ml potato dextrose broth and incubating ca. 16 h in a rotary shaker at 28°C. Conidia germination was examined under a compound microscope at 400× and scored as viable if the germ tube was at least twice the length of the conidium. Percentage viability was measured on 250 conidia of each isolate. All bioassays were conducted on the basis of the number of viable conidia measured after thawing, and the desired concentrations were formulated by serial dilution. The strain from Mycotrol was isolated and cultured exactly as above to eliminate possible effects of production methods and formulation ingredients on insecticidal activity. Glycerol was not removed prior to using the conidia in bioassays. All six B. bassiana strains were suspended in a 0.01% Silwet in de-ionized water solution and evaluated on the same date at four concentrations for each thrips species. The control consisted of 0.01% Silwet in de-ionized water solution. Each of the 25 treatments was evaluated using five Munger cells, each of which contained a minimum of ten adult female thrips. These bioassays were repeated on 10 dates, with both species tested simultaneously on each date.
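Formulating test concentrations from a counted stock is a two-step calculation: adjust the hemocytometer count by the measured viability, then dilute to the target concentration. The short Python sketch below illustrates that arithmetic; the function names and example numbers are hypothetical and are not taken from the bioassays described here.

```python
def viable_stock_conc(counted_per_ml: float, viability_fraction: float) -> float:
    """Concentration of viable conidia, given a hemocytometer count and germination fraction."""
    return counted_per_ml * viability_fraction


def dilution_step(stock_per_ml: float, target_per_ml: float, final_ml: float):
    """Volumes of stock suspension and diluent (0.01% Silwet solution) for one dilution."""
    stock_ml = final_ml * target_per_ml / stock_per_ml
    return stock_ml, final_ml - stock_ml


# Illustrative only: 2e8 counted conidia/ml at 90% germination,
# diluted to 1e6 viable conidia/ml in a 10 ml working suspension.
stock = viable_stock_conc(2e8, 0.90)
stock_ml, diluent_ml = dilution_step(stock, 1e6, 10.0)
print(f"add {stock_ml:.3f} ml stock to {diluent_ml:.3f} ml diluent")
```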

Groups of thrips were anesthetized by exposure to CO2 for 15–30 s, and each strain was administered quickly and carefully to the dorsum of the abdomen of each anesthetized thrips in a 1 µl drop with a Burkard Hand Microapplicator over filter paper. The droplet spread the length of the thrips immediately, and the thrips was then deposited, still anesthetized, onto the leaf tissue in the Munger cell. Once a minimum of 10 treated thrips had been added, the Munger cells were closed, sealed with binder clips, and placed in an environmental chamber at 28°C, 55% RH, and long daylight conditions. Each cell was checked daily for seven days to observe infection by the fungus; each cell was carefully removed daily and the filter paper doused with water to prevent leaf desiccation. Individuals infected with B. bassiana were defined as those whose natural activity was retarded or arrested and that subsequently produced mycelia, which was confirmed after the bioassay. Mortality caused by mycosis was confirmed by visual observation and then by crushing individuals to reveal the presence of mycelial growth. When mycelial growth was not apparent, crushed individual thrips were placed on potato-dextrose agar plates for 5 days and then re-examined for the presence of mycelial growth. Data were analyzed after Abbott’s correction for control mortality using log-probit analysis with PROC PROBIT in SAS 9.2 and the Raymond Statistics package. The purpose of the probit analysis was strictly gross strain comparison. Probit analysis was used to estimate the LC50 and LC95 levels, confidence intervals, and χ² values for each strain. Lethal concentrations with overlapping 95% confidence intervals were not considered significantly different. The daily check data were analyzed as non-cumulative counts per day via the survival distribution function in SAS 9.2, where the survival function at a given observation time represents the probability that an experimental unit from the population will have a lifetime exceeding that time, with strain and concentration as the explanatory variables. Assessments for each variable by species were made with log-rank and Wilcoxon tests, and multiple comparisons for the log-rank test were adjusted using the Tukey-Kramer method. Data were then plotted as estimates of the survivor function for the different strains, separately for each species.

Bacillus thuringiensis israelensis produces two groups of toxic proteins, the Cry and Cyt toxins, which have different modes of action. In this investigation, results with Cyt1A and Cry11A were disappointing, as both activated and inactivated forms of both proteins showed little effect against adult and second-instar citrus thrips and avocado thrips. To our knowledge, there have been no reports of Bt endotoxins with activity against Thysanoptera, although Cyt1Aa was found to be toxic to the non-target species Chrysomela scripta Fabricius.
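
The dose-response analysis above uses Abbott’s correction and probit regression in SAS; the sketch below is a rough Python analogue intended only to make the LC50/LC95 calculation concrete. The mortality data are invented, and the least-squares probit line is a simplification of PROC PROBIT’s maximum-likelihood fit.

```python
# Rough, illustrative analogue of the SAS analysis: Abbott's correction for control
# mortality followed by a linearized probit fit to approximate LC50 and LC95.
import numpy as np
from scipy.stats import norm

def abbott(observed: float, control: float) -> float:
    """Abbott-corrected mortality proportion: (observed - control) / (1 - control)."""
    return (observed - control) / (1.0 - control)

# Hypothetical dose-response data: concentrations (conidia/ml) and raw mortality proportions.
conc = np.array([1e4, 1e5, 1e6, 1e7])
mortality = np.array([0.18, 0.35, 0.62, 0.88])
control_mortality = 0.05

corrected = np.clip([abbott(m, control_mortality) for m in mortality], 1e-6, 1 - 1e-6)
probits = norm.ppf(corrected)                         # probit transform of corrected mortality
slope, intercept = np.polyfit(np.log10(conc), probits, 1)

lc50 = 10 ** ((norm.ppf(0.50) - intercept) / slope)   # probit(0.50) = 0
lc95 = 10 ** ((norm.ppf(0.95) - intercept) / slope)
print(f"approx. LC50 = {lc50:.2e} conidia/ml, LC95 = {lc95:.2e} conidia/ml")
```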

Until such time, rigorous quality control using analytical methods such as qPCR or enzyme-linked immunosorbent assay remains a sensitive way to confirm that such contaminants are absent from therapeutic diets fed for the clinical diagnosis of CAFR. This remains the industry standard for quality control of commercially produced diets, including extensively hydrolyzed diets. In conclusion, this study confirms that commercial RMBD should not be considered appropriate for selection as an ED in the diagnosis of CAFR, as a result of their tendency to include unlisted protein ingredients, which can differ from batch to batch. A clinician should use caution when interpreting the results of an owner-directed ED trial using RMBD to exclude CAFR as a cause of a pet’s pruritic dermatopathy, and veterinarian-guided elimination diet oversight is still recommended. Until further evidence is presented, an elimination diet and provocation trial with a patient-appropriate prescription-based diet subjected to applicable quality control, or a home-prepared novel protein diet, remains the current diagnostic standard for CAFR.

The CDPS is nationally recognized as the oldest state-specific tracking survey for fruit and vegetable intake in the country. In 2005, it will represent sixteen years of bi-annual survey data using a modified 24-hour recall telephone interview methodology. Before this study, the examination of trends over time, especially for the race/ethnic groups of interest and for their low-income cohort, could not rule out seasonal effects. Because the CDPS has not always spanned exactly the same months, although generally it covers July through October, seasonal issues concerning the race/ethnic samples have been suspect.

The results of this study enhance the interpretation of both past and future CDPS findings. This study set out to achieve four objectives intended to illuminate and augment observations and methodological issues related to the California Dietary Practices Survey in tracking fruit and vegetable consumption in the California population. The first and primary objective is to explore whether seasonal variation exists across the months of the year. The second is to quantify differences among, and seasonality effects within, the race/ethnic groups tracked by the CDPS, specifically Whites, Latinos, and African Americans. The third objective is to determine whether Latino acculturation plays a role in seasonal differences. Fourth, nested within this study is the calibration of a short-form version of the CDPS dietary collection method. This study assesses this short form as a possible low-cost substitute data collection tool for tracking fruit and vegetable intake. Using identical CDPS methods, 8,543 telephone interviews were collected between November 2000 and October 2002. Sample sizes for each month of the year were approximately equal and included over-samples of low-income persons in the three race/ethnic groups of interest and of African Americans and Latinos in general. Interviews were conducted in both English and Spanish, and Latinos were further categorized into high- and low-acculturation segments. Half of the overall sample was randomly assigned to answer three questions directly asking for the number of servings of fruit, fruit juice, and vegetables consumed on the previous day. These were asked ahead of the more extensive and detailed CDPS questions in order to avoid positive recall bias. Findings indicate that seasonality is not a factor in California for the adult population for the total number of servings of fruit and vegetables consumed, or separately for servings of fruit or servings of vegetables.

For the race/ethnic groups in this study, this finding holds for Whites and for Latinos. Additionally, no seasonality effects are seen for high-acculturation Latinos; however, results for low-acculturation Latinos are inconclusive. African Americans do show significant variation among months, mostly attributable to December, when intakes of total fruit and vegetables and of vegetables alone are significantly higher. However, since December is excluded from the usual CDPS data collection period, this finding is not a factor in interpreting CDPS data. For African Americans, although there appears to be some variation, no significant differences are observed across the months of July through November, the months when CDPS African American samples and over-samples have been collected in the past. The overall conclusion is that there are no major month-to-month seasonality effects during the usual period of data collection for the CDPS for all adults, or specifically for Whites, African Americans, Latinos, and the low-income segments of these three race/ethnic groups. A noteworthy caveat is that these findings suggest, somewhat surprisingly, that the monthly patterns may differ from year to year. There is no ready explanation for this, and since this study included only two years of data, there is insufficient evidence to confirm it. Interpretation of CDPS trend data since 1989 can eliminate seasonality as an explanatory factor if patterns of monthly variation from July through October are assumed to be the same from year to year. This study suggests that this is the case.
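
As a purely illustrative version of the month-to-month comparison described above, the sketch below simulates servings data and runs a one-way ANOVA across interview months. The distributions, sample sizes, and the December bump are assumptions, not the survey’s data, and the CDPS analysis itself may have used a different test.

```python
# Illustrative one-way ANOVA of daily servings by month of interview (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
months = ["Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
# Simulated servings per respondent; December is given a higher mean to mimic the
# pattern reported above for African Americans.
samples = [rng.normal(3.9 if m == "Dec" else 3.2, 1.4, size=120).clip(0) for m in months]

f_stat, p_value = stats.f_oneway(*samples)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
# A small p-value indicates significant variation among months; pairwise follow-up
# comparisons would be needed to attribute it to a particular month such as December.
```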

Results comparing the SF3 with the CDPS method in measuring the number of servings of total fruit and vegetables show that the SF3 correlates positively and somewhat strongly. However, the SF3 was found to overestimate the number of servings of total fruit and vegetables by a little more than one-third of a serving. Among the three race/ethnic groups studied, that overestimation is only slightly higher for African Americans. Since few surveys have sample sizes that can statistically differentiate groups at a level below half a serving, the SF3 appears to be a very good approximation of the number of servings of fruit and vegetables for population estimates relative to the CDPS method. The conclusion is similar for estimating the number of servings of fruit: the degree of overestimation for all adults is higher than that for total fruit and vegetables, but it is still less than half a serving. In estimating the number of servings of vegetables, the SF3 performed best, in that there is no significant difference from estimates made using the CDPS method either for all adults or for any of the race/ethnic groups measured. The correlation is also good, although not as strong as that observed for fruit or for total fruit and vegetables. Although the point estimate for servings of vegetables in this study was not statistically different from the CDPS estimate, the lower correlation suggests the SF3 vegetable estimate will not track as well over time as the estimates for fruit alone or for total fruit and vegetables, both of which have relatively stronger correlations with the CDPS estimates. However, compared to dietary studies in general, all these correlations are still very good. In place of the CDPS method, the SF3 is a very good and potentially cost-efficient way to obtain population estimates of the number of servings of fruit and vegetables. It should work well to track intake over time, but would likely produce a slightly higher estimate than that produced by the CDPS method. It is a good estimator of the number of servings of fruit. Estimates of the number of servings of vegetables, although not as strongly correlated, should not be very different from those produced using the CDPS method.

The California Dietary Practices Survey is conducted by the Cancer Prevention and Nutrition Section of the California Department of Health Services and the Public Health Institute to measure and track fruit and vegetable intake in the California population. Since its inception in 1989, the CDPS has been carried out every other year. From 1993 onward, an over-sample of Latino adults has been included, and since 1995, over-samples of low-income persons and low-income African Americans have been conducted. The CDPS is nationally recognized as the oldest state-specific tracking survey for fruit and vegetable intake in the country.
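
Returning to the SF3 calibration discussed above, the sketch below shows the kind of paired comparison involved: the mean SF3-minus-CDPS difference and the Pearson correlation between the two estimates of servings per day. The simulated data and effect sizes are placeholders, not the study’s records.

```python
# Illustrative calibration check of a short-form estimate against a reference method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cdps = rng.normal(3.9, 1.5, size=500).clip(0)            # reference-method servings (simulated)
sf3 = (cdps + rng.normal(0.35, 1.0, size=500)).clip(0)   # short form with a built-in overestimate

diff = sf3 - cdps
t_stat, p_value = stats.ttest_rel(sf3, cdps)             # paired t-test on the difference
r, r_p = stats.pearsonr(sf3, cdps)                       # strength of association

print(f"mean overestimate = {diff.mean():.2f} servings (p = {p_value:.3g})")
print(f"Pearson r = {r:.2f} (p = {r_p:.3g})")
```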

In 2005 it will represent sixteen years of bi-annual survey data using a modified 24-hour recall telephone interview methodology. The results of this California Fruit and Vegetable Intake Calibration Study will enhance the interpretation of both past and future CDPS findings.

Trends among White, Latino, and African-American groups have been a major focus of the CDPS. Between 1989 and 2001, the trend for the overall state estimates was relatively stable, going from 3.8 servings in 1989 to 3.9 in 2001. The highest estimate was 4.1 servings for 1995, dropping back to 3.8 in 1997 and 1999 and 3.9 in 2001. Among the majority White population, the trend mirrors the statewide trend: after an initial increase from 3.7 to 4.0 in 1991, the estimate has remained relatively flat at 3.9 servings per day, going to 4.0 in 2001. A much more pronounced increase has been observed among California’s Latino population: starting relatively high at 3.9 servings in 1989, the estimate increased to 4.7 in 1995, then fell almost a full serving to 3.8 in 1997, returning to 3.9 in 2001. The trend among African Americans evokes even more concern because of its seemingly clear negative direction over a number of years. After an initial increase from 4.0 servings to 4.3 in 1991, each subsequent estimate has been lower; after a drop to 3.7 in 1993, it has remained lower and stable at around 3.1 or 3.2 since 1997. A simple linear regression line fitted to these estimates for each race/ethnic group appears slightly positive for Whites and for Latinos, and negative for African Americans. However, the actual trend line or slope for Whites and for Latinos is technically flat, i.e., not statistically different from zero. The slope or trend line for African Americans is significantly negative.

These observations raise the question of a possible methodological or measurement flaw in the CDPS design. If only the statewide general population estimates were considered, the trend line appears believable, even if discouraging when measured against more than 10 years of effort by the 5 a Day—for Better Health! campaign. The race/ethnic subgroups, however, suggest another story. The question remains: how believable are these trends? Are there some critical adjustments not being made to these important data? If there is a race/ethnic difference, can a more precise difference be quantified? How much is the limited sample size of past surveys a contributor to these observations? Are the implementation methods suspect with regard to seasonal timing?

One issue of possible measurement error may arise from the inherent logistical difficulties in obtaining past over-samples of Latino, African American, and low-income persons. To the best possible extent, the CDPS has been conducted mostly during the same months of the year, generally between July and October. Although conducting the survey in the same window of time each year is the operative objective, for a variety of funding-related administrative reasons this has not always been possible. The actual data collection periods for the past seven surveys have been somewhat different. The 1989 survey occurred the earliest, while the 1993 survey ended the latest. Four surveys covered similar periods, starting sometime in July-August and ending in September-October. This is also true for CDPS VI, because it finished data collection on November 2, 2001, making the effect of any cases in November negligible.
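
As a minimal sketch of the trend-line test mentioned above, the snippet below fits a simple linear regression of servings per day on survey year and checks whether the slope differs from zero. The values are illustrative placeholders loosely patterned on the African American series quoted in the text (the 1995 figure is an assumption), so the output should not be read as the published result.

```python
# Illustrative slope test: regress biennial servings estimates on survey year.
from scipy import stats

years    = [1989, 1991, 1993, 1995, 1997, 1999, 2001]
servings = [4.0, 4.3, 3.7, 3.5, 3.2, 3.1, 3.2]   # placeholder values; 1995 is assumed

result = stats.linregress(years, servings)
print(f"slope = {result.slope:.3f} servings/year, p = {result.pvalue:.3f}")
# A p-value below 0.05 would indicate a trend significantly different from flat (slope = 0).
```
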
The question arises whether there is a dimension of seasonal variation not accounted for in the CDPS estimates, especially in CDPS I and III. Equally important to acknowledge is that the more time-consuming, “more difficult to reach” over-samples extended their data collection as much as four to six weeks beyond the end month for the general population shown in Exhibit 3 for CDPS III–VII. This places the Latino and much of the low-income African American data collection far outside the California “summer” period. These groups have had their data collected well into November, when fruit and vegetable intake may be seasonally lower. Although fruits and vegetables are available throughout the year in California, their cost is seasonally affected. This research examines whether any seasonal variation exists for each of the race/ethnic groups, both generally and for their low-income cohorts. Seasonal variability in dietary intake has been recognized and measured using intake instruments other than the CDPS telephone interview method, but not for California alone. This study also measures seasonal differences among California’s Latino population based on different levels of acculturation.