Horticultural genetics may be one such area of stalled innovation

The letter alleges that the new variety contains a piece of technology that infringes upon a client’s IP claims. Furthermore, the patent owner appears not even to be interested in negotiating a license. And to this day, the legendary variety sits in storage somewhere in a greenhouse or a freezer, unused and sadly neglected. Of course, it is difficult to establish the definitive reasons why a project does not come to fruition, especially when numerous factors simultaneously affect the outcome. Prior patents may be just a convenient excuse — and the patent owners a scapegoat — for tough decisions made to terminate unpromising or economically unattractive projects. Still, while patents do provide convincing incentives for private firms to invest in agricultural research and development, taking the necessary steps to respect the rights of patent ownership does add an additional layer of costs for developing new crop varieties. Economists call these additional costs “transaction costs”; they include legal fees for searching and filing patents and expenses for negotiating and drafting licenses. Royalties paid for using another’s technology are not IP transaction costs. Rather, they are “rent” paid to use the technology and to compensate for the R&D expenditures spent to create it. Commercial developers of agricultural biotechnologies often take measures to avoid incurring these IP transaction costs. They may shift their R&D strategies or even acquire other companies to avoid dependence on outside technologies, thereby limiting expenses and preventing the complications and uncertainties inherent in “renting” them. These measures, however, can be costly too. Either way, the costs faced under an IP system can, in theory, cancel out the private incentives created by IP to pursue innovation. More troubling, IP can even prevent publicly funded innovation from having its intended social impact.

Yet are there any good indicators of this stalling beyond just stories and rumors? And if so, can we establish links with IP? Recent U.S. Department of Agriculture registrations for field trials of transgenic crops show that R&D in horticultural crops is lagging when compared with the major row crops. Even leading transgenic horticultural crops such as melon, lettuce, strawberry, grape, apple and sunflower are hardly represented in field trials. Horticultural crops are completely dwarfed by corn, the single most commonly tested transgenic crop, which by itself is the subject of almost half of all transgenic field trials. Of course, U.S. production of any single horticultural crop is far less valuable than U.S. production of corn. Less field-testing is to be expected for less valuable crops. But even when applying a rough calculation to account for the differences in size and value of individual crops — dividing by the annual value of each crop’s U.S. production — horticultural crops tend to show a greater farm-gate value per field trial. In other words, horticultural crops are subject to fewer transgenic field trials, and presumably receive less biotech R&D, for every dollar of crop production. Furthermore, the proportion of transgenic field trials conducted by public-sector research organizations, such as state universities or the USDA, versus the proportion conducted by commercial firms, varies widely by crop type. Public-sector involvement in the field-testing of the 10 leading transgenic crops — mostly major row crops — averages just 15%. Yet in the next 20 mostly horticultural transgenic crops, public-sector involvement averages much higher, around 40%. These numbers should be interpreted cautiously, as the samples representing many of the horticultural crops are small and the ratios are taken over just a few field trials. For example, 16 field trials have been done on transgenic papaya and only 11 on transgenic walnut.
Despite this variability, there appears to be less investment in biotech for horticultural crops than for major row crops, both in absolute terms and relative to overall crop values, while a greater proportion of that smaller R&D investment in horticultural crops comes from the public sector. Involvement by commercial firms in horticultural crops seems to be missing. While these data are too sketchy to conclude outright that commercial firms are underinvesting in horticultural biotechnology, they allow us to ask whether they might be, and if so, why.
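The rough normalization described above can be sketched in a few lines of code: divide each crop’s annual farm-gate value by its number of transgenic field trials. All figures below are hypothetical placeholders for illustration, not the USDA data discussed in the text.

```python
# Sketch of the normalization described above: farm-gate value of U.S.
# production divided by the number of transgenic field trials.
# All figures are hypothetical placeholders, not actual USDA data.
crops = {
    # crop: (transgenic field trials, annual U.S. production value in $ millions)
    "row crop": (5000, 20000),
    "horticultural crop": (50, 1000),
}

for crop, (trials, value_musd) in crops.items():
    value_per_trial = value_musd / trials
    print(f"{crop}: ${value_per_trial:.1f}M of production per field trial")
```

Under these assumed numbers, each field trial in the horticultural crop “covers” far more production value, which is the pattern the text describes: fewer trials for every dollar of crop production.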

After a few early excursions into horticultural crops — most notably by Monsanto, Asgrow and Calgene as well as by Syngenta’s predecessors at Zeneca — major agricultural biotechnology firms have virtually shut down their product development in horticultural crops. Long-shelf-life tomatoes, virus-resistant squash and insect-resistant potatoes have not taken off as did Bt corn and herbicide-tolerant soybeans. Some of the specialized vegetable seed firms, such as PetoSeed, and some of the smaller agricultural biotechnology firms that specialized in vegetable crops, such as DNA Plant Technologies, continued their biotechnology efforts a bit longer. Yet those efforts appear to have all but dried up in recent years. Instead, fruit and vegetable seed companies with active research and production activities, such as Seminis, Danson, Golden Valley, Harris Moran and others, continue to pursue their product development goals through conventional breeding techniques. One exception is the Scotts Company, which is currently seeking regulatory approval for a biotech product for golf courses, a glyphosate-resistant bentgrass. Indeed, most of the biotech work in horticultural varieties is conducted in university laboratories doing basic plant science. Occasionally, those projects spin out a commercially interesting trait or technology, but university technology-transfer offices have a hard time finding commercial partners among the seed firms, nurseries or growers’ associations.

As with any investment, there is a degree of risk involved in putting resources into the development of a new transgenic horticultural variety. Future returns are uncertain, and expected returns are weighed against costs incurred to enter the marketplace. Such considerations also apply, more generally, to public-sector investments in research.
Although the measures of success may be more in terms of scientific advancements than earned profits, the practical importance of a new discovery still matters. The size and strength of demand for a new transgenic variety will determine the size of returns on the investment. Market uncertainties for agricultural products are nothing new, due to such factors as disruptive competition in supply, cyclical price fluctuations and changes in consumer demand. However, some food consumers, such as those in Europe, are skeptical of foods produced using biotechnology.

While a majority of U.S. consumers seem relatively unfazed by the genetic contents of processed bulk commodities such as soybeans and feed corn, consumers could react more strongly to obvious modifications of products in the produce aisle. Yet specific market uncertainties surrounding the use of transgenics could be addressed by the selection of technologies and traits that deliver real, tangible benefits to consumers in ways that are perceived as unambiguously safe.

The process of regulatory approvals for GM crops is essential to assure the safety of the technology. The R&D costs associated with gaining approval are considered up-front or “sunk” investments, and they must be spent to gain access to the market. These costs can be greater if the transgenic crop contains novel proteins or pest-control components, as additional assessments are required. In major row crops, investments to obtain regulatory approval can be recouped from the small technology fees charged on each bag of transgenic seed, which are multiplied out over millions of acres planted; with horticultural crops, however, the distribution of regulatory costs is often concentrated onto much smaller markets. In many horticultural crops, several different varieties are commercially important. If introgression of the new trait via backcrossing is not an option, as may be the case for clonally propagated varieties that do not breed true, each variety must be separately transformed in the lab, and each must be separately tested and approved. Regulatory costs would add up, but they could not be spread out over nearly as large a market as they could for row crops. Still, returns per acre from horticultural varieties tend to be much higher, and the costs of specialized pesticides replaced by transgenic traits may also be higher.
In addition, regulatory costs can be expected to decline as more risk assessments are completed, government agencies become more adept at judging the merits of different biotechnologies, and the policies and procedures become streamlined and finely tuned. The extension of an approach similar to the IR-4 program, which provides regulatory assistance for pesticides targeted to the needs of specialty crops, could also reduce the regulatory burden on transgenic specialty crops.

Transaction costs for gaining freedom to operate in the relevant IP-protected technologies can be considerable. As with regulatory costs, the total IP transaction costs are independent of market size, and a larger number of transgenic varieties means more costly negotiations and more deals to cut. One industry estimate put the costs of negotiating a single crop genetics deal as high as $100,000.
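The economics described here — fixed, market-independent costs recovered over very different acreages — can be sketched as follows. The $100,000-per-deal figure is the industry estimate cited in the text; every other number is a hypothetical assumption for illustration only.

```python
# Hedged sketch of how fixed, market-independent costs amortize over acreage.
# The $100,000-per-deal figure is the industry estimate cited in the text;
# all other numbers are hypothetical assumptions for illustration.
def fixed_cost_per_acre(fixed_costs_usd: float, acres_planted: float) -> float:
    """Up-front (sunk) cost recovered from each acre of the transgenic variety."""
    return fixed_costs_usd / acres_planted

deal_cost = 100_000          # cited per-deal IP negotiation estimate
regulatory_cost = 5_000_000  # hypothetical regulatory-approval cost
ip_deals = 4                 # hypothetical number of licenses to negotiate

total_fixed = regulatory_cost + ip_deals * deal_cost  # $5.4M up front

row_crop = fixed_cost_per_acre(total_fixed, 50_000_000)   # corn-scale acreage
horticultural = fixed_cost_per_acre(total_fixed, 50_000)  # specialty-crop acreage
```

Under these assumptions the same fixed outlay weighs a thousand times more heavily on each acre of the small specialty-crop market, which is the asymmetry the text describes, even before the higher per-acre returns of horticultural varieties are taken into account.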

When multiple patented genetic technologies are stacked in a cultivar, as is increasingly the case, the problem is compounded. Uncertainty over the total amount of IP transaction costs scares off investment in R&D projects unless the expected returns are particularly attractive. This will continue as long as there is uncertainty in the IP landscape for plant biotechnologies and genetic materials. With the number of patents in this area growing at an exponential rate, IP access could be a deterrent to biotech R&D in horticultural varieties for years to come.

IP access is a general problem for all of crop biotechnology. The reasons lie in the cumulative nature of the genetics and biotechnologies embodied in transgenic varieties. Plants are complex systems, and a healthy, productive crop plant has numerous genetic and metabolic pathways functioning together. Those genetics are inherited from breeding stock or can be added using biotechnology. A genetically engineered seed or plant cultivar may contain three different kinds of technological components that can be protected as IP: the germplasm of the plant variety, the specific genes that confer a new trait and the fundamental tools of biotechnology such as genetic markers, promoters and transformation methods. The IP situation is complicated by a number of additional factors that add to the transaction costs.

Different technological components of a transgenic crop variety are covered in the United States under different forms of IP law. If a variety is clonally propagated, the germplasm — the plant variety itself — can be claimed as IP at the U.S. Patent and Trademark Office under a Plant Patent, established in 1930 by the Plant Patent Act to protect against cuttings being taken, repropagated and directly resold under another name. Seed-propagated varieties can be claimed as a form of IP under the USDA system of Plant Variety Protection certification, established by the Plant Variety Protection Act in 1970.
And, since 1980 — following a landmark decision by the Supreme Court in Diamond v. Chakrabarty over the patenting of a genetically engineered microorganism — all kinds of “invented organisms,” including novel plant germplasm, have come to be claimed as IP under standard U.S. utility patents. Subsequent technological and legal developments following Diamond v. Chakrabarty now allow utility patents to protect invented genes, proteins and other gene products, as well as biotechnology tools such as transformation of genetic contents, selection using genetic markers, and regulation of expression using genetic promoters. Finally, a significant part of the value of an agricultural variety often lies not in its technological or biological characteristics per se but rather in its recognition and reputation among consumers in the marketplace. That “brand” name can be protected as IP by registering it as a trademark with the USPTO. The challenges posed by multiple layers of IP law are, if anything, greater for horticultural varieties than for row crops: plant patents, PVPs or utility patents may cover the germplasm; utility patents typically cover the gene and biotechnology tools used; and trademarks are more often used to protect variety names. In leading row crops such as corn and soybeans, germplasm as well as the genes and biotechnologies are protected more consistently under utility patents alone. While trademarks like Roundup Ready or Liberty Link refer to input traits and may be of some value in marketing to farmers, the identities of such agronomic traits command little notice or value from final food consumers.

The sheep tissue samples were collected during a rancher harvesting session

Once the stress condition was removed, a fraction of the cells recovered the capacity to grow in laboratory media, thereby indicating a potential of the E. coli O104:H4 cells to cause human disease. The formation of VBNC bacterial cells on plants was also previously described for Listeria monocytogenes on parsley. The number of viable L. monocytogenes cells was 1 to 2 log higher than the culturable cells recovered from parsley grown in greenhouses at 20°C under low relative humidity; growth of the VBNC cells was not restored on the plants when the RH was increased to 100%. Although we did not examine E. coli O157:H7 recovery from the VBNC-like state, future efforts might investigate whether those cells can recover and resume growth either under growth-conducive conditions on lettuce or after removal of the cells from the plants and prior to or after plating for viable cell enumeration. Overall, the toxigenic strain EC4045 survived in similar quantities as ATCC 700728 on lettuce. A recent study showed that certain lineages of E. coli are more commonly associated with plants and presumably have evolved the capacity to tolerate plant-associated environments better than E. coli isolated from other sites. Because we compared only two strains, subsequent investigations should examine multiple attenuated and virulent O157:H7 strains isolated from different sites for their capacity to colonize and persist on lettuce under field-relevant conditions. Survival of E. coli O157:H7 on lettuce also was measured in two field studies. Culturable amounts of strain ATCC 700728 declined shortly after inoculation onto plants in the field, as we reported previously. Rates of cell decline were similar to those of E. coli O157:H7 on lettuce in the growth chamber directly inoculated in drops with a pipette. Real-time PCR estimates of E. coli O157:H7 ATCC 700728 in lettuce leaf washes showed that this strain was present on the plants immediately after inoculation and 2 h later in quantities equivalent to the inoculum levels. Importantly, the rapid decline in culturable E. coli during the first hours after application onto plants in the field was not due to an inability to remove the organism from the lettuce, nor to dispersal and a lack of strain attachment.

Rather, it appears that the majority of the E. coli cells in the inoculum either died shortly after application or entered a VBNC state. In contrast to the growth chamber experiment results, the numbers of E. coli O157:H7 cells were below detection by real-time PCR within 2 days after inoculation onto field lettuce. These findings suggest that the E. coli O157:H7 cells and genomic DNA were degraded rapidly. Environmental parameters such as solar radiation, heat, and water stress could be responsible for the differences in the stability of E. coli O157:H7 DNA in the field compared with the laboratory. Alternatively, cell maintenance might depend on other microorganisms on the leaves. Also, it is notable that different lettuce cultivars were used in the field and growth chamber studies, which may have impacted survival. The potential for cultivar-dependent effects was shown for E. coli O157:H7 on lettuce cultivars grown under axenic conditions in the laboratory. Because the E. coli O157:H7 ATCC 700728 DNA was degraded on field-grown plants within 2 days after inoculation, it is unlikely that the organism developed a VBNC state, particularly over longer time scales. However, this possibility could not be directly addressed using the PMA real-time PCR assay in the field trials. Viable cell amounts measured by culturing and PMA real-time PCR were in agreement immediately after application of ATCC 700728 onto laboratory-grown lettuce, but PMA-mediated detection was impaired on plants from the field. For those plants, the viable cell number estimates for strain ATCC 700728 were 10-fold lower as measured by PMA real-time PCR than by colony enumeration and total cell numbers estimated by real-time PCR. These differences might have been due to the higher turbidity or opacity of the lettuce plant washes from the field samples, thereby preventing light from penetrating the suspension during the PMA photoinactivation step.
This interference would prevent the inactivation of free PMA, resulting in sufficient quantities of the compound to bind genomic DNA released from viable cells during the subsequent DNA extraction and amplification steps. In addition, the PMA real-time PCR assay was unable to detect low numbers of cells. Attempts to detect the ATCC 700728 strain after concentrating the leaf washes were unsuccessful. Similarly, PMA real-time PCR was found to be more reliable for viable cell detection in diluted wastewater than in pooled and concentrated wastewater samples. Such factors strongly limit the overall usefulness of this approach for field-grown plants. However, this method is informative for examination of E. coli O157:H7 on “cleaner” plants grown in the growth chamber and not exposed to the biotic conditions that are common outdoors.
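The discrepancy described above is conventionally expressed in log10 units: a 10-fold gap between two enumeration methods is a 1-log difference. The sketch below illustrates that arithmetic with hypothetical cell counts, not the study’s actual measurements.

```python
import math

# Sketch of the viable-count comparison described above: plate counts vs.
# PMA real-time PCR estimates, expressed in log10 units. The cell counts
# here are hypothetical illustrations, not measurements from the study.
plate_count_cfu = 1.0e5   # culturable cells by colony enumeration
pma_qpcr_cells = 1.0e4    # viable-cell estimate by PMA real-time PCR

log_gap = math.log10(plate_count_cfu) - math.log10(pma_qpcr_cells)
fold_gap = plate_count_cfu / pma_qpcr_cells

# A 1-log (10-fold) gap matches the underestimate attributed in the text to
# turbid field washes blocking light during the PMA photoinactivation step.
```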

In conclusion, this study illustrates the similarities and differences between controlled studies of human pathogens on plants in a growth chamber and experiments examining the population dynamics of pathogens on plants under production-like conditions in the field. By applying relevant environmental conditions and droplet inoculation in the growth chamber, we were able to more closely mimic the rapid decline in E. coli O157:H7 culturability that was observed after inoculation of this organism onto lettuce plants in the Salinas Valley. Culture-independent assessments confirmed that the pathogen remains on the plant long after application. However, field studies showed that, at least for the majority of E. coli O157:H7 cell inoculants, the loss in culturability was most likely due to cell death rather than an inability to form colonies on standard laboratory media. Hence, this work confirmed our observations that low numbers of E. coli O157:H7 persist on lettuce grown in the Salinas Valley and that variations in pathogen survival among individual plants depend on other, unknown factors.

Ethnically diverse populations are disproportionately exposed to hazardous environmental materials by virtue of living in close proximity to toxic waste materials. One-half of the uranium (U) in the US is found on American Indian lands, where mining, milling, processing, and waste storage have commonly occurred. From the 1940s to the 1980s, northwestern New Mexico alone contributed 40% of U.S. U production. The study setting, on Diné lands, was a prime target of U mining for military purposes during that period: these lands contributed thirteen million tons of U ore for military use from 1945 to 1988, leaving more than 1100 abandoned and partially unreclaimed U mines, mills, and waste piles. The extent of the health threats to the Diné community exposed to these sites is anticipated to be high.
Uranium enters the body primarily by inhalation or ingestion, and then it enters the bloodstream and is deposited in tissues, primarily the kidneys and bones. Human and animal studies of those exposed to U have shown kidney toxicity, as well as damage to the liver, muscles, and cardiovascular and nervous systems. Arsenic is a teratogen. Cadmium can accumulate in organs and impair renal function. Lead is associated with adverse effects on the nervous, developmental, renal and reproductive systems. Selenium toxicosis can cause neurological and gastrointestinal problems and endocrine disruption and is a teratogen in several species of animals. Molybdenum has been shown to be a male reproductive toxicant in animals and humans.

The Diné Network for Environmental Health (DiNEH) study worked closely with 20 Navajo chapters, or communities, to address the concerns of the community and leaders regarding the health effects of environmental exposures to unreclaimed U mines and mill sites. The DiNEH cohort found that 40% of participants lived within 3.2 km of an abandoned U mine, 16% lived near a U mill, and 12.6% of children played on tailing piles or waste dumps. In self-reported data of past exposure, 15.4% had used materials from abandoned U mine sites to build homes or other structures, 1.8% sheltered livestock in abandoned mines, 12.7% herded their livestock near contaminated sites, and 12.8% said their livestock came into contact with contaminated water. In this population, surface water and groundwater are important for human and livestock consumption as well as agriculture. In affected areas, more than half of the Diné people continue to drink from unregulated water sources. DiNEH data indicate that more than 80% of Diné people haul water for all uses, including irrigation and livestock watering, despite having regulated water in the home. Mutton or lamb meat and organs are primary food staples in this community; all parts of the animal are used, and there are important cultural uses for the animal. The purpose of this study was to determine whether sheep, a harvested primary dietary staple on Diné lands in northwestern NM, were contaminated with U and other associated heavy metals. Past studies of these areas in NM demonstrated that the consumption of U-contaminated food may occur through the ingestion of locally raised livestock and by way of their forage. Food chain contamination in locally harvested food in the Diné community in NM was reported as a plausible exposure pathway. The current study was undertaken to reexamine and contribute more recent data and to introduce data not previously reported.
This was a descriptive, comparative study examining contamination levels in locally harvested O. aries, their forage, and associated soil and water from reservation areas within a 3.2 km radius of previously mined areas. Data obtained from the DiNEH study cohort served as one of the sources for identifying subjects and samples of food, herbs, water, forage, and soil. Additional participants were recruited by word-of-mouth, home visits, and advertising at public tribal community events. Of the DiNEH cohort respondents, those individuals who reported harvesting sheep were recruited for potential participation in the present study. Sheep chosen represented a range of ages, proximities to mining structures, and a variety of water sources. Three ewes were included in this study. The individual sheep data are compared and reported to reflect an accurate measure of heavy metal uptake in O. aries tissue with respect to the associated forage, water consumption, and their environment.

The study area is a semi-arid to arid region of the American Southwest in northwestern NM on Diné reservation lands. The average precipitation was less than 25 cm per year according to meteorological data for NM for the study period. Despite several decades of longstanding drought in the area, community members still participated in subsistence activities. Two “Chapters” provided sheep and associated samples: the Mariano Lake Chapter, with 272 km2 of land mass, and the Churchrock Chapter, with 233 km2. Recruitment was initiated in May 2012 and enrollment began in July 2012. All samples were collected from 10 November to 13 December 2012. This study focused on sheep as a food staple and was part of a larger “parent” research project that examined subsistence farming on the reservation, including the metal contamination of herbs.

The three ewes, contributed between 10 November and 13 December 2012, came from two different chapters. The O. aries tissue samples were collected in the field immediately after slaughter and included muscle, bone, intestine, lung, liver, kidney, and wool. Upon collection, all samples were placed on dry ice and shipped to the Analytical Chemistry Laboratory of the University of New Mexico Earth and Planetary Sciences Department for storage and analysis. The 13th rib cortical bone samples were sheared from the proximal, middle, and distal portions and composited after the removal of excess tissue. The proximal, medial, and distal portions of the small intestine were collected and composited. For lung tissue, samples were taken from each anatomical lobe and composited. Both kidneys were sampled, and the cortex and medulla were composited separately. Composited muscle samples were taken from the proximal, medial, and distal portions of the gastrocnemius. For wool fiber, the areas over the neck, middle section, and posterior portion of the animal were sampled and composited. Each tissue sample was representative of 1 g of dried tissue. For paired organs, the tissues collected from the right side of the sheep were labeled as the sample, and one duplicate was obtained for each tissue type from the left side of the animal. A composited duplicate or replicate was obtained for unpaired organs.

The Samper government even went beyond the demands of the United States executive and legislature

The government was therefore able to go ahead with its eradication policy with few internal restrictions. Even so, the result was not very positive; the rise of the poppy emporium in Colombia amply demonstrated the limits of the government’s public anti-narcotics policy and the dramatic consequences of unremitting prohibition on the part of the United States. The Colombian government did not attack drug trafficking or narco-terrorism on the financial front. In accordance with the logic of the so-called economic liberalization fomented by the government in the early nineties, it made no sense to place restrictions and greater controls on the free movement of capital. In 1993, a report by the Vienna-based United Nations International Narcotics Control Board recommended that “Colombian legislation consider the laundering of capital resources to be a crime and that banking laws should become stricter in order to allow for multilateral cooperation….”

The alleged financing of the presidential election campaign with drug money formed the backdrop to the anti-narcotics policies of President Ernesto Samper’s administration. As months went by, the coercive diplomacy which the United States had hitherto been exerting on Colombia became transformed into “blackmail diplomacy.” The president’s capacity for political survival led him to “North Americanize” the fight against drug trafficking in Colombia; that is to say, the president accepted and implemented a strategy virtually imposed by the United States. The Samper government undertook an all-out chemical eradication campaign far beyond anything seen in the two preceding decades, with massive use of glyphosate. Fumigators also employed imazapyr, a more powerful granulated herbicide, and were planning to use tebuthiuron, an even more devastating killer than the others.

Ernesto Samper also became the president who most helped criminalize the drug trade, while in Colombia it became almost impossible to discuss the subject of legalizing drugs, something Samper himself had suggested in the late seventies, given the failure of repressive measures taken at that time by the Turbay administration and fomented by the United States.

In 1995, months before the infamous “Frechette Memorandum” began to circulate — a document which suggested that Colombia should adopt legislation and take drastic measures in the anti-drug war — President Samper had launched his “integral plan,” announcing, amongst other things, the creation of Operación Resplandor, designed to put a definite end, in the space of two years, to all illegal crops which existed in Colombia. An all-out eradication policy had been set in place. In 1994, 4,094 hectares of coca were eradicated. In 1995, the Samper administration eradicated 25,402 hectares; and in 1996, 9,711 hectares. In 1994, the Gaviria and Samper administrations had eradicated 5,314 hectares of poppies. In 1995, the Samper government eradicated 5,074 hectares; and in 1996, 6,044 hectares. Between 1995 and 1996, glyphosate was used on a massive scale to destroy illegal crops. Even so, the idea of putting an end once and for all to illegal crops proved again to be illusory. In 1996, the US government estimated that the area dedicated to the planting of coca in Colombia had reached 53,800 hectares, while independent estimates placed the figure at around 80,000 hectares. This meant that Colombia had surpassed Bolivia, a country which traditionally was second only to Peru as a coca producer in South America. The same official US source estimated that Colombia had 4,133 hectares of marijuana and that the country had produced 63 tons of heroin in 1996.
However, Colombians had their greatest surprise of all in 1996 when small farmers from the south, especially from the Caquetá region, suddenly made their presence felt in mass demonstrations and protest marches. Nobody had expected this. It was as if the whole population had discovered overnight, and a little belatedly, that the country had ceased to be the processor of these stimulants and had transformed itself now into something else: a huge grower of illegal crops.

People also came to realize that the state simply did not operate at all in a large and strategic portion of the country’s territory, and that power, at the local level, was in the hands of insurgent groups, especially those of the FARC. Colombians came to realize as well that violent measures alone were not going to solve the profound and intricate social, political and economic problems which had been incubating for decades in the nation’s geographic wilderness.

In sum, fumigating with herbicides in southern Colombia in 1996 turned out to be as useless for dismantling the illegal business of drug dealing as similar efforts had been in previous years. The difference was that, in 1996, paramilitary detachments were multiplying at a frightening rate in the south. The political blindness of people in government, police officers and the military, together with the administration’s obsequious submission to United States policies, led to a repeat, in 1997, of the indiscriminate fumigation with herbicides — on a huge scale with glyphosate, and to a lesser extent with imazapyr. In 1997, Colombia sprayed 41,847 hectares of coca and 6,962 hectares of marijuana. Twenty-two hectares of coca were eradicated manually, as well as twenty-five hectares of poppies and 261 hectares of marijuana. In just over three years, the government had fumigated more than 100,000 hectares of illegal crops. But paradoxically that only went to prove, as never before, just how mistaken, harmful and counter-productive the chemical destruction of such crops could be; in 1998, almost 110,000 hectares of the national territory were dedicated to plantations of coca, marijuana and poppies.
In that year, the Samper administration, and that of Andrés Pastrana, fumigated 66,083 hectares of coca and 2,931 of poppies, and manually destroyed 3,126 hectares of coca, 181 of poppies and 18 of marijuana. Nonetheless, according to the Central Intelligence Agency, the total area planted in coca in 1999 amounted to 120,000 hectares, and the US State Department declared that this had increased to 136,200 hectares in the year 2000. This means that in just four years, from 1996 to 2000, the surface planted in coca in Colombia doubled; the total number of hectares went from 8,280 to 13,200. An increase in the fumigation of illegal crops has not resulted in a decrease in the area planted with illegal crops, nor in a decrease in the production of illegal drugs.

To this evident failure one must add the fact that, on the US market, cocaine and heroin have become both cheaper and purer. It is worth noting, also, that something similar has occurred in Western Europe where, in 1999, a gram of cocaine was worth US$90, and a gram of heroin was fetching US$98. So, the rationale which attempts to justify a strong eradication policy in the centers of supply has proved to be way off the mark. It had been presumed that the massive destruction of illegal drugs where production and processing were taking place was going to lead to less availability of narcotics in the centers of demand, an increase in price for the ultimate consumer and a lowering of standards of purity in the stimulants themselves. Quite the opposite has happened; in the year 2000 one could procure in the United States more drugs of better quality than ever before, and at lower prices. Besides, in terms of illegal drug consumption and of drug-related crime, the United States record has not shown substantial improvement. In 1988 the number of occasional consumers of heroin was reckoned at 167,000; in 1995 it had reached 322,000; while the total number of heroin consumers nationwide went from 692,000 in 1992 to 810,000 in 1995. The overall demand for heroin was 1,800,000 grams per year in 1988, but by 1996 it had soared to 2,400,000. Despite certain laudable achievements in reducing drug consumption in the United States, it is evident that a strong demand still exists. In this context it is worth quoting Bruce Bagley: “Some 13 million US drug users spent approximately US$67 billion on illicit drugs in 1999, making the US market the most lucrative one in the world for Colombian traffickers.” Concomitantly, in 1990 the total number of arrests in the area of drug-related law infringements was 1,089,500, whereas in 1996 the figure had risen to 1,128,647.
In 1990, 53 percent of prisoners in federal jails were serving sentences for narcotic-related crimes; in 1995 the statistic had risen to 59.9 percent. Finally, the environmental cost to Colombia of chemical eradication has not been sufficiently studied and quantified. However, it is estimated that “for every hectare of poppies sown, an average of 2.5 hectares of woodlands are destroyed; in the case of coca plantations the ratio is 1 to 4, and for marijuana it is 1 to 1.5.” However, the negative effects of herbicide fumigation have not been assessed in this process of forest destruction. What we do know is that the mere fact of fumigation forces the growers to move elsewhere in order to plant their illegal crops, and that necessarily entails a further environmental disaster. Despite the fact that organizations such as Greenpeace, the Worldwide Fund for Nature and Dow Agrosciences are opposed to the use of this herbicide, the United States authorities have insisted that it is quite harmless.

They have gone even further; during the Pastrana administration especially, they have been putting pressure on Bogotá to apply a dangerous fungus, Fusarium oxysporum, in the process of obligatory eradication. Nonetheless, after almost four years in government, the Pastrana administration has not taken the risk of rethinking the procedure of chemical eradication. On the contrary, since coming to power in August 1998, Pastrana has persisted in an unquestioning policy of intensive fumigation. He has gone even further than his predecessors, in that he accepted the setting up of an Anti-narcotics Battalion within Colombia’s armed forces, in accordance with the wishes of the United States as expressed over the past few years. In 1999, this special unit of 1,200 men, under the command of the Colombian army but monitored by “Washington’s magnifying glass,” replaced the anti-narcotics unit of the police force in the most critical of tasks, namely those to do with illegal crops. In 2001, as the so-called “Plan Colombia” went into operation — insofar as it touched on aspects of security and the anti-narcotic policy of the United States — three battalions of the Colombian army were charged with combating illegal drugs. In short, there has been nothing new as far as eradication is concerned. Rather, things have gone on as usual, in the hope that Colombia’s armed forces, by playing a definitive role in the fight against drugs, will somehow turn things around and produce a fundamental change in favor of the government and of the United States. The risk that Colombia is taking by continuing to spray crops obsessively and obsequiously is an enormous one. By insisting on this unfortunate and counter-productive measure, the government is committing a serious political error and is leading the country to the brink of a catastrophe which will affect both the population and the country’s ecology, but will not effectively help to overcome the drug problem.
Chemical eradication has already produced multiple negative effects: for a start, it has contributed to greater devastation of the environment; it has led, also, to an even closer marriage between drug traffickers and paramilitaries and, at the same time, has encouraged guerrilla fronts to depend more than ever on income from the drug trade; it has served to increase corruption at different levels of society; without achieving any positive results, it has involved the government unnecessarily, in some of the most violent aspects of the drug war; it has exposed some of the weakest and most vulnerable members of Colombia’s society — peasants, Indians, poor farmers, and others — to greater threats, often forcing them to migrate and leaving them totally unprotected; and finally it has helped to further stigmatize Colombia in the eyes of the world, despite the fact that no other country has sprayed crops with herbicides to nearly the same extent. Nonetheless, it would seem that nothing is going to change in this regard; the year 2002 will probably see more futile fumigations. To sum up: notwithstanding the intense war being waged to combat it, the drug trade will continue to prosper.

Chlorophyll index was measured in all the leaves every two days

Many insects rely on microbial communities and endosymbionts to grow and develop; however, it has been shown that Lepidoptera species do not have a vertically transmitted microbial community. In addition, because the effects of microbial communities on T. ni survival and development have not been documented, we present these data only to show that microbial communities change when exposed to CECs, and not as a proven factor influencing survival. We found significant shifts in the microbial community in the various life stages examined within the control treatments, notably from third instar to subsequent life stages. However, there is one family, Lactobacillaceae, which appears in all treatments and life stages in high proportions, except for adults. They are fairly common in insects and can be responsible for at least 70% of the bacterial community. Lactobacillaceae is responsible for ∼42% of the bacteria in all life stages, followed by Pseudomonadaceae, Alcaligenaceae, and Enterobacteriaceae. Lactobacillaceae have been shown to act as beneficial bacteria in Drosophila; however, their function in T. ni is still unknown. Alcaligenaceae have been shown to be present in other moths, but Lepidopterans are not thought to have a functional microbiome. There are clear patterns regarding the changes in microbial community proportionality according to the heat map. In controls, third-instar microbial communities are relatively evenly distributed by family. Once the insects reach the adult stage, their most predominant family is Pseudomonadaceae. This pattern holds in the acetaminophen and caffeine treatment groups as well. Interestingly, the other treatment groups do not share this pattern. For antibiotic- and hormone-treated T. ni, Lactobacillaceae is the predominant microbial family in the immature stages, but at the adult stage the microbial community reverts to predominantly Pseudomonadaceae.
We suspect that this is because, once the larvae undergo metamorphosis and shed their gut contents in preparation for pupation, they are no longer exposed to the pressures exerted by the CECs on the microbial community. Fig. 3 provides a visual indication of the changes in the bacterial communities over time.

The increase in β diversity after eclosion could be due to the larvae no longer being exposed to CECs or diet-borne bacteria after being moved to sterile containers. Also, when bacteria are lost as larvae digest their gut contents during pupation, the microbial β diversity could change. Interestingly, the hormone-treated T. ni follow a similar pattern to those exposed to antibiotics, but their ellipses are always much smaller, suggesting the entire insect population is showing a uniform response within their microbial communities. However, in the mixture-treated insects, larvae displayed a greater average diversity in their microbial community structure than either pupae or adults. This finding has not been shown in any single category of treatment, and we suspect the microbes exposed to mixtures could be experiencing potential interactive effects among chemicals. Such interactions should be the focus of future studies, along with investigations of plant rhizosphere bacteria, particularly since we found a difference in the Bradyrhizobiaceae family for all treatments. These results show that a terrestrial insect pest of commercial crops can be affected by CECs found in reclaimed wastewater for agricultural use. Our results suggest that CECs found in wastewater can impact T. ni growth, development, and survivorship, and alter their microbial communities. Because T. ni is a common agricultural pest found around the world, feeds on a wide variety of plants, and has a history of developing pesticide resistance, its ability to deal with toxins is likely higher than that of many other insects. In addition, the responses we observed to CECs could have interesting implications for IPM practices on plants, such as lowering the amount of pesticides needed or increasing susceptibility to insect pathogens, as has been shown in mosquitoes. These potential effects may be understated because some insects cannot detect the presence of the pharmaceuticals.
However, we do not recommend purposefully exposing crops to CECs specifically for the control of insects, because our study documented that these pharmaceuticals are translocated into crops and we do not yet know their possible effects on humans if consumed. We specifically want to note that ingestion of these compounds through uptake and translocation by a plant is not the only way T. ni or any other insect would be exposed to these compounds.

Overhead sprinkler irrigation could cause contact absorption by the plants or insects, and simply drinking water on leaves at contaminated sites could expose insects to higher concentrations than were found in plant tissues. In fact, the ciprofloxacin concentration used was less than one-third of the highest rate. We urge caution in extrapolating to plants growing in soil, because variation in soil type and potential soil bacterial degradation could affect persistence [although soil bacteria are often negatively impacted by CECs]. However, CEC exposures are considered pseudo-persistent because they are reapplied with each irrigation. Thus, the effects reported here are likely to be conservative. Additional studies with other insects, particularly those with other feeding strategies, will be necessary before any patterns can be discerned.

The correction of metal micro-nutrient deficiencies is a problem still not fully solved in agriculture. The low solubility of the iron, manganese and zinc oxides in the pH range of calcareous soils contributes, among other factors, to the low availability of these nutrients to plants. The Fe3+ chelate of o,oEDDHA, a polyphenolic polyaminocarboxylic acid, and its analogues are the most efficient solution to correct Fe deficiency, with good results in hydroponic and soil conditions. Mn deficiency is often treated with salts of Mn such as Mn sulfate, or as the Mn chelate of EDTA or analogues. Zinc sulphate has traditionally been the “reliable” source of Zn fertilizer, but other sources of Zn are also available. New chelating agents such as o,pEDDHA, a polyphenolic chelating agent with only five available donor groups, or EDDS, a biodegradable ligand with a structure similar to that of EDTA, are being considered, but there are no studies on the efficiency of metal fertilizer mixes containing these chelating agents.
The aim of this work was to study the efficacy of the combined application of Fe, Mn, Zn and Cu chelates to correct these deficiencies in soybean in hydroponic solution in the presence of CaCO3. The stems of two individual seedlings were wrapped together with polyurethane foam and placed in 500 mL vessels, protected from the light by means of a black cover and with continuous aeration. At this point, treatments were applied in the presence of the MS solution. Fe was applied as o,oEDDHA/Fe3+ in all cases, since it is known to be one of the best sources for Fe nutrition.

Mn, Zn and Cu were applied as o,pEDDHA, EDDS, EDTA, HEDTA or DTPA chelates, with the same chelating agent for the three metals in the same treatment, plus two additional treatments with Mn and Cu chelated by EDTA and Zn chelated by o,pEDDHA or EDDS. Three controls (no Mn, no Zn, and no Mn and Zn), with the remaining metal micro-nutrients chelated by EDTA, were tested too. Moreover, 0.1 g/L of CaCO3 was added to the pots in order to achieve calcareous soil conditions. The concentrations in the hydroponic solution were 1.00 Fe, 0.375 Mn, 0.250 Zn and 0.150 Cu. The chelate solutions were prepared as described by Álvarez-Fernández et al. Three replicate pots per treatment were considered. The plants stayed for 7 days in these conditions. Samples were taken after 7 days of treatment application. Plants were washed following the procedure described by Álvarez-Fernández et al., and the fresh and dry weights of leaves, stems and roots determined separately. Then, micro-nutrient concentrations were determined in the plant organs after dry mineralization by atomic absorption spectrophotometry. Plant dry weight at the end of the experiment showed that in the three treatments with Zn-o,pEDDHA, plants had higher values than with the other treatments. In addition, the controls with no Mn were more affected than the control with no Zn. It seems that Mn nutrition has a more relevant effect on plant weight than Zn nutrition. Plants treated with DTPA had the lowest values. It is important to indicate that, in general, the best treatments are those with the chelates of lower stability, while the highly stable chelates give the worst results. This is the consequence of the competition between the plant and the chelating agent for Zn2+ and Mn2+, as already studied by Halvorson and Lindsay. Thus, the results presented here are only valid for hydroponic-like cultures.
In conclusion, the best treatment for the combined application of Mn and Zn was the o,pEDDHA ligand, which produced the highest leaf levels of Mn and especially Zn, as well as the highest plant dry weight. It seems that the less stable polyphenolic chelates, like o,pEDDHA, are adequate for the nutrition of Mn and Zn in hydroponics due to the low competition of this chelating agent with the plant for the metals.

Horticultural crops have high economic value and enrich our lives through their aesthetic and nutritional qualities. Many horticultural species originate from tropical regions and are sensitive to cold at every stage of their life cycle. Cold stress leads to lower productivity and post-harvest losses in these species, with poor economic and environmental outcomes. A better understanding of the protective mechanisms mediated by hormonal and other signaling pathways may offer solutions to reduce cold-stress-induced losses. The papers included in this collection illustrate this concept, examining natural cold-tolerance mechanisms and practical ways for growers to alleviate chilling stress and to reduce crop losses. The studies were remarkably diverse in terms of the species studied, plant organs examined, and approaches used. The papers encompassed the use of basic science, aimed at identifying key genes and their roles in cold signal transduction and protective pathways in fruit and photosynthetic tissues; reverse genetics, for proof-of-concept on the hypothesized role of a cold-tolerance transcription factor cloned from an understudied species; and emerging technologies, using exogenous hormones and signaling compounds to mitigate the harmful effects of chilling. These studies are described below. C-repeat binding factor (CBF) proteins constitute a transcription factor subfamily known to play a key role in protecting plants against different types of abiotic stress, including cold, heat, salinity or dehydration, and thus have been extensively studied.
Overexpression of CBFs has been used for the development of genetically modified plants with enhanced stress tolerance and for the investigation of the molecular mechanisms underlying plant stress responses. Using this approach, Yang et al. found that overexpression of three newly identified longan CBF genes enhanced cold tolerance in Arabidopsis by increasing the content of the osmoprotectant proline, reducing the accumulation of reactive oxygen species, and stimulating the expression of cold-responsive genes. The fact that longan, a cold-sensitive species, showed low expression levels for these three genes suggests a possible strategy for genetic improvement of cold tolerance in this crop. Cold storage of apples is often used to extend post-harvest storage; however, it can lead to the development of superficial scald, a major physiological disorder characterized by necrosis of the hypodermal cortical tissue. Karagiannis et al. applied a multiomics systems approach and created regulatory module networks to compare scald-affected and healthy apple phenotypes. Individual and combinatorial treatments with ozone (O3), which induced scald symptoms, and 1-methylcyclopropene (1-MCP), which reversed the O3-stimulated scald effect, were used to identify pathways and gene-to-protein-to-metabolite networks involved in scald prevention and sensitivity. Importantly, 1-MCP-induced scald tolerance correlated with the expression of genes involved in photosynthesis, stress responses, flavonoid biosynthesis, and ethylene signaling in apple peel, and of key TFs that may control some of these processes. This study represents an important contribution for future functional studies to develop apple cultivars with improved resistance to superficial scald. The acquisition of cold tolerance under conditions of varying light quality is essential for plants growing in regions with seasonal variation in both temperature and light.
Photoinhibition, i.e., the downregulation of the electron transport chain, reduces plant productivity but safeguards the photosynthetic apparatus during cold and light stress.

Similar national security directives exist in Canada and the EU

The current demand for non-COVID-19 mAbs in the United States is >50 million doses per year [27], so COVID-19 has triggered a 44% increase in demand in terms of doses. Although the mAb doses required for pre-exposure and post-exposure COVID-19 treatment will not be known until the completion of clinical trials, they are likely to be 1–10 g per patient based on the dose ranges being tested and experience from other disease outbreaks such as Ebola. Accordingly, 22–222 tons of mAb would be needed per year, just in the United States. The population of the United States represents ~4.25% of the world’s population, suggesting that 500–5,200 tons of mAb would be needed to meet global demand. The combined capacity of mammalian cell bioreactors is ~6 million liters [27], and even assuming mAb titers of 2.2 g L−1, which is the mean titer for well-optimized large-scale commercial bioreactors, a 13-day fed-batch culture cycle, and a 30% loss in downstream recovery, the entirety of global mammalian cell bioreactor capacity could only provide ~259 tons of mAb per year. In other words, if the mammalian cell bioreactors all over the world were repurposed for COVID-19 mAb production, it would be enough to provide treatments for 50% of the global population if low doses were effective, but only 5% if high doses were required. This illustrates the importance of identifying mAbs that are effective at the lowest dose possible, production systems that can achieve high titers and efficient downstream recovery, and the need for additional production platforms that can be mobilized quickly and that do not rely on bioreactor capacity. Furthermore, it is not clear how much of the existing bioreactor capacity can be repurposed quickly to satisfy pandemic needs, considering that ~78% of that capacity is dedicated to in-house products, many of which treat cancer and other life-threatening diseases.
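The demand and capacity figures above can be reproduced with a few lines of arithmetic. This is only a back-of-envelope sketch using the numbers quoted in the text; the variable names are illustrative, not from any source.

```python
# Demand: COVID-19 adds ~44% to the >50 million US mAb doses per year.
extra_us_doses = 50_000_000 * 0.44                 # ~22 million extra doses
demand_us_tons = (extra_us_doses * 1 / 1e6,        # 1 g per patient (low dose)
                  extra_us_doses * 10 / 1e6)       # 10 g per patient (high dose)
# -> (22.0, 220.0) metric tons, i.e. the quoted 22-222 tons per year

US_SHARE = 0.0425                                  # US is ~4.25% of world population
demand_world_tons = tuple(t / US_SHARE for t in demand_us_tons)
# -> roughly (518, 5176) tons, i.e. the quoted 500-5,200 tons per year

# Supply: the entire global mammalian cell bioreactor capacity.
volume_l = 6_000_000              # ~6 million liters combined
titer_g_per_l = 2.2               # mean titer, well-optimized commercial reactors
batches_per_year = 365 / 13       # 13-day fed-batch culture cycle
recovery = 0.70                   # 30% loss in downstream recovery
supply_tons = volume_l * titer_g_per_l * batches_per_year * recovery / 1e6
print(round(supply_tons))         # -> 259 tons per year
```

The supply figure divided into the two demand bounds also recovers the 50% (low-dose) and 5% (high-dose) population coverage stated in the text.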
The demand-on-capacity for vaccines will fare better, given that the amount of protein per dose is 1 × 10⁴ to 1 × 10⁶ times lower than for a therapeutic mAb. Even so, most of the global population may need to be vaccinated against SARS-CoV-2 over the next 2–3 years to eradicate the disease, and it is unclear whether sufficient quantities of vaccine can be made available, even if using adjuvants to reduce immunogen dose levels and/or the number of administrations required to induce protection.

Even if an effective vaccine or therapeutic is identified, it may be challenging to manufacture and distribute this product at the scale required to immunize or treat most of the world’s population. In addition, booster immunizations, viral antigen drift necessitating immunogen revision/optimization, adjuvant availability, and standard losses during storage, transport, and deployment may still make it difficult to close the supply gap. Regardless of the product, the supply of recombinant proteins is challenging during emergency situations due to the simultaneous requirements for rapid manufacturing and extremely high numbers of doses. The realities we must address include: the projected demand exceeds the entire manufacturing capacity of today’s pharmaceutical industry; there is a shortage of delivery devices and the means to fill them; there is insufficient lyophilization capacity to produce dry powder for distribution; and distribution, including transportation and vaccination itself, will be problematic on such a large scale without radical changes in the public health systems of most countries. Vaccines developed by a given country will almost certainly be distributed within that country and to its allies/neighbors first and, thereafter, to countries willing to pay for priority. One solution to the product access challenge is to decentralize the production of countermeasures, and in fact one of the advantages of plant-based manufacturing is that it decouples developing countries from their reliance on the pharmaceutical infrastructure. Hence, local production facilities could be set up based on greenhouses linked to portable clean rooms housing disposable DSP equipment. In this scenario, the availability of multiple technology platforms, including plant-based production, can only be beneficial. Several approaches can be used to manage potential IP conflicts in public health emergencies that require the rapid production of urgently needed products.
Licensing of key IP to ensure freedom to operate is preferred because such agreements are cooperative rather than competitive. Likewise, cooperative agreements to jointly develop products with mutually beneficial exit points offer another avenue for productive exploitation. These arrangements allow collaborating institutions to work toward a greater good.

Licensing has been practiced in past emergencies when PMP products were developed and produced using technologies owned by multiple parties. In the authors’ experience, the ZMapp cocktail was subject to IP ownership by multiple parties covering the compositions, the gene expression system, manufacturing process technology/know-how, and product end-use. Stakeholders included the Public Health Agency of Canada’s National Microbiology Laboratory, the United States Army Medical Research Institute of Infectious Diseases, Mapp Biopharmaceutical, Icon Genetics, and Kentucky Bioprocessing, among others. Kentucky Bioprocessing is also involved in a more recent collaboration to develop a SARS-CoV-2 vaccine candidate, aiming to produce 1–3 million doses of the antigen, with other stakeholders invited to take on the tasks of large-scale antigen conjugation to the viral delivery vector, product fill, and clinical development. Collaboration and pooling of resources and know-how among big pharma/biopharma companies raises concerns over antitrust violations, which could lead to price fixing and other unfair business practices. With assistance from the United States Department of Justice (DOJ), this hurdle has been temporarily overcome by permitting several biopharma companies to share know-how around manufacturing facilities and other information that could accelerate the manufacturing of COVID-19 mAb products. Genentech, Amgen, AstraZeneca, Eli Lilly, GlaxoSmithKline, and AbCellera Biologics will share information about manufacturing facilities, capacity, raw materials, and supplies in order to accelerate the production of mAbs even before the products gain regulatory approval. This is driven by the realization that none of these companies can satisfy more than a small fraction of projected demands by acting alone.
Under the terms imposed by the DOJ, the companies are not allowed to exchange information about manufacturing cost of goods or sales prices of their drugs, and the duration of the collaboration is limited to the current pandemic. Yet another approach is a government-led strategy in which government bodies define a time-critical national security need that can only be addressed by sequestering critical technology controlled by the private sector. In the United States, for example, the Defense Production Act was first implemented in 1950 but has been reauthorized more than 50 times since then.

In the United States, the Defense Production Act gives the executive branch substantial powers, allowing the president, largely through executive order, to direct private companies to prioritize orders from the federal government. The president is also empowered to “allocate materials, services, and facilities” for national defense purposes. The Defense Production Act has been implemented during the COVID-19 crisis to accelerate manufacturing and the provision of medical devices and personal protective equipment, as well as drug intermediates. Therefore, a two-tiered mechanism exists to create FTO and secure critical supplies: the first and more preferable involving cooperative licensing/cross-licensing agreements and manufacturing alliances, and alternatively, a second mechanism involving legislative directives.

Many companies have modified their production processes to manufacture urgently required products in response to COVID-19, including distillers and perfume makers switching to sanitizing gels, textiles companies making medical gowns and face masks, and electronics companies making respirators. Although this involves some challenges, such as production safety and quality requirements, it is far easier than the production of APIs, where the strict regulations discussed earlier in this article must be followed. The development of a mammalian cell line achieving titers in the 5 g L−1 range often takes 10–12 months, or at least 5–6 months during a pandemic. These titers can often be achieved for mAbs due to the similar properties of different mAb products and the standardized DSP unit operations, but the titers of other biologics are often lower due to product toxicity or the need for bespoke purification strategies. Even if developmental obstacles are overcome, pharmaceutical companies may not be able to switch rapidly to new products because existing capacity is devoted to the manufacture of other important biopharmaceuticals.
The capacity of mammalian cell culture facilities currently exceeds market demand by ~30%. Furthermore, contract manufacturing organizations (CMOs), which can respond most quickly to a demand for new products due to their flexible business model, control only ~19% of that capacity. From our experience, this CMO capacity is often booked in advance for several months if not years, and little is available for short-term campaigns. Furthermore, even if capacity is available, the staff and consumables must be available too. Finally, there is a substantial imbalance in the global distribution of mammalian cell culture capacity, favoring North America and Europe. This concentration is risky from a global response perspective because these regions were the most severely affected during the early and middle stages of the COVID-19 pandemic, and it is, therefore, possible that this capacity would become unusable following the outbreak of a more destructive virus. Patents covering several technologies related to transient expression in plants will end during or shortly after 2020, facilitating the broader commercial adoption of the technology. This could accelerate the development of new PMP products in a pandemic situation. However, PMP production capacity is currently limited. There are fewer than five large-scale PMP facilities in operation, and we estimate that these facilities could manufacture ~2,200 kg of product per year, assuming a combined annual biomass output of ~1,100 tons as well as recombinant protein production and DSP losses similar to those for mammalian cells. Therefore, plant-based production certainly does not currently meet the anticipated demand for pandemic countermeasures.
We have estimated a global demand of 500–5,200 tons per year for mAbs, depending on the dose, but only ~259 tons per year can be produced using the current global capacity provided by mammalian cell bioreactors, and plant-based systems currently represent less than 1% of the global production capacity of mammalian cell bioreactors. Furthermore, the number of plant molecular farming companies decreased from 37 to 23 between 2005 and 2020, including many large industry players that would be most able to fund further technology development. Nevertheless, the current plant molecular farming landscape has three advantages in terms of a global first-line response compared to mammalian cells. First, almost two thirds of global production capacity is held by CMOs or hybrid companies, which can make their facilities available for production campaigns on short notice, as shown by their rapid response to COVID-19, allowing most to produce initial product batches by March 2020. In contrast, only ~20% of fermentation facilities are operated by CMOs. Second, despite the small number of plant molecular farming facilities, they are distributed around the globe, with sites in the United States, Canada, United Kingdom, Germany, Japan, Korea, and South Africa, and more planned or under construction in Brazil and China. Finally, transient expression in plants is much faster than any other eukaryotic system with a comparable production scale, moving from gene to product within 20 days and allowing the production of up to 7,000 kg biomass per batch with product accumulation of up to 2 g kg−1. Even if the time required for protein production in mammalian cells can be reduced to 6 months as recently proposed, Medicago has shown that transient expression in plants can achieve the same goals in less than 3 months. Therefore, the production of vaccines, therapeutics, and diagnostics in plants has the potential to function as a first line of defense against pandemics.
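The plant-side estimate can be cross-checked the same way. Again, this is only a sketch using the figures given in the text (the 2 g per kg biomass accumulation figure is the quoted upper bound for transient expression), with illustrative variable names.

```python
# Cross-check of the plant molecular farming (PMP) capacity estimate.
biomass_kg = 1_100 * 1_000            # ~1,100 tons of plant biomass per year
accumulation_g_per_kg = 2             # g of recombinant protein per kg biomass
pmp_kg = biomass_kg * accumulation_g_per_kg / 1_000
print(pmp_kg)                         # -> 2200.0 kg (~2.2 tons) per year

mammalian_tons = 259                  # global bioreactor output quoted earlier
share = (pmp_kg / 1_000) / mammalian_tons
print(f"{share:.1%}")                 # under 1% of mammalian cell capacity
```

This confirms the ~2,200 kg per year estimate and the claim that plant-based systems represent less than 1% of current mammalian cell bioreactor capacity.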
Given the limited number and size of plant molecular farming facilities, we believe that the substantial investments currently being allocated to the building of biopharmaceutical production capacity should be shared with PMP production sites, allowing this technology to be developed as another strategy to improve our response to future pandemics.

Nutrients, especially nitrogen (N) and phosphorus (P), affect terrestrial ecosystem carbon cycling through their regulation of plant and soil microbial activity. Natural terrestrial ecosystems are often nitrogen and phosphorus limited, with a general consensus that temperate and boreal ecosystems are commonly N limited while tropical forests are P limited.

Lam was less efficiently transmitted and less able to multiply in citrus leaves of all sweet orange varieties

Small datasets were generated for each sample and used in bioinformatic analysis through de novo assembly and read-mapping. Assembled contigs were identified and classified according to the sequence they aligned to with the highest bit score in BLAST searches against the NCBI non-redundant DNA and protein databases. The capillovirus ASGV was identified in the known sample together with several CTV genotypes. However, the atypical “psorosis” sample had a more complex virome that included three viroids as well as several CTV genotypes. The presence of multiple CTV genotypes was confirmed for both samples by read-mapping to full-length reference genomes. The results of this proof-of-principle experiment indicate that the metagenomic sequencing approach based on dsRNA can be successfully implemented to establish the virome of citrus trees with an unknown virus etiology.

Research subsequently focused on other aspects of the disease epidemiology: the impact of ambient temperatures, and graft transmission and multiplication of the pathogens in citrus and in orange jasmine (Murraya exotica, not M. paniculata Jack as incorrectly referred to in other publications). The research on the impact of ambient temperatures focused on exposing Lam+ve and Las+ve trees to distinct daily temperature regimes. The research was motivated by the already known contrasting responses to ambient temperatures of plants affected by Las or Ca. L. africanus, and by the complete lack of information on this subject for Lam. A series of growth chamber experiments demonstrated that Lam is more heat sensitive than Las. Fully symptomatic orange trees affected by Lam exposed to daily regimes of 27 to 32°C, 24 to 32°C, or 35 to 38°C for 60 days were totally cleared of symptoms and of the pathogen, while fully symptomatic trees affected by Las were only partially cleared of symptoms and the pathogen, and only when exposed to 24 to 38°C for the same duration.
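The contig classification step described above, keeping the BLAST hit with the highest bit score for each contig, can be sketched as below. This assumes the standard BLAST tabular output (`-outfmt 6`, with the bit score in the twelfth column); the example rows are hypothetical:

```python
# Classify assembled contigs by their top-scoring BLAST hit.
# Assumes BLAST -outfmt 6 columns: qseqid sseqid pident length mismatch
# gapopen qstart qend sstart send evalue bitscore
def best_hits(blast_lines):
    best = {}
    for line in blast_lines:
        fields = line.rstrip("\n").split("\t")
        contig, subject, bitscore = fields[0], fields[1], float(fields[11])
        # Keep only the hit with the highest bit score per contig.
        if contig not in best or bitscore > best[contig][1]:
            best[contig] = (subject, bitscore)
    return best

rows = [  # hypothetical alignments for illustration
    "contig1\tCTV_T36\t98.1\t500\t9\t0\t1\t500\t1\t500\t1e-180\t650.0",
    "contig1\tCTV_VT\t95.0\t500\t25\t0\t1\t500\t1\t500\t1e-150\t590.0",
    "contig2\tASGV\t99.0\t400\t4\t0\t1\t400\t1\t400\t1e-160\t610.0",
]
print(best_hits(rows))
# -> {'contig1': ('CTV_T36', 650.0), 'contig2': ('ASGV', 610.0)}
```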

More recently it was shown that this same temperature regime leads to a decline in Las titers in new flushes on symptomatic branches, an impact that would lead to a significant reduction in pathogen acquisition rates by the insect vectors feeding on them. Although field work will add important information on this aspect of the HLB pathosystem, the data accumulated so far indicate that high summer temperatures may restrict rates of spread of the disease and help to explain the irregular dissemination patterns of HLB in SPS. Field and greenhouse experiments involving even higher temperatures for different durations were also conducted, with the aim of curing Las+ve trees, but with limited success. The reasons for the limited success were apparently related to the sensitivity of the citrus tree to high temperatures and to the ability of the pathogen to survive in roots. The temperature–time combinations necessary to kill the bacterium were apparently close to those that would kill a citrus tree, and in the roots the bacterium remains protected from heat. Studies on graft transmission of Las and Lam were conducted with the objective of comparing graft transmission efficiencies and the ability of both bacteria to multiply, individually or simultaneously, in potted Valencia, Hamlin, Pera, and Natal sweet orange plants under conditions favorable for disease development. The percentage of plants that became infected varied from 10.0 to 23.3% for Lam and 66.7 to 73.3% for Las, and the cycle threshold (Ct) values varied from 24.14 to 24.97 for Lam and 19.42 to 20.92 for Las. These Cts corresponded to averages of 10⁶ and 10⁷ cells per gram of tissue for Lam and Las, respectively. Similar values were also obtained when field samples, collected from three distinct regions of SPS, were analyzed. No apparent effect of one species over the other was observed in plants inoculated simultaneously with both pathogens.
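Converting Ct values to cell counts requires a qPCR standard curve. The slope and intercept below are hypothetical values chosen only to reproduce the orders of magnitude cited above (Ct ≈ 24.5 for ~10⁶ Lam cells/g, Ct ≈ 20 for ~10⁷ Las cells/g), not parameters from the original assays:

```python
import math

# Hypothetical qPCR standard curve: Ct = SLOPE * log10(cells) + INTERCEPT
SLOPE = -3.32      # Ct change per 10-fold change in target (assumed)
INTERCEPT = 44.4   # Ct corresponding to 1 cell per gram (assumed)

def cells_per_gram(ct):
    """Estimate bacterial cells per gram of tissue from a Ct value."""
    return 10 ** ((INTERCEPT - ct) / -SLOPE)

# Orders of magnitude match the averages reported in the text:
print(round(math.log10(cells_per_gram(24.5))))  # -> 6  (Lam, ~10^6 cells/g)
print(round(math.log10(cells_per_gram(20.0))))  # -> 7  (Las, ~10^7 cells/g)
```

The inverse relationship is the key point: a ~4.5-cycle lower Ct for Las corresponds to roughly tenfold more cells per gram under this assumed curve.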
Lower titers of Lam appear to be the main factor explaining its conspicuous decline over the years in SPS. Lower titer would reduce the chances of pathogen acquisition by the insect vector and its consequent transmission to healthy trees, in a pattern similar to the one observed for Las in new flushes exposed to heat .

This work also showed that, contrary to Lam+ve plants, Las+ve plants harbored the bacterium at titers close to maximal values three months before symptom expression, an indication that asymptomatic trees may serve as a source of inoculum, contributing to the dissemination of HLB in the field. The research on orange jasmine aimed to determine the distribution of orange jasmine trees infected by Lam or Las, based on sampling at 76 urban locations over two time intervals, and to determine the levels of genetic and pathogenic similarity among the orange jasmine and citrus liberibacters, based on sequences of the rplJ gene and on cross-inoculation experiments. The work was motivated by the detection of Lam in 2004 in a single mature orange jasmine tree growing in front of the manager’s house at the citrus farm most affected by Lam, by the detection of Las in 2005 in orange jasmine trees growing in urban areas and, more importantly, by the suspicion that infected M. exotica trees may play an important role in the HLB epidemics. In 2005/2006, Lam was detected in 56 and Las in 2 of the 477 orange jasmine trees sampled from 10 locations and, in 2009, Lam was detected in an additional 5 and Las in 28 of the 309 orange jasmine trees sampled from seven locations. Lam titers were higher in Lam+ve than in Las+ve trees. As happens with infected citrus under conditions favorable for disease development, symptom severity was stronger on the orange jasmine trees infected by Lam than on those infected by Las. The higher symptom severity in M. exotica may not be related only to the higher bacterium titers in this host, since in citrus Lam reaches lower titers than Las. In Las-infected orange jasmine the infection seemed to be transient. This was observed in naturally infected field trees and in graft-inoculated plants.
This work also showed that the infected orange jasmine trees were in locations relatively close to each other and, coincidentally, in the area of highest incidence of HLB in citrus at that time, a clear indication of pathogen transmission from host to host by D. citri. Similarity among citrus and orange jasmine liberibacters in terms of pathogenicity could not be fully determined due to the strong tissue incompatibility observed between citrus and orange jasmine during the cross-inoculation experiments. Most budwood used as inoculum died in heterologous combinations. On those plants in which the budwood survived, only Lam was successfully transmitted and the plants remained infected.

Comparative analysis of the rplJ gene from the liberibacters found in orange jasmine with those found in citrus showed that Lam or Las from both hosts were identical. The importance of orange jasmine and citrus as sources of Lam to citrus in SPS was investigated in further work involving the insect vector for bacterium inoculation. Higher Lam transmission rates occurred from orange jasmine than from citrus. As orange jasmine trees infected with liberibacter are not systematically eliminated in urban areas, and vector populations are not suppressed, orange jasmine may represent a constant risk to neighboring citrus orchards. Also, since nursery production and sale of orange jasmine are not regulated, asymptomatic orange jasmine trees may be important in distributing liberibacters to distant citrus areas still free from the disease.

An overview of the HLB epidemics in Brazil, particularly in SPS, and the main research findings on the HLB pathosystem have been briefly presented here. Other field work and studies, and the daily experience of citrus growers with the disease, have confirmed the necessity of eliminating symptomatic trees and controlling the insect vector on an area-wide basis in order to optimize opportunities for successfully minimizing the spread and impact of HLB. Although many research questions still require answers, research has provided a better understanding of the distinct patterns of spatio-temporal progress of the disease, and the knowledge required for official responses and the establishment of management practices. Among research outcomes, the impact of high temperatures on Las multiplication in new flushes may have some potential for the development of new, less costly and less insecticide-dependent strategies to manage HLB.
The impact of the COVID-19 pandemic caused by the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was foreshadowed by earlier epidemics of new or re-emerging diseases such as SARS, influenza, Middle East Respiratory Syndrome (MERS), Ebola, and Zika, which affected localized regions. These events showed that novel and well-known viral diseases alike can pose a threat to global health. In 2014, an article published in Nature Medicine stated that the Ebola outbreak should have been “a wake-up call to the research and pharmaceutical communities, and to federal governments, of the continuing need to invest resources in the study and cure of emerging infectious diseases”. Recommendations and even new regulations have been implemented to reduce the risk of zoonotic viral infections, but the extent to which these recommendations are applied and enforced at a regional and, more importantly, local level remains unclear. Furthermore, most vaccine programs for SARS, MERS, and Zika are still awaiting the completion of clinical trials, sometimes more than 5 years after their initiation, due to the lack of patients.

In light of this situation, and despite the call to action, the SARS-CoV-2 pandemic has resulted in nearly 20 million infections and more than 700,000 deaths at the time of writing, based on the Johns Hopkins University global database. The economic impact of the pandemic is difficult to assess, but support programs are likely to cost more than €4 trillion in the United States and the EU alone. Given the immense impact at both the personal and economic levels, this review considers how the plant-based production of recombinant proteins can contribute to a global response in such an emergency scenario. Several recent publications describe in broad terms how plant-made countermeasures against SARS-CoV-2 can contribute to the global COVID-19 response. This review will focus primarily on process development, manufacturing considerations, and evolving regulations to identify gaps and research needs, as well as regulatory processes and/or infrastructure investments that can help to build a more resilient pandemic response system. We first highlight the technical capabilities of plants, such as the speed of transient expression, that make them attractive as a first-line response to counter pandemics, and then we discuss the regulatory pathway for plant-made pharmaceuticals in more detail. Next, we briefly present the types of plant-derived proteins that are relevant for the prevention, treatment, or diagnosis of disease. This sets the stage for our assessment of the requirements, in terms of production costs and capacity, to mount a coherent response to a pandemic, given currently available infrastructure and the intellectual property landscape. We conclude by comparing plant-based expression with conventional cell culture and highlight where investments are needed to adequately respond to pandemic diseases in the future. Due to the quickly evolving information about the pandemic, our statements are supported in some instances by data obtained from websites.
Accordingly, the scientific reliability of these sources should be treated with caution.

The development of a protein-based vaccine, therapeutic, or diagnostic reagent for a novel disease requires the screening of numerous expression cassettes, for example to identify suitable regulatory elements that achieve high levels of product accumulation and a sub-cellular compartment that ensures product integrity, as well as different product candidates, to identify the one that is most active and most amenable to manufacturing in plants. A major advantage of plants in this respect is the ability to test multiple product candidates and expression cassettes in parallel by the simple injection or infiltration of leaves or leaf sections with a panel of Agrobacterium tumefaciens clones, each carrying a variant cassette as part of the transferred DNA in a binary transformation vector. This procedure does not require sterile conditions, transfection reagents, or highly skilled staff, and can therefore be conducted in standard biosafety level 1 laboratories all over the world. The method can produce samples of even complex proteins, such as glycosylated monoclonal antibodies, for analysis ~14 days after the protein sequence is available.

Several growers emphasized the need to know the soil history to determine what cultivar to grow

The nature of this sample suggests a significant, but not surprising, overlap between those growers more generally willing to work with researchers and those experimenting with various production techniques. Importantly, many growers I attempted to contact were not reachable and/or had gone out of business, and even three of those I did interview had retired or all but exited strawberry production. In the interviews, conducted in 2018 and 2019, I was able to reframe questions that had not quite worked in the surveys, as well as probe on the more difficult questions. Before completing all planned interviews, I reached saturation, such that additional interviews were no longer producing more themes or deepening understanding, which substantiated that the sample size was sufficient. Research assistants transcribed and coded interview data with NVivo qualitative research software, identifying ideas and themes that further elucidated the more bounded questions asked in the survey. Alongside these two primary sources of data, I reviewed limited discussions about cultivars from my previous project and notes taken from short discussions with growers at field days and follow-up phone calls for the survey. These additional data were thoroughly in keeping with survey and interview data, providing further triangulation of the findings. While the strawberry industry has long enjoyed the benefits of strawberries bred with multiple aims, emphasis in one area often comes at the expense of another. Since UC began its breeding program in the 1940s, growers have generally adopted those varieties with high productivity traits. An important question, therefore, was to what extent, within the current context of fumigant restrictions and the emergence of novel diseases, disease resistance had become a desirable trait. The survey asked growers which traits they most valued; to prevent them from choosing all, it asked them for their top three priorities.
As seen in table 1, growers mostly wanted high yields, especially if a variation on the same theme, long steady yields, was included. While interest in resistance to soilborne diseases and in marketability was not negligible, these appeared as secondary priorities. These preferences were corroborated by answers to a question about which cultivars had been planted for the 2016 marketing year.

Of the UC varieties, Cabrillo, Monterey, and Fronteras were the most planted, and they are high-yield performers. In a recent trial involving equal plot sizes, Fronteras produced an average cumulative marketable fruit weight of 11,000 grams per plot, with Monterey producing close to 9,500 grams per plot. Of these two cultivars, Monterey allegedly has better flavor. San Andreas, the next most widely planted cultivar, is most associated with Fusarium resistance, but in that same experiment it yielded only a little over 7,000 grams per plot. The notably flavorful Albion, which is popular among growers selling in farmers markets although not often planted by survey respondents, yielded only about 6,500 grams per plot. Answers to a third question further clarified the dimensions of the trade-off between yield and disease resistance. Asked about the maximum decline in yield a grower would accept in a cultivar with high levels of resistance to soilborne diseases and no change in production costs, most growers reported that no or only a minimal yield decline was acceptable. Qualitative responses and interviews provided additional evidence that growers tended to choose yield over pathogen resistance and helped clarify their rationale. Of the 20 growers interviewed, 15 said yield was a high priority, albeit not without some hedging. Many recognized the importance of marketability characteristics, acknowledging that a strawberry that lacks flavor, for example, would turn off consumers. For that reason, they were more likely to grow Monterey than even higher-yielding varieties, and some shippers insisted that they grow a marketable variety such as Monterey. Growers who use proprietary varieties because they sell to shippers who require them have somewhat less choice in what they grow. The shipper sets priorities, and Driscoll’s, in particular, has allegedly prioritized flavor and disease resistance over yield in its breeding.
Growers who favor working with these shippers do so because they obtain higher prices, making up for the loss of yield. Still, my interview data showed that when given a choice these growers, too, favor yield, especially because they are paid the same no matter what they grow. When I pressed on questions of why yield remained a priority for those selling in wholesale markets when they also complained of low prices, I learned of a significant collective action problem. Most growers recognized that it made sense for the industry to reduce supply but felt that it was folly personally to choose a lower-yielding variety.

This is the technological treadmill problem first identified by agricultural economist Willard Cochrane in 1958. Cochrane noted the tendency of farmers to adopt technologies that reduce costs because early adopters make additional profits as their expenses decline. As he also noted, such tendencies eventually depress crop prices because other farmers join in, supply increases overall, and price competition ensues, driving some out of business. In the case of adoption of a higher-yielding variety, rather than reducing cost, the output increases with little additional effort, making such a strategy nearly irresistible. As one grower put it, “We’re in a competitive environment. We like to say we don’t grow a commodity, but there are commodity-like characteristics. So if you have a variety and neighbor selling into same market, if he’s more productive he will have an edge.” In addition to low prices, fixed costs such as land leases and land preparation are extremely high and increasing in strawberry production. Labor costs, though variable, have risen considerably with labor shortages and new minimum wage and overtime laws. Therefore, growers feel they need to sell as many berries as they can to be economically viable. As another grower said, “You could have the best fruit around, but if you don’t have yield you can never make any money. . . . I mean our costs are going through the roof. The only way we can bring some of the costs down is through yield.” At the same time, growers also questioned this logic, asserting that the industry was undermining itself by continuing to breed and grow ever more high-yielding varieties: “So we want these varieties to give out more numbers and last longer, but it’s hurting us in the long run. . . . 
It seems like people think that if I plant 100 acres and make such amount of dollars, if I put 200 acres in, I’m going to make double that, but it doesn’t work that way.” This observation is corroborated by the most recent statistics on historical trends reported by the USDA National Agricultural Statistics Service. Utilized production of strawberries grown in California increased from 539 million pounds in 1974 to 3,015 million pounds in 2012, an increase of 459%; grower prices increased only from 29 cents per pound to 80 cents per pound, an increase of 176%, over the same period. It is not that growers were oblivious to the need for disease resistance, but some were making a calculated decision that the yield benefits of a cultivar outweighed the risk of plant loss.
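The calculation behind that grower reasoning can be made explicit. Using the per-plot trial yields cited earlier, even substantial stand loss to disease can leave a high-yielding, susceptible cultivar ahead of a resistant one (Monterey stands in here for the susceptible high-yielder in the later Radiance anecdote, which was not part of that trial):

```python
# Per-plot trial yields (grams) cited in the text.
plot_yield_g = {"Fronteras": 11_000, "Monterey": 9_500, "San Andreas": 7_000}

def effective_yield(cultivar, plant_loss_fraction):
    """Expected per-plot yield after losing a fraction of plants to disease."""
    return plot_yield_g[cultivar] * (1 - plant_loss_fraction)

# A susceptible high-yielder with 30% stand loss vs an intact resistant cultivar:
print(round(effective_yield("Fronteras", 0.30)))    # -> 7700
print(round(effective_yield("San Andreas", 0.0)))   # -> 7000
```

Under these assumptions the high-yielder still wins, which is exactly the calculus growers describe.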

As one grower said, “We can have 30% die out of Radiance due to soilborne pathogens and still beat the yield on San Andreas.” More often, growers had not experienced enough plant loss to make disease resistance a priority: “If we begin to see more Vert or other pathogens, we will worry more. Right now, all is cool.” Some growers, though, who had experienced disease loss were more inclined to let go of leases on diseased land than give up on the yield or marketability advantages of a cultivar. There were exceptions, too. One grower, after losing plants to disease, “switched soils,” but that soil was infested too, and he lost 32% of his Monterey that year. He then turned to growing almost entirely San Andreas. Not surprisingly, it was growers with organic fields who were most interested in disease-resistant varieties. With fumigation still available, growers with conventional fields remained relatively uninterested in these varieties. Understanding that most growers were unwilling to trade off yield for pathogen resistance because soil fumigants were available, I wanted to explore in more depth what role pathogen-resistant cultivars could play in reducing the use of soil fumigation. The survey included two questions about what prevents growers from reducing their use of preplant soil fumigation and what currently encourages them to reduce it. It asked them to choose all answers that applied. Answers to these questions aligned with previous studies and reports. Growers most often chose “crop loss/potential crop loss” as the condition that prevented them from reducing their use of preplant fumigation. Buyer and lease conditions played a role as well; for instance, some leases require that growers fumigate so that the lessors, often vegetable growers, get the benefits of fumigation.
On the flip side, regulatory pressures, including restrictions on fumigation in the form of buffer zones, were most encouraging growers to reduce fumigation, with opportunities such as entry into organics or land with low disease pressure also playing roles. Qualitative responses and interviews corroborated and nuanced the latter answers. Several growers emphasized how fumigant restrictions had pushed them to find alternative means to grow strawberries and discussed organic certification and the accompanying price premium as a way of offsetting the potential costs and crop losses of forgoing fumigation. In these instances, they saw disease-resistant varieties as enabling such a transition: “Without disease-resistant varieties, conventional strawberries require the use of fumigants. If they become unavailable, organic is the best alternative.” The trade-off is noteworthy given that growers have to give up other pesticides besides fumigants to be certified organic. A few growers mentioned their willingness to give up fumigation without converting to organics, simply because of fumigation costs. And a few growers noted that organic prices might be too weak to make that trade-off. One wrote in the survey, “If I were organic [I’d reduce fumigation use], but they don’t have the price either right now.” Even the many interviewees who have organic programs were not at this time considering transitioning their entire operation; instead, they were choosing fields for their organic programs where soil conditions make them viable, often areas with low disease pressure. That organic markets were nevertheless the main factor incentivizing fumigant reduction was confirmed by answers to a question about whether there were any conditions in which growers would consider eliminating the use of preplant soil fumigation, not including transitioning to organic. Only 10 growers replied to this question, but seven said no, with two maybes and one yes.
When asked to comment about what, if any, conditions might lead growers to eliminate preplant soil fumigation altogether within the next 5 years, surveyed growers mentioned cultivars completely resistant to all major soilborne diseases, not merely tolerant to them, which is what the best cultivars are today. Growers basically wanted alternatives that wouldn’t sacrifice yield, quality or higher profit; in other words, something foolproof. In an interview, one grower was emphatic on this point: “It has to be proven to me, I gotta see it. . . . But I’m not going to do it because [the UC breeder] says ‘Oh, by the way, I have this variety that’s resistant to Macrophomina, you don’t need to fume.’ Well, let me see that, you know what I mean?” The more personalized setting of the interviews also allowed me to explore what growers would do if fumigants were taken away. Here I learned that while such a possibility heightened interest in disease-resistant varieties, several said that they would leave strawberries or retire early, and many said they would move to soilless regimes. As it happens, one of the challenges of soilless systems is finding cultivars that work in those settings. The performance of existing varieties is reportedly subpar. Those interested in remaining “on ground” clarified that disease-resistant varieties would be helpful, but they would need to adopt other tools as well, such as nonchemical modes of soil disinfestation, making breeding for disease resistance only a partial solution. One grower said, “Just having a variety that is tolerant of x, y or z only does so much. . . . That would be just like added insurance.”

Bacterial serotype and strain specificities to plants have also been uncovered

These challenges underscore the critical need to identify novel approaches to prevent or reduce the public health risk from pre-harvest microbial contamination of fresh produce. Although no breeding program has to date adopted strategies to control human pathogens on fresh produce, a few studies have taken steps in this direction. For instance, Shirley Micallef is exploring cultivar variability in fatty acid content in tomato fruit as a means to reduce the favorability of tomato fruit for Salmonella. Maeli Melotto is screening lettuce germplasm for susceptibility or tolerance to E. coli O157:H7 and S. enterica to define the genetic basis for the persistence of these pathogens in leafy vegetables. Additionally, in collaborative studies with USDA-ARS (Salinas, CA, United States) and FDA-CFSAN (Laurel, MD), Maria Brandl has been investigating lettuce cultivars in relation to basal plant defense responses to plant pathogen infection and to processing, for their role in enteric pathogen colonization. Given the complexity of produce safety issues and the need to prioritize efforts for the highest impact, a logical step would be to identify the crop–hazard pairs that create the largest burden on public health and the economy. Typically, the severity of an outbreak is estimated by the number of illnesses, hospitalizations, and deaths. With a hazard × occurrence risk model, one can begin classifying crop–hazard pairs. Although these are relevant metrics, it is very difficult to calculate the relative risk of each crop–hazard pair due to the low recurrence of particular pairs associated with outbreak events and the need to accumulate a substantial amount of data over extended periods of time. Nonetheless, the potential targets for plant breeding that are being identified may be the basis of future research to reduce human pathogens, mycotoxins, heavy metals, toxic elements, and allergens in foods.
Currently, the National Outbreak Reporting System of the Centers for Disease Control and Prevention reports disease outbreaks in the United States and maintains a comprehensive searchable database with information spanning from 1998 to 2017.

Using this resource, we have generated a heat map illustrating the relative importance of the major fresh produce types in combination with the reported etiological agents of outbreaks. Hierarchical clustering analysis of the etiological agents revealed Salmonella, Norovirus, and Escherichia as the three most important biological hazards based on the number of outbreaks, illnesses, hospitalizations, and deaths. In addition, the compilation of these data has enabled the identification of high-priority pairs for breeding programs geared toward improving the microbial safety of produce. These systems have been studied at the genetic level by Jeri Barak, Maria Brandl, Maeli Melotto, and Shirley Micallef. For instance, it has been discovered that certain varieties of tomato, lettuce, cucumber, and melon are less likely to support pathogen populations than others, suggesting a plant genetic component underlying these traits. Identifying the molecular mechanisms underlying these interactions can point to promising plant traits to further explore and integrate in plant breeding programs. Encouraging the commercial production of plant varieties that carry relevant traits without compromising other aspects of plant productivity and product marketing might help reduce illness from produce. In the area of mycotoxin contamination, Fusarium in wheat is an annual occurrence, with prevalence determined by local weather at crop maturity. Aflatoxin in maize is regional and limited to hotter and more humid regions, but remains relatively low in the main U.S. corn belt. However, on a global scale, up to 80% of maize seed lots can be contaminated in tropical areas such as Sub-Saharan Africa and India. Peanuts have a similar occurrence of aflatoxin in areas such as East Africa. Heavy metals are predicted to continue to be a problem as arable land becomes increasingly scarce due to desertification and urbanization, and lands or irrigation water with heavy metals are more extensively used.
These hazards can also be prioritized and paired with the crops in which the highest occurrence makes them the greatest human health hazards.
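The hierarchical clustering of etiological agents described above can be sketched with SciPy. The outbreak counts below are illustrative placeholders, not NORS data; the point is only the mechanics of clustering agents by their outbreak profiles across produce types:

```python
# Minimal sketch of clustering etiological agents by outbreak profile.
# Counts are hypothetical, not drawn from the NORS database.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

agents = ["Salmonella", "Norovirus", "Escherichia", "Listeria"]
# rows: agents; columns: outbreak counts for lettuce, tomato, melon, sprouts
counts = np.array([
    [30, 25, 20, 15],   # Salmonella
    [28, 22, 18, 12],   # Norovirus
    [25, 20, 15, 10],   # Escherichia
    [ 3,  2,  1,  1],   # Listeria
])
Z = linkage(counts, method="average", metric="euclidean")
clusters = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(agents, clusters)))  # the three major hazards group together
```

With real data, the same linkage matrix would feed directly into a clustered heat map (e.g., seaborn's clustermap) of crop–hazard pairs.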

A multidisciplinary approach will be necessary to develop plant breeding research programs, since the occurrence of a contamination event depends on the interaction of several factors such as plant genotype, environmental conditions, the microbe and its community, and plant management practices. Together, these variables may create “The Perfect Storm.” Interactions between enteric pathogens and plants affect all mitigation strategies aimed at inhibiting pathogen growth and survival on crops to improve their microbial safety. Below, we discuss various hurdles and important aspects of these interactions that must be considered to ensure the success of a plant breeding program for enhanced crop safety. One of the most significant challenges in breeding crops to decrease the risk of contamination with enteric pathogens is that these pathogens have lower fitness on plants than most well-characterized plant commensal and pathogenic bacterial species. Nevertheless, given the recurrence of food-borne illness outbreaks linked to produce, the ability of enteric pathogens to multiply and survive as epiphytes and endophytes implies that particular plant phenotypes and genotypes can affect their fitness in the plant habitat. For example, the composition of substrates available on fruit and leaf surfaces as well as in their internal tissue; the density of trichomes, stomata, and veins, which harbor larger pools of substrates than other areas of leaves; and the physical and chemical composition of the cuticle layer on various parts of the plant, which affects water dispersal and hence water and nutrient availability to microbial inhabitants, may all be relevant traits to investigate in plant breeding efforts for their effect on enteric pathogen colonization. Temperature and humidity conditions, and the presence of free water, are important in the multiplication and survival of enteric pathogens and must be investigated simultaneously with the role of other plant traits.
This includes consideration of agricultural practices, such as irrigation type and frequency, which may greatly affect the success of any breeding strategy aimed at reducing surface and internal plant colonization by food-borne pathogens. It is also clear that physicochemical stressors in the plant environment may overshadow other factors in their inhibitory effect on enteric pathogens. Therefore, the role of certain heritable plant traits at microsites that shield bacterial cells from such fatal stressors should be investigated at the microscopic level as well as at the plant or tissue level. Fully elucidating the interaction between food safety-relevant microbes and crops necessitates consideration of the entire plant microbiome below and above ground. Plant microbiota are complex and strongly driven by plant genetics, plant age, plant anatomical structure, and environmental factors. Identifying conditions that select for members of the plant microbiota able to competitively exclude enteric pathogens, which in general exhibit reduced fitness in the plant niche, can form an important component of this phytobiome approach. In addition, rhizosphere and phyllosphere microbial communities can comprise epiphytes known to affect plant colonization by enteric pathogens or toxigenic fungi, either antagonistically through biocontrol strategies or favorably by supporting survival and growth. For instance, phytopathogens that actively degrade plant tissue or trigger plant chlorosis and necrosis may cause changes in pH and nutrient levels that favor the establishment and proliferation of enteric pathogens. Adjustment of management practices and environmental conditions to modulate and exploit microbe–microbe interactions should be actively investigated as part of a holistic approach to inhibit or prevent the colonization of enteric pathogens on/in plants. 
Certain plant phenotypes may have independent as well as codependent effects with other plant features, so that their role may only be fully revealed by actively investigating and/or selecting for both traits simultaneously.

For example, entry of enteric pathogens into the plant tissue, where they are shielded from external environmental stressors, is thought to increase their survival in the plant habitat. Thus, selecting for genotypes with lower stomatal density and stomatal pore size may prove effective in reducing the probability of pathogen survival on plants in the field, provided that plant productivity is not impacted by selection for that trait. Furthermore, basal plant defense responses to the presence of human pathogens, which can only take place upon exposure of plant cells to, and close interaction with, microbial cells in the plant apoplast, require entry of the enteric pathogen cells into the substomatal space of the tissue. Consequently, realizing the full potential of breeding for a cultivar that is less hospitable to the endophytic lifestyle of an enteric pathogen may require consideration of both types of plant traits, i.e., traits that affect the entry of pathogen cells into the plant and those that affect the plant response once the cells have gained entry. The role of the physiological state of plants in their interaction with enteric pathogens cannot be overstated. Plant defense responses may vary depending on the age of the plant tissue, the overall plant age, challenge history, and association with other microbes such as plant growth-promoting rhizobacteria and plant pathogens. The carrying capacity of plant tissue for enteric pathogens depends on plant species and cultivar, leaf age, fruit ripeness, and root age, given that structure and opening density via cracks at secondary root emergence sites change over time. Evidence is increasing that changes in temperature and rainfall caused by climate change may affect plant physiological and anatomical responses. These include stomatal conductance and density, leaf area and cuticle thickness, plant morphology, and plant nutrient cycling. 
The level of relative humidity can significantly influence stomatal movement, which can affect colonization of the leaf interior by human pathogenic bacteria. It is clear that if these traits are targets of breeding programs for improving food safety, they will have to be resilient under long-term shifts in weather patterns. Enteric pathogens vary broadly in their fitness as epiphytes and endophytes in a species-specific manner, and even based on variation at the inter- and intra-strain level. In particular, surface appendages, such as the different types of fimbriae and adhesins that act as important plant attachment factors, or the flagella and other surface molecules that may trigger defense signaling cascades, vary among and within enteric species and strains. Preferential pairings of bacterial pathogen species, and even serotypes, with particular commodities are not uncommon, and the basis for this specificity is still poorly understood. Clearly, phenotypic and genotypic variation among food-borne pathogen targets must also be taken into account while selecting for plant targets to enhance microbial crop safety. Domestication of several crops has resulted in desirable agronomic and organoleptic traits such as shape, color, and prolonged shelf life, with the unintended loss of other traits. The resulting loss in genetic variation may have reduced the ability of some crops to cope with fluctuating environmental conditions and biotic challenges. Despite this, genetic diversity could still reside in germplasm that is not commercially grown, allowing for the possibility of reintroducing genotypic and phenotypic traits that restore lost properties or establish new ones. The underlying genetic basis for traits that enhance food safety is largely unknown, but as more research uncovers the interactions between plant, pathogen, and the environment, opportunities for identifying these traits will increase. 
Traits that confer enhanced food safety are likely complex and controlled by multiple genes, presenting challenges to breeding efforts, especially for human pathogen–plant interactions. A starting point could be genome-wide association studies followed by metabolic pathway analysis or functional analysis of mapped intervals. For instance, one could predict the various biochemical pathways needed for the synthesis of secondary metabolites with antioxidant and antimicrobial properties that could influence plant–microbe interactions and plant responses to associated microbiota. These interactions may be extremely important in food safety and should be a major focus of pre-breeding efforts. Given the overall challenge of considering numerous aspects of plant genotype × environment × microbe × management interactions, a concerted effort to focus on given pathogen–crop models may be necessary to make headway in utilizing plant breeding as a feasible strategy to enhance produce safety. For effective genetic gain, a systems approach that maximizes consistency and differentiation of the desired phenotypes is essential. These traits must be considered alongside the major traits of crop yield, quality, and resistance to abiotic and biotic stresses.
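As a concrete, and entirely hypothetical, illustration of the genome-wide association starting point described above, a per-marker association scan can be sketched in a few lines. The trait, the marker codings, and the effect size below are invented for demonstration; real GWAS pipelines use mixed models with population-structure correction rather than raw correlations.

```python
import numpy as np

def marker_trait_scan(genotypes, phenotype):
    """Score each biallelic marker (coded 0/1/2 copies of one allele)
    against a quantitative trait using a simple per-marker correlation;
    returns r^2 per marker as an association score."""
    scores = []
    for marker in genotypes.T:           # iterate over marker columns
        if marker.std() == 0:            # monomorphic marker: uninformative
            scores.append(0.0)
            continue
        r = np.corrcoef(marker, phenotype)[0, 1]
        scores.append(r ** 2)
    return np.array(scores)

# Invented toy data: 8 plants x 3 markers; marker 0 drives a hypothetical
# "pathogen carrying capacity" trait, markers 1-2 are noise.
genotypes = np.array([
    [0, 1, 2], [1, 0, 0], [2, 2, 1], [0, 1, 0],
    [1, 2, 2], [2, 0, 1], [0, 2, 0], [2, 1, 1],
])
noise = np.array([0.05, -0.03, 0.02, 0.01, -0.04, 0.03, -0.02, 0.04])
phenotype = 2.0 * genotypes[:, 0] + noise
scores = marker_trait_scan(genotypes, phenotype)
top_marker = int(np.argmax(scores))      # marker 0 should dominate
```

In a real program, the mapped interval around the top-scoring marker would then be examined for candidate genes and metabolic pathways, as outlined above.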

The proposed weathering mechanism varies from one study to another and even from fungus to fungus

Assuming that tunneling can be taken as an indicator of overall weathering activity, it remains unclear whether the greater weathering activity in the lower-fertility sites is due to lower pH, greater nutrient demand on the part of the host, or greater ectomycorrhizal colonization. Hoffland et al. assessed tunneling activity across a northern Sweden podzol sequence and found that the occurrence of tunnels in feldspar grains coincided with the disappearance of easily weatherable cation sources such as biotite. Taken together, these tunnel studies imply a correlative, but not a causative, link between weathering activity by ectomycorrhizal fungi and host nutrient demand. Wilson et al. used magnetic separation to segregate readily weatherable cation sources such as biotite and orthopyroxene from more cation-poor K-feldspars. They then used a variety of molecular and microscopic methods to assess the density of microbial colonization and the weathering state of these minerals. They found significantly more mycelial colonization of the readily weatherable biotite and orthopyroxene than of the cation-poor K-feldspars, but noticed only slightly increased weathering of the biotite compared to the feldspar minerals. In the aforementioned field studies, there is evidence that ectomycorrhizal fungi may increase mineral foraging and colonization in response to increased demand for phosphorus. There is also evidence that weathering tunnels coincide with increased demand for mineral elements other than phosphorus and that ectomycorrhizal hyphae can preferentially colonize mineral fragments that are good sources of mineral nutrients other than phosphorus. However, there is no direct evidence in field studies that foraging for and weathering of K, Mg, or Ca sources by ectomycorrhizal fungi (ECM) can respond to demand for these nutrients. 
There are many reports in the literature of forest ecosystems dominated by ectomycorrhizal hosts which, possibly due to anthropogenic acid deposition, are now limited by base cation availability rather than by nitrogen or phosphorus. The mesh bag approach employed by Wallander and others in Swedish forests may be a good method for examining how the mycorrhizal role in nutrient acquisition has changed with the changing nutrient status of these forests. Especially good sites for this approach would be the sharp N depositional gradients near industrial or agricultural sites. Microcosm studies allow weathering to be quantified and can focus on the weathering of a single mineral or any desired mineral mix. Microcosms can be used to examine the weathering potential of individual ectomycorrhizal species and can be employed to isolate the weathering activity of the ectomycorrhizal fungus from that of the plant root. Microcosm studies also allow the researcher to isolate the effects of the availability of just one nutrient on weathering activity. In relatively sterile microcosm experiments it is also much easier to assay for readily decomposable weathering agents, particularly low molecular weight organic acids (LMWOAs), and to examine how LMWOA production affects weathering rates. In soils, measured bulk-solution LMWOA concentrations are generally too low to significantly enhance mineral weathering, due to their rapid degradation by soil microbiota. However, in semi-sterile microcosms, and at the fungus–mineral interface in natural soils, LMWOA concentrations may be high enough to greatly enhance weathering rates via proton-promoted and ligand-promoted dissolution. Van Scholl et al. looked at how organic acid production was influenced by nutrient deficiency of Mg, N, P, and K. Decreasing P or N increased organic acid production, while reducing Mg or K either had no effect or slightly decreased overall LMWOA production, although reducing Mg did increase oxalate production in some treatments. 
There were also significant differences between individual fungal species' organic acid exudation profiles and in how they reacted to different nutrient deficiencies. Paris et al. conducted a series of studies examining how the weathering activity of ectomycorrhizal fungi in axenic culture is affected by nutrient availability. They found that Ca, K, and Mg deficiency had no effect on weathering activity when only one element was deficient; however, when Mg and K were simultaneously deficient, both phlogopite weathering and oxalic acid production increased. In order to test whether weathering activity can respond to nutrient demand, there must be a nutrient-sufficient treatment and a nutrient-deficient treatment, both with added minerals. The great majority of microcosm studies investigating ectomycorrhizal weathering fail to include both a nutrient-deficient and a nutrient-sufficient treatment. Only the work by van Scholl et al. and Paris et al. explicitly tested whether weathering activity can respond to nutrient demand. From these studies it does appear that there is potential for the ectomycorrhizal fungus alone, or the ectomycorrhizal seedling, to respond to deficiencies in P, Mg, or K by enhancing weathering activity; however, the study by van Scholl et al. had no added minerals and thus did not actually measure weathering, and the studies by Paris et al. are axenic pure-culture studies. More studies are clearly needed to address this specific question. If the ectomycorrhizal fungus is itself below its optimal level for a particular nutrient, then increases in weathering or nutrient uptake observed in ectomycorrhizae in a nutrient-deficient treatment are not necessarily a reaction to host plant nutrient demand; increased weathering may be a reaction to ectomycorrhizal nutrient demand only. Having separate mycorrhizal and rooting compartments would help to resolve this question, as would an additional treatment in which the growth medium is kept very nutrient poor but the plant is foliarly fertilized. 
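The minimal treatment matrix argued for above can be made explicit by enumerating it. This is an illustrative sketch only; the treatment labels are invented, and a real experiment would also replicate each cell and randomize its placement.

```python
from itertools import product

# Cross host nutrient status with mineral addition. The foliar-fertilization
# level is the extra treatment suggested for separating fungal nutrient
# demand from plant nutrient demand.
nutrient_status = ["sufficient", "deficient", "deficient + foliar fertilization"]
minerals_added = [True, False]

treatments = [
    {"nutrients": n, "minerals_added": m}
    for n, m in product(nutrient_status, minerals_added)
]

# Weathering itself can only be measured where minerals are present; the
# demand response is the contrast among mineral-amended cells that differ
# only in nutrient status.
measurable = [t for t in treatments if t["minerals_added"]]
```

The point of the enumeration is that a study lacking either the "sufficient + minerals" or the "deficient + minerals" cell cannot attribute any weathering difference to nutrient demand.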
The two-compartment system used by Jentschke et al. would be a very effective way to segregate ectomycorrhizal nutrient demand from plant nutrient supply. While they do not explicitly test whether weathering activity can respond to changing nutrient status, a number of other studies can offer insight into the study of ectomycorrhizal weathering, and some discussion of them is warranted in this review. Ectomycorrhizae have been found to increase weathering in a number of microcosm studies, while others have not found any increase in weathering with ectomycorrhizal colonization. Many of these studies find increased weathering with one ectomycorrhizal species but not another, or with one nutrient treatment or mineral type but not another. Generally, studies that deny P or K and add a weatherable P or K source such as apatite or biotite do find increased weathering with ectomycorrhizal colonization. The same cannot be said for Mg, and no study has yet looked at how weathering by ectomycorrhizal plants is affected by Ca status. Wallander found that all three ECM strains tested increased weathering rates above the non-mycorrhizal control, but the mechanism of increased weathering was different for each strain: decreasing solution pH, oxalic acid production, and greater P uptake. The most commonly proposed mechanism for ectomycorrhizal enhancement of mineral weathering is greater nutrient uptake and transport away from the mineral surface. Organic acid production by seedlings is generally found to be altered, though not necessarily increased, by ectomycorrhizal colonization. Organic acid exudation does not respond in a consistent way to nutrient demand or to the presence of certain minerals, nor is it generalizable across different ectomycorrhizal species. When one LMWOA is linked to increased weathering rates it is most commonly oxalic acid. Oxalic acid is produced in particularly large quantities by P. 
involutus, which also happens to be the most commonly used ectomycorrhizal species in weathering experiments. Ochs et al. found that there were strong weathering agents, present in very low concentrations, in the root exudates of H. crustuliniforme, which were likely not LMWOAs. The work of Calvaruso et al. and Uroz et al. gives convincing evidence for a key role that bacteria may play in ectomycorrhizal weathering. They found that bacteria isolated from the symbiotic mantle and ectomycorrhizosphere of oak mycorrhizas have significantly higher weathering capacity than phylogenetically closely related bacteria isolated from the adjacent bulk soil. Calvaruso et al. demonstrated that one of these bacteria has the potential to enhance non-mycorrhizal seedling growth by alleviating Mg and K limitation through stimulation of biotite weathering. These results strongly suggest that further research into the field of mycorrhizal helper bacteria and ectomycorrhizal weathering is warranted. They also suggest that some highly reductionist experiments, with either no bacteria or a much simplified bacterial community, may fail to account for a key mechanism of ectomycorrhizal weathering. Often the rooting area in pot or microcosm studies is quite small, such that the roots are far more densely packed than they would be in a natural setting. As a result, the ectomycorrhizosphere is no larger than the rhizosphere in non-mycorrhizal treatments. This eliminates one of the major proposed advantages of mycorrhizal colonization, and possibly a key mechanism by which ectomycorrhizae may confer a greater weathering ability on root systems: greater mineral surface contact and uptake of weathering products directly from mineral surfaces. The majority of microcosm experiments employ an artificial rooting medium and/or an inorganic nutrient solution, both of which may be a poor recreation of the nutrient environment of field settings. 
Such media may impose outright nutrient starvation when only the milder nutrient limitation more representative of field conditions is desired. Most microcosm studies also have either no bacterial community or a highly simplified one, which may significantly alter weathering dynamics relative to natural settings. Another key drawback of microcosm studies is that the carbon and nutrient exchange dynamics of isolated seedlings in a laboratory may bear little resemblance to those of seedlings or mature trees in the field. In field settings, hyphal networks may allow seedlings to avoid some of the initial carbon investment involved in establishing mycorrhizal colonization. Mature ectomycorrhizal trees are generally considered to be dependent on ectomycorrhizal communities for survival, while seedlings in the lab often experience growth reductions in response to mycorrhizal colonization, and uncolonized seedlings can be far larger and more vigorous. Calculating weathering rates in forest soils is critically important to forest managers, air quality policy, and models of forest productivity. Any removal of timber from a forest represents a removal of mineral nutrients; understanding how quickly those nutrients are replenished by atmospheric deposition or mineral weathering is a key component of a sustainable harvesting cycle. Mineral weathering rates determine a soil's buffering capacity and are the single most important property determining an ecosystem's ability to buffer the effects of acidifying pollutants. Mineral weathering rates in soils are also the single most poorly constrained component of models designed to calculate acceptable airborne pollutant loads of nitrogen and sulfur deposition from power generation, transport, and agriculture. Accurate estimates of the net primary productivity of forests over the course of the next century are critically important to global carbon models. Forest productivity is predicted to increase due to elevated CO2. 
The extent of this negative feedback to elevated CO2 levels is largely dependent on forest trees' ability to match their increased carbon availability and water-use efficiency with increased nutrient uptake. As the effects of anthropogenic nitrogen deposition continue to accumulate, large areas of forest are becoming limited by base cation availability, which is a function of mineral dissolution. In coniferous trees, elevated CO2 has been shown to increase the ratio of root to shoot biomass and allocation to mycorrhizal symbionts. To understand how forest productivity and forest carbon stocks will be affected by global change, we must first understand whether increased carbon allocation to nutrient uptake organs actually results in increased nutrient uptake and whether this increased carbon allocation is a result of increased nutrient demand. Most forest trees of the temperate and boreal biomes are dependent on ectomycorrhizae for their survival. Ectomycorrhizal fungi (EMF) are symbionts that form an intimate association with the fine roots of trees and some woody shrubs. Increased nutrient uptake is generally considered to be the most beneficial effect of EMF on forest trees, though EMF have also been shown to increase water uptake, provide resistance to aluminum and other toxic metals, and increase pathogen resistance. EMF take up nitrogen from the soil and provide their host plant with significant amounts of it; up to 80% of total plant N uptake is from EMF. 

Ectomycorrhizal biomass may be much more recalcitrant than fine root biomass

However, ectomycorrhizae have also been shown to provide their host plants with significant amounts of the mineral-derived nutrients calcium, potassium, magnesium, and phosphorus. Studies have shown that ectomycorrhizal fungi may also play a role in the weathering of soil minerals, enhancing mineral nutrient uptake from these otherwise highly recalcitrant nutrient pools. Ectomycorrhizal communities are species-rich, with well over a hundred ECM species having been documented in monodominant forests, and dozens or more on individual trees. Our knowledge of the respective ecological niches of ECM fungi is poor, but there is ample evidence suggesting that discrete, non-overlapping niches of habitat preference and nutrient acquisition exist for some species. As atmospheric CO2 concentrations rise, forest growth and trees' nutrient demands may increase. The Earth's atmospheric concentrations of CO2 are increasing due to fossil fuel combustion, agriculture, and deforestation, and are predicted to continue to rise even if we arrest the increasing rate of anthropogenic CO2 emissions. A number of studies have shown that plants grow faster and fix more CO2 when CO2 concentrations are increased above ambient levels. This increased growth, however, is dependent on increased nutrient uptake to support the increased standing plant biomass. There is evidence that forests are responding to the increased nutrient demand caused by this CO2-stimulated growth enhancement by increasing root growth and developing a deeper distribution of roots.

Anthropogenic nitrogen pollution threatens to alter the productivity and carbon storage of temperate and boreal forests. Anthropogenic nitrogen pollution (ANP) from energy generation, transport, and agriculture has more than doubled the inputs of nitrogen to terrestrial ecosystems. Emissions of the other major component of acid rain, sulfur, were successfully reduced in the early 1990s, and public attention to acid rain has since diminished greatly. ANP, however, has either remained steady or increased somewhat in the developed world, while it has risen sharply, and is predicted to rise even more sharply over the 21st century, in the developing world. Nitrogen put into the atmosphere by transport and energy generation returns to earth as HNO3 and can fall as wet or dry deposition many hundreds of miles from pollution sources. Soil nitrogen status is, for temperate and boreal forests, the dominant edaphic factor controlling forest productivity and shaping species composition. Anthropogenic nitrogen pollution has facilitated invasive species establishment in many forests of the temperate and boreal zones and contributed to widespread species loss. There is ample evidence that moderate levels of ANP may significantly increase the net primary productivity of temperate forests. Beyond a certain amount of accumulated ANP, however, forest productivity may drop sharply as a result of soil acidification and of excess N inputs leaching out other essential nutrients, which then become limiting to forest productivity. This shift from nitrogen limitation to limitation or co-limitation by phosphorus, potassium, or calcium due to prolonged ANP has already been observed in a number of forests in eastern North America. Thus, the continued productivity of forests sustaining heavy nitrogen deposition will become dependent on the uptake of these mineral-derived nutrients. 
Mineral weathering increases the supply of these nutrients and neutralizes the acidifying effects of nitrogen deposition. Decreased belowground carbon allocation equates to decreased inputs of carbon into deep soil; such carbon inputs may lead to longer-term soil carbon retention than aboveground litter inputs. This decreased belowground carbon allocation also has profound effects on mycorrhizal relations. Understanding how nitrogen deposition and elevated atmospheric CO2 concentrations will affect forest productivity and soil carbon storage is essential to predicting how future anthropogenic emissions of carbon will affect the Earth's climate. 

At present, an amount of carbon equivalent to 600% of our annual CO2 emissions is taken up by the planet's terrestrial biota each year, the majority in forests. Even small increases in forest productivity could be a major negative feedback to greenhouse gas-induced climate change. There is five times as much carbon stored in soils as there is in living plant biomass; a change of just 1% in soil carbon pools is equal to 3 years of anthropogenic carbon emissions. There is a wide variety of effects of anthropogenic emissions of N and C that may affect these huge stocks of soil carbon, with the net effect, increase or decrease, very much unclear. Ectomycorrhizal communities may play a major role in determining how forest productivity and soil carbon stocks are affected by anthropogenic carbon and nitrogen emissions. Elevated CO2 has been found to alter ectomycorrhizal community composition and increase mycorrhizal colonization. Anthropogenic nitrogen pollution has been found to alter ectomycorrhizal community composition, decrease ECM diversity, and decrease colonization intensity. The potential loss of ECM species from nitrogen deposition reduces forest biodiversity and may represent a reduction in forests' resiliency to future environmental change. ECM represent a very large sink for fixed carbon; studies have found that more than 60% of recent carbon assimilation and net primary production may be allocated to ectomycorrhizal symbionts, though most estimates are closer to 15%. Reductions in C allocation to ECM may significantly reduce soil C storage and serve as a positive feedback to global change. The reductions in carbon allocation to ectomycorrhizae observed under nitrogen addition may continue as more N deposition occurs, or may level off as other nutrients become limiting to forest growth. 
The increase in belowground carbon allocation observed with elevated CO2 may continue as global CO2 levels increase, or may level off or reverse if plants become sufficiently nutrient limited that carbon fixation rates are reduced.
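The pool sizes quoted above can be put on a common footing by expressing everything in units of one year's anthropogenic emissions. This back-of-envelope sketch uses only the ratios stated in the text; the derived pool sizes are illustrative, not independent estimates.

```python
# Express the stated carbon pools in units of one year's anthropogenic
# CO2 emissions (E = 1). All ratios come from the text above.
E = 1.0

biotic_uptake = 6.0 * E            # "600% of annual emissions" taken up per year
soil_carbon = 3.0 * E * 100.0      # 1% of soil carbon = 3 years of emissions
plant_biomass = soil_carbon / 5.0  # soils hold five times the carbon of biomass

# Implied: soils hold ~300 years' worth of current emissions, and living
# plant biomass ~60 years' worth, which is why even small fractional
# changes in these pools matter at the scale of annual emissions.
```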

If the ectomycorrhizal community shifts observed under elevated CO2 and nitrogen inputs represent a shift towards ectomycorrhizal species that are better able to provide the nutrients most limiting to plant growth, then forests may adapt to their shifting nutrient demands and continue to increase in productivity in response to increasing CO2 levels and nitrogen deposition. If, on the other hand, these shifts in community composition and reductions in mycorrhizal colonization reflect temperate and boreal forests' adaptation to limitation by nitrogen, and only nitrogen, then increasing areas of forest may experience reduced productivity in response to continued nitrogen deposition, and the fertilization effect from CO2 enrichment will likely decrease as forests become more severely nutrient limited. My dissertation attempts to shed light on which of these two scenarios is likely to unfold over the coming decades of continued anthropogenic global change. In chapter one, I investigated the effects of nitrogen addition on ectomycorrhizal community composition and colonization in a deciduous forest. Very high levels of nitrogen fertilization significantly changed ectomycorrhizal community composition, decreased ectomycorrhizal diversity, and shifted the ectomycorrhizal distribution towards the mineral soil. These results suggest that the ectomycorrhizal community may be shifting to meet the changing nutrient demands of the forest, and they outline a potential mechanism for increased soil carbon storage under anthropogenic nitrogen deposition. Based on the high fungal diversity found in the mineral soil, and the fact that ectomycorrhizal abundance in the mineral soil increased in response to nitrogen deposition, I decided to investigate how the heterogeneous distribution of nutrients in mineral soil determines fungal species distribution. In chapter two, I tried to assess which soil properties determine fungal species distribution. 
Our sampling design prevented us from assessing the role of some of the chemical properties examined, but carbon content and depth emerged as the most influential soil properties determining fungal community composition. 

Calcium content also appeared to be important in determining fungal community composition. While pure culture studies have shown that ectomycorrhizal fungi vary in their ability to stimulate mineral weathering and take up mineral nutrients, the observed species shifts in ectomycorrhizal communities can only reflect shifting nutrient demands from host plants if plants can allocate carbon to the mycorrhizal fungi that are providing the most mineral nutrients. In chapter three, I present a literature review on the current state of knowledge of how plant nutrient demand drives fungal weathering. Within the plant physiology literature there is evidence that plants may be able to respond to phosphorus and potassium limitation with increased carbon allocation to the mycorrhizal fungi providing those nutrients. Magnesium limitation reduces belowground carbon allocation, and the effects of calcium limitation on carbon allocation are unclear. In studies of ectomycorrhizal weathering there is a distinct lack of explicit testing of the role of host nutrient status in driving fungal weathering. I conclude by making a number of recommendations for how future studies can address this important question. For the fourth chapter of this dissertation, I investigated how elevated CO2 affects plant growth, biotic weathering, and organic acid exudation, as well as the roles of plants, their ectomycorrhizal symbionts, and organic acids in stimulating mineral weathering. Elevated CO2 increased plant growth but did not increase mineral weathering. Pine seedlings, but not ectomycorrhizae, significantly increased mineral weathering, though there was some indication that one of the two ectomycorrhizal species examined, Piloderma fallax, may have stimulated mineral weathering. 
These results do not support the hypothesis that increased nutrient demand by plants, caused by increased CO2 availability, will stimulate weathering, though our system's departures from forest soil conditions hamper our ability to directly relate our results to processes in forested ecosystems. In my doctoral research I set out to determine whether the ecological, chemical, and physiological nature of the ectomycorrhizal symbiosis will allow ectomycorrhizal communities to shift their functioning in accordance with the changing nutrient demands of forests experiencing global change. My research indicates that ectomycorrhizal communities may shift to accommodate the shifting nutrient demands of forests undergoing sustained heavy nitrogen deposition, but it failed to find an effect of elevated CO2 on mycorrhizal fungi or biotic weathering. I also identified a number of ways in which future studies could address these questions in a more targeted, verifiable manner. Anthropogenic nitrogen pollution is a global problem that threatens the ecological integrity of many terrestrial and aquatic ecosystems. Anthropogenic nitrogen pollution from energy generation, transport, and agriculture has more than doubled the inputs of nitrogen to terrestrial ecosystems. Emissions of the other major component of acid rain, sulfur, were successfully reduced in the early 1990s, and public attention to acid rain has since diminished greatly. Anthropogenic nitrogen pollution, however, has either remained steady or increased somewhat in the developed world, while it has risen sharply, and is predicted to rise even more sharply over the 21st century, in the developing world. Nitrogen pollution from agriculture either leaches into streams, lakes, and groundwater as nitrate or volatilizes off of fields and manure deposits to return to the earth in the form of ammonium. 
Nitrogen put into the atmosphere by transport and energy generation returns to earth as HNO3 and can fall as wet or dry deposition many hundreds of miles from pollution sources. As a result of both these processes, but primarily due to the more far-reaching HNO3, large forested regions are receiving inputs of HNO3 that threaten to dramatically alter their ecology and species composition and, when prolonged severe deposition occurs, reduce their productivity. Ectomycorrhizal fungi are essential to forest health and are particularly important to the nitrogen nutrition of temperate and boreal forests. They form intimate associations with tree roots, providing the roots with nutrients and receiving fixed carbon from their plant hosts. Ectomycorrhizae form symbioses with fewer than 3% of the world's plant species, but with many of the dominant trees of temperate and boreal forests, particularly those of the plant families Pinaceae and Fagaceae. Ectomycorrhizal communities are species-rich, with well over a hundred ECM species having been documented in monodominant forests, and dozens or more on individual trees. Ectomycorrhizae have been shown to transfer significant amounts of P, Mg, Ca, and K to their plant hosts, but their provision of nitrogen is generally considered their primary contribution to plant health. Studies have shown that ECM may provide up to 80% of a host plant's total N uptake. The extraradical mycelia of ECM greatly increase the volume of soil that roots can exploit. Through the use of a diverse suite of enzymes, ECM may be able to solubilize and take up nitrogen from organic N pools that roots cannot utilize.