
The Mask R-CNN model produced an AP of 0.8996. Results were not stratified by size category. This result is 8 AP percentage points higher than the Mask R-CNN results on the Nebraska dataset for the medium size category. This could be due to a number of factors. First, the imagery used was 0.26 meter, which is much higher resolution than Landsat 5's 30 meter resolution; each building instance therefore contains a greater number of pixels with which to compute informative features. Second, a larger model with more parameters, ResNet-101, was used, which increases learning capacity as well as the tendency to overfit. Finally, some figures referenced in Wen et al. indicate that bounding boxes encompass multiple individual buildings. If these groups of buildings were used to represent a single instance when calculating mAP, the score would be higher than if each building was considered a separate instance.

Deines et al. used a random forest model to classify irrigated pixels across the HPA using spectral indices, GRIDMET precipitation, SSURGO soil water content, a topographic DEM, and other features. The random forest model was trained on 40% of the data, validated on 30%, and tested on the remaining 30%. Since this method classified pixels rather than objects, the metrics are not directly comparable; nevertheless, the result shows that the model was quite successful in accurately mapping irrigated area across a wide climatological gradient. Pixel-wise, overall accuracy for classified irrigated area was 91.4%, and results were visually assessed to correspond well with GCVI computed from Landsat scenes. Given that this model performed so well, and that feature importance can be inspected more easily for random forest models than for CNN models, it is useful to determine whether the features that were successful in the random forest model were shared by the Mask R-CNN model, so that future models may take advantage of both approaches. The top six most important features in the random forest model, in order of importance, were: latitude, slope, longitude, growing degree days, minimum NDWI, and the day-of-year at peak greenness.
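As a minimal sketch of how such an importance ranking can be obtained, the code below trains a scikit-learn random forest on placeholder data carrying the six features named above and ranks them by impurity-based importance. The data, forest size, and split mechanics are illustrative assumptions, not the Deines et al. implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Features named in the text, in the order reported for importance.
feature_names = ["latitude", "slope", "longitude",
                 "growing_degree_days", "min_ndwi", "peak_greenness_doy"]

# Placeholder data: rows are pixels, columns are the features above.
rng = np.random.default_rng(0)
X = rng.random((1000, len(feature_names)))
y = rng.integers(0, 2, size=1000)  # 1 = irrigated, 0 = not irrigated

# 40% train, then split the remainder evenly into 30% validation / 30% test.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)

# Rank features by impurity-based importance, most important first.
for name, imp in sorted(zip(feature_names, model.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {imp:.3f}")
```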

The authors note that the random forest model uses latitude and longitude to separate scenes by climatological gradients, which can improve detection as different climatological areas contain different agricultural patterns. Of these features, only minimum Normalized Difference Water Index can be directly learned by Mask R-CNN, though the model likely recognizes scene qualities, such as bare soil, that are correlated with a drier climatology. This indicates that pixel-wise features besides reflectance can contribute as much as, or more than, reflectance alone. Future CNN-based approaches could make use of not just reflectance information but also the features named above. However, this would currently require training CNN models from scratch, which could be prohibitively expensive, increase training times, or lead to lower accuracies, since weights are initialized randomly and training from scratch may not produce features as informative as those arrived at after pretraining. The results from Deines et al. do not separate individual fields from each other; many clearly distinct fields are mapped as a single irrigated area unit. Their method is useful for tracking pixel-level changes in irrigation, yet it cannot be used for tracking field-level statistics from year to year.

The amount of training data was the second largest factor in determining the resulting performance metrics. Decreasing the amount of training data by 50% decreased performance metrics by a substantial amount. The large performance drop for the small size category, an 11.4 percentage point difference in AP, indicates that more training data is especially important for less common center pivot size categories, which highlights the importance of using as much training data as possible when training CNN-based models. Using a NIR-R-G versus an RGB composite did not affect the validation AP, which is expected since center pivots are visually defined by shape rather than spectra. Correct preprocessing choices were also important for achieving good results, the most important of which was converting image values to 8-bit integers before normalizing by the mean. This step clips the very large values present in Landsat 5 TM scene chips caused by high reflectance from snow and clouds, and adjusts the range of values of the training set to more closely match that of the pretrained model's original dataset, in this case ImageNet. Models trained on the original TIFF imagery performed very poorly even when these images were normalized by the mean.
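A minimal sketch of this preprocessing step is shown below, assuming percentile-based clipping and ImageNet-style per-channel means; the specific percentile bounds and mean values are illustrative assumptions rather than the exact procedure used in this work. The key design point is that clipping before rescaling prevents a handful of very bright snow or cloud pixels from compressing the useful dynamic range into a few 8-bit values.

```python
import numpy as np

def to_uint8(chip: np.ndarray, lo_pct: float = 2.0, hi_pct: float = 98.0) -> np.ndarray:
    """Clip extreme values (e.g., snow/cloud reflectance) and rescale to [0, 255]."""
    lo, hi = np.percentile(chip, [lo_pct, hi_pct])
    clipped = np.clip(chip, lo, hi)
    scaled = (clipped - lo) / max(hi - lo, 1e-6) * 255.0
    return scaled.astype(np.uint8)

def normalize_by_mean(chip_uint8: np.ndarray, channel_means: np.ndarray) -> np.ndarray:
    """Subtract per-channel means after the 8-bit conversion."""
    return chip_uint8.astype(np.float32) - channel_means

# Example with a random 3-band chip and assumed ImageNet-style RGB means.
chip = np.random.rand(256, 256, 3).astype(np.float32) * 10000.0  # raw TIFF-like values
means = np.array([123.7, 116.8, 103.9], dtype=np.float32)
normalized = normalize_by_mean(to_uint8(chip), means)
```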

Adjusting other hyperparameters did not affect model performance as much as converting each image sample to 8-bit integers and using the largest training data size. Examples of hyperparameters tested include doubling the number of region proposals generated during the training stage and applying more aggressive non-max suppression to prune low-confidence region proposals. I expected that more region proposals and more aggressive pruning of low-confidence proposals would lead to a better performing model; however, making these adjustments did not impact model performance, likely because the default number of generated regions was sufficient to intersect with most potential center pivot instances in each sample.

These results clearly show that segmenting small center pivots will be a challenge for Landsat RGB imagery, given its coarse resolution and limited ability to resolve smaller boundaries. However, Figure 13 shows that in a scene where many fields are in full cultivation, they are for the most part all accurately mapped. This implies that inaccuracies from misdetected small center pivots in a pre-cultivation stage could be remedied by applying the model to multiple dates and merging the output results in order to capture center pivots in a stage of cultivation during the growing season. The results for the medium category are overall much more accurate, though even in this size category, Figure 11 shows many false negatives due to pivots being in a pre-cultivation stage or appearing half in cultivation and half in pre-cultivation. Figure 11 also highlights another source of error in segmenting center pivots: corner areas surrounding center pivots that are also cultivated. While these regions are not annotated in the Nebraska dataset as belonging to the center pivot, they could be irrigated either along with the center pivot as a single field, using an extension to the irrigation apparatus, or separately. This has relevance for accurately estimating individual field water use, and highlights the difficulty of resolving individual units using satellite imagery. Center pivots composed of multiple sections will be hard to segment, given the limited amount of training data that contains similar representations. As with small center pivots in pre-cultivation stages, one approach to segmenting these could be to apply a model to multiple dates in a growing season and then merge the highest confidence detections in order to reduce the number of false negatives.
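As a hedged sketch of this multi-date merging idea, the code below pools detections from several dates and keeps the highest-confidence detection among overlapping masks, i.e., a greedy non-max suppression across dates. The (score, mask) representation and the IoU threshold are assumptions for illustration, not part of the trained model.

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over union between two boolean masks."""
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0

def merge_detections(per_date_detections, iou_thresh: float = 0.5):
    """Greedy, confidence-ranked merge across dates: keep a detection only if
    it does not substantially overlap an already-kept, higher-scoring one."""
    pooled = sorted((d for dets in per_date_detections for d in dets),
                    key=lambda det: -det[0])  # (score, mask) pairs, best first
    kept = []
    for score, mask in pooled:
        if all(mask_iou(mask, kept_mask) < iou_thresh for _, kept_mask in kept):
            kept.append((score, mask))
    return kept

# Example: two dates, each detecting the same (toy) pivot with different scores.
m = np.zeros((8, 8), dtype=bool); m[2:6, 2:6] = True
merged = merge_detections([[(0.7, m)], [(0.9, m)]])  # keeps only the 0.9 detection
```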

Because the dataset contained 52,127 instances of center pivots, it was infeasible to examine each with respect to 32 Landsat scenes as an image reference. In some cases, existing center pivots went unlabeled. This impacted both model training, where it led to lower performance due to greater within-class variability and greater similarity between the center pivot and background categories, and the evaluation metrics, which are less certain because results were evaluated on an independent test subset of the reference dataset. Furthermore, center pivots as a detection target represent some of the most visually distinct agricultural features, and therefore these models would be expected to perform more poorly on small agricultural fields that do not conform to such a consistent shape and range of sizes. Finally, the reference dataset for Nebraska used a broad interpretation of center pivots that includes center pivots in multiple stages of cultivation. This led to considerable within-class variability during training, which impacted model performance. This limitation could be handled better by assigning more specific semantic categories to the center pivot labels based on greenness indices, in order to distinguish between different developmental stages (see the sketch below). Alternatively, the model produced from the original dataset could be applied to multiple dates throughout the season and the results merged based on detection confidence in order to map pivots when they are at their most discernible, i.e., cultivated.
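A minimal sketch of such greenness-based relabeling follows, using GCVI (NIR/green - 1) averaged within each pivot mask to assign a developmental-stage subcategory. The cultivation threshold is an illustrative assumption, not a value from this study.

```python
import numpy as np

def gcvi(nir: np.ndarray, green: np.ndarray) -> np.ndarray:
    """Green Chlorophyll Vegetation Index: NIR / green - 1."""
    return nir / np.maximum(green, 1e-6) - 1.0

def stage_label(pivot_mask: np.ndarray, nir: np.ndarray, green: np.ndarray,
                cultivated_thresh: float = 1.5) -> str:
    """Split a center pivot label into a developmental-stage subcategory by
    mean greenness inside its mask; the threshold is an assumed value."""
    mean_greenness = float(gcvi(nir, green)[pivot_mask].mean())
    return "cultivated" if mean_greenness > cultivated_thresh else "pre_cultivation"

# Example with toy bands: a bright-NIR, cultivated-looking pivot region.
nir = np.full((8, 8), 0.5, dtype=np.float32)
green = np.full((8, 8), 0.1, dtype=np.float32)
mask = np.zeros((8, 8), dtype=bool); mask[2:6, 2:6] = True
print(stage_label(mask, nir, green))  # -> "cultivated" (GCVI = 4.0)
```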
Community Supported Agriculture (CSA) connects farmers and the consumers of their products. In the original CSA model, members support a farm by paying in advance, and in return they receive a share of the farm's produce; members also share in production risks, such as a low crop harvest following unfavorable weather. An important social invention in industrialized countries, Community Supported Agriculture addresses problems at the nexus of agriculture, environment and society. These include a decreasing proportion of the "food dollar" going to farmers, financial barriers for new farmers, large-scale scares from foodborne illness, resource depletion and environmental degradation. Together with farmers markets, farm stands, U-picks and agritourism, CSAs constitute a "civic agriculture" that is re-embedding agricultural production in more sustainable social and ecological relationships, maintaining economic viability for small- and medium-scale farmers and fulfilling the non-farm-based population's increasing desire to reconnect with their food. The first two CSAs in the United States formed in the mid-1980s on the East Coast. By 1994, there were 450 CSAs nationally, and by 2004 the number had nearly quadrupled to 1,700. There were an estimated 3,637 CSAs in the United States by 2009. This rapid expansion left us knowing little about CSA farmers and farms and raised questions about their social, economic and environmental characteristics. Knowing these features of CSAs would allow for more precise policy interventions to support and extend these kinds of operations, and could inform more in-depth analyses, in addition to giving farmers and the public a better understanding of them.

We conducted a study of CSAs in 25 counties in California's Central Valley and its surrounding foothills, from Tehama in the north to Kern in the south, and Contra Costa in the west to Tuolumne in the east. The valley's Mediterranean climate, combined with its irrigation infrastructure, fertile soil, early agrarian capitalism and technological innovation, has made it world renowned for agricultural production. In addition to its agricultural focus, we chose this region because we wanted to learn how CSAs were adapting to the unique context of the Central Valley. Many of the region's social characteristics, including relatively low incomes, high unemployment rates and conservative politics, differ from those in other regions where CSAs are popular, such as the greater San Francisco Bay Area and Santa Cruz.

An initial list was compiled from seven websites that list CSAs in the state: Biodynamic Farming and Gardening Association, California Certified Organic Farmers, Community Alliance with Family Farmers, Eat Well Guide, LocalHarvest, the Robyn Van En Center and Rodale Institute. Of the 276 CSAs that we found, 101 were in our study area. We contacted them by e-mail and phone. It became evident that some did not correspond, even loosely, to the definition of a CSA in which members share risks with the farm and pay in advance for a full season of shares. As the study progressed, we revised our definition of a CSA to mean an operation that is farm based and makes regular direct sales of local farm goods to member households. We removed some CSAs that did not meet the revised definition, based on operation descriptions on their websites or on details provided by phone or e-mail if a website was not available. Some interviews that we had already completed could not be used for our analysis because the operations did not meet the revised definition. We also augmented the initial list with snowball sampling by asking participating farmers about other CSAs, which added 21 CSAs. Of these 122 farms, 28 were no longer operating as CSAs, seven turned out to be CSA contributors without primary responsibility for shares and 13 did not meet our revised CSA definition. We called the 28 CSAs no longer operating "ghost CSAs" because of their continued presence on online lists.