For test management, the Raspberry Pi runs another thread responsible for converting the dataset into EEG signals that are then acquired by the ADC. To avoid data overlap between writing the samples (DAC) and reading them (ADC), a synchronization method is used between the two threads: the ADC thread waits to read until the DAC has finished writing, and the DAC thread waits to write new data until the ADC has finished reading the previous sample. The ADC thread stores the read samples in a buffer, from which the classification thread reads them. Figure 8 illustrates the block diagram of the system software. As for the hardware design of this work, we demonstrate a proof-of-concept EEG-based real-time identification system on a Raspberry Pi that integrates all the components needed for the practical use of a subject identification system. The threads execute in parallel without problems, and their synchronization allows them to complete their tasks without errors. The MSE is low relative to the range of the data, conveying high similarity between the original dataset and the data generated by the DAC, which demonstrates the reliability of the results obtained during the model evaluation stage. Finally, the Raspberry Pi provides portability and allows the system to be integrated into a network, so the information is accessible from anywhere. We plan to develop a system for EEG acquisition and integrate it with the current Raspberry Pi-based system; a dedicated acquisition system would eliminate the DAC and the need for a pre-stored dataset.
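The hand-off between the two converter threads can be illustrated with a short sketch. The implementation language and driver APIs of the original system are not given here, so write_dac() and read_adc() below are hypothetical stand-ins for the actual converter drivers; the alternation logic itself follows the description above.

```python
# A minimal sketch of the DAC/ADC hand-off, assuming a Python implementation;
# write_dac() and read_adc() are hypothetical placeholders for the drivers.
import threading

def write_dac(sample):      # stand-in for the real DAC driver call
    pass

def read_adc():             # stand-in for the real ADC driver call
    return 0.0

cond = threading.Condition()
sample_written = False      # True after the DAC writes, False after the ADC reads
buffer = []                 # read samples, consumed by the classification thread

def dac_thread(dataset):
    global sample_written
    for sample in dataset:
        with cond:
            # Wait until the ADC has consumed the previous sample.
            cond.wait_for(lambda: not sample_written)
            write_dac(sample)
            sample_written = True
            cond.notify_all()

def adc_thread(n_samples):
    global sample_written
    for _ in range(n_samples):
        with cond:
            # Wait until the DAC has written a new sample.
            cond.wait_for(lambda: sample_written)
            buffer.append(read_adc())
            sample_written = False
            cond.notify_all()

# Example usage: run both threads over a toy dataset.
data = [0.1, 0.2, 0.3]
t1 = threading.Thread(target=dac_thread, args=(data,))
t2 = threading.Thread(target=adc_thread, args=(len(data),))
t1.start(); t2.start(); t1.join(); t2.join()
```

The condition variable enforces the strict write-then-read alternation described above, so neither thread can overwrite or re-read a sample.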
The motivation for developing our device comes from a comparison of the different commercially available products. The devices we found were developed either only for acquisition or for more complex applications, and most of them are very costly. For this reason, we were motivated to develop our own end-to-end, low-cost, real-time device covering EEG acquisition from the minimum channels necessary for EEG biometrics, preprocessing, feature extraction, and subject identification. In the future, to improve our device, we will perform a deeper analysis of the hardware, studying power and time consumption and alternative technologies to minimize the cost and size of the device.

The civilian use of autonomous craft for scientific as well as commercial purposes has grown significantly in the last few years. A 2018 report from the National Oceanic and Atmospheric Administration detailed the agency's use of un-crewed systems (UxS). The number of types of un-crewed aerial vehicles (UAVs) employed by the agency doubled in a one-year period, and the total number of vehicles deployed increased by 38% over the same year. The agency's use of other UxS shows a similar trend. The same report details the use of UxS from space to the ocean floor and nearly every environment in between. UxS are also used to monitor dangerous environments: a New York Times article from 2021 described a novel autonomous surface vessel (ASV) that collected video imagery from within the eye of Hurricane Sam. There are examples in the literature of UAVs monitoring wildfires, and indeed commercial products exist for that purpose. UxS are in use in commercial sectors as well. In agriculture, for example, many types of systems exist for crop health monitoring, water use monitoring, crop phenotyping, and even autonomous weeding for organic crop production. Other commercial applications exist in film making, sports broadcasting, real estate sales, house cleaning, personal use, and more. UxS are even deployed on other planets.
NASA currently has an autonomous ground vehicle on Mars, the “Perseverance” rover, as well as a UAV specifically designed for the planet's thin atmosphere. Clearly, the use of these systems is widespread and growing. At the same time, the autonomy requirements for these systems are growing more challenging. The ASV cited in the New York Times article, for example, can be deployed on missions up to one year in duration, and the Mars rover mission duration is likely longer. The implication of these longer missions is that vehicles must have onboard path-finding capabilities, automated data capture and sample analysis, failure detection algorithms, data storage, and communication abilities, to name just a few. Processing power requirements for autonomous systems are increasing as well, driven by computationally expensive operations such as onboard computer vision, LiDAR sensor processing, and the evaluation of machine learning (ML) models for object detection, image classification, and natural language processing. Longer mission durations and navigating complex environments place additional demands on the onboard processors. Finally, interacting in multi-agent environments, that is, environments with multiple autonomous systems, adds dramatically to the complexity of the control system. Designing a modern autonomous system involves significant engineering effort and requires expertise in multiple areas. Given the widespread interest in and use of autonomous systems, there exists a need for a vehicle-agnostic controller, or autopilot, to enable research and development efforts for new vehicles and new vehicle types. This autopilot should be capable of real-time computation; easily modifiable in order to adapt to different vehicle configurations; and open source in order to be accessible to a wide audience of users and contributors.
Furthermore, the autopilot may form the real-time core of a larger, distributed control system that also includes a single board computer (SBC) for non-real-time tasks and a tensor processing unit (TPU) to enable onboard ML capabilities. This architecture allows a developer to take advantage of the advancements in computing for SBCs and TPUs, and it solves the issue of handling real-time and non-real-time tasks simultaneously. To enable a distributed control system, the autopilot must integrate with the external modules using a standard interface and a lightweight, extensible binary communication protocol. This architecture offers unique capabilities for autonomous vehicles, and several sub-problems can be specifically addressed with it. The first is the means to develop and integrate real-time algorithms, such as state estimation and vehicle control, on the autopilot. The embedded firmware operates using the simplest structure possible, with a hardware timer dictating sensor and control update intervals. This structure allows an algorithm or sensor measurement to be encapsulated in a single function and therefore replaced or modified with minimal disruption to the remainder of the firmware. The controller can thus be adapted to a new vehicle type, or used to evaluate a different state estimation algorithm, by changing one function and recompiling the firmware. Second, the SBC enables new capabilities for autonomous vehicles. Typically, the SBC runs a Linux operating system providing access to standard functionality such as file storage, internet access, camera integration, USB devices, and WiFi access points. These capabilities allow the vehicle to store sensor and vehicle state information to files, run mission planning software, or connect to USB peripherals such as the TPU. The OS also opens up extensive capabilities provided by open source software. For example, scientific software such as SciPy, computer vision software, ML packages, or even convex optimization software are all readily available to download onto the SBC. These advanced computational packages enable capabilities such as optimal trajectory generation, mapping of landmarks, and advanced vision sensors. Finally, the TPU is a specialized processor designed to speed up ML inferencing tasks. The increased use of machine vision has made onboard ML capabilities an attractive feature for many types of autonomous systems. The TPU enables fast object detection or image classification without placing a computational burden on the SBC; object detection, for example, allows for identification of landmarks or obstacles.
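As a concrete illustration of the kind of onboard inferencing a USB TPU enables, the sketch below runs a compiled image-classification model through the TensorFlow Lite runtime with the Edge TPU delegate. This is only one plausible setup: it assumes a Coral-style USB accelerator, the tflite_runtime package, and a model file named model_edgetpu.tflite, none of which are specified in this work.

```python
# A hedged sketch of onboard ML inferencing on a Coral-style USB TPU;
# the model path, input data, and loop rate are illustrative assumptions.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load the compiled model and attach the Edge TPU delegate so that
# supported ops run on the accelerator rather than the SBC's CPU.
interpreter = Interpreter(
    model_path="model_edgetpu.tflite",  # hypothetical model file
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

# A dummy frame standing in for a camera image; a real system would
# feed resized camera frames here at the vision update rate.
frame = np.zeros(input_detail["shape"], dtype=input_detail["dtype"])

interpreter.set_tensor(input_detail["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_detail["index"])  # class scores
print("top class:", int(np.argmax(scores)))
```

Because the delegate offloads the heavy tensor operations, the SBC remains free for mission planning, logging, and communication tasks.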
The TPU integrates seamlessly with the SBC via a USB interface and, like the SBC, runs open source software where possible. In summary, the increasing demands on autonomous systems and the technological advances in SBCs, TPUs, and sensing technology suggest that a modular, distributed control system that includes a dedicated real-time autopilot is needed for the development of new autonomous vehicle classes. Small, resource-constrained systems benefit the most from this architecture and motivate this work.

Given the proliferation of autonomous systems, it is somewhat surprising that real-time controllers are not more ubiquitous; few of the ones that do exist use open source firmware, and fewer still have open source hardware. The most promising is a vehicle-agnostic control system, the modular rapid prototyping system R2P. The project hardware and software are both open source. The system is designed so that each sensor or actuator is a standalone module called a node, and the authors have developed a publisher-subscriber middleware, a protocol called RTCAN, to connect the nodes. A series of connected nodes forms the robot architecture. One downside to this approach is that the developer is restricted to the modules that have been developed for the system; that is, the system cannot communicate directly with off-the-shelf sensors, nor can new modules easily be created by the user. Furthermore, each module is separately controlled by its own microcontroller and has an RJ45 wired connector, so a complex system rapidly becomes unwieldy, with many small boards wired together. Another downside for optimal real-time performance is the use of a real-time operating system (RTOS), specifically ChibiOS/RT. This is a powerful, lightweight, and widely used RTOS with a hardware abstraction layer that allows peripheral drivers to be reused across different hardware platforms. However, modifying the firmware of a given module to add a custom algorithm may be difficult to debug for real-time performance. For example, the inertial measurement unit module has its own attitude estimation algorithm encoded in it; if a developer needs a different attitude estimation algorithm, it may be challenging to implement it and still guarantee the latency needed for the application. Nevertheless, this system gives robot developers a path to quickly build and test a prototype robot, particularly if autonomy is not required and the size of the control system is not a factor.

Another open source autonomous vehicle platform found in the literature is the F1/10 platform. Although the project labels itself an autonomous cyber-physical system platform, it is a 1/10-scale race car employing an NVIDIA Jetson TX2 SBC running the Robot Operating System (ROS) middleware. While this is a powerful SBC and ROS is a widely adopted middleware for robots, it is not real-time. In fact, the recent launch of ROS 2 is due in part to the need for lower-latency operation, a closer approximation to real time, although it is still not strictly real-time because it continues to operate on the Linux OS. In any case, this project is dedicated to a specific racing platform and therefore is not easily adapted for other purposes.

Aerial vehicles provide the richest source of real-time controllers, some of which have been adapted to different classes of vehicles. Table 1.1, adapted from a recent survey of open source UAV hardware controllers, lists the ones active as recently as 2018.
Most of the platforms listed that use an STM microcontroller are designed to operate with an RTOS as the middleware, with the autopilot firmware loaded over it. The exception is the Chimera, which uses the Paparazzi autopilot firmware; Paparazzi offers both an RTOS-based firmware and a ‘bare metal’ implementation. The most prevalent autopilot firmware packages are PX4, Ardupilot, and LibrePilot. The other devices in the table, FlyMaple, APM2.8, Erle-Brain, and PXFmini, are no longer available. Both PX4 and Ardupilot support non-aerial vehicles to some extent. However, modifying firmware within the RTOS is notoriously challenging due to the complexity of the code base. It is particularly difficult to implement a new vehicle type, and also challenging to guarantee that latency requirements are met when modifying the estimation or control algorithms. These firmware packages are better suited to projects where modifying any of the underlying algorithms is not desired. The topic of an RTOS versus a bare-metal application merits further discussion and will be covered in a later chapter.

There are many papers on self-driving cars that describe vehicle control systems, but very few discuss ones suitable for resource-constrained vehicles. For example, one work demonstrates a distributed architecture for a full-size autonomous racecar that requires computational resources far beyond those available to resource-constrained systems. One exception, however, is a study that reviews four different architectures with a focus on how resource-constrained systems can inform design choices for full-size vehicles.
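To make the bare-metal, timer-driven structure described earlier concrete, the sketch below mirrors the firmware's main loop: a timer fixes the update interval, and each algorithm (sensor read, state estimation, control) is encapsulated in a single replaceable function. The actual firmware runs as embedded code on the microcontroller; this Python rendering, with placeholder function bodies, is only meant to show the structure, not the real implementation.

```python
# A structural sketch of a timer-driven, bare-metal-style main loop.
# Function names and bodies are placeholders, not the project's API.
import time

UPDATE_PERIOD = 0.01  # 100 Hz loop rate; on hardware, a timer interrupt sets this

def read_sensors():
    # Stand-in for IMU/GPS driver calls.
    return {"gyro": (0.0, 0.0, 0.0), "accel": (0.0, 0.0, -9.81)}

def estimate_state(measurements):
    # Swap in any estimator here (e.g., complementary or Kalman filter)
    # without touching the rest of the loop.
    return {"roll": 0.0, "pitch": 0.0, "yaw": 0.0}

def compute_control(state):
    # Swap in any vehicle-specific control law here.
    return {"motor": 0.0}

def main_loop():
    next_tick = time.monotonic()
    while True:
        next_tick += UPDATE_PERIOD
        m = read_sensors()
        x = estimate_state(m)
        u = compute_control(x)
        # Apply u to the actuators here; on the autopilot this is a PWM call.
        time.sleep(max(0.0, next_tick - time.monotonic()))
```

Because each stage is a single function called at a fixed rate, replacing an estimator or control law is a one-function change followed by a recompile, which is the flexibility argued for above in contrast to modifying an RTOS-based code base.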