BY SUMEHRA TAZREEN AND AARIB HAIDER

Autonomous vehicles have promised solutions to some of our most consequential problems, including car accidents, hours spent stuck in traffic, the expense of transportation infrastructure and the conversion of urban space into parking lots. With the leading company Waymo having logged over 20 million miles of autonomous driving on public roads (Waymo, 2024), the technology underlying autonomous vehicles has advanced significantly since its inception. Judging by the current development trajectory, increased use of advanced driver-assistance systems (ADAS) could reduce car accidents in Europe by 15% by 2030 (Seymour, 2018). These advancements have benefited immensely from the application of artificial intelligence (AI), which is widely used in perception, sensor fusion, path planning and object detection, to name a few.

Perception:

  • High Resolution Cameras:

High-resolution cameras play an integral role in visual perception for autonomous vehicles, enabling them to capture detailed images of their surroundings. Cameras for current advanced driver-assistance systems and fully autonomous cars include rear-facing, surround-view and forward-facing units (Sahin, 2019). With these, the vehicle can detect and classify objects such as pedestrians, other vehicles, road markings and obstructions. The inputs are then run through image-processing algorithms, most of which are powered by convolutional neural networks (CNNs), which analyse the images to provide real-time data for efficient decision-making (Bansal, 2023).
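
As a rough illustration of the CNN step, the sketch below defines a tiny PyTorch classifier that maps a camera frame to one of a few hypothetical object classes. The architecture and class list are illustrative assumptions; production perception networks are far larger and typically perform detection rather than whole-frame classification.

```python
# A minimal sketch of a CNN image classifier of the kind used in
# camera-based perception. Architecture and class list are illustrative
# assumptions, not a production perception network.
import torch
import torch.nn as nn

CLASSES = ["pedestrian", "vehicle", "road_marking", "obstruction"]  # hypothetical

class TinyPerceptionCNN(nn.Module):
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyPerceptionCNN()
frame = torch.randn(1, 3, 64, 64)        # stand-in for one camera frame
logits = model(frame)
print(CLASSES[logits.argmax(dim=1).item()])
```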

  • Light Detection and Ranging (LIDAR):

Similarly, LIDAR is a highly promising technology used in autonomous vehicles to reduce the risk of collision while driving. The LIDAR system emits laser beams, with wavelengths ranging from 850 nm to 1550 nm (Aarab, 2023), and measures the time taken for them to reflect off an object and return to the sensor. This data is used to generate a high-resolution 3D map of the surrounding environment, providing precise distance measurements and spatial awareness.
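
To make the time-of-flight arithmetic concrete, here is a minimal sketch that converts measured round-trip times and beam angles into distances and a simplified single-plane point cloud; the sample values are illustrative.

```python
# Time-of-flight distance estimation, the arithmetic behind each LIDAR
# return: distance = (speed of light x round-trip time) / 2.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def returns_to_points(round_trip_times_s, azimuths_rad):
    """Convert round-trip times and beam angles into 2D points (a
    simplified, single-plane slice of the 3D map the sensor builds)."""
    distances = C * np.asarray(round_trip_times_s) / 2.0
    x = distances * np.cos(azimuths_rad)
    y = distances * np.sin(azimuths_rad)
    return np.stack([x, y], axis=1)

# Example: three returns at roughly 15 m, 30 m and 7.5 m
times = [1.0e-7, 2.0e-7, 0.5e-7]
angles = np.deg2rad([0.0, 45.0, 90.0])
print(returns_to_points(times, angles))
```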

  • Radars:

Akin to LIDAR, radars emit radio waves to detect objects and measure their velocity. They are particularly effective in adverse weather conditions, such as fog and rain, where the navigational capabilities of cameras and LIDAR may be hampered. Overall, radars help maintain safe distances from other vehicles and detect obstacles at longer ranges.
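
As a small illustration of the underlying physics, the snippet below converts a measured Doppler shift into a radial closing speed; the 77 GHz carrier is a common automotive radar band, and the figures are illustrative.

```python
# Radial velocity from the Doppler shift of a radar return:
# v = (delta_f * c) / (2 * f0), where f0 is the carrier frequency.
C = 299_792_458.0   # speed of light, m/s
F0 = 77e9           # 77 GHz, a common automotive radar band

def radial_velocity(doppler_shift_hz):
    return doppler_shift_hz * C / (2 * F0)

# A +5.1 kHz shift corresponds to roughly 10 m/s closing speed.
print(radial_velocity(5.1e3))
```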

  • Ultrasonic Sensors:

Almost identically, ultrasonic sensors operate by emitting sound waves and measuring distances based on their reflection off nearby objects, using the same time-of-flight arithmetic sketched above with the speed of sound in place of the speed of light. They are mainly used for short-range detection, making them vital for tasks such as parking assistance, obstacle detection and low-speed manoeuvres.

Sensor Fusion:

  • Kalman Filters:

Once the data has been collected from the input devices, it is processed through a number of algorithms, one of which is the Kalman filter. This is a mathematical algorithm which combines data from various sources to produce a single, precise estimate of the speed and location of the autonomous vehicle. Kalman filters effectively filter out external noise and improve the reliability of the sensor data, which is essential for precise control and navigation (Lacey, n.d.).
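
The sketch below is a minimal one-dimensional Kalman filter with a constant-velocity model: each step predicts the state forward and then corrects it with a noisy position measurement. The noise values are illustrative assumptions, not production tuning.

```python
# A minimal 1D constant-velocity Kalman filter: state is [position,
# velocity]; noisy position measurements are fused into a smoothed
# estimate. Noise values are illustrative assumptions.
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])     # state transition (constant velocity)
H = np.array([[1.0, 0.0]])          # we only measure position
Q = np.eye(2) * 0.01                # process noise
R = np.array([[0.5]])               # measurement noise

x = np.array([[0.0], [0.0]])        # initial state estimate
P = np.eye(2)                       # initial covariance

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [0.9, 2.1, 2.9, 4.2, 5.0]:  # noisy positions of a car at ~10 m/s
    x, P = kalman_step(x, P, np.array([[z]]))
print(x.ravel())  # filtered [position, velocity]
```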

  • Particle Filters:

Similarly, Particle Filters (also known as sequential Monte Carlo methods) are of significant importance due to their ability to estimate the state of a system as it evolves over time. In autonomous vehicles, particle filters can track the vehicle's location by maintaining a set of candidate states (particles) and re-weighting them based on sensor inputs and motion models.
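
A minimal sketch of this idea follows, assuming a one-dimensional road and a single landmark at a known position; all noise parameters and measurements are illustrative.

```python
# A minimal 1D particle filter: particles are candidate vehicle positions,
# moved by a noisy motion model and re-weighted by a range measurement
# to a landmark at a known position. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 1000
LANDMARK = 50.0                          # known landmark position, m

particles = rng.uniform(0, 100, N)       # initial belief: anywhere on the road

def step(particles, control, measured_range):
    # Motion update: move particles by the commanded distance plus noise
    particles = particles + control + rng.normal(0, 0.5, N)
    # Weight by how well each particle explains the (signed) range to the
    # landmark ahead
    expected = LANDMARK - particles
    w = np.exp(-0.5 * ((expected - measured_range) / 1.0) ** 2)
    w /= w.sum()
    # Resample: keep particles in proportion to their weights
    return rng.choice(particles, size=N, p=w)

particles = step(particles, control=2.0, measured_range=38.0)
print(particles.mean(), particles.std())   # estimate (~12 m) and its spread
```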

  • Bayesian Networks:

Furthermore, Bayesian Networks are probabilistic models that capture the relationships between different variables. They consist of nodes (which represent random variables) and edges (which depict conditional dependencies between variables); each node also has a corresponding probability function that quantifies the effect of its parents on the node. In autonomous vehicles, Bayesian Networks are used to model uncertainties in sensor data, generate probabilistic maps and support critical decisions under uncertainty.
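
As a toy illustration, the snippet below hand-rolls a two-node network: an "obstacle" node with a prior and a "detection" node whose conditional probability table depends on its parent, with Bayes' rule used for inference. All probabilities are made-up values.

```python
# A hand-rolled two-node Bayesian network: node "obstacle" with a prior,
# node "detection" whose CPT depends on its parent "obstacle". We infer
# P(obstacle | detection) with Bayes' rule. All probabilities are
# illustrative assumptions.
P_obstacle = 0.05                         # prior over the parent node
P_detect_given = {True: 0.90,             # sensor hit rate
                  False: 0.10}            # false-alarm rate

def posterior_obstacle(detected: bool) -> float:
    like_true = P_detect_given[True] if detected else 1 - P_detect_given[True]
    like_false = P_detect_given[False] if detected else 1 - P_detect_given[False]
    num = like_true * P_obstacle
    den = num + like_false * (1 - P_obstacle)
    return num / den

print(posterior_obstacle(True))   # ~0.32: one noisy detection is weak evidence
print(posterior_obstacle(False))  # ~0.006
```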

Simultaneous Localization and Mapping (SLAM):

  • Graph-Based SLAM:

Graph-based SLAM represents the SLAM problem as a graph: the nodes are the poses of the vehicle and nearby landmarks, while the edges are the constraints between them. Optimizing the entire graph yields the most consistent map and vehicle trajectory. Graph-based SLAM is also computationally efficient and scalable, making it well suited to large-scale environments (Grisetti et al., 2010).
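
The toy sketch below sets up a one-dimensional pose graph with two odometry edges and one loop closure, then solves the resulting least-squares problem. Real graph-based SLAM works on 2D/3D poses with iterative solvers, so this is only the skeleton of the idea; the measurements are illustrative.

```python
# A toy 1D pose graph: nodes are three vehicle poses, edges are relative
# constraints (two odometry measurements and one loop closure). Solving
# the linear least-squares system gives the most consistent trajectory.
import numpy as np

# Each edge: (from_node, to_node, measured displacement)
edges = [(0, 1, 1.0), (1, 2, 1.1), (0, 2, 2.0)]   # loop closure says 2.0

A = np.zeros((len(edges) + 1, 3))
b = np.zeros(len(edges) + 1)
for k, (i, j, meas) in enumerate(edges):
    A[k, i], A[k, j], b[k] = -1.0, 1.0, meas      # x_j - x_i = meas
A[-1, 0], b[-1] = 1.0, 0.0                        # anchor the first pose at 0

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)   # optimized poses, e.g. [0, ~0.97, ~2.03]
```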

  • Extended Kalman Filter (EKF) SLAM:

SLAM is an essential requirement for autonomous navigation, enabling vehicles to map their surroundings. EKF-SLAM extends the basic Kalman filter to handle non-linear motion and measurement models, allowing the vehicle to build a map of an unknown environment while simultaneously tracking its own location within that map; this is invaluable when navigating complex, dynamic environments.

Path Planning:

  • A* Search Algorithms:

The A* search algorithm is extensively used in autonomous vehicles for pathfinding and navigation. It finds the shortest path from a starting point to a target by scoring each node on both the cost already incurred to reach it and a heuristic estimate of the remaining cost to the target. Overall, A* is highly efficient and ensures the vehicle takes an optimal path, which is essential for route planning and time efficiency.
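
A compact grid-world sketch of A* follows, using a Manhattan-distance heuristic; the grid and unit step costs are illustrative. Note the comment showing that a zero heuristic recovers Dijkstra's algorithm.

```python
# A* on a small occupancy grid: f(n) = g(n) (cost so far) + h(n)
# (an admissible Manhattan-distance estimate to the goal). Setting
# h to zero turns this into Dijkstra's algorithm.
import heapq

GRID = ["....#",
        ".##.#",
        "....#",
        ".#...",
        "....."]          # '#' = blocked cell

def neighbors(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield nr, nc

def astar(start, goal):
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])  # heuristic; 0 -> Dijkstra
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in neighbors(*node):
            heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

print(astar((0, 0), (4, 4)))
```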

  • Dijkstra’s Algorithm:

Likewise, Dijkstra’s Algorithm is another popular pathfinding algorithm that finds the shortest path between nodes in a graph. Unlike A*, Dijkstra’s algorithm doesn’t use a heuristic; it explores paths systematically in order of accumulated cost (equivalently, A* with the heuristic set to zero, as noted in the sketch above). On one hand, this guarantees that the shortest path is found; on the other, it can be slower than A*, especially for large graphs. It is often used where optimal pathfinding is required without heuristic guidance.

  • Rapidly Exploring Random Tree (RRT):

Comparably, RRT is a path-planning algorithm designed for high-dimensional spaces. It rapidly explores the space by randomly sampling points and growing a tree of feasible paths. RRT is particularly useful for navigating complex environments with many barriers and obstacles, as it can efficiently find feasible paths even in highly cluttered spaces.
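
A minimal 2D RRT sketch with a single circular obstacle; the step size, bounds, obstacle and goal tolerance are illustrative assumptions.

```python
# A minimal 2D RRT: sample random points, extend the nearest tree node a
# fixed step toward each sample, and stop when the goal is reached.
import math, random

random.seed(1)
OBSTACLE = (4.0, 4.0, 2.0)        # circular obstacle: (cx, cy, radius)
STEP, GOAL, GOAL_TOL = 0.5, (9.0, 9.0), 0.5

def collides(p):
    cx, cy, r = OBSTACLE
    return math.hypot(p[0] - cx, p[1] - cy) < r

def rrt(start, iters=5000):
    nodes, parent = [start], {start: None}
    for _ in range(iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        theta = math.atan2(sample[1] - near[1], sample[0] - near[0])
        new = (near[0] + STEP * math.cos(theta), near[1] + STEP * math.sin(theta))
        if collides(new):
            continue
        nodes.append(new)
        parent[new] = near
        if math.dist(new, GOAL) < GOAL_TOL:       # goal reached: walk back
            path, n = [], new
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
    return None

path = rrt((0.0, 0.0))
print(f"{len(path)} waypoints" if path else "no path found")
```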

Decision Making:

  • Finite State Machines (FSM):

FSMs are a mathematical model of computation used to model the behaviour of autonomous vehicles. This is achieved by defining a set of states and the transitions between them, triggered by events or conditions. For example, an FSM might use states such as “driving” or “parking”, helping manage the vehicle’s response to various scenarios reliably through this structured approach.
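
A minimal sketch of such an FSM as a transition table keyed by (state, event) pairs; the states and events are illustrative.

```python
# A minimal FSM for high-level vehicle behaviour: states plus an
# event-driven transition table. States and events are illustrative.
TRANSITIONS = {
    ("parked",  "start_trip"):     "driving",
    ("driving", "obstacle_ahead"): "braking",
    ("braking", "obstacle_clear"): "driving",
    ("driving", "destination"):    "parking",
    ("parking", "parked_ok"):      "parked",
}

class VehicleFSM:
    def __init__(self):
        self.state = "parked"

    def handle(self, event):
        # Stay in the current state if the event is not defined for it.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

fsm = VehicleFSM()
for event in ["start_trip", "obstacle_ahead", "obstacle_clear", "destination"]:
    print(event, "->", fsm.handle(event))
```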

  • Markov Decision Processes (MDP):

Similarly, MDPs are a mathematical framework for decision-making in which outcomes are partly random and partly under the control of a decision-maker. In autonomous vehicles, MDPs can model decision-making processes that involve uncertainty, such as driving through unpredictable traffic.
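
To make this concrete, the sketch below runs value iteration on a made-up four-cell "road" MDP where a fast action sometimes slips; all probabilities, rewards and the discount factor are illustrative assumptions.

```python
# Value iteration on a toy MDP: a 4-cell road where "fast" moves two
# cells but sometimes slips, while "slow" reliably moves one.
STATES, GOAL, GAMMA = range(4), 3, 0.9

# P[state][action] = list of (probability, next_state, reward)
P = {
    s: {
        "slow": [(1.0, min(s + 1, GOAL), -1.0)],
        "fast": [(0.8, min(s + 2, GOAL), -1.0), (0.2, s, -2.0)],
    }
    for s in STATES
}

V = {s: 0.0 for s in STATES}
for _ in range(100):                       # repeated Bellman backups
    for s in STATES:
        if s == GOAL:
            continue                       # terminal state keeps value 0
        V[s] = max(
            sum(p * (r + GAMMA * V[s2]) for p, s2, r in outcomes)
            for outcomes in P[s].values()
        )

policy = {
    s: max(P[s], key=lambda a: sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[s][a]))
    for s in STATES if s != GOAL
}
print(V, policy)
```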

  • Reinforcement Learning (RL):

RL is a machine learning technique that trains software to make decisions that achieve optimal results. In autonomous vehicles, RL algorithms can optimize driving strategies, improve safety and enhance efficiency by learning from experience in simulated or real-world environments.

While theoretically ideal, its application poses several challenges. The overwhelming number of variables in state and action spaces makes it difficult for RL algorithms to learn efficiently. Moreover, there are safety and ethical concerns about exposing an autonomous vehicle to life-and-death scenarios. Lastly, RL algorithms require a large amount of data to ‘learn’, and collecting and processing such data has proven costly.
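
To make the learning loop concrete despite these caveats, here is a minimal tabular Q-learning sketch on a toy lane-change task; the environment, rewards and hyperparameters are all illustrative assumptions, far removed from a real driving stack.

```python
# Tabular Q-learning: act, observe reward and next state, update Q.
import random

random.seed(0)
ACTIONS = ["keep_lane", "change_lane"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = {(s, a): 0.0 for s in ("clear", "blocked") for a in ACTIONS}

def env_step(state, action):
    """Toy environment: changing lanes when blocked is rewarded,
    changing lanes for no reason is mildly penalised."""
    if state == "blocked":
        reward = 1.0 if action == "change_lane" else -1.0
    else:
        reward = -0.1 if action == "change_lane" else 0.5
    next_state = random.choice(["clear", "blocked"])
    return reward, next_state

state = "clear"
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward, nxt = env_step(state, action)
    # Q-learning update: move Q toward reward + discounted best next value
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt

print({k: round(v, 2) for k, v in Q.items()})
```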

Vehicle Control:

Autonomous vehicles comprise various modules responsible for different aspects of driving. Automated driving relies on control, divided into longitudinal control for speed tracking and lateral control for precise steering (Kebbati et al., 2022).

  • Longitudinal Control:

According to a course on autonomous vehicles, longitudinal control involves tracking a speed profile along a fixed path. The key components are adaptive cruise control, braking and acceleration. Adaptive cruise control (ACC) ensures the vehicle stays a safe distance behind the car ahead and does not exceed the speed limit. When the target speed changes, the controller compares it against the measured vehicle speed and adjusts throttle and brake commands to close the gap (Car and Driver Research, 2020).
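
A minimal PID speed controller sketch of this kind, driving a toy first-order vehicle model toward a set-speed; the gains and model are illustrative, not tuned values.

```python
# A PID speed controller of the kind used for longitudinal control:
# the error between target and measured speed drives a throttle/brake
# command. Gains and the toy vehicle model are illustrative.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.1
controller = PID(kp=0.5, ki=0.1, kd=0.05, dt=dt)
speed, target = 0.0, 20.0                 # m/s; e.g. an ACC set-speed

for _ in range(100):
    command = controller.update(target - speed)   # >0 throttle, <0 brake
    speed += command * dt                         # toy first-order vehicle model
print(round(speed, 2))                            # converges toward 20 m/s
```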

  • Lateral Control:

Lateral control is how an autonomous vehicle stays on a specific path. Direct Inverse Control (DIC), where output units provide feedback to input units, can effectively handle non-linear systems. The key components of lateral control are steering control, lane keeping and collision avoidance (Arifin et al., n.d.). Steering control takes into account the desired angle through which the vehicle has to turn; a Proportional-Integral-Derivative (PID) controller and an encoder control the steering depending on the situational constraints (Pushpakanth and Dhavalikar, 2022). LIDAR sensors are used to find lane lines, as they reflect light differently from asphalt, and edge detectors are used to estimate the lanes (Santana, 2017). Wireless communication and probabilistic models, aided by AI, form the basis of collision avoidance, whether with pedestrians or vehicles (Verstraete and Muhammad, 2024).
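
The same PID idea can be sketched for steering, with the cross-track error (the distance from the lane centre) driving the steering command; the gains and the simplified kinematics below are illustrative assumptions.

```python
# PD control on cross-track error: the offset from the lane centre
# drives the steering command. Gains and kinematics are illustrative.
import math

kp, kd, dt = 1.2, 0.3, 0.1
y, heading, speed = 1.5, 0.0, 10.0     # start 1.5 m off the lane centre
prev_error = -y

for _ in range(100):
    error = -y                          # want the cross-track offset -> 0
    steer = kp * error + kd * (error - prev_error) / dt
    steer = max(-0.5, min(0.5, steer))  # actuator limits, rad
    prev_error = error
    heading += steer * dt               # toy kinematics
    y += speed * math.sin(heading) * dt
print(round(y, 3))                      # offset shrinks toward 0
```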

Feedback Loop:

A feedback loop is essential for semi-automated as well as fully autonomous vehicles. Its components are perception of the surroundings, decision-making, active implementation and evaluation of the action. Perception can be further divided into detection and segmentation: autonomous vehicles are equipped with a system of sensors and cameras to detect the environment, and segmentation is the process through which raw sensor data is built up into a meaningful internal image (Liu, 2018). Decision-making relies on deep-learning algorithms while taking into account safety, traffic laws and ethical considerations. Active implementation is when all other parts of the autonomous vehicle receive the decision and implement it within a fixed time limit. Evaluation of action is where the AI powering the vehicle considers other possible decisions and their subsequent outcomes and records them for future use. Feedback loops ensure efficiency and speed, allowing the vehicle to adapt to a dynamic environment (Wallace, 2023).
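
In skeletal form, the loop looks like the sketch below; each stage function is a placeholder standing in for the real perception, decision, actuation and logging stacks described above.

```python
# The perceive -> decide -> act -> evaluate loop in skeletal form.
import time

def perceive():            # detection + segmentation of raw sensor data
    return {"obstacle_distance_m": 12.0, "lane_offset_m": 0.1}

def decide(world):         # deep-learning / rule-based decision stage
    return "brake" if world["obstacle_distance_m"] < 15.0 else "cruise"

def act(decision):         # actuators implement the decision within a deadline
    print("executing:", decision)

def evaluate(world, decision):   # record the outcome for future use
    log.append((world, decision))

log = []
for _ in range(3):               # each iteration is one control cycle
    world = perceive()
    decision = decide(world)
    act(decision)
    evaluate(world, decision)
    time.sleep(0.05)             # stand-in for a fixed-rate scheduler
```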

Object Detection and Classification:

  • YOLO:

YOLO (You Only Look Once), introduced in 2015, is a fast real-time object detection algorithm for processing images. Its authors approach object detection as a regression task rather than a classification task: a single convolutional neural network (CNN) predicts spatially separated bounding boxes together with their associated class probabilities (Redmon et al., 2015). Because YOLO avoids complex multi-stage pipelines, it can process images at 45 frames per second (fps), making it much faster than other methods. Later YOLO models (such as YOLOv5) generalize better still, allowing for robust object detection by autonomous vehicles.
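
For context, a pretrained YOLOv5 model can be loaded through PyTorch Hub as sketched below; this needs internet access to fetch the repository, weights and dependencies, and the image URL is a sample used in the YOLOv5 documentation.

```python
# A hedged usage sketch: running a pretrained YOLOv5 model via PyTorch Hub.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
results = model("https://ultralytics.com/images/zidane.jpg")  # path/URL/array
results.print()                 # classes, confidences, speed
boxes = results.xyxy[0]         # tensor of [x1, y1, x2, y2, conf, class]
print(boxes[:3])
```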

  • Faster R-CNN:

Another popular object detection method is Faster R-CNN (region-based convolutional neural network), a deep convolutional network designed for object detection that presents itself to the user as a single, end-to-end unified network. Its predecessor, R-CNN, used a Selective Search algorithm to generate roughly 2,000 region proposals, extracted a feature vector from each proposal, and classified each vector with a pre-trained SVM (support vector machine) as background or an object class. Faster R-CNN replaces this slow proposal stage with a learned Region Proposal Network that shares convolutional features with the detection head, making the pipeline both faster and trainable end to end (Gad, 2020).
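
Similarly, torchvision ships a pretrained Faster R-CNN that can be used as sketched below (this assumes torchvision 0.13 or newer for the weights argument, and the weights are downloaded on first use; the random input is only there to show the input/output format).

```python
# A hedged usage sketch of torchvision's pretrained Faster R-CNN.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
image = torch.rand(3, 480, 640)              # one RGB image, values in [0, 1]
with torch.no_grad():
    out = model([image])[0]                  # list of images in, list of dicts out
print(out["boxes"].shape, out["labels"][:5], out["scores"][:5])
```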

Behaviour Prediction:

Behaviour prediction and reinforcement learning enable autonomous vehicles to anticipate future events and make decisions based on learned experiences. Deep-learning approaches have proved successful in complex, dynamic environments where split-second decisions make all the difference. MIT researchers have recently developed a method known as M2I, which breaks a complex road situation (such as that observed at a four-way intersection) into smaller problems, allowing several decisions to be made at once in real time. This method uses a relational predictor and a conditional predictor (MIT News, 2022).

Redundant Systems:

Redundant systems ensure reliability and safety in autonomous vehicles. According to the Oxford Dictionary, redundancy, in engineering terms, is the inclusion of extra components which are not strictly necessary to functioning, in case of failure of other components. The redundant architecture covers the braking system, steering, power supply and certain sensors: the battery, steering motor, wheel speed sensors and data-calculation algorithms are duplicated, as are radar, cameras, LIDAR and several other sensors, since the information they deliver is irreplaceable. Mercedes’ semi-automated vehicles can safely hand control back to the driver in certain cases where all other measures fail. In short, redundant systems ensure reliability in diverse conditions (Mercedes-Benz Group, 2022).

Fail Operational Mechanisms:

Fail-operational mechanisms in autonomous vehicles are crucial for safety, particularly when encountering system failures or unexpected situations. It is worth noting that fail-safe usually indicates that a driver can take over, whereas fail-operational mechanisms ensure the vehicle can still function autonomously even if part of the system fails. If the autonomous vehicle’s intended path poses a threat, a fail-operational trajectory will be followed, after evaluation by AI, to ensure safety. A CS-1 steering actuator has been designed which ensures that the system is capable of steering even when a system error (such as sensor dysfunction or power failure) occurs. The control software can identify faults on its own and switch to the backup operating mode within a specific fault-tolerant time interval (FTTI), so that the steering function continues to operate normally (Chassis Autonomy, 2022).

In conclusion, artificial intelligence is playing an integral and transformative role in revolutionizing the automobile industry. By enabling vehicles to detect, learn and adapt, it holds the potential to greatly enhance the safety and efficiency of the vehicles we drive. This progression marks a pivotal shift towards a smarter, more sustainable and brighter future worldwide.

Bibliography:

Aarab, Chaimaa. “Everything You Need to Know about Lidar in Automotive.” Www.keysight.com, 4 Oct. 2023, www.keysight.com/blogs/en/inds/2023/10/04/everything-you-need-to-know-about-lidar-in-automotive.

Bansal, Mohanjeetsingh. “Image Processing in Autonomous Vehicles: Seeing the Road Ahead.” Medium, 5 Dec. 2023, medium.com/@mohanjeetbansal777/image-processing-in-autonomous-vehicles-seeing-the-road-ahead-b400d176f877.

Grisetti, G, et al. “A Tutorial on Graph-Based SLAM.” IEEE Intelligent Transportation Systems Magazine, vol. 2, no. 4, 2010, pp. 31–43, www2.informatik.uni-freiburg.de/~stachnis/pdf/grisetti10titsmag.pdf, https://doi.org/10.1109/mits.2010.939925.

Lacey, Tony. Tutorial: The Kalman Filter.

Max Botix. “How Ultrasonic Sensors Work.” MaxBotix, 1 Mar. 2023, maxbotix.com/blogs/blog/how-ultrasonic-sensors-work.

Murel, Jacob. “What Is Reinforcement Learning? | IBM.” Www.ibm.com, 25 Mar. 2024, www.ibm.com/topics/reinforcement-learning.

Sahin, Furkan E. “Long-Range, High-Resolution Camera Optical Design for Assisted and Autonomous Driving.” Photonics, vol. 6, no. 2, 25 June 2019, p. 73, https://doi.org/10.3390/photonics6020073

Silver, David. Lecture 2: Markov Decision Processes. Mar. 2020.

Stachniss, Cyrill. Robot Mapping: EKF SLAM.

Kritayakirana, K. and Gerdes, J.C. (2012). Autonomous vehicle control at the limits of handling. International Journal of Vehicle Autonomous Systems, 10(4), p.271. doi:https://doi.org/10.1504/ijvas.2012.051270.

Waymo. (n.d.). Autonomous Driving Technology – Learn more about us. [online] Available at: https://waymo.com/about/.

Seymour, Tom (2018). Crash repair market to reduce by 17% by 2030 due to advanced driver systems, says ICDP. [online] Available at: https://www.am-online.com/news/aftersales/2018/07/03/crash-repair-market-to-reduce-by-17-by-2030-due-to-advanced-driver-systems-says-icdp.

Waymo. (n.d.). Seeing is Knowing: Advances in search and image recognition train Waymo’s self-driving technology for any encounter. [online] Available at: https://waymo.com/blog/2020/02/content-search [Accessed 3 Aug. 2024].

Han, J., Liao, Y., Zhang, J., Wang, S. and Li, S. (2018). Target Fusion Detection of LiDAR and Camera Based on the Improved YOLO Algorithm. Mathematics, 6(10), p.213. doi:https://doi.org/10.3390/math6100213.

Kazmi, S. (2024). Doubling down on safety: understanding our approach to redundancy in autonomous vehicles. [online]

Kebbati, Yassine, Ait-Oufroukh, Naima, Ichalal, Dalil and Vigneron, V. (2022). Lateral control for autonomous wheeled vehicles: A technical review. Asian Journal of Control, 25(4), pp.2539–2563. doi:https://doi.org/10.1002/asjc.2980.

Arifin, B., Bhakti, Y., Suprapto, Nawawi, Z., Arttini, S. and Prasetyowati, D. (n.d.). The Lateral Control of Autonomous Vehicles: A Review. [online] Available at: https://research.unissula.ac.id/bo/reviewer/210695009/7283PID6126331_Hasil_dari_IEEE_PDF_xPress_Bustanul_(1).pdf

Pushpakanth, A. and Dhavalikar, M.N. (2022). Development of Steering Control System for Autonomous Vehicle. International Journal of Recent Technology and Engineering (IJRTE), 11(2), pp.50–53. doi:https://doi.org/10.35940/ijrte.b7105.0711222.

Santana, E. (2017). How do self driving cars drive? Part 1: Lane keeping assist. [online] Medium. Available at: https://medium.com/@edersantana/how-do-self-driving-cars-drive-part-1-lane-keeping-assist-581f6ff50349

Verstraete, T. and Muhammad, N. (2024). Pedestrian Collision Avoidance in Autonomous Vehicles: A Review. Computers, [online] 13(3), p.78. doi:https://doi.org/10.3390/computers13030078.

Liu, S., Li, L., Tang, J., Wu, S. and Jean-Luc Gaudiot (2018). Perception in Autonomous Driving. Synthesis lectures on computer science, pp.51–67. doi:https://doi.org/10.1007/978-3-031-01802-2_3.

Wallace, E. (2023). AI Feedback Loops Make Car Manufacturing More Competitive. [online] RTInsights. Available at: https://www.rtinsights.com/how-ai-driven-feedback-loops-make-car-manufacturing-more-competitive/

Chassis Autonomy (2022). Fail-safe vs fail-operational: what is the difference? [online] Linkedin.com. Available at: https://www.linkedin.com/pulse/fail-safe-vs-fail-operational-what-difference-chassisautonomy/

MIT News | Massachusetts Institute of Technology. (n.d.). Anticipating others’ behavior on the road. [online] Available at: https://news.mit.edu/2022/machine-learning-anticipating-behavior-cars-0421.

Redmon, J., Divvala, S., Girshick, R. and Farhadi, A. (2015). You Only Look Once: Unified, Real-Time Object Detection. [online] arXiv.org. Available at: https://arxiv.org/abs/1506.02640.