Machine learning in autonomous vehicles

Published on: July 8th, 2021 | Categories: computer vision, machine learning

Autonomous vehicles (AVs) have been developing rapidly in recent years and are expected to reach full autonomy (Level 5) in the near future. The main technology behind autonomous vehicles is Artificial Intelligence (AI), and in particular its subfields Machine Learning (ML) and Deep Learning (DL). In this article we look at the application of machine learning algorithms in autonomous vehicles.

For a better understanding of the ML algorithms used in AVs, we first present how autonomous cars are built and how they operate. We then describe the methods of machine learning and their application to various self-driving tasks. Finally, we present some popular simulators for autonomous vehicles, in which you can test and train algorithms and run different scenarios.

About autonomous vehicles
Autonomous cars aim to eliminate the need for a driver. They will be able to take us from one location to another completely independently. To be able to accomplish this complex task, these cars have additional hardware and software systems that help them understand the environment, make decisions, and take action.

How do self-driving cars make decisions?
Driverless cars rely on object detection and object classification algorithms: they detect objects, classify them, and interpret what they are in order to identify situations and make decisions.

How does a self-driving car see?
The three major sensors used by self-driving cars work together like the human eyes and brain. These sensors are cameras, radar, and lidar. Together, they give the car a clear view of its environment, helping it identify the location, speed, and 3D shapes of nearby objects. Additionally, self-driving cars are now being built with inertial measurement units that track the vehicle's acceleration and orientation.

Let’s present schematically the main components of the autonomous car control system (Fig. 1). The main activities can be divided into two categories – Scene understanding and Decision making and planning. These activities can be fully implemented through Machine Learning methods. Later we will make the connection between ML algorithms and the two main activities. The main tasks of the control system are:

  • Sense: gathering sensor data from the environment;
  • Perceive and localize: recognize and locate objects and markers;
  • Scene representation: understanding the environment parameters and characteristics;
  • Plan and decide: path and motion planning, finding optimal trajectory according to the driving policy;
  • Control: setting the necessary vehicle parameters for acceleration, deceleration, steering and braking.

Figure 1. Main components of the AVs control system[1].
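The control loop above can be sketched in code. The following Python skeleton is purely illustrative: every function, field, and return value here is invented for this sketch, with stubs standing in for the real perception, planning, and control modules.

```python
from dataclasses import dataclass

@dataclass
class VehicleCommand:
    throttle: float = 0.0   # acceleration request, 0..1
    brake: float = 0.0      # braking request, 0..1
    steering: float = 0.0   # steering angle, radians

def sense(environment):
    """Sense: gather raw sensor data (camera, radar, lidar) from the environment."""
    return {"camera": environment.get("camera"), "lidar": environment.get("lidar")}

def perceive_and_localize(sensor_data):
    """Perceive and localize: recognize and locate objects and markers."""
    return {"objects": [], "ego_pose": (0.0, 0.0, 0.0)}

def represent_scene(perception):
    """Scene representation: model the environment's parameters and characteristics."""
    return {"drivable_area": True, **perception}

def plan_and_decide(scene):
    """Plan and decide: choose a trajectory according to the driving policy."""
    return [(0.0, 0.0), (1.0, 0.0)]  # a trivial straight-ahead waypoint list

def control(trajectory):
    """Control: translate the planned trajectory into actuator commands."""
    return VehicleCommand(throttle=0.3, brake=0.0, steering=0.0)

# One tick of the loop: Sense -> Perceive -> Represent -> Plan -> Control
cmd = control(plan_and_decide(represent_scene(perceive_and_localize(sense({})))))
print(cmd)
```

In a real vehicle each stage runs continuously and concurrently; the single pass here only shows how the five tasks chain together.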

Machine learning algorithms
Machine Learning (ML) is a process whereby a computer program learns from experience to improve its performance at a specified task. ML algorithms are often classified under one of three broad categories: Supervised Learning, Unsupervised Learning and Reinforcement Learning (RL). Supervised learning algorithms are based on inductive inference, where the model is typically trained on labelled data to perform classification or regression, whereas unsupervised learning encompasses techniques such as density estimation or clustering applied to unlabelled data. By contrast, in the RL paradigm an autonomous agent learns to improve its performance at an assigned task by interacting with its environment. Russell and Norvig define an agent as “anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators” [2]. RL agents are not told explicitly how to act by an expert; rather, an agent’s performance is evaluated by a reward function R.
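To make the RL setting concrete, here is a toy tabular Q-learning agent on a 1-D "road" of five cells: the agent starts at cell 0 and receives a reward only for reaching cell 4. The environment, reward, and hyperparameters are all invented for illustration; real driving tasks use far richer state, action, and reward designs.

```python
import random

random.seed(0)
N_STATES = 5
ACTIONS = [-1, +1]                        # move left / move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.3         # learning rate, discount, exploration

def reward(state):                        # the reward function R
    return 1.0 if state == N_STATES - 1 else 0.0

for episode in range(300):
    s, steps = 0, 0
    while s != N_STATES - 1 and steps < 2000:
        # epsilon-greedy: mostly exploit the current Q, sometimes explore
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        # the agent is evaluated by the reward, not told the correct action
        Q[s][a] += alpha * (reward(s2) + gamma * max(Q[s2]) - Q[s][a])
        s, steps = s2, steps + 1

policy = [q.index(max(q)) for q in Q[:-1]]
print(policy)  # the learned greedy policy: "move right" (action 1) in every state
```

No expert ever labels the correct action; the right behavior emerges purely from the reward signal, which is the essential difference from supervised learning.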

According to their features and principles of operation, machine learning algorithms can be applied in the various activities of an autonomous vehicle. In the “Scene understanding” activity, supervised learning is appropriate, while in the “Decision making and planning” activity, reinforcement learning is appropriate. Thus, the entire autonomous vehicle control system can be built from machine learning algorithms; only the tasks of collecting the sensory data need to be performed by other algorithms, related to sensor data processing [3].

With the supervised model, an algorithm is trained on examples paired with the desired outputs, which tell it how to interpret the input data. This is the preferred approach to learning for self-driving cars: it allows the algorithm to be evaluated against a fully labelled dataset, making supervised learning especially useful where classification is concerned.
Autonomous driving tasks where RL can be applied include: controller optimization; path planning and trajectory optimization; motion planning and dynamic path planning; development of high-level driving policies for complex navigation tasks; scenario-based policy learning for highways, intersections, merges and splits; reward learning with inverse reinforcement learning from expert data, for intent prediction of traffic actors such as pedestrians and vehicles; and, finally, learning policies that ensure safety and perform risk estimation.
Machine learning algorithms can be loosely divided into four categories: regression algorithms, pattern recognition, cluster algorithms and decision matrix algorithms. For more details see [4].
We can say that supervised learning provides the necessary environmental information for reinforcement learning.

Popular Machine learning algorithms used in self-driving cars

  • SIFT (scale-invariant feature transform) for feature extraction
    SIFT algorithms detect objects and interpret images. For example, for a triangular sign, the three points of the sign are entered as features. A car can then easily identify the sign using those points.
  • Gradient boosting
A technique for regression, classification and other tasks that produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. When a decision tree is the weak learner, the resulting algorithm is called gradient-boosted trees, which usually outperforms random forest. Gradient boosting and AdaBoost work in a similar way, but each method uses different mathematical models and algorithms.
  • AdaBoost for data classification
This algorithm collects data and classifies it to boost the learning process and performance of vehicles. It combines different low-performing classifiers into a single high-performing classifier for better decision-making. The method automatically adjusts its parameters to the data based on the actual performance in the current iteration: both the weights for re-weighting the data and the weights for the final aggregation are re-computed iteratively.
    In practice, this boosting technique is used with simple classification trees or stumps as base learners, which results in improved performance compared to classification by a single tree or other single base learner.
  • TextonBoost for object recognition
    The TextonBoost algorithm does a similar job to AdaBoost, only it receives data from shape, context, and appearance to increase learning with textons (micro-structures in images). It aggregates visual data with common features.
  • Histogram of oriented gradients (HOG)
HOG is a feature descriptor that is often used to extract features from image data. It analyzes a localized region of an object, called a cell, to find out how the object changes or moves. The Histogram of Oriented Gradients method is mainly used for face and object detection and for classifying images, with numerous applications ranging from autonomous vehicles to surveillance to smarter advertising. This algorithm is also used for recognizing and classifying vehicle types.
  • YOLO (You Only Look Once)
This algorithm detects and groups objects such as humans, trees, and vehicles. It assigns specific features to each class of objects it groups, helping the car identify them easily. YOLO uses convolutional neural networks because they are well suited to understanding spatial information: they can extract features such as edges, lines, and textures. YOLO has 24 of these convolutional layers. When an AV's LIDAR sensors are paired with YOLO, the vehicle can navigate through dense traffic, identifying multiple objects and their spatial relationships to one another.
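The residual-fitting idea behind gradient boosting (listed above) can be sketched in a few lines. The following from-scratch example is illustrative only: it uses decision stumps as weak learners with squared-error loss, on a synthetic dataset invented here, and omits everything a production library would add (subsampling, regularization, early stopping).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=200)
y = np.sin(X) + rng.normal(0, 0.1, size=200)   # noisy 1-D regression target

def fit_stump(X, residual):
    """Find the single-split decision tree (stump) minimizing squared error."""
    best = None
    for t in np.unique(X):
        left, right = residual[X <= t], residual[X > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda x, t=t, lv=lv, rv=rv: np.where(x <= t, lv, rv)

def boost(X, y, n_stages=50, lr=0.3):
    pred = np.full_like(y, y.mean())           # start from the mean prediction
    stumps = []
    for _ in range(n_stages):
        stump = fit_stump(X, y - pred)         # each stage fits the current residuals
        pred = pred + lr * stump(X)            # shrink each stump's contribution
        stumps.append(stump)
    return stumps, pred

stumps, pred = boost(X, y)
print(round(float(np.mean((y - pred) ** 2)), 3))  # training MSE of the ensemble
```

The key point is the loop body: with squared-error loss, "fitting the gradient" reduces to fitting the residuals left by the ensemble so far, which is why many weak stumps add up to a strong model.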
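For AdaBoost, a from-scratch sketch makes the two iteratively re-computed weight sets described above concrete: the per-sample weights w (for re-weighting the data) and the per-learner weights alpha (for the final aggregation). The 1-D toy problem and all constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=100)
y = np.where(X > 0.2, 1, -1)                 # labels in {-1, +1}

def best_stump(X, y, w):
    """Weighted-error-minimizing threshold classifier sign(s * (x - t))."""
    best = None
    for t in np.unique(X):
        for s in (1, -1):
            pred = np.where(s * (X - t) > 0, 1, -1)
            err = w[pred != y].sum()
            if best is None or err < best[0]:
                best = (err, t, s)
    return best

w = np.full(len(X), 1.0 / len(X))            # uniform initial sample weights
learners = []
for _ in range(10):
    err, t, s = best_stump(X, y, w)
    err = max(err, 1e-10)                    # avoid division by zero
    alpha = 0.5 * np.log((1 - err) / err)    # learner weight for the aggregation
    pred = np.where(s * (X - t) > 0, 1, -1)
    w = w * np.exp(-alpha * y * pred)        # up-weight misclassified samples
    w = w / w.sum()
    learners.append((alpha, t, s))

# Final strong classifier: sign of the alpha-weighted vote of all stumps
F = sum(a * np.where(s * (X - t) > 0, 1, -1) for a, t, s in learners)
accuracy = float(np.mean(np.sign(F) == y))
print(accuracy)  # -> 1.0 on this separable toy problem
```

Samples the current stump gets wrong receive larger weights, so the next stump concentrates on them; the alphas then let accurate stumps dominate the final vote.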
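The core HOG computation for a single cell, a magnitude-weighted histogram of gradient orientations, can also be sketched directly. This is a toy version for illustration; real implementations (e.g. in OpenCV or scikit-image) add block normalization, interpolated voting, and sliding windows.

```python
import numpy as np

def cell_hog(cell, n_bins=9):
    """Histogram of oriented gradients for one cell of pixel intensities."""
    gx = np.zeros_like(cell, dtype=float)
    gy = np.zeros_like(cell, dtype=float)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]      # horizontal gradient
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]      # vertical gradient
    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 180  # unsigned orientation, 0..180
    hist = np.zeros(n_bins)
    bin_width = 180 / n_bins
    for m, a in zip(magnitude.ravel(), angle.ravel()):
        hist[int(a // bin_width) % n_bins] += m   # vote weighted by magnitude
    return hist

# A cell containing a vertical edge: the gradients point horizontally,
# so all the vote mass lands in the 0-degree orientation bin.
cell = np.zeros((8, 8))
cell[:, 4:] = 1.0
h = cell_hog(cell)
print(int(np.argmax(h)))  # -> 0
```

Concatenating such histograms over a grid of cells yields the HOG descriptor that a classifier (e.g. an SVM) then uses for detection.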
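YOLO itself is a large trained network, but one standard piece of its post-processing can be shown compactly: detectors emit many overlapping candidate boxes, and non-maximum suppression (NMS) based on intersection-over-union (IoU) keeps only the best box per object. The boxes and scores below are made up for illustration.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it too much, repeat."""
    order = np.argsort(scores)[::-1].tolist()   # indices by descending score
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two near-duplicate "car" detections plus one distinct "pedestrian" box
boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 130, 160)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # -> [0, 2]
```

The duplicate box at index 1 overlaps box 0 with IoU well above 0.5, so it is suppressed, while the non-overlapping box at index 2 survives.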

When you click on the name of each ML algorithm, you will be redirected to a page with a library for it, so if you want to test some of them, follow the links. For some of the algorithms there are multiple libraries for different platforms, so you can also search for a specific library on the Internet. For example, for YOLO there are libraries for MATLAB, Keras, OpenCV, etc.

Simulators for algorithm testing and training
Simulators are very good tools for experimenting with and testing different machine learning algorithms, especially RL algorithms. Simulators provide libraries for lidars, radars, cameras, car models and different algorithms.
  • CARLA [5] – urban simulator; camera & LIDAR streams with depth & semantic segmentation; location information
  • TORCS [6] – racing simulator; camera stream and agent positions; testing control policies for vehicles
  • AIRSIM [7] – camera stream with depth and semantic segmentation; support for drones
  • GAZEBO (ROS) [8] – multi-robot physics simulator employed for path planning & vehicle control in complex 2D & 3D maps
  • SUMO [9] – macro-scale modelling of traffic in cities; used together with motion planning simulators
  • DeepDrive [10] – driving simulator based on Unreal, providing a multi-camera (eight) stream with depth
  • NVIDIA DRIVE Sim™ [11] – an end-to-end simulation platform, architected from the ground up to run large-scale, physically accurate multi-sensor simulation; open, scalable, modular, and supports AV development and validation from concept to deployment
  • MADRaS [12] – multi-agent autonomous driving simulator built on top of TORCS
  • Flow [13] – multi-agent traffic control simulator built on top of SUMO
  • Highway-env [14] – a gym-based environment that provides a simulator for highway-based road topologies
  • WEBOTS [15] – a complete development environment to model, program and simulate robots

Conclusion
Machine learning plays a big role in the autonomous driving sector. In this post we have considered some basic aspects of self-driving cars and the importance of machine learning algorithms as the providers of autonomous behavior. We have listed some popular ML algorithms used in practice and some of the simulators for autonomous vehicles.
Machine learning is a key component for the improvement and development of AVs.
In the end we can conclude that, as an application of ML, autonomous driving has the potential to reach full autonomy. That could lead to fewer road accidents and independence for those who are unable to drive, and thus improve traffic logistics.

References
[1] Kiran, B. Ravi, Ibrahim Sobh, Victor Talpaert, Patrick Mannion, Ahmad A. Al Sallab, Senthil Yogamani, and Patrick Pérez. “Deep reinforcement learning for autonomous driving: A survey.” IEEE Transactions on Intelligent Transportation Systems (2021).
[2] S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd edition. Prentice Hall, 2009.
[3] Navarro, P.J.; Fernández, C.; Borraz, R.; Alonso, D. “A Machine Learning Approach to Pedestrian Detection for Autonomous Vehicles Using High-Definition 3D Range Data.” Sensors 2017, 17, 18. https://doi.org/10.3390/s17010018
[4] Machine learning algorithms: https://iiot-world.com/artificial-intelligence-ml/machine-learning/machine-learning-algorithms-in-autonomous-driving/
[5] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, “CARLA: An open urban driving simulator,” in Proceedings of the 1st Annual Conference on Robot Learning, 2017, pp. 1–16.
[6] B. Wymann, E. Espié, C. Guionneau, C. Dimitrakakis, R. Coulom, and A. Sumner, “TORCS, the open racing car simulator,” software available at http://torcs.sourceforge.net, vol. 4, 2000.
[7] S. Shah, D. Dey, C. Lovett, and A. Kapoor, “AirSim: High-fidelity visual and physical simulation for autonomous vehicles,” in Field and Service Robotics. Springer, 2018, pp. 621–635.
[8] N. Koenig and A. Howard, “Design and use paradigms for Gazebo, an open-source multi-robot simulator,” in 2004 International Conference on Intelligent Robots and Systems (IROS), vol. 3. IEEE, 2004, pp. 2149–2154.
[9] P. A. Lopez, M. Behrisch, L. Bieker-Walz, J. Erdmann, Y.-P. Flötteröd, R. Hilbrich, L. Lücken, J. Rummel, P. Wagner, and E. Wießner, “Microscopic traffic simulation using SUMO,” in The 21st IEEE International Conference on Intelligent Transportation Systems. IEEE, 2018.
[10] C. Quiter and M. Ernst, “deepdrive/deepdrive: 2.0,” Mar. 2018. [Online]. Available: https://doi.org/10.5281/zenodo.1248998
[11] Nvidia, “Drive Constellation now available,” https://blogs.nvidia.com/blog/2019/03/18/drive-constellation-now-available/, 2019 [accessed 14-April-2019].
[12] A. S. et al., “Multi-Agent Autonomous Driving Simulator built on top of TORCS,” https://github.com/madras-simulator/MADRaS, 2019 [Online; accessed 14-April-2019].
[13] C. Wu, A. Kreidieh, K. Parvate, E. Vinitsky, and A. M. Bayen, “Flow: Architecture and benchmarking for reinforcement learning in traffic control,” CoRR, vol. abs/1710.05465, 2017.
[14] E. Leurent, “A collection of environments for autonomous driving and tactical decision-making tasks,” https://github.com/eleurent/highway-env, 2019 [Online; accessed 14-April-2019].
[15] Michel, Olivier. “Cyberbotics Ltd. Webots™: professional mobile robot simulation.” International Journal of Advanced Robotic Systems 1, no. 1 (2004): 5. https://cyberbotics.com/

 

— — —

We put a lot of effort in the content creation in our blog. Multiple information sources are used, we do our own analysis and always double check what we have written down. However, it is still possible that factual or other mistakes occur. If you choose to use what is written on our blog in your own business or personal activities, you do so at your own risk. Be aware that Perelik Soft Ltd. is not liable for any direct or indirect damages you may suffer regarding the use of the content of our blog.

Author: Denis Chikurtev
