Self-driving cars may well be the first robots to become popular with the majority of the population: an autonomous car is, after all, a computer on wheels (equipped with many sensors) that is capable of moving without human intervention, which places it squarely within the definition of a robot.
So far in the 21st century, we have begun to see vehicles that are, to a greater or lesser extent, autonomous: they park by themselves, drive on the motorway while keeping a safe distance, or even brake to avoid a collision. However, some time remains (although perhaps not much) before a fully autonomous vehicle reaches the market: one able to drive without a human on any road and in any weather conditions.
The technology needed for autonomous driving
Autonomous driving systems rely on several kinds of sensors, based on different technologies, among which the following stand out:
- Ultrasonic sensors. They are effective at detecting objects within a short distance of the vehicle. They work by emitting inaudible sound waves and measuring the time the echo takes to return to the point of emission. Thanks to their low cost they have been common in cars for several years, typically in parking assistance systems.
- Satellite positioning and navigation. These systems have also been in use for several years. Using a network of satellites and the technique of trilateration, they can locate a vehicle anywhere on the planet. Their main drawback is a margin of error of a few meters, so they must be combined with other techniques to achieve the more precise localization required for high levels of autonomy.
- Inertial navigation systems. Accelerometers and gyroscopes continuously estimate, by dead reckoning and without external references, the vehicle's position, heading and speed. They are often used together with positioning systems to increase their precision, although the combination still does not guarantee the accuracy needed for fully autonomous driving.
- Camera systems. These are widespread in the automotive industry, and the technology is mature in terms of both image quality and low cost. However, their effectiveness drops in low light or adverse weather conditions.
- Infrared sensors. Infrared light is invisible to the human eye and is used to detect and track objects in low-light conditions. Since the beginning of the 2010s these sensors have become relatively common as optional equipment in high-end vehicles; the first car offered with a night vision system was the 2000 Cadillac DeVille.
- Radar. It uses electromagnetic waves to detect and track objects. It works very precisely at distances of up to about 300 meters and is especially effective at determining the presence of other vehicles, as well as their direction and speed of travel. Its effectiveness barely decreases in low visibility or adverse weather, and its cost is relatively low. It is already widely used in driver-assistance features such as adaptive cruise control, blind-spot detection, automatic emergency braking and pre-collision protection systems.
- Lidar. An acronym for "laser imaging, detection and ranging", this system determines the distance from a laser emitter to an object using pulsed laser beams. Modern lidar systems collect information complementary to radar: the range of the laser beams is much greater than that of ultrasound, and the level of detail obtained about the environment is far higher, but they perform poorly in rain or fog.
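Both ultrasonic sensors and lidar rely on the same time-of-flight principle described above: emit a pulse, time the echo, and convert the round trip into a distance. A minimal sketch (the speeds are physical constants, but the echo timings below are illustrative, not real sensor readings):

```python
# Time-of-flight ranging, as used by ultrasonic sensors (sound) and lidar
# (light). Distance = propagation speed * round-trip time / 2, since the
# pulse travels to the target and back.

SPEED_OF_SOUND = 343.0        # m/s in air at ~20 C (ultrasonic)
SPEED_OF_LIGHT = 299_792_458  # m/s (lidar)

def tof_distance(round_trip_s: float, speed: float) -> float:
    """Distance to the target from a round-trip echo time."""
    return speed * round_trip_s / 2.0

# An ultrasonic echo returning after ~11.66 ms -> target about 2 m away.
print(round(tof_distance(0.01166, SPEED_OF_SOUND), 2))   # → 2.0

# A lidar pulse returning after 200 ns -> target about 30 m away.
print(round(tof_distance(200e-9, SPEED_OF_LIGHT), 1))    # → 30.0
```

The division by two is the whole trick: the sensor only measures the round trip, so the one-way distance is half.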
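The trilateration used by satellite positioning can likewise be sketched in two dimensions. Real GNSS solves the 3-D problem and also estimates a receiver clock bias, so this is only a toy illustration with invented anchor positions:

```python
# 2-D trilateration sketch: given three known anchor positions and the
# measured distance to each, find the receiver's position. Subtracting the
# circle equations pairwise removes the quadratic terms and leaves a 2x2
# linear system.

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Locate a point from three anchors and three measured distances."""
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    a1 = 2 * (x2 - x1); b1 = 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2 = 2 * (x3 - x2); b2 = 2 * (y3 - y2)
    c2 = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Anchors at (0,0), (10,0), (0,10); distances measured from a point at (3,4).
x, y = trilaterate((0, 0), (10, 0), (0, 10), 5.0, 65 ** 0.5, 45 ** 0.5)
print(round(x, 3), round(y, 3))   # → 3.0 4.0
```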
Autonomous vehicles need to gather information from all these systems and fuse the sensor data in order to build a thorough picture of the environment. Naturally, this picture has to be updated constantly as the vehicle moves. It is common to construct a three-dimensional map that locates the vehicle relative to its surroundings. This information is then compared with previously compiled maps, which allows the vehicle to be located with great precision … although only in areas that have already been mapped.
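The sensor fusion described above can be illustrated with the simplest possible case: combining two noisy estimates of the same quantity by inverse-variance weighting, which is the core idea behind Kalman-filter updates. All numbers here are invented for illustration, not real sensor specifications:

```python
# Minimal sensor-fusion sketch: two sensors measure the same quantity (say,
# position along the road) with different noise levels; the fused estimate
# weights each measurement by the inverse of its variance, so the more
# certain sensor dominates.

def fuse(m1: float, var1: float, m2: float, var2: float):
    """Fuse two estimates; returns (fused value, fused variance)."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * m1 + w2 * m2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)   # always smaller than either input variance
    return fused, fused_var

# Hypothetical GPS fix: 104.0 m with ~3 m^2 variance. Hypothetical
# lidar map-matching fix: 101.0 m with ~0.5 m^2 variance.
pos, var = fuse(104.0, 3.0, 101.0, 0.5)
print(round(pos, 2), round(var, 3))   # → 101.43 0.429
```

Note that the fused variance is lower than either sensor's alone: combining sensors does not just average errors away, it genuinely increases confidence.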
But the biggest challenge of autonomous driving lies in developing software that correctly interprets the images and then makes the right decisions. To do this, a machine learning algorithm is usually trained on a large set of images covering all possible traffic situations, each image labeled with the type of vehicle it contains.
The algorithm processes the images one by one. Initially it tries to guess which vehicle appears in each image, and at first it will very often be wrong. Since the true label of each image is known, the algorithm adjusts its internal parameters and tries again. The process continues, progressively reducing the failure rate, until the algorithm can correctly classify images it has never seen before.
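The guess-compare-adjust loop described above can be sketched with a single-neuron classifier (logistic regression) trained by gradient descent. The two-number "features" and labels below are made up and stand in for real images:

```python
# Toy training loop: the model guesses a label, compares it with the known
# answer, and nudges its internal parameters to reduce future errors.
import math

# (features, label): 1 = "truck", 0 = "car" — invented stand-ins for images.
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.2, 0.1), 0), ((0.1, 0.3), 0)]

w = [0.0, 0.0]   # internal parameters, adjusted during training
b = 0.0
lr = 0.5         # learning rate: how strongly each error nudges the weights

for epoch in range(200):
    for (x1, x2), y in data:
        # Guess: a probability between 0 and 1 (sigmoid of a weighted sum).
        guess = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        err = guess - y              # wrong at first, shrinks over time
        w[0] -= lr * err * x1        # adjust parameters against the error
        w[1] -= lr * err * x2
        b -= lr * err

# After training, a new, unseen input is classified correctly.
new = (0.85, 0.75)
p = 1 / (1 + math.exp(-(w[0] * new[0] + w[1] * new[1] + b)))
print("truck" if p > 0.5 else "car")   # → truck
```

Real systems replace this single neuron with deep networks over millions of labeled images, but the loop — guess, measure the error, adjust — is the same.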
The same approach can be used for decision making. Instead of providing a list of rules for evaluating the action to take in each situation, an algorithm is trained on traffic situations in which the correct action is specified. As before, the algorithm tries to guess the correct action and adjusts its internal parameters according to whether it was right or wrong.
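A toy version of the same idea for decisions: store (situation, correct action) examples and let the system pick the action of the most similar stored situation. Here a trivial 1-nearest-neighbour lookup stands in for a trained model, and the features (gap to the lead vehicle in meters, own speed in m/s) and actions are invented for illustration:

```python
# Decision making learned from examples instead of hand-written rules:
# each example pairs a traffic situation with the correct action.

examples = [
    ((5.0, 20.0), "brake"),        # tiny gap at speed -> brake
    ((40.0, 20.0), "keep_speed"),  # comfortable gap -> hold speed
    ((80.0, 10.0), "accelerate"),  # large gap, slow -> speed up
]

def decide(situation):
    """Pick the action of the most similar stored situation."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, action = min(examples, key=lambda ex: sq_dist(ex[0], situation))
    return action

# A new situation (7 m gap at 22 m/s) resembles the first example.
print(decide((7.0, 22.0)))   # → brake
```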
In short, the technological limitations of today’s autonomous vehicles do not lie in the hardware, but in the software.