Sense and Sensitivity in self-driving cars – The Hindu

Posted: January 17, 2022 at 8:44 am

The ecosystem in India for self-driving technology is flourishing, with innovation occurring at every level, including sensor technology.

The Consumer Electronics Show (CES) is an influential tech event held annually in January. In this year's edition (CES 2022), General Motors announced its plans to introduce a personal autonomous vehicle by 2025. Mobileye, a leader in autonomous driving, presented its technology roadmap with an emphasis on robustness and safety. These companies are not alone. Waymo has been operating driverless cabs in Phoenix since 2019. Apple reportedly has plans to build an autonomous car in the next few years. At the heart of this technology are three sensors: camera, radar and LIDAR (Light Detection and Ranging), all of which help the vehicle accurately perceive its surroundings. Surprisingly, much of this sensor technology is already present in cars on the roads today. Cameras and radar sensors routinely provide driver-assist features such as keeping cars within lane markings, warning of approaching vehicles during lane changes and maintaining a safe distance to the vehicle in front.

THE GIST

A camera system operates much like a human eye: it can discern colours and shapes, and recognise traffic signage, lane markings and the like. Most cars have stereo cameras, i.e., two cameras separated by a short distance, which enables the system to perceive depth (much as human binocular vision does). However, a camera does have its limitations. It does not transmit any sensing signal of its own and relies on ambient light reflected from objects. So, the absence of adequate ambient light (at night) limits its ability, as do other environmental conditions like fog and blinding sunlight.
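
Stereo depth perception rests on a simple geometric relation: a nearby object shifts more between the two images than a distant one, and depth equals focal length times the camera separation (baseline) divided by that shift (disparity). A minimal sketch in Python; the focal length, baseline and disparity values are illustrative assumptions, not figures from any particular camera.

# Minimal sketch of stereo depth estimation (pinhole-camera model).
# All numbers below are illustrative, not from any specific system.

def stereo_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_length_px * baseline_m / disparity_px

# A point that shifts 8 pixels between the images of a camera pair
# with a 12 cm baseline and a 700-pixel focal length:
print(stereo_depth(focal_length_px=700.0, baseline_m=0.12, disparity_px=8.0))
# -> 10.5 (metres); halving the disparity would double the estimated depth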

A radar sensor transmits its own signals, which bounce off targets and reflect back to the radar. Thus, unlike a camera, a radar is not dependent on ambient light. Further, a radar transmits radio waves, which can penetrate fog. The radar measures the time between the transmission of a signal and the arrival of its reflection from a target to estimate the distance to the target. A moving target induces a frequency shift in the signal (the Doppler shift), which enables the radar to instantaneously and accurately measure the target's speed. Thus, radars can accurately measure the range and velocity of targets largely independent of environmental conditions such as fog, rain and bright sunlight. However, unlike a camera, a radar can discern neither colour nor street signs. A radar also has poor spatial resolution: an approaching car appears as a blob, and individual features (the wheels, body contour and so on) are not discernible as they would be in a camera image. Thus, the capabilities of a camera and a radar sensor complement each other, which is why many cars come equipped with both cameras and radars.
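
In numbers, the two measurements described above reduce to two formulas: range R = c*t/2 for a round-trip time t (the signal travels out and back), and radial velocity v = c*f_d/(2*f_c) for a Doppler shift f_d at carrier frequency f_c. A back-of-the-envelope sketch; the 77 GHz carrier is the standard automotive radar band, while the echo delay and Doppler shift are made-up illustrative inputs (real automotive radars use FMCW waveforms and more involved signal processing).

C = 3.0e8  # speed of light, m/s

def radar_range(round_trip_time_s: float) -> float:
    """Range from time of flight; the factor of 2 accounts for the round trip."""
    return C * round_trip_time_s / 2

def doppler_velocity(doppler_shift_hz: float, carrier_freq_hz: float) -> float:
    """Radial velocity from the Doppler shift: v = c * f_d / (2 * f_c)."""
    return C * doppler_shift_hz / (2 * carrier_freq_hz)

# An echo arriving 400 ns after transmission comes from a target 60 m away:
print(radar_range(400e-9))            # -> 60.0
# At a 77 GHz carrier, a 5.13 kHz Doppler shift corresponds to
# roughly 10 m/s (36 km/h) of closing speed:
print(doppler_velocity(5.13e3, 77e9)) # -> ~10.0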

LIDAR is another sensor used in autonomous vehicles. A LIDAR scans the environment with a laser beam. In many respects, it combines the best features of both radar and camera. Like a radar, it generates its own transmit signal (and thus does not depend on daylight) and can accurately determine distances by measuring the time difference between the transmitted and the reflected signal. The narrow laser beam used for sensing gives it a spatial resolution similar to that of a camera. However, LIDAR does have its disadvantages: its signals cannot penetrate fog, and it can neither discern colour nor read traffic signs. The technology is also significantly costlier than radar or camera.
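
The resolution gap between radar and LIDAR follows directly from beam width: the lateral (cross-range) resolution of an active sensor is roughly range multiplied by beamwidth in radians. A rough comparison; the beamwidth figures below are assumed order-of-magnitude values, not specifications of any product.

import math

# Back-of-the-envelope comparison of lateral resolution at a fixed range.
# Cross-range resolution ~ range * beamwidth (beamwidth in radians).

def lateral_resolution_m(range_m: float, beamwidth_deg: float) -> float:
    return range_m * math.radians(beamwidth_deg)

RANGE = 50.0  # metres

# A conventional automotive radar beam a few degrees wide:
print(lateral_resolution_m(RANGE, 3.0))  # -> ~2.6 m: a car appears as a blob
# A LIDAR laser beam with ~0.1 degree divergence:
print(lateral_resolution_m(RANGE, 0.1))  # -> ~0.09 m: individual features resolvable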

Given the market potential, there has been considerable effort both in reducing the cost and in addressing the performance gaps of each of these sensors. While radar companies are developing imaging radars that significantly improve the spatial resolution of radar, new technology is being explored to bring down the cost of LIDAR. At the same time, the capabilities of camera-based vision perception continue to be enhanced with the application of deep learning. However, each sensor has limitations rooted in physics and technology. While only a camera can recognise traffic signs, it cannot match the performance of radar in adverse weather conditions. Likewise, a radar cannot match the spatial resolution of a camera or a LIDAR. Experts agree that the technology for driverless vehicles cannot be based on a single type of sensor. There is, however, some debate on the optimal sensor suite that is both safe and cost-effective. Some researchers believe that camera and radar, with a good deep-learning back-end, can eliminate the need for LIDAR.
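
To make the complementarity argument concrete, here is a toy sketch of camera-radar fusion: take the object class from the camera and the range and velocity from the nearest radar return in angle. The data structures and the naive nearest-angle matching rule are hypothetical illustrations, not how any production perception stack works.

from dataclasses import dataclass

@dataclass
class CameraDetection:
    bearing_deg: float   # direction to the object
    label: str           # what the camera recognises, e.g. "car" or "stop sign"

@dataclass
class RadarDetection:
    bearing_deg: float
    range_m: float       # what radar measures well...
    velocity_mps: float  # ...largely independent of light and weather

def fuse(cams: list[CameraDetection], rads: list[RadarDetection],
         tol_deg: float = 2.0) -> list[tuple]:
    """Pair each camera label with the nearest radar return in angle (toy rule)."""
    tracks = []
    for c in cams:
        nearest = min(rads, key=lambda r: abs(r.bearing_deg - c.bearing_deg),
                      default=None)
        if nearest and abs(nearest.bearing_deg - c.bearing_deg) <= tol_deg:
            tracks.append((c.label, nearest.range_m, nearest.velocity_mps))
    return tracks

print(fuse([CameraDetection(10.0, "car")],
           [RadarDetection(10.5, 42.0, -3.0)]))
# -> [('car', 42.0, -3.0)]: class from the camera, range and speed from the radar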

The ecosystem in India for self-driving technology is flourishing, with innovation occurring at every level, including sensor technology. Most of the R&D for Texas Instruments' automotive radar happens in its India development centre. Velodyne, a pioneer in LIDAR technology, recently started a development centre in Bangalore. Steradian Semiconductors, a start-up based in India, has developed an imaging radar solution. Many of the leading semiconductor companies (NXP, TI, Qualcomm) are developing, in their R&D centres in India, the hardware and software for the perception algorithms that feed on these sensors.

Sandeep Rao is with Texas Instruments.
