You’re having a heated conversation when suddenly, your car slams on the brakes. Someone cut you off while you weren’t paying attention. In those few seconds of inattention, decades of research and development, dozens of sensors across several different spectra, and a super-fast processor worked in concert to prevent a car crash.
Cars promise more automation and assistance with every passing year, with some even offering to drive themselves. This computerization has brought with it a host of sensors and algorithms designed to make a vehicle more perceptive than its human driver is. The technology—and the computing power required to run it all—is nothing short of staggering.
“L” Is For The Way You Look At Me
Driver assistance aids are broken down by SAE International into six levels, from 0 to 5. Level 0 is reserved purely for safety aids, like automatic emergency braking (AEB) or blind-spot monitoring. Level 5 is full self-driving, in which vehicle occupants could sleep safely while their car tears down the interstate. Most cars on sale today are Level 1, providing either steering or throttle/braking assistance. Some cars are Level 2, which combines both braking and steering assistance. Level 2 implementations range from simple lane-keep assist and adaptive cruise control (ACC) to more advanced hands-free systems such as Ford’s BlueCruise or Tesla’s Autopilot. All of them still require active driver attention to operate safely.
Level 3 systems are today’s most advanced commercially available systems, and can drive themselves in low-speed traffic jams during good weather on clearly marked roads. They can still require drivers to intervene when needed, and are usually speed-limited to 40 MPH or less. Levels 4 and 5, true self-driving vehicles, are still in the research and development phase, and have a long way to go before they’re for sale in dealerships.
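For those who prefer their taxonomies in code, the ladder looks roughly like this (a paraphrase of the levels described above, not SAE J3016’s official wording):

```python
# SAE automation levels, paraphrased from the descriptions above.
SAE_LEVELS = {
    0: "No automation; safety aids only (AEB, blind-spot monitoring)",
    1: "Steering OR throttle/brake assistance",
    2: "Steering AND throttle/brake assistance; driver must supervise",
    3: "Self-drives in narrow conditions; driver must take over on request",
    4: "Self-drives within a defined domain; no driver needed there",
    5: "Self-drives everywhere, in all conditions",
}
```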
Each one of these levels builds on the ones before it, and demands increasing amounts of hardware and software for safe operation.
“O” Is For The Obstacles I See
Instead of using light in the visible spectrum, most Level 1 assists use a radar system, which transmits and receives radio waves. When the transmitted waves reach a solid object, they bounce back toward the radar unit, where a receiver detects them. A processor then mathematically determines how far away the detected object is, how fast it’s moving, and in which direction, based on the properties of the reflected waves.
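The math behind that determination is surprisingly compact. As a rough sketch with round numbers (nothing like the FMCW signal processing a real automotive radar performs), range falls out of the echo’s round-trip time, and closing speed falls out of its Doppler shift:

```python
# Toy radar math: range from round-trip echo time, speed from Doppler shift.
# Illustrative only -- production automotive radars use FMCW waveforms and
# far more sophisticated signal processing.

C = 299_792_458          # speed of light, m/s
F_TX = 77e9              # transmit frequency, Hz (77 GHz automotive band)

def target_range(round_trip_s: float) -> float:
    """Distance to target: the echo travels out and back, so halve it."""
    return C * round_trip_s / 2

def radial_speed(doppler_shift_hz: float) -> float:
    """Closing speed from Doppler shift (positive = target approaching)."""
    return doppler_shift_hz * C / (2 * F_TX)

# An echo arriving 333 ns later, shifted up by 5.13 kHz:
print(f"range: {target_range(333e-9):.1f} m")      # ~49.9 m
print(f"speed: {radial_speed(5130):.1f} m/s")      # ~10.0 m/s closing
```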
This is one of the oldest technologies in use for automated driving systems, with the first radar-detection systems dating back to the early 1900s. Its use in cars dates back more than 60 years, to the 1959 Cadillac Cyclone concept, which prominently featured a pair of radar “pods” on its nose to alert the driver of obstacles. It would take decades for radar to become compact and reliable enough for production cars to offer, however.
“V” Is Very, Very Extrasensory
As a result, the first consumer-available vehicle sensing technology actually used lidar, not radar. Lidar works on the same premise as radar, measuring the time it takes for wave pulses to reflect back, but it uses light waves instead of radio. Mitsubishi used a lidar-based vehicle detection system in the 1992 Debonair, offered exclusively in the Japanese market. It was rudimentary compared to modern systems: it had no throttle or brake control, and could only alert the driver to obstacles and disengage overdrive for gentle deceleration. Lidar is also less effective in bad weather and on wet roads, as its light pulses are highly susceptible to diffraction and stray reflections. Mitsubishi’s system was further limited in that it only operated when the cruise control was engaged, rather than constantly.
Mitsubishi continued development of its driver-assistance tech, adding throttle control in 1995 and a front-facing camera that used visual contrast processing to “see” road lines. A servo in the steering column could gently nudge the car if the camera system determined the driver was drifting out of their lane. This was a very early attempt at Level 2 automation, although the lack of automatic braking and reliance on lidar kept it from being very useful.
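The details of Mitsubishi’s implementation aren’t public, but the general idea of contrast-based lane sensing can be sketched in a few lines: scan a row of pixels for sharp brightness transitions (painted lines against dark asphalt) and compare the midpoint between them to the camera’s centerline. A toy version on a synthetic scanline, with a made-up threshold:

```python
import numpy as np

# Toy contrast-based lane sensing, loosely in the spirit of early camera
# systems. The threshold and synthetic scanline are invented for
# illustration, not any production algorithm.

def lane_offset(scanline: np.ndarray, threshold: int = 60) -> float:
    gradient = np.diff(scanline.astype(int))      # pixel-to-pixel contrast
    edges = np.flatnonzero(np.abs(gradient) > threshold)
    if len(edges) < 2:
        raise ValueError("no lane lines found")
    lane_center = (edges[0] + edges[-1]) / 2      # midpoint of outer edges
    # Negative: lane center sits left of the camera's centerline.
    return lane_center - (len(scanline) - 1) / 2

# Dark asphalt (40) with two bright painted lines (200):
row = np.full(128, 40, dtype=np.uint8)
row[20:24] = 200
row[100:104] = 200
print(f"offset from center: {lane_offset(row):+.1f} px")   # -2.5 px
```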
Cameras remained in limited use until recently. Unlike radar, they have the same limitations the human eye does, so rain and fog can rapidly render them useless. And unlike lidar, doing anything beyond the most basic image processing (say, looking for high-contrast speed limit signs or road lines) requires immense amounts of computing power.
“E” Is Even More Than Any Eye You Adore Can
Mercedes-Benz was the first to market with radar-based cruise control in 1999, with a system called Distronic. Distronic, offered on the S-Class, was similar to modern adaptive cruise control systems, with the ability to apply both throttle and (some) brake force to match the leading vehicle’s speed. Radar is largely unaffected by bad weather and is functional over a much longer distance than lidar, which made it a perfect match for long-distance cruise control. However, radar is much less precise than lidar, which limits its usefulness primarily to roads with higher speeds and wide lanes, such as highways.
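The control problem underneath is a follower loop: hold a time-based gap to the car ahead and smooth out any error. A hypothetical proportional controller with invented gains (Distronic’s actual control logic is, of course, far more elaborate):

```python
# Minimal adaptive-cruise sketch: hold a time gap behind the lead car.
# Hypothetical gains and a plain proportional law -- production controllers
# add filtering, rate limits, comfort constraints, and much more.

TIME_GAP_S = 1.8          # desired following gap, seconds
K_GAP = 0.1               # gain on gap error
K_SPEED = 0.5             # gain on relative speed

def accel_command(ego_speed, lead_speed, gap_m):
    """Return desired acceleration in m/s^2 (negative = brake)."""
    desired_gap = TIME_GAP_S * ego_speed
    gap_error = gap_m - desired_gap            # positive: too far back
    speed_error = lead_speed - ego_speed       # positive: lead pulling away
    return K_GAP * gap_error + K_SPEED * speed_error

# Ego at 30 m/s (~67 MPH), lead car at 28 m/s, 50 m ahead:
print(f"{accel_command(30, 28, 50):+.2f} m/s^2")   # -1.40: brake gently
```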
Radar was, however, the gateway to much more complex systems. In 1999, John Vaughan, an executive at the contractor that supplied the S-Class’s sensors, stated that “Adaptive cruise control is the first system in a network of sensors… It’s the beginning of the microwave era in automotive electronics.”
It didn’t take long for Vaughan’s prediction to prove prescient. By 2003, Honda had launched the world’s first automatic emergency braking system, debuting on the Japan-only Inspire and later reaching US buyers with the 2006 Acura TL. This “Collision Mitigation Braking System”, or CMBS, used a radar sensor to monitor traffic ahead. CMBS would alert the driver to slowed or stopped cars, and if the driver didn’t intervene, it would pre-tension the seatbelts and apply maximum braking force.
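The trigger behind a system like this is time-to-collision: the current gap divided by the closing speed. Here is a sketch of the staged response described above, with invented thresholds rather than Honda’s actual calibration:

```python
# Staged emergency-braking logic keyed off time-to-collision (TTC).
# The stages mirror the CMBS description above; the thresholds are
# assumptions made for illustration.

def ttc_seconds(gap_m: float, closing_speed_ms: float) -> float:
    if closing_speed_ms <= 0:
        return float("inf")        # not closing; no collision risk
    return gap_m / closing_speed_ms

def aeb_stage(gap_m: float, closing_speed_ms: float) -> str:
    ttc = ttc_seconds(gap_m, closing_speed_ms)
    if ttc > 3.0:
        return "monitor"
    if ttc > 1.6:
        return "warn driver"       # chime and dash alert
    if ttc > 0.8:
        return "pre-tension seatbelts, partial braking"
    return "maximum braking"

print(aeb_stage(gap_m=25, closing_speed_ms=20))   # TTC 1.25 s -> pre-tension
```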
From here, combinations of sensory systems began to proliferate. Modern cars with advanced safety systems often have a combination of traditional cameras, radar, and lidar to paint the fullest picture possible of the world around them.
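Reconciling three sensors that each report a slightly different range to the same object is the fusion problem in miniature. One textbook approach, shown here as a sketch with assumed noise figures rather than any automaker’s pipeline, is inverse-variance weighting: trust each sensor in proportion to its precision.

```python
# Inverse-variance fusion of range estimates from camera, radar, and lidar.
# A textbook sketch: noisier sensors get less weight. The readings and
# noise figures below are assumed for illustration.

SENSORS = {
    # name: (range estimate in m, standard deviation in m)
    "camera": (41.8, 2.0),    # good at classification, coarse at range
    "radar":  (40.1, 0.5),    # precise range, coarse lateral resolution
    "lidar":  (40.3, 0.1),    # precise range, degraded in rain and fog
}

def fuse(readings):
    weights = {name: 1 / sigma**2 for name, (_, sigma) in readings.items()}
    total = sum(weights.values())
    return sum(weights[n] * readings[n][0] for n in readings) / total

print(f"fused range: {fuse(SENSORS):.2f} m")   # 40.30 m, dominated by lidar
```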
Level 3 Is All That I Can Give To You
Level 3 systems such as Mercedes-Benz’s Drive Pilot pair these technologies with even more sensory and positional hardware, such as road-moisture monitors and GPS antennas, just to work at limited speeds in sunny, dry conditions on specific roads. While so much redundancy can seem like overkill, self-driving cars have already suffered fatal accidents even with human oversight, which underscores the need for multiple systems in case one fails.
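In practice, a Level 3 system’s first job is deciding whether it is allowed to engage at all. A hypothetical sketch of that gate, built from the conditions mentioned above (the specific limits and checks are assumptions, not Mercedes-Benz’s certified criteria):

```python
# Hypothetical operating-condition gate for a Level 3 system: every
# condition must hold before the car may take over. The limits are
# illustrative, not Drive Pilot's real certification criteria.

from dataclasses import dataclass

@dataclass
class Conditions:
    speed_mph: float
    road_is_mapped: bool       # geofenced, clearly marked highway
    lane_markings_ok: bool
    road_dry: bool             # from the road-moisture monitor
    weather_clear: bool
    gps_fix_ok: bool

def may_engage(c: Conditions) -> bool:
    return (c.speed_mph <= 40
            and c.road_is_mapped
            and c.lane_markings_ok
            and c.road_dry
            and c.weather_clear
            and c.gps_fix_ok)

print(may_engage(Conditions(35, True, True, True, True, True)))   # True
print(may_engage(Conditions(35, True, True, False, True, True)))  # False: wet road
```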
This equates to a lot of data: 34 gigabytes a minute, in the case of Drive Pilot. Intel estimated that the average self-driving car will generate 4,000 gigabytes of driving data per day (which would cost, at current storage rates, roughly $350,000 a year in server space). All of this data must also be processed in real time, which is computationally intense. If all 1.47 billion cars on Earth were self-driving, they would use four orders of magnitude more computing power than every data center Facebook owns, and would require more electricity than the entire country of Argentina uses.
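Those numbers are easy to sanity-check with some back-of-the-envelope arithmetic, assuming a typical cloud rate of about $0.02 per gigabyte-month (our assumption, not a quoted price):

```python
# Back-of-the-envelope check on the storage figures above. The rate is an
# assumed, typical cloud-storage price, not a quoted figure, and the model
# crudely holds the whole year's data for a full twelve months.

GB_PER_DAY = 4_000                 # Intel's per-car estimate
DAYS = 365
RATE_PER_GB_MONTH = 0.02           # assumed cloud storage price, USD

yearly_gb = GB_PER_DAY * DAYS                       # 1,460,000 GB (~1.46 PB)
yearly_cost = yearly_gb * RATE_PER_GB_MONTH * 12

print(f"{yearly_gb / 1e6:.2f} PB/year")    # 1.46 PB
print(f"${yearly_cost:,.0f}/year")         # ~$350,400
```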
The main bottleneck going forward, then, is not just refining the sensors, but dealing with the deluge of data. That is a problem that computer engineers have dealt with since ENIAC, and it’s likely automotive engineers will simply have to hope computing breakthroughs — either for processing power, or algorithms — come through. Until then, try not to rely on your car to save your bacon all the time.