Why stereo vision cameras are integral to autonomous driving | Automotive News

2022-10-17 01:57:08 | By Ms. Amanda Zhan

There's nothing remarkable about cameras anymore. They've been around for centuries, and we carry them in our pockets every day now without a thought. But in fact, they are remarkable even today, as cameras are integral to one of the most transformative innovations of our time — autonomous vehicles and advanced driver assistance systems.

Combined with another age-old technology — stereo vision — cameras have become a cornerstone of autonomous vehicles. Stereo vision cameras enable autonomous cars and trucks to navigate roads more safely than systems that rely on a single camera, radar or lidar alone, especially when they're traveling on highways at high speeds.

Modern software has evolved as well, and it's the key to making 19th-century technology effective for autonomous driving. New automotive chips with sufficient processing power using advanced software algorithms are now produced at scale, enabling video from multiple cameras to provide the reliable, high-fidelity 3D information autonomous vehicles need to navigate safely at all times, in all road and weather conditions.

Stereo vision dates to 1838, when British scientist Charles Wheatstone found that two pictures of the same object, each viewed by one eye and from a slightly different perspective, created the illusion of three dimensions. Smithsonian Magazine called stereo vision "the original virtual reality," noting that Oliver Wendell Holmes was an early enthusiast. "The mind feels its way into the very depths of the picture," Holmes wrote in 1859. "The scraggy branches of a tree in the foreground run out at us as if they would scratch our eyes out." Holmes coined the term "stereograph" and designed a simple "stereoscope" that set off a "stereography boom" across the country. Queen Victoria contributed to the boom when she admired stereographs displayed at London's Great Exhibition of 1851.

Now, nearly two centuries later, stereo vision is more than relevant — it's vital to the next-generation technology of autonomous driving and other modern functions like last-mile delivery, as well as future use in robots and robotaxis. "Humans have two frontal-parallel eyes that see the world from two slightly different positions. It is intriguing that evolution has created this particular structure," researchers Yang Liu and J.K. Aggarwal wrote. "Can we do the same thing with computers?" They call this effort "one of the most fundamental and fruitful areas of computer vision."

Billions of cameras are now produced every year, and they keep getting better — in some respects, they now surpass the human eye.

The most significant development, however, is the use of multiple independently mounted cameras on autonomous passenger vehicles and trucks. Multi-camera systems with stereo vision can measure distances from different angles with high precision, in real time, and detect objects in the road at long range — up to 1,000 meters. Through triangulation, multi-camera systems measure the size of objects with great accuracy, even those as small as 15 cm at 150-meter range. During highway travel, long-range vision and object detection have the potential to save lives.
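The triangulation at the heart of these systems reduces, in the idealized rectified case, to a single relationship: depth is focal length times baseline divided by disparity. A minimal sketch, with purely illustrative parameter values (the article does not specify any camera's focal length or baseline):

```python
# Idealized stereo triangulation for a rectified camera pair.
# focal_px and baseline_m are assumptions for illustration only.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With a hypothetical 2,000 px focal length and 1 m baseline,
# a 2 px disparity corresponds to an object 1,000 m away.
print(depth_from_disparity(2000, 1.0, 2.0))  # 1000.0
```

The same relationship shows why small disparity errors matter most at long range: at 1,000 m the entire measurement rests on just a couple of pixels of disparity.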

Depth estimation using monocular cameras and neural networks is widely considered an ill-posed problem. Depth is inferred by perspective, shading and other cues that are learned from a corpus of images. These techniques are known to be fallible. For example, such systems cannot distinguish between a real traffic scene and a picture of a traffic scene. Another approach, called "structure from motion," compares one frame to the next, but the perspective isn't always wide enough, and it doesn't work when the car isn't moving.

Some companies, including Tesla, train autonomous vehicles with artificial intelligence, exposing them to a million hours of driving data so they'll learn to think like humans. The vehicle's AI "brain" infers what's happening around it. But monocular cameras can't measure distances reliably or identify arbitrary objects at long range. They can't tell if a 3D zebra crossing painted on the road is an actual physical structure, for instance, or if a landscape up ahead is real or just a billboard.

All self-driving vehicles need to detect potentially hazardous objects while they are far enough away to take evasive action. If an autonomous truck encounters a fallen motorcyclist on the highway, say, it needs 350 meters to change lanes or stop in time. Radar, lidar, monocular and AI vision systems don't have that capability; multi-camera stereo vision systems do.
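The 350-meter figure is plausible from basic kinematics: distance covered during system reaction time plus braking distance. A back-of-the-envelope sketch, with hypothetical values for speed, reaction time and deceleration (a loaded truck brakes far more gently than a passenger car; these numbers are assumptions, not a certified braking model):

```python
# Stopping distance = reaction distance + braking distance:
# d = v * t_react + v^2 / (2 * a). All parameter values are illustrative.

def stopping_distance_m(speed_mps: float, reaction_s: float, decel_mps2: float) -> float:
    return speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)

# 30 m/s (~108 km/h), 1.5 s reaction, 1.5 m/s^2 gentle deceleration:
print(round(stopping_distance_m(30, 1.5, 1.5)))  # 345 -- on the order of the 350 m cited above
```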

Many technological advances have converged to revitalize stereo vision cameras. They can be calibrated across 22 dimensions in real time, an immensely complex task enabled by advanced algorithms and in part by new dedicated processors for computer vision and neural networks.

Most importantly, new software developments enable imagers to be "untethered," allowing multiple cameras to be widely separated and placed in a variety of orientations. The resulting longer baselines extend accurate stereo depth measurement to far greater ranges.
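The value of a longer baseline can be sketched with the standard stereo depth-error approximation, ΔZ ≈ Z²·Δd / (f·B): error grows with the square of depth, so doubling the baseline halves the error at any given range. The focal length, disparity uncertainty and baselines below are illustrative assumptions:

```python
# Depth uncertainty of a stereo pair: delta_Z ~ Z^2 * delta_d / (f * B).
# A wider baseline B directly tightens long-range depth estimates.

def depth_error_m(depth_m: float, focal_px: float, baseline_m: float,
                  disparity_err_px: float = 0.25) -> float:
    return depth_m**2 * disparity_err_px / (focal_px * baseline_m)

# Compare a bumper-width rig with widely separated, "untethered" cameras:
for baseline in (0.3, 1.0, 2.0):
    print(f"B={baseline} m -> +/-{depth_error_m(500, 2000, baseline):.1f} m at 500 m")
```

This is why uncoupling the imagers matters: a baseline limited to a single housing caps usable range, while independently mounted cameras spread across the vehicle extend it.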

Current technology and trends can help us imagine what lies beyond the horizon for cameras and stereo vision technology. For one, 3D cameras are likely to become a standard feature of mobile devices, in part because they'll be essential to augmented reality and the metaverse. Most dramatically, we can expect to experience fully immersive photographs, 3D projections that can be viewed from any angle. On the professional side, 3D cameras using AI will be more widely adopted; they're already used in manufacturing, construction, industrial automation and entertainment, among other industries.

Stereo vision cameras are also heading toward a new machine learning application: robotics, for automating manufacturing and quality control. Growing demand for vision-guided robotic systems will drive market expansion across varied industries, from textiles and packaging to medicine, chemicals and food.

While stereo vision cameras become ubiquitous, more familiar cameras might fade away. Compact digital cameras are already redundant in the age of smartphones, and new mirrorless cameras seem destined to eclipse classic DSLR cameras.

Whatever comes to pass, the highest future achievement for stereo vision cameras promises to be truly remarkable — space missions. Scientists say spacecraft will one day use stereoscopic systems to manage robotics in orbit and help rovers navigate the uncharted planets they land on. You have to wonder what that camera's pioneers would think about that history in the making.

