Nvidia, a pioneer of visual computing, has launched the Drive PX platform for cars, which will take self-piloted cars to the next level. It was unveiled at the International Consumer Electronics Show in Las Vegas.

The DRIVE PX platform is based on the NVIDIA Tegra X1 processor, enabling smarter, more sophisticated advanced driver assistance systems (ADAS) and paving the way for the autonomous car. Tegra X1 delivers 1.3 gigapixels/second of throughput – enough to handle twelve two-megapixel cameras, with some cameras running at frame rates up to 60 fps. The platform is equipped with 10 GB of DRAM and combines surround computer vision (CV) technology, extensive deep learning training, and over-the-air updates to transform how cars see, think, and learn.
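As a back-of-the-envelope check on that throughput figure, the sketch below totals the pixel rate of twelve two-megapixel cameras. Note that running all twelve at the full 60 fps would need about 1.44 gigapixels/second, slightly above the quoted 1.3 – which is consistent with only some of the cameras operating at the maximum frame rate.

```python
# Back-of-the-envelope pixel throughput for the camera setup
# described above (illustrative arithmetic, not NVIDIA's spec sheet).
NUM_CAMERAS = 12
PIXELS_PER_FRAME = 2_000_000  # two-megapixel sensors
FPS = 60                      # upper bound; not all cameras run this fast

pixels_per_second = NUM_CAMERAS * PIXELS_PER_FRAME * FPS
print(f"{pixels_per_second / 1e9:.2f} gigapixels/second")  # prints 1.44
```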

The Tegra X1 computing chip packs more than one teraflop of computing power into a chip the size of a thumbnail. With as much processing power as the world’s most powerful supercomputer from just 15 years ago, the chip is based on the new hyper-efficient Maxwell architecture and draws only about 10 watts of power. At a press event Sunday, Jen-Hsun Huang, Nvidia’s CEO, said the devices will provide “more computing horsepower inside a car than anything you have today.”

Deep learning capabilities set Drive PX apart from its predecessors, giving it the ability to identify and differentiate, say, an ambulance from a delivery truck. It can interpret its surroundings, detecting and classifying objects based on proximity. The software can detect objects such as cars, people, bicycles, and signs, even when they are partly hidden. In addition, the software can be remotely updated, so car manufacturers can fix bugs and add new functionality.
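To illustrate the classification step described above, here is a toy sketch (not NVIDIA's actual software) of how a deep network distinguishes classes: the network emits a raw score per class for each frame, a softmax converts the scores to probabilities, and the highest-probability label becomes the prediction. The class list and scores here are purely hypothetical.

```python
import math

def softmax(logits):
    """Convert raw class scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

CLASSES = ["ambulance", "delivery truck", "car", "bicycle"]
logits = [4.1, 2.3, 1.0, -0.5]  # hypothetical network output for one frame

probs = softmax(logits)
prediction = CLASSES[probs.index(max(probs))]
print(prediction)  # prints "ambulance"
```

In a real system this final step sits behind a convolutional network processing camera frames, but the decision rule at the end is the same: pick the class with the highest probability.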

“It’s pretty cool to bring this level of powerful computation into cars,” said John Leonard, a professor of mechanical engineering at MIT, who works on autonomous-car technology. “It’s the first such computer that seems really designed for a car—an autopilot computer.”

This platform equips the car with true self-parking capabilities. DRIVE PX delivers the massive processing power needed for techniques like structure-from-motion (SFM) and simultaneous localization and mapping (SLAM) using four surround-view cameras that cover the immediate area around the car. Additional cameras allow for greater distance coverage in forward and cross-traffic viewpoints. Conventional surround-view systems show the driver a virtual view of the area around the car, but often have poor image quality due to warping effects from the fisheye camera lenses. DRIVE PX uses sophisticated SFM and advanced stitching for better image rendering and reduced “ghosting”, where, for example, a line on the pavement can appear in two places at once.

Two days after NVIDIA Tegra X1’s official launch, Audi confirmed that it will use the new mobile superchip in developing its future automotive self-piloting capabilities (source: Nvidia).

“With every mile it drives, every hour, the car will learn more and more,” said Ricky Hudi, Audi’s executive vice president for electrics/electronics development.

Audi is confident these features will be eagerly greeted. Research indicates that one-third to one-half of those who buy luxury cars would choose self-driving features. From there, they will come to be seen as key mainstream safety offerings.

“We’ll start with premium cars and the next step would be democratization of it in volume models,” said Ulrich Hackenberg, Audi’s R&D chief. “You can’t offer safety only for premium customers; you have to give it to everyone.”



(Image credit: NVIDIA)

 
