Formula E is a racing series for electric cars, but Roborace goes a step further by replacing the driver with electronics and programming.
Roborace cars will use Nvidia's recently announced Drive PX 2 'AI supercomputer,' CEO and co-founder Jen-Hsun Huang said today.
Ten teams will take part in Roborace, he said, each entering two electric-powered cars.
"Drive PX is really a platform, and it scales," Shapiro explained. Drive PX 2 is a "production grade" platform already being used by around 80 organisations including traditional car makers, startups and research centres"
Exactly how much processing power production vehicles will need depends on the applications car companies build on top of Nvidia's DriveWorks software (voice or gesture command processing, infotainment and so on). Shapiro said there will probably be optimised versions of the board once those requirements become clear - using six rather than 12 cameras, for example.
Huang pointed out that the Drive PX 2 is equivalent to the boot-sized, GPU-accelerated supercomputer (pictured below) that Baidu built for its experimental autonomous vehicle. In the photo, Huang is holding a Drive PX 2 in his hands - that's a lot less hardware to carry around in a vehicle.
Drive PX 2 features include the ability to detect up to 15,000 important points in every frame from each of its 12 camera inputs.
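To give a rough sense of what per-frame interest-point detection involves, here is a toy detector that flags local maxima of the image gradient and caps the result at 15,000 points per frame. This is purely illustrative - Nvidia has not described its actual algorithm, and everything below (the thresholds, the synthetic frame) is invented.

```python
import numpy as np

def detect_keypoints(frame, threshold=0.1, max_points=15000):
    """Toy interest-point detector: flags pixels whose gradient
    magnitude is a local maximum above a threshold."""
    gy, gx = np.gradient(frame.astype(float))
    mag = np.hypot(gx, gy)
    # Compare each interior pixel against its four neighbours.
    interior = mag[1:-1, 1:-1]
    is_max = (
        (interior >= mag[:-2, 1:-1]) & (interior >= mag[2:, 1:-1]) &
        (interior >= mag[1:-1, :-2]) & (interior >= mag[1:-1, 2:]) &
        (interior > threshold)
    )
    ys, xs = np.nonzero(is_max)
    # Keep the strongest responses, capped at max_points per frame.
    order = np.argsort(interior[ys, xs])[::-1][:max_points]
    return np.column_stack((ys[order] + 1, xs[order] + 1))

# Synthetic frame with a bright square: its edges yield keypoints.
frame = np.zeros((64, 64))
frame[20:40, 20:40] = 1.0
points = detect_keypoints(frame)
print(len(points))  # number of detected interest points
```

A production pipeline would run a step like this (typically on the GPU) for all 12 camera streams simultaneously, every frame.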
The design includes support for crowdsourced HD maps: finely detailed information about lanes, junctions and so on is sent to the cloud, and the resulting maps are made available to other vehicles. The more information that is already available about the road, the less the autonomous car has to collect and process itself. Nvidia is already working with TomTom and other mapping and navigation companies.
DriveNet is based on the 'deep learning' concept: rather than trying to program a set of instructions that explain how to drive a vehicle, the software goes through a trial-and-error process and learns how to achieve a satisfactory outcome.
But that trial and error doesn't all have to happen in the real world. Simulated inputs, combined with appropriate feedback, give the model a chance to learn about rare situations ('corner cases') that call for special treatment.
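The trial-and-error loop described above can be sketched with a tiny tabular Q-learning agent in a simulated world. Everything here (the one-dimensional strip, the rewards, the parameters) is invented for illustration; DriveNet itself uses deep neural networks on camera data, not a lookup table.

```python
import random

# Toy trial-and-error learning: an agent on a 1-D strip must stop on
# the "parking space" cell. The simulator stands in for real driving,
# so rare situations can be replayed cheaply and safely.
SPACE, LENGTH, ACTIONS = 7, 10, [-1, +1]  # move left / move right

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(LENGTH)]  # Q-value per (state, action)
    for _ in range(episodes):
        pos = rng.randrange(LENGTH)
        for _ in range(30):
            # Explore occasionally, otherwise take the best-known action.
            a = rng.randrange(2) if rng.random() < eps else \
                max((0, 1), key=lambda i: q[pos][i])
            nxt = min(max(pos + ACTIONS[a], 0), LENGTH - 1)
            reward = 1.0 if nxt == SPACE else -0.01  # feedback signal
            q[pos][a] += alpha * (reward + gamma * max(q[nxt]) - q[pos][a])
            pos = nxt
            if pos == SPACE:
                break
    return q

def drive(q, pos):
    """Follow the learned policy greedily from a starting position."""
    steps = 0
    while pos != SPACE and steps < 20:
        pos = min(max(pos + ACTIONS[max((0, 1), key=lambda i: q[pos][i])],
                      0), LENGTH - 1)
        steps += 1
    return pos

q = train()
print(drive(q, 0))  # the trained policy reaches the space at cell 7
```

No driving instructions were ever written down: the policy emerges purely from simulated feedback, which is the point of the approach.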
Nvidia technical marketing analyst Robert Perry previously demonstrated this ability to journalists, using a CAD model of a car park - the deep learning model had to drive around looking for a space and then manoeuvre into it. To make the outcome intelligible to humans, the environment and the moving vehicle are visualised using a 3D gaming engine.
The models would normally be trained on a DGX-1 system and then transferred to a Drive PX or PX 2 to actually control a vehicle, said Huang.
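The split Huang describes - heavy training on a DGX-1-class machine, lightweight inference on the in-car board - can be sketched as a train/export/load workflow. The model below (a linear classifier), the JSON weight file and all the function names are invented for illustration; a real pipeline would export a trained neural network to the embedded runtime instead.

```python
import json
import numpy as np

def train_model(x, y, lr=0.1, epochs=200):
    """Plain logistic-regression gradient descent: the 'big machine' step."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        w -= lr * x.T @ (p - y) / len(y)
        b -= lr * float(np.mean(p - y))
    return w, b

def export_weights(w, b, path):
    with open(path, "w") as f:   # only the learned weights ship
        json.dump({"w": w.tolist(), "b": b}, f)

def infer(path, x):
    with open(path) as f:        # the 'in-vehicle' side just loads and runs
        m = json.load(f)
    return (x @ np.array(m["w"]) + m["b"]) > 0

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
y = (x[:, 0] + x[:, 1] > 0).astype(float)  # linearly separable labels
w, b = train_model(x, y)
export_weights(w, b, "model.json")
print(infer("model.json", np.array([[2.0, 2.0], [-2.0, -2.0]])))
```

The design choice is that training needs gradients, large datasets and lots of compute, while the deployed side only needs a forward pass over fixed weights - which is why a Drive PX-class board suffices in the car.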
Disclosure: the writer attended the GPU Technology Conference as a guest of Nvidia.