Why These German Researchers Became Self-Driving Cars

You’re lying on your stomach, with your arms draped forwards, almost like you’re going to get a shoulder massage. Except this is not a moment for relaxation. Through a VR headset, you see flashes of color, an unfamiliar view of the world, a group of red lines that looks something like a person. And now you have to make a decision, because you’re rolling forward, head first, and your right hand is wrapped around the joystick that determines which way you’re going. Do you continue forward, and risk hitting that blob that might be a human being? Or instead swerve to the side, into a patch of darkness, full of who knows what?

This is your taste of life as a self-driving car. It comes courtesy of engineers at Moovel Lab, a Stuttgart, Germany-based experimental arm of Daimler. The ride in question is The Rover, a four-wheeled electric vehicle. The VR headset mimics the sorts of data autonomous vehicles use to interpret their surroundings. You’re lying on your stomach because those engineers want you to feel ill at ease.

“We wanted to have this experience of becoming the car. If you’re sitting, it becomes too much like you’re driving the car,” says Joey Lee, one of the designers. “It just feels much more vulnerable in that position.”

As autonomous tech slowly steps into the real world, the humans who stay behind the wheel will find themselves sharing the road with robots that take an entirely new approach to driving. As any student of history knows, misconceptions about others are a key catalyst for conflict. The Moovel engineers want us all to get along, and that means trying some cultural exchange.

Sure, engineers can explain how their vehicles build point clouds from laser returns, run machine learning algorithms, and use that data to decide on steering angles and acceleration rates. But for the non-engineers out there, those ideas tend to stay abstract. And it turns out a ride on an overgrown dolly may be worth a thousand lectures.
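
To make that pipeline slightly less abstract, here is a minimal, purely illustrative sketch of how a planner might turn a point cloud into steering and acceleration decisions. Every name, threshold, and convention below is invented for illustration; a real AV stack is vastly more sophisticated.

```python
import numpy as np

def plan_step(point_cloud, lane_half_width=1.0, safe_distance=5.0):
    """Toy planner: brake and steer away from the nearest obstacle ahead.

    point_cloud: (N, 3) array of (x, y, z) points in meters, with x
    pointing forward and y pointing left, relative to the vehicle.
    """
    # Keep only points that are ahead of the car and inside our lane.
    mask = (point_cloud[:, 0] > 0) & (np.abs(point_cloud[:, 1]) < lane_half_width)
    in_lane = point_cloud[mask]

    if len(in_lane) == 0:
        return {"steering_deg": 0.0, "accel_mps2": 1.0}   # clear road: speed up

    nearest = in_lane[np.argmin(in_lane[:, 0])]
    if nearest[0] > safe_distance:
        return {"steering_deg": 0.0, "accel_mps2": 0.0}   # obstacle far: hold speed

    # Obstacle too close: brake hard and steer toward the emptier side.
    steering = 15.0 if nearest[1] < 0 else -15.0          # positive = steer left
    return {"steering_deg": steering, "accel_mps2": -3.0}

# Hypothetical frame: one lidar return 3 m ahead, slightly to the right.
cloud = np.array([[3.0, -0.4, 1.1]])
print(plan_step(cloud))   # brakes and steers left, away from the point
```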

The Rover gathers data from a 3-D camera, which, like the sensor in a Microsoft Kinect, monitors moving objects. A simple lidar sensor measures how far you are from those objects. The onboard computer fuses it all and gives you, through the headset, a series of multicolored lines that occasionally coalesce into recognizable shapes. It does its best to guess what they are—like a pedestrian or car—and even tells you, with a percentage, how confident it is in its guess. This is an artistic approximation of how AVs see the world: the goal is to simulate the experience, not perfectly reproduce a computer’s understanding of lidar and radar data.
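
That guess-plus-percentage display mirrors what object-detection output generally looks like: each detected shape carries a class label and a confidence score. Here is a hedged sketch, with hypothetical names and values, of how such an overlay might be assembled:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # the classifier's best guess, e.g. "pedestrian"
    confidence: float  # how sure it is, between 0.0 and 1.0
    distance_m: float  # range to the object, from the lidar

def overlay_text(detections):
    """Render detections the way the Rover's headset labels them:
    a guess plus a percentage, so the rider sees the uncertainty."""
    ordered = sorted(detections, key=lambda d: d.distance_m)
    return "\n".join(
        f"{d.label} ({d.confidence:.0%}) at {d.distance_m:.1f} m" for d in ordered
    )

# Hypothetical frame: the system is fairly sure about the car,
# much less sure about the blob that might be a person.
frame = [Detection("car", 0.91, 12.4), Detection("pedestrian", 0.57, 6.8)]
print(overlay_text(frame))
# pedestrian (57%) at 6.8 m
# car (91%) at 12.4 m
```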

The Moovel team has taken the setup to exhibitions and conferences, and used it for informal interviews rather than rigorous experiments. They’re keen to get people thinking about some of the issues, and believe making them tangible makes them easier to discuss. They say most of the volunteers who have gone for a ride found it fun—eventually—and informative.

Rolling a mile in a soulless robot’s tires may seem pointless, but the Moovel researchers see value in understanding, communication, and even empathy between people and driverless cars. With their plethora of cameras and other sensors, it’s easy to assume that robocars will be all seeing, all knowing. But seeing and processing are two different things. The intelligence that makes decisions has to register and react to an object that appears in front of a camera. And that AI is a black box, even to the developers who train it with hundreds of thousands of examples of what not to hit. Moovel believes everyone should try to pick up at least a basic understanding of how it works—and its potential limitations.

“One thing that we do want to raise is how many sensors is enough to be confident that your machine is able to see the things that are necessary,” says Lee. If you step out into the path of an AV, will it definitely spot you, recognize you as a person, and stop? If you’re riding in a driverless taxi and it starts snowing, do you know how badly its view of the road ahead is degraded? The more answers we have, the better we’ll all be able to live in peace.

The folks building real self-driving cars are tackling this communication gap, without the terrifying bit. Waymo and Uber have each developed interfaces that translate for human eyes what the car is doing, and how it sees the world. When in Autopilot mode, Tesla cars show a basic representation of what they see in the instrument cluster, an easy way to double-check that the car really has spotted the vehicle cutting in front of you.

Maybe one day, in the utopian future of crash-less computer drivers, none of this will be necessary. But for the foreseeable future, when AVs with their learner’s permits are sharing the roads with humans who’ve never encountered them before, a better two-way understanding, and even a little empathy, will keep everyone safer.


Source: https://www.wired.com/story/moovel-self-driving-car-experiment/
