As cyclists, we have very good reasons to be skeptical.
First off, how reliable is the technology? So far, not reliable enough to put our full faith in it. A well-publicized incident in 2016 centered on the driver of a Tesla Model S who was killed when his car drove straight into a tractor trailer while he was using Tesla's Autopilot system. It is believed that the car's sensors failed to detect the truck's trailer, which may have blended in with the color and brightness of the sky. In Tesla's defense, the driver, Joshua Brown, was not using the system as it was designed to be used - that is to say, the system is intended to assist the driver during momentary lapses in attention, not to take over completely for a driver who relinquishes control of the car for an extended period. Though Brown's family and lawyers have disputed it, some of the first people on the scene after the crash reported that he had been watching a movie when it occurred. A National Transportation Safety Board (NTSB) investigation into the crash determined that the driver was essentially misusing the Autopilot system by over-relying on it (the NTSB found that he had his hands on the wheel for only about 25 seconds out of a 39-minute period of driving).
But what is to keep other drivers from doing the same thing? Considering how many people without autonomous technology think nothing of taking their eyes off the road for extended moments to send or read text messages, it should not be surprising that those with such technology would be even more inclined to put their faith in the autonomous driving functions for even longer periods.
Another incident, this time non-fatal, happened just last month when another Tesla Model S slammed into the back of a stopped fire truck. It couldn't see a big red fire truck in its path? Seriously?
Yet Tesla's owner's manual even acknowledges that this exact situation can be a problem for the automated driving system. It states: "Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h) and a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you instead." Basically, the system gets confused by a changing and unpredictable traffic situation.
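There's a real engineering reason behind that warning, and it's worth understanding. Forward-looking radar picks up every stationary object along the road - overhead signs, bridges, parked cars - so a common shortcut in adaptive cruise systems is to simply ignore returns that aren't moving relative to the ground. Here is a deliberately simplified sketch of that logic; to be clear, this is my own illustration, not Tesla's actual code, and the names and threshold are invented:

```python
# Simplified sketch: why a radar-based cruise controller can ignore a
# stopped vehicle. Raw radar returns include bridges, signs, and parked
# cars, so a naive filter keeps only objects moving relative to the road.

EGO_SPEED_MPS = 26.0  # our car, at roughly 58 mph

def ground_speed(ego_speed, closing_speed):
    """Speed of a radar return relative to the road surface."""
    return ego_speed - closing_speed

def pick_braking_target(radar_returns, ego_speed, min_ground_speed=1.0):
    """Choose the nearest MOVING return ahead; stationary returns are
    discarded to avoid phantom braking for roadside clutter - which is
    exactly how a stopped fire truck gets discarded along with them."""
    moving = [r for r in radar_returns
              if ground_speed(ego_speed, r["closing_speed_mps"]) > min_ground_speed]
    return min(moving, key=lambda r: r["range_m"], default=None)

# The lead car cuts out of the lane, leaving only stationary returns:
returns = [
    {"name": "stopped fire truck", "range_m": 60.0, "closing_speed_mps": 26.0},
    {"name": "overhead sign", "range_m": 90.0, "closing_speed_mps": 26.0},
]
print(pick_braking_target(returns, EGO_SPEED_MPS))  # None - no braking
```

Filter out the clutter, and you filter out the fire truck too.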
When these systems, which rely on a combination of radar, lidar, cameras, and GPS (plus an enormous amount of onboard computing power), can become "confused" by large, solid, and relatively predictable vehicles - what chance do we as cyclists have?
Developers of automated "driverless" technology freely admit that recognizing bicycles is a particularly difficult challenge. Bicycles are small. It can be difficult for computers to tell which direction they're heading. They tend (though not always) to move slower than surrounding traffic, but they can change direction very quickly. All the things that make cyclists "consternating" to human drivers make us a total puzzle for computers. These issues are exacerbated by the fact that many cyclists simply don't follow traditional traffic rules.
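To make the direction problem concrete: a computer typically infers an object's heading from its successive position fixes. Here's a toy illustration - entirely my own, not code from any actual self-driving system - showing that when an object barely moves between frames, sensor noise swamps the displacement and the estimated heading swings wildly:

```python
# Toy illustration: heading estimated from two noisy position fixes.
# The object is actually travelling due "east" (0 degrees) in both cases.

import math
import random

random.seed(1)  # repeatable results

def estimated_heading(speed_mps, dt=0.1, noise_m=0.3):
    """Heading in degrees, inferred from two position fixes taken dt
    seconds apart, each with Gaussian measurement noise."""
    x0, y0 = random.gauss(0.0, noise_m), random.gauss(0.0, noise_m)
    x1, y1 = random.gauss(speed_mps * dt, noise_m), random.gauss(0.0, noise_m)
    return math.degrees(math.atan2(y1 - y0, x1 - x0))

for label, speed in [("car at 30 m/s", 30.0), ("bicycle at 4 m/s", 4.0)]:
    headings = [round(estimated_heading(speed)) for _ in range(5)]
    print(label, headings)

# The car's estimates cluster near 0 degrees; the bicycle's scatter
# widely, because 0.4 m of travel per frame is comparable to the noise.
```

And a rider who then swerves around a pothole makes the estimate worse still.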
The tech-meets-transportation company Uber has also been developing self-driving technology, and it was revealed last year that their self-driving cars seemed unable to distinguish bike lanes from car lanes and, as a result, had difficulty spotting cyclists - and, potentially worse, had difficulty staying out of the bike lanes. Uber is still doing small-scale, localized testing of its tech, so it's unlikely that we'll be run down by a self-driving Uber (unless you live in Pittsburgh) - at least for now. Hopefully they'll clear that hurdle before they take the technology nationwide.
Some developers, acknowledging the weaknesses in their systems, seem to be trying to "share" the responsibility for safety by putting compatible technology onto bicycles, or onto the cyclists themselves, to help the cars' systems "see" them better. These solutions include chips or transmitters embedded in helmets or bikes, or special apps for riders' cell phones. All of that sounds great for those cyclists who can afford (and want) to equip themselves with the latest "smart" technology to keep from being run down by self-driving cars. But it leaves a huge segment of cyclists on the road completely vulnerable. Are these riders expendable? Living in an urban area, I regularly see riders on beat-up old bikes because they can't afford better - riders who are on bikes in the first place because they can't afford cars. Cheap bikes are their sole source of transportation. What are the car and tech companies doing for them? These cyclist-centered solutions seem to me to place the burden on cyclists, rather than on the drivers and the companies pushing the technology. It's like saying, "You don't want our automated cars to hit you? Then you need to wear this special 'smart suit' or 'smart helmet,' ride a special 'smart bike,' or strap on some other kind of 'smart sensors' every time you ride. Oh yeah, and it's up to you to pay for it all."
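For the record, here is roughly what those "solutions" amount to: a transmitter that broadcasts the rider's position for nearby cars to pick up. This is a purely hypothetical sketch - no standard protocol for this exists, and the port and message format are invented - but it also illustrates the catch: it only works if every rider carries a charged, app-equipped device.

```python
# Hypothetical "cyclist beacon": a phone app broadcasting the rider's
# position over the local network for nearby cars to listen for.
# The port number and message format are invented for illustration.

import json
import socket
import time

BEACON_PORT = 50507  # invented; no real standard assigns this

def broadcast_position(lat, lon, heading_deg, speed_mps):
    msg = json.dumps({
        "type": "cyclist_beacon",
        "lat": lat,
        "lon": lon,
        "heading_deg": heading_deg,
        "speed_mps": speed_mps,
        "ts": time.time(),
    }).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(msg, ("255.255.255.255", BEACON_PORT))

# Rider heading due east at about 10 mph:
broadcast_position(41.4993, -81.6944, 90.0, 4.5)
```

Simple enough - if you have the phone, the app, the battery life, and the inclination.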
Being the somewhat cynical and pessimistic person I am, I wouldn't dismiss the possibility that the automakers and tech companies could get together and pressure lawmakers to put the burden on cyclists legally, in the form of some kind of mandate. As this technology becomes more popular and profitable, if the companies can't make their systems more reliable at recognizing and reacting to cyclists, they could lobby to mandate that all cyclists strap on some variation of "smart" devices before taking to the roads - or else be held responsible for their own injuries when they get hit. Don't think that's likely? It's happened before: remember that the concept of "jaywalking" wasn't even a thing until the auto interests came up with it and got it written into the law books.
Another issue relates to a type of moral or ethical dilemma, sometimes referred to as the Trolley Problem, wherein a person must choose between two potentially deadly outcomes. In this case, the question is this: if an automated car has to choose between hitting another car and hitting a cyclist or pedestrian, which course will it take? It isn't difficult to imagine a scenario where this could present itself. Picture an automated car overtaking a cyclist when an oncoming car suddenly moves left-of-center. Does the automated car remain on its determined path and take a head-on collision with the other car? Or does it swerve right to avoid the car, but hit the cyclist?
Shockingly (or perhaps not-so-shockingly, depending on your level of cynicism), one car company has already made that determination, and it doesn't bode well for cyclists. According to an article in Car and Driver, Mercedes-Benz has already decided to program its next-level autonomous cars to prioritize the protection of the people inside the car - you know, the very people who shelled out big bucks for the self-driving technology with the expectation that it would keep them safer. Obviously, M-B wants to make sure its drivers live to buy another M-B. According to Christoph von Hugo, M-B's manager of driver assistance systems, "If you know you can save at least one person, save the one in the car. If all you know for sure is that one death can be prevented, then that's your first priority."
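Reduce that policy to code and it looks something like this. To be clear, this is a crude caricature of my own, not anything Mercedes has published - but it shows how starkly an "occupants first" weighting plays out in the overtaking scenario described above:

```python
# Caricature of an "occupants first" decision rule: occupant harm is
# weighted so heavily that the planner never trades an occupant for
# anyone outside the car. The weights and probabilities are invented.

OCCUPANT_WEIGHT = 1_000_000   # "save the one in the car"
OUTSIDER_WEIGHT = 1

def maneuver_cost(p_occupant_death, p_outsider_death):
    return (OCCUPANT_WEIGHT * p_occupant_death
            + OUTSIDER_WEIGHT * p_outsider_death)

# The scenario above: stay in lane and risk a head-on collision for the
# occupants, or swerve right into the cyclist.
options = {
    "stay in lane (head-on crash)": maneuver_cost(0.5, 0.0),
    "swerve right (hit the cyclist)": maneuver_cost(0.0, 0.9),
}
print(min(options, key=options.get))  # swerve right (hit the cyclist)
```

With weights like that, the cyclist loses every time.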
Apparently, Mercedes has concluded that if the car kills a cyclist or pedestrian, the victim's family will sue them - and that if the car takes an action that "saves" the cyclist but results in the death of the Mercedes driver or other occupants, they will get sued anyway. I suppose they figure that if they're going to get sued either way, they're better off protecting the M-B owners (who can probably afford better lawyers). The only possible bright side is that ultimately, the goal of the developers of self-driving cars is to program these systems never to get into situations where they have to make a "trolley problem" choice in the first place. Is that possible? Or practical? I don't have that answer. There are so many potential variables in a typical driving scenario that I wonder whether it would be possible to account for them all.
The legal questions of regulation and liability are still totally up in the air, both in the U.S. and abroad. Here in the U.S., Congress has only just begun to look at the issues of autonomous cars. Different states are looking at the issues separately, which could lead to a disconnected patchwork of laws nationwide. But in some states, it seems that legislators are willing to go full-throttle with robots in the driver's seat. Just this week, California lawmakers eliminated a requirement that autonomous vehicles have a person in the driver's seat to take over in case of emergency. The new rules also grant 50 companies licenses to test self-driving cars in that state.
Is there a good side to all this? It's hard to say.
Currently, one of the biggest threats to cyclists is probably the distracted driver - a problem that I believe grows worse every year, with every new app or gadget. I'm still convinced that the "smarter" our phones get, the "dumber" people get. Add to that the natural tendency toward self-indulgence and self-centered behavior that the phones seem to exacerbate, and the sense of anonymity, power, and entitlement that seems to infect many drivers anyhow, and you have a recipe that can be deadly for cyclists and pedestrians. Unfortunately, legislators seem almost as reluctant to cross the telecom industry as they are to cross the gun and auto industries - so real and effective bans on phone use while driving are few and far between. The development of autonomous vehicles almost seems to say, "We can't (or won't) put a stop to it, so let's just enable it. If people won't put their phones away, let's find a way that they'll never have to."
Ultimately, I suppose it would be fair to ask the question: if I'm cycling home from work, would I rather the car behind me be driven by a texting teenager, or by a computer? Honestly, I just don't know the answer. On one hand, the noble idea of the autonomous car is that it doesn't get distracted. That sounds great. On the other hand, so far the technology seems to leave a lot to be desired. It would be difficult for me to put my faith in the robots until I get some reassurance that they can actually see me, that they will respond appropriately, and that they aren't predisposed to sacrifice my life for the sake of the car's occupants. I might feel better if our laws favored the more vulnerable road users over the industries' interests. So far, none of that seems truly certain.
I also wonder why the only choice should seem to be between distracted drivers and robot cars. If I actually had a choice in the matter, I think I'd choose a human driver who's actually paying attention. Shouldn't we be able to reasonably expect that drivers not be distracted? I guess that would truly be the stuff of science fiction.