[HN Gopher] Self-driving vehicles against human drivers: Equal s...
       ___________________________________________________________________
        
       Self-driving vehicles against human drivers: Equal safety is far
       from enough
        
       Author : Bologo
       Score  : 83 points
       Date   : 2020-12-30 14:33 UTC (8 hours ago)
        
 (HTM) web link (pubmed.ncbi.nlm.nih.gov)
 (TXT) w3m dump (pubmed.ncbi.nlm.nih.gov)
        
       | godelzilla wrote:
       | Why not safe, free, and extensive public transportation instead
       | of dangerous and unsustainable pipe dreams?
        
       | jedberg wrote:
       | This title is awful (but it was copied from the site). What it
       | should say is "Study finds that most people surveyed didn't trust
       | self driving cars until they were five times safer".
        
         | m463 wrote:
         | More like "people who have never owned or been in a self
         | driving car..." faster horses.
         | 
         | That said, it will happen. I just wonder what will happen as
         | self driving car safety exceeds human drivers. Will people be
         | prohibited/disincentivized to drive?
        
           | rootusrootus wrote:
           | > what will happen as self driving car safety exceeds human
           | drivers
           | 
           | Average human drivers, or are you including the drunks and
           | other bozos who cause the lion's share of fatalities?
        
           | happytoexplain wrote:
           | The comparison between all human drivers and all autonomous
           | vehicles is far more complex than "whichever is statistically
           | safer, as a whole". Belittling people who feel differently
           | than you about it muddies the conversation for no reason.
        
             | m463 wrote:
             | I apologize and cannot edit my comment.
        
       | dooglius wrote:
       | Based on the abstract, it looks like this is an attempt to
       | measure how safe self-driving cars need to be in order for people
       | to prefer using them. It is not any sort of requirement from the
       | NIH.
        
         | happytoexplain wrote:
         | I too was confused. I thought "we" referred to the NIH. It
         | should say "people".
        
       | Jabbles wrote:
       | I wonder what range in safety we tolerate in human drivers? How
       | much worse than the average is a newly-licenced 17 year-old (or
       | whatever age) or an 80 year-old?
        
         | rootusrootus wrote:
         | Well, in terms of overall crashes and injury crashes, you don't
         | get safer than the 80 year old until you're 30 [1]. Though the
         | rate of _fatal_ wrecks is about the same between 16-17 and 80+.
          | I think that may be due in large part to the fact that 80
          | year old humans are far more fragile and likely to die in a
          | wreck that a younger person would walk away from.
         | 
         | [1] https://aaafoundation.org/rates-motor-vehicle-crashes-
         | injuri...
        
           | Jabbles wrote:
           | Very interesting, thank you for the data.
        
       | thedudeabides5 wrote:
       | Maybe people want self-driving cars to be 5 times safer because
       | they don't trust the people saying the cars are X times safer to
       | begin with.
       | 
       | Like you are testing how much they trust machines, and how much
       | they trust the people telling them the machine is X% better.
       | 
        | Given 2020, I think a little skepticism on both/either is
        | reasonable.
        
       | 11thEarlOfMar wrote:
       | If it's a subjective matter tied to perception of risk rather
       | than actual, statistical risk, such perception can be swayed.
       | 
       | The challenge remains that people will be killed in accidents
       | involving autonomous control. And we anticipate that the number
       | of people killed will be fewer, hence 'saving lives'. However,
       | the lives lost in autonomous accidents will be a different set of
       | people than those that would have died in human driven accidents.
       | There will be cases where a court determines that the autonomous
       | system was the cause. Families of those killed will want justice,
       | while those separately saved by autonomous systems may never be
       | heard from in the same case.
       | 
       | I expect that in the end it will come down to a business
       | decision, and that decision will be informed by an actuarial
       | exercise: Will profits and insurance be able to cover the costs
        | of defending and settling such cases? Who knows, maybe the
        | threshold is crossed at 5x safer.
        
         | xenocyon wrote:
         | > However, the lives lost in autonomous accidents will be a
         | different set of people than those that would have died in
         | human driven accidents.
         | 
         | So far it seems that this is very much the case. Autonomous
         | cars do relatively well in highway scenarios whereas they
         | appear to do poorly recognizing bicycles, for instance.
         | Reducing safety to one single metric would be a big mistake.
        
         | Justsignedup wrote:
          | Sensationalism will win out quite a bit. And responsibility.
          | It's just a tough problem.
          | 
          | - People develop Bell's palsy at the same rate with the
          | vaccine as without. But suddenly only those with the vaccine
          | show up in the social media feeds, because nobody used to
          | post "I just got Bell's palsy," but now they do because
          | people are paying attention. The same will happen with AVs.
          | "I just got into an AV accident" will make headlines, while
          | "I just dozed off and hit a kid" will barely circulate.
         | 
         | - People inherently trust humans over technology. Just because.
         | So they will be quick to distrust autonomous vehicles. I
          | already had convos about the fact that yes, Teslas do kill,
          | but on the whole self-driving Teslas kill far less than
          | non-self-driving ones.
         | 
         | - When a human drives, the liability is on the human. When a
         | car self-drives, the liability might be on the manufacturer.
        
           | otabdeveloper4 wrote:
           | > Just because.
           | 
           | Maybe they actually read those EULAs and privacy policies and
           | made an informed choice.
           | 
           | I mean, I wouldn't want Facebook making a self-driving car,
           | and it's not because I doubt their machine learning chops.
        
           | frenchy wrote:
           | > People inherently trust humans over technology
           | 
            | I don't think that's generally true, or at least, the
            | contrary notion is also true sometimes. I'm pretty sure
            | that if someone asked me the product of 13 x 6, they would
            | trust me more if I punched some numbers into a calculator
            | and gave them a result versus if I just did it in my head.
            | I don't know, but I think the likelihood of me mistyping
            | numbers is embarrassingly high, and probably about as
            | likely as a mistake in easy mental math.
           | 
           | It's also closely linked to your third point though.
           | Liability with self-driving cars is difficult. When people
           | talk about self-driving cars, they sort of just hand-wave
           | away the fact that there will be accidents, so as to avoid
           | this difficult problem. This does not instill confidence.
        
           | SkyBelow wrote:
           | >When a human drives, the liability is on the human. When a
           | car self-drives, the liability might be on the manufacturer.
           | 
           | I think this is a point that needs more emphasis, especially
           | on the word 'might'. Without both laws and a history of court
           | cases giving evidence for how those laws are interpreted and
           | enforced, it isn't possible to tell where liability may end
           | up. I wonder if that is part of the reason people are
            | hesitant. Liability is being removed from the driver, but
            | it doesn't seem to have found a new place to settle, so
            | people are viewing it as if liability is just being
            | removed. For initial court cases (and the amount of time and
           | money it takes to fight them), this may not be an unrealistic
           | expectation.
        
             | ghaff wrote:
             | If liability is on anyone, it would seem it has to be the
             | manufacturer. And if there's no liability then the options
             | are basically set up some sort of vaccine fund-like system
             | or just to shrug and say it's between you and the insurance
             | company.
        
           | cj wrote:
           | > People inherently trust humans over technology. Just
           | because.
           | 
           | From the perspective of someone who rides a motorcycle, the
           | #1 thing you need to do to not crash is to anticipate what
           | all vehicles around you might possibly do.
           | 
           | For example, I always avoid riding in another car's blind
           | spot for obvious reasons.
           | 
           | The problem (for motorcyclists) will now be trying to adapt
           | to understand what a Tesla might do and where a Tesla's blind
           | spots might be - and once you add in the idiosyncrasies of
           | other AVs I could see it being really difficult to ride
           | safely around AVs.
           | 
           | It's fairly easy to anticipate actions of another human, and
           | not as easy to anticipate when actions are decided by an
           | algorithm.
           | 
           | FWIW I think the above also applies to cyclists.
           | 
           | (I suppose this becomes a non-issue if the assumption is that
           | AVs will be so superior to human judgement as to never strike
           | another motorcycle or cyclist - 5x safer sounds like a
           | starting point)
        
             | sliken wrote:
              | I actually expect the opposite: motorcyclists preferring
              | to be near Teslas and any other car with sensor-based
              | safety features that are on 24/7.
             | 
             | I'm frequently alerted of a motorcyclist approaching from
             | the rear by it appearing on my Tesla display because it's
              | detected by the cameras and ultrasound. I rarely notice
              | the motorcycle before the car does.
        
             | ggreer wrote:
             | I own a Tesla and I ride a motorcycle. I would _much_
             | prefer to deal with 100% autopiloted Teslas than current
             | human drivers.
             | 
             | Teslas don't have blind spots. They have eight cameras that
             | give 360 degree views around the car. They also have a
             | dozen ultrasonic sensors that can detect obstacles up to 5
             | meters in all directions. The only way to collide with a
             | Tesla on autopilot is by doing something really dumb.
             | 
             | In practice, a Tesla on autopilot tends to drive like a
             | human taking a driving test: accelerating slowly, _always_
             | signaling before turning or lane changing, _always_
             | yielding to pedestrians, always braking or cancelling lane
             | changes if an aggressive driver gets in the way, never
             | honking. If traffic is too dense to lane change to the
             | desired freeway exit, it reroutes rather than cutting into
             | traffic (as pretty much any human would).
        
         | foobarian wrote:
         | > However, the lives lost in autonomous accidents will be a
         | different set of people than those that would have died in
         | human driven accidents.
         | 
         | This is a really good point. I drive really conservatively and
         | like to think I will never ever cause an accident let alone a
          | fatal one. I think if this lever was taken away I would have
          | a hard time accepting automated driving for a significant
          | amount of time.
        
         | throwaway2245 wrote:
         | > There will be cases where a court determines that the
         | autonomous system was the cause.
         | 
         | At the moment, manufacturers almost totally escape blame for
         | fatal accidents [involving human drivers] - it's understood
         | societally and in the legal system that the human driver was
         | the one at fault.
         | 
         | That isn't a totally accurate picture of the responsibility.
         | The manufacturer provided a vehicle that included a risk of
          | fatal accident. (Reducing this wrong and then describing the
          | reduction as 'lives saved' feels uncomfortable to me.)
         | 
         | With an autonomous system, blame for fatalities can no longer
         | be placed on a human driver: and yet, there is still a failure
         | of responsibility (maybe this will be a more accurate placement
         | of blame)
        
           | CuriousPerson23 wrote:
           | Hasn't there been only 1 fatal accident? If not, I can't
            | imagine there have been >5, so it seems unfair and
            | misleading to make bold claims like that. I think the
            | courts will rule, and
           | if the system misunderstood something, the manufacturer will
           | be at fault.
        
         | wffurr wrote:
         | >> The challenge remains that people will be killed in
         | accidents involving autonomous control.
         | 
         | While that's almost certainly true (and depending on one's
         | interpretation of autonomous is _already_ true), some people
         | already believe that zero deaths in transport is possible:
         | https://visionzeronetwork.org/about/what-is-vision-zero/. If
         | it's possible to hold human drivers to that standard, why not
         | autonomous systems as well?
        
           | riversflow wrote:
           | Well that's a silly vision. I have a vision of immortality
            | too. This is a textbook example of perfect being the enemy
            | of good. I don't want to hold any system to the standard
            | of zero fatalities, and further I think this is why "shoot
            | for the moon and even if you miss you'll land amongst the
            | stars" is faulty. People waste an exorbitant amount of
            | time and fossil fuels commuting in passenger vehicles.
            | Anything that significantly changes that balance should be
            | considered. Progress is progress, and letting everybody
            | reclaim the time they used to spend commuting is certainly
            | progress.
        
             | wffurr wrote:
             | If you read about Vision Zero and its methodology, you can
             | see that it indeed celebrates incremental progress and
              | encourages the simplest improvements first.
             | 
             | Before you dismiss something as silly, perhaps you could
             | try understanding it some first.
        
             | bryanlarsen wrote:
             | You're mocking as silly a vision that some places have
             | already achieved.
             | 
             | https://www.theguardian.com/world/2020/mar/16/how-
             | helsinki-a...
        
           | bryanlarsen wrote:
           | That was my original hope for self-driving cars. The easiest
           | way to ensure that you don't kill anybody is to limit your
           | speed to 20mph. At and below that speed a car-pedestrian
           | collision is highly unlikely to result in a dead pedestrian.
           | Also, at 20mph you can stop on a dime. So I imagined a large
           | fleet of robot cars traveling at 20mph and normalizing that
           | speed, forcing human drivers to slow down too.
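            | 
            | As a rough back-of-the-envelope check on that stopping
            | claim (a sketch; the 0.7 dry-pavement friction coefficient
            | is an assumption, and reaction time is ignored):
            | 
            |     # braking distance: d = v^2 / (2 * mu * g)
            |     mu, g = 0.7, 9.81   # friction coefficient, gravity (m/s^2)
            |     v = 20 * 0.44704    # 20 mph in m/s (~8.9 m/s)
            |     d = v ** 2 / (2 * mu * g)
            |     print(round(d, 1))  # ~5.8 m, roughly a car length and a half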
           | 
           | But it turns out that self-driving vendors spend a lot of
           | effort on "driving like a human", which includes driving
           | faster than the speed limit and faster than vision zero would
           | allow on shared streets & roads.
        
             | ghaff wrote:
             | Which could be done today. The vast majority of people
             | obviously don't consider that acceptable in general.
        
         | porknubbins wrote:
         | Somehow the idea of a computer error killing me seems way worse
         | than at least having a chance to save myself, since I'm a very
            | cautious driver (though unlikely to be 5x safer than
            | average). Self-driving cars need to get to airline-level
            | safety, where crashes are a rare thing and most people
            | don't think twice about giving up control to the
            | pilots/autopilot. If that takes expensive Lidar, that's
            | what we should use. I can't imagine ever
         | feeling good about trusting my life to a computer vision
         | algorithm.
        
           | sliken wrote:
           | Problem is, 95% [1] of drivers think they are better than
           | average.
           | 
           | [1] citation needed.
        
             | Ajedi32 wrote:
             | Which could be true, assuming the median skill level of
             | drivers is sufficiently higher than the average.
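              | 
              | A toy sketch with made-up numbers shows how that can
              | happen:
              | 
              |     # one terrible driver drags the mean below the median
              |     skills = [1, 9, 9, 9, 9]
              |     mean = sum(skills) / len(skills)       # 7.4
              |     above = sum(s > mean for s in skills)  # 4 of 5 drivers
              |     print(above / len(skills))             # 0.8 -> 80% beat the mean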
        
               | gifnamething wrote:
               | Mean is an average, not the average. Mode is an average.
               | Median is an average.
        
             | senko wrote:
             | Happy to oblige:
             | https://www.smithlawco.com/blog/2017/december/do-most-
             | driver...
             | 
             | In summary: in a study done in 1980, 93% of Americans
             | thought themselves better than an average driver.
        
               | tjoff wrote:
               | _And they are all correct._
               | 
               | Except for when they haven't slept, or aren't paying
               | attention etc.
               | 
               | That is not a good baseline to compare self-driving cars
               | against. That would be horrific.
        
               | the8472 wrote:
               | The average baseline involves a driver that is not paying
               | attention or hasn't slept X% of the time. You cannot
               | magically wave away all the bad days and pretend only the
               | good humans are on the streets. The bad days happen,
               | people die. The autonomous vehicle only has to do better
               | than that. That's the whole point.
        
               | tjoff wrote:
               | Statistically no. But you need to convince people -
               | _that_ is the point.
               | 
               | We are not rational. We do not respond well to a car
               | running straight into a barrier or stand-still vehicle at
               | 100 km/h without even attempting to brake (such as the
               | failure modes that Tesla has demonstrated) even if it
               | does well most of the time and has better statistics than
               | an average human.
               | 
                | And further, as noted in the thread, a good part of
                | driving is assessing other vehicles, and a vehicle
                | behaving oddly (even if it is objectively better in
                | isolation) is really bad and will increase the risk of
                | collisions.
               | 
                | I think Google have experienced this: people do not
                | respond well to the driving technique of their car
                | since it doesn't behave as a human - not that it does
                | anything wrong.
        
               | the8472 wrote:
               | Individuals may not act rationally but regulators and
               | insurance companies with their birds-eye view will see
               | the hard numbers and hopefully provide incentives aligned
               | with the rational choice.
        
               | tjoff wrote:
               | I hope not. I expect them to produce cars that are much
               | better than the average human before setting them free on
               | the road.
               | 
               | That is the rational choice given the human psyche.
               | 
               | We can barely even convince people that vaccines are
               | good.
        
               | the8472 wrote:
               | Just as with vaccines waiting for better cars means
               | letting more people die in the meantime. That is a grim
               | hope.
        
               | tjoff wrote:
                | If companies rush this (as Tesla and Uber already
                | have!) too much, the backlash will likely set back
                | self-driving
               | unnecessarily.
               | 
               | I believe a more careful approach will get broader
               | adoption and likely save more lives.
        
               | the8472 wrote:
               | I didn't suggest that the technology should be rushed and
               | I agree that a careful approach can save more lives. But
               | what constitutes "careful" matters here. For example if a
               | city chooses to offer robotaxi discounts to people with
               | bad driving records (before some cutoff date to avoid
               | perverse incentives) then even an average taxi fleet
               | could be a net-benefit even though the taxis do not
               | perform better than the general population. And that's
               | just in terms of lives saved, not counting the other
               | benefits of having cheap transportation.
        
               | nelgaard wrote:
               | Which says:
               | 
               | == Obviously, not everyone can be above average. Exactly
               | half of all drivers have to be in the bottom half when it
               | comes to driving skills and safety. ==
               | 
                | Maybe, but the bottom half is not necessarily worse
                | than average.
                | 
                | I do not know how you would calculate "average". But
                | there are people on the road who could pull the
                | average down a lot, so that more than 50 percent are
                | better than average.
        
               | adwn wrote:
               | In colloquial speech, most people don't differentiate
               | between "mean" and "median". My guess is that, in that
               | kind of survey, the participants read or say "average"
               | and implicitly mean "median" - and exactly 50 percent of
               | drivers are better than the median, by definition.
        
           | matt-attack wrote:
           | I feel the opposite. The one sensor that is guaranteed to
           | have sufficient information to drive in all conditions is
           | vision. That's obviously because humans drive exclusively
           | with vision (and a single vantage point to boot - modulo
           | mirrors).
        
             | beat wrote:
             | That's why one place I really want a driving assist is
             | automatically backing out of spaces in parking lots.
             | Visibility is _terrible_ for the driver. You need to be
             | paying close attention in multiple directions at once, you
              | often don't have visibility at all when you need to start
             | moving (like a larger vehicle parked next to you), and both
             | pedestrians and other vehicles can appear out of nowhere,
             | often moving in unexpected directions. It feels very
             | unsafe.
             | 
             | Computer vision could be making those go-stop decisions for
             | you, much more effectively than human drivers.
             | 
             | Heck, imagine a "smart" parking lot that tracks its
             | available spaces and communicates with your car. You enter
             | the parking lot and hand over control, and the car and lot
             | work together to park you safely in the best available
             | space.
        
             | ghaff wrote:
             | >The one sensor that is guaranteed to have sufficient
             | information to drive in all conditions is vision.
             | 
             | Minor nit but humans can't drive--certainly not safely--in
             | _all_ conditions. You can certainly get to a point in fog,
             | blizzards, and even very heavy rain where you really would
             | like to get off the road if possible. (Not always possible
             | of course and in snow particularly, pulling off to the side
              | of a highway isn't a great option.)
        
               | wool_gather wrote:
               | Not a nit at all; the parent comment has the cart
               | completely in front of the horse. Humans use vision
                | (primarily) to drive because _it's the only sense we
               | have_ that's even close to being sufficient.
               | 
               | There are certainly other senses (lidar, ultrasound,
               | radio signals) that robots could avail themselves of that
               | would be helpful even in conditions where vision also
               | worked.
        
             | jschwartzi wrote:
             | Kind of. The difference between human vision and computer
             | vision is that human vision is stereoscopic. We perceive
             | depth in addition to color and shape. And that gives us the
             | ability to perceive the 3-dimensional shape of an object,
             | which lets us anticipate how it might move. A lot of CV
             | algorithms operate on single images from a single camera,
             | which makes it impossible to judge depth. In that case
             | you'd have to use the size of an object as a proxy for its
             | distance and speed, so you'd tend to misjudge how far
             | things are from you and where they're going.
             | 
             | The nice thing about LIDAR is that you can gather that
             | depth/shape information and with sufficient calibration map
             | the shapes in the camera image to the depths in the LIDAR
             | image. You can do the same thing with two cameras so I'm
             | not sure why LIDAR would be preferred here.
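              | 
              | The two-camera depth recovery is just triangulation; a
              | minimal sketch (the focal length, baseline, and
              | disparity numbers here are made up):
              | 
              |     # depth from stereo disparity: Z = f * B / d
              |     f = 1000.0     # focal length in pixels (assumed)
              |     B = 0.5        # baseline between cameras in meters (assumed)
              |     d = 25.0       # pixel shift of the object between images
              |     print(f * B / d)  # 20.0 m to the object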
        
               | sliken wrote:
                | Tesla as an example has 3 forward-looking cameras;
                | additionally, a single moving camera can sense depth,
                | since differences between frames relate to the
                | distance from the camera.
               | 
               | LIDAR has its advantages, like precise 3D positions under
               | ideal conditions. However there are downsides as well.
                | Cost is a big one, but that's becoming less of an issue
               | over time. Another is sensitivity to rain, fog, blowing
               | sand, etc.
               | 
                | A complicating factor is that human drivers will
                | assume cars act like they have human limitations. So
                | higher
               | speeds when humans can see well, and low speeds when
               | humans can't.
               | 
               | Not sure Tesla's current sensors will do it, but seems
               | like camera based systems are likely to be quite
               | competitive with LIDAR. Maybe instead of 3 forward
               | cameras, 6 or 8 so there's overlapping views (for
               | stereoscopic vision), handling failures better, and
               | allowing a narrower field of view at a higher zoom.
               | 
                | More range will be a huge help; that way an autonomous
               | car can slow more gently when uncertain and drive more
               | like a human. After all superhuman reflexes aren't much
               | use if you get rear ended all the time.
        
               | yarcob wrote:
               | You can get 3D data from a single moving camera. The
               | technique is called structure from motion and has been
               | demonstrated to work well more than a decade ago.
               | 
               | The biggest problem with relying on visual data is that
               | you get very noisy data. Poor lighting and reflective or
               | glossy surfaces cause problems (I'm not sure what current
               | state of the art is, it's been a few years since I looked
               | at the research).
               | 
               | As far as I understand the big advantage of LIDAR is that
               | you get nice and clean depth data and it's not so
               | computationally expensive.
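                | 
                | A minimal structure-from-motion sketch with OpenCV
                | (the intrinsics, motion, and scene below are synthetic
                | stand-ins for real feature matches between frames):
                | 
                |     import cv2
                |     import numpy as np
                | 
                |     K = np.array([[800., 0., 320.],   # assumed intrinsics
                |                   [0., 800., 240.],
                |                   [0., 0., 1.]])
                |     # synthetic scene: 3D points ahead of the camera
                |     pts3d = np.random.uniform([-2,-2,4], [2,2,8], (50,3))
                |     R0 = cv2.Rodrigues(np.array([0., .1, 0.]))[0]
                |     t0 = np.array([[.5], [0.], [0.]])  # sideways motion
                | 
                |     def project(P, X):  # pinhole projection, Nx3 -> Nx2
                |         h = P @ np.vstack([X.T, np.ones(len(X))])
                |         return (h[:2] / h[2]).T.astype(np.float32)
                | 
                |     P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
                |     P2 = K @ np.hstack([R0, t0])
                |     pts1, pts2 = project(P1, pts3d), project(P2, pts3d)
                | 
                |     # recover motion and up-to-scale depth from the
                |     # 2D matches alone
                |     E, _ = cv2.findEssentialMat(pts1, pts2, K)
                |     _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
                |     X = cv2.triangulatePoints(P1, K @ np.hstack([R, t]),
                |                               pts1.T, pts2.T)
                |     depth = (X[:3] / X[3])[2]  # known only up to scale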
        
               | tjoff wrote:
               | We don't need two eyes to drive. And we don't need two
               | eyes to gauge depth. Neither does a machine, but adding
               | stereoscopic cameras is not hard.
               | 
                | The stereoscopic effect is very poor at driving
                | distances and doesn't help us that much. Primarily we
                | use cues from the environment to gauge distances. We also
               | have to focus our eyes to the correct distance - that
               | also tells us how far an object is.
               | 
               | Primarily we have had an insane amount of training to
               | understand our world. An understanding that a self-
               | driving car will never achieve unless singularity
               | happens. It might not need that, but it will need other
               | ways to compensate for it.
        
               | vagrantJin wrote:
               | > You can do the same thing with two cameras so I'm not
               | sure why LIDAR would be preferred here.
               | 
               | I almost spilled my beer about to comment that a camera
               | or two are equally if not more powerful than lidar. To me
               | personally, Lidar feels like an incomplete solution to 3D
               | mapping when high res images from a smartphone camera can
               | provide so many more data points from different angles.
               | 
               | My thinking was vehicles should have an idea where other
               | vehicles are without the need for comp viz. Like beacons
               | saying "hey Im here." And we can try to calculate
               | relative direction and distance. The vision bit should
               | ideally come in to validate and confirm other things like
               | road signs etc. Ideally we should add that data to
                | mapping software and the car should know these things
               | without "seeing it".
        
           | mankyd wrote:
           | While I don't disagree, I will note that most people don't
           | think twice about giving up control to bus and taxi drivers.*
           | 
           | I think we trust humans to make a reasonable decision in a
           | trolley-problem scenario (rightly or wrongly). Or rather, we
           | trust the human we're in a vehicle with to value their own
           | life, and thus our own, more than those outside of the
           | vehicle in most scenarios.
           | 
           | I expect there is research to investigate this, though I
           | wouldn't know where to begin looking.
           | 
           | *I've definitely had a few bad drivers, of course.
        
             | nelgaard wrote:
              | Buses are generally much safer because they are bigger and
             | heavier. Except in mountains. But I only encounter that on
             | holidays and on holidays we take a lot more risks.
             | 
             | And I do think twice about taxis.
        
             | xiphias2 wrote:
             | I always sit in the back of the taxi (just like what Waymo
             | does), and that already significantly decreases my chances
              | of dying. And a bus is again easily more than 5x
              | safer than a car.
        
           | thesuitonym wrote:
           | As cautious as you are, you still get distracted, focus on
           | the wrong thing sometimes, and have blind spots. Computers
           | don't.
        
             | lemonspat wrote:
             | Computers might not get distracted, but have many other
             | problems. And I assure you a computer can still focus on
             | the wrong thing and have blind spots.
             | 
             | https://spectrum.ieee.org/cars-that-
             | think/transportation/sen...
        
           | colechristensen wrote:
           | A computer error is going to be scarier by far.
           | 
           | When a human kills someone with a car it is almost always in
           | a way we can empathize. When you're around cars you have
           | pretty good mental models for the humans that drive them and
           | how the car will behave.
           | 
           | When you're around human piloted cars you can look at the
           | driver and get a pretty good idea of intent. You can tell if
           | the driver sees you, you can tell what their mental state is,
           | if they're paying attention, what they intend to do. You can
           | sum up a person with a glance, this is the power of
           | evolution, we're really good at figuring things out about
           | other living things.
           | 
           | Crossing the street in front of a car is a leap of faith, not
           | troubling at all when there's a human there, but a robot?
           | There's no body posture, no gestures, no facial expressions,
           | nothing to go on. There's a computer in control of a powerful
           | heavy machine that you're just expected to trust.
           | 
           | Robot cars don't make human mistakes, they make alien
           | mistakes like running down kids on the sidewalk in broad
            | daylight, things which don't make any sense at all and
            | make people feel like they aren't safe anywhere.
           | 
           | It won't take but a couple of cute kids killed in a
            | surprising manner to shut down the whole autonomous
           | experiment.
        
             | beat wrote:
             | The one time I was hit by a car as a pedestrian was a
             | driver who wasn't paying attention. He was making a
             | perfectly legal left turn at a green light, except for the
             | pedestrians in the way (me and my girlfriend).
             | 
             | The danger with an autonomous vehicle is it not seeing you.
             | The danger with a driver is not noticing you.
        
             | dkobran wrote:
             | > When a human kills someone with a car it is almost always
             | in a way we can empathize
             | 
             | Nonsense. You can empathize with someone texting and
             | killing someone?
             | 
             | This whole post reads like an attempt to appeal to people's
             | emotional attachment to human drivers coupled with
             | fearmongering about robots.
             | 
                | You are placing far too much emphasis on our ability
                | to "read" other drivers' intent and the impact this
                | has on
             | automobile accident fatalities. Many accidents occur
             | without any chance to see the offending driver e.g.
             | accidents at night, someone switching lanes when you are in
             | their blind spot, a drunk driver suddenly doing something
             | erratic, etc. Moreover, this so-called advantage of human
             | drivers is statistically meaningless unless you believe
             | that the number of deaths due to automobile accidents is at
             | an acceptable level and that it cannot be improved with
             | technology, in this case, AV. I certainly don't believe
             | that. In the not too distant future, I believe this
             | position will be laughable. Through adoption of autonomous
             | vehicles, many predict we will drastically cut the number
             | of fatalities. Will there be issues along the road? Most
             | certainly. But as long as the overall number is falling by
             | a significant amount, we simply cannot justify our love
             | affair with humans "being in control". We've proven to be
             | perennially distracted, we have terrible reaction times, we
             | have extremely narrow vision, we panic in situations
             | instead of remaining calm, etc. and yes, these faults do
             | lead to the deaths of children. These are not theoretical
             | deaths like the robot scare tactic examples, these are
             | actual deaths from human drivers.
        
               | ksk wrote:
               | >Through adoption of autonomous vehicles, many predict we
               | will drastically cut the number of fatalities.
               | 
               | Who are these many people, and why should we believe
               | their predictions?
               | 
               | > We've proven to be perennially distracted, we have
               | terrible reaction times, we have extremely narrow vision,
               | we panic in situations instead of remaining calm, etc.
               | and yes, these faults do lead to the deaths of children.
               | 
               | We've also proven that all software has bugs, and
               | developers keep introducing new bugs in every single
               | release. There is no reason to think that self-driving
                | car software will be any different. What's worse is that
               | when software is updated, these bugs will now be pushed
               | out to tens of thousands of cars - instantly.
               | 
                | Bit much to call someone's position nonsense when they're
               | just skeptical of obvious stuff :)
        
               | dkobran wrote:
                | I was referring to the absurdity of empathizing with
                | drivers who kill people while texting, drunk, etc.
                | (hence the
               | quotation). What part of that statement do you agree
               | with?
               | 
               | But I'll go further and double down and say the entire
               | post is nonsense. Why? Because the author's skepticism
               | doesn't extend to the human factor. The position is not
                | an accurate representation of the facts, i.e., what
                | causes accidents (humans) and the known data around
                | AVs today. If AV risk is as obvious as you claim, then
                | why does the enormous amount of data show that AVs are
                | involved in fewer accidents and lead to fewer
                | fatalities than cars operated by humans on a per-mile
                | basis? And how is
               | the negligent human driver not obvious as a source of
               | automobile fatalities? The notion that we are safe
               | because we can read humans is not substantiated by
               | anything. Maybe you believe this number of fatalities is
               | acceptable or the best we can do but I certainly don't.
               | There will be flaws in autonomous vehicles, no doubt. But
               | will there be a net reduction in automobile related
               | fatalities as a result? Like anyone else, I can't predict
               | the future. But to paint a rosy picture about how our
               | ability to read other drivers is somehow safer relative
               | to AVs is nonsense. It just is. The data doesn't support
                | this argument. And separately, if we're talking about
                | what will happen in the future, the notion that humans
                | will ultimately prevail over AVs for safety reasons
                | seems preposterous. We can debate the "when" in terms of
               | AVs but debating the "if" seems pretty out of touch with
               | the way society has progressed with respect to our
               | willingness to depend on technology.
        
               | ksk wrote:
               | >Because the author's skepticism doesn't extend to the
               | human factor.
               | 
               | And your over-enthusiasm for AV doesn't extend to the
               | human factor. We all have our own blinders ;)
               | 
               | >The notion that we are safe because we can read humans
               | is not substantiated by anything.
               | 
               | That is your own misinterpretation. I did not read the
               | comment that way.
               | 
               | >If AV risk is so obvious as you claim then why does the
               | enormous amount of data show that AVs are involved in the
               | less accidents and lead to less fatalities than cars
               | operated by humans on a mile per mile basis?
               | 
                | What you mean when you say AV is actually "AV + Human".
               | We're running controlled experiments, limiting the
               | unknowns, and we're mandating a human be present -
               | because the current AV technology sucks.
               | 
               | > We can debate the "when" in terms of AVs but debating
               | the "if" seems pretty out of touch with the way society
               | has progressed with respect to our willingness to depend
               | on technology.
               | 
               | People used to say that about flying cars 40 years ago.
        
             | mcot2 wrote:
             | We can eventually make AI do any of that better than a
             | human by a long shot.
             | 
             | We tend to overestimate the power of the human brain. There
             | is a lot we don't know yet, but we shouldn't treat it as
             | magic and unsolvable by AI.
        
               | panta wrote:
               | There is no evidence that AI can reach the level of skill
                | and safety of a human driver. I'm not saying that it's
                | not possible, only that there is no reason to be sure
                | of the contrary. IMHO we are extremely far from it.
        
               | mcot2 wrote:
               | And IMHO we are extremely close. There is lots of
               | evidence that AI can be much _better_ than a human
               | driver, although currently on things like well mapped
                | highway driving with clear conditions. What's going on
                | now is just making that general-purpose for all
                | different types of environments.
        
             | tuatoru wrote:
             | > When you're around human piloted cars you can look at the
             | driver and get a pretty good idea of intent. You can tell
             | if the driver sees you ...
             | 
             | This is a key area of driving that has been completely
             | overlooked in AVs so far - giving feedback to other non-car
             | road users.
             | 
             | Not hard on the face of it (excuse the pun).
        
             | asiachick wrote:
              | Honda did an experiment where they added LCDs to the
             | headlights to give the car expressive eyes to communicate
             | to people outside the car.
             | 
              | Also, while it is scary to us and it will take a while,
              | there are already self-driven vehicles we just take for
              | granted, like elevators and driverless trams/trains.
              | Sure, they are much easier to make, but they weren't
              | trusted at first.
        
               | bwat49 wrote:
               | and here I was about to make a joke about adding
               | emoticons to the front of self driving cars
        
             | Chyzwar wrote:
              | Once self-driving is better, what we need to do is
              | create moral rules for the AI.
              | 
              | There is a car controlled by a computer. A pedestrian (a
              | child) abruptly enters the road from behind cover. The
              | Computer knows that with current speed it is impossible
              | to stop. Its other choices are to drive onto the
              | sidewalk, killing an old lady, or to drive into the
              | opposite lane, risking the life of the car owner and the
              | people in another car.
              | 
              | A Human driver can decide on instinct, usually
              | protecting themselves. The Computer needs to have an
              | algorithm that decides who will live and who dies.
        
               | bryanlarsen wrote:
               | > The Computer knows that with current speed it is
               | impossible to stop.
               | 
               | Then the car was going too fast. Full stop. The rest of
               | your scenario is irrelevant.
        
             | beat wrote:
             | When a human kills someone with a car because they're
             | drunk, or texting, I don't have much empathy for them.
             | 
             | I read a statistic long ago - don't know how true it is,
             | but it feels truthy - that half of all traffic fatalities
              | happen between 9pm and 3am on Friday and Saturday nights.
             | The fact that autonomous systems will never be intoxicated,
             | distracted, or emotional makes me feel _much_ safer.
        
               | staunch wrote:
               | That stat seems to be very untruthy. Fatal crashes seem
               | to be distributed much more evenly than I would've
               | guessed.
               | 
               | https://injuryfacts.nsc.org/motor-
               | vehicle/overview/crashes-b...
        
               | beat wrote:
               | Maybe not 50%, but there's certainly a strong bias in
                | that data toward Friday/Saturday nights. Since the data
               | resets at midnight rather than on bar hours, look at the
                | difference in midnight-4am data on Saturday and Sunday
               | mornings, vs the rest of the week.
        
               | ealexhudson wrote:
               | It only makes me feel safer if those systems are
               | substantially safer than humans.
               | 
               | If the systems are broadly as safe as humans _including_
               | a significant set who are drunk / high / distracted, that
               | feels subjectively much less safe even though the
               | statistical number of accidents is the same.
        
               | beat wrote:
               | Oh, I concur. I want it measured against skilled, sober,
               | attentive drivers, not "bad" drivers.
        
               | YeGoblynQueenne wrote:
               | A brick tied to the gas pedal will also never be
                | intoxicated. It takes more than an inability to be
                | intoxicated to make a system that can drive a car
                | safely.
        
             | the8472 wrote:
             | > It won't take but a couple of cute kids killed in a
              | surprising manner to shut down the whole autonomous
             | experiment.
             | 
             | That is sacrificing the counterfactual children that
             | wouldn't have been killed if the bad human driver had been
             | replaced by an average autonomous car.
        
             | dado3212 wrote:
             | > Crossing the street in front of a car is a leap of faith,
             | not troubling at all when there's a human there, but a
             | robot? There's no body posture, no gestures, no facial
             | expressions, nothing to go on. There's a computer in
             | control of a powerful heavy machine that you're just
             | expected to trust.
             | 
             | This is something that's very solvable though. Robot cars
             | should and almost definitely will have a way to communicate
             | to pedestrians. I agree with the general point though
             | around a greater possibility of very out of the norm
             | mistakes.
        
               | wool_gather wrote:
               | There's an opportunity for them to communicate _better_
               | with pedestrians than the average human driver. Drivers
               | tend to assume that their intent to stop or not to stop
                | is obvious and don't bother with a clear signal like
               | flashing their lights or waving visibly.
               | 
               | From the pedestrian's perspective, it can be hard to see
               | the driver at all (small movements of the hand can be
               | invisible in sun glare; direction of gaze likewise), and
               | also hard to tell what they're doing. Just because
               | they're slowing somewhat as they approach doesn't mean
               | they see you or intend to stop.
        
               | jfim wrote:
               | Drive AI (now acquired by Apple I believe) used to have
               | LED matrix displays that communicated that way with other
               | road users. I recall seeing them say things like "waiting
               | for you to cross" or "driving autonomously" with an icon.
        
           | toper-centage wrote:
           | Good point. But it doesn't matter how carefully you drive if
           | the road is full of idiots and intoxicated drivers.
        
         | SoSoRoCoCo wrote:
         | > I expect that in the end it will come down to a business
         | decision, and that decision will be informed by an actuarial
         | exercise: Will profits and insurance be able to cover the costs
          | of defending and settling such cases?
         | 
          | I'm seeing this type of phrasing occur more and more. Once the
         | defendant can be named in a legal action, we'll start seeing
         | SDVs. IMHO, the worry isn't that they will kill, but that no
         | one is to blame.
         | 
          | Although it will change the day-to-day narrative of a
          | pedestrian. E.g., my thought process will change from "this
          | person might not see me" to "that car's AI might not see me"
          | ... or even "Oh, it's a Toyota, they kill more than Hyundai...
         | stand back!" But now I'm just writing SciFi.
        
           | dvfjsdhgfv wrote:
           | > But now I'm just writing SciFi.
           | 
           | I don't think so. Normally whenever I want to cross the
           | street (as a pedestrian, but even more as a cyclist) and a
           | car approaches I (unconsciously) examine its speed, and if
           | it's higher than acceptable I try to make eye contact with
           | the driver to make sure they see me and it's safe for me to
           | go. How do I make contact with the AI of the car? More
           | importantly, how do I get the cue I've been noticed?
        
             | SoSoRoCoCo wrote:
             | > I try to make eye contact
             | 
             | That's a really good point. I forget how often when I'm
             | walking, running, or biking I will try to make eye contact
             | with a car to make sure we're aware of each other.
             | 
             | Now how do I do that with an AI?
             | 
             | More things we need to start thinking about!
        
         | njarboe wrote:
          | I can't think of any product developed since 1970 that can
          | kill people and still stay on the market. The exceptions are
          | medical devices and pharmaceuticals. I sure hope self-driving
          | cars can be an exception, but that will definitely take a
          | federal law limiting the liability of manufacturers. Similar
          | to how small aircraft
         | manufacturers were being pushed to extinction due to very high
         | liability costs until the passage of the General Aviation
         | Revitalization Act in 1994.
        
           | ghaff wrote:
           | Lots of products _can_ (and do) kill people. But drug side
            | effects aside, it's hard to think of modern consumer
            | products that, used and maintained properly, might just go
            | and kill you some day, with people being OK with that.
        
           | drjasonharrison wrote:
           | Boeing 737 Max? Ikea Malm dressers? Many products listed at
           | https://www.cpsc.gov/Recalls
        
             | yadaeno wrote:
             | Add cars and motorcycles to this list.
        
           | yadaeno wrote:
           | Cars?
        
         | tgv wrote:
         | Idk about risk: that is hard to establish. There are so many
         | conditions in which an automatic pilot hasn't been tested. We
         | don't even know the factors involved in estimating the risk:
         | e.g., is it dependent on the human co-pilot? And self-driving
         | cars may change the car usage patterns, exerting a contextual
         | influence on the risk.
         | 
         | Then there's the question of responsibility. Who will be held
         | responsible when the automatic pilot is driving? If it's the
         | human, then a high risk of causing an accident will be
         | unacceptable to many drivers.
        
         | HALtheWise wrote:
         | It strikes me that a useful analogy here is the adoption of
         | automatic elevators in buildings. In some ways, it's amazing
         | that pretty much everyone in industrialized countries is OK
         | with being locked in a windowless box controlled entirely by a
          | computer, hanging over a hundreds-of-feet-deep shaft, and in
          | fact many people were terrified of that when elevator
          | operators were first replaced with computers. Some places
          | even had operators employed to simply stand there and push
          | the buttons to provide confidence that a trained expert was
          | there, even
         | though they didn't actually contribute to safety. Eventually,
         | autonomous elevators got common enough that people will look at
         | you really funny if you're not willing to ride in one, even
         | though they are still responsible for ~20-30 deaths per year.
        
           | ggreer wrote:
           | Can you provide more info about people being terrified of
           | automatic elevators? I searched around and everything I found
           | seems to cite one NPR article from 2015.[1] The interviewee's
           | book is out of print and costs $100[2], so that's where that
           | trail stops. If public sentiment against automatic elevators
           | was as strong as described, it seems like there would be more
           | historical evidence available. It's easier for me to find
           | articles disparaging self-checkout systems than for automated
           | elevators. I realize the change in elevators happened long
           | ago, making articles harder to find, but you'd think at least
           | _one_ of them would have gotten digitized and indexed.
           | 
           | 1. https://www.npr.org/2015/07/31/427990392/remembering-when-
           | dr...
           | 
           | 2. https://www.amazon.com/Ascending-Rooms-Express-Elevators-
           | Pas...
        
             | HALtheWise wrote:
             | I just spent some time digging, but finding original
             | sources from the ~1950s is really hard without a NYTimes
             | subscription.
             | 
             | These sorts of things look like promising primary sources,
             | but I can't access the full text.
             | https://www.nytimes.com/1949/01/12/archives/city-gets-
             | elevat...
             | https://www.nytimes.com/1949/02/03/archives/tenants-want-
             | a-d...
             | https://www.nytimes.com/1928/11/11/archives/elevator-law-
             | cha...
        
           | dekervin wrote:
           | There is some kind of implied machine capability that a
           | layman assigns to a computer. If a task is culturally
           | (through movies, series, ...) thought to be within that
           | implied capability, people will be comfortable (cf.
           | automatic trains, ...).
        
           | RcouF1uZ4gsC wrote:
           | The difference with elevators is that the safety systems
           | are actually in large part independent of the controls.
           | Since the Otis safety elevator, you could go so far as to
           | cut the elevator cable and it would still be OK.
           | 
           | With self-driving cars, you don't have those type of backup
           | safety systems.
        
       | kevin_thibedeau wrote:
       | There will always be a long tail where the machines fail in
       | scenarios a human can handle. We're just going to write off those
       | deaths as an act of nature?
        
       | AndrewKemendo wrote:
       | This is 100% just an artifact of a system going from human
       | control to non-human control. Nobody bats an eye at systems which
       | were never human controlled - or transitioned so long ago that
       | nobody recalls human control.
       | 
       | I've never seen anyone hesitate when getting on a fully
       | automated train system at an airport or an elevator. Even more
       | so with amusement park rides that literally put people in
       | extreme situations.
        
         | stkdump wrote:
         | In amusement parks you get a much smaller selection of the
         | population than on the street or even at an airport. People
         | who don't enjoy thrills just have no reason to visit them at
         | all.
         | 
         | Aside from that, all the systems you mentioned are
         | mechanically constrained far more than a car. Accidents
         | happen when these mechanical constraints physically fail, not
         | when a computer makes a wrong choice because it failed to
         | detect an obstacle or similar.
        
       | gremlinsinc wrote:
       | Costs aside, what about something like mag-lev tracks for cars
       | that can stop and go on a dime, go faster on freeways, and even
       | switch lanes to get around slower traffic? Maybe even do away
       | with speed limits: just go as fast as you 'feel safe' going,
       | with the only limit being the max. In cities you'd have
       | sensors/grids everywhere to detect non-car traffic, and regular
       | cars could even drive over the mag-lev, or it could be a
       | separate track, and you could switch in and out of
       | mag-lev/drive modes. Maybe it parks you until you're ready to
       | take over control (say you're napping on the commute). The
       | alarm goes off, you wake up, stretch, maybe even get out and
       | stand up for a minute, get back in, buckle up, and drive the
       | final block to where you want to park at your job; or if it's a
       | countryside location, up in the mountains, etc., you might
       | drive for longer, then park wherever.
       | 
       | Essentially you could just cover cities and highways out to the
       | nearest gas stations. If the car's running out of
       | gas/electricity, it routes itself to the nearest depot.
       | 
       | Going cross-country and want to stop for lunch? Program the
       | car, and it'll pull into the nearest gas station in Timbuktu
       | and let you figure out where to go from there.
       | 
       | Point: AI self-driving isn't the only way to get autonomous
       | cars. Splitting the effort 50/50 with re-thought
       | infrastructure, sensors, and car-to-car communications could
       | get us a lot closer, faster.
        
       | jhpriestley wrote:
       | This is quite an academic exercise, since a decade of intensive
       | research hasn't brought us close to working self-driving cars,
       | much less 1x-safe self-driving cars, much less 5x-safe, nor is
       | there any clear path to resolving this open research problem.
        
       | c1505 wrote:
       | That might be their current stated preference, but I don't
       | think it will be most people's actual choice. Imagine if
       | self-driving was available on every car right now with the
       | press of a button, and it was as safe or twice as safe as a
       | normal driver. How many people would press that button, start
       | texting, and just continue to pay less and less attention?
       | People already don't pay the attention they should when driving
       | or when using a driver-assistance system.
        
         | franklampard wrote:
         | On the contrary, you can argue that button saves lives by doing
         | a better job than the reckless drivers who aren't going to pay
         | attention in the first place.
        
         | loeg wrote:
         | Yeah, not to mention value of time. If I could hop in the car,
         | at the same level of safety as my own driving, and spend the
         | 1.5 hours to trailheads reading a book or even programming, I'd
         | much rather be doing that than paying attention to the road.
        
       | Causality1 wrote:
       | Is 5x safer a realistic goal? There are limits to how safe a car
       | can be on a road full of human drivers, no matter what sensor
       | suite it has and how fast its reactions are. A vehicle can only
       | respond so quickly to control inputs. Making a computer that's
       | five times as safe as a human might be a thousand times more
       | difficult than making one twice as safe.
        
         | sliken wrote:
         | Depends how you count. Being in 5x fewer accidents might be
         | unreasonably hard. But causing 5x fewer accidents seems
         | reasonable, especially since most humans are that safe.
         | 
         | I don't have the stats, but I believe the worst 20% of drivers
         | cause a large fraction of the accidents. That 20% often
         | includes the uninsured, the unlicensed, the drunk, high,
         | emotionally distressed, and the physically compromised (senile,
         | low blood sugar, tired, etc)
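         | 
         | A rough sketch of that arithmetic, in Python (the 60% share
         | below is an illustrative assumption, not a measured figure):
         | 
         |   # Assume the worst 20% of drivers cause 60% of accidents.
         |   worst_rate = 0.60 / 0.20  # 3.0x the population-average rate
         |   rest_rate = 0.40 / 0.80   # 0.5x for the remaining 80%
         |   av_rate = 1 / 5           # an AV "5x safer than average"
         |   # Prints 2.5: such an AV is still ~2.5x safer than the
         |   # typical driver in the safer 80%.
         |   print(rest_rate / av_rate)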
        
       | darksaints wrote:
       | What so many autonomous car advocates seem to miss is that it is
       | nearly impossible to meaningfully compare relative safety with
       | current self driving cars, because we don't have level 5 autonomy
       | yet.
       | 
       | In order to compare them with current technology, you'd have to
       | be able to answer the question: how safe would human drivers
       | actually be if they didn't have to perform their most difficult
       | tasks? Because that is what current autonomy does.
       | 
       | I'm willing to believe that current tech is capable of being
       | safer than human drivers, simply because they do so many things
       | way better than humans do, like stopping for pedestrians and
       | safely navigating around cyclists. But to compare them _in
       | general_, that remains to be proven. You can't just compare
       | incidents per mile driven, because autonomous vehicles can
       | conveniently opt out of driving whenever the task gets too hard.
        
         | Bedon292 wrote:
         | It definitely tends to be the already-safer driving, like
         | highways, that they do well on. I have a Model 3, and trust
         | it pretty well on highways. However, it does not do turns at
         | all, and doesn't handle other 'city driving' type tasks well.
         | It can now do stop signs and traffic lights, which seem to be
         | working well so far too.
         | 
         | However, not living in an area with many sidewalks, I do not
         | trust it for one second to navigate around pedestrians or
         | bicycles. I don't think it would actually try to go around a
         | bike, but I have never given it the opportunity to either. I
         | take full control back and give them a very wide berth
         | myself.
        
         | jjk166 wrote:
         | > You can't just compare incidents per mile driven, because
         | autonomous vehicles can conveniently opt out of driving
         | whenever the task gets too hard.
         | 
         | But isn't that kind of the point? We use autonomous driving for
         | the tasks where autonomy is objectively better, and we have the
         | human do what the human is still better at. Best of both
         | worlds.
        
       | anoyesnonymous wrote:
       | This should be calibrated to the risk of the top X% of
       | cautious/safe drivers, and exclude reckless, inexperienced, or
       | intoxicated drivers. As a safe driver, you shouldn't have to
       | accept risk calibrated to the "average" (i.e. drunk, reckless)
       | driver.
        
         | tgv wrote:
         | Exactly. Because now we can punish those individual drivers,
         | lock them up, take away their car and license, but are we going
         | to pull the plug on all cars with auto-pilot X because X is
         | causing accidents? Is a small change in the software enough to
         | establish it as a new driver? It's "smoking is good for you"
         | all over again.
        
         | grecy wrote:
         | > _As a safe driver, you shouldn 't have to accept risk
         | calibrated to "average" (i.e. drunk, reckless) driver._
         | 
         | But you already do. Every day you're near a road, there is
         | chance the next vehicle around the bend is drunk or reckless or
         | using their phone.
         | 
         | It doesn't even matter if you are in a vehicle or not - even as
         | a pedestrian you already deal with them every day.
         | 
         | It sucks, but it's reality.
        
         | the8472 wrote:
         | Why should it be? In the end the dead bodies count, and it
         | doesn't matter whether a cautious or an inexperienced driver
         | killed them. Inexperienced drivers are a prerequisite for
         | experienced drivers; there's no way to get rid of them.
         | Excluding them from the statistics is just discounting those
         | deaths as... somehow less important?
         | 
         | If a self-driving vehicle is only 1.5x (instead of 5x) as
         | safe as the average human driver, then you're primarily
         | trading _death by human_ for _spared by machine_, and only
         | secondarily trading _death by human_ for _death by machine_.
        
           | ianhorn wrote:
           | Let's say you hire a chauffeur to drive your kids around. You
           | find out they've been drinking on the job and speeding
           | recklessly. When you confront them, they pull out stats that
           | they've been actually less drunk than average. Do you fire
           | them and find a new chauffeur?
           | 
           | When it's a robot chauffeur, you have to evaluate it like you
           | would a human one.
        
             | the8472 wrote:
             | In this quite hypothetical scenario, if the statistics he
             | cites are correct and also apply to chauffeurs (i.e.
             | chauffeurs are not statistically different from the
             | general population), then firing him and hiring a new one
             | may not improve your situation. It would be better to
             | invest in a breathalyzer or something. So what you're
             | suggesting is an appeal to emotion: firing your driver to
             | ameliorate your dissatisfaction, even if it might result
             | in an even worse driver. So to turn the question around:
             | do you prefer a false sense of safety for your children,
             | or actual safety?
        
               | ianhorn wrote:
               | That's only the case if it's entirely statistical, while
               | the whole point is that there are factors under your
               | control. Hiring someone/something to drive your family
               | around isn't a reversion to the mean. You can make
               | certain efforts (interviewing, not tolerating bad
               | behavior, etc). It's a third person version of the usual
               | debate of 'I'm a safe driver' versus 'I only had like
               | three beers and that was two hours ago' versus 'robo
               | car.' If you bucket the first two together and throw your
               | hands up in the air saying humans are humans oh well,
               | you're pretending you don't have the agency you actually
               | have.
               | 
               | In the third person version, I suppose there's an
               | implicit unstated option that while your particular
               | chauffeur has evidence they are better than average, you
               | have an option to hire someone more responsible. That
               | aspect of agency is central here.
               | 
               | > if the statistics he cites are correct and also apply
               | to chauffeurs
               | 
               | I meant compared to the general population. As in self
               | driving versus general population stats.
        
               | the8472 wrote:
               | Ok, I see what you're going for. But then the question is
               | how much safety is that agency buying you? And how many
               | people even have an option to exercise such agency? You
               | do not have it when it comes to other drivers who may
               | cause accidents or run you (or your children, if you
               | wish) over as pedestrians. You have far less of it for
               | taxi,
               | rideshare or public transport services. And how many
               | parents will drive their children even when they're
               | stressed or haven't slept because the children simply
               | have to go somewhere and they can't afford other options?
               | 
               | In aggregate we can probably buy more safety by having
               | policies that encourage replacement of bad drivers with
               | merely average autonomous vehicles rather than attempting
               | to rely on individual behavior to improve safety.
               | 
               | If you want to still exercise personal options you could
               | choose an autonomous car plus safety driver.
        
               | [deleted]
        
           | degrews wrote:
           | > In the end the dead bodies count and it doesn't matter
           | whether a cautious or inexperienced driver killed them
           | 
           | It matters to the safe drivers. Bad drivers are mostly a
           | danger to themselves. At only "1.5x as safe as average", it's
           | a good deal for the bad drivers, but there are probably a lot
           | of "2x as safe as average" drivers that are getting a bad
           | deal. They are in more danger than before.
           | 
           | Edited for clarity.
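           | 
           | A minimal sketch of that trade-off (all risk levels are
           | normalized to the average driver and purely hypothetical):
           | 
           |   av = 1 / 1.5           # AV "1.5x as safe as average"
           |   drivers = {"risky": 3.0, "safe": 0.5}  # assumed levels
           |   for name, r in drivers.items():
           |       print(name, f"{(av - r) / r:+.0%}")
           |   # risky -78%  -> much better off in the AV
           |   # safe  +33%  -> worse off in the AV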
        
             | beat wrote:
             | I am a safe driver. (My measure: two moving violations in
             | nearly 40 years of driving, the last one 16 years ago. No
             | accidents in 19 years, no injury accidents ever. And I've
             | driven daily for the whole time.)
             | 
             | In the past couple of weeks, I've narrowly avoided hitting
             | pedestrians three different times. Each time, the
             | pedestrian was somewhere other than a valid crosswalk (once
             | was on a highway exit). In each case, I think an autonomous
             | vehicle could have handled it better than me.
        
             | dunefox wrote:
             | > Bad drivers are mostly a danger to themselves.
             | 
             | Source? It seems only logical that the number of accidental
             | deaths goes up with the number of bad drivers on the road -
             | not just because they kill themselves.
        
               | degrews wrote:
               | I just mean that a disproportionate amount of the danger
               | created by bad drivers is to themselves. I don't have a
               | source, but I think this is obvious.
               | 
               | My point is that, even if we lower the total death count,
               | the safest drivers could still end up at greater risk,
               | because a disproportionate amount of the reduction in
               | deaths will go to bad drivers.
        
               | sliken wrote:
               | The most common accidents are a car hitting a non-car.
        
               | dunefox wrote:
               | Source?
        
               | sliken wrote:
               | IIHS says "Nationwide, 53 percent of motor vehicle crash
               | deaths in 2018 occurred in single-vehicle crashes." The
               | other categories being multi-vehicle and property only.
        
               | the8472 wrote:
               | How does that factor in pedestrians? Do they count as
               | deaths in single-vehicle crashes?
        
           | jjj1232 wrote:
           | This person is saying that on an individual level, they are
           | not willing to cede control to an "average" AI when they know
           | (or believe) themselves to be above average.
           | 
           | You're talking about it at a societal level, as if everyone
           | switched over to robot cars at the same time.
        
             | the8472 wrote:
             | We don't need to switch everyone over at the same time. For
             | example we could start with young (more likely to be drunk
             | and inexperienced?) or known-bad (traffic offenses) drivers
             | where perhaps even sub-average autonomous vehicles could
             | make a difference.
        
               | Nimitz14 wrote:
               | The people who will have money to buy new cars which can
               | drive by themselves are not young.
        
               | jjj1232 wrote:
               | You're right, I shouldn't have said "At the same time"
               | but the point still stands: your other comment was
               | talking past the OP, not addressing their point.
               | 
               | You're talking about it as a macro optimization problem
               | while the OP was explaining a rational decision at the
               | level of the individual.
               | 
               | Edited for clarity
        
         | 908B64B197 wrote:
         | I've questioned the lack of driving experience as a risk
         | factor, since the pool of experienced drivers excludes those
         | who died becoming experienced.
         | 
         | Assuming someone has a certain (constant) probability of
         | excluding himself from the driving pool every year, over time
         | the pool's average risk will drop, as the folks most
         | susceptible to excluding themselves will have already done
         | so.
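         | 
         | A toy version of that attrition model (the risk levels are
         | invented; only the shape of the effect matters):
         | 
         |   import random
         |   random.seed(0)
         |   # Per-year probability of leaving the pool, one per driver.
         |   risks = [random.uniform(0.001, 0.05) for _ in range(100000)]
         | 
         |   def mean_survivor_risk(years):
         |       # Weight each driver by the odds of still being around.
         |       w = [(1 - p) ** years for p in risks]
         |       return sum(p * wi for p, wi in zip(risks, w)) / sum(w)
         | 
         |   for t in (0, 10, 20, 30):
         |       print(t, round(mean_survivor_risk(t), 4))
         |   # The pool's mean risk falls over time purely from
         |   # attrition, even though no individual driver improves.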
        
         | dunefox wrote:
         | No, on the contrary: deaths through those drivers can be
         | eliminated. It only makes sense to look at total number of
         | deaths, including by alcohol, drugs, inexperienced drivers,
         | elderly drivers, distracted drivers (smartphone, etc.).
        
         | Isinlor wrote:
         | Nobody will be forcing you to buy a self-driving car for
         | quite a while. But as a safe driver, you should care about
         | eliminating the most unsafe drivers from the roads.
        
           | falcolas wrote:
           | As a self-identified safe driver, no action I am capable of
           | taking will put an unsafe driver in a self-driving car.
           | 
           | I'll even go so far to say that many unsafe drivers can't
           | afford a self-driving car. They're often unsafe because their
           | car is on balding tires, the brakes don't work, and the tail
           | lights are busted.
           | 
           | The rest, well, they simply enjoy driving unsafely and thus
           | have no reason to get into a self-driving car.
        
       | JohnHaugeland wrote:
       | "We arbitrarily chose a number so we could feel like we were
       | making improvements. Nothing justifies 5 over, say, 3, or 10.
       | When cars are in fact 3x safer, all those saved lives won't be
       | saved, because our arbitrary 5 has yet to be reached."
       | 
       | This is meaningless and bad.
        
       | manfredo wrote:
       | Why, though? Even if safety were merely equal, self-driving
       | cars would yield huge productivity gains, as people could work
       | or sleep while commuting and truckers could have 1 or 2
       | self-driving trucks following them cross-country. And
       | transportation for the elderly or disabled who cannot drive
       | themselves.
        
       | segmondy wrote:
       | What's so magical about 5? Why not 4x or 6x? 2x safer would be
       | 500,000 lives saved yearly. We can see that even 1.25x safer is
       | very significant. Just weird seeing that magic number 5x...
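       | 
       | The arithmetic scales simply: a kx-safer fleet avoids a
       | (1 - 1/k) share of deaths. A sketch using the WHO's rough
       | estimate of 1.35M global road deaths per year (the base figure,
       | and how much of the fleet actually switches, are the real
       | unknowns):
       | 
       |   deaths = 1_350_000
       |   for k in (1.25, 2, 5):
       |       print(f"{k}x safer: ~{deaths * (1 - 1/k):,.0f} saved/yr")
       |   # 1.25x: ~270,000   2x: ~675,000   5x: ~1,080,000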
        
         | sliken wrote:
         | Not magical, but you have to pick something. I suspect the
         | goal is to pick a number that most would think is better than
         | an experienced, awake, attentive (not looking at a phone)
         | human at the wheel. So even the safest drivers would be safer
         | on the road as the number of autonomously driven cars
         | increases.
         | 
         | At only 1.25x it might well be worse than you. Keep in mind
         | that the human-driver average includes people who are tired,
         | high, drunk, unlicensed, mentally ill, physically
         | compromised, etc.
         | 
         | While common, someone getting killed because someone is drunk
         | or asleep is much more acceptable than having a computer make a
         | mistake.
         | 
         | If we want society as a whole to accept autonomous cars it's
         | best to show a clear benefit to society, not just better than
         | 51% of drivers.
        
         | Traster wrote:
         | I don't think it's useful to talk about global traffic deaths
         | in this context. Since obviously regulation will change by
         | country, the difficulty of developing self driving will change
         | by country, and road safety varies enormously by country. The
         | US is likely to get self-driving first, but is already way
         | safer than the average country, and the countries where deaths
         | are higher are less likely to be able to afford the roll out of
         | self-driving cars.
         | 
         | In the US there are ~36,000 deaths per year from motor
         | vehicle accidents.
         | 
         | To give some context, America could improve its fatality rate
         | by roughly 5x by bringing itself into line with the safety
         | standards observed in Western Europe, whose fatality rate is
         | already around 2.7 per 100,000 people.
         | 
         | It's also important to remember that self-driving is likely to
         | represent the safest journeys - highway commutes etc.
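         | 
         | A back-of-the-envelope check on that ratio (the US population
         | figure is approximate):
         | 
         |   us_rate = 36_000 / 330e6 * 100_000  # ~10.9 per 100k
         |   # Prints ~4.0: in the ballpark of the 5x claim.
         |   print(us_rate / 2.7)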
        
         | Tempest1981 wrote:
         | Something psychological I guess? Or maybe one group said 2x and
         | another said 10x?
         | 
         | > psychological mechanisms influencing the decision-making
         | regarding acceptable risk
        
       | RcouF1uZ4gsC wrote:
       | One issue that is often overlooked is that humans are pretty
       | robust to unforeseen situations vs AI. Take for example the
       | recent fires in California and the smoky skies. Many cell phone
       | cameras did a horrible job of capturing the scene, because
       | their AI had been trained that the daytime sky is blue.
       | 
       | And with such a failure, all the cars with similar software
       | would be affected at once.
        
       | comeonseriously wrote:
       | I want to know who is responsible when the AI makes the wrong
       | decision and someone gets hurt. I want that to be fleshed out
       | first.
       | 
       | Beyond that, if SDCs are even _just_ as safe as HDCs, I'm good.
        
         | stkdump wrote:
         | > if SDCs are even just as safe as HDCs, I'm good
         | 
         | You might be a safer driver than the average human driver, in
         | which case an SDC increases risk for you personally (and
         | overall, if the less safe drivers keep using non-SDCs). In
         | that regard, we should wait until SDCs are safer than almost
         | all human drivers.
         | 
         | Most drivers believe themselves to be above-average drivers,
         | which is of course impossible. But there might be interesting
         | correlations, for example with social status. Lower social
         | status does disadvantage people in many regards. I am sure
         | that insurance companies have data on whether they have more
         | (fatal) crashes as well.
         | 
         | And it does stand to reason that people with higher social
         | status drive newer and better cars as well, so we could end
         | up with a situation where the better drivers are replaced by
         | computers before the worse drivers.
         | 
         | Interestingly, I think an ethics committee said a few years
         | ago that once SDCs are safer than human drivers, it becomes a
         | moral imperative to outlaw non-SDCs. I am wondering if they
         | will explore the human driving safety 'distribution' before
         | enacting such rules. Waiting for the 5x margin could solve
         | that problem, because then you would probably have SDCs that
         | are safer than almost all human drivers, and it could be an
         | incentive for companies working on the technology to get to
         | that level faster than they would if they started selling
         | them en masse earlier.
        
       | bumby wrote:
       | Algorithm aversion is real and shows that we prefer humans even
       | in the face of statistical evidence that humans are sub-optimal
       | decision makers. [1]
       | 
       | I suspect it's because we inherently dislike the idea of
       | handing control over to a complex black box. Barring
       | sociopaths, we can reasonably expect to interpret how a person
       | thinks. This isn't necessarily the case for algorithms, which
       | leads to trust issues.
       | 
       | [1]
       | https://repository.upenn.edu/cgi/viewcontent.cgi?article=139...
        
       | ogre_codes wrote:
       | The big problem with numbers like this is how do you measure it?
       | 
       | Tesla claims their system is vastly safer than human drivers, but
       | currently it only engages in situations where it's already fairly
       | safe to use. So should that system be 5 times safer than all
       | human driving, or safer than human driving under the conditions
       | in which the Tesla is able to engage?
        
       | kstrauser wrote:
       | I have an older friend whose driving terrifies me, but who lives
       | in an area with effectively zero public transportation or
       | reliable cab service. While I don't want to see this person on
       | the roads, the alternative is literally moving into a senior
       | community (which would probably be the death of this person).
       | 
       | Frankly, if self-driving cars became .75x as safe as the
       | average human driver, it would still be a net safety
       | improvement if it got this person out from behind the wheel.
        
       | Scandiravian wrote:
       | So to speed up the adoption of self-driving cars, we could simply
       | make human-driven cars more prone to accidents :p
        
         | xibalba wrote:
         | This is just the sort of outside-the-box thinking for which I
         | read HN.
        
       | Zigurd wrote:
       | An oft overlooked factor in acceptance of AVs is that the
       | evolution of technology from driver-assist to autonomy will alter
       | perceptions:
       | 
       | First, human drivers using driver assist will become safer
       | "drivers" even though the added safety is properly the result of
       | the technology that is evolving toward AVs. For example, it
       | should become very difficult for a human driver to hit a
       | pedestrian or cyclist. Not impossible. Just exceedingly unlikely
       | to be the fault of the driver.
       | 
       | Secondly, driver assist will habituate drivers and other road
       | users to the performance characteristics of AV technology. The
       | upshot is that AV technology will not be benchmarked against the
       | way drivers and other road users behave and perform today. In
       | some ways the expectations for incrementally better safety will
       | be higher. In other ways, the "flavor" of road risks will change
       | in a way that converges on how AVs perform.
        
       | gamerDude wrote:
       | A lot of comments are focusing on safety via driving better. But
       | with self-driving vehicles, can't we make the layout of the car
       | safer, and thus accidents cause less harm to the people inside?
       | 
       | For example, right now because we need to see the road, I assume
       | there is significantly more danger from the windshield vs. a
       | padded back on both sides of the car with passengers facing each
       | other like in a train car.
       | 
       | It seems likely that we can make self-driving vehicles much
       | safer, even with the same number of collisions, by just changing
       | the layout.
        
       | spaetzleesser wrote:
       | This seems pretty reasonable and also very possible to achieve.
       | It would be insane to allow a technology on the streets that
       | makes as many mistakes as humans make. I certainly wouldn't use
       | self-driving cars if they killed 30,000 people per year the way
       | humans are doing right now. How would you assign responsibility
       | for crashes? Our current system is far from perfect, but at
       | least it's something people understand and know how to
       | navigate. And there are drivers that are better and more
       | cautious than others. So it's not just an illusion of control.
        
       | AtlasBarfed wrote:
       | I think the insurance companies will have a different and much
       | more financially based standard.
       | 
       | More importantly, I doubt the NIH will trump that conglomerate
       | and its influence on the NHTSA.
       | 
       | Also, you could argue that restricting a technology that would
       | result in 20% fewer deaths on the road is the opposite of
       | protecting public health.
       | 
       | To underline that: that is potentially 10,000 people dead or
       | seriously disfigured. PER YEAR.
       | 
       | And self-driving could be, in a targeted/situational manner,
       | FAR safer if it took drunk/drugged/tired drivers out of the
       | equation; those drivers are responsible for around 33% of
       | deaths.
       | 
       | If someone is drunk, a technology 2x as safe as an alert driver
       | will be 10x as safe as the drunk driver.
        
       | sreekotay wrote:
       | Engendering trust and reducing materially regressive
       | liability/litigiousness is a good call - and something that
       | SHOULD be set as a standard by an external body.
       | 
       | IMHO this is typically a good role for government regulation -
       | setting a standard measurement of outcome for the public good,
       | but not dictating HOW that should be achieved.
       | 
       | Now, we're just haggling over the price... __
       | 
       | ( __as not-Churchill infamously didn't say...)
        
         | dash2 wrote:
         | On the face of it, the delay in accepting self-driving cars
         | till they are 5x safer would cost thousands of lives in the
         | interim (while they are only twice as safe, three times as
         | safe, etc.). Is there a reason ordinary people's views should
         | have prescriptive force here? Maybe they're just flat wrong.
        
           | jrockway wrote:
           | Indeed. Why would 1.00000001x safer not be a no-brainer?
        
             | renewiltord wrote:
             | Because human lives are not fungible. If some guy somewhere
             | else was gonna die and you make an intervention where I'm
             | more likely to die then that doesn't work for me. I will
             | oppose it to the end of my being (after all, the
             | alternative is the end of my being).
             | 
             | That is, if you take all the deaths from sleepy drivers,
             | drunk drivers, angry drivers and replace them with random
             | chance then I can no longer increase my chances of survival
             | by not driving at night, not driving on holidays, not
             | driving during commute hours, and avoiding shoals.
             | 
             | Instead now you've taken my ability to increase survival
             | and moved it into the base rate. Nope, I think I'd accept
             | maybe a thousand other arbitrary people dying before I'd
             | accept myself dying.
        
               | nmca wrote:
               | Sure, but in a democratic system with perfect information
               | you should expect to lose the vote on your hypothetical
               | "me vs 1000" trolley problem right? And in the absence of
               | perfect information you'd I guess you'd mount a special
               | interest lobby and hope for the best...
        
               | renewiltord wrote:
               | If it were me vs random one thousand and obviously so,
               | yes. But fortunately, the Wobegon Effect makes it so that
               | anyone can conceive of themselves being me (or even
               | better, of themselves being better than me - considering
               | I'm not particularly a safe driver).
               | 
               | It is precisely because it is democratic then that makes
               | it possible for any individual to exploit human cognitive
               | errors. An authoritarian meritocracy would not fall for
               | those tricks.
        
               | dash2 wrote:
               | It sounds like you agree that 1.0001 is a no-brainer, but
               | that democracies may be tricked into rejecting it.
        
             | sliken wrote:
             | Well that 1.00000001x would include all drivers. Including
             | those that are tired, on their cell phone, drunk, see
             | poorly, senile, high, distraught, unlicensed, pissed off,
             | etc.
             | 
             | Do you really want more cars on the road driving worse than
             | an average awake driver that's not drunk or looking at
             | their cell phone?
        
           | ianhorn wrote:
           | Why don't we mandate that people submit themselves to a
           | mandatory medical experimentation lottery? We'll do so much
           | better if we go through as many people as we do lab rats, and
           | it'll save unimaginable lives in the long run.
           | 
           | Utilitarianism via taking current lives to save future lives
           | is the wrong perspective here.
        
             | dash2 wrote:
             | There may be good reasons for the approach the article
             | suggests, but this is not one of them. Nobody takes any
             | lives, and there is no question here about experimentation.
             | This is not a trolley experiment. It is a choice of two
             | regulatory regimes. Under both of them, some people will
             | die. If we choose the regime "ban self-driving cars until
             | they are five times safer", then more people will die.
        
           | msandford wrote:
       | Arguing that people should do what you want, irrespective of
       | what they themselves want, leads to all kinds of pain, on all
       | sides.
           | 
           | "Why are people voting against their own self-interest?" is
           | an analogous phrase. It seems awfully condescending to me.
           | 
           | Nobody's bound to your perspective of what's rational. Better
           | to just accept that this is the kind of hurdle that self-
           | driving will have to jump over and work on getting there
           | ASAP.
           | 
           | Elon realized that the best way to get people to buy electric
           | cars was to make electric cars that are better than gas cars,
           | not to tell people they're wrong and stupid for not wanting
           | to buy some inferior electric car. Once self-driving cars are
           | obviously better than all but the best race drivers, people
           | will accept them as a matter of course.
        
             | dash2 wrote:
             | I didn't say people should do what I want. I said that a
             | random focus group's opinion does not necessarily override
             | objective reasoning about what will save lives. Would you
             | use this approach to decide whether the 737 should fly
             | again, or what is the appropriate price of carbon, or how
             | strictly to restrict activities during the Covid pandemic?
        
       | davidmurdoch wrote:
       | I rented a 2019 Mercedes last week and drove it for over 1200
       | miles, most of which were driven with the car's driver-assist
       | technologies enabled.
       | 
       | My guess is that because this car drives so "carefully", such as
       | automatically following at a safe distance (leaving maybe a 3
       | second gap between the car in front of it), human drivers will
       | end up causing many more accidents. There must have been more
       | than 50 drivers (with many annoyed stares into my window as they
       | passed) that made unnecessary lane changes to go around me just
       | to then closely follow the car in front of me.
       | 
       | This large gap may make it seem like the car is going slower than
       | it is, as so many drivers tried to overtake me but failed as
       | slower traffic in the other lanes blocked them.
       | 
       | Human drivers may just become worse over time as more law-abiding
       | autonomous vehicles hit the road. "5x" might not be as much of an
       | improvement in the future.
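       | 
       | For scale, that 3-second gap is longer than it looks (a plain
       | unit conversion, assuming 70 mph):
       | 
       |   speed_ms = 70 * 0.44704  # mph -> m/s, ~31.3
       |   print(3 * speed_ms)      # ~94 m, roughly 19 car lengths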
        
         | stronglikedan wrote:
         | I like the adaptive cruise control, because it drives more
         | carefully than I do. I have the same experience as you when I
         | use it, regarding other drivers, but then I realize they're
         | going to drive that way whether I'm using it or not.
         | Therefore, I'm of the opinion that human drivers other than
         | myself will cause the same number of accidents, but I may
         | cause fewer while using it, so in the end there will be fewer
         | accidents caused by human drivers as more people use adaptive
         | cruise.
        
         | [deleted]
        
         | renewiltord wrote:
         | I hope people aren't driving with cruise control in the left
         | lane. I think I'd probably pass on the right to get clear of
         | them.
        
           | davidmurdoch wrote:
           | This is _much_ more than just your classic speed-only
           | "cruise control".
           | 
           | You certainly couldn't drive with classic cruise control in a
           | center or right hand lane (USA) for any extended period of
           | time, at least not in normal highway-speed traffic where cars
           | are merging in and out every few miles.
        
         | opportune wrote:
         | On the contrary, I think as AV and other semi-autonomous
         | driving tech becomes more frequent on the road, people will be
         | more easily able to recognize it and won't behave irrationally
         | as you mentioned.
        
           | davidmurdoch wrote:
           | I don't share the same optimism. Many people already stare at
           | their phones while driving on the highway (I especially enjoy
           | jolting them back to attention with a friendly honk), I'm not
           | so confident they'll pick up on the subtleties of autonomous
           | driving.
        
         | sliken wrote:
         | My Tesla has a user settable distance, which I do change based
         | on conditions to avoid becoming a hazard as people constantly
         | try to fill the gap.
        
           | davidmurdoch wrote:
           | Interesting that a feature intending to increase safety ends
           | up being a hazard.
        
             | sliken wrote:
              | It would be cool if the car defaulted to my preference
              | (which is a large gap) but adjusted it downward as more
              | people fill the gap.
        
       | Bostonian wrote:
       | I think this is related to the "illusion of control". People
       | feel safer when they are driving rather than a machine, even
       | when they are not safer. I hope government regulators do not
       | impose 5x safety requirements on self-driving cars.
        
         | mlthoughts2018 wrote:
         | If they are setting an objective measurement, how is that an
         | illusion of control? In fact it seems like exactly the opposite
         | - they are putting hard numbers on the level of risk they
         | consider tolerable. They are making that available to everyone
         | so they can debate and dispute it.
         | 
         | If anything, this is _removing_ the illusion of control. The
         | illusion of control would be to say you would _never_ trust
         | self-driving cars. Saying you will trust them at a level of 5x
         | measurable safety criteria above human drivers is totally
         | different.
         | 
         | Now we can make actuarial arguments about whether it should be
         | 5x vs 2.6x vs 0.9x and debate how to measure the safety
         | criteria - that's a completely different world from one where
         | people "feel like" human control of the car is safer.
        
           | dash2 wrote:
           | For sure it is good to seek a measurable criterion. The
           | question is whether laboratory subjects' views on the right
           | level should have normative force. An alternative take is:
           | these are just not-very-informed people, and unless they can
           | give reasons for their views, we shouldn't take them
           | seriously as inputs into the policy-making process.
        
         | [deleted]
        
         | VHRanger wrote:
         | It's inertia, rather.
         | 
         | People can drive tomorrow because people can drive today.
        
         | macintux wrote:
         | I think we also want someone to blame for an accident, and it's
         | not at all obvious who to blame when a self-driving vehicle is
         | at fault.
        
           | anonuser123456 wrote:
           | It may be easier to find fault in an autonomous vehicle.
           | Assuming it has a black box that records sensor data, you can
           | replay the algorithm and see what went wrong.
        
           | ghaff wrote:
           | Assuming the system is properly maintained and used, if
           | anyone's responsible it has to be the manufacturer.
           | Certainly the _passenger_ isn't, any more than if an Uber
           | gets in an accident today.
           | 
           | And, with the possible exception of drug side effects (and
           | even there, there are lawsuits), we don't really see
           | consumer-facing products that, even if used as directed,
           | kill a fair number of people and we just go oops. Let's say
           | autonomous vehicles kill 3,000/year in the US, i.e. 10% of
           | the current rate. (In reality, human-driven cars will take
           | a _long_ time to be phased out even when self-driving is
           | available, but go with the thought experiment.) Can you
           | imagine any other product we accept killing thousands of
           | people a year, and we're fine with that?
           | 
           | ADDED: As someone else noted, you could argue that tobacco
           | etc. fall into that category, but we're mostly not OK with
           | that, and it's reasonably thought of as another category.
           | (And pretty much no one is smoking because they think it's
           | good for them.)
        
             | the8472 wrote:
             | > Can you imagine any other product we accept killing
             | thousands of people a year and we're fine with that?
             | 
             | Unhealthy foods?
        
               | ghaff wrote:
               | Just about any food is potentially unhealthy if not
               | consumed in moderation. A bag of potato chips and a Coke
               | now and then isn't going to kill anyone. But a couple
               | bags and half a dozen cans a day sure isn't good for you.
               | And a porterhouse steak every day probably isn't that
               | great for you either.
        
               | the8472 wrote:
               | You asked for accepted products that kill people, not for
               | products that kill unconditionally. Foods are
               | conditionally unsafe (if consumed in excess) just like
               | cars are conditionally unsafe (if not operated
               | carefully). Deaths by cardiovascular diseases (partially
               | caused by inappropriate diet) exceed vehicular deaths.
               | And yet they're accepted.
        
               | ghaff wrote:
               | There is no shortage of products that can injure or kill
               | you if you operate them unsafely including cars. But you
               | won't "operate" an autonomous vehicle at least while it's
               | autonomous. An autonomous vehicle causing an accident due
               | to a software mistake is the equivalent of a regular
               | automobile suddenly losing steering control because of a
               | design defect on a highway--and the latter would
               | absolutely be a liability issue for the car maker.
        
               | the8472 wrote:
               | Right, I forgot that this was an argument about
               | responsibility. In the case of food I guess there's some
               | shared responsibility. The customers of course have a lot
               | of choice here, but the manufacturer still optimizes for
               | tastiness (increasing consumption) without necessarily
               | optimizing for healthiness. That could also be considered
               | a design defect.
               | 
               | Perhaps for an owned autonomous vehicle the equivalent
               | shared responsibility would be a user-selectable
               | conservative ("comfort") vs. aggressive ("sporty")
               | driving style. Or the option to drive yourself and only
               | let the software intervene if it thinks what you're doing
               | is unsafe.
               | 
               | So, back to the question
               | 
               | > We don't really see consumer-facing products that, even
               | if used as directed, kill a fair number of people and we
               | just go oops.
               | 
                | The only very nebulous other case that comes to mind is
               | unsafe computer systems in general. When a hospital or
               | critical infrastructure gets hacked then this is treated
               | almost like an unavoidable natural disaster rather than
               | the responsibility of the operator or manufacturer.
        
           | philipov wrote:
           | If corporations are people, you should be able to bring
           | criminal murder and manslaughter charges against them, with
           | the top-level executives acting as proxies to serve the jail
           | sentence.
        
           | spaetzleesser wrote:
           | You may have to sue the manufacturer and prove that their
           | system is at fault. Which is pretty much impossible
           | considering the legal resources these big corporations have
           | versus the little guy. This would end up like tobacco or junk
           | food, where companies were (and still are) able to deflect
           | any kind of responsibility.
        
         | Analemma_ wrote:
         | The illusion of control is a thing, but actual control is a
         | thing as well. One possible reason to avoid self-driving cars
         | is that there actually are safe and unsafe drivers, and fatal
         | accidents in self-driving cars will presumably be a much
         | flatter distribution among those drivers than the one we have
         | now. Which means that even if they're safer overall, they could
         | still be less safe if you're a good driver.
        
       | brighton36 wrote:
       | Doesn't this 5x requirement hurt more people than (say) 1.00001x?
       | What am I missing here...
        
         | toolz wrote:
         | I think this high level of certainty is basically just the
         | government's way of acknowledging they are terrible at
         | gathering/defining useful metrics, and so with a wide margin
         | there's very little room for error on the politicians' part. I'm
         | unsure if this is overly cynical, but I don't expect
         | politicians today became career politicians by worrying about
         | safety more than protecting their political status. Further, I
         | suspect media would look for any definition possible to blame
         | politicians for deaths so politicians feel it necessary to be
         | blameless before allowing interesting, progressive ideas to
         | materialize.
        
         | zebrafish wrote:
         | I would say that we tend to reduce human flourishing to
         | exclusively being alive. I think the 5x multiplier maybe covers
         | things like loss of liability in an accident, a sense of
         | ownership of the vehicle, loss of privacy or obscurity,
         | regulatory or operational infrastructure costs associated with
         | a switch to self-driving, freedom of choice, etc. All of these
         | have some ultimate impact on human flourishing beyond just a
         | binary dead or alive definition. My opinion is, if these aren't
         | included in the 5x, they should be.
        
         | notatoad wrote:
         | This is not the government saying "we the government
         | require...". It's the result of a study of what people
         | believe. People's risk tolerance is almost never rooted in a
         | rational calculation. Risk tolerance is based on emotion, and
         | self-driving cars currently trigger an emotional response.
         | 
         | As soon as self-driving cars become a regular part of people's
         | lives and not an exciting new thing, the calculation will shift
         | to a much more rational one
        
           | Slartie wrote:
           | This calculation is actually very rational. What you seem to
           | ignore is that, with conventional cars, there is a relatively
           | small amount of "known unknown" risks. There are of course
           | significant risks, but almost all of them are known not only
           | in kind, but also in quantity. Drunken drivers, dumb people,
           | broken brakes, whatever. We have several decades of data
           | regarding these risks. The amount of "unknown unknowns" can
           | also be assumed to be relatively low, given that the concept
           | of humans driving cars has quite a history now and largely
           | stayed the same for a good number of decades.
           | 
           | With autonomous cars, even once you have a few years of
           | safety data from a large enough number of cars to be able to
           | make the call of them being 5x less dangerous than human-
           | driven alternatives in that data, you will still end up
           | having much more "unknown unknowns" (of which I can't tell
           | you any, because they are by design unknown) in addition to
           | also having much more "known unknowns" like the possibility
           | for large-scale software bugs potentially causing thousands
           | of casualties at the same time. These risks will only go
           | down slowly with time; there's practically no way to
           | fast-track getting them down. Hence you have to incorporate
           | a large enough risk buffer into your assumptions to
           | rationalize even starting to use that fancy new tech, and
           | the only place this risk buffer can come from is a much
           | bigger difference in the "known knowns" department of
           | risks.
        
             | the8472 wrote:
             | Those unknowns already are being elucidated by experimental
             | fleets. Self-driving cars won't be deployed en masse before
             | the vendors can already demonstrate solid statistics worth
             | hundreds of millions of passenger-miles, which will be
             | sufficient to get the fatality rate.
        
               | Slartie wrote:
               | How much does that tell me about potential software
               | failure modes that don't kick in until a significant
               | scale (speaking of double-digit percentages of all
               | traffic, these test fleets are not even close to that)
               | has been reached? Or about weird, but potentially fatal
               | side effects of incorporating rules put up by regulators
               | into the software that cannot be tested with today's
               | alpha testing fleets because these rules might not even
               | exist yet? Or about how good all these different AI
               | vehicles of different vendors in very different software
               | and hardware revision states interact with each other
               | (think of situations like HFT trading algorithms that run
               | each other into a doomsday spiral, just with vehicles at
               | an intersection twitching around quickly in weird ways,
               | trying to interpret each others actions)? Or about the
               | hackability of future robotic cars (think for example of
               | those slightly modified fake traffic signs)?
               | 
               | Nothing. That's why regardless of how impressively big
               | these test fleets are, there will be a lot more of these
               | unknowns.
        
               | the8472 wrote:
               | Some of them seem like tail risks to me that are unlikely
               | to dominate fatality statistics even if they were to
               | occur and will be quickly patched or recalled if needed.
               | Many of these hypothetical concerns could also affect
               | existing driver assistance systems and aren't unique to
               | autonomous vehicles. Hacking can also happen with human-
               | operated vehicles. Interaction between multiple self-
               | driving ones can also be tested with experimental fleets
               | by concentrated local deployments.
        
         | Tuna-Fish wrote:
         | You are correct.
         | 
         | However, how many people get hurt is not the only thing that
         | needs to be considered. It's very likely that even if a
         | self-driving car were exactly as safe as a human driver, the
         | people who die in accidents caused by the car would be
         | different from those who would die in accidents caused by
         | human drivers, and
         | so you'd end up with situations where individual next-of-kin
         | could make entirely legitimate claims after accidents that
         | their loved ones would be alive if not for the hellspawn car.
         | 
         | Trying to convince juries that it's alright because for every
         | person who dies in the cars, two other people who would have
         | otherwise died got to live would probably be tough. Especially
         | as the accidents that self-driving cars are most apt to
         | prevent are ones that could be considered at least partially
         | the direct result of bad choices by the driver (DUI,
         | distracted driving, falling asleep at the wheel).
         | 
         | Once the data gets good enough that you don't need to do
         | statistics on it[0], it becomes a lot easier to sell the idea
         | to the public.
         | 
         | [0]: Relevant xkcd: https://xkcd.com/2400/
        
         | opportune wrote:
         | 1.0001x is definitely not acceptable because dangerous drivers
         | bring the total-human-safety metrics down. The "average"
         | (median, or even maybe 25th-percentile) driver is probably
         | much less likely to be in an accident (or a fatal accident)
         | than the drivers who drive most dangerously, e.g. those who
         | frequently text while driving or drive while intoxicated. So
         | for most drivers, 1.0001x the average human rate would
         | actually be worse than driving themselves, although they may
         | find the risk acceptable.
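         | 
         | A toy illustration (a sketch, not data: it assumes per-driver
         | crash rates follow a lognormal distribution with a made-up
         | spread):
         | 
         |   import math
         | 
         |   # Assumed lognormal parameters: mu = 0 puts the median
         |   # driver at rate 1 (arbitrary units); sigma = 1.0 is an
         |   # illustrative spread.
         |   mu, sigma = 0.0, 1.0
         | 
         |   median = math.exp(mu)               # typical driver: 1.0
         |   mean = math.exp(mu + sigma**2 / 2)  # fleet average: ~1.65
         | 
         |   print(median, mean)
         | 
         | The average is dragged up by the riskiest drivers, so a car
         | that merely matches "1x the average rate" is still worse than
         | the median driver.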
        
         | nextaccountic wrote:
         | This is a psychological experiment. People aren't rational.
        
       | rstarast wrote:
       | How wide do we think the range in human drivers' safety is?
       | Like most drivers, I'm falsely convinced I'm a safer driver
       | than most, but I still expect quite a large range (say a factor
       | of 10 between the 10th and 90th percentiles?). It seems
       | reasonable to expect self-driving cars to improve safety over
       | human driving for the large majority of drivers, not just half
       | of them.
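       | 
       | Taking the guessed factor of 10 at face value (a sketch
       | assuming lognormally distributed per-driver risk; nothing here
       | is measured):
       | 
       |   from math import log
       |   from statistics import NormalDist
       | 
       |   # Solve for sigma from p90/p10 = 10 in a lognormal model.
       |   z90 = NormalDist().inv_cdf(0.90)  # ~1.28
       |   sigma = log(10) / (2 * z90)       # ~0.90
       | 
       |   # Share of drivers whose risk is below the fleet-average
       |   # rate exp(mu + sigma^2/2); with mu = 0 this is Phi(sigma/2).
       |   print(NormalDist().cdf(sigma / 2))  # ~0.67
       | 
       | Under those assumptions, a self-driving car that exactly
       | matched the average human rate would still be riskier than
       | about two-thirds of drivers, which argues for a bar above
       | "equal to average."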
        
         | [deleted]
        
         | dougmwne wrote:
         | But how false is that impression that we are safer than
         | average? I don't drive drunk, drugged, tired or distracted. I
         | avoid driving in bad weather. I make sure I have good tires and
         | brakes. I don't intentionally speed. I bet most accidents are
         | caused by the above. I'm not interested in a self-driving car
         | that drives like it's checking its cellphone after 3 beers.
        
           | rootusrootus wrote:
           | Exactly this. Some people think that Tesla's autopilot is
           | just great, better than a human driver much of the time. As a
           | Tesla owner, I am flabbergasted by that. At best, AP drives
           | like I do. That is, perfectly straight and between the lines
           | down a straight road. With any curves, and sometimes just on
           | straight roads, it drives like a high-functioning alcoholic.
           | I can't imagine how some people normally drive if they
           | think that qualifies as 'good driving.' Some of us are very
           | attentive drivers -- I never look at my cell phone, I never
           | drink and drive, I don't drive when I'm tired, I avoid
           | driving in inclement weather or at night unless strictly
           | necessary, I am a very defensive driver. I don't get tickets,
           | I don't get in wrecks, and this is by design -- I take great
           | pains to reduce my exposure to these risks.
           | 
           | Personally I think the old joke about how 90% of drivers
           | think they're better than average is both true and also
           | nothing more than a joke. We see a lot of perfectly good
           | drivers on the
           | road, but we don't notice them ... because they're perfectly
           | good. There's only a few lanes on any given road, though, so
           | if 10% of drivers in near proximity are crappy drivers then
           | it practically shuts down the road. We notice that, and
           | assume that most drivers are crap. Wrong.
        
             | sliken wrote:
             | I have a Tesla, and I agree that it drives somewhat
             | poorly compared to a human ... when there are no
             | surprises. It handles lanes and turns moderately well,
             | but not great.
             | 
             | However, it frequently notices things before I do: lane-
             | splitting motorcycles approaching from the rear, for
             | example, or a car in front of me slowing down without
             | using the brake lights.
             | 
             | It also does quite well when a car brakes in front of me,
             | especially if it's a surprising slowdown, like on an onramp
             | where I'm looking over my shoulder to merge.
             | 
             | So while I've not had an accident of any kind in over 25
             | years, I do appreciate the car noticing before I do.
             | 
             | So while I don't let the Tesla drive autonomously, I do
             | feel like I'm a much safer driver with the active
             | assistance from the car, and that the Tesla (even with
             | the same sensors) will continue to improve. I'm not sure
             | they will hit full autonomy on the current hardware; they
             | might need another revision (with more compute and better
             | sensors) before they drive better than most humans.
        
               | rootusrootus wrote:
               | I don't disagree that at times AP has been helpful to me.
               | The sensors do pick up on things, and if you are actively
               | paying attention and ready to take over at a moment's
               | notice, it is probably a net positive. Though on
               | average, for me, things like the forward collision
               | warning tend to be more nuisance than help. It's
               | startling, and of the half dozen times it has activated
               | for me, only once did I genuinely need to notice that
               | traffic ahead had suddenly stopped; the rest were
               | things like right-turning drivers that are well out of
               | the way but that the car panics about. Even on 'late'
               | mode.
               | 
               | The technology will certainly improve, however. Probably
               | going to be quite a while, if ever, before I let it do
               | _all_ the driving, though :). At least partly because I
               | enjoy driving.
        
         | the8472 wrote:
         | > It seems reasonable for self-driving cars to be expected to
         | improve safety over human driving for the large majority of
         | drivers, not just half of them.
         | 
         | Depends, do you want to save lives? Then self-driving cars
         | only need to be a little safer than the drivers they replace.
         | Which means that replacing infrequent drivers with little
         | experience by robotaxis of average reliability could be a net
         | win in saved lives. Delaying their deployment until technology
         | arrives that beats the most conservative drivers just means
         | accepting a higher death toll.
        
           | stkdump wrote:
           | > replacing infrequent drivers with little experience by
           | > robotaxis of average reliability could be a net win in
           | > saved lives.
           | 
           | I don't know if frequent drivers are inherently safer drivers
           | than infrequent drivers. There might be the negative effect
           | of reduced attention due to more 'routine'.
           | 
           | But I seriously doubt that frequent drivers drive so much
           | safer that they negate the effect of being exposed to the
           | risk so much more. Is a person that drives 10x more than the
           | average driver more than 10x safer? Why start by replacing
           | the cars that don't get on the street a lot? And how do you
           | organize
           | deploying SDCs to infrequent drivers first? Unless those
           | people don't own the cars anymore, but rent them, in which
           | case I agree. That would increase the utilization of these
           | cars.
        
             | the8472 wrote:
             | > Unless those people don't own the cars anymore, but rent
             | them
             | 
             | Yeah, that was the idea (hence robotaxis, not owned ones).
             | It seems feasible especially in urban areas, where car
             | ownership is not essential, so the remaining uses could
             | be covered by rented autonomous ones.
        
       | rozab wrote:
       | Would this baseline include all the accidents from distracted
       | drivers, drunk drivers, drugged drivers, etc.? Or is it
       | referring to an average human driver who isn't intentionally
       | breaking the law?
       | 
       | If the baseline includes all these sorts of human error, I see
       | no issue with holding robots to a higher standard. Imagine if
       | we rolled out robot policemen who executed black people for no
       | reason at merely the same rate as human officers do.
        
         | nelsonenzo wrote:
         | Sadly, the news around facial recognition and AI seems to imply
         | the government has been happy to roll out exactly that.
         | 
         | I guess when the government contractors can profit off self-
         | driving murders we will be good to go. /s
        
         | maxerickson wrote:
         | Without meaning to comment on how possible it would be to carry
         | out on a policy level, replacing the worst human drivers with
         | robot drivers that match average human drivers should be an
         | improvement for everyone.
         | 
         | It also potentially opens up policy options, or at least makes
         | them easier to choose.
        
           | lftl wrote:
           | Interesting idea. I think there may be feasible political
           | routes to accomplishing that. Tighten up the points system
           | that basically every state uses for deciding when to suspend
           | your license, and simply force those who would normally lose
           | their license into robot driving.
        
           | ksk wrote:
           | What does "worst" mean? Did they get into an accident once or
           | twice, but drive just fine on otherwise? Do they drive bad
           | every single day? etc etc.
        
         | aaaxyz wrote:
         | >Imagine if we rolled out robot policemen who only executed
         | black people for no reason at the same rate as humans do.
         | 
         | Human cops still have to do the killing for now, but that's
         | called predictive policing
        
         | superkuh wrote:
         | Behaving as a human would is often more important than
         | staying strictly in line with absolute and relative
         | positioning on a road.
         | 
         | Consider the semi-permanent snow cover on many roads in a
         | third of the USA, which lasts weeks if not months. Humans
         | driving on these snow-covered roads form emergent lanes that
         | have little to do with absolute positioning, or even with the
         | relative position of the curb. They form lanes based on what
         | other humans do.
         | 
         | Self-driving cars that depend on knowing absolutely and
         | relatively where they are simply don't and won't function
         | there. For that we need self-driving cars that can behave as
         | a human would. And that is a _long_ way off.
         | 
         | No autonomous car has shown it can handle these common
         | situations. Until then, self-driving cars should not be
         | approved nationally, and should probably be restricted to the
         | arid and warm states that do not have winter.
        
           | rootusrootus wrote:
           | I think this is a good point. The best lesson my dad ever
           | gave me back when I was learning how to drive was to 'be
           | predictable.' People don't get in wrecks when everyone
           | behaves as expected. And the rules of the road are largely
           | aimed at guiding that predictability. But in the end,
           | regardless of the written rules, humans behave as humans and
           | a robot driver should behave like other drivers. And it may
           | change based on locality.
        
           | cmrdporcupine wrote:
           | Yes, exactly, this is always the same example I give (in my
           | case the 401 here in Ontario, Canada) -- blizzard in the
           | middle of February, lane markings covered, highly
           | unpredictable road surface, spontaneous temporary lanes, cars
           | moving at a crawl, snow plows coming through that you have
           | to move over for, and can't pass, cars or trucks jack-knifed
           | or half in the ditch. This kind of thing happens to varying
           | degrees at least once a year, and I honestly don't think that
           | these scenarios are actually properly in the imagination of
           | the primarily-California-based engineers who work on self-
           | driving.
           | 
           | For context, the greater Toronto region is 6 million people,
           | and the Great Lakes region from here over to Chicago is
           | multiples of that. Winter is 4-6 months. This is not an
           | insignificant
           | edge case for a small population, and if self-driving can't
           | handle it, no thanks from this driver.
        
             | jjk166 wrote:
             | Why not just drive manually during the blizzard?
        
             | renewiltord wrote:
             | Yeah, you will be last in line. An insignificant market
             | with high entry costs. You won't have to decline, you won't
             | get the chance.
             | 
             | You're probably behind Tahoe in the line.
             | 
             | Just like you don't get Google Fi or Google Fibre. And just
             | like some countries don't get YouTube Premium or whatever.
        
               | cmrdporcupine wrote:
               | Nice snark, but what I said applies to the bulk of the US
               | northeast and midwest as well.
               | 
               | (As for Google Fibre, that's a fun one as I actually
               | worked on that product, though of course I couldn't get
               | it...)
        
               | renewiltord wrote:
               | It isn't snark. It's just blunt truth. Those places won't
               | get it first either. If self driving cars come about,
               | their full feature set may well be geo-limited. Even
               | covering just California, Arizona, and Texas would make
               | the technology amazing.
               | 
               | Not to speak of the Chinese, who will simply build their
               | cities to include road beacons or whatever is necessary
               | to keep AVs effective.
        
               | cmrdporcupine wrote:
               | I'm sorry, but it's troll-level behaviour to slap the
               | "insignificant market" label on the entire northeast
               | and midwest, which include 6 of the 10 largest "urban
               | agglomerations in North America":
               | https://en.wikipedia.org/wiki/List_of_the_largest_urban_aggl...
        
               | smnrchrds wrote:
               | "It is the arrogance of a giant American corporation
               | which considers the correct spelling of the names of
               | millions of Dutch people an edge case."
               | 
               | https://medium.com/@hansdezwart/how-the-dude-was-duped-
               | by-bi...
               | 
               | Unfortunately, troll behaviour or not, that's how SF
               | companies behave in the real world. The usability of
               | their products tends to be proportional to how close you
               | are (physically or otherwise) to the bay area. I live in
               | Calgary and I would be very surprised (and happy) if I
               | see self-driving cars here before the end of the century.
        
               | renewiltord wrote:
               | You're just not important enough for how hard it is. Why
               | is that so offensive to you? You don't even _want_ it and
               | you're upset no one cares to offer it to you? Bizarre.
               | 
               | Is this like not being invited to a party you didn't want
               | to go to? Okay, then, maybe Tesla's snow driving test
               | will give you the chance to ostentatiously decline.
        
               | ksk wrote:
               | Not the OP, but personally I don't want other self-
               | driving cars on the road with me risking me and my
               | family. We know how easily, and plentifully, software
               | bugs get introduced every single release; I would
               | imagine developers would be the last people willing to
               | risk their lives on software.
        
           | albntomat0 wrote:
           | Does self-driving have to handle all weather conditions right
           | away? A sensible implementation needs to take the current
           | conditions into account, such as the weather and the status
           | of the road and car. If those are bad, it would refuse to
           | activate itself, similarly to how a responsible human would
           | choose not to drive in bad conditions.
        
             | cmrdporcupine wrote:
             | No, they do not need to operate in those conditions, but we
             | have people seriously making proclamations about the
             | imminent end of the truck driving employment industry as we
             | know it, because "trucks will have no need for drivers."
             | 
             | Those people are fantastically wrong. And that's just one
             | example.
        
               | jjk166 wrote:
               | Yeah, but an autonomous truck could pull over to the side
               | of the road and wait for better weather.
        
               | albntomat0 wrote:
               | Definitely not all truck driving, but an autonomous
               | truck that can handle highway driving in good weather is
               | a much easier problem, and one that would put a
               | significant number of truck drivers out of work.
        
               | gremlinsinc wrote:
               | Yeah, all you need is one truck to lead a 'train' of
               | autonomous trucks. Think of the truck in front as the
               | conductor/engineer; all the other trucks ride so close
               | to each other that they cut down drag to save fuel. At
               | specific exits one truck detaches from the group and
               | goes to a staging area, where a local driver finishes
               | the last mile while the train keeps on going.
        
               | jjk166 wrote:
               | More likely the truck-driving model would change. Instead
               | of having one employee who goes where the truck goes,
               | you'd have employees who reside in or near shipping
               | destinations and meet up with the trucks for
               | loading/unloading/fueling/maintenance/etc. On the one
               | hand a given number of employees could service many more
               | trucks along a single route, decreasing labor
               | requirements, but at the same time covering large numbers
               | of routes may take more people, or companies may focus on
               | a narrower set of routes, offering more opportunities for
               | smaller shipping companies. Odds are the number of people
               | doing truck-driving related work would stay roughly the
               | same, but the total volume of shipping would go up.
        
             | ghaff wrote:
             | Driving in a whiteout blizzard is one thing. But people do
             | need to get around in northern states in the winter and
             | they absolutely sometimes have to drive in snow (and
             | sometimes snow happens mid-trip or you have to get home
             | from another location). I certainly don't go out of my way
             | to drive in substantial snow (and fortunately I don't
             | need to commute any longer), but it sometimes happens.
             | 
             | If it's just the autonomous system that doesn't work,
             | that's fine, but then you really can't depend on the car
             | unless there's a competent licensed driver who can take
             | over.
        
               | albntomat0 wrote:
               | I think we're in agreement. My comment was in reference
               | to a more advanced version of what exists currently: a
               | car that can be driven manually and always requires a
               | licensed driver, but can activate its automation on
               | command.
               | 
               | I think we'll have versions like that for a long while
               | before we arrive at autonomous vehicles that do not
               | require a licensed driver at all.
        
           | randmeerkat wrote:
           | Tesla just dropped a snow driving beta:
           | https://www.teslaoracle.com/2020/12/29/fsd-beta-8-tested-for...
        
         | kube-system wrote:
         | Exactly. For the benefit of passengers, self driving cars
         | should drive as well as a sober, attentive, well-behaving
         | driver.
         | 
         | When I get into a car as a passenger today, I already get the
         | benefit of riding with a better than average driver -- I can
         | identify and choose not to ride with people who are drunk,
         | inattentive, reckless, etc. That's the more reasonable
         | baseline for comparison.
        
         | ianhorn wrote:
         | I think I read that ninety-something percent of driving
         | deaths or accidents involve people being irresponsible in the
         | manner you said, but I can't find a source again, so take it
         | with a grain of salt.
         | 
         | I was able to find one saying that of the 37k driving deaths
         | in 2016, 10-11k involved a BAC over .08 and about the same
         | involved speeding. Not knowing the overlap, that's 10-22k out
         | of 37k, i.e. 27-59% of deaths involving drunk driving or
         | speeding.
         | 
         | If it's on the high end of that range, then merely to match a
         | driver who never speeds or drinks, a self-driving car has to
         | be roughly 2.5x safer than the average human.
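         | 
         | Spelling out the bounds (a quick sketch using midpoints of
         | the figures quoted above; the true overlap is unknown):
         | 
         |   deaths = 37_000     # total US driving deaths, 2016
         |   drunk = 10_500      # midpoint of the 10-11k BAC > .08
         |   speeding = 10_500   # roughly the same for speeding
         | 
         |   # Union of the two causes: full overlap to no overlap.
         |   low = max(drunk, speeding) / deaths   # ~0.28
         |   high = (drunk + speeding) / deaths    # ~0.57
         | 
         |   # At the high end, matching a driver who never speeds or
         |   # drinks means beating the residual risk:
         |   print(1 / (1 - high))  # ~2.3x safer than the average human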
         | 
         | I wish there were better stats on the safety of the sort of
         | driver you would let drive you around (e.g. you wouldn't get in
         | the car with your drunk friend behind the wheel).
        
       | vincentmarle wrote:
       | I find it weird that nobody seems to talk about the national
       | security implications of self-driving cars. Imagine the Russian
       | cyber attack we just experienced happening on millions of self-
       | driving cars...
        
         | errantspark wrote:
         | This is already a risk with normal cars.
         | 
         | https://www.wired.com/2015/07/hackers-remotely-kill-jeep-hig...
        
         | xmodem wrote:
         | To be fair, with the amount of computer control and always-
         | online features in current cars, this is already a huge risk.
        
         | boomboomsubban wrote:
         | Why would a state actor do this? It's a clear declaration of
         | war, and probably wouldn't kill more people than suddenly
         | launching missiles. Even if they could somewhat hide it, the US
         | is prone to retaliate on fairly flimsy pretexts.
         | 
         | There may be some terrorism risk, but any truly terrifying
         | scenario requires a multitude of incredibly stupid design
         | choices. Like constant internet connection, remote updates, no
         | manual override, and a single widespread system. Hacking
         | stoplights is roughly as scary.
        
           | tgv wrote:
           | It will cripple a country and its economy.
        
             | boomboomsubban wrote:
             | So will bombs
             | 
             |  _edit_ particularly as an attack on self driving cars
             | would be an intentional targeting of civilians, and would
             | likely be considered similar to using chemical weapons.
        
             | renewiltord wrote:
             | If you do that we will launch missiles. You will not
             | survive the attempt. See, this is the thing with stuff like
             | this. "You can't prove it!" doesn't work.
             | 
             | If a Russian terrorist cell (state-sponsored or not) did
             | this to American vehicles on the road, Putin would be on
             | the phone begging not to be blown up. The leaders of a
             | dozen countries would be on the phone begging us not to
             | blow him up.
             | 
             | It's like America's power supply. Notoriously easy to
             | destroy, but if you _do_ destroy it, hell will rain down
             | upon you.
             | 
             | Because it turns out the devices that make the peace don't
             | operate like the devices that operate in the peace. So you
             | can't break them that easily.
        
       ___________________________________________________________________
       (page generated 2020-12-30 23:00 UTC)