[HN Gopher] It's a truck full of traffic lights [Tesla Autopilot]
       ___________________________________________________________________
        
       It's a truck full of traffic lights [Tesla Autopilot]
        
       Author : detaro
       Score  : 203 points
       Date   : 2021-06-03 18:42 UTC (4 hours ago)
        
 (HTM) web link (twitter.com)
 (TXT) w3m dump (twitter.com)
        
       | samatman wrote:
       | This is a great example of how Tesla's strategy to build out
       | using real-life training data can pay off.
       | 
       | Tesla now has data about what it looks like to drive behind a
       | truck transporting traffic lights! No team in the world would
       | solve this problem, in advance, in simulation.
       | 
       | Not saying their strategy is going to work, not saying it's an
       | unbeatable advantage, but: just look at it! This is a compelling
       | demonstration.
        
         | gameswithgo wrote:
          | Who isn't using that strategy? openpilot uses it.
        
         | adflux wrote:
          | A demonstration of why it will be a long time before it's
          | safe enough to drive on its own.
        
       | tibbydudeza wrote:
       | Clearly the probability of it happening is rather low unless you
       | live close to the local municipality road maintenance depot.
       | 
        | But it's clearly another edge case, like detecting deer (I
        | wonder if they can handle our local kudu), that they need to
        | deal with.
       | 
       | They might as well develop AGI at this rate.
        
       | tppiotrowski wrote:
       | Who could've ever predicted this scenario? A great example of
       | something that's trivial for human beings to understand but where
       | ML training sets will come up short.
        
         | matt-attack wrote:
         | That's what's frightening about this. I feel not enough people
         | internalize just how primitive the car's
         | understanding/perception of the world is. A two year old human
         | brain would have no confusion in this scenario.
         | 
         | We discount how much the concept of "understanding" is required
          | in visual perception. We don't just see shapes; we have a
          | complete _understanding_ of what we're seeing.
          | 
          | I might see a big rectangle flopping in the lane in front of
          | me. I can _immediately_ ascertain whether it's a piece of
          | tumbling plywood or foam based on movement characteristics,
          | color, apparent size, etc. I can then use that understanding
          | to decide what evasive actions are required.
         | 
         | A Tesla it seems has absolutely zero of this capability.
        
           | TheAdamAndChe wrote:
           | One time, I fell asleep as a passenger in a car. I hit a
           | bump, looked up, and screamed as I saw what looked like a car
           | coming right for me.
           | 
           | It was a tow truck towing a car backwards. It was just enough
           | in my half-asleep state to scare the shit out of me.
           | 
           | Humans have millions of years of evolution behind our visual
           | processing systems. We have developed hacks that prevent our
           | brain from getting tricked by unusual situations like traffic
           | lights on trucks or backwards cars being towed. We only
           | developed the first computers a hundred years ago, and only
           | in the last 40 years have a small subset of people started
           | learning about visual processing systems.
           | 
           | It's easy to look at this video and scoff because of how
           | trivial it seems. But it's instead a marvel of our minds that
           | we can pick up on context clues so quickly and accurately
            | that such oddities basically never puzzle us. Given the pace
            | of our innovation, it wouldn't surprise me if computer vision
            | systems matched ours within a few human generations at the
            | latest.
        
             | formerly_proven wrote:
              | > Given the pace of our innovation, it wouldn't surprise me
              | if computer vision systems matched ours within a few human
              | generations at the latest.
             | 
             | This is not a FLOPS problem. Moore's law can't save you
             | here.
        
             | Syonyk wrote:
             | > _It was a tow truck towing a car backwards. It was just
             | enough in my half-asleep state to scare the shit out of
             | me._
             | 
             | And you're sure that it was a bump, and not the driver
             | pulling up rather close behind it and then giving a sharp
             | tap of the brakes to wake you up? ;)
        
             | belltaco wrote:
              | > One time, I fell asleep in a car. I hit a bump, looked
              | up, and screamed as I saw what looked like a car coming
              | right for me.
              | 
              | > It was a tow truck towing a car backwards. It was just
              | enough in my half-asleep state to scare the shit out of
              | me.
             | 
             | There are some funny(?) youtube videos of people in the
             | passenger seat waking up to this with tractor trailers
             | being towed in reverse.
        
           | hobofan wrote:
            | I'm very curious how a LIDAR-based system would have fared
            | here. It very much looks like a problem that would only
            | appear with a limited vision-only system like Tesla has.
        
             | [deleted]
        
       | mortenjorck wrote:
       | That the Autopilot world model is extrapolating these lights to
       | follow the passing geography in ~500ms cycles, similar to how
       | game netcode sends players running in their last known direction
       | when encountering packet loss, is an interesting insight into how
       | the system works.
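        | 
        | As a minimal sketch of the netcode analogy (names and numbers
        | invented for illustration; nothing here is Tesla's actual
        | pipeline), dead reckoning just extrapolates the last
        | authoritative state until a fresh update snaps it back:
        | 
        |     from dataclasses import dataclass
        | 
        |     @dataclass
        |     class Track:
        |         x: float   # last reported position (m)
        |         vx: float  # last reported velocity (m/s)
        |         t: float   # time of last report (s)
        | 
        |     def extrapolate(tr: Track, now: float) -> float:
        |         """Guess the position, absent fresh data, by assuming
        |         the last known velocity still holds."""
        |         return tr.x + tr.vx * (now - tr.t)
        | 
        |     # With updates arriving every ~500 ms, a renderer glides
        |     # the object along and then snaps it to the next measured
        |     # position - the popping effect described above.
        |     print(extrapolate(Track(x=10.0, vx=15.0, t=0.0), 0.5))  # 17.5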
        
       | [deleted]
        
       | tigerBL00D wrote:
       | Very interesting. There must be some kind of assumption in models
       | that traffic lights are stationary objects which results in them
       | falling off the back of the truck.
        
         | jschwartzi wrote:
          | I mean, it makes sense. If you've never seen traffic lights
          | being delivered to a new intersection, you'd probably blithely
          | assume you'll never encounter a stack of them that are unlit
          | and moving.
         | 
         | Somewhere at Tesla there's a junior engineer who's telling a
         | senior engineer "I told you so!"
        
       | [deleted]
        
       | offsky wrote:
       | If one of the lights in the back of that truck had illuminated
        | red, would the car have put on the brakes in the middle of the
       | road?
        
       | darepublic wrote:
       | My God, it's full of stars
        
       | akomtu wrote:
        | Using Tesla's "autopilot" isn't very different from working as
        | an unpaid driving instructor who has to keep an eye on the
        | "autopilot" trainee. To make things worse, the trainee is
        | tripping on mushrooms and sees nonexistent cars appearing from
        | nowhere, trees morphing into pedestrians, and pedestrians
        | morphing into traffic lights. The trainee also has a problem
        | with epilepsy, and you're expected to take control at a
        | moment's notice. Edit: also, the trainee has the cognitive
        | ability of a mentally challenged frog.
        
         | jsight wrote:
         | That's basically true, but there's no reason to limit it to
          | Autopilot, although the high points of AP are better than any
         | other system that I've tried so far. I've seen it do amazingly
         | well in the rain at night, and in the rain on a road with faded
         | markings that were hard to pick out. I've also seen it
          | thoroughly confused by double-dashed lines at HOV lanes.
         | 
         | I'll still take that over the ones that beep proudly to tell
         | you that they are about to fail to maintain their lane (hi
         | ProPilot).
        
           | notJim wrote:
           | > I've seen it do amazingly well in the rain at night
           | 
           | Not anymore, since they removed the radar. Rain seems to
           | really interfere with the vision based system, and apparently
           | auto high beams are required at night, and they flash
           | constantly.
        
             | bordercases wrote:
             | Why did they remove radar?
        
               | ffhhj wrote:
               | To buy more bitcoins.
        
         | timoth3y wrote:
         | It's fascinating to watch this technology develop, but as a
          | driver I usually feel more anxious using "autopilot" than not
          | using it.
         | 
         | With one _huge_ exception: self parallel-parking.
         | 
         | I don't understand why this innovation doesn't get more love.
         | Tesla's not the only manufacturer to offer this, of course, but
          | this particular innovation has increased my enjoyment of city
         | driving more than ... well, more than anything I can think of.
         | 
         | Even if they never get anything else to work, it would have
         | been worth it just for self parallel-parking. lol
         | 
         | From an engineering perspective, of course there are serious
          | problems that need attention, but sometimes it's also good to
         | celebrate the wins.
        
         | ed25519FUUU wrote:
          | I actually don't want autopilot for fast, freeway-speed
          | driving. That's easy. I want it for the mundane, exhausting
          | stop-and-go, bumper-to-bumper traffic as we all slowly creep
          | along.
        
           | asdff wrote:
            | There are already cruise control systems that do this for you.
        
             | [deleted]
        
             | slg wrote:
             | That's all Autopilot is, adaptive cruise control and lane
             | keeping. Autopilot doesn't react to stoplights, stop signs,
             | or basically anything else. That is a separate feature set
             | that they misleadingly call Full Self Driving.
        
               | cma wrote:
                | I think it still doesn't read speed limit signs the way
                | it used to be able to, but some areas are programmed in.
        
               | RC_ITR wrote:
                | You can once again:
                | https://electrek.co/2020/08/29/tesla-software-update-visuall...
                | 
                | It was lost due to Tesla's decision to remove Mobileye as
                | a supplier.
               | 
               | Mobileye generally focuses on highly optimized HW/SW that
               | does individual things very well, in a manner similar to
               | how factory automation works (e.g., they basically built
               | a "lane keeping + auto-braking + sign reading"
               | appliance).
               | 
               | Tesla decided that 1) It was bad to outsource automation
               | 2) Starting from scratch and 'learning' how to drive
               | using ML was better than iteratively teaching a car how
               | to do discrete tasks very very well (this is why
               | Autopilot regressed a bunch in 2016).
               | 
                | In general, it's another symptom of Elon's 'I have an
                | extremely specific idea, let's figure it out' mentality
                | that sometimes works and sometimes results in useless
                | tunnels under Las Vegas.
        
             | ed25519FUUU wrote:
              | They don't typically work below 25 MPH, and definitely
              | don't handle stop-and-go or keep you within the lane.
        
               | mbell wrote:
                | That was true of older systems; most newer adaptive cruise
                | control systems I've used have no issue with stop-and-go
                | traffic.
        
               | sarajevo wrote:
               | A Model 3 owner confirms this...
        
               | haliskerbas wrote:
               | I believe this one does all of those:
               | https://www.toyota.com/safety-sense/animation/pcspd
        
               | nethunters wrote:
               | My Toyota Corolla with Toyota Safety Sense 2.0 (fitted in
               | nearly every Toyota post 2018/2019) has stop and start,
               | either press resume if you've stopped for more than 5
               | seconds or tap the accelerator, and that's with lane
               | tracing assist as well. I don't think there's a second I
               | don't have both Adaptive Cruise Control and Lane Tracing
               | Assist activated.
               | 
               | Edit: You can also accelerate without disabling it.
        
               | belltaco wrote:
                | I think only the new Honda L3 system (only on one very
                | limited edition car in Japan) and a Subaru tech on some
                | mainstream cars (in the US too) have this capability.
        
               | jannyfer wrote:
               | Just off the top of my head - Honda Sensing works in
               | stop-and-go traffic and keeps you within the lane.
        
               | ipsum2 wrote:
                | Honda Sensing lane keeping only works above 40 mph. In
               | addition, you need to press a button if the vehicle
               | completely stops.
        
               | ngokevin wrote:
                | Subaru EyeSight goes down to zero and handles stop-and-go
                | with lane centering, though I guess only provided you've
                | started the adaptive cruise control system above the
                | minimum speed.
        
               | spockz wrote:
                | Most cars support this here in Europe, at least. However,
                | true traffic jam assist seems reserved for cars with
                | automatic transmissions.
        
               | rad_gruchalski wrote:
                | My BMW 5 series does that. Huh. That's about the only
                | time I ever use it.
        
               | failwhaleshark wrote:
               | My mom's Subaru Forester does hands-off stop-and-go
               | traffic from 0 to 90 mph. It has 2 cameras and does lane
                | departure too. It will even activate the antilock brakes
                | if it needs to panic stop.
               | 
               | Hyundais have exceptionally-good lane-following.
        
               | codeulike wrote:
               | My Kia EV does exactly this, and is fairly competent at
               | it.
        
           | runarberg wrote:
           | May I ask, is traveling through a bumper to bumper traffic a
           | frequent occurrence for you? If so, are there options for you
           | to avoid it (e.g. change commuting times, work from home,
           | walk, bike, or use public transportation)? If also yes, why
           | do you tolerate it?
           | 
            | Personally I see bumper-to-bumper traffic maybe 3 or 4 times
            | a year (when driving home from a vacation or being forced to
            | a doctor's appointment during rush hour). And I honestly
            | don't get why anyone would subject themselves to this kind
            | of traffic as part of their daily commute.
        
             | kbelder wrote:
             | "Nobody drives that route at that time; it's too crowded."
        
             | industriousthou wrote:
              | Lots of people have to be at particular places at
             | particular times. Hence, the rush hour.
        
             | greenyoda wrote:
             | Consider yourself very lucky. Not everyone has the luxury
             | of working from home, choosing their working hours or
             | living close to public transit or within walking/biking
             | distance from their job (in many areas, the farther you get
             | away from city centers, the cheaper the real estate
             | generally is).
             | 
             | In the NYC area, bumper to bumper traffic is common. It can
             | be caused by an accident or construction that blocks one or
             | more lanes, cars merging on to an already congested
             | highway, etc. These conditions frequently happen even
             | outside of rush hour.
        
             | goldenkey wrote:
             | Try living in New York City, the whole tristate, not just
             | Manhattan. 3 to 4 times a year is laughable. Try 30 to 40
             | times a year if you take the Belt Parkway.
             | 
             | I used to live in LA where Santa Monica Blvd was always
             | backed up. I doubt LA traffic has gotten better either.
             | 
             | I'm surprised by your insolence with regard to how shitty
             | traffic circumstances are in big cities. Simply changing
              | one's commuting times doesn't fix the issue.
        
               | runarberg wrote:
               | This question was specifically addressed to those that
               | have the option of avoiding it and still don't. I was
               | under the impression that there were several options of
               | escaping bumper to bumper traffic on your daily commute
               | in the New York City area.
               | 
                | In fact I've often heard people from that area complain
                | more about the lack of parking near their commuter rail
                | station. Which indicates that people would rather
                | tolerate circling the parking lot at their park-and-ride
                | than risk stop-and-go traffic jams.
        
               | oh_sigh wrote:
               | 30-40 times a year _if you own a car_ and drive it
                | regularly. That's why > 50% of households in NYC don't
               | own a car.
        
               | greenyoda wrote:
               | Large parts of NYC (much of the outer boroughs) are not
               | near the subway, and not within walking distance of a
               | supermarket (especially if you're carrying groceries for
               | an entire family). Millions of people live there because
               | they can't afford Manhattan rents, especially for a
               | family-sized apartment.
               | 
               | Also, transit lines tend to connect well to mid-Manhattan
               | but poorly between other locations. So if you live in
               | Queens and work in Brooklyn, good luck getting to work
               | reliably by public transportation. (Before you object to
               | that arrangement, consider: If you own a house and have
               | kids in school, you're not necessarily going to uproot
               | your family and move just because your new job is further
               | from home.)
               | 
               | Thus, many ordinary New Yorkers rely on cars to commute
               | to work.
        
               | runarberg wrote:
                | I did a quick Google Maps survey and found that it can
                | take about an hour to commute on public transit between a
                | residential area in Queens and a commercial area in
                | Brooklyn[1]. Not ideal until you see that driving the
                | same route takes about 40 min. So not a huge difference,
                | and it definitely passes as an alternative to avoid the
                | risk of stop-and-go traffic jams.
               | 
               | If you need to drive to the supermarket you should have
               | the option of choosing a time and route with minimal risk
                | of traffic jams. I find it hard to believe that many
                | people are frequently hitting bumper-to-bumper traffic
                | on their way to or from the supermarket. Occasionally
               | yes, but frequently no.
               | 
                | 1: https://www.google.com/maps/dir/84-25+168th+Pl,+Jamaica,+NY+...
        
             | ska wrote:
              | > I honestly don't get why anyone would subject themselves
              | to this kind of traffic as part of their daily commute.
             | 
             | You may have trouble understanding it, but empirically a
             | huge number of people see this extremely regularly, if not
             | daily.
             | 
             | It's not even necessarily a feature of horrifically long
             | commutes. For one example, lots of places that have
             | basically ok traffic have bottlenecks at bridges, you may
             | be stop and go for a little while every day getting across
             | that.
        
         | danw1979 wrote:
         | The first thing that crossed my mind when I saw the clip of
         | autopilot hallucinating flying traffic lights was that it was
         | clearly DUI and there's no way I'm getting in the car with that
         | guy.
        
       | [deleted]
        
       | lifeisstillgood wrote:
        | I am fairly confident that the autonomous car business model is
        | pretty dead. Electric cars - that's great! But self driving is
        | waaaay harder. And self driving has an inbuilt expectation of
        | near perfection. If we swapped to self driving cars today we
        | might halve car deaths (something like 500k people p.a., which
        | would be a huge, very important thing), but it would not be
        | perceived as "robots save 500k people a year"; it would instead
        | be "robots kill the other 500k per year".
        | 
        | Liability, insurance, legal minefields and plain old marketing
        | would never allow cars that perform as well as the best
        | performing cars do today to be on the roads in "every day"
        | conditions.
       | 
        | My conjecture is we end up building AV-only roads: initially one
        | lane of a highway, then ring roads round cities and major
        | warehousing hubs, then across urban areas. Walled off in some
        | way, they simply become _railways with benefits_.
       | 
       | At that point every business model ever written with Self Driving
       | in the title goes in the bin.
       | 
       | I am not saying the tech is useless - frankly it's fucking
       | awesome that this is happening in my lifetime. But fucking
       | awesome tech and workable business model aren't always the same
       | thing.
       | 
       | Not sure where I am going with the rant but I sure hope we get
       | more out of the billions spent here than Teflon.
        
         | propogandist wrote:
         | you can skirt all regulation and set false expectations by
         | simply calling your driver assist feature "autopilot"
        
         | exporectomy wrote:
          | It doesn't need to be almost perfect. Waymo, for instance, has
          | remote human assistance when it gets stuck, and even a human in
          | a normal car who drives to the scene to get it unstuck if it's
         | really bad. There could well be a point where those humans plus
         | the tech are cheaper than a traditional full-time-per-customer
         | human taxi driver.
         | 
         | You might say people don't want those delays but we tolerate
         | delays in normal commuting. An unattended bag on a tube station
         | stopping trains, a car accident blocking traffic, traffic jams
         | blocking traffic, mechanical breakdowns, etc. As long as
         | they're infrequent enough, it should be OK for riders.
        
         | AbrahamParangi wrote:
         | You'll know self driving is almost here when it works in
         | simulation at non-interactive framerates.
        
         | separateform wrote:
         | Railways with benefits aren't good enough and would be an even
         | worse business model.
        
         | ctdonath wrote:
         | Au contraire, seems we're mostly there. It's incredible having
         | watched self-driving, and neural networks, go from new concepts
         | and barely functioning to "order here" and public use (tell me
         | the numerous YouTube FSD videos aren't what they are). Insofar
         | as there are still serious edge cases to address, they're being
         | solved.
         | 
         | Time and again I've watched "ain't happening" technology become
         | the preferred norm practically overnight. Eagerly awaiting my
         | FSD CT, and making long trips without having to micro-manage
         | every foot across thousands of miles.
        
         | failwhaleshark wrote:
         | Nope. You're just not patient enough. ACs will come, but very
         | gradually. They're already (mostly) here but it will take
         | another decade or two to be completely A.
        
           | treeman79 wrote:
           | So between "not invented" and "not proven impossible yet".
           | 
           | https://xkcd.com/678/
        
             | failwhaleshark wrote:
             | Lol, I guess. ACs aren't the same as the Moller Skycar.
             | It's a very, very complicated "DARPA Grand Challenge" where
             | the goal is to not get sued by Ralph Nader into oblivion
             | because the algorithm went MCAS and killed grandma. Waymo
             | or someone will need to do racing and the Gumball 3000
             | first before people will accept anything more than semi-A
             | (glorified cruise control and lane following).
        
         | erikpukinskis wrote:
         | Doesn't "I'm fairly confident" kind of imply you're not
         | actually confident?
        
           | Doctor_Fegg wrote:
            | It's a common British English idiom.
        
         | ajross wrote:
         | I see this rather differently. In fact AI is already as good as
         | human drivers, statistically. Only a few years back, we could
         | point at people getting killed by software bugs like the Tesla
         | path-under-semi-trailer issue (not a lot, but it did happen and
         | they were real bugs).
         | 
         | That's not happening any more. All we have left is laughing at
         | stuff like this, where the visualization (not even the
         | autopilot!) gets confused by seeing real (!) traffic lights on
         | a truck, so it paints them in space, but then has to re-
         | recognize them because they are moving.
         | 
         | At some point, the luddites will just run out of ammunition.
         | It's sort of happening already.
        
           | akersten wrote:
           | > where the visualization (not even the autopilot!) gets
           | confused by seeing real (!) traffic lights on a truck, so it
           | paints them in space, but then has to re-recognize them
           | because they are moving.
           | 
           | Just my hypothesis: but I think the autopilot _really did_
           | see them as traffic lights, and just got lucky that they
            | weren't powered and ignored them as out of order. Were there
           | a cross street, I suspect the car would have stopped and
           | treated it as an uncontrolled intersection...
        
           | lstamour wrote:
           | > AI is already as good as human drivers
           | 
           | But the AI drives slowly and gets confused easily. Regular
           | drivers routinely have to go around self-driving cars. Not to
           | say they won't improve, but it seems like current AI is
           | assistive to the point where it might be harmful when drivers
            | rely on it at speeds and in situations where they shouldn't. I'm
           | sure it will keep improving, but I feel like this is one of
           | those situations where the amount of data and training
           | required, and the amount of iteration on the software
           | required to handle edge cases is not impossible but is
           | exceptionally difficult.
        
           | cvak wrote:
            | Do you have a source for the statistics? I remember reading
            | somewhere that it was just a number one of the AI projects
            | claimed, without verification and without knowledge of the
            | setup.
        
         | lvs wrote:
         | I think the issue is bigger than just self-driving. AI/ML is
         | wildly oversold as an engineering solution to real world
         | problems. In the best case, it's solving easy problems under
         | narrowly defined conditions. It's not really solving hard
         | problems robustly under real conditions.
        
       | ajross wrote:
       | This genre of posts is so tiresome.
       | 
       | As the second video in the thread demonstrates, the truck is
       | _literally hauling traffic lights_. The AI recognition is
       | correct, the only thing worth complaining about is that they 're
       | displayed as static objects for the user after recognition, just
       | to be re-recognized a few seconds later in a different place.
       | Note that the car is correctly not detecting they are lit, so not
       | inferring direction (though AP isn't engaged, so I guess we'll
       | never know what it would have done).
       | 
       | No doubt you could play the same game by putting a traffic cone
        | on your bike. The car wants to see important traffic objects;
        | it's literally what it's trained for.
        
         | theamk wrote:
          | And that illustrates the important problem with self-driving
          | cars: if you want L5 autonomy, you need to be able to handle
          | all the weird cases.
        
       | sfblah wrote:
       | I recently test drove a Tesla just to see how the autopilot
       | system works. The way they handle traffic lights is pretty
        | entertaining. It sees them, particularly yellow lights, in a lot
       | of situations that are truly perplexing. It also had a tendency
       | to turn trees into traffic cones and to be truly impressive at
       | detecting garbage cans. On some level it's hard to understand
       | exactly what they're trying to do there.
       | 
       | I also remember being at an intersection where I was turning left
       | and was waiting behind another car. The display repeatedly showed
        | the cars crossing in front of us crashing into the car in
       | front of me. Not sure why.
        
       | cs702 wrote:
       | Well, of course. This is probably the first time the car has come
        | across a truck carrying multiple traffic lights, without cover,
       | stacked on the back of the truck, well above eye level. It's a
       | rather unusual edge case.
       | 
       | It's only a matter of _time_ before the software in these cars
       | can handle the vast majority of edge cases as well as or better
        | than human beings. Human vision isn't exactly reliable.[a]
       | 
       | In the meantime, someone should make a playable game in which
        | trucks throw traffic lights at cars. Maybe someone at Tesla is
       | willing to make this game in good jest?
       | 
       | [a] See, for example
       | http://www.ritsumei.ac.jp/~akitaoka/index-e.html
        
       | failwhaleshark wrote:
       | Did they get to level 17? I hear you get an extra car.
        
       | etaioinshrdlu wrote:
       | As an ML engineer I'm a little bit baffled that Tesla has not
       | solved this by now. It's not like they lack data or ML knowledge.
       | 
       | It seems like they should have a million hard test cases that
       | must pass in simulation before releasing a new model. The
       | simulations should be harder and more extreme than anything
       | encountered in real life.
       | 
       | I think the real problem is obvious. They're trying to rush the
       | work because Elon said so.
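        | 
        | As a sketch of the regression gate being proposed here (all
        | names are hypothetical, not any real Tesla tooling):
        | 
        |     def release_ok(model, hard_cases):
        |         """hard_cases: list of (scenario, allowed_actions)."""
        |         failures = [scn for scn, allowed in hard_cases
        |                     if model(scn) not in allowed]
        |         return not failures, failures
        | 
        |     # e.g. an archived case inspired by this very video:
        |     cases = [("unlit signals stacked on a truck", {"proceed"})]
        |     ok, bad = release_ok(lambda scene: "proceed", cases)
        |     assert ok  # the release candidate clears the archive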
        
         | fsh wrote:
         | I've never understood this argument. Isn't the main bottleneck
         | that you need well-labeled data for training the neural
         | networks? How is having tons and tons of random camera footage
         | going to help?
        
           | ampdepolymerase wrote:
            | For a lot of problems, clean labeled data is no longer
            | the bottleneck, or in some cases, there are ways around it.
           | The bigger issue is dealing with the "long-tail" of unknown
           | new scenarios. This is a currently unsolved challenge.
        
           | unoti wrote:
           | > How is having tons and tons of random camera footage going
           | to help?
           | 
            | One way to think of this is that the footage is implicitly
            | labelled: we have the benefit of hindsight, since we know
            | what the state/location of the vehicle was going into the
            | future. That hindsight can also serve as an implicit label,
            | via the knowledge that the vehicle did not crash or collide
            | with something immediately after the footage.
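            | 
            | A rough sketch of that idea (shapes and names invented for
            | illustration): pair each frame with the trajectory the
            | vehicle actually drove next, which acts as a free label.
            | 
            |     def make_pairs(frames, ego_poses, horizon=50):
            |         """frames[i]: camera image at step i;
            |         ego_poses[i]: (x, y, heading) at step i."""
            |         pairs = []
            |         for i in range(len(frames) - horizon):
            |             future = ego_poses[i + 1:i + 1 + horizon]
            |             # image -> "what a human driver did next"
            |             pairs.append((frames[i], future))
            |         return pairs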
        
           | stonemetal12 wrote:
            | Fairly simple: they have footage from every time a person had
            | to correct AP's driving. Take that footage and label it.
            | Train on some of it, save some for test cases. Finally, don't
            | release an update until it drives better than the current
            | system.
        
         | Havoc wrote:
         | > they should have a million hard test cases that must pass
         | 
         | Move fast and break things [like tests]
        
         | ModernMech wrote:
         | Right? This should be trivially solved with some sort of
         | temporal filtering on detected objects. If you detect a traffic
         | light at (x,y) in one frame, and it disappears in the next, but
         | there's a new one at (x+dx,y+dy), then you shouldn't place a
         | new one down in the world frame. You should only place a
         | traffic light down if you're confident it exists and is
         | operational. At the very least, the lights should be detected
         | on the back of the truck, but they should move _with_ the
          | truck. At least that matches what's happening.
         | 
         | I don't understand why this is hard for Tesla engineers -- I
         | was doing this kind of thing in grad school a decade+ ago and
         | it worked fine. I've seen it in other demos where object
         | classifications rapidly cycle between person, bike, car, etc.
         | Are they not filtering anything? Is this a symptom of "AI-ing
         | all the things"? Because we did it with bog standard computer
         | vision techniques back then and never got behaviors like this.
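          | 
          | A toy version of such a temporal filter (thresholds and
          | structure invented for illustration; simple nearest-neighbor
          | association, not anyone's production tracker):
          | 
          |     import math
          | 
          |     GATE_M = 1.0   # max drift allowed for a "static" light
          |     CONFIRM = 5    # detections needed before placement
          | 
          |     def update(tracks, detections):
          |         """tracks: list of [x, y, hits]; detections: (x, y)."""
          |         for x, y in detections:
          |             near = [t for t in tracks
          |                     if math.hypot(t[0] - x, t[1] - y) < GATE_M]
          |             if near:
          |                 t = min(near, key=lambda t:
          |                         math.hypot(t[0] - x, t[1] - y))
          |                 t[:] = [x, y, t[2] + 1]  # same spot: more trust
          |             else:
          |                 tracks.append([x, y, 1])  # it moved: start over
          |         # A light riding on a truck keeps leaving its own gate,
          |         # so it never accumulates enough hits to be placed.
          |         return [t for t in tracks if t[2] >= CONFIRM]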
        
           | nmca wrote:
           | Did any of your grad school work make it into the real world
           | at all? Typically I'd suggest that such ideas are simple in
           | theory and difficult in practice.
        
             | ModernMech wrote:
             | Yes, there are real systems out there working off of the
             | techniques we used back then. I didn't work much in theory
             | at all.
        
         | AceJohnny2 wrote:
         | Years ago, a friend worked in the Autopilot group. It took them
         | _a year_ to procure servers to store the telemetry of the
          | existing cars, and then weeks to have them set up.
         | 
         | They don't work there anymore.
         | 
         | From their experience, I know one thing: I will never work for
         | Elon Musk. He may be a great visionary and salesman, but he's a
         | _horrible_ manager.
        
           | etaioinshrdlu wrote:
           | This is fascinating, but what caused the delays specifically?
        
             | AceJohnny2 wrote:
             | Server procurement was a CFO thing, and their specific
             | requirements didn't fit under the existing buckets, so it
             | took them a long time to get it approved.
             | 
             | It was stunning to see the complete disconnect between
             | Musk's grand declarations and what the organization was
              | actually set up to deliver.
             | 
             | Frankly, it just gives me more respect for Tim Cook, who as
              | COO at Apple made his company able to turn around and
             | deliver HW in record time.
             | 
             | Edit: in retrospect I wonder if Musk's grand public
             | declarations were actually a way to control and pressure
             | his own organization. Remember, Musk didn't actually found
             | Tesla, he rescued it from bankruptcy after the Roadster
             | didn't return as much as needed, so he inherited an
             | existing structure.
        
           | systemvoltage wrote:
            | Just like your anecdote, I have one to share as well: I know
            | a close friend who works on space lasers at SpaceX and has a
            | blast - best job ever, according to him - and he thinks Elon
            | is _an excellent_ manager, mostly because there is zero
            | bureaucracy, people are not afraid of the "do nothing"
            | option, and complexity gets removed. In fact, he wouldn't
            | work anywhere else after seeing the company culture.
           | 
           | I know it's cool to hate Elon.
        
             | AceJohnny2 wrote:
             | I dunno, "move fast and break things" works for rockets
             | [1], not so much for cars.
             | 
             | [1] https://youtu.be/bvim4rsNHkQ
        
             | ipsum2 wrote:
             | I don't think Elon has a day-to-day role at SpaceX unlike
             | at Tesla, right? From what I've read, Gwynne Shotwell is
             | the main person in charge there.
        
               | simondotau wrote:
               | Elon spends nearly as much time at SpaceX as he does
               | Tesla. There's no question that Elon regularly mucks in
               | at the lowest levels of engineering at SpaceX.
               | 
               | If you want evidence from a reasonably neutral observer,
               | take Sandy Munro (himself an engineer who has worked on
               | everything from cars to aeroplanes). He recently
               | interviewed Elon, ostensibly about Tesla but the
               | interview was in a meeting room at SpaceX. After the
               | interview he was invited to a two hour design review
               | meeting and was "blown away" at Elon's depth of
               | involvement.
               | 
               | https://youtu.be/S1nc_chrNQk?t=370 (6:10 to 8:45)
        
               | Geee wrote:
                | He is very much involved in actual day-to-day engineering
                | at SpaceX. He is the CEO and the chief engineer there.
                | 
                | Sources: https://www.reddit.com/r/SpaceXLounge/comments/k1e0ta/eviden...
               | 
               | https://www.youtube.com/watch?v=S1nc_chrNQk&t=377s
        
         | kempbellt wrote:
          | I'm curious what the issue is here that seems unsolved. It's
         | unconventionally displaying what it is recognizing, but the car
         | isn't doing anything janky.
         | 
         | The car is properly recognizing traffic lights pretty darned
         | well, considering the circumstance. It looks like it has a
         | built in understanding that traffic lights are "always"
         | stationary - hence, assigning them static locations on the 3D
         | map - but it keeps having to update the model because the
         | lights are actually moving.
         | 
         | This seems like a very non-obvious edge case that I wouldn't
         | expect an ML team to even consider as a possibility. Now they
         | need to program into the ML model an understanding that traffic
         | lights are _typically_ stationary. Which seems even more
          | difficult to me, from a technical perspective - you don't want
         | false negatives...
         | 
         | The car isn't braking or making any strange maneuvers from what
         | I can tell. I'm actually impressed that it's handling it this
         | well.
        
           | simondotau wrote:
           | I'd go further and predict that the stationary traffic lights
           | are just an artefact of the visualisation and not the vision
           | system itself.
        
           | [deleted]
        
           | sabhiram wrote:
           | So you are impressed that they ran headfirst into a bunch of
           | traffic lights because they did not know what to do?
           | 
           | When you don't know what to do - do nothing. What if it was a
           | traffic light on roller skates? Or a kid, dressed as a
           | traffic light, on roller skates?
        
             | tshaddox wrote:
             | They're unlit traffic lights. Would you rather the car slam
             | on the brakes?
             | 
             | But seriously, I'm inclined to be charitable here and
             | assume that this is merely a quirk of the UI display.
             | There's no evidence that the autopilot did anything unsafe
             | (apparently it wasn't even engaged?), and until I see
             | evidence of that I'm willing to withhold judgment. (I have
             | seen evidence of other situations where Tesla autopilot did
             | unsafe things and I'm in no way apologetic about those
             | situations.)
        
             | jschwartzi wrote:
             | I just figured out how I can make sure self-driving cars
             | stop for me when I'm trying to cross a street. Dress up as
             | a red light.
        
               | kbelder wrote:
                | Or wear one of these t-shirts:
                | https://www.spreadshirt.com/shop/design/stop+sign+mens+premi...
        
             | comradesmith wrote:
              | Stopping would be a form of action too. ¯\_(ツ)_/¯
        
             | kempbellt wrote:
             | I don't know exactly how Tesla's safety systems are
             | designed, but this is my guess.
             | 
             | Collision detection systems (radar) are accurately _not_
             | detecting an impending collision because the lights are not
             | actually on a collision course with the vehicle.
             | 
             | Object recognition systems (computer vision) are working
             | very well, because they recognize the lights and are
             | updating the 3D map accordingly, but the traffic light 3D
             | model is not designed to be a moving object - unlike
             | vehicles, which frequently move. Which is why we see the
             | car "passing through" them.
             | 
             | What we are likely seeing is simply a weird edge case in
             | the output for the user-interface. I'd imagine if an object
             | was actually flying at the car and the car could see it, it
             | would brake accordingly.
             | 
             | Also, the map is two-dimensional. The car frequently drives
              | _underneath_ traffic lights that I'm sure also appear "on
             | top of" the car in normal cases.
             | 
             | Object recognition and collision detection, from what I
             | understand, are two very different systems.
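              | 
              | A sketch of the decoupling being guessed at here (every
              | name is invented; this is not Tesla's architecture):
              | 
              |     def estimate_collisions(frame):
              |         # stub: time-to-collision (s) per tracked object
              |         return frame.get("ttc", [])
              | 
              |     def recognize_scene(frame):
              |         # stub: labeled objects for the UI map
              |         return frame.get("objects", [])
              | 
              |     def on_frame(frame, brake, render):
              |         """Same frame, two independent consumers: the
              |         braking path and the display path share no state,
              |         so a display glitch isn't a safety bug."""
              |         if any(t < 1.5 for t in estimate_collisions(frame)):
              |             brake()
              |         render(recognize_scene(frame))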
        
               | SahAssar wrote:
                | Tesla is not using any radar/lidar systems for
                | FSD/autopilot (anymore); it's all visual. It might be
                | that there are completely different systems for
                | recognizing obstacles and for what is shown on the map,
                | but this still raises the question of why one of those
                | systems seems to act like this.
        
               | [deleted]
        
             | codeulike wrote:
             | The car is not under autopilot. The driver is driving. (the
             | grey steering wheel icon would be blue if autopilot was on)
        
           | lvs wrote:
           | Working-as-intended certainly isn't an argument I was
           | expecting to see in this thread.
        
             | olyjohn wrote:
             | It's a feature, not a bug!
        
         | Syonyk wrote:
         | > _I think the real problem is obvious._
         | 
         | Yes. The human brain and visual systems aren't nearly so
         | trivial to replicate as a lot of people in the tech industry
         | seem to think.
         | 
         | Tesla is just one of many case studies in the paired tech
         | industry arrogance seen so frequently:
         | 
         | - "A human is just a couple really crappy cameras and a neural
         | network, we know how to do better cameras and neural networks,
         | how hard can it be?"
         | 
         | - "We can do anything we dream with 99.995% reliability in the
         | synthetic, computer-based world of the internet because we know
         | code. Therefore, we can do anything we want in the physical
         | reality with code!"
         | 
         | Both are far from evident in practice, but the belief in them
         | continues, despite it being increasingly obvious to everyone
         | else that neither one is true.
         | 
         | Human vision and world processing is quite impressive - and, as
         | pointed out elsewhere in this thread, a two or three year old
         | would have no trouble working out that the obstacles were some
         | things on a truck. I've got a nearly three year old, and I
         | guarantee he wouldn't confuse those for stoplights in the
         | slightest. I also wouldn't let him out on the road, though he
         | does well enough with a little Power Wheels type toy. But there
         | is _far_ more going on in the visual processing system than we
         | even understand yet, much less have the slightest clue how to
         | replicate.
         | 
         | And while code may be fine on the internet (where you can retry
         | failed API calls and things mostly make sense), the quote about
         | how fiction is constrained by what's believable and reality
         | sees no such restrictions is very true. Out on the roads, all
         | sorts of absolutely insane things can and do happen on a
         | regular basis - and you can't predict or plan for all of them.
         | But the car has to handle them or it crashes.
         | 
         | As a random example, a year or two ago, I was behind a car that
         | had poorly strapped a chunk of plywood to their roofrack with a
         | good chunk hanging forward, and the front end of it was
         | starting to oscillate awfully hard. I had a good clue that it
          | was going to come apart sometime in the very near future, so I
         | backed off from a normal following distance to quite a way
         | back. Sure enough, half a mile later, it failed, went flying
         | through the air, slammed into the road a good distance behind
         | the car, and tumbled a bit. Had I been using a normal in town
         | following distance, it would have either hit me or tumbled into
         | me, but using a human visual system, it was obvious that my
         | existing following distance stood a good chance of being a bad
         | idea.
         | 
         | Stuff like this happens on roads _constantly._ Meanwhile, state
         | of the art self driving can 't tell the difference between
         | stoplights and some poles on a truck. You'll excuse me if I
         | don't think the problem is anywhere remotely close to solved
         | for a general case Level 4 purpose.
        
           | ve55 wrote:
            | Personally I view the problem here not as a failure of object
            | recognition or of what would be considered a visual system,
            | but of abstract reasoning (or a lack thereof, of course).
           | 
            | Artificial neural networks are pretty good at object
           | recognition, among hundreds of other things, and even better
           | than humans at some of them. They are, however, generally
           | pretty bad at abstract reasoning, critical thinking,
           | 'understanding' concepts in-depth, and so on, and I think
           | that's a more constructive way to phrase the problem we see
           | in this video.
           | 
            | When a problem is fully reducible to a simple vision problem,
           | modern neural networks are a great choice, but being a good
           | driver involves much more than just the visual cortex.
        
             | simondotau wrote:
             | The problem, to the extent there is one, is certainly with
             | the visualisation; it's not so clear if there's any problem
             | with the underlying vision system.
        
       | blhack wrote:
       | Is the car trying to stop? The visualization here is for
       | autopilot (which ignores traffic lights), so even if the truck
       | driver was trying to do something malicious, the car would ignore
       | it.
        
         | codeulike wrote:
         | The car is not under autopilot, the driver is driving. The car
         | is just displaying what it thinks it can see.
         | 
         | (the grey steering wheel icon means autopilot is not engaged,
         | it would be blue if it was on)
        
         | schmorptron wrote:
         | I don't think autopilot is actually enabled in this clip, the
         | steering wheel icon is greyed out.
        
         | detaro wrote:
         | At least in the clip it's accelerating, so it doesn't seem to
         | take them into account - which would make sense for non-lit
         | traffic lights anyways?
        
           | nucleardog wrote:
           | I'd wager the average case of a non-lit traffic light is more
            | likely to be a traffic light that's... out (at least around
            | here, newly installed ones are covered until they're
            | activated), so no, I wouldn't say ignoring them would make
           | sense.
           | 
           | It would generally be a clue that there's an intersection
           | busy enough to require signals that now lacks signals or
           | signage which would warrant extra caution. I'd expect the car
           | to at least slow down significantly, if not come to a
           | complete stop before proceeding.
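            | 
            | As a sketch, the policy being argued for might look like
            | this (purely illustrative; no claim about Tesla's actual
            | logic):
            | 
            |     def target_speed(light_state, current_mps):
            |         if light_state == "green":
            |             return current_mps
            |         if light_state in ("red", "yellow"):
            |             return 0.0  # stop / prepare to stop
            |         if light_state == "unlit":
            |             # dark signal: treat the intersection as an
            |             # all-way stop and crawl through with caution
            |             return min(current_mps, 2.0)
            |         return current_mps  # no signal detected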
        
       | a3n wrote:
       | That is effing hilarious.
       | 
       | Except we're trusting ML to perform surgery, choose conviction
       | sentencing, evaluate job CVs, determine acceptable marriage
       | partners (why not?), determine who can have kids (why not?),
       | determine who gets into college (why not?), determine who gets a
       | loan (why not?), determine who gets to work on ML (why not?). And
       | drive cars.
        
         | david_allison wrote:
         | Not if the GDPR has anything to say about it
         | 
         | > The data subject shall have the right not to be subject to a
         | decision based solely on automated processing, including
         | profiling, which produces legal effects concerning him or her
         | or similarly significantly affects him or her.
         | 
         | https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CEL...
        
       | codeulike wrote:
        | In this video, I don't think the car is actually in autopilot.
        | It's just displaying what it thinks it can see, while the driver
       | drives.
       | 
       | (the steering wheel icon at the top of the screen is grey, not
       | blue)
        
       | dylan604 wrote:
       | Hopefully, an image of this truck never shows up in a captcha
       | challenge.
        
       | mikewarot wrote:
       | Tesla clearly needs to hire a "Red Team" to find weaknesses of
       | their autopilot (and other systems).
       | 
       | The odd thing is that even a randomly stupid AI for self driving
       | is statistically safer than most drivers. Clearly there is a ton
        | of room for improvement in both AI and humans.
        
         | bigdict wrote:
         | > Red Team
         | 
         | I'm sorry, did you mean customers?
        
           | falcolas wrote:
            | He didn't. Tesla did.
        
         | ModernMech wrote:
         | Why hire a team to beta test your AI self driving car when you
         | have customers willing to shell out $10k for the privilege?
        
         | darepublic wrote:
          | > even a randomly stupid AI ... is statistically safer than
          | most drivers
          | 
          | The fact that people still trot this out in every Tesla thread
          | is super annoying. Sorry to break it to you, but this is a 100%
          | false claim.
        
         | Closi wrote:
         | > The odd thing is that even a randomly stupid AI for self
         | driving is statistically safer than most drivers.
         | 
          | This hasn't been shown at all yet. Statistics showing Autopilot
          | has fewer crashes per mile always ignore that Autopilot is
          | doing the type of driving that has the fewest accidents per
          | mile (motorway driving).
        
         | mikewarot wrote:
         | I was wrong, it's almost as safe, not safer.
         | 
         | https://www.forbes.com/sites/bradtempleton/2020/10/28/new-te...
        
         | Syonyk wrote:
         | > _The odd thing is that even a randomly stupid AI for self
         | driving is statistically safer than most drivers._
         | 
         | A strong claim, lacking actual evidence for it. All we have to
         | go on are some Elon tweets (rather the definition of a biased
         | source) and the actual crash rate. Without a _lot_ more data
         | (which Tesla steadfastly refuses to release) about
          | environments, corrections, etc, it's quite impossible to make
         | that sort of statement with any confidence.
         | 
         | The Tesla hardware is a weird combination of capable and
         | insanely dumb, and it's far from obvious which it will be in
         | any given situation until it's gone through it.
         | 
         | If an honest statistical analysis of the data indicated that
         | Tesla's automation was better than human drivers (or better
         | than other driver assist systems), I would fully expect them to
         | have released the values. Since they haven't, and only hint at
          | it and make statements that _sound_ statistical but really
          | aren't, I assume they've done the numbers internally and know
          | it's not nearly as good as they like to imply.
         | 
         | If I drove in a city like their "self driving" beta was a few
         | months back, I would be hauled from the car on suspicions of
         | driving while hammered.
        
         | alkonaut wrote:
          | > The odd thing is that even a randomly stupid AI for self
          | driving is statistically safer than most drivers.
          | 
          | Under sunny highway conditions? Perhaps. In a night snowstorm?
          | Probably yes - because the AI would be at the side of the road
          | waiting for the human to drive.
         | 
         | I think self driving is a typical 80/20 problem. We won't have
         | "full" self driving because the costs are exponential for each
         | step closer to it. But driving on 80% of roads on 80% of days,
         | with supervision? That could happen.
         | 
         | But that said: we won't accept AI that just makes traffic safer
         | "on average". I'm fine with human shortcomings causing
         | accidents. People will not accept car manufacturers cutting
         | corners and causing accidents, even if statistically it's
         | safer. So the very high bar for self driving isn't just "as
         | safe as humans".
        
       | nullc wrote:
       | I saw a good recent youtube video showing numerous concerning FSD
       | failures in urban driving:
       | https://www.youtube.com/watch?v=antLneVlxcs
        
         | dawkins wrote:
         | I saw the video and it is crazy that they think it is even
         | close to production.
        
       ___________________________________________________________________
       (page generated 2021-06-03 23:01 UTC)