[HN Gopher] The End of Starsky Robotics ___________________________________________________________________ The End of Starsky Robotics Author : stefan8r Score : 131 points Date : 2020-03-19 16:27 UTC (6 hours ago) (HTM) web link (medium.com) (TXT) w3m dump (medium.com) | xiaolingxiao wrote: | This is a very good and honest retrospective. He shows clear | thinking, and surprising technical understanding for someone who | is not technical. And most importantly he shows humility. A+ | pj_mukh wrote: | Great read! And thanks to the author for all the candor. | | The business case always seemed clear to me and now reading this, | I wonder if there is a case to be made for an engineer and a | trucking operations veteran to build a business that requires | minimal capital (or maybe even is bootstrapped?!), to take it | across the finish line? | DanFeldman wrote: | Ah well, it was great while it lasted. | | Most of the engineering team has spread throughout the AV | industry, with most folks going to our neighbors and fellow YC | company Cruise Automation. Some are at Waymo, Zoox, AutoX, and | some purposefully exited the AV space entirely. | | I joined Applied Intuition to help build out Simulation and Infra | for other companies producing AVs/robotics. | | There are a few folks I know of who are still looking for their | next roles in the BizOps/PplOps side, which has been especially | hard during COVID-19 season if anyone wants to do some linkedin | stalking. | xiaolingxiao wrote: | Hey Dan, could I ask you a few questions about what you learned | at Starsky? I have a robotics background and what they tried to | do is very very interesting. | krak12 wrote: | Please Dan, ignore this idiot trying to steal anything from | your hard work to raise funds in China | mrfusion wrote: | Are any of those folks thinking to launch a new robotics | oriented start up? I've been wanting to get involved with that. 
| (Email in profile) | petermcneeley wrote: | All this L1/L2/L3 when most of the public failures in AV are at | the level of vision and interpretation of the world. | krisoft wrote: | So? That graph is about the capability of the system and how it | improves with effort. This has nothing to do with which part of | the system fails the most. Where do you see the contradiction? | alricb wrote: | It took me a while to figure out that "AV" means _autonomous | vehicle_... I was kind of wondering what AV as in AV club had to | do with trucking. | cjv wrote: | This is sad news. | | I worked at Starsky Robotics as a perception team intern after | graduating high school. I will always be grateful to the team | for the opportunity; it was a fantastic first job and everyone | who worked there was very kind (especially Stefan). | | Unfortunately, Starsky effectively had no machine learning in | 2017 (when I worked there), using solely classical computer | vision techniques. This didn't match the company's ambitions of | not using LIDAR, and there was a strong stigma against switching | to a deep learning approach. At the time, very few object | detection models had public implementations, and I spent a lot of | time trying to get YOLO9000 and RetinaNet implementations | running at real-time speeds. Frustratingly, as a small startup, the | labeling services kept screwing us over by returning poorly | annotated images. | | I think what I took away from the experience is that deep | learning in domains with long tails requires an enormous | investment in a labeling pipeline - dwarfing the computational | aspect - to get decent results. I don't think any solutions are | on the horizon that will allow us to bypass this reality. You | don't see improvements between Comma.ai and Tesla because it's | about the improvements far out on the tail. | ssivark wrote: | > _There are too many problems with the AV industry to detail | here [...] 
The biggest, however, is that supervised machine | learning doesn't live up to the hype [...] It's widely understood | that the hardest part of building AI is how it deals with | situations that happen uncommonly, i.e. edge cases. In fact, the | better your model, the harder it is to find robust data sets of | novel edge cases. Additionally, the better your model, the more | accurate the data you need to improve it. Rather than seeing | exponential improvements in the quality of AI performance (a la | Moore's Law), we're instead seeing exponential increases in the | cost to improve AI systems_ | | This is _exactly_ the problem with data-hungry machine learning | approaches, specifically deep learning (and that's without even | mentioning the compute resources necessary to learn). The only | way to circumvent that is plausibly to apply better inductive | biases, and fundamentally rethink what the field considers | important. | Barrin92 wrote: | I think the obvious problem is that induction (which is what | learning from data is) is only one tool in the huge | space that is intelligence, and it will never be enough to | emulate the skill of a human driver, which is more or less what | is necessary for autonomy in an open environment. | csours wrote: | Disclaimer up front: I work for General Motors. I don't work on | AV. Any opinions are my own. I have no special knowledge of GM's | AV strategy. | | > It didn't matter that that jump from "sometimes working" to | statistically reliable was 10-1000x more work. | | There are two states of functionality: | | 1) It doesn't work | | 2) It sometimes works | | The inverse, for disk drives: Failing and Failed. | | Think about apps/services. You could say that your app is | working, but over a long enough time period, it is only sometimes | working. It's working while you have disk space, free memory, and | a working network connection. It's working while your business | assumptions hold true. 
It's working while your datacenter has | power. We've developed strategies for managing all of these | things; for load balancing and Active/Active hosting. But even | with that, it only sometimes works. | | --- | | With all that, I think it may be useful to think of self-driving | in terms of tasks. | | If you can put a box around what you expect a computer to be able | to do, you can define tasks that will always return a reasonable | output. | | The more tasks a computer handles, the easier it is for the human | in control (think driver assist, like lane keeping and automatic | cruise control). | | If you add enough tasks, and perform them well enough, maybe you | can take the human in control out of the vehicle. | | I think I'm in agreement with the authors in that I don't see the | day when there isn't a human in control. Or the other way to say | that is that if there isn't a human in control, sometimes your AV | will just stop. | notlukesky wrote: | I always thought that autonomous driving was more than 10 years | off (if not 20) for cars in city traffic. But I thought trucking | in the US had a chance for interstate traffic till the proverbial | last mile, because there are fewer edge cases on which you need to | train the model. Were you off by just 3 years for interstate | "driving" till the last mile? | | Will investors forget their losses, and will a whole new set | invest in the space in, say, 3 years? | stefan8r wrote: | System worked. Problem was investors have mostly bet on full | autonomy, and when that failed to materialize they got scared | out of the space. | | Full autonomy isn't necessary. And I don't know if it's even | profitable for trucks. | ozborn wrote: | Does the current coronavirus pandemic assist you in promoting | a teleop model? I'm not sure how much human-to-human contact | there is in the business. | cosmodisk wrote: | There are some. Initial pickup, all the pit stops for | fuel, food, sleep. 
The end of the journey, where the recipient | may check the delivery, etc. | redis_mlc wrote: | > investors have mostly bet on full autonomy | | lol. | | I used to live on one of the side streets in MV that Google's | AV cars trained on. Saw three in a row once. They were | usually the only traffic, so that annual report of millions | of miles travelled was meaningless - they might as well have | driven around the Safeway parking lot at 5 am. | | I guess passenger-carrying quadcopters is next. Oh wait ... | chubot wrote: | Thank you for the great writeup. It sounds like you took a | responsible, valuable, and even economically viable approach, but | the market doesn't want to hear it. | | I wonder when the market will start listening to people who | actually WORK on the problem. | | > _which means that no one should be betting a business on safe | AI decision makers. The current companies who are will continue | to drain momentum over the next two years_ | | Sounds about right. | | ----- | | Here's me over 2 years ago simply quoting people who worked close | to the problem: | | https://news.ycombinator.com/item?id=16353541 | | _Here's my negative scenario: self-driving is "AI-complete"; | you can't really hit all the edge cases without solving AI in | general, which is more than 30 years away (Kurzweil is the wild | optimist and predicts 2045)._ | | _You CAN use self-driving in limited circumstances, but those | limitations are precisely the ones that make driving yourself | around more attractive. The expense doesn't go down as quickly | as anticipated because of this. They are a niche technology for | DECADES._ | | This looks about right, and I'm not claiming to be prescient, | just basically saying what Chris Urmson and Bill Gurley already | said years ago. It's weird to me that there's still so much money | chasing this pipe dream. | | It was a sign when Google spun out Waymo. 
If they really believed | in the product, it would be called something like "Google | Self-Driving Cars", not "Waymo". | cyanoacry wrote: | Thank you for the write-up! It's a little depressing (but not | entirely surprising) to hear that you couldn't get folks excited | about safety -- I work in the aerospace field, and our day-to-day | is all about risk management: how, why, when. It's frustrating | that high-reliability systems aren't seen as exciting when really | they're what makes the magic run. | | Best of luck, and I'm looking forward to your next venture! | rsp1984 wrote: | _There are too many problems with the AV industry to detail here: | the professorial pace at which most teams work, the lack of | tangible deployment milestones, the open secret that there isn't | a robotaxi business model, etc._ | | Just curious, why would there not be a business model for | robotaxis? | Animats wrote: | Uber: the drivers own the cars, the drivers maintain the cars, | the drivers clean the cars, the drivers store the cars when not | in use. | | Driverless: the company needs an operation, real estate, and | staff comparable to a big auto rental company to do all that. | Plus the engineering and technical staff required for autonomy. | | Even with startups doing autonomous shuttle buses, which works | at low speed, nobody is making money in that space. It's all | demos. | edshiro wrote: | Add to that the fact that you are _never_ going to get 100% | fleet utilisation, will need to pay for qualified | tele-operators, maintain/repair vehicles along with sensors... | | I recently listened to a podcast episode of the | Autonocast[1], where they interviewed a Harvard researcher | who claimed the economics of robotaxis just don't work. Very | interesting listen. | | [1] http://www.autonocast.com/blog/2020/3/11/177-ashley-nunes-on... | cosmodisk wrote: | I think it will probably work at some point in the future | (20-40 years). 
Today the reality is that the jobs we reward | the least are the ones that turn out to be the most difficult | to automate. Someone with 10 minutes of training can do a better job | with a thread and a needle than the most advanced robot | painfully trying to stitch two pieces of garment together. If | these problems can be overcome, then it may work. | jedberg wrote: | > Instead, the consensus has become that we're at least 10 years | away from self-driving cars. | | I'm going to assume the founder of a self-driving truck company | knows what he's talking about. | | But at the same time, I have a hard time reconciling that with | the fact that I sat in a car that drove itself all around San | Francisco, dealing with a ton of edge cases. | | Maybe we won't get to a 100% drive-anywhere-you-want car in 10 | years, but to be fair, a lot of humans aren't capable of driving | a car anywhere either. | | There are a lot of LA drivers who can't drive in snow, for | example. I was one of them, until I got practice, and even then, | I'm not that safe at it. | | I think as long as we set the bar at "drive anywhere a car can go | with 100% safety" we will never reach that bar. | | But if the bar is at "drive as well as a human in most of the | places humans drive", well, I've already seen a car that can do | that. | edshiro wrote: | Thank you for sharing this very candid article on Starsky | Robotics and generally the autonomous vehicle space. It's a real | eye-opener. I've been following your progress for the last few | years (and I also read about your company through Reilly | Brennan's "Trucks - FOT" newsletter). | | I am sorry you could not get investors to believe more in what | you and your team were building, especially as you required a lot | less funding than many other companies in this arena. I also thought you had a | clear business case (I worked in ride-hailing and also logistics | so understand some of the problems in this space). 
| | I wanted to ask you a question: I am building a startup in the | dash cam video analysis space. We are building a large and | geographically diverse dataset of road videos, where our users | can annotate/label the data. We then are going to look at | detecting specific events like accidents and edge cases on | videos. Do you feel this type of business, the data we collect, | and the insights we generate would have value for an AV startup? | | All the best in your next move. Stay strong - you can be proud of | what you and your team did. | Adams472 wrote: | Thank you for sharing this. The insights and details you share | here will help many future founders. | | I'm sorry things didn't end up exactly the way your team may | have hoped. I hope you can take pride in everything you | accomplished. I wish you all the best! | d_burfoot wrote: | > The biggest, however, is that supervised machine learning | doesn't live up to the hype. | | This is the key point. The new DNN approaches can outperform the | classical techniques, but only when they can exploit vast amounts | of training data. The dramatic successes of Deep Learning all | depend on either unsupervised learning against enormous raw | datasets (BERT, GPT-2, word2vec, etc.) or games, where you can | generate unlimited quantities of labelled data by playing the | game against your own agent (AlphaGo, AlphaStar, OpenAI Five, | etc.). | cosmodisk wrote: | As someone already mentioned here, I think it's probably not a | good approach. The way we store info in databases is very | limited compared to what we can do in our heads. For instance, | we know the concept of a table. It can be made of almost any | material, can have whatever shape, size, and colour we want, and | yet we can instantly recognise it without having some concrete | data points on what it should look like. I can make a glass | cube, put it in the middle of the room, and people would know it's | a table. How the hell we operate this way, I have no idea. 
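d_burfoot's contrast above can be made concrete: in a game, self-play is a free labeling machine, while driving has no analogous generator for rare real-world scenes. Here is a minimal sketch (an editorial illustration, not code from any commenter) that mass-produces labeled (state, outcome) pairs for tic-tac-toe via random self-play:

```python
# Sketch: self-play as a zero-cost labeling pipeline (d_burfoot's point).
# Every playout yields (board state, final outcome) training pairs for free;
# rare driving edge cases have no comparable generator.
import random

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def random_playout():
    """Play one random tic-tac-toe game; return (states, outcome)."""
    board = [" "] * 9
    states, player = [], "X"
    while True:
        moves = [i for i, c in enumerate(board) if c == " "]
        if not moves:
            return states, "draw"
        board[random.choice(moves)] = player
        states.append("".join(board))
        for a, b, c in WIN_LINES:
            if board[a] == board[b] == board[c] != " ":
                return states, board[a]
        player = "O" if player == "X" else "X"

# Label every intermediate state with the game's final outcome.
dataset = []
for _ in range(1000):
    states, outcome = random_playout()
    dataset.extend((s, outcome) for s in states)
print(len(dataset), "labeled examples from 1000 games")
```

The same loop scales to millions of labeled examples overnight, which is exactly the regime where AlphaGo-style training works and where supervised driving perception does not.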
| m0zg wrote: | I wonder what Elon is going to do when _his_ "full self driving" | fails to materialize. Which it will. Not only can you not do this | with just cameras and radar, I doubt you can do it _period_ | without modifying the roads specifically for such cars, and | segregating them from human drivers. And even then it will be | difficult psychologically and legally to convince the public that | this is "better" than a (possibly inebriated) human, for reasons | that have been discussed to death already. | MegaButts wrote: | Tesla makes their money selling cars, and Autopilot sales are | non-refundable. They get paid even if they don't deliver, so | I'm not sure how much they care other than keeping the hype | alive. | m0zg wrote: | That's all well and good until they get sued by people who | paid $7.5K each for FSD and never got it. | marcinzm wrote: | Then worst case they'll get a class action lawsuit, appeal | the verdict three times and then end up giving everyone | $100 worth of credits towards a future Tesla. Lawyers will | net a couple hundred million. | csours wrote: | I hesitate to comment, as I work for a competitor, but I | imagine that the reputational harm may be even worse than | the monetary. | marcinzm wrote: | I would have thought so before, but we've had multiple | people killed in Autopilot-related accidents already and | people still buy. Tesla will spin the news and results | and probably get a settlement statement that makes them | look not so bad. Especially if everyone is in the same | boat and failing to get the technology working well | enough. Then Tesla will simply boast about how much better | they are than competitors. | m0zg wrote: | 30K people get killed in car accidents in the US, and | "people still buy". Heart disease kills half a million a | year and people still eat Twinkies. What else is new? | That's called "freedom". 
| | The issue is that if I paid a ton of money for something, | I do generally expect to get what I paid for. And in this | case that's not gonna happen. | | [*] Hypothetically; I'm not presently a Tesla customer. | mft_ wrote: | _> It's widely understood that the hardest part of building AI | is how it deals with situations that happen uncommonly, i.e. | edge cases. In fact, the better your model, the harder it is to | find robust data sets of novel edge cases._ | | I have no idea whether Tesla will or won't succeed. But they do | have one major advantage over just about every other AV company | out there, which addresses the point above. That is, their huge | network of camera-equipped cars (a million and counting) | provides probably the deepest, richest AV learning dataset on | the planet, and probably by orders of magnitude. If accessing | the dataset and thus novel edge cases is one of the major | challenges in AV development, Tesla is very well placed. | Robotbeat wrote: | > _I wonder what Elon is going to do when _his_ "full self | driving" fails to materialize. ... | | > ...modifying the roads specifically for such cars, and | segregating them from human drivers._ | | You were #this close# to answering your own question! Answer: | https://boringcompany.com | dbt00 wrote: | If you're going to spend trillions of dollars to dig | point-to-point tunnels for autonomous vehicles, why not spend only | hundreds of billions to build subways instead, which can carry | 100x the number of passengers? | aphextron wrote: | >Not only you can't do this with just cameras and radar | | Humans do it really well with just two cameras. It's not a | hardware problem; it is entirely software. Whether self-driving | is possible or not with current AI techniques is debatable, but | we're not waiting on any advances in hardware to do it. | Animats wrote: | As the article points out, machine learning has its limits. | It can do some things reasonably well, but it does max out. 
It's easy to get to 90% accuracy, hard to get to 98%, and | 99.9% is out of reach. | qchris wrote: | > Humans do it really well with just two cameras. | | If you don't mind, I think I'm going to steal that quote. It | makes a really good point very succinctly. | legolas2412 wrote: | Humans also have our brains backing up the eyes. | | Has Elon Musk invented artificial general intelligence? If | not, the point isn't a good one at all. | MegaButts wrote: | It's true, but it ignores the fact that human eyes have | orders of magnitude more dynamic range than even _very_ | expensive specialized cameras, and it obviously ignores the | fact that we haven't invented general AI yet. | | It's a pithy response that undermines the challenge of a | problem nobody has been able to solve even with years of | effort and billions of dollars. | qchris wrote: | I agree that human eyes do have benefits over your | typical camera, but I don't think the phrase ignores that | GAI hasn't been created yet; rather, it concisely points | that out and makes it clear how difficult it is to do. | | As the OP said, "it's not a hardware problem," in that | the quality or number of cameras, sensors, etc. isn't the | bottleneck to solving this problem. | aphextron wrote: | >It's true, but it ignores the fact that human eyes have | orders of magnitude more dynamic range than even very | expensive specialized cameras, and it obviously ignores | the fact that we haven't invented general AI yet. | | That level of resolution doesn't matter at all for | driving. People can drive just as well through a video | feed, like Starsky was doing. Yes, general AI does not | exist yet, but my point is simply that the parent made a | comment about the need for hardware which is simply not | true. | MegaButts wrote: | > That level of resolution doesn't matter at all for | driving. People can drive just as well through a video | feed, like Starsky was doing. | | I can tell you from experience this is false. 
It usually | works, but when it doesn't you're fucked. People wildly | underestimate, by orders of magnitude, how many and how | complicated the edge cases are for self-driving. | m0zg wrote: | > Humans do it really well with just two cameras _and general | intelligence_ | | FTFY. You can thank me later. | jschwartzi wrote: | And the problem with current approaches is entirely that | while we can "train" a NN to produce specific emergent | behavior associated with the training data to a great degree | of accuracy, we're totally unable to demonstrate that these | systems have perfectly (100% accurate) predictable behaviors. | | This is because we're totally unable to come up with a | coherent model for why the emergent behavior occurs given the | input data and NN training. We know how individual elements | of the NN work, and we can describe how the training system | works. But the whole notion of repeatedly letting | perturbations in a control value or control values dictate | the entirety of the performance of a system is nuts. | | The only way to determine how a NN will perform in a given | situation outside of the training set is to actually feed it | the stimulus and check the outputs. Given the amount of | stimulus that we as drivers routinely get, it's impossible to | say with any degree of certainty that a self-driving car that | is built on an NN classifying engine will accurately classify | everything in all situations and lighting conditions, because | it's impossible to feed it a training set large enough to | encompass those situations. | | That leaves the question of making a classifier that is | better than humans. And whether an NN is better than a human | depends very much on the situation. We could make some | statistical arguments, but when you're gambling with people's | lives here it becomes difficult to tolerate such arguments. 
It's easy to be blasé about it until it's your child chasing | a ball in front of an AV, at which point any discussion of | the statistics is academic. | mrfusion wrote: | Why not an acqui-hire? Surely lots of companies would want to | augment their AV teams? | mrfusion wrote: | Welcome to the trough of disillusionment. | | I think the plateau of productivity will be really awesome | computerized copilots and safety assists/warnings. The | human-computer driving team could be quite a combo if designed right. | mrfusion wrote: | Perhaps for true self-driving we'll need one more breakthrough | in AI. Perhaps a system that can identify its own edge cases | and request additional training or seek out the training | itself. | seibelj wrote: | We need AGI, or we need to ban human drivers, | pedestrians, bikers, children, deer, and anything else that can | disrupt a perfect computer model. ___________________________________________________________________ (page generated 2020-03-19 23:00 UTC)