[HN Gopher] YOLOv4: Optimal Speed and Accuracy of Object Detection
       ___________________________________________________________________
        
       YOLOv4: Optimal Speed and Accuracy of Object Detection
        
       Author : groar
       Score  : 186 points
       Date   : 2020-04-27 13:05 UTC (9 hours ago)
        
 (HTM) web link (arxiv.org)
 (TXT) w3m dump (arxiv.org)
        
       | rgovostes wrote:
        | The original author of YOLO stopped working on it[1]. Alexey
        | Bochkovskiy, aka AlexeyAB, created a fork on GitHub, wrote an
        | extensive guide to customizing YOLO's network architecture,
        | added new features, and has answered zillions of questions.
       | 
       | 1: https://twitter.com/pjreddie/status/1230524770350817280
        
         | DagAgren wrote:
         | So the author goes, "wait, this actually causes more evil than
         | good, I will not work on it any longer", and the other guy goes
         | "don't worry, I will keep doing the evil for you!"
         | 
         | Sigh.
        
           | oh_sigh wrote:
           | Do you commonly defer your moral judgements to a third party?
           | 
           | Or do you sigh only because in this instance you agree with
           | the original author, but you don't actually think it should
           | be a rule that others are constrained by the moral thoughts
           | of their predecessors?
           | 
           | I know of a promising treatment for types of hypothyroidism,
           | but the original discoverer doesn't want to continue work on
           | it because she doesn't agree with animal testing.
           | 
           | If my moral calculus says that the quality of life of
           | millions of humans is more important than the quality of
           | lives of thousands of rats and dogs, am I not allowed to pick
           | up where she left off?
        
           | deadmutex wrote:
            | It's a tricky thing... where do you draw the line? If someone
           | works on the linux kernel, and someone uses the OS to do bad
           | things.. should one stop?
           | 
           | Also consider that object detection has a lot of impact in
           | positive ways as well. It doesn't seem so black and white.
        
             | qchris wrote:
             | I'm going to jump in here as someone who is using object
             | detection, and currently working on getting a Darknet
             | detector in particular up and running, in what I'd like to
             | think is a positive way.
             | 
             | I'm a researcher working on a system for monitoring
             | offshore kelp farms for renewable aquaculture with an
             | autonomous underwater vehicle. I can't use GPS underwater,
             | Dopper velocity logs and sensitive inertial navigation
             | systems are either too expensive and/or export-controlled,
             | and doing manual filtering of visual data is tricky and
              | inconsistent. Adding good object detection for kelp helps
              | to make that kind of system much more reliable, which
             | can help provide useful metrics to groups working on
             | creating new biofuel sources and marine ecosystem
             | monitoring. I think that's valuable work, and it's enabled
             | by YOLO.
             | 
             | Joe Redmon has paid attention to where and how his work is
             | being used, and I respect him for disengaging with
             | something that he finds not to line up with his values. But
             | it's worth pointing out that there are people (not just me)
             | who are using that work in ways that might be worthwhile.
        
               | pstuart wrote:
               | I'm of the opinion that kelp farms could be a huge
               | benefit to humanity if we could scale them up enough.
               | 
               | Your work seems worthy of an AMA if you ever have the
               | time.
        
             | MiroF wrote:
             | I agree - but I imagine even personally it might be a hard
             | thing to hear that something you created is being used to
             | kill people, even with the knowledge that it is doing a lot
             | of good.
             | 
             | I'm curious if this is something that some sort of modified
             | license could help resolve. Do I have the option to license
             | my software so that it can't be used in war?
        
               | bangonkeyboard wrote:
                | _> I'm curious if this is something that some sort of
               | modified license could help resolve. Do I have the option
               | to license my software so that it can't be used in war?_
               | 
                | "The Software shall be used for Good, not Evil" -
                | https://en.wikipedia.org/wiki/Douglas_Crockford#%22Good,_not...
        
               | rcw4256 wrote:
               | Yes, but you need to be prepared to take it to court when
               | someone uses your software in war.
        
               | hprotagonist wrote:
               | and when the answer is "hey, it's GPL'd, here's our
               | source download link, fuck off we're exactly following
               | the rules"?
               | 
               | Licenses are not a viable solution to this sort of
                | ethical quandary.
        
               | rcw4256 wrote:
               | That answer should be unsatisfactory to the court, since
               | it wasn't licensed under the GPL. The rules, in this
               | case, say no use in war.
        
               | wolco wrote:
               | Wouldn't object detection save people over the current
               | method of dropping a huge bomb at a wedding?
        
           | quotemstr wrote:
           | Technology by itself is never "evil". There is no such thing
           | as improving welfare by prolonging our ignorance of the
           | natural world.
        
         | hprotagonist wrote:
         | _Chris: So, I talked to him.
         | 
         | Mitch: You did?
         | 
         | Chris: Yeah, and he used to be the number one stud around here
         | in the 70's. (whispers) Smarter than you and me put together.
         | 
         | Mitch: So what happened? Did he crack?
         | 
         | Chris: Yes, Mitch. He cracked, severely.
         | 
         | Mitch: Why?
         | 
         | Chris: He loved his work.
         | 
         | Mitch: Well what's wrong with that.
         | 
         | Chris: There's nothing wrong with that, but that's all he did.
         | He loved solving problems, he loved coming up with the answers.
         | But, he thought that the answers were the answer for
         | everything. Wrong. All Science no Philosophy. So then one day
         | someone tells him that the stuff he's making was killing
         | people.
         | 
         | Mitch: So what's your point? Are you saying I'm going to end up
         | in a steam tunnel?
         | 
         | Chris: Yeah.
         | 
         | Mitch: What?_
        
           | tehsauce wrote:
           | What is this from?
        
             | antiuniverse wrote:
             | Real Genius: https://imdb.com/title/tt0089886/
        
             | [deleted]
        
         | 01100011 wrote:
         | The only thing stopping the robot apocalypse is the power
         | problem. We have every technology necessary to enable mass
          | casualties from commodity robots _except_ long battery life. We
          | don't need AI. We need decent enough pattern recognition that
         | can pinpoint a weakness in the human body.
         | 
         | I often use my 'Stabby the Robot' diatribe to convey this to
         | people. Imagine a robot that moves too fast to evade, that
          | doesn't rely on expendable, limited ammunition (i.e. a blade or
         | sharp surface), and can locate the jugular vein of a human. You
         | could make one now out of a commodity drone, but it wouldn't
         | last very long because of the power problem. AI would obviously
         | make it more dangerous, but it isn't necessary.
         | 
         | We've already placed very dangerous tools into the hands of
         | humans. Maybe it helps AlexeyAB sleep at night to stop working
         | on YOLO, but his idle hands are not holding back the future.
        
           | airstrike wrote:
           | Reminded me of https://www.youtube.com/watch?v=HipTO_7mUOw
        
           | jefft255 wrote:
           | My roboticist opinion is that you need extremely developed AI
           | to do what you're saying. It requires navigational
           | capabilities that don't exist yet except in very open spaces
           | like fields. Also, lightweight drones which could do these
           | fast maneuvers aren't very robust; you can literally slap
            | them down with some pain. I get that even with a low success
            | rate it could be a disaster if you have a huge number of them
            | deployed at once.
        
           | IdiocyInAction wrote:
           | While your robots do sound like they could kill a lot of
           | people, I feel like this is a bit overblown. For a start, for
           | "apocalypse", they would have to be produced in massive
           | numbers, i.e. by a nation state-level actor. State-level
           | actors can already make nukes or bioweapons.
        
             | 01100011 wrote:
             | What's so hard about manufacturing a few million drones?
              | With a relatively modest amount of money (relative to, say,
             | developing a nuclear program or chemical weapons program),
             | you could hire a contract manufacturer to spin out a few
             | million drones. They probably wouldn't even ask what the
             | blade/needle was for, although it would be easy to craft a
              | cover story (i.e. "They're for mass inoculation/slaughter of
             | livestock"). Update the SW and congrats, you now have a
             | weapon of mass destruction. You can target people by face,
             | race, location, etc.
             | 
             | There's a reason the US military has been playing with
             | drone clouds for years.
        
               | philipkglass wrote:
               | It's not hard for a nation to manufacture or order
               | millions of drones. It's also not hard to obtain enough
               | rounds of ammunition to shoot millions of people. It was
               | decades if not a century ago when the primary obstacle to
               | a nation killing millions of people ceased to be
               | "inherent technical difficulty."
        
               | 01100011 wrote:
               | It's not hard to shoot millions of people, but it is hard
               | to raise the army required to do so. It is hard to do so
               | without raising suspicions. It is hard to do so without
               | damaging infrastructure.
               | 
                | A rogue state, or even a non-state actor, can leverage
                | cheap drones (assuming, of course, a solution to the power
               | problem) to obtain a weapon of mass destruction. Drones
               | are fundamentally different from bullets.
        
         | chrischen wrote:
          | But don't we need people doing the research to solve the
          | problems that need to be solved, but in the _right way_? Take
          | covid contact tracing as an example. If Apple and Google hadn't
          | stepped in and preempted with their own tech, we might have had
          | a problem with privacy-eroding tech being forced through.
        
         | longtom wrote:
         | The linked tweet:
         | 
         |  _> I stopped doing CV research because I saw the impact my
         | work was having. I loved the work but the military applications
         | and privacy concerns eventually became impossible to ignore._
        
           | [deleted]
        
           | Iv wrote:
           | The YOLOv3 paper is a blast to read
           | 
           | https://pjreddie.com/media/files/papers/YOLOv3.pdf
           | 
           | And its conclusion is gold:
           | 
            | But maybe a better question is: "What are we going to do
            | with these detectors now that we have them?" A lot of the
            | people doing this research are at Google and Facebook. I
            | guess at least we know the technology is in good hands and
            | definitely won't be used to harvest your personal
            | information and sell it to.... wait, you're saying that's
            | exactly what it will be used for?? Oh. Well the other
            | people heavily funding vision research are the military
            | and they've never done anything horrible like killing lots
            | of people with new technology oh wait..... [1] I have a
            | lot of hope that most of the people using computer vision
            | are just doing happy, good stuff with it, like counting
            | the number of zebras in a national park, or tracking their
            | cat as it wanders around their house. But computer vision
            | is already being put to questionable use and as
            | researchers we have a responsibility to at least consider
            | the harm our work might be doing and think of ways to
            | mitigate it. We owe the world that much. In closing, do
            | not @ me. (Because I finally quit Twitter).
           | 
           | [1] The author is funded by the Office of Naval Research and
           | Google.
        
             | DagAgren wrote:
             | I've seen people try to use YOLO for homebrewing a self-
             | driving car.
             | 
             | There's not even any need to go as far as assume evil
             | intent on the users of the software, just plain
             | recklessness can easily cause people to die.
        
             | [deleted]
        
             | KKKKkkkk1 wrote:
             | Very few grad students have a chance of showing their first
             | result on TED and getting 19k citations. Consequently, they
             | usually can't get away with criticizing their sponsors and
             | peers' ethics post-factum, regardless of what they think of
             | them.
        
               | DSingularity wrote:
               | Yup. He is an inspiration.
        
             | shrimp_emoji wrote:
             | Also, best cv: https://pjreddie.com/resume/
        
               | whymauri wrote:
               | Some context:
               | 
                | https://www.reddit.com/r/justneckbeardthings/comments/47nb4e...
        
           | [deleted]
        
       | mchusma wrote:
       | I have had a lot of fun working with YOLO v3 for robotics
       | applications, very excited to try these updates. Thanks to the
       | authors for the updates and good documentation. Good object
       | recognition is the backbone of a huge range of future
       | applications, and YOLO has been a good option for a while.
        
       | punnerud wrote:
       | And code on Github: https://github.com/AlexeyAB/darknet
       | 
       | PyTorch version: https://github.com/Tianxiaomo/pytorch-YOLOv4
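        | 
        | For context on what these detectors compute: below is a
        | minimal, self-contained sketch of the IoU + greedy non-max
        | suppression post-processing that YOLO-style detectors apply
        | to raw box predictions. Helper names are illustrative, not
        | taken from either repository.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle (empty if boxes are disjoint).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def nms(boxes, scores, iou_thresh=0.45):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # Discard remaining candidates that overlap the kept box too much.
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

        | Real pipelines vectorize this (e.g. with NumPy or
        | torchvision.ops.nms) and run it per class, but the logic is
        | the same.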
        
       | gjstein wrote:
       | I object to the use of the word "optimal" for a task like object
       | detection; it feels counterproductive to claim that this is the
       | "optimal" way of solving such a broad and complex problem. Great
       | results, but their language needs some tempering.
        
         | bawana wrote:
          | EVERY 'good' thing starts out with 'good' intentions. And
          | while it is small-scale, it remains good. But as it scales
          | up, it becomes evil. Even Google, remember their motto? Now
          | it harvests data like human bodies for its matrix. Even
          | Facebook was a blast when it started, but now it's a merch
          | store. Even the internet was beautiful when it started; now
          | it's a sewer. It seems the only thing that can scale well
          | without getting perverted is deep learning. But as long as
          | humans are in that loop, it will fail. Cloud computing is a
          | mistake. Bring back the PC, you know, the 'personal
          | computer'.
        
           | wolco wrote:
           | These companies were always doing those things. Google always
           | collected and connected information on you. Facebook was
           | always using users content/data to experiment with. Cloud
           | computing was always a mistake for most (pay more, get less
           | control, get locked in). Deep learning opens up so many
           | taboos our society is not ready to deal with them. As it
           | scales it will open up more cans of worms.
        
       ___________________________________________________________________
       (page generated 2020-04-27 23:00 UTC)