[HN Gopher] Why can 2 times 3 sometimes equal 7 with Android's N...
       ___________________________________________________________________
        
       Why can 2 times 3 sometimes equal 7 with Android's Neural Network
       API?
        
       Author : aga_ml
       Score  : 32 points
       Date   : 2021-01-23 19:58 UTC (3 hours ago)
        
 (HTM) web link (alexanderganderson.github.io)
 (TXT) w3m dump (alexanderganderson.github.io)
        
       | kristjansson wrote:
       | It's an interesting observation, and a shocking title, but the
       | applicable lesson seems to be "don't use an aggressively
       | quantized network if your application is sensitive to
       | quantization errors"
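        | 
        | A toy illustration of that failure mode, simulating int8
        | affine quantization with a made-up calibration range (my
        | sketch, not the article's actual setup):
        | 
        |     import numpy as np
        |     
        |     def fake_quant(x, scale, zp=0, qmin=-128, qmax=127):
        |         # Snap x onto the int8 grid: q = round(x/scale) + zp,
        |         # then map back to floats.
        |         q = np.clip(np.round(x / scale) + zp, qmin, qmax)
        |         return (q - zp) * scale
        |     
        |     # Hypothetical calibration: outputs observed in [0, 40],
        |     # so the int8 output scale is 40/255 ~= 0.157.
        |     out_scale = 40 / 255
        |     print(fake_quant(2.0 * 3.0, out_scale, zp=-128))
        |     # -> ~5.96: exactly 6.0 isn't representable on this grid,
        |     # and error accumulated across layers can drift further.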
        
       | throwaway2245 wrote:
       | So, computers are getting closer to human-like mistakes.
        
       | Hallucinaut wrote:
       | Reminds me of this classic
       | 
       | https://joelgrus.com/2016/05/23/fizz-buzz-in-tensorflow/
        
       | fuzzfactor wrote:
       | Never forget there's a reason why they call it Artificial
       | intelligence.
       | 
       | Sometimes nothing but the real thing can put you on the correct
       | path.
        
         | Blikkentrekker wrote:
          | That has nothing to do with its "artificiality".
          | 
          | Some intelligences are simply less intelligent than others.
        
       | oso2k wrote:
       | Because of "The Secret Number" (https://youtu.be/qXnFr1d7B9w)?
        
       | bluejay2387 wrote:
        | Don't use a function approximator if you need the exact output
        | of a function?
        
       | Avalaxy wrote:
       | Using a neural network for things that have clear cut rules is
       | wrong. When you know the exact rules, implement them as such,
       | instead of bruteforcing a guesstimation. This is also why I'm
        | sceptical of the use of GPT-3 for all sorts of purposes where
       | accuracy is important. Think of the code generation case. Bugs
       | may be very subtle and may go unnoticed.
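        | 
        | The contrast is trivial to state in code (my toy example, not
        | from the article): the exact rule is one line, auditable, and
        | never off by one.
        | 
        |     def mul(a: int, b: int) -> int:
        |         # The known rule, implemented directly: no training,
        |         # no quantization, no approximation error.
        |         return a * b
        |     
        |     assert mul(2, 3) == 6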
        
         | Grimm1 wrote:
          | Code generation only needs to produce code with fewer bugs
          | than a human developer would introduce, and perhaps also less
          | severe ones, for it to be useful. I think it'll make a neat
          | autopilot for developers, but it won't replace the need to
          | have someone look over and understand the code.
        
           | yuliyp wrote:
            | This is a very simplistic view of what code is and the role
            | it plays in a system.
           | 
           | There are many implementations that can fulfill a set of
           | requirements. Not all of them are created equal. The ways in
           | which they behave as the system changes can be wildly
           | different. Well-written code will be able to handle those
           | changes gracefully. Poorly-written code may end up proving
           | brittle and bug-prone. Generated code will be completely
           | unpredictable.
           | 
           | Imagine you're trying to build a street network for a city.
           | Some designs are much more predictable than others. If you've
            | played Factorio, the distinction between a spaghetti base and
            | one that has some design is abundantly clear. Even if they
            | fulfill the same requirements now, the ability to improve
            | upon them and reason about how they will behave after
            | changes is vastly different.
        
           | danfang wrote:
           | This is naive. The point is that code is a well defined
           | system with clear rules that can be expressed through logic
           | and mathematics. GPT is suited to approximate systems where
           | the rules are not well defined. Until AI can actually learn
            | the principles of logic, it may not be useful for code
            | generation at a meaningful scale, beyond simple things like
            | auto-completion.
           | 
           | Not only that, AI would also have to learn the principles of
           | system design, performance, security, readability,
            | maintainability. That's what makes "good" software. It's a
            | stretch to say that AI could achieve anything of the sort
            | based on current abilities.
        
           | kulig wrote:
            | It's not that simple.
           | 
            | People are understanding when car crashes happen on busy
            | roads amongst other cars.
           | 
           | They are _not_ understanding if a self-driving car swerves
           | into the sidewalk and kills a group of children.
        
           | ben_w wrote:
           | I disagree that that is enough to be useful. To give a
           | deliberately extreme example: if it produces code which has
           | half the number of bugs as a human, but it only outputs
           | Malbolge source code, nobody else will be able to fix those
           | bugs which remain.
        
           | perl4ever wrote:
            | This is a perfect satire of the logic people use to advocate
            | rushing self-driving cars into production.
           | 
           | Only every time I read something similar, I think "surely no
           | programmer could think this". Are you a programmer?
        
             | Grimm1 wrote:
              | I sure am, and if I can code-gen 90% of the boilerplate
              | away I'll do it happily. Besides attacking me, do you have
             | any point you'd like to make?
        
               | bobthebuilders wrote:
                | Do you want to die when your self-driving car crashes?
                | Debug issues when your app dies at 12am? Same concept.
        
               | Grimm1 wrote:
               | I don't want to die when I crash my own car, and I
               | already debug my own apps at 12am. If your argument is
                | that things need to be perfect, then my god, you must never
               | leave your home! I'd trust a machine to drive more
               | accurately than most people I see on the highway.
               | 
               | Humans aren't special, in fact more often than not we're
               | sloppy, subject to fatigue, and a whole bunch of other
               | negative things.
               | 
                | That considered, I had a pretty strict qualifier in my
                | above post, which means the machine must perform better
               | than the average human in the respective task and
               | therefore I'd be more likely to die driving my own car
               | than a machine meeting my prerequisites.
        
               | Judgmentality wrote:
                | > I'd trust a machine to drive more accurately than most
                | people I see on the highway.
                | 
                | > Humans aren't special, in fact more often than not
                | we're sloppy, subject to fatigue, and a whole bunch of
                | other negative things.
               | 
               | Humans are much, much, much more capable than the
               | absolute state-of-the-art robots when it comes to doing
               | things in an uncontrolled environment.
               | 
               | https://www.youtube.com/watch?v=g0TaYhjpOfo
        
               | Hasnep wrote:
               | One of the advantages of an autonomous driver is that its
               | superhuman reflexes, never driving while tired, never
               | getting road rage, etc., will make it less likely to get
               | into an uncontrolled environment.
               | 
               | Would you prefer your pilots to fly your plane with no AI
               | assistance?
        
               | Judgmentality wrote:
               | > One of the advantages of an autonomous driver is that
               | its superhuman reflexes, never driving while tired, never
               | getting road rage, etc.
               | 
               | First of all, when you actually understand a self-driving
               | car stack, you'll realize those super-human reflexes are
               | more human than you think. The stack is complicated and
               | not only are there delays to be expected, some hardware
               | syncing requirements guarantee certain delays in the
               | perception pipeline. It's still better than a person, but
               | it's nothing close to approaching instantaneous.
               | Likewise, sensors can get dirty, and blah blah blah there
               | are other weaknesses robots have that humans don't. My
               | point is simple: robots aren't perfect. In fact, they are
               | almost always much worse than most people realize.
               | 
               | > will make it less likely to get into an uncontrolled
               | environment
               | 
               | You're misunderstanding me. I'm not saying less likely to
               | get into an accident. I'm saying the world, where cars
               | drive, is an uncontrolled environment - and the current
               | state of robotics is such that humans are better for
               | doing things in the real world. There is no "less likely
               | to get into an uncontrolled environment" because by
               | definition you are always putting it into that situation.
               | 
               | > Would you prefer your pilots to fly your plane with no
               | AI assistance?
               | 
               | AI assistance is fine. AI replacement is not.
        
         | dealforager wrote:
          | For code, I could see it being super useful for a beefed-up
          | auto-complete. There are many times I find myself searching
          | for things like "how do I do X in Y language" to copy a
          | snippet that I'm sure has been written 10000 times before. I
          | can review the code and verify its correctness by writing
          | tests.
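          | 
          | A minimal sketch of that workflow (the snippet and the test
          | are hypothetical, not output from any real generator):
          | 
          |     def slugify(s: str) -> str:
          |         # Generated code under review.
          |         return "-".join(s.lower().split())
          |     
          |     def test_slugify():
          |         # Pin the behaviour down before trusting it.
          |         assert slugify("Hello World") == "hello-world"
          |         assert slugify(" spaced   out ") == "spaced-out"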
        
           | [deleted]
        
       | [deleted]
        
       | jzer0cool wrote:
        | Why would you use a neural net to approximate 2 x 3 when there
        | is a clear definition of the result? Or, as a fun side effect,
        | neural nets are prone to off-by-one errors too :)
        
       | unnouinceput wrote:
        | Famous Pentium FDIV bug, 20 years later: the sequel?
        
         | segfaultbuserr wrote:
          | It's a neural network. It gives approximate results. Here's a
          | newbie thread that asks basically the same question, with
          | some interesting answers.
          | 
          | > codesternews: Any deep learning experts here? Why can't a
          | neural network compute a linear function (Celsius to
          | Fahrenheit) 100% accurately? Is it the data, or is it
          | something that can be optimised?
          | 
          |     print(model.predict([100.0]))
          |     # prints 211.874, which is not 100% accurate
          |     # (100 * 1.8 + 32 = 212)
         | 
         | https://news.ycombinator.com/item?id=19708787
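          | 
          | A rough reconstruction of that experiment (a single linear
          | neuron; the data and hyperparameters are my guesses, not the
          | poster's). Gradient descent only drives the weights _near_
          | (1.8, 32), so the prediction lands close to 212 but rarely
          | exactly on it:
          | 
          |     import numpy as np
          |     import tensorflow as tf
          |     
          |     c = np.array([-40, -10, 0, 8, 15, 22, 38],
          |                  dtype=np.float32)
          |     f = c * 1.8 + 32  # exact targets
          |     
          |     model = tf.keras.Sequential(
          |         [tf.keras.Input(shape=(1,)),
          |          tf.keras.layers.Dense(1)])
          |     model.compile(optimizer=tf.keras.optimizers.Adam(0.1),
          |                   loss="mse")
          |     model.fit(c, f, epochs=500, verbose=0)
          |     
          |     print(model.predict(np.array([[100.0]])))
          |     # ~211.9, not exactly 212
          |     print(model.layers[0].get_weights())
          |     # weights close to [[1.8]] and [32], but not equal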
        
       | techbio wrote:
       | Baker's (half-)dozen?
        
       | moonbug wrote:
        | if only there were some way of doing computation without
        | TensorFlow.
        
       | YarickR2 wrote:
        | Well, every tool has its own range of use cases; doing integer
        | math is not a use case for a guesstimate engine.
        
         | justicezyx wrote:
          | Or one can claim that it's entirely obvious when comparing it
          | with human beings making mistakes: not only can 2*3 be 7,
          | millions can die on some obscure dictator's whim, without the
          | insanity ever being consciously realized...
        
           | vmception wrote:
           | Did someone just train a GAN on HN comments?
        
             | justicezyx wrote:
             | "The real question is not whether machines think but
             | whether men do. The mystery which surrounds a thinking
             | machine already surrounds a thinking man."
             | 
              | -- B. F. Skinner
        
             | MayeulC wrote:
             | It would be fairly interesting to try, and take votes as
             | feedback. That's what we all do here, anyway...
             | 
             | ...Although you can reach a point where you have enough
              | karma not to care and troll a bit/speak more freely which,
              | if you only look at the vote outcome, can net you big in
             | both directions (though there is a lower bound). In the
             | end, it's exactly like an optimization problem, if you're
             | "farming" karma: a lot of safe bets, and a few more risky
             | ones to maybe discover a new class of safe ones.
             | 
              | Reddit is full of safe gamblers who are farming karma by
              | repeating canned patterns.
        
       ___________________________________________________________________
       (page generated 2021-01-23 23:00 UTC)