[HN Gopher] Automating programming: AI is transforming the codin...
       ___________________________________________________________________
        
       Automating programming: AI is transforming the coding of computer
       programs
        
       Author : edward
       Score  : 76 points
       Date   : 2021-07-09 14:38 UTC (8 hours ago)
        
 (HTM) web link (www.economist.com)
 (TXT) w3m dump (www.economist.com)
        
       | primaryobjects wrote:
       | Interesting timing, as tomorrow I will be presenting on this very
       | topic.
       | 
       | "AI programmer: autonomously creating software programs using
       | genetic algorithms"
       | 
       | https://dl.acm.org/doi/10.1145/3449726.3463125
        
         | neatze wrote:
          | So awesome! RIP my weekend plans. I have done a similar
          | thing for fun using Python, modifying the class dictionary
          | (half working), more in the scope of multi-objective
          | optimization.
        
         | AnimalMuppet wrote:
         | Interesting. Would you mind telling me what the largest (in
         | lines of code) working program it has created is?
        
           | primaryobjects wrote:
            | One of the largest programs generated consisted of about
            | 300 instructions.
           | 
           | This particular program included if/then conditionals,
           | counting down in a loop, concatenation of a numeric value
           | with text, and displaying output.
           | 
           | The system also attempts to optimize the number of
           | programming instructions executed, in which case complexity
           | can actually be considered based upon the resulting behavior,
           | rather than LOC.
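
A minimal sketch of this kind of genetic-algorithm program generation (the instruction set, fitness function, and parameters below are invented for illustration; the linked paper's system is far richer):

```python
import random

random.seed(0)

# Tiny instruction set acting on a single accumulator (invented for
# illustration).
OPS = {"inc": lambda x: x + 1, "dec": lambda x: x - 1, "double": lambda x: x * 2}
TARGET = 20    # desired program output
MAX_LEN = 12   # cap on program length

def run(program):
    """Execute an instruction list against an accumulator starting at 0."""
    acc = 0
    for op in program:
        acc = OPS[op](acc)
    return acc

def fitness(program):
    # Reward closeness to TARGET, with a small penalty per instruction
    # executed - the "optimize instruction count" idea mentioned above.
    return -abs(run(program) - TARGET) - 0.01 * len(program)

def mutate(program):
    p = list(program)
    if p and random.random() < 0.5:
        p[random.randrange(len(p))] = random.choice(list(OPS))  # point mutation
    elif len(p) < MAX_LEN:
        p.insert(random.randrange(len(p) + 1), random.choice(list(OPS)))
    else:
        del p[random.randrange(len(p))]
    return p

def crossover(a, b):
    # One-point crossover on instruction sequences.
    return (a[:random.randrange(len(a) + 1)]
            + b[random.randrange(len(b) + 1):])[:MAX_LEN]

pop = [[random.choice(list(OPS)) for _ in range(random.randint(1, MAX_LEN))]
       for _ in range(100)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    if run(pop[0]) == TARGET:
        break
    elite = pop[:20]  # truncation selection: keep the fittest
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(80)]

best = max(pop, key=fitness)
print(best, "->", run(best))
```

Even this toy version typically evolves a short, exact program; real systems like the one above add conditionals, loops, and output instructions to the op set.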
        
             | AnimalMuppet wrote:
             | That's... decent. It's a lot larger than I ever got with a
             | GA.
             | 
             | I don't think we're ever going to get a GA to produce an
             | air-traffic control system, or a database, or an OS. But
             | 300 lines (that work) is further than I was aware had been
             | possible.
        
       | ausbah wrote:
        | Copilot seems great at automatically writing boilerplate
        | code, but iffy at doing anything novel - like most supervised
        | learning systems
        
       | Animats wrote:
       | We may get automated programming, but not by the GPT-3 route.
       | GPT-3 is a generator for moderately convincing meaningless
       | blithering. It says something about the nature of discourse that
       | GPT-3 is better than many pundits.
       | 
       | Here's a GPT-3 generated manual for a "flux capacitor".[1] It's a
       | fun read, but meaningless. When the output text needs to
       | integrate a collection of hard facts, GPT-3 is not helpful.
       | 
       | [1] https://archive.is/lQFHC
        
         | minimaxir wrote:
          | In the context of code generation, coding typically follows
          | patterns, which GPT-style Transformer models are very, very
          | good at identifying.
        
       | WFHRenaissance wrote:
       | GitHub Copilot arrives from the future. Nothing human makes it
       | out of the near-future. It's probably already too late to mount a
       | meaningful counter-revolution. The average software engineer's
       | status and usefulness are eroding faster than ever. There's
       | almost nothing you can do. If you're above average, I suggest you
       | try to start a business and con a VC before this thing blows. If
       | you're below average... good luck, and I mean that in the most
       | sincere way. It's possible that a career pivot is your best bet.
       | Eastern European devs were coming for your jobs anyways, but we
       | just took a leap instead of a step with Copilot. Sure this is a
       | bit exaggerated, but to be clear... in this matter, the truth is
       | that we haven't seen anything yet.
        
         | DantesKite wrote:
         | Sam Altman made the interesting observation that people who
         | program might lose their jobs faster than people who move
         | things around physically.
         | 
         | Which is funny when you think about it, but it does seem like
         | it's headed in that direction.
        
           | WFHRenaissance wrote:
           | I think a majority of Web jobs will be automated before most
           | "trade" workers are automated. It just makes sense. The
           | nature of Web work itself sort of self-documents the
            | interfaces one would need to hand to a non-technical
            | person. Trade work, on the other hand, is dangerous,
            | extremely context specific, and can't be done entirely
            | from a datacenter.
        
         | ausbah wrote:
          | this is really exaggerated and really, really pretentious,
          | jesus
          | 
          | coding is only a fraction of SWEing. Sure, Copilot writes
          | some code for you - but can it work with stakeholders,
          | manage the complexity of a complicated code base, or do any
          | other part of an engineer's job?
        
           | WFHRenaissance wrote:
           | Gathering requirements will be the product manager's job.
           | Implementation will be the job of the top fraction of our
           | field. Complexity can be managed by a team of humans a
           | fraction of the size of your current team. If you're a
           | competent engineer, you have a better future ahead of you but
           | if you're doing routine work or templating boilerplate...
           | it's almost over for you.
        
       | salawat wrote:
        | My dream in life is to work myself out of a job. To take a
        | bunch of the hard, frustrating stuff out of things, so the
        | next guy doesn't have to suffer through solving the same
        | problems.
        | 
        | With programming, solutions aren't physically quantized,
        | except in the sense that a sufficiently skilled person hasn't
        | yet run into an articulation of the problem and solution they
        | can actually understand.
       | 
       | Programmers aren't an odd bunch. We just like writing
       | encyclopedias that people can read, understand, and build off of
       | so someone can find something cool to do with it.
       | 
        | There's just a point where, as system-appreciating people, we
        | can see where our own work is obviated or worked against if
        | you just toss it at a machine that mangles it in such a way
        | that the underlying ideas and knowledge are lost.
        
       | YeGoblynQueenne wrote:
       | This article of course presents automatic programming as if it's
       | a brand new thing that is only now possible to do thanks to "AI",
       | but that is only how companies that sell the technology find it
       | convenient to present things to market their products. The thing
       | to keep in mind is that program synthesis is an old field and a
       | lot of progress has been made that is completely ignored by the
        | article. Of course most people haven't even heard of "program
        | synthesis" in the first place, simply because it is not hyped
        | as much as GPT-3 and friends.
       | 
       | From my point of view (my field of study is basically program
       | synthesis for logic programs) code generation with large language
       | models is not really comparable to what can be achieved with
       | traditional program synthesis approaches.
       | 
       | The main advance seems to be in the extent to which a natural
       | language (i.e. English) specification can be used, but natural
       | language specifications are inherently limited because of the
       | ambiguity of natural language. Additionally, trying to generate
       | new code by modelling old code has the obvious limitation that no
       | genuinely new code can be generated. If you ask a language model
        | to generate code it doesn't know how to generate, you'll only
        | get back garbage, because it has no way to _discover_ code it
        | doesn't already know how to write.
       | 
        | So Copilot, for example, looks like it will make a fine
        | boilerplate generator - and I'm less skeptical about its
        | utility than most posters here (maybe partly because I'm not
        | worried _my_ work will be automated) - but that's all that
        | should be expected of it.
        
         | plutonorm wrote:
          | So said all of the vision researchers, before they got
          | blown out of the water by convolutional neural networks.
          | Adapt or die.
        
           | charcircuit wrote:
           | CNNs can't recognize what they haven't been trained on.
        
         | codetrotter wrote:
         | > This article of course presents automatic programming as if
         | it's a brand new thing that is only now possible to do thanks
         | to "AI", but that is only how companies that sell the
         | technology find it convenient to present things to market their
         | products.
         | 
         | Reminds me of this essay that pg wrote in 2005, titled The
         | Submarine. Someone linked it in another thread here on HN
         | recently.
         | 
         | > One of the most surprising things I discovered during my
         | brief business career was the existence of the PR industry,
         | lurking like a huge, quiet submarine beneath the news. Of the
         | stories you read in traditional media that aren't about
         | politics, crimes, or disasters, more than half probably come
         | from PR firms.
         | 
         | http://www.paulgraham.com/submarine.html
        
         | montenegrohugo wrote:
         | Hmmm. A few points.
         | 
         | First, I've never heard of program synthesis, and it seems like
         | an interesting topic. Could you point me to some resources so I
         | can learn more about it?
         | 
         | Second. I take issue with this statement:
         | 
         | > "trying to generate new code by modelling old code has the
         | obvious limitation that no genuinely new code can be generated"
         | 
          | I disagree. We've seen GANs generate genuinely new artwork;
          | we've seen music synthesizers do the same. We've also seen
          | GPT-3 and other generative text engines create genuinely
          | interesting and innovative content; AI Dungeon comes to
          | mind. Sure, it's in one way or another based on its
          | training data. But that's what humans do too.
         | 
         | Our level of abstraction is just higher, and we're able to
         | generate more "distinct" music/code/songs based on our own
         | training data. But that may not hold in the long term, and it
         | also doesn't mean that current AI models can do nothing but
         | regurgitate. They _can_ generate new, genuinely interesting
         | content and connections.
        
           | throwawaygh wrote:
            | _> First, I've never heard of program synthesis, and it
            | seems like an interesting topic. Could you point me to
            | some resources so I can learn more about it?_
           | 
            | I'll leave it to the GP to give a lit review, but I will
            | say that CS has (always had) a hype-and-bullshit problem,
            | and that knowing your history is a good way to stay sober
            | in this field.
           | 
            |  _> We've also seen GPT-3 and other generative text
            | engines create genuinely interesting and innovative
            | content._
           | 
           | Making things humans find entertaining is easy. Markov chains
           | could generate genuinely interesting and innovative poetry in
           | the 90s. There's a Reply All episode on generating captions
           | for memes or something where the two hosts gawk in amazement
           | at what's basically early 90s tech.
           | 
           |  _> They can generate new, genuinely interesting content and
           | connections._
           | 
           | Have you ever dropped acid? You can make all sorts of
           | fascinating content and connections while hallucinating.
           | Seriously -- much more than when you're sober. Probably
           | shouldn't push to prod while stoned, though.
           | 
           | Art and rhetoric are easy because there's really no such
           | thing as "Wrong". That's why we've been able to "fake
           | creative intelligence" since the 80s or 90s.
           | 
           | Almost all software that people are paid to write today is
           | either 1) incredibly complicated and domain-specific (think
           | scientists/mathematicians/R&D engineers), or else 2)
           | interfaces with the real world via either physical machines
           | or, more commonly, integration into some sort of social
           | processes.
           | 
           | For Type 1, "automatic programming" has a loooong way to go,
           | but you could imagine it working out eventually. Previous
           | iterations on "automatic programming" have had huge impacts.
           | E.g., back in the day, the FORTRAN compiler was called an
           | "automatic programmer". Really.
           | 
            | For Type 2, well, wake me up when there's a massive
            | transformer-based model that only generates text like "A
            | tech will be at your house in 3 hours" when a tech is
            | actually on the way. And that's a pretty darn trivial
            | example.
           | 
           | There is a third type of software: boilerplate and repetitive
           | crap. Frankly, for that third type, I think the wix.com model
           | will always beat out the CoPilot model. And it's frankly the
           | very low end of the market anyways.
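
The Markov-chain text generation mentioned above really is that simple - a bigram version fits in a few lines (the corpus here is invented for illustration, standing in for the poetry corpora those 90s-era generators used):

```python
import random
from collections import defaultdict

random.seed(1)

# Toy corpus (invented for illustration).
corpus = ("the sea is deep and the sky is wide and "
          "the night is deep and the sea is dark").split()

# First-order (bigram) Markov model: word -> list of observed successors.
model = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    model[a].append(b)

def generate(start, length):
    # Walk the chain, sampling uniformly among observed successors.
    word, out = start, [start]
    for _ in range(length - 1):
        if word not in model:
            break  # dead end: the word only appeared at the corpus end
        word = random.choice(model[word])
        out.append(word)
    return " ".join(out)

print(generate("the", 8))
```

Every adjacent word pair in the output occurred somewhere in the corpus, yet the recombinations read as "new" - which is the whole effect being gawked at.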
        
       | xaduha wrote:
       | Wake me up when it starts deleting bad code instead of producing
       | more of it.
        
       | ollien wrote:
       | I can't read past the fold here (paywall), but assuming this is
       | about Copilot, it feels a bit disingenuous to suggest it's
       | already "transforming" software development. It's a cool tool,
       | but it's not even available to everyone.
        
         | sitkack wrote:
         | Judging from the turmoil both here and on twitter about
         | Copilot, it has already made a _huge_ impact.
         | 
         | Two more papers down the line.
        
           | ollien wrote:
           | Impact? Sure. Especially given the controversy around how
           | it's trained. I just don't know that I'd suggest it's changed
           | software development as we know it.
        
         | cjg wrote:
         | https://archive.is/bTkFX
        
           | ollien wrote:
            | Thanks for the link! The article feels a lot closer to
            | what I actually see going on than the headline/first two
            | paragraphs do.
        
       | js8 wrote:
       | I think Copilot is as helpful to programmers as ELIZA is helpful
       | to psychiatrists.
       | 
        | I don't think it really understands code, only very
        | superficially (unlike GPT-3, which might actually genuinely
        | understand parts of language; computer code requires a more
        | abstracted mental model). ELIZA was similar: some people
        | believed that it could understand what they were thinking.
        | Turns out it was all just in our heads.
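
For reference, ELIZA's trick was little more than regex matching plus pronoun reflection - a minimal DOCTOR-style sketch (rules and word swaps invented for illustration):

```python
import re

# A few DOCTOR-style rules (invented for illustration): a regex pattern
# and a response template the matched fragment is substituted into.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*", "Please go on."),
]

# First-person -> second-person word swaps, as the original ELIZA did.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    # Fire the first rule whose pattern matches, reflecting each
    # captured fragment before substitution.
    for pattern, template in RULES:
        m = re.match(pattern, sentence.lower())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I am frustrated with my compiler"))
# -> How long have you been frustrated with your compiler?
```

No model of the conversation, no state, no understanding - yet people attributed insight to it, which is the point of the comparison.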
        
         | qsort wrote:
         | I'm not even remotely worried for my job security, but I don't
         | think the ELIZA comparison is apt.
         | 
         | ELIZA is clearly a toy, Copilot is something that could become
         | integrated with the workflow of a professional developer. Maybe
         | not copilot specifically, but if Intellisense is in our IDEs
         | already, something like it is not a stretch.
         | 
         | > I don't think it really understands code
         | 
         | It clearly doesn't, but (a) nobody actually claimed that
         | outside of the usual media circus, and (b) that's a very high
         | bar. It doesn't need to understand your code in order to be
         | useful.
        
           | ska wrote:
           | > ELIZA is clearly a toy,
           | 
            | That's a red herring. The "Eliza effect" has nothing to
            | do with how toy-like ELIZA actually was; it is a lesson
            | about how bad humans were at evaluating its actual
            | capability and insight. This continues to be true today,
            | and is an effect with at least some impact on essentially
            | any AI/ML system that people interact with.
        
           | js8 wrote:
           | ELIZA wasn't "clearly a toy" when it appeared. It actually
           | confused people.
           | 
           | I don't deny that good suggestions are useful in coding. I am
           | just not sure you can get these by mimicking other people's
           | code without really understanding it.
           | 
           | I think the reason why code is different than natural
           | language is that in code, the meaning of identifiers (words)
           | is very dependent on the context of the specific codebase,
           | and not nearly as universal as in natural language. Sure,
           | there are words like personal names in natural language that
            | change meaning depending on wider context, but they are
            | not as common in natural text.
           | 
            | So focusing on the names only, rather than, for example,
            | understanding the types of runtime objects/values and how
            | they change, can actually be actively harmful in trying
            | to understand the code and make good suggestions. So I
            | would believe that suggestions based on e.g. type
            | inference would be more useful.
           | 
           | There is another aspect to this, text has inherently low
           | entropy, it's somewhat easy to predict the next symbol. But
           | if code has low entropy (i.e. there are universal patterns
           | that can be applied to any codebase in a particular
           | programming language), I would argue that this is inefficient
           | from the programmer's point of view, because this entropy
            | could probably be abstracted away to a more condensed
            | representation (which could then be intelligently
            | suggested based on semantics rather than syntax).
           | 
            | In other words, I think the approach assumes that the
            | probability of the next word based on syntax is similar
            | to the probability of the next word based on semantics,
            | but I don't believe that's the case in programming
            | languages.
        
           | feoren wrote:
           | Copilot is clearly a toy.
        
         | falsaberN1 wrote:
         | I have played with GPT3 and similar lesser models for a while,
         | and in my opinion it doesn't understand anything. It's good at
         | correlation (it might figure out a pikachu is a pokemon or that
         | The Wall is a music album) which is kinda amazing but
         | ultimately seems to be really "useless"* and it's not
         | deterministic.
         | 
         | I think the current "AI" models will never understand a thing
         | unless they become able to update their own models and have
         | some guiding logic to them. Sure it has the context tokens so
         | it might give a little illusion of following along, but the
         | randomness of it all also means it will get wrong stuff that it
         | got right before (and vice versa) and will clearly break the
         | illusion when prodded ("pikachu is a purple dragon", "The Wall
         | was composed by Metallica"). It's unfortunate.
         | 
         | GPT3 and other text models have no sense of state. You can have
         | the same instance work with 2 (or more) different persons and
         | text provided by user A will have no bearing on anything user B
         | does, since it's just trying to complete the provided text in
         | form of tokens. As computationally intensive as it is, the
         | whole thing is a lot simpler than it seems.
         | 
         | It's like we managed to get the "learning" part of intelligence
         | more or less right, but nothing else yet. I always wanted to
         | have an AI "trusted assistant" kinda thing, but even the
         | biggest GPT3 models don't give me much in terms of trust.
         | Wouldn't trust it to turn a lamp on and off as it is. (I know
         | GPT3 is just a text model but you know what I mean). And if it
          | happens, it won't be run locally, because the hardware
          | requirements for this stuff are obscene on a good day,
          | which opens a few cans of worms (some of which have been
          | opened already).
         | 
          | *useless as in nowhere close to intelligence. It can become
          | decent entertainment and might have some interesting uses
          | correlating datum A to datum B/C/D, recognition and so on,
          | but so far it's just a fancy ELIZA. An "educated guess
          | engine" at best.
        
         | wongarsu wrote:
         | > unlike GPT-3 which might actually genuinely understand parts
         | of language, computer code requires more abstracted mental
         | model
         | 
         | Shouldn't computer code be simpler to understand and express
         | than natural language? The amount of knowledge about the world
         | is much smaller and mostly concerns artificial systems. And of
         | course programming languages are laughably simple and well
         | defined compared to the constantly evolving mess that is the
         | English language.
        
           | lambdaba wrote:
            | What would it mean to "understand" it? Imagine a program
            | written by such a system; it's easy to imagine an
            | immensely complex question about the system that sounds
            | simple - for instance, asking about design choices, code
            | architecture, or aesthetic choices like expressivity.
            | Even if it could produce some code, we couldn't say it
            | understood it if it couldn't answer those questions, and
            | those are very difficult questions which would require
            | human-level intelligence, and even more, a human psyche.
        
           | js8 wrote:
           | In programming languages, it is much more common to define
           | new words (symbols) than in natural language. The identifier
           | (name) of the symbol (word) in such case is less informative
           | about its nature than the actual definition, and I think this
           | is the Achilles heel of the method.
        
             | lambdaba wrote:
             | Yes, programming languages are simple, programs are
             | infinitely complex, they could express anything, and
             | homoiconic languages can even define their own syntax.
        
       | ChrisMarshallNY wrote:
       | The thing that so many folks seem to forget about programming, is
       | that the most problematic part of the whole thing is the
       | Requirements Phase. That's where the most serious problems occur,
       | and they are often in the form of ticking time bombs that explode
       | while the project is well underway. A good implementation team
       | knows how to stop assuming, and get more information from the
       | requirements authors.
       | 
       | No matter how "smart" the implementation process is, we have the
       | principle of "GIGO" (Garbage In; Garbage Out). Automated
       | implementation has the potential to be a garbage multiplier.
       | 
       | There will always be a need for highly specific, accurate and
       | detailed specifications. If a machine does the work, and makes
       | assumptions, then that means the input needs to be of _even
       | higher_ quality than we see nowadays (See  "GIGO," above).
       | 
       | I see the evolution of "Requirement Specification Languages,"
       | which, I suspect, will look an awful lot like code.
        
         | [deleted]
        
       | ipaddr wrote:
       | Is sharing anything on github dead now? Should it be?
        
       | neonate wrote:
       | https://archive.is/bTkFX
        
       | sly010 wrote:
        | AI has an issue: AI assistants tend towards mediocrity.
       | 
       | In the mid-term this is great for sub-mediocre AI users who
       | seemingly get a boost in performance/quality, etc, but people
       | tend to not recognize that in the long term there is a dangerous
       | feedback loop. AI models just regurgitate their training,
       | pretending it's state of the art.
       | 
       | Every now and then people bring up the fact that relevant long
       | tail Google results are no more. Every time we click through on
       | the first page, we are collectively training google to be
       | mediocre and in exchange it's training us to be the same.
       | 
       | We had the same debate around criminal profiling, insurance,
       | credit ratings, automated censorship, even 'art'.
       | 
        | Programming won't be that different. It will even be useful
        | in a limited sense. AI will give a lot of mediocre results
        | for mediocre questions and the wrong answer for 'new'
        | questions, but we won't know which is which. But remember
        | that in the long term we are feeding the outcome back to the
        | models, so it's not real progress, it's just a proliferation
        | of "meh".
        
       | rjsw wrote:
       | We could feed CAD files into the same engine and get ... the 737
       | MAX.
        
       | ok2938 wrote:
        | One thing to keep in mind: with every request-response cycle
        | of the probably upcoming paid service, you are feeding
        | constant data points to a commercial entity. Basically, your
        | typing habits, the project's development over time, the bugs,
        | the fixes - all will be recorded.
       | 
        | Think about it: one well-paid profession in the grips of
        | capital, siphoning off valuable, highly intellectual work (by
        | using "open source" already), repackaging it and then selling
        | it back to the people it aims to replace.
       | 
       | Imagine a million lawyers' days being recorded in 60s intervals -
       | including what they type, the files they have opened, etc. - how
       | long would it take to encode the average lawyer into a hdf5 file?
       | How many lawyers would be fine with that kind of assistance?
       | 
       | If that's not ingenious, then I do not know what is.
       | 
       | Call to action: Programmer, wake up.
       | 
       | Edit: Just to address some typical rebuttals - I'm all for
       | progress and cool toys, but I'm all against monopolies and mega-
       | corps that control and monitor every aspect of your life.
        
         | staticassertion wrote:
         | > how long would it take to encode the average lawyer into a
         | hdf5 file
         | 
         | I'd say, minimally, a good 50 years, except perhaps for the
         | busywork. At that point you could maybe have a tool that could
         | handle a lawyer's job. Law's often an extremely creative,
         | verbal, expressive job, so I think machines are particularly
         | unlikely to do well there.
         | 
         | I'm not worried about being automated away. If someone wants to
         | make coding easier, neat, I'm not a programmer, that's just how
         | I get shit done. I solve complex problems, and I don't see
         | machines taking that job for a long, long time.
         | 
         | Frankly, if machines can take away the mechanical bullshit part
         | of a job and let us humans do the interesting, creative work,
         | that works for me.
        
           | kwhitefoot wrote:
           | > Frankly, if machines can take away the mechanical bullshit
           | part of a job and let us humans do the interesting, creative
           | work, that works for me.
           | 
           | How much "interesting, creative work" is there? Surely most
           | of the work that most of us do, and that most of us are
           | capable of and interested in, is not simultaneously
           | interesting and creative.
           | 
           | I count myself as lucky and I have certainly had a more
           | interesting working life than average but the proportion of
           | that forty years of work that counted as both interesting and
           | creative is probably, looking back on it, measured in single
           | digit percentage points.
           | 
           | If a machine does all the easy stuff how would I ever get
           | enough practice to stretch myself to the difficult creative
           | parts?
        
             | staticassertion wrote:
             | > How much "interesting, creative work" is there?
             | 
             | I would imagine that the answer is roughly "infinite". It's
             | like asking "how much art is there?" - infinite.
             | 
             | > If a machine does all the easy stuff how would I ever get
             | enough practice to stretch myself to the difficult creative
             | parts?
             | 
             | By doing the easy stuff? I don't know, you can do whatever
             | you want.
        
             | icoder wrote:
             | The mechanical bullshit part is being abstracted away time
             | and time again and none of the issues you bring up have
             | become problematic so far. Sure there's a point where this
             | might change profoundly, but AI is currently nowhere near
             | that point if you ask me.
             | 
             | (stuff being done for me: CPU design and work with the OS
             | on top of it including multithreading and IO, executing
             | database operations, turning higher level commands into
             | tons and tons of machine language instructions, libraries
             | that allow me to go higher and higher in abstraction level,
             | think synchronising, multi threading, basic data structures
             | like dictionaries, map/filter/reduce operations, etc -
             | where I need to I learn about them, but mostly I just use
             | them and let them do the heavy, boring, lifting)
             | 
              | (working with a low-code solution like Mendix is
              | comparable: you can become creative without the boring
              | work; of course it has a learning curve, which teaches
              | you how to use the higher abstraction layers without
              | the need to really practice or fully understand what it
              | takes out of your hands)
        
               | [deleted]
        
         | kp995 wrote:
         | > Call to action: Programmer, wake up.
         | 
         | The article also mentions - "head of AI products at the St
         | Petersburg office of JetBrains, a Czech developer of
         | programming software, sees time savings of 10% to 20%. "
         | 
         | and the subheading says - "The software engineers of the future
         | will, themselves, be software"
         | 
         | um, how seriously should we take the claim that "skilled"
         | software engineers/programmers solving really complex
         | problems are at risk?
         | 
         | Can't say whether making just another simple CRUD app is at
         | risk, but the statements and propositions made in the
         | article are too generalized to plant a seed of insecurity in
         | oneself.
        
           | icoder wrote:
           | Call me naive but I really don't see an AI taking over my
           | (senior) software development work; there's just so much
           | to it, including a lot of communication and creativity.
           | 
           | I dare say that by the time AI takes over my job, it will
           | have gotten so advanced that I've got much more to worry
           | about than just my job.
        
         | bluetwo wrote:
         | A good lawyer, like a good programmer, isn't valued just on
         | their output, but on the questions they ask that lead them
         | to the correct output. This is much more difficult to encode.
        
           | Jeff_Brown wrote:
           | It's an interesting comparison, law and code. GitHub's AI
           | gets a sense of what you're trying to do, and tries to move
           | the ball forward. Sometimes the context it has is enough to
           | do that. It's better with verbose languages.
           | 
           | Law can be pretty verbose. Given half a paragraph, how often
           | can an intelligence (artificial or natural) deduce what's
           | next?
           | 
           | Of course the coding AI has the advantage that coders write
           | (and publish) comments.
        
             | kwhitefoot wrote:
             | > coders write (and publish) comments.
             | 
             | And writers in law write commentaries, some of them having
             | great authority.
        
         | 8note wrote:
         | It's not going to find out about the actual work I do, which
         | is largely offline.
         | 
         | Once I've solved a problem, I document it in code. Only
         | seeing the code at the end misses the problem-solving parts.
        
         | i_haz_rabies wrote:
         | Programmers have the worst intelligence to gullibility ratios
         | of any profession.
         | 
         | * I should elaborate. We tend to easily accept the "this work
         | is meaningful" line, we volunteer our time because "programming
         | is a craft," and we will hop on tooling bandwagons like this
         | because it's shiny and new without considering the
         | ramifications.
        
           | loopz wrote:
           | This will sell like hotcakes. The only thing that can
           | destroy it will be the immense cargo-culting. So software
           | developers are safe, for now.
        
             | i_haz_rabies wrote:
             | I fear the "for now" is shorter than we think.
        
               | qayxc wrote:
               | I don't. This isn't the first time.
               | 
               | First it was accessible programming languages (BASIC,
               | Pascal, etc. ~1970s/80s), then it was standard software
               | (spreadsheet software, ERM packages, etc. 1990s), then
               | low code/no code (early 2000s), then model-/requirement
               | driven (Rational Rose etc., late 90s, early 2000s), in
               | between it was visual "programming" every now and again,
               | now it's back to low code/no code again.
               | 
               | As long as the mechanical systems cannot check
               | requirements for contradictions, don't accept non-
               | functional requirements such as performance or
               | security, and produce results without correctness
               | proofs that even fail to compile or are syntactically
               | incorrect from time to time, I have no fears.
               | 
               | The current situation might as well be a local optimum in
               | which NLP/ML will be stuck for quite a while. It's really
               | hard to tell, but I don't see any reason for starting to
               | panic just yet.
        
             | Jeff_Brown wrote:
             | I've read three definitions of "cargo cult" and still don't
             | know what it or your reference to it mean.
        
               | loopz wrote:
               | It's mainly meant in jest, partly, at how corporations
               | tend to imitate what seems to work for other, more
               | successful corporations. E.g. at one point after FB,
               | everyone and their mother wanted to make their own Social
               | Media platform. Before that, Web Portals were all the
               | rage. It's often about good ideas, but it doesn't
               | always make business sense. Implementations often miss
               | the mark, especially in the beginning.
               | 
               | The term comes from this, and uncovers a bit of the
               | futility about the so-called benefits of playing the
               | imitation game:
               | 
               | https://www.steveglaveski.com/blog/the-curious-case-of-
               | cargo...
               | 
               | The problem is that in the fast-paced corporate world,
               | you either make your own startup (and usually fail at
               | that), or you have to play the same game everyone else
               | does. There's simply not enough time and resources to get
               | ahead at everything. Unless you go for a smaller
               | niche, e.g. embracing agility. Of course, everyone and
               | their mother is Agile (tm) these days too!
        
               | [deleted]
        
               | JohnWhigham wrote:
               | Literally: it's doing something because others are doing
               | it and that's it.
               | 
               | Developers have a _massive_ proclivity to follow
               | something simply because others have. We're talking
               | everything from best practices to design methodologies
               | to whether or not to include a semicolon at the end of
               | a line. There's a reason for this: writing software is
               | _very_ hard to measure and quantify. It's not a
               | physical _thing_ like a bridge or an automobile. When
               | we hear about some new paradigm/framework/language/
               | etc. espoused by some blue-checkmark-wielding developer
               | who works at Google, we pay attention because...he
               | works at Google, and _seems_ like a trusted figure.
               | 
               | Not saying this is good or bad, it's just how it is.
        
         | booleandilemma wrote:
         | Programmers are a weird bunch.
         | 
         | Look at stackoverflow for example. Thousands of programmers
         | giving away their professional advice, spending their valuable
         | time...for what exactly? Imaginary internet points?
         | 
         | Open source is another one: here's my code that I've worked so
         | hard on, spent countless hours on - please have it for free, my
         | labor is worthless.
         | 
         | It's like they're actively seeking to bring their own value
         | down.
        
           | Jeff_Brown wrote:
           | The only explanations I can think of are (1) altruism, (2)
           | intent to profit from one's reputation, and (3) simply
           | enjoying puzzle solving. Am I missing any? And are any of
           | them so strange?
        
             | icoder wrote:
             | Do not underestimate how much you can learn from teaching /
             | explaining something (you think you already know) to
             | someone else.
        
               | rapnie wrote:
               | Learning in public [0], discussed on HN before.
               | 
               | [0] https://www.swyx.io/learn-in-public/
        
             | staticassertion wrote:
             | (2) and (3) aren't strange and are very likely the main
             | drivers. It isn't hard to imagine that humans, and really
             | any intelligent life, have evolved to _enjoy_ problem
             | solving and structure incentives around being good at it.
        
           | icoder wrote:
           | Well, there are many reasons. But in addition, most of
           | what's on Stack Overflow is the boring stuff: getting a
           | certain operation, task, or calculation right, often in a
           | particular language, with a particular framework. The
           | actual development, in my opinion, starts after that.
           | 
           | Just take my day today: interpreting the client's specs and
           | internally communicating and planning a minor change in the
           | context of a few parallel release paths, discussing a few
           | tweaks to a piece of code with a colleague carefully weighing
           | effort, impact on partially rolled out tests, and future
           | flexibility (making best guesses as to where things are
           | going), and analysing / fixing an issue in a piece of code
           | whose design delicately balances simplicity, robustness
           | and performance.
           | 
           | Call me naive but I don't see an AI taking over before
           | reaching (or nearing) general AI. Nor do I see how someone
           | who isn't already a competitor could become one just by
           | having access to Stack Overflow, or even ten times as much
           | of it.
        
           | smegma2 wrote:
           | Many things are not zero-sum, for example if I help you and
           | you help me we will both be better off than before. Sites
           | like stackoverflow make programmers more productive in
           | aggregate; we would be worse off without them.
           | 
           | If you can't understand why someone would decide to offer
           | their help for free then at least try to be thankful.
        
           | imnotlost wrote:
           | Philip Glass talks about this exact thing here:
           | 
           | https://thecreativeindependent.com/people/philip-glass-on-
           | co...
        
             | hkt wrote:
             | That link is amazing. Maybe my reply is off topic, but
             | Glass is an extraordinary composer and it is always a
             | pleasure to see him referenced saliently like this. Thanks!
        
           | meh99 wrote:
           | Humans used to give away their service to the church for
           | promises of eternal life.
           | 
           | Mass delusions are easy to propagate as they require little
           | emotional nuance; a salute, tattoo, incantation... those used
           | to be enough to bind large groups.
           | 
           | Groups are out though. We're leaning into enabling humans
           | as atomic agents, automating away the utilitarian social
           | obligations and the work of biological survival to get
           | there.
        
           | X6S1x6Okd1st wrote:
           | There are other frameworks than strict personal net worth
           | maximization.
        
           | sidlls wrote:
           | And then they complain bitterly when commercial entities use
           | it without contributing back, treat workers poorly, and all
           | that. It's not weirdness; it's ignorance and/or stupidity.
        
           | sudosysgen wrote:
           | Programmers aren't unique in that. If you create the right
           | conditions, driven people will just work for the sake of
           | it, if it can help someone or create something beautiful.
           | It's the reason why we humans are where we are to begin
           | with.
        
             | ansible wrote:
             | This is what a post-scarcity society is _supposed to be_. A
             | world where everyone has enough, and they do things
             | (programming, art, whatever) for the joy of it.
        
           | MeinBlutIstBlau wrote:
           | The heart of the open source movement had more to do with
           | companies locking down software with operating systems and
           | less about star-trek socialism for code. RMS wasn't just some
           | hippie, he actually stuck to his guns and really "stuck it to
           | the man" in a way that actually revolutionized programming
           | today.
        
           | jhgb wrote:
           | > please have it for free, my labor is worthless.
           | 
           | That's weird logic. Presumably you wouldn't create something
           | you consider worthless - you'd create it because you needed
           | it.
        
             | eigenhombre wrote:
             | Or because the act of creation is pleasurable in and of
             | itself.
        
               | [deleted]
        
           | dave_sid wrote:
           | So so true. Imagine a plumber turning up at your house and
           | fitting you a new bathroom as their open source project. But
           | then again, if they charged an extortionate 'support' fee
           | for years thereafter for doing nothing, that wouldn't be
           | as crazy. A bit like Red Hat in that sense.
        
           | mftb wrote:
           | "Man cannot stand a meaningless life", Carl Jung. I've spent
           | twenty-five years coding, solved some problems, made some
           | money for myself, more for others, but what have I
           | accomplished? It's unlikely it will be through a
           | stackoverflow answer or a post on HN, but I quietly maintain
           | the hope that maybe someday, through Free Software or
           | otherwise sharing my knowledge and code, I might actually
           | accomplish something meaningful. The economic value you're
           | talking about has little meaning for me.
        
             | golemiprague wrote:
             | Interesting the use of "Man"; maybe women are less
             | willing to share? It is well known that there are far
             | fewer female contributors to open source or Wikipedia,
             | even in the pre-childbearing phase.
             | 
             | I think it derives from the traditional roles of men and
             | women: men create value or hunt or acquire resources in
             | some way and then give them for "free" to their wives
             | and children, while the women's role is to hog those
             | resources for themselves and the children. I wonder if
             | this is ever going to change and how long it will take.
             | 
             | In my personal experience I have found women to be far
             | less generous, as if it hurt them to give something
             | without some benefit derived, but that is just
             | anecdotal; I wonder what other people's experiences are.
        
           | jasode wrote:
           | _> Thousands of programmers giving away their professional
           | advice, spending their valuable time...for what exactly?
           | Imaginary internet points?_
           | 
           | Before Stackoverflow, people were answering questions on
           | USENET newsgroups without any award of points.
           | 
           | And before USENET and the internet, hobbyists would gather to
           | trade advice on building home computers. E.g.:
           | https://en.wikipedia.org/wiki/Homebrew_Computer_Club
           | 
           | The common theme isn't the points... it's that _people like
           | to be helpful and share knowledge_.
           | 
           |  _> Open source is another one: here's my code that I've
           | worked so hard on, spent countless hours on - please have it
           | for free, my labor is worthless._
           | 
           | There are multiple motivations for open source. In my
           | case, I did it because I could get _more value from the
           | community's enhancements_ than the "free code" the
           | community got from me.
           | 
           |  _> It's like they're actively seeking to bring their own
           | value down._
           | 
           | Neither of your 2 examples look like erosion of labor value.
           | On the contrary, it shows that not every activity associated
           | with programming has to be driven by the exchange of money.
        
             | [deleted]
        
           | otabdeveloper4 wrote:
           | Programming isn't labor.
           | 
           | Or, at least, it's only "labor" in the same sense that
           | literature or painting is labor.
           | 
           | Does it seem strange to you that writers or painters would
           | want the world to see their work, even for free? It doesn't
           | seem strange to me.
           | 
           | Programming is the same.
        
             | rmah wrote:
             | I know a few painters and writers. None of them give away
             | their paintings or books for free, except to friends and
             | family.
        
             | machinehermiter wrote:
             | No doubt. You can even go a step further.
             | 
             | What about this discussion we are having right now on here
             | about this topic?
             | 
             | Why bother just for useless upvotes? No one is even paying
             | me to think about this.
        
             | Jeff_Brown wrote:
             | For glory!
        
             | Jeff_Brown wrote:
             | > it's only "labor" in the same sense that literature or
             | painting is labor.
             | 
             | I can't agree with this. Programming _can_ be like
             | literature. It can also be like writing copy for a bad
             | travel brochure.
        
         | swiley wrote:
         | It's not surprising that "normal users" don't think about this
         | but what I've found extremely surprising is that otherwise
         | fairly skilled programmers don't even think about it.
        
           | auggierose wrote:
           | The more skilled the less there is to think about.
        
       | distribot wrote:
       | I think some of us have our head in the sand about how disruptive
       | this will be to software engineering as an industry. I find the
       | applied stats-yness of ML really boring, but I also want to
       | hedge against some of these companies creating runaway hits
       | that make my current job obsolete.
        
         | throwawaygh wrote:
         | If you're writing the sort of code copilot generates, you
         | should be more worried about things like wix.com, RPA products,
         | WordPress, or high school kids. (No hate, my summer job in high
         | school was PHP CRUD stuff, but it's definitely the burger
         | flipping of the software development world.)
        
       | pydry wrote:
       | >The software engineers of tomorrow will themselves be software
       | 
       | Is it me or is the Economist much more unabashed these days
       | about representing the fevered dreams of business elites? (as
       | opposed to engineers who know wtf they're talking about)
       | 
       | This copilot circus reminds me of when a restaurant I used to
       | go to went to shit in ~2015 because the owner thought giving
       | iPads to customers meant they could cut payroll by 20%.
       | 
       | Which was very much in vogue at the time:
       | 
       | http://images.gawker.com/itqtvwbe3c0skb99wirm/c_scale,fl_pro...
        
         | Jeff_Brown wrote:
         | Indeed. A more accurate title would be "A new AI product
         | might shave 15% off some developers' time spent coding, and
         | some other companies hope to cash in too."
        
           | pydry wrote:
           | Even 15% is way too optimistic.
           | 
           | I'm not even convinced that the effect will be positive. A
           | habit of blindly accepting copilot suggestions could lead to
           | more bugs.
        
             | Jeff_Brown wrote:
             | I agree. I was just paraphrasing the article itself.
             | Copilot could also be a productivity drag just by making
             | the user read a bunch of garbage they don't use.
        
         | cat199 wrote:
         | yes, now with copilot we will be doing visual object oriented
         | programming on our GUI only tablets, and typing code, laptops,
         | and desktops will be a thing of the past - the singularity is
         | in fact here!
        
         | gruez wrote:
         | >This copilot circus reminds me of when a restaurant I used to
         | go to went to shit in ~2015 because the owner thought giving
         | iPads to customers meant they could cut payroll by 20%.
         | 
         | What went wrong? At some fast food places the ordering machines
         | definitely displaced some of the cashiers. I'm surprised
         | customers wouldn't go for tablets if it meant they didn't have
         | to pay the 15% tip.
        
           | pydry wrote:
           | Orders took forever or got lost, etc. The servers looked
           | stressed as hell and were running everywhere.
           | 
           | The prices weren't any lower; they just had fewer staff.
           | Presumably the owner thought they could collect higher
           | profits.
        
           | dragonwriter wrote:
           | > I'm surprised customers wouldn't go for tablets if it meant
           | they didn't have to pay the 15% tip.
           | 
           | They don't _have to_ tip to start with. And the ones with
           | the most problem with it won't, or will tip less.
           | 
           | And people go to places that aren't fast food because, at
           | least in part, of the service, which iPads don't provide
           | the way servers do.
        
           | rootusrootus wrote:
           | > I'm surprised customers wouldn't go for tablets if it meant
           | they didn't have to pay the 15% tip.
           | 
           | It seems to have become quite normalized to put tip jars out
           | for all sorts of businesses that aren't really service-based.
           | Setting the expectation. Or putting a tip line on the
           | credit card receipt -- sure, you don't _have_ to tip, but
           | we want you to think you're abnormal if you don't.
           | 
           | I don't think switching to tablets would do anything to
           | reduce the expectation of tips.
           | 
           | And it hasn't been 15% for years.
        
           | goatlover wrote:
           | Someone still has to bring the food out, handle requests not
           | on the tablet along with complaints, and bus the tables. Tip
           | doesn't apply to fast food, so that's not a good comparison.
        
           | rootusrootus wrote:
           | At least in my experience, what went wrong is that ordering
           | machines didn't really improve the experience. Maybe compared
           | to a bad cashier they're preferable. But other than that,
           | they are much slower and less convenient than just telling
           | another human what you want.
           | 
           | To use a fast food example, I somewhat regularly go to a
           | McDonalds when we road trip (I have kids, don't shame me) and
           | they have ordering screens. It takes easily 5x as long to
           | answer all of its questions about permutations on the meal
           | I want than to just say "gimme a #2 with Dr Pepper, no
           | pickles" to a cashier.
           | 
           | Maybe if they put in a proper voice assistant it would be
           | better. But the touchscreen is not in any way a superior
           | experience.
        
             | nebula8804 wrote:
             | I seem to recall from my time in France that they are
             | Windows-based machines with a reflective touch screen
             | that moves the mouse cursor around. I wonder how much of
             | the problem is just due to poor implementation of the UX
             | and ordering experience.
             | 
             | I saw this issue with NJTransit ticket machines as well.
             | They suck compared to NY subway.
             | 
             | NJTransit: https://www.youtube.com/watch?v=avNAwTJStwU
             | 
             | There are so many tiny speed improvements they could
             | make for a better experience: the delays between menus
             | add up, and the printer is slow with multiple pauses.
             | This all adds up, and I have borne the brunt of it
             | whenever I am rushing to catch a train.
             | 
             | NY Subway: https://youtu.be/jcrrb48vzfY?t=87
             | 
             | Notice how each screen transition is instantaneous. The
             | money scanning is super fast and the ticket(card)
             | dispensing is quite fast as well. If you are a frequent
             | user, you can fly through the screens, and this
             | represents a real, marked improvement over human
             | interaction; there is no way you could replicate that
             | speed with a human teller.
        
             | Jeff_Brown wrote:
             | Right. I was a waiter once. We had to learn to use the
             | computer. It wasn't hard but it was work, and customers
             | don't want to do it.
             | 
             | It's not even clear that it would be efficient for the
             | customer to learn it, even if they were willing, given how
             | rarely most people eat out and how quickly memory degrades
             | with disuse.
        
             | coolspot wrote:
             | - "a gum and two dr.Peppers were added to your pickup
             | order"
        
             | manmal wrote:
             | Does the cashier at McDonald's not enter your order into a
             | similar system? Maybe the public UI is just inefficient.
             | Where I live it was possible (a few years ago, they since
             | stopped that) to pre-order with a mobile app. I always
             | preferred that and felt it was a step in the right
             | direction, because food was usually finished when I turned
             | up.
        
       | awaythrowact wrote:
       | Did PyTorch increase or decrease demand for ML developers?
       | 
       | Did Ruby on Rails increase or decrease demand for web developers?
       | 
       | Did C increase or decrease demand for systems programmers?
       | 
       | Automation and abstraction are complements, not substitutes.
       | 
       | If CoPilot works and we get to focus on higher level goals and
       | developer throughput increases, it will be good for developers
       | and good for the world. I hope CoPilot succeeds.
        
       | loloquwowndueo wrote:
       | https://outline.com/bFWAyk
       | 
       | It is about copilot (which it sneakily doesn't mention by name)
       | and tabnine and also generally about gpt-3 used in the realm of
       | coding. A lot of layperson-oriented phrasing around: " Microsoft,
       | an American software giant, released a new version of an AI-
       | completion feature which it embeds in coding software called
       | Visual Studio."
        
       | feoren wrote:
       | Copilot appeals to bad developers, which it's going to learn
       | from. It's been trained on a codebase that is mostly bad
       | and/or outdated. It will produce code that is usually bad, and
       | the bad programmer using it won't properly review and fix it,
       | so it will never learn how bad it is.
       | 
       | If copilot replaces anything, it will be the legion of failed
       | developers that end up working for Oracle and Comcast currently
       | doing god-knows-what with all their time. Those systems are
       | already giant festering mounds of technical debt; imagine how bad
       | it will get when literally no human has ever seen the code
       | producing that weird bug.
        
         | hkt wrote:
         | In all likelihood, it'll be much the same as when literally
         | only one human has ever seen the code producing that weird
         | bug...
        
       | distribot wrote:
       | Unrelated, but Israel is so cool in terms of technological
       | prowess. A nation of 9 million people, and almost every
       | article about cutting edge tech includes them. Really
       | remarkable imo.
        
       | oliv__ wrote:
       | Well, great, maybe the AI can create and run a few more
       | successful SaaS businesses for me.
        
       ___________________________________________________________________
       (page generated 2021-07-09 23:00 UTC)