[HN Gopher] TLDR explains what a piece of code does
       ___________________________________________________________________
        
       TLDR explains what a piece of code does
        
       Author : aaws11
       Score  : 47 points
       Date   : 2022-09-22 20:16 UTC (2 hours ago)
        
 (HTM) web link (twitter.com)
 (TXT) w3m dump (twitter.com)
        
       | hn_throwaway_99 wrote:
       | FWIW, when I'm doing a code review, these are the _exact_ kind of
       | comments that I would tell a committer to remove.
       | 
       | That is, it's like it generates these kinds of comments:
        | 
        |     // initializes the variable x and sets it to 5
        |     let x = 5;
        |     // adds 2 to the variable x and sets that to a new variable y
        |     let y = x + 2;
       | 
       | That is, IMO the whole purpose of comments should be to tell you
        | things that _aren't_ readily apparent just by looking at the
       | code, e.g. "this looks wonky but we had to do it specifically to
       | work around a bug in library X".
       | 
        | Perhaps it could be useful for people learning to program, but
        | otherwise people should learn to read code as code, not
        | "translate" it into a verbose English sentence in their head.
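The distinction above, comments that restate the code versus comments that capture a non-obvious "why", can be sketched in C. The library quirk mentioned in the good comment is hypothetical, purely for illustration:

```c
#include <stdlib.h>
#include <string.h>

/* Bad comment: merely restates what the code plainly says. */
/* copies src into a newly allocated buffer */
char *copy_string(const char *src) {
    size_t len = strlen(src);
    /* Good comment: records a non-obvious "why".
       (Hypothetical example: suppose a downstream library's reader
       overruns the terminator by one byte on some platforms, so we
       deliberately over-allocate.) */
    char *buf = malloc(len + 2);
    if (buf == NULL)
        return NULL;
    memcpy(buf, src, len + 1);  /* copy including the '\0' terminator */
    return buf;
}
```

The first comment adds nothing a reader of the code doesn't already see; the second records intent that no amount of reading the code alone would reveal.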
        
       | wudangmonk wrote:
        | It's a "TL;DR", yet you end up reading even more than the
        | original. I could be wrong and it might genuinely condense a
        | big function, but if that's true, why showcase such an example?
        
         | vxNsr wrote:
         | It's helpful if you can read English but the code is difficult
         | to understand. Most explanations of code are more verbose than
         | the code they're explaining because code is usually pretty
         | terse compared to natural language.
         | 
          | You can think of "too long" as referring to the time it might
          | take someone to reason out a particularly terse, dense line
          | of code versus the actual length of the code.
        
           | nso95 wrote:
            | Yes, but there's a good chance the translation is wrong,
            | so you'll probably need to read the code anyway.
        
           | naniwaduni wrote:
           | Something like this could be helpful if the stumbling block
            | is the _syntax_. If the output consistently looks like the
            | example, though, it's not going to be much help explaining
            | the longer tail of straightforward code that simply
            | implements hard-to-understand _logic_.
           | 
            | I can see this _functionality_ being useful to explain
            | dense, ungooglable code, like regexes, or maybe APL. That
            | said, I couldn't really trust current-generation ML to
            | actually _produce a correct explanation_ instead of being
            | confidently and wildly wrong.
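As a concrete sketch of the regex point (the pattern and its gloss are illustrative, not from the thread): the POSIX pattern below is dense and nearly impossible to search for, while its English gloss fits in a sentence.

```c
#include <regex.h>
#include <stddef.h>

/* Gloss: "one or more word-ish characters, an '@', one or more
   word-ish characters, a dot, then at least two letters" -- a rough
   email shape. (Real email validation is far hairier.) */
int looks_like_email(const char *s) {
    regex_t re;
    int matched;
    if (regcomp(&re,
                "^[[:alnum:]._%+-]+@[[:alnum:].-]+\\.[[:alpha:]]{2,}$",
                REG_EXTENDED | REG_NOSUB) != 0)
        return 0;  /* treat a compile failure as "no match" */
    matched = (regexec(&re, s, 0, NULL, 0) == 0);
    regfree(&re);
    return matched;
}
```

A tool that reliably produced the gloss from the pattern would be genuinely useful; the open question in the thread is the "reliably" part.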
        
       | worble wrote:
       | I just don't trust it, I've worked with GPT-3 before and it sure
        | does a real good job of _sounding_ convincing, but if you don't
       | understand the code there's no way to know if what it's saying is
       | accurate, or whether it's just regurgitating random nonsense that
       | sounds plausible.
       | 
       | It knows how to create sentences that sound like something a
       | human would write, and it's even good at understanding context.
       | But that's it, it has no actual intelligence, it doesn't actually
       | understand the code, and most importantly, it's not able to say
        | "Sorry chief, I don't actually know what this is doing, look it
        | up yourself."
        
         | AlotOfReading wrote:
         | Let's be honest though, how often do human programmers read a
         | function and think it does X, only to realize later that it's
         | actually doing something subtly different? The underhanded C
          | contest is a great practical demonstration that this task is
         | difficult even for "true intelligence". I wouldn't trust it
         | further than comments, but I could see it being equally as
         | useful.
        
         | _jayhack_ wrote:
         | > it has no actual intelligence
         | 
         | This is a prime example of the moving goalpost of what
         | intelligence "actually" is - in previous eras, we would
         | undoubtedly consider understanding context, putting together
         | syntactically correct sentences and extracting the essence from
          | texts as "intelligent".
        
           | vinkelhake wrote:
           | Whether this thing is worthy of the label of "intelligent" or
           | not is fairly uninteresting. What matters for something like
           | this is its accuracy and if it can be trusted - that is what
           | I think OP is getting at.
        
       | iLoveOncall wrote:
       | Reading the code explains what a piece of code does.
        
       | galangalalgol wrote:
       | Can we get this for legal documents? Maybe with the ability to
       | spot things that might be loopholes?
        
         | forgotpwd16 wrote:
         | About license agreements, there was a program called EULAlyzer
         | (https://www.brightfort.com/eulalyzer.html).
         | 
          | edit: Comment modified since I thought the submitted link was
          | about the tldr pages rather than another tool. For something
          | like tldr pages but legal-wise there's https://tldrlegal.com
         | (https://news.ycombinator.com/item?id=7367027).
        
       | billsmithaustin wrote:
       | So it translates the code into COBOL. That's awesome.
        
       | jopnv wrote:
       | Very fancy but, in my opinion, completely useless as a
       | development tool. I can't see how reading natural language is
       | better than reading code.
        
         | it_citizen wrote:
         | Natural language makes it easier to understand a general
         | concept. It is, however, more ambiguous.
         | 
          | Think of it as a general description of a picture vs. a
          | pixel-by-pixel description. Both have value depending on what
          | you need.
        
         | Supermancho wrote:
         | > completely useless as a development tool. I can't see how
         | reading natural language is better than reading code.
         | 
         | Isn't this a variation on "who needs comments anyway, just read
         | the code?"
        
           | aphexairlines wrote:
           | Comments explain why the code is there. That's different from
           | translating code into English.
        
             | Supermancho wrote:
             | > That's different from translating code into English.
             | 
             | That's not what this "TLDR plugin" does, because there is
             | no magical translation. You put whatever you want in there.
             | I think this is obvious.
             | 
              | Ironically, you basically rephrased the title while
              | objecting to its use?
             | 
             | > Comments explain why the code is there
             | 
             | > TLDR explains what a piece of code does
        
           | marginalia_nu wrote:
           | Carefully consider the following description of a function:
           | 
           | The function adds 5 to i, and if j is less than three, the
           | function sets it to 2.
           | 
            | Does it describe this code?
            | 
            |     function f() {
            |       i = i + 5;
            |       if (j < 3) {
            |         j = 2; // <--
            |       }
            |     }
            | 
            | ... or does it describe this code?
            | 
            |     function f() {
            |       i = i + 5;
            |       if (j < 3) {
            |         i = 2; // <--
            |       }
            |     }
           | 
           | The answer is yes, and this is ultimately why we use
           | programming languages rather than natural language to
           | instruct computers.
        
         | ffhhj wrote:
          | Same impression. At least in that example it's faster to read
          | the code to understand it. I'd like them to decipher some real
         | spaghetti code.
        
       | curl-up wrote:
       | How come general developer audiences aren't more acquainted with
       | GPT-3 (and Codex in particular) capabilities? People in the
       | twitter thread all seem completely mind blown over an app that
       | basically just passes your code to an existing API and prints the
       | result.
       | 
        | I don't want to sound negative, of course, and I expect many
        | more of these apps to come up until Codex stops being free (if
        | they put it on the same pricing as the text DaVinci model,
        | which Codex is a fine-tuned version of, it will cost ~a cent
        | per query). I'm just wondering how the information about this
        | type of app reaches most people way before the information
        | about "the existence of Codex" reaches them.
       | 
       | For all the publicity around Codex recently (and especially on
       | HN), it still seems like the general IT audience is completely
       | unaware of the (IMHO) most important thing going on in the field.
       | 
       | And to anyone saying "all these examples are cherrypicked, Codex
       | is stupid", I urge you to try Copilot and try to look at its
        | output with the ~2019 perspective. I find it hard to believe
        | that
       | anything but amazement is a proper reaction. And still, more
       | people are aware of the recent BTC price, than this.
       | 
       | Source: have been playing with Codex API for better part of every
       | day for the last few weeks. Built an app that generates SQL for a
       | custom schema, and have been using it in my daily work to boost
        | my productivity as a data scientist/engineer/analyst _a lot_.
        
         | vxNsr wrote:
          | I was aware of and use Copilot, but I didn't realize it was
          | built on top of Codex. And I wasn't even aware Codex existed
          | until you commented.
         | 
         | I read hn pretty regularly but unless you're really excited
         | about the AI space a lot of this news washes over you and you
         | mostly ignore it.
        
           | curl-up wrote:
           | This is what amazes me, since it seems like such big news,
           | and people in the field are just not aware of it. Just for
           | reference of what I am talking about, here is a piece of code
           | that was generated, without any cherry picking at all (you
           | just have to trust me on this, sorry) by allowing Codex to be
           | aware of the database with some smart prompting (this is on a
           | DB with music store data):
           | 
           | Q: Best selling artist per country
           | 
           | A: https://pastebin.com/qBVu2mvc
           | 
           | Needless to say, this query works and returns the data I
           | wanted. Whether this is useful or not is up for discussion.
           | But I cannot understand how it's not amazing.
        
         | P5fRxh5kUvp2th wrote:
         | MS has been trying to get AI into intellisense for years now
         | and I always turn it off.
         | 
          | The lack of control over it just makes it annoying. In many
          | ways it's faster to just type out the algorithm myself than
          | to read the suggested code, spend the time understanding
          | what's there, and then convert it to what I need.
         | 
         | Then there's the lack of stability. Yesterday it did something
         | different from what it's doing today, so I can't even use
         | muscle memory to interact with it anymore.
         | 
         | Intellisense has _always_ had that annoyance factor of getting
         | in your way sometimes, forcing you to write code in a certain
         | way to minimize that. All this just makes it more annoying and
         | I don't believe anyone who claims it truly makes them more
         | productive.
        
           | curl-up wrote:
           | What I am talking about has nothing to do with Intellisense
           | or your workflow. What I am saying is that, if someone in
           | 2019 told you that there is a "thing" that is able to take a
            | very complex sentence and with high accuracy (and awareness
           | of the database details) generate 50 lines of SQL, using
           | CTEs, complex JOINs, subqueries, string formatting, date
           | manipulation, etc, you would have been amazed. That thing now
           | exists, and it didn't exist before. It is a complete phase
            | shift and it cannot simply be viewed as an incremental
           | improvement. This is a whole different beast.
           | 
           | Using this beast as intellisense is just one application
           | (called "Copilot") and it has all these annoyance factors
           | sometimes. But I am not talking about that.
           | 
           | To me, this is like we found a way to transform iron to gold
           | with low energy usage, and people are complaining that gold
           | is not that useful. And most chemists not even hearing about
           | the news. I'm constantly amazed by this, every single day, as
           | I read threads like this one.
        
             | Wherecombinator wrote:
             | I've heard AI researchers describe this phenomenon before.
             | As soon as something is discovered or invented it
             | immediately becomes trivial and boring. The goalposts
             | shift, and now they have to find the next amazing thing
             | that will suffer the same fate.
        
             | P5fRxh5kUvp2th wrote:
             | I think starlink is a bigger deal by far.
        
               | curl-up wrote:
               | Can you elaborate? To me (but I know very little about
               | it) it seems like part of the incremental progress in the
               | internet availability. What am I missing?
        
             | wrikl wrote:
             | I can only speak for myself, but it just hasn't been around
             | long enough for me to properly trust any AI-driven tool to
             | give me correct output for anything important.
             | 
             | I'll admit I haven't played with Copilot yet (since I don't
             | think my employer would be happy for me to send off
             | proprietary code to third-party servers, so I've
              | effectively banned myself from using it at work*), but
             | I'd feel that for anything non-trivial like your example of
             | complex SQL queries I'd be reluctant to use the generated
             | output without extra scrutiny (essentially a very fine-
             | toothed code review, which is exhausting).
             | 
             | My opinion will probably change as the tools become more
             | mature, but for now I'm treating them as toys primarily
             | which limits the excitement.
             | 
             | Something like TLDR is less risky as it's not producing
             | code, just summarising it, but I'd still feel wary to trust
             | it since it's such a new field. Maybe this speaks more to
             | my own paranoia than anything else!
             | 
             | EDIT: *and on this topic while I'm here: I'm actually a bit
             | confused (and honestly... jealous?) on the topic of privacy
             | for these kinds of external models. Is everyone who's using
             | Copilot and tools like this working at non-Bigcos? Or just
             | ignoring that it's sending off your source code to a third
             | party server? Or am I missing something here?
             | 
             | It'd be against the rules to use external pastebins or
             | other online tools that send off private source code to a
             | server, so I'm kind of shocked how many devs are talking
             | about how they use AI tools like this at work... is this
             | just a case of "ask for forgiveness, not permission"?
        
               | curl-up wrote:
               | Check out the SQL example I posted below. If you're
               | interested, I'd be happy to post more. To me, this is not
               | about "is the machine accurate enough already". Maybe it
                | isn't and it needs to mature. But the door has been
                | opened, and it's only a matter of "technical details"
                | now.
               | 
               | And I'm not saying this can replace developers, as it
               | clearly isn't capable of building complete codebases and
               | reasoning about the system as a whole. But writing self-
               | contained code snippets seems like a solved problem to
                | me, and I think that's the biggest thing that has
                | happened in our field in a long time.
        
       | bloppe wrote:
       | Why not just type the code into DALL-E 2 and have it paint a
       | picture of what the code does?
        
       | waynesonfire wrote:
       | like the auto summarize feature from MS Word that helped me do my
       | homework back in high school
        
       | PennRobotics wrote:
        | ... until it gets to
        | 
        |     i  = 0x5f3759df - ( i >> 1 );
        
         | sophacles wrote:
         | ...
         | 
         | 4. What the fuck?
         | 
         | ...
         | 
         | And returns the approximate inverse square root.
        
         | Terr_ wrote:
          | TBF, that is actually a situation where a big pattern-
          | matching trained AI would probably easily find and regurgitate
         | the correct answer, just from prior exposure to a very
         | distinctive bit of code.
        
       ___________________________________________________________________
       (page generated 2022-09-22 23:00 UTC)