[HN Gopher] Unexpected ways generative AI will change how you wo...
       ___________________________________________________________________
        
       Unexpected ways generative AI will change how you work forever
        
       Author : jbcranshaw
       Score  : 73 points
       Date   : 2023-01-18 20:14 UTC (2 hours ago)
        
 (HTM) web link (maestroai.substack.com)
 (TXT) w3m dump (maestroai.substack.com)
        
       | a13o wrote:
       | On the topic of Content is King, I have a different view than the
       | author. I think in the case of these trained AIs, 'content'
       | refers to the training datasets and not the generated outputs.
       | 
       | Trained AIs are in something like the early digital streaming
       | days where there was only one provider in town, so that provider
        | aggregated All The Content. Over the following decade, content
        | owners clawed their content back from Netflix and onto
        | competitor platforms -- which takes us to where we are today.
        | Netflix's third-party content has dwindled, forcing them to
        | focus on creating their own first-party content, which cannot
        | be clawed away.
       | 
       | When these generative AIs start to produce income, it will be at
       | the expense of the artists whose art was in the training dataset
       | nonconsensually. This triggers the same content clawback we saw
       | in digital streaming. Training datasets will be heavily
       | scrutinized and monetized because the algorithms powering
       | generative AIs aren't actually carrying much water. Content is
       | King.
        
       | 29athrowaway wrote:
        | If social media resulted in a deluge of low-quality crap, you
        | can now expect the same phenomenon to the power of infinity.
        
       | commitpizza wrote:
       | I paid for Tabnine pro since it was 50% off for a year but I
       | won't renew it unless it massively improves.
       | 
        | I mean, it does give good completions sometimes, but the time
        | saved isn't that great imho. Maybe ChatGPT is better, but it
        | feels like AI still has some way to go to be so useful that you
        | would be less successful without it.
        
       | [deleted]
        
       | RyanShook wrote:
       | Does anyone else feel like the crypto crowd just migrated to AI?
        
       | d_burfoot wrote:
       | I don't have a problem with the main point of the article, but
       | there is a huge terminology confusion that is rapidly gathering
       | force to confuse people. The key breakthroughs of GPT3 et al are
       | not primarily about generative AI. People had been building
       | generative models long before GPT3, and it was generally found
       | that discriminative models had better performance.
       | 
        | The key to the power of GPT3 is that it has billions of
        | parameters, AND those parameters are well-justified because it
        | was trained on billions of documents. So the term should be
        | something like "gigaparam AI" or something like that. Maybe GIGAI
        | as a parallel to GOFAI. If you could somehow build a gigaparam
        | discriminative model, you would get better performance on the
        | task it was trained on than GPT3.
        
         | jbcranshaw wrote:
         | Good point on the terminology. What do you think the right
         | terminology should be? LLMs is too much of a mouthful and is
         | not as informative for the general public, imo. People are also
         | using Foundation Models, which I rather like.
        
           | zone411 wrote:
           | I don't like "Foundation Models" because it's a term invented
           | by Stanford and they're pushing it hard while not really
           | doing all that much in the field.
        
       | impalallama wrote:
        | ChatGPT helped me solve a refactoring bug today. I had spent
        | hours messing around trying to figure out what the issue was
        | until I realized, via asking ChatGPT, that I had misunderstood a
        | piece of the code and the docs. It was able to answer and provide
        | examples (until it errored and crashed) in a way a senior
        | engineer might have been able to.
       | 
       | The funny thing is I had tried just pasting in code and saying
       | "find the bug" and it wasn't helpful at all, but when I posted in
       | a portion and asked it to explain what the code was doing I was
       | able to work backwards and solve the issue.
       | 
        | It's a nice anecdote where the AI felt additive instead of
        | existentially destructive, which has been an overbearing anxiety
        | for me this last month.
        
         | ape4 wrote:
         | That works for a really small amount of code (like 100 lines).
        
         | teddyh wrote:
         | Sounds a lot like "rubber duck" debugging.
        
         | henry_bone wrote:
         | From Star Trek: First Contact: "When you build a machine to do
         | a man's job, you take something away from the man."
         | 
         | You surrendered the need to think to the machine. You are
         | lesser for it. I don't think these AIs are just removing
         | drudgery, like, say, a calculator. They actually do the work.
         | Or more correctly, they produce something that will pass for
         | the work.
         | 
         | Wholesale embracing of this sort of technology is bad for us.
        
           | RhodesianHunter wrote:
           | That sounds pretty Luddite to me.
           | 
            | I don't think the average person wants to be doing the menial
            | work, vs. architecting a grander vision, i.e. the purpose of
            | the work.
        
           | RosanaAnaDana wrote:
           | Meh. I'd rather find out where it can take us. That sounds
           | more fun and interesting.
        
           | XorNot wrote:
           | Does this mean pair programming makes you a worse engineer?
           | 
           | Or even just "asking for a second opinion"?
        
           | Karunamon wrote:
            | I don't know about you, but if I had the ability to dictate
            | requirements and get a program out the other side that
            | matched those requirements, the process of coding would have
            | become mere busywork that could be eliminated, to the benefit
            | of me and everyone else.
           | 
           | I'm sure the buggy whip makers had pride in their work as
           | well.
        
           | acchow wrote:
           | That's how Socrates thought about books. Yet here we are 2400
           | years later and our minds are mostly fine.
        
       | k__ wrote:
       | I used ChatGPT for my work as a writer, and it's pretty nice.
       | 
        | I wouldn't let it write a whole article, but it can really save
        | time on research. It just needs a bit of fact-checking at the
        | end.
        
         | astockwell wrote:
         | Can you elaborate more on your process, and the venue/focus of
         | the writing?
        
           | k__ wrote:
           | I write educational technical articles for a living. Dev
           | tools, frameworks, security, APIs, infrastructure, web3, etc.
           | 
            | I talk to the AI as if I were interviewing a subject-matter
            | expert.
           | 
           | This usually gives a good starting point for an article, if
           | the subject is general enough, and not too new.
           | 
           | It's also good at structuring and rewriting texts. If you
           | already have all the correct data, you can use it to write an
           | outline or something like that.
           | 
           | The problems I saw were that it can't follow a coherent
           | thought for more than a few paragraphs, and the writing style
           | is generally a bit boring.
           | 
            | Also, because the system samples its results to sound more
            | interesting and to prevent overfitting, it regularly tells
            | you crap. One time you get a good answer; then you change one
            | word in your prompt and the result isn't accurate anymore.
           | 
           | But I worked for years as a developer, so I usually notice
           | when things are off, and I also fact check manually with
           | Google when I want to be sure.
        
             | samvher wrote:
             | No offense but this approach worries me - it seems like a
             | novel mechanism to (perhaps inadvertently) generate and
             | spread false information. It takes a lot of fact checking
             | to make sure everything is right, and if you do the
             | research yourself that's a natural part of the process. It
             | seems way too easy to minimize that effort in a process
             | like this.
             | 
             | I was already worried about ChatGPT-like systems generating
             | mass-produced nonsense and polluting the internet, but if
             | people are also going to edit ChatGPT output just enough to
             | make it seem right (a mechanism I hadn't thought of so
             | far), that might make the nonsense a lot harder to detect.
             | 
             | I totally understand the reasoning though, it sounds like a
             | productive workflow.
        
             | thundergolfer wrote:
             | Do you have examples you can provide of these technical
              | articles? Because those topics you offered are really
             | broad and very few people are knowledgeable about all of
             | them, so it sounds like you're filling in your knowledge by
             | querying ChatGPT.
             | 
              | Using ChatGPT to fill in knowledge for technical articles
              | sounds bad. If I'm reading an article about security, I
              | want it written by a security expert, not a semi-layman
              | plus a ChatGPT model.
        
       | zabzonk wrote:
       | > The results are often wildly creative and spookily accurate,
       | giving these models a human-like feel.
       | 
        | or wildly inaccurate, particularly in fields such as programming.
        
         | toss1 wrote:
         | Yup. What seems to be largely missed is that these models have
         | zero understanding, and are actually destroyers of information,
         | not creators. In classic Information Theory, information is
         | basically surprise value -- how much _unexpected_ info is in
          | the message? -- yet these "AI" systems put out the _most
          | expected_ subset in each instance. This highly averaged output
          | is very recognizable and so very striking, but it is not
          | actually very informative (perhaps except in cases where it is
          | specifically used as a verbose search engine, where the query
          | takes advantage of the breadth of the AI's training).
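The "surprise value" invoked above is Shannon self-information: an outcome of probability p carries -log2(p) bits, so the most expected output carries the fewest bits. A minimal sketch of that measure (the probabilities here are invented purely for illustration):

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon self-information of an outcome with probability p, in bits."""
    return -math.log2(p)

# A highly expected continuation carries little information...
common = surprisal_bits(0.5)      # 1.0 bit
# ...while a rare, surprising one carries much more.
rare = surprisal_bits(1 / 1024)   # 10.0 bits
print(common, rare)
```

In this sense, a model that always emits its most probable continuation is, by construction, minimizing the surprisal of its own output.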
        
           | Gh0stRAT wrote:
           | > In classic Information Theory, information is basically
           | surprise value -- how much unexpected info is in the message?
           | -- yet these "AI" systems put out the most expected subset in
           | each instance.
           | 
           | Forgive me, but isn't this kind of moving-the-goalposts?
            | Information is the surprise value from the recipient's point
            | of view, which means "expected" is measured against the
            | recipient's Bayesian prior. Saying "these "AI" systems put
            | out the most expected subset in each instance" assumes that
            | the recipient's priors exactly equal those of the model,
            | which would only be the case when the model is talking to
            | itself.
           | (or I suppose to an even more complex model with perfect
           | knowledge of ChatGPT's weights)
           | 
           | The fact that no information is transferred when the model
           | talks to itself should not be surprising and would apply to
           | any AI. (even including a superhuman post-singularity god-
           | like AI)
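This rebuttal can be made quantitative: the average surprisal of a model's messages, measured against a recipient's prior, is the cross-entropy, which collapses to the model's bare entropy only when the two distributions coincide, i.e. when the model talks to itself. A hedged sketch with toy next-token distributions made up for illustration:

```python
import math

def expected_surprisal(source: dict, recipient: dict) -> float:
    """Average -log2(recipient[x]) in bits, for messages x drawn from source."""
    return sum(q * -math.log2(recipient[x]) for x, q in source.items())

# Toy distributions over a three-token vocabulary (illustrative only).
model  = {"cat": 0.5,  "dog": 0.25, "axolotl": 0.25}
reader = {"cat": 0.25, "dog": 0.25, "axolotl": 0.5}

# Model talking to itself: expected surprisal equals its own entropy.
self_info  = expected_surprisal(model, model)    # 1.5 bits
# A recipient with different priors is, on average, surprised more.
cross_info = expected_surprisal(model, reader)   # 1.75 bits
print(self_info, cross_info)
```

The gap between the two numbers is the KL divergence between the distributions, which is zero exactly when the priors match.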
        
         | aero142 wrote:
         | I've been asking friends in non-programming engineering fields
         | how ChatGPT does in their area of expertise, and I believe
          | programming is the area where ChatGPT is the most accurate.
          | Its solutions to general engineering problems seem blatantly
          | wrong in almost all cases, whereas in programming it seems able
          | to generate mostly correct code for simple, boilerplate-like
          | tasks.
        
           | zabzonk wrote:
           | why is "mostly correct" ok for programming? also, i don't
           | believe that good programmers want to have boiler-plate in
           | their code.
        
             | fooker wrote:
             | Because that's what most programmers achieve too.
             | 
             | You can iterate from there by taking advantage of the last
             | 50 years of software engineering wisdom.
        
           | ccozan wrote:
           | yes, but why? Why is GPT so much better at programming than
           | other tasks?
           | 
            | Can it be that programming itself can be so easily predicted
            | in a generative way, while other tasks require more ingenuity
            | and a real-world model to be solved?
            | 
            | In that case I would totally offload programming to a GPT/LLM
            | AI, while my job would simply be to specify the business case
            | at a high level.
        
             | lancesells wrote:
              | Is it because programming languages are more limited and
              | specific than the ones people speak? There's less room for
              | double meanings, slang, or ambiguous sentence structure.
        
             | impalallama wrote:
              | I have to imagine it's because so much of its training data
              | is readily available programming docs, tutorials, and
              | general Q&A, of which there is an amazing abundance online.
             | How many times have you just pasted an error into google
             | and hoped someone else has asked the exact same question on
             | stack overflow?
        
               | petra wrote:
               | True. Also there's a lot of commented open-source code
               | out there.
        
             | benkay wrote:
             | [dead]
        
         | blablablerg wrote:
          | Or worse, subtly inaccurate. The problem I have with generative
          | AI right now is that its product looks like it makes sense, and
          | sometimes it does, but there is always the risk of total
          | nonsense hidden somewhere in the middle. So you still need
          | someone capable to check and correct it for most professional
          | work, and sometimes that is harder or more time-consuming than
          | making the product itself.
         | 
          | It's the same sort of problem with self-driving cars: they are
          | often correct, but not often enough, and staying alert to
          | correct the AI is paradoxically more work than driving
          | yourself.
         | 
          | AI might manage to push through these barriers, but I remain
          | skeptical of the technology in its current state: statistical
          | machines that are good in the common cases but sketchy at the
          | edges.
        
       | smoldesu wrote:
       | > Widespread adoption of generative AI will act as a lubricant
       | between systems,
       | 
       | I largely agree with this article, but I feel like you have to be
        | careful with these general predictions. Many technologies have
        | been touted as this "business lubricant" tech (ever since the
        | spreadsheet), but the actual number of novel spreadsheet
        | applications remains small. It feels like the same
       | can be said for generative AI, too. Almost every day I feel the
       | need to explain that "generation" and "abstract thought" are
       | distinct concepts, because conflating the two leads to _so much_
       | misconception around AI. Stable Diffusion has no concept of
       | artistic significance, just art. Similarly, ChatGPT can only
        | predict what happens next, which doesn't bestow heuristic thought
        | upon it. Our collective awe-struck-ness has left us vulnerable to
       | the fact that AI generation is, generally speaking, hollow and
       | indirect.
       | 
       | AI will certainly change the future, and along with it the future
       | of work, but we've all heard idyllic interpretations of benign
       | tech before. Framing the topic around content rather than
       | capability is a good start, but you easily get lost in the weeds
       | again when you start claiming it will change _everything_.
        
         | jbcranshaw wrote:
         | > Our collective awe-struck-ness has left us vulnerable to the
         | fact that AI generation is, generally speaking, hollow and
         | indirect.
         | 
         | This totally resonates with me. This is absolutely correct.
         | Thinking about the future of work, there's much of what I do
         | every day in my job that is hollow and indirect. And I would be
         | totally okay if I could have something like ChatGPT do it for
         | me.
        
           | [deleted]
        
         | teknopaul wrote:
         | "but the actual number of novel spreadsheet applications
         | remains small."
         | 
          | That's not my experience; I am continuously amazed by the
          | number of tasks worker bees manage to do in Excel.
         | 
          | I kind of wish MS Access was more of a thing, because when it
          | eventually doesn't scale and you need a "proper" system, it
          | takes a rewrite.
        
           | oogali wrote:
           | It's not just that a system built in MS Access facing scale
           | concerns needs a rewrite from an engineer's perspective.
           | 
           | It's that the business will _also_ accept that it needs a
            | rewrite. As opposed to the current status quo, where they'll
           | ask what's wrong with continuing to use $Slick_and_Fancy_Tool
           | (then act surprised when it stops scaling with regards to
           | whatever business, performance, or compliance barriers you've
           | then reached).
        
           | smoldesu wrote:
           | That's fair enough, I've seen some pretty cool things in
           | spreadsheet software too.
           | 
           | My larger point, though, is that most people end up using
           | spreadsheets to do the same thing. It's fun to imagine novel
           | uses for a spreadsheet, like a DAW or video game, but
           | ultimately it's not very _useful_ for that. Similarly,
            | ChatGPT is great for writing convincing text - that's what
           | everyone uses it for. Can it solve math though? Not very
           | well. Future applications of the tech are more likely to be
           | specialized, in that sense.
           | 
           | Mostly, I'm a curmudgeon and I despise these "flying car of
           | the future" articles. Popular Mechanics printed them for
           | decades, and half a century later nothing has changed (not
           | even the culture writing them).
        
             | mxkopy wrote:
             | I think we knew from the get-go that spreadsheets would be
             | used for pretty much anything to do with numbers. That
             | there aren't any new applications past that understates
             | their general applicability.
             | 
             | I agree though, chatGPT isn't a real flying car. Imagine if
             | someone revolutionized the paper clip. The day-to-day of
             | millions would be forever and irrevocably changed; and
             | almost nothing would happen.
        
         | ChrisMarshallNY wrote:
         | Weren't we all supposed to be lollygagging about, as our robots
         | did everything for us, by now?
         | 
         | I can't wait for Wall-E!
         | 
         | https://www.thelist.com/img/gallery/things-only-adults-notic...
        
           | thih9 wrote:
           | Wall-E was about as much about post-scarcity as it was about
           | escaping reality. To me it looks like we've focused on the
           | second part and we got pretty good at it.
        
       ___________________________________________________________________
       (page generated 2023-01-18 23:00 UTC)