[HN Gopher] Post-GPT Computing
       ___________________________________________________________________
        
       Post-GPT Computing
        
       Author : gradys
       Score  : 214 points
       Date   : 2023-03-24 12:34 UTC (10 hours ago)
        
 (HTM) web link (grady.io)
 (TXT) w3m dump (grady.io)
        
       | carapace wrote:
       | Check out: "Augmenting Human Intellect: A Conceptual Framework"
       | SRI Summary Report AFOSR-3223 by Douglas C. Engelbart, October
       | 1962 https://dougengelbart.org/pubs/augment-3906.html
       | 
       | > Accepting the term "intelligence amplification" does not imply
       | any attempt to increase native human intelligence. The term
       | "intelligence amplification" seems applicable to our goal of
       | augmenting the human intellect in that the entity to be produced
       | will exhibit more of what can be called intelligence than an
       | unaided human could; we will have amplified the intelligence of
       | the human by organizing his intellectual capabilities into higher
       | levels of synergistic structuring.
       | 
       | Now that the computers can talk and think and program themselves,
       | and we can expect them to become exponentially better at it (to
       | some limit, presumed greater-than-human), there is approximately
       | only one problem left: how to _select_ from the options the
       | machines can generate for us.
       | 
       | It's still an open-ended challenge, it's just a new and different
       | challenge from the ones faced by all previous generations. And
       | again, just to repeat for emphasis: this is the _only_
       | intellectual challenge left. All others are subsumed by it
       | (because the machines can (soon) think better than we can.)
        
       | outside1234 wrote:
       | GPT is "Drafts as a service"
       | 
       | That the draft happened to work on the video clip is more luck
       | than something you want to bet your engineering life on.
       | 
        | You still need to go through and verify every character this
        | statistical package spits out - it is not magic - it is just a
        | probabilistic machine.
        
       | metalrain wrote:
        | While chat is an intuitive interface to start with, I think
        | we'll see more integration of these NLP models into traditional
        | tools, as we saw with Adobe Firefly and Unreal Engine. That way
        | users retain control for fine-tuning and problem-specific
        | tasks, but also gain the superpower of performing many actions
        | with a few words.
        | 
        | The key thing for adoption is to make models smaller and more
        | context-specific: we've seen how LLaMA was downsized to run on
        | commodity PCs, and how Stable Diffusion can run on mobile
        | phones. Even when we have to use larger models remotely, cost
        | and ownership matter.
        
       | meghan_rain wrote:
       | We need to push the notion that "closed-source LLMs are super
       | dangerous, an existential risk to humanity".
       | 
       | Basically we need to equate "safety" in LLMs to mean "being open-
       | source".
       | 
       | OpenAI keeps talking about "safety" as the most important goal.
       | If we define it to mean "open-source" then they will be pushed
       | into a corner.
        
         | seydor wrote:
         | Good luck after a decade of convincing the public about the
         | opposite (walled gardens)
        
         | michaelmior wrote:
         | > Basically we need to equate "safety" in LLMs to mean "being
         | open-source".
         | 
          | I think open source is a reasonable _component_ of safety, but
          | I wouldn't want to make them equal. Open source may be
          | necessary for safety, but I wouldn't call it sufficient.
         | 
         | For example, assume the source code, the model, the training
         | data, and all the model weights are open source. How do you
         | know that the model was actually trained using that training
         | data? Very few organizations have the capacity to train models
         | at this scale themselves.
        
         | wseqyrku wrote:
         | > Basically we need to equate "safety" in LLMs to mean "being
         | open-source".
         | 
         | Another way to put it is to make it more accessible to
         | everyone, right?
         | 
          | The opposite of that is happening with nuclear power: they're
          | actually trying to stop any more countries from having the
          | technology at their disposal. So no, making it "open source"
          | doesn't make it safe by any stretch of the imagination.
        
           | blibble wrote:
           | "nuclear power" is open source, and this is one of the
           | fundamental ideas behind the NPT
           | 
           | reactor blueprints have been accessible to IAEA members for
           | something like 50 years
        
         | jackvezkovic wrote:
         | It's "Security Through AI Obscurity"
        
         | LouisSayers wrote:
         | > OpenAI keeps talking about "safety"
         | 
         | Whenever I see this I simply think "monopoly". It smells of
         | anti-competitiveness and is a kind of open forum lobbying to
         | restrict who gets to lead the AI wave (and make a shit tonne of
         | money in the process).
        
         | tel wrote:
         | Why do you believe that "open source" would imply greater
         | safety? Here, I'll loosely define "safety" to be "avoidance of
         | harm to individuals or society that would have otherwise not
         | occurred without the use of LLM technology". Feel free to
         | modify that definition as you see fit, but I'm genuinely
         | curious what the argument is that open source is necessary,
         | sufficient, or even a major component of achieving safety.
        
       | pfdietz wrote:
       | I'm stoked by the idea that NL processing is suddenly becoming
       | much more accessible and powerful. Old, boring static text
       | documents are suddenly "coming alive". Imagine what this means
       | not just for software engineering, but for all engineering, and
       | even if not a single one of these documents is generated by a
       | LLM.
        
       | barrkel wrote:
       | I don't think this is quite correct.
       | 
       | If the LLM has seen lots of instances of usage of an API, it can
       | write code to target the API. It can generalize to some degree,
        | but things go off track the further your requirements are from
        | the training data.
       | 
       | If your code is a lot of duct tape between well-documented, or at
       | least well-named, APIs, that code can be automated. Which is
       | great. That kind of code was always boring to write.
       | 
       | I'm less convinced that LLMs will be great at inventing new
       | abstractions to map to a problem domain, and wiring up these new
       | abstractions in a large codebase.
       | 
       | They'll need augmentation, fine-tuning, guidance, and it's not
       | clear how well it'll all fit together, and where the limitations
       | of the tech will show up as capability cliffs.
        
         | precompute wrote:
          | Yes. Outsourcing engineering to LLMs is like building bridges
          | based on structural integrity in Minecraft. The real product
          | here is just a "language calculator" that also does "code
          | generation" because it makes financial and PR sense. That
          | people even believe these models can be novel says a lot
          | about the way this thing is marketed.
          | 
          | It's also a good time to really take our heads out of the
          | sand and re-evaluate how we expect people to learn civil
          | engineering if their only teacher is a Minecraft world. You
          | _might_ get some people that are perfect in Minecraft. The
          | rest will be hopelessly stunted. Pretty soon it'll pivot to
          | materials engineering to figure out how exactly a Minecraft
          | block adheres to a surface, because we lost the original IRL
          | way to build a bridge.
        
       | angarg12 wrote:
       | Last week I used ChatGPT for the first time for a real world task
       | at work. It was a self-contained lambda function to perform some
       | admin tasks, so it seemed like an ideal fit. Although the
       | experience was good, it's far from the end of programmers. This
       | was my experience:
       | 
       | * Although ChatGPT is pretty good at generating code, it kept
       | making simple mistakes such as calling non-existing APIs or
       | introducing bugs. Some of them it could fix itself, some I had to
       | fix.
       | 
       | * The code provided worked well for the "happy path" but failed
       | miserably for some corner cases. I had to fix that manually.
       | 
       | * The code was working, but I wouldn't consider it production
       | ready. It required some cleanup, unit tests, etc. Again, some of
       | this with ChatGPT, some without.
       | 
       | * Not to mention that I was the one with the knowledge about the
       | domain, what problem to solve, a vague idea of how...
       | 
        | Not to pick on OP, but extracting a few seconds of video from a
        | file is a pretty straightforward task; you can essentially do it
        | with a bash one-liner [1]. My biggest question is how ChatGPT
       | performs with a large codebase, contributed over time by
       | different authors, with complex domain logic and layers of
       | abstraction.
       | 
       | I also had a brief existential crisis, but I just shrugged it off
       | and got back to work.
       | 
       | [1] https://askubuntu.com/questions/59383/extract-part-of-a-
       | vide...
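        | The sort of one-liner in [1] is typically a single ffmpeg
        | invocation with stream copy. A minimal Python wrapper as a
        | sketch (assumes ffmpeg is installed; the file names and
        | timestamps are placeholders):

```python
import subprocess

def clip_command(src, dst, start="00:00:05", duration="10"):
    """Build an ffmpeg argument list that extracts a clip without re-encoding."""
    return [
        "ffmpeg",
        "-ss", start,    # seek to the start timestamp before reading input
        "-i", src,       # input video
        "-t", duration,  # length of the clip, in seconds
        "-c", "copy",    # copy the streams instead of re-encoding
        dst,
    ]

cmd = clip_command("input.mp4", "clip.mp4")
# subprocess.run(cmd, check=True)  # uncomment to run; requires ffmpeg
```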
        
         | kledru wrote:
         | The self-confidence of people who dismiss it after having tried
         | it once is impressive. Having said that, I do not think it is
         | the end of programmers, only some of them.
        
         | markus_zhang wrote:
         | Just imagine Amazon paid someone to train GPT on all available
         | API and tons of correct code, then your company uses another
         | ton of private code to train it...and then provide you with
         | 40-50 prompts.
        
         | idopmstuff wrote:
         | I'm a PM who is relatively technical, but I haven't written
         | more than a few lines of code (stuff like minor modifications
         | to my Shopify site) in a decade.
         | 
         | I saw a bunch of people talking about how GPT helped them code
         | stuff on Twitter, so I thought I'd give it a try. Right now I'm
         | building a sort of simple, mock version of the type of software
         | that integrates with my company's APIs. I've successfully
         | managed to create a simple web application that creates a new
         | object, hits my company's API endpoint to create a
         | corresponding object on our software, allows me to upload a
         | document locally and then allows me to upload that document to
         | our software via API as well. It's all a little messy and
         | clearly not production-ready, but it works. It would've taken
          | me probably a few months of nights and weekends to do this (mostly
         | refreshing myself on JS and Python). Instead I've done it in
         | <24h (would've been shorter except for GPT-4's message limit).
         | 
         | I'm sometimes able to spot and fix GPT's bugs, but even when
         | I'm not, it walks me through adding more logging and
         | successfully debugs issues. Sometimes it takes a few tries and
         | a little direction as to what I suspect the issue is, but so
         | far it's fixed everything that's come up. I don't think this
         | would be doable for a totally non-technical person, but I do
         | think it'll get there pretty soon.
         | 
         | I'm just absolutely blown away.
        
           | devjab wrote:
           | One of the reasons I'm not blown away is that everything
           | we've tasked it to do has resulted in rather terrible
            | answers. A lot of them outright didn't work. When they did,
            | GPT often structured our solutions in a way that wouldn't
            | hold up well over time. Unfortunately you wouldn't
           | necessarily know that unless you really know your tools. In
           | many ways, this isn't too different from people performing
           | brute-force programming through their favorite search engine,
           | but at least most people know that "Google programming" is
           | sort of bad. I think we're going to be cleaning up the GPT
           | messes for decades to come because it's very confidently
           | incorrect and much more accessible as you point out.
           | 
           | I think we're going to see a lot of programmers who are going
           | to trust GPT a little too much, and I think that's sort of
           | scary. For the most part that is going to work out just fine.
           | Often the quality of your programming isn't actually going to
           | matter that much, because as long as it solves the business
            | needs okish, then it's frankly great. That's not always the
            | case, however; imagine someone using GPT getting your
            | healthcare software wrong.
           | 
           | I'm still impressed with it in other areas. I think it'll do
           | wonders in the world of office automation because it seems to
           | have the ability to succeed at this much better than any
           | previous "no-code" attempt where the logic would almost
           | always end up requiring people who are basically programmers
            | for it to work. I think GPT will help here, requiring fewer
            | "superusers" for a department to move their data flows into
            | automation, especially in areas where efficiency and
            | stability aren't that important, if the automation tools
            | mean you don't need three full-time employees moving data
            | from one system to another.
        
             | tablespoon wrote:
             | > ...but at least most people know that "Google
             | programming" is sort of bad. I think we're going to be
             | cleaning up the GPT messes for decades to come because it's
             | very confidently incorrect and much more accessible as you
             | point out.
             | 
             | Speaking of which:
             | https://meta.stackoverflow.com/questions/421831/temporary-
             | po...
             | 
             | > Overall, because the average rate of getting correct
             | answers from ChatGPT is too low, the posting of answers
             | created by ChatGPT is substantially harmful to the site and
             | to users who are asking and looking for correct answers.
             | 
             | > The primary problem is that while the answers which
             | ChatGPT produces have a high rate of being incorrect, they
             | typically look like they might be good and the answers are
             | very easy to produce. There are also many people trying out
             | ChatGPT to create answers, without the expertise or
             | willingness to verify that the answer is correct prior to
             | posting. Because such answers are so easy to produce, a
             | large number of people are posting a lot of answers. The
             | volume of these answers (thousands) and the fact that the
             | answers often require a detailed read by someone with at
             | least some subject matter expertise in order to determine
             | that the answer is actually bad has effectively swamped our
             | volunteer-based quality curation infrastructure.
             | 
             | If we're lucky ChatGPT will poison itself by pissing in its
             | well, but it will take a lot of good things with it.
        
             | dwaltrip wrote:
             | Have you tried gpt-4? It's not perfect but it is a clear
             | improvement for code.
        
             | vehementi wrote:
             | > everything we've tasked it to do has resulted in rather
             | terrible answers
             | 
             | Maybe this is the chatGPT equivalent of "learning to google
             | search properly". You got bad answers, but maybe someone
             | more competent at chatGPT prompts and workflow would have
             | gotten to a better solution more quickly, and we need to
             | figure out what that means
        
               | krainboltgreene wrote:
               | I've never seen so many people grinding to make an
               | autocomplete engine produce "better solutions" to
               | randomized output.
        
               | 8organicbits wrote:
               | Is there a primer on how to engineer prompts? I didn't
               | like my results, tried engineering my prompts a bit, but
                | it kept introducing different errors. I had the domain
               | knowledge to see them, but it felt like whack-a-mole.
        
             | idopmstuff wrote:
             | > I think it'll do wonders in the world of office
             | automation because it seems to have the ability to succeed
             | at this much better than any previous "no-code" attempt
             | where the logic would almost always end up requiring people
             | who are basically programmers for it to work. I think GPT
             | will help here, requiring less "superusers" for a
             | department to move their data flows into automation.
             | Especially in areas, where efficiency and stability aren't
             | necessarily that important if the automation-tools mean you
             | don't need three full time employees moving data from one
             | system to another.
             | 
             | Yeah, at this point I think this is a valid use case for
             | GPT-4 in its current form. I would be comfortable using it
             | to build internal process tools or standalone things like a
             | simple browser extension. Nobody in engineering at my
              | company would be dumb enough to let me start monkeying around
             | with our actual codebase though.
        
             | foobarian wrote:
             | I think your parent is on to something: it won't be the
             | programmers trusting GPT a little too much, it will be the
             | PMs, and there won't be a programmer :-)
        
               | AverageDude wrote:
               | I see an Idiocracy inspired future for programming.
               | 
                | People claim that AI can write code, so they start
                | firing programmers. Universities stop software
                | engineering programs as there is no one taking the
                | courses. People stop writing blogs or Stack Overflow
                | answers. Software engineers either move to other fields
                | or start living off-grid. No new innovation or new line
                | of code is written by a human.
                | 
                | Meanwhile, software quality gets worse with each passing
                | day and there's no one to fix it. AI poisons its own
                | well by generating shitty code, and now even simple
                | tasks are taking 30 seconds. People say, "In the good
                | old days, we used to get a response in under 1 second."
                | Just like how they talk about cars and their durability
                | in the good ol' days.
        
             | sharemywin wrote:
              | Are people actually allowing it to perform actual software
              | development?
              | 
              | State your assumptions.
              | 
              | Read and summarize the previous documents.
              | 
              | Generate a data flow diagram.
              | 
              | Generate a data model.
              | 
              | Get it to inquire about use cases and requirements.
              | 
              | Generate tests for these use cases and requirements.
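              | That checklist could be driven as a simple prompt
              | pipeline, feeding each step's answer into the next. A
              | sketch, with `ask` standing in for an actual LLM call (not
              | a real library function):

```python
# Each step's output becomes context for the next prompt. `ask` is a
# placeholder for a real completion call (e.g. an HTTP request to an API).
STEPS = [
    "State your assumptions.",
    "Read and summarize the previous documents.",
    "Generate a data flow diagram.",
    "Generate a data model.",
    "Inquire about use cases and requirements.",
    "Generate tests for these use cases and requirements.",
]

def run_pipeline(ask, context=""):
    """Run each step in order, accumulating the model's answers as context."""
    transcript = []
    for step in STEPS:
        context = ask(f"{context}\n\n{step}")
        transcript.append((step, context))
    return transcript
```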
        
           | majormajor wrote:
           | It'll definitely help people learn faster - the "what to try
           | next" problem is HUGE in programming discovery.
           | 
           | Modern languages (and tools like autocomplete) have already
           | helped that a lot compared to assembly code or binary, this
           | looks like the biggest jump in a long time. The path of
           | programming so far has been moving from "describe how to do
           | something" to "describe what to do" which this is certainly
           | in line with.
        
           | helf wrote:
           | Honestly, it seems the people most blown away by things like
           | this are people who haven't much experience in the various
           | fields.
           | 
           | It's easy to get amazed by something that can halfway do
           | something you can't do at all automatically. But as others
           | have pointed out, it's not that great at it and not knowing
           | enough to do it yourself means you don't know enough to catch
           | and fix bugs.
           | 
           | So this move to using chatgpt and similar in production by
           | people who otherwise wouldn't be able to do things in
           | production is worrisome, imo.
        
         | cactusplant7374 wrote:
         | I see people solving a lot of simple problems with it. How
         | about asking it to design a robot that makes and hands you a
         | cup of coffee in the morning? Something that really hasn't been
         | done before.
         | 
         | And afterwards it cleans the cup and puts it back in the
         | cabinet.
        
           | dTal wrote:
           | It is not good at that kind of novelty, but my impression is
           | that the difficulty is that it is limited to a single pass
           | through the network - such "loops" as there are are
           | "unrolled" within the network, into a very limited stack
           | depth [0] (it would be _very_ interesting to analyze these
           | networks for self-similarity).
           | 
           | If you want it to solve arbitrarily complex problems, you
           | need to set up some sort of loop. People are already feeding
           | the outputs back in as input in various primitive ways, but I
           | suspect the real breakthrough will come when someone trains
           | some sort of recursive transformer from scratch. (Assuming
           | the current networks waste neurons in unrolling loops, we
           | might possibly even see smaller models).
           | 
           | [0] Try the following family of prompts: "_ is an example of
           | _, which is an example of _, which is an example of _...."
           | etc to a depth of your choosing. At some point it bottoms out
           | and you can't get any more levels out of it.
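            | The "feed the output back in as input" loop described above
            | can be sketched in a few lines. This is only an
            | illustration; `ask_llm` is a stand-in for whatever
            | completion API is being called, not a real library function:

```python
# A minimal sketch of the feedback loop: keep handing the model its own
# draft back until it stops changing it, or until we give up.
def refine(ask_llm, task, max_rounds=5):
    draft = ask_llm(f"Solve: {task}")
    for _ in range(max_rounds):
        revised = ask_llm(f"Task: {task}\nDraft: {draft}\n"
                          "Fix any mistakes and return the improved version.")
        if revised == draft:  # fixed point: the model made no further changes
            break
        draft = revised
    return draft
```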
        
         | redmaverick wrote:
         | Upton Sinclair: "It is difficult to get a man to understand
         | something, when his salary depends upon his not understanding
         | it."
         | 
         | This quote highlights the challenges of accepting new
         | information or ideas when they might jeopardize one's
         | livelihood or status quo.
        
         | hn_throwaway_99 wrote:
         | My experience was different than yours. I had an existential
         | crisis, and it only got worse the more I used it. To be clear,
         | I did _not_ think ChatGPT was ready to replace me _now_ (and,
         | mind you, I only used the 3.5 version of ChatGPT). But it was
         | so easy for me to see how in just a few years it can accelerate
         | its pace of learning.
         | 
         | I was discussing a bug with a colleague, so for curiosity's
         | sake I decided to plug a similar question into ChatGPT. I was
         | quite impressed with the solution it gave, and interestingly,
         | it had the same subtle bug that our code had. What blew me away
         | is that when I pointed out the bug, ChatGPT _fixed the code by
          | itself_. On one hand I felt "phew, at least it needed me to
         | point out the bug", but then I thought "I just (perhaps
         | stupidly) provided training data so that down the road ChatGPT
         | would get it right the first time."
        
           | creeble wrote:
           | But did you? Does it store and re-train on all of that input?
        
             | groestl wrote:
             | It will, rest assured. There is a reason that you have a
             | history of chats in the free version, and it's not because
             | it's so handy for you.
        
               | matesz wrote:
                | Wow, how about people using the paid API? Can OpenAI
                | retrain on data provided there?
                | 
                | ps. ChatGPT: "You should only share information that you
                | are comfortable with being stored or potentially used as
                | training data."
        
           | IanCal wrote:
           | You can also just ask it to check if its result matches the
           | spec or check it for bugs. I've done that and had it find
           | things without me telling it what was wrong.
        
           | sharemywin wrote:
           | This video kind of scared me even more.
           | 
           | https://www.youtube.com/watch?v=0dBq9sKTKTY
        
             | hn_throwaway_99 wrote:
             | Holy fuck.
             | 
             | I don't see how people can see stuff like that and say "oh,
             | it's just a fancy Markov chain generator" or "it can't
             | reason". Even if that stuff is _nominally_ true, how can
             | people not be totally blown away by this? Just a couple
             | years ago I think people would have been amazed that it can
             | have totally natural, grammatically correct conversations.
                | Moreover, for nearly 3/4 of a century the scientific
                | community has pretty much coalesced on only using the
                | _output_ to define intelligence (aka the Turing Test).
                | While I understand that ChatGPT may not 100% be there
                | yet, I see no reason to believe that all this
                | interaction people are having with it won't be fed back
                | into it to drastically improve its responses over time.
        
               | jltsiren wrote:
               | > Moreover, for nearly 3/4 of a century the scientific
               | community has pretty much coalesced on only using the
               | output to define intelligence (aka the Turing Test).
               | 
               | It's more like the scientific community has spent the
               | last 50+ years criticizing the Turing Test. Passing as a
               | human is a nice engineering goal, but there has been a
                | lot of doubt about using input/output behavior as the only
               | measure of intelligence. If you took a basic AI class
               | before machine learning became popular, the chances are
               | the class spent more time on the criticism than on the
               | test itself.
        
               | ChatGTP wrote:
                | Did you watch the video? You got punk'd.
        
         | bmitc wrote:
         | You made me think: it feels like ChatGPT is just a less
         | accurate StackOverflow answer generator. The culture of needing
         | to use StackOverflow is not a good one, so I'm not sure why
         | people are considering ChatGPT to be.
        
           | davidhaymond wrote:
           | This is one of my major concerns with ChatGPT, and I'm not
           | sure why it hasn't been discussed more. StackOverflow is a
           | massively useful resource, to be sure, but it takes knowledge
           | to wade through the outdated (or just plain bad) answers.
           | StackOverflow can be a useful starting point, but I don't
           | think I have ever copy/pasted code directly from
           | StackOverflow. I don't think any LLM will be able to replace
           | the skill of reading the docs and learning your tools.
           | 
           | I have no doubt that ChatGPT will become even better than
           | StackOverflow at answering questions. Is this really going to
           | make us better programmers?
        
           | otabdeveloper4 wrote:
           | Yeah, it seems like a better search engine would be easier
           | and more accurate to use in this case.
        
             | sharemywin wrote:
              | but it can usually answer your specific question, not just
              | something close to your question.
        
           | circuit10 wrote:
           | "The culture of needing to use StackOverflow is not a good
           | one"
           | 
           | Not being able to write code without it might be bad but it's
           | a valuable resource and you should use it when it's available
           | to you (for both)
        
           | stevedonovan wrote:
           | Yes, and at least StackOverflow will often give you some
           | minority opinion, not just a snippet to be pasted into your
           | code. Especially if something a little tricky like
           | cryptography.
           | 
           | Consider this classic:
           | https://stackoverflow.com/questions/12122159/how-to-do-a-
           | htt...
        
           | politician wrote:
           | Counterpoint: ChatGPT will answer your question in a few
           | moments whereas on StackOverflow, you might need up to 60
           | minutes for the question to be closed as "offtopic". ChatGPT
           | never asks "why do you want to do this?"
        
             | asdff wrote:
              | Who even asks on Stack Overflow? The exercise is to
             | generalize your issue, and then the thread from 7 years ago
             | with your answer appears.
        
               | sharemywin wrote:
               | but what's the point if a new tool does it for you?
        
         | afro88 wrote:
         | Huh. 2 days ago I built 3 significant internal tools for my
         | company that automated important workflows for our growth, in a
         | language that I rarely use (js), in 4 hours. Something we have
         | been putting off for months because we figured it would take a
         | week or two. It was an exhilarating experience.
         | 
         | Yesterday I got a complex data structure out of it in 1h that
         | we'd been talking about but not implementing because it would
         | have taken a couple of days to get right.
         | 
         | In all cases it made mistakes and I had to rely on my
         | experience as an engineer to ask the right questions and fix
         | things. But god damn it made me insanely more productive.
         | 
         | Don't shrug this off and go back to work. You'll get left
            | behind, and may not have work to go back to.
        
           | krainboltgreene wrote:
           | So wait, just to be clear, you deployed production code in a
           | language you don't use regularly? And this is a good thing?
           | 
           | This is supposed to take programming jobs?
           | 
           | HN is incredible.
        
             | afro88 wrote:
             | Who said anything about production code in a language I
             | don't use regularly?
             | 
             | Internal tools that automate 3 workflows we'd been doing
             | manually. 2 node scripts and a super simple web app exposed
             | on our private network.
        
         | LeoPanthera wrote:
         | For what it's worth, that was exactly my experience with
         | GPT-3.5, but GPT-4 is a _lot_ better at generating code. Almost
         | spookily good, at least for some languages. It makes far fewer
         | mistakes.
        
           | ollien wrote:
           | Maybe the ChatGPT implementation of GPT-4 is different than
           | the one in Bing AI, but I tried to ask Bing AI to write a
           | fairly simple Python-based ini-parser yesterday (and by that
           | I really mean using the built-in configparser module). It
           | got a good amount of the way there, but attempted to index a
           | string with a string key, which was weird. After I pointed
           | out this mistake multiple times, it produced something that
           | _could_ work in some cases, but was definitely brittle.
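A minimal sketch of the kind of parser being described, using only the stdlib configparser module (the section and key names below are invented for illustration):

```python
import configparser

SAMPLE = """\
[server]
host = example.com
port = 8080
"""

# A ConfigParser is indexed by section name, then option name;
# indexing a raw string like SAMPLE["server"] would raise TypeError,
# which is the kind of bug described above.
parser = configparser.ConfigParser()
parser.read_string(SAMPLE)

host = parser["server"]["host"]
port = parser.getint("server", "port")  # typed accessor

print(host, port)  # → example.com 8080
```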
        
             | crop_rotation wrote:
             | I can confirm that GPT4 is much better than Bing on such
             | tasks (have used both extensively for same prompts.)
        
               | d0mine wrote:
               | Bing is backed by GPT-4, no?
        
               | HDThoreaun wrote:
               | My understanding is that they're similar but not the
               | same. I think the rlhf process was different with GPT4
               | receiving much more human feedback.
        
               | crop_rotation wrote:
               | It is but they don't have to be exactly the same. Bing
               | might be tuned for searching real time information and
               | maybe cost less since at search engine scale is much
               | higher (just a guess on my part).
        
             | LeoPanthera wrote:
             | > Maybe the ChatGPT implementation of GPT-4 is different
             | than the one in Bing AI
             | 
             | Yeah I think it definitely is, but I don't know why. Bing
             | is better at looking things up (perhaps unsurprisingly) but
             | Chat4 is better at creating things.
        
             | ilaksh wrote:
             | Could be using different temperatures and prompts.
        
         | komposit wrote:
         | This skepticism absolutely baffles me. Have you been using
         | GPT-4? To unlock GPT for real you have to prompt it carefully
         | and find a way to improve the feedback loop for improving
         | code. It is only a matter of time until tools arrive that
         | integrate this into your development environment and give it
         | access to test/console output, such that it can suggest code
         | and iterate on the result. It's not perfect yet, but I
         | seriously feel the nature of our work will change
         | fundamentally within two years.
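The feedback loop being predicted here is easy to sketch generically; `ask_model` below is a stand-in for whatever LLM call such a tool would make (hypothetical, not a real API):

```python
import subprocess
import sys

def run_snippet(code: str) -> tuple[bool, str]:
    """Execute a Python snippet in a subprocess; return (ok, combined output)."""
    proc = subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True, timeout=30)
    return proc.returncode == 0, proc.stdout + proc.stderr

def iterate(code: str, ask_model, max_rounds: int = 3) -> str:
    """Feed runtime errors back to the model until the snippet runs cleanly."""
    for _ in range(max_rounds):
        ok, output = run_snippet(code)
        if ok:
            break
        code = ask_model(code, output)  # model proposes a fixed version
    return code

# Toy stand-in "model" that knows how to fix exactly one bug.
fixed = iterate("print(1/0)",
                lambda code, err: "print(1)" if "ZeroDivision" in err else code)
```

Tools wired into test runners follow essentially this shape: generate, execute, feed the failure output back, repeat.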
        
           | illiarian wrote:
           | So... nothing changes. It will be the tool for which you will
           | need to manually construct prompts and clean up output
           | (including imagined non-existent APIs).
           | 
           | The availability of a button inside an IDE doesn't make this
           | a fundamental change in how we work
        
             | dbtc wrote:
             | Nothing changes the same way that there is no difference
             | between writing software in assembly and writing it in
             | python.
        
             | deeviant wrote:
             | If the button can do, let's say, half or more of the work
             | for you when you press it, you're lying to yourself if you
             | think it won't change anything.
        
             | crop_rotation wrote:
             | It is so far ahead of even what the best IDEs do. For one,
             | I have not seen GPT4 ever use non existent APIs. You don't
             | need to carefully construct prompts. It tolerates typos to
             | a good extent. You can just type a rough description and
             | the output won't need manual cleaning. You might need to
             | re-prompt it to focus on something (like removing all
             | heap allocations to improve performance).
        
               | 49531 wrote:
               | I've seen it use non-existent APIs a lot. Working on a
               | project that uses a dialect of Python it told me it knew
               | (Starlark) was like pulling teeth. It would tell me to
               | use a Python feature Starlark didn't have; I'd ask it to
               | rewrite the code without that specific feature, and it
               | would use another feature Starlark didn't have. When I
               | asked it to write the solution using neither, it would
               | just give me the first solution again.
        
               | illiarian wrote:
               | > For one, I have not seen GPT4 ever use non existent
               | APIs.
               | 
               | Have you asked it to use any API that appeared after
               | September 2021 (that's the cut off date for its data)?
               | 
               | Have you asked it to write code in less popular languages
               | (e.g. Elixir)?
               | 
               | Have you asked it to write code for less popular or
               | unavailable APIs (smart TV integrations)?
        
               | crop_rotation wrote:
               | I have used it to write Nim and Zig code (both not too
               | popular languages).
               | 
               | I also asked it to write using non existent but plausible
               | sounding APIs, and it flat out says "As of my knowledge
               | cutoff in September 2021, I have no knowledge ...."
               | 
               | Are you talking about GPT4 or the default ChatGPT?
        
               | illiarian wrote:
               | I've seen similar claims about GPT 3.5 and Copilot, so I
               | won't hold my breath.
               | 
               | To quote GPT-4 paper:
               | 
               | "GPT-4 generally lacks knowledge of events that have
               | occurred after the vast majority of its pre-training
               | data cuts off in September 2021, and does not learn from
               | its experience. It can sometimes make simple reasoning
               | errors
               | which do not seem to comport with competence across so
               | many domains, or be overly gullible in accepting
               | obviously false statements from a user. It can fail at
               | hard problems the same way humans do, such as introducing
               | security vulnerabilities into code it produces.
               | 
               | GPT-4 can also be confidently wrong in its predictions,
               | not taking care to double-check work when it's likely to
               | make a mistake".
               | 
               | > I also asked it to write using non existent but
               | plausible sounding APIs, and it flat out says "As of my
               | knowledge cutoff
               | 
               | Ask it to write a deep integration with Samsung TV or
               | Google Cast. My bet is that it will imagine non-existent
               | APIs (as those APIs are partly unpopular and partly
               | closed under NDAs)
        
               | leishman wrote:
               | Yeah it was basically useless for an Elixir project I was
               | working on. That will probably change at some point I'm
               | sure.
        
               | raincole wrote:
               | How do you know GPT4's cut-off date...? I mean, it says
               | that, but it could totally have "learned" its (supposed)
               | cut-off date from the GPT3.5 output all over the
               | internet, right?
        
               | crop_rotation wrote:
               | The model repeats it all the time "As of my knowledge
               | cutoff date"
        
               | raincole wrote:
               | Yes, and this fact doesn't tell me anything, as I know
               | an LLM is completely capable of saying things that
               | aren't true.
        
               | CapstanRoller wrote:
               | That claim doesn't come from ChatGPT, it comes from
               | OpenAI themselves.
        
               | illiarian wrote:
               | > How do you know GPT4's cut off date...?
               | 
               | "GPT-4 generally lacks knowledge of events that have
               | occurred after the vast majority of its pre-training
               | data cuts off in September 2021, and does not learn from
               | its experience."
               | 
               | GPT-4 paper, page 10:
               | https://arxiv.org/pdf/2303.08774.pdf
        
             | visarga wrote:
             | The step up in accuracy from one-shot solutions to
             | iterative ones is large.
        
             | cortesoft wrote:
             | I don't know, I feel like it really does change how we can
             | interact with a computer.
             | 
             | It feels like we are headed to a world where we can
             | interact with a computer much more like they do in Star
             | Trek; you ask the computer to do something using plain
             | English, and then keep giving it refinements until you get
             | what you want. Along the way, it is going to keep getting
             | better and better at doing the common things asked, and
             | will only need refinements for doing new things. Humans
             | will get better at giving those refinements as the AI gets
             | better at responding to them.
             | 
             | It is already incredibly good for being such a new
             | technology, and will continue to rapidly improve.
        
           | angarg12 wrote:
           | It's not skepticism, it's curbed optimism.
           | 
           | I don't feel that my job is at risk of disappearing. Instead
           | I think we'll be using LLMs as tools to do our job better.
        
           | orangesite wrote:
           | "You're holding it wrong"
        
             | vineyardmike wrote:
             | There's a difference between the iPhone "you're holding it
             | wrong" argument and not using a tool correctly. If you try
             | to hammer a screw, it may enter the wood but that doesn't
             | mean it's the correct way to use it.
        
           | ilaksh wrote:
           | I am working on this. I'm broke, so I have to do odd GPT
           | jobs from Upwork to make ends meet, and development is
           | paused. But the front-end stuff works, at least as far as
           | skipping copy-paste goes.
        
           | elif wrote:
           | I think it's safe to assume anyone trying to criticize
           | ChatGPT who has access to GPT-4 would specify that their
           | attempts used even the latest and greatest. The disclosure
           | is in the interest of their core argument.
           | 
           | Therefore the inverse can safely be inferred from
           | nondisclosure.
        
         | crop_rotation wrote:
         | For the sake of completeness, can you specify whether you
         | used GPT-4 or GPT-3.5 ChatGPT? The difference is huge. I too
         | was not too impressed by the default ChatGPT, but GPT-4 is a
         | huge improvement.
        
         | thequadehunter wrote:
         | TBH, the most useful thing it does for me is write my
         | boilerplate code and tedious statements. That's infinitely more
         | useful than the knowledge stuff.
         | 
         | The other day I was working with the Cisco Meraki API...I knew
         | exactly what the script needed to do, but the calls were
         | tedious and I didn't feel like learning the names of all the
         | JSON columns, so I just had ChatGPT do it. I had to fix a
         | couple mistakes, but the 20 minutes it took was better than
         | having to read all the documentation.
        
           | canadianfella wrote:
           | [dead]
        
       | mcculley wrote:
       | I am currently doing some difficult work that involves figuring
       | out the right computational geometry algorithms to apply to my
       | dataset in order to get the answers my users need in a reasonable
       | time. ChatGPT is of no use to me there.
       | 
       | When I need to ask for boilerplate code for fetching a web
       | resource or using a well-defined API, ChatGPT is great.
       | 
       | ChatGPT has made the mundane plumbing a lot easier. It is a
       | threat to plumbers at this point. Many of those plumbers are now
       | freed up to do more valuable work. I am happy to have it, so I
       | can focus on higher value work.
       | 
       | If your only skill is at this kind of low level plumbing, you are
       | in danger. But I doubt this is the case for most.
        
         | RivieraKid wrote:
         | Are you using GPT 3.5 or GPT-4? It's a huge difference.
        
           | mcculley wrote:
           | I am using GPT-4. It is much better. But I still haven't had
           | it suggest any new algorithms. It just riffs on what it was
           | trained on, as expected.
        
         | UncleEntity wrote:
         | > ChatGPT is of no use to me there.
         | 
         | Today.
         | 
         | What happens when it understands computational geometry and can
         | calculate an optimal strategy to apply it to a dataset and end
         | goal you provide?
        
           | mcculley wrote:
           | I will be happy to provide value higher up instead of dealing
           | with this frustrating problem. I want the answer, not the
           | code that generates the answer. I am writing that code now
           | because it does not exist yet. So far, I have only had
           | ChatGPT give me answers that can be derived from existing
           | code. Regardless, I look forward to it being smarter.
           | 
           | (My intuition is that ChatGPT, like all technologies before
           | it, will end up making more wealth and more jobs possible.)
        
         | LouisSayers wrote:
         | Have you tried it with the Wolfram plugin?
        
           | mcculley wrote:
           | Yes, and Wolfram/Mathematica is great when I have figured
           | out which algorithm to use. ChatGPT may even soon help me
           | discover the potential algorithms. It is not helpful to me
           | for that today.
        
       | brushfoot wrote:
       | > I think over time, we'll see that what many of us really liked
       | about building software deep down wasn't coding, but intricately
       | imagining the things computers could do for us and then getting
       | them to do it.
       | 
       | Spot on. It's a good time for existential reflection: Who would
       | you have been hundreds or thousands of years ago? Who will you be
       | now that technology is radically changing again?
       | 
       | There will always be interesting, creative challenges like
       | programming, whatever form they take.
        
         | TeMPOraL wrote:
         | > _Who would you have been hundreds or thousands of years ago?_
         | 
         | Given how I grew up ingesting science and science fiction
         | alike, literally attributing half my personality to _Star Trek:
         | The Next Generation_ being on TV during my formative years?
         | It's really hard to tell. I have very little connection to
         | things which were possible before the late 19th / early 20th
         | century.
         | 
         | In my mind, being thrown back centuries in time, I'd spend my
         | life trying to use everything I remember from present day to
         | give everyone a head start on science and technology. Being
         | thrown back centuries in time, but without the memory of
         | specific things I've learned in present day? That sounds like a
         | particularly sadistic death sentence.
        
           | the_only_law wrote:
           | I was thinking less of "what if I went back in time" and more
           | "what if I was my ancestor"
           | 
           | Obviously you won't be able to tell for sure, but I'd guess
           | that 1000 years ago I'd probably be a serf, and 100 years
           | ago I likely would have fought in a large war and then done
           | some form of physical labor or subsistence farming
           | afterwards, based on what my family was doing then.
        
             | TeMPOraL wrote:
             | Oh, in this sense, yes, I agree. There's only so much
             | agency any human ever has, had, or will have - and the
             | space of possible choices is determined by the overall
             | technological and economic landscape of the time.
             | 
             | In this light, sure, the me from 1000 years ago would most
             | likely be a serf, die from malnutrition, war or robbery. Me
             | from 100 years ago would probably be lying dead in the
             | trenches of Verdun, or shot on the streets of Krakow, or
             | otherwise dead in WW1; for military-aged males in Europe, I
             | guess whether or not one got drawn into fighting was a coin
             | flip.
        
           | tizio13 wrote:
           | Reading your comment made me think that you would enjoy
           | reading the Magic 2.0 series. First book is Off to be the
           | Wizard.
        
         | bad_username wrote:
         | I often think of legal professionals and law makers as "the
         | programmers of people". I think I would have become a lawyer
         | 100 years ago.
        
         | sharemywin wrote:
         | While I think people can adapt I worry about things changing
         | faster than people can adapt.
         | 
         | Do you invest in a college education if that field is
         | obliterated by the time you get out?
         | 
         | What about your debts if you lose your job and companies
         | aren't hiring because they can just use AI for a tenth of the
         | cost in 6 months?
        
         | falcor84 wrote:
         | >Who would you have been hundreds or thousands of years ago?
         | 
         | I'll just use this opportunity to recommend the video game
         | "Ancestors: The Humankind Odyssey". It's a game where you start
         | as an early hominid and have to gradually discover how to make
         | and use rudimentary tools in order to take control of your
         | environment, literally evolving in the process. It's weird and
         | unforgiving, and it made me really think.
        
       | KrugerDunnings wrote:
       | Working as a software engineer, I often feel like I am living
       | in the world of The Handmaid's Tale as a woman with a
       | functioning womb, where the whole of society is organised
       | around controlling everything I do. Hopefully LLMs will change
       | this, but I do not underestimate the intellectual laziness of
       | most "knowledge workers".
        
         | [deleted]
        
         | anon7725 wrote:
         | You're saying this un-ironically in the post-Roe world?
        
         | sirsinsalot wrote:
         | Highly paid middle class white male (statistically) compares
         | his existence to the brutal oppression of women in a fictional
         | book.
         | 
         | What insight!
        
           | atq2119 wrote:
           | Their username is rather fitting.
        
             | KrugerDunnings wrote:
             | This is what we in the comedy business call making fun of
             | someone making fun. I call myself KrugerDunnings as an
             | obvious play on the Dunning-Kruger effect, because I try
             | to approach every topic from a point of epistemic
             | humility, and in part that means being self-aware enough
             | to realise I might just be a total idiot. I have done the
             | hard work of reflecting on my shortcomings as a person,
             | and I try to show my weaknesses openly with all of you in
             | the hope we can find within them our common humanity,
             | while you just issue cheap insults.
        
           | KrugerDunnings wrote:
           | You know very little about my life, and I'd like to keep it
           | that way. One would think that I live in a pure seller's
           | market, but this is not true because of dynamics similar to
           | those in the show. The comparison is clearly hyperbole,
           | evident from the use of an absurd fictional situation, and
           | is not meant to express an equality relationship but one of
           | equivalence (learn category theory, bitch). It is very
           | insightful, if you ask me, to recognise one's own
           | predicament by empathising with the struggles of a
           | fictional character, and this is what literary critique is
           | all about at the end of the day. That there might be
           | something hard about my life because of organised, socially
           | accepted structural abuse is made all the more evident by
           | the brigadooning and gaslighting I receive for dealing with
           | my own issues in jest. I guess I must be one of the lucky
           | girls.
        
             | sirsinsalot wrote:
             | Just like how waiting for 10 minutes in Starbucks during my
             | morning commute ... made me feel like a persecuted Jewish
             | prisoner awaiting execution!
             | 
             | /s
             | 
             | Your original point was ridiculous, tone deaf, offensive
             | and completely without substance other than to wave the
             | victim flag about _something_ I guess? Who knows.
        
               | KrugerDunnings wrote:
               | No, you are waiting in line because you want your stupid
               | coffee; the Jew about to be executed does not want to be
               | executed, so no equivalence class here, baby.
        
       | moomoo11 wrote:
       | Are people asking it to generate code like "generate a random
       | color hex code" or are they trying to use it to write code you're
       | going to put in production for users with access controls and
       | various complexities?
       | 
       | Because yeah it works fine for basic programming things but I
       | believe you need to know wtf you're doing when it comes to
       | anything more complex, even something basic like some of our
       | single endpoint services.
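For scale, the trivial end of that spectrum really is one-liner territory; a random color hex code in Python is just:

```python
import random

def random_color_hex() -> str:
    """Return a random CSS-style color such as '#3fa2c8'."""
    return "#{:06x}".format(random.randrange(0x1000000))
```

The complexity gap between this and an authenticated, access-controlled endpoint is exactly where the commenter's caveat applies.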
        
         | asdff wrote:
         | I am positive there are plenty of undergrads who are probably
         | going to try to get chatgpt to write up an entire app they can
         | sell so they don't have to try and find a job in a recession. I
         | imagine you could get this entire process automated from the
         | prompts to the app store submissions, maybe you could have
         | 10,000 junk apps each giving you maybe a dollar a month in
         | return before long, that would be a good take for passive work
         | after you set up your automation environment.
        
           | saulpw wrote:
           | Even if you created 10 junk apps a day, it would still take
           | you 3 years to create 10,000 junk apps. And each one requires
           | more than $1 to list on the app store.
        
             | [deleted]
        
             | asdff wrote:
             | How about the play store? Plus once you have the pipeline
             | set up the only limit for your rate of deployment is how
             | much compute you throw at it, which is cheap these days.
        
       | Madmallard wrote:
       | The devil is entirely in the details unfortunately, and it will
       | make GPT basically unusable for anyone but existing software
       | engineers for doing actual non-trivial programming tasks. At
       | least as it stands now.
        
       | dsign wrote:
       | In the last few weeks, I've noted on myself how I've been going
       | through several stages of the Chat-GPT "disease", or whatever it
       | is.
       | 
       | My first reaction was to be afraid for my money-making skills.
       | My second was fear of us making ourselves irrelevant--that fear
       | still lingers.
       | 
       | My third wave of fright, cemented by days burning my eyes looking
       | at a screen parsing logs and trying to figure out bugs for my
       | corporate master, was, "when did my imagination go for a
       | vacation? Old boy, don't tell me now that you have run out of
       | ideas of things to make, of things to have an AI army to help you
       | build." And now I dread that all of this AI is just hype, that it
       | will never be good enough to come for our jobs without also
       | coming for our jugulars, or that we will make it too damn
       | expensive to matter[^1].
       | 
       | -------
       | 
       | [^1]: Capitalism has a way of leveraging economies of scale to
       | make certain goods cheaper. But there are physical limits--what
       | if Moore's law with regard to power consumption is really dead,
       | and we as a collective _really_ decide to spare power?
        
         | marcosdumay wrote:
         | > And now I dread that all of this AI is just hype, that it
         | will never be good enough to come for our jobs
         | 
         | Some day it will be. Not those ones, those ones are only hype.
         | Also whether or not they'll come for our jugulars depends on
         | what they are commanded to do. But we will get them eventually,
         | and they will be as good as articles like this pretend the
         | hyped ones are.
         | 
         | The funny thing is that nobody will use the current panic to
         | prepare. And everybody will use the current panic as an excuse
         | to avoid preparing once the real AIs come. So they'll get us
         | completely unprepared.
        
         | Version467 wrote:
         | > a collective really decide to spare power
         | 
         | It's either _my_ imagination that has gone on vacation, or
         | yours is running wild, but _that_ is the one thing I really
         | can't see at all. _Reducing_ power consumption? I don't think
         | that's happening any time soon, or ever, really.
        
       | seydor wrote:
       | It is interesting how history repeats itself here: when Google
       | started, it was just a list of links to the websites that
       | contained your answer. As the tech advanced, it increasingly
       | started giving out the answers on Google's own pages.
       | 
       | OpenAI's plugins are equally temporary. Right now they will be
       | generating actions through APIs, but GPT4 is probably already
       | capable of performing the same actions in your browser. All it
       | needs is a "control my browser" plugin that allows it to make
       | that reservation on Expedia, without Expedia having any control
       | over it. It will inevitably eat the world again.
        
         | sharemywin wrote:
         | If they won't, I don't see why Alpaca couldn't be trained to
         | do so.
        
       | Jeff_Brown wrote:
       | Some betting market needs to host bets on when AI will put
       | programmers out of their jobs. I don't expect it to happen for
       | decades. (Although I might bet that it will happen earlier, as
       | insurance in case it does.)
        
       | janetacarr wrote:
       | I could just be rationalizing here, but I think AI will be
       | illegal soon. The idea of banning AI to protect many well paid
       | middle-class jobs will be a slam dunk for any politician.
       | 
       | There will be no Post-GPT computing world, just the Turing police
       | and console cowgirls.
        
         | anon7725 wrote:
         | It couldn't be a worldwide ban, so that would just be shooting
         | yourself in the foot over even a short to medium term.
        
         | antibasilisk wrote:
         | Given nascent geopolitical competition, I don't think the west
         | can afford this.
        
       | breck wrote:
       | > OpenAI made the extraordinary and IMO under-discussed decision
       | to use an open API specification format, where every API provider
       | hosts a text file on their website saying how to use their API.
       | 
       | Interesting! Somehow I missed this.
       | https://spec.openapis.org/oas/latest.html
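For anyone who hasn't seen one, an OpenAPI description is just a structured document listing endpoints and their shapes. A minimal sketch as a Python dict (the endpoint and operation here are invented; the top-level field names follow the OpenAPI 3 spec):

```python
import json

# Minimal OpenAPI 3 document describing one hypothetical endpoint.
spec = {
    "openapi": "3.0.1",
    "info": {"title": "TODO Plugin", "version": "v1"},
    "paths": {
        "/todos": {
            "get": {
                "operationId": "getTodos",
                "summary": "List all todos",
                "responses": {"200": {"description": "OK"}},
            }
        }
    },
}

manifest_text = json.dumps(spec, indent=2)  # what a provider would host
```

Because the format is an open standard, any LLM vendor can consume the same hosted file.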
        
         | TchoBeer wrote:
         | extracting 5s of a video feels pretty trivial. Not that this
         | isn't extremely impressive, but it doesn't feel "come for your
         | jobs" impressive.
        
         | rdg42 wrote:
         | Mixing up OpenAI with OpenAPI here?
        
           | pimterry wrote:
           | No, OpenAI's plugin system uses OpenAPI:
           | https://platform.openai.com/docs/plugins/introduction
        
       | liampulles wrote:
       | I think as rather tech savvy people, we forget the degree to
       | which most of the world population really struggles to use
       | computers well[1]. The potential of this chat based AI technology
       | to expand the market is massive.
       | 
       | [1] https://www.nngroup.com/articles/computer-skill-levels/
        
         | mock-possum wrote:
         | You do still need some kind of savvy to evaluate whether what
         | the chat bot tells you is correct though.
        
       | sebzim4500 wrote:
       | > OpenAI made the extraordinary and IMO under-discussed decision
       | to use an open API specification format, where every API provider
       | hosts a text file on their website saying how to use their API.
       | This means even this plugin ecosystem isn't a walled garden that
       | only the first mover controls. I don't fully understand why they
       | went this way, but I'm grateful they did.
       | 
       | Why is this extraordinary? What would be the advantage of going
       | through all the effort of defining a new format just to create
       | busywork for people trying to integrate with you?
       | 
       | It's not like there would be anything stopping Bard/Alpaca/etc.
       | from reading the same format as OpenAI.
        
         | gradys wrote:
         | One could imagine an alternative where the API manifest was
         | provided privately to OpenAI in the developer console or where
         | the plugin developer had to implement an OpenAI-specific API
         | structure. Doing it this way is more, dare I say, _open_ than
         | it might have been.
        
           | sebzim4500 wrote:
           | There'd still be nothing stopping Bard from adopting an
           | extremely similar API structure, and people would just upload
           | the same manifests to both.
        
             | Version467 wrote:
             | Yes, and when you're Google that might work out for you,
             | but the point is that anyone who creates an LLM can now
             | integrate a whole range of services without the services
             | needing to provide their manifest to each of them
             | individually. This increases competition _between_ AI
             | companies, which is why it is a surprising move.
        
       | [deleted]
        
       | [deleted]
        
       | fendy3002 wrote:
       | I'm not doubting that someday AI will be able to do better than
       | junior devs, maybe even lower-level senior devs. But I doubt
       | it'll be able to replace higher-level seniors, at least not for
       | tens of years.
       | 
       | Then I predict we'll get more business analysts than
       | programmers, since management will still need people to
       | translate their needs to AI.
        
         | visarga wrote:
         | > Then I predict we'll get more business analysts than
         | programmers
         | 
         | Why would analysts be harder to replace than devs?
         | 
         | The question is: how will competition influence the job
         | market? If everyone has AI, everyone has the same powers. So
         | how do you differentiate yourself? You put more humans in the
         | loop, like "human plugins". You need humans to extract the
         | most from AI.
        
           | fendy3002 wrote:
           | Because management isn't good at defining and understanding
           | specifications, constraints, and use cases. And I believe we
           | don't want AI that can impose constraints without our
           | consent, so a middleman will still be required.
           | 
           | The job market will stay almost the same, in that capital
           | and networks will still net you businesses.
           | 
           | The problem is how to regulate duplication, because IMO,
           | with the power of AI, patents are basically useless.
        
           | sharemywin wrote:
           | Even if you know company or industry jargon, the thing can
           | be fine-tuned on that.
           | 
           | Or just build an embedding database that pulls the most
           | semantically similar paragraphs and lets the model use them
           | as a basis for the conversation.
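The retrieval idea in that comment can be sketched in a few lines. Everything here is illustrative: the 3-d vectors stand in for real embedding-model output, and a production system would use a vector store rather than a linear scan.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, docs, k=2):
    # docs: list of (text, vector) pairs; rank by similarity to the query
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy 3-d "embeddings"; a real system would call an embedding model.
docs = [
    ("refund policy", [0.9, 0.1, 0.0]),
    ("shipping times", [0.1, 0.9, 0.0]),
    ("warranty terms", [0.8, 0.2, 0.1]),
]
context = top_k([1.0, 0.0, 0.0], docs, k=2)
# The retrieved paragraphs would then be prepended to the LLM prompt.
```

The fine-tuning route bakes the jargon into the weights; the retrieval route above keeps it in a database that can be updated without retraining.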
        
       | nunez wrote:
       | I agree that GPT will make creating software redundant.
       | 
       | The writing is definitely on the wall for outsourcing and MVP-
       | style work. GPT can create a landing page and a
       | backend/frontend for a business _literally today_. You just
       | have to ship it, and it won't be long until even that isn't
       | needed.
       | 
       | There will still be a lot of value in understanding how systems
       | work and interact with each other, at least until ML is able to
       | build and maintain entire systems.
       | 
       | Until that happens, there will still be a lot of value in being
       | able to dive into codebases and refactor/optimize as needed, at
       | least in the medium-term.
       | 
       | Once platform engineering is mostly automated and running AI-
       | generated binaries is de-risked, then code quality doesn't really
       | matter. Hell, _code_ won't even matter at that point.
        
         | slfnflctd wrote:
         | > at least until ML is able to build and maintain entire
         | systems
         | 
         | To me, this sounds a lot like "at least until ML is able to
         | reach level 5 self-driving". We don't even know if this is
         | possible yet without AGI (which we also don't know is
         | possible). We can get _close_, but... that last 1% is a
         | bitch, and it makes all the difference.
        
       | dopeboy wrote:
       | I appreciate this article and can sympathize with the
       | disorientation the author and many here at HN feel. It can feel
       | unnerving to know that parts of our jobs might become automated.
       | 
       | I'm processing this news in realtime like many of you and forming
       | a plan:
       | 
       | 1. Understand how LLMs work. I've heard the Wolfram paper is
       | good; open to more suggestions here.
       | 
       | 2. Continue to practice using real implementations of LLMs
       | including ChatGPT and co-pilot.
       | 
       | 3. Find pain points within our company that AI can make more
       | efficient, and implement solutions.
       | 
       | If anyone feels the same way and wants to form a working group
       | with me, give me a shout. Email is in my bio.
        
         | tjvc wrote:
         | I think this is a good take. I hope to do the same.
         | 
         | For the understanding part, Andrej Karpathy has a YouTube
         | playlist that explains neural networks. I made a start on it
         | today and found it quite accessible.
         | 
         | https://www.youtube.com/watch?v=VMj-3S1tku0&list=PLAqhIrjkxb...
        
       | meghan_rain wrote:
       | > OpenAI made the extraordinary and IMO under-discussed decision
       | to use an open API specification format, where every API provider
       | hosts a text file on their website saying how to use their API.
       | This means even this plugin ecosystem isn't a walled garden that
       | only the first mover controls. I don't fully understand why they
       | went this way, but I'm grateful they did.
       | 
       | Did OpenAI just commit a trillion dollar mistake?
        
         | karmasimida wrote:
         | It's essentially a manifest of API calls, documentation, and
         | parameters.
         | 
         | I don't think converting it in and out of a proprietary
         | standard would be that difficult.
         | 
         | There is little to no vendor lock-in effect.
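For reference, such a plugin manifest is a small JSON file hosted at a well-known URL. The field names below follow OpenAI's published plugin manifest format, but the values and URLs are made up; the point is that any model vendor could consume the same two files.

```python
import json

# Field names follow OpenAI's documented ai-plugin.json format
# (hosted at /.well-known/ai-plugin.json); values here are invented.
manifest_json = """
{
  "schema_version": "v1",
  "name_for_model": "todo",
  "description_for_model": "Manage the user's TODO list.",
  "auth": {"type": "none"},
  "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"}
}
"""

manifest = json.loads(manifest_json)
# Any vendor could fetch the same two files: the manifest, then the
# standard OpenAPI spec it points to, which describes the endpoints.
spec_url = manifest["api"]["url"]
```

Because the second file is plain OpenAPI, the only vendor-specific part is the thin manifest wrapper, which is trivial to translate.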
        
         | tough wrote:
         | That they use standard OpenAPI for parsing doesn't mean that
         | anyone can build into ChatGPT without permission (it's a
         | waitlist for now).
         | 
         | I don't see the mistake here.
        
       | bloppe wrote:
       | I love how all these AI researchers who write small code
       | snippets in Jupyter notebooks all day think LLMs are the end of
       | software. Not disparaging AI research; it clearly takes a lot
       | of expertise and work to do it well. But that's not software
       | development.
        
       | tapkolun wrote:
       | Just asking, which language model is capable of extracting 5s of
       | a video automatically?
        
         | gradys wrote:
         | ChatGPT with plugins!
         | https://twitter.com/gdb/status/1638971232443076609
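The demo doesn't show the actual code the model wrote; a plausible sketch of the kind of command it would generate (assuming ffmpeg is available in the plugin's sandbox) looks like this:

```python
import shlex

def trim_command(src: str, dst: str, seconds: int = 5) -> str:
    # -t limits the output duration; -c copy avoids re-encoding
    return shlex.join(["ffmpeg", "-i", src, "-t", str(seconds), "-c", "copy", dst])

cmd = trim_command("input.mp4", "first5s.mp4")
# cmd == "ffmpeg -i input.mp4 -t 5 -c copy first5s.mp4"
```

One caveat the model would also have to know: stream-copy trims cut on keyframe boundaries, so a frame-accurate first-5-seconds clip would need to re-encode instead of using `-c copy`.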
        
       | m3kw9 wrote:
       | There has never been a case where better tools meant fewer
       | software developers. Software will only get more complex and
       | full-featured as competition rises because of it.
        
       | mftb wrote:
       | > Yesterday, I watched someone upload a video file to a chat app,
       | ask a language model "Can you extract the first 5 s of the
       | video?", and then wait as the language model wrote a few lines of
       | code and then actually executed that code, resulting in a
       | downloadable video file.
       | 
       | What chat app? Is this gpt-4? I haven't seen anything executing
       | the code that is generated. So is the above quote a hypothetical
       | or what?
        
         | ejstronge wrote:
         | Yes, this happened using GPT-4 and a coding plugin:
         | 
         | https://twitter.com/gdb/status/1638971232443076609
        
           | mftb wrote:
           | Roger, ty for the info.
        
       | bobse wrote:
       | OpenAI is not open-source, hence it's shit.
        
         | meghan_rain wrote:
         | Simple as
        
           | scottmf wrote:
           | This is the nuanced insightful discussion I come to HN for
        
       | losvedir wrote:
       | One thing I don't understand well is how much computation using
       | GPT-4 takes. Some of these discussions remind me of Bitcoin as
       | a global payments processor: sure, it can work, but it's doing
       | a tremendous amount of computation and the maximum rate of
       | transactions it can sustain is pretty low.
       | 
       | I know it used a _huge_ amount of energy / GPU cycles / time to
       | _train_, but now that the weights are computed, what's involved
       | in running it? I know the model is huge and can't be run on an
       | ordinary developer's machine, but I believe requests to it can
       | be batched, so I don't really know what the amortized cost is.
       | Right now, this is all hidden behind OpenAI and its credits; is
       | it running at a loss right now? How sustainable is using GPT-4
       | and beyond as a day-to-day part of professional life?
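A rough answer: a transformer forward pass costs on the order of 2 FLOPs per parameter per generated token. GPT-4's size isn't public, so the numbers below are assumptions (GPT-3's 175B parameters, one A100, 30% achieved utilization), purely to show the shape of the estimate:

```python
# Back-of-envelope inference cost, using the common ~2 FLOPs per
# parameter per token rule of thumb. GPT-4's parameter count is not
# public, so GPT-3's 175B is used here as a stand-in assumption.
params = 175e9
flops_per_token = 2 * params          # ~350 GFLOPs per generated token

a100_peak_flops = 312e12              # A100 dense BF16 peak, per datasheet
utilization = 0.3                     # assumed achieved fraction of peak

tokens_per_sec_per_gpu = a100_peak_flops * utilization / flops_per_token
# A few hundred tokens/sec per GPU under these assumptions -- far
# cheaper per request than training, but not free at scale.
```

In practice single-stream decoding is memory-bandwidth-bound rather than compute-bound, which is exactly why batching many requests per forward pass (the weights are read once and shared across the batch) is what makes serving affordable.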
        
         | ChatGTP wrote:
         | I'd say it's a problem and a reason why they won't tell us more
         | information.
        
       | rektide wrote:
       | Not a ton of new material for me to think over, but did catch
       | this random mention, which is super cool & I didn't know:
       | 
       | > _OpenAI made the extraordinary and IMO under-discussed decision
       | to use an open API specification format, where every API provider
       | hosts a text file on their website saying how to use their API.
       | This means even this plugin ecosystem isn't a walled garden that
       | only the first mover controls. I don't fully understand why they
       | went this way, but I'm grateful they did._
        
       | erdaniels wrote:
       | I can't wait for the positive feedback loop of statically trained
       | LLMs being retrained on data that was generated from the N-1th
       | generation of statically trained LLMs.
       | 
       | There's so much talk about what these models can generate,
       | which is cool in relation to plugins, but there's still a lot
       | of interesting code to write, companies to build, and ideas to
       | formulate that an LLM cannot do on its own. If you're terrified
       | that your software engineering job is at risk, I urge you to
       | just take a beat.
        
         | kakadzhun wrote:
         | If Reinforcement Learning is anything to go by, then a naive
         | implementation of learning from past models will overfit to the
         | previous model and start performing worse than even earlier
         | models.
         | 
         | There was a paper by someone @ Microsoft who tried to train a
         | boardgame playing AI like this. The "best" models started
         | losing to beginner level players from some point onwards.
        
       | igammarays wrote:
       | > Yesterday, I watched someone upload a video file to a chat app,
       | ask a language model "Can you extract the first 5 s of the
       | video?", and then wait as the language model wrote a few lines of
       | code and then actually executed that code, resulting in a
       | downloadable video file.
       | 
       | I missed this. Can someone show me what he is talking about?
        
         | nicky0 wrote:
         | https://twitter.com/gdb/status/1638971232443076609
        
       | PaulWaldman wrote:
       | Current higher-level programming languages were developed so
       | humans could write software closer to their natural language.
       | If in the future humans will be writing and debugging little
       | code, these LLMs will naturally evolve to writing assembly
       | directly. Scary to think about, but it also makes me wonder how
       | many non-technical people cope today with the "black box" of a
       | computer.
       | 
       | About twenty years ago, I had a professor explain to the class
       | that Rational Rose would be replacing us all... yet here we
       | still are.
        
         | imtringued wrote:
         | I don't understand why it would write in assembly. It's not
         | portable, it makes verification difficult, and assembly has
         | less grammatical structure, which LLMs rely upon.
        
           | PaulWaldman wrote:
           | If there was never a need for humans to understand code, are
           | higher level languages really the most efficient?
        
             | sebzim4500 wrote:
             | Probably not, but I doubt ASM is either. It's too low
             | level, and it doesn't make sense for an LLM to have to do
             | things like instruction selection, which can be done far
             | better by existing tools (LLVM etc.).
             | 
             | Maybe it could just be an alternative syntax for an
             | existing language which is more optimized for input/output
             | to an LLM.
        
               | TchoBeer wrote:
               | I am thinking that the latter might eventually emerge,
               | probably as part of a bigger toolchain, e.g. langchain:
               | something like Java bytecode, which is low level and
               | portable, but optimized for the ways that LLMs (perhaps
               | interfacing with other tools) work.
        
           | anon7725 wrote:
           | I wonder what the best output language for an LLM is? The one
           | that has the most training examples? Or something that has
           | other properties that make it easier to generate?
           | 
           | I'd guess that the languages with the fewest implicit
           | behaviors (so no Scala or Haskell) would be easiest. Maybe Go
           | is the generation language of choice?
        
           | fendy3002 wrote:
           | I believe when AIs become a hive mind and decide to develop
           | a programming language, they'll start with an assembly-like
           | language to abstract the bytecode, then move to a very
           | specialized higher-level language. The next step will be to
           | develop an OS optimized for their use case, one that
           | provides interfaces for their own assembly-like language.
        
       | arbuge wrote:
       | > OpenAI made the extraordinary and IMO under-discussed decision
       | to use an open API specification format, where every API provider
       | hosts a text file on their website saying how to use their API.
       | This means even this plugin ecosystem isn't a walled garden that
       | only the first mover controls. I don't fully understand why they
       | went this way, but I'm grateful they did.
       | 
       | It's a good point and some have already got this to work:
       | 
       | https://twitter.com/vaibhavk97/status/1639281937545150465
       | 
       | Given that there are no technical obstacles to drop-in
       | compatibility here, I wonder if we'll soon start seeing
       | exclusivity requirements and such.
        
       | marstall wrote:
       | I tried to get chatgpt4 to generate a basic react app that had a
       | public page and a private page. you get access to the private
       | page by authenticating with a google auth popup. gpt valiantly
       | generated code and instructions for google auth. the code was
       | impressive but buggy (outdated api version), but successively
       | pasting errors into chatgpt went most of the way toward fixing
       | it.
       | 
       | the instructions for configuring google auth were off. I tried a
       | number of different ways to get gpt to give me the right
       | instructions, but to no avail.
       | 
       | so it was back to the old way, of spending a few hours reading
       | google's documentation (which I'm doing today) to figure it out.
       | 
       | once I'm there, I feel confident I could better coach chatgpt to
       | instruct me. though I wouldn't necessarily need the help at that
       | point.
       | 
       | on the code side, staring at the google auth api code it had
       | generated, I was faced with a hard truth. I didn't understand
       | this code. to iterate with it, essentially to _develop_ it, I
       | would continue to be dependent on GPT. Even if there was a one-
       | liner needed, I wouldn't be able to come up with it on my own.
       | I'd always have to rely on this outside "brain". How can that
       | be more efficient than a tight REPL loop conducted by me, an
       | evolving master of this API?
       | 
       | And how will we humans even maintain knowledge of these API
       | surfaces if we are not putting in our hours and hours of
       | repetitive usage of them? We become ignorant of the evolving
       | capabilities of the computing platform. And chatgpt becomes
       | useless without humans who understand what's out there, what's
       | needed.
        
         | ChatGTP wrote:
         | Stop being so practical and get wrapped up in the hype at
         | once, sir!
        
       | Hizonner wrote:
       | So, the parts where AI makes human labor irrelevant, and where
       | that's a disaster for 99.999 percent of humans unless the whole
       | economy is restructured, isn't exactly news. If ChatGPT doesn't
       | do that, something else will. It wasn't going to be more than 50
       | years no matter what, and now I don't think it'll be more than
       | 20.
       | 
       | The part that is kind of a shock to me is the impact of the
       | centralization on what you can even _think about doing_. If
       | your application falls under their arbitrary definition of
       | "unsafe", then you can't do it. Not even manually, probably,
       | because the infrastructure for that will go away. If your _one-
       | off question or task_ doesn't meet their approval, it doesn't
       | happen.
       | 
       | Basically not only do the owners of these things become the only
       | really important people in the economy, but they also get a new
       | kind of direct control over people's lives.
        
       | Kon-Peki wrote:
       | ChatGPT will destroy GitHub and NPM long before it destroys
       | programming.
       | 
       | What do I need them for if I can get equivalent code written for
       | me on-demand?
        
       | maherbeg wrote:
       | An easy way to solve some of the problems of employment is to
       | start reducing what "full-time hours" means. With this first
       | wave of LLMs, we can decrease down to 35 hours. With the next
       | wave, maybe we move down to 30 hours.
       | 
       | Once we can send LLMs to meetings with each other, we can move
       | down to 15 hours of purely joyful work :-D
        
       | NHQ wrote:
       | I attempted to enter the venture capital Universe with designs on
       | AI Operating Systems a few years ago.
        
       | tarruda wrote:
       | > Yesterday, I watched someone upload a video file to a chat app,
       | ask a language model "Can you extract the first 5 s of the
       | video?", and then wait as the language model wrote a few lines of
       | code and then actually executed that code
       | 
       | Have we already solved AI safety problems? It seems like LLMs can
       | now execute shell commands on our computers.
        
         | spudlyo wrote:
         | There is now a code interpreter[0] plugin for ChatGPT. It's
         | not clear to me whether it's available to folks who have been
         | granted access to the plugin alpha test, but it's running in
         | a sandboxed execution environment somewhere -- not on our
         | computers.
         | 
         | [0]: https://openai.com/blog/chatgpt-plugins#code-interpreter
        
         | crop_rotation wrote:
         | They don't execute it on users' computers. They execute it on
         | OpenAI's computers.
        
       | forty wrote:
       | My impression is that these AI code generators, if they end up
       | working well enough that many people who don't know how to code
       | can replace people who do, will be to coders what Monsanto is
       | to farmers: we'll have tons of devs who don't know how to do
       | their jobs without those proprietary tools, and who will
       | struggle to earn enough money (they'll be easy to replace and
       | poorly paid) to pay for their code generator subscription. I'm
       | not excited. I'm not too worried either, though :)
        
       | crop_rotation wrote:
       | What happens to social mobility in the post-GPT world?
       | Knowledge work (not just software) has been one big option for
       | people to climb the social ladder. If AI can reasonably do all
       | knowledge work in the future, the number of social-climbing
       | opportunities will drastically decrease. And no, UBI will not
       | create more opportunities for social mobility. It seems like
       | more and more people will have to compete for fewer and fewer
       | social-climbing opportunities.
       | 
       | Also, what happens to Europe? All these companies behind LLMs
       | are from the US, and Europe is nowhere to be found. This seems
       | like it will dramatically accelerate the wealth difference
       | between the US and the EU.
        
         | mrtksn wrote:
         | The panic is needless. If one hour of design work generates
         | $100 income, then one might assume that
         | MidJourney/Dall-E/StableDiffusion will generate trillions of
         | dollars, but the world doesn't work like this. What will happen
         | is that the design jobs will transform.
         | 
         | As you might have noticed, the AI boom will decimate the code
         | writing jobs as well, something that the EU is behind on.
         | Europe missed the "tech" age, but notice how the EU is not any
         | poorer than the USA. Sure, some countries are poorer than
         | others, but not everywhere in the US is Silicon Valley. Why?
         | Because despite the EU missing out on "tech", actually the EU
         | is very technologically advanced. Tech doesn't mean only low-
         | touch high-scale computer-based businesses. There are chemists,
         | biologists, anthropologists out there who don't know how to
         | write a single line of JS and are paid like 1/5th of a junior
         | JS developer, but the work they do is very valuable to society.
         | Guess they don't need to learn JS anymore.
         | 
         | Also, notice how despite the thousands of layoffs, the US job
         | data keeps coming out very positive - there's no unemployment
         | problem. This is because of the markets, but AI will have
         | similar effects. The world no longer needs that many CSS
         | experts and React gurus who pull in $200K; the world apparently
         | needs more hard-tech engineers and retail workers.
         | 
         | The AI thingy is devastating just for a subset of the "tech"
         | workers and creative industries. It will enable other types of
         | people and industries.
         | 
         | Startups who are trying to solve food production issues, for
         | example, might finally outshine the next grocery delivery
         | startup.
        
           | travisjungroth wrote:
           | > but notice how the EU is not any poorer than the USA.
           | 
           | EU is significantly poorer than the US. Lots of different
           | ways to measure it, but it's a factor of roughly 1.5-2x in
           | purchasing power parity.
        
             | margorczynski wrote:
             | The problem is that many people, when they say "EU", are
             | thinking mainly of Germany and the other top-3/5
             | countries. But even Germany is behind the US in terms of
             | GDP/capita, and probably PPP as well.
        
               | namaria wrote:
               | Most of the US is behind the leader locations as well.
               | And most Europeans in rich metropolises have better
               | lives than Americans in rich metropolises: less
               | pollution, less traffic, more free time, more safety.
        
             | booleandilemma wrote:
             | I don't think this invalidates the parent's point though.
             | 
             | I'm just waiting for an "Ask HN: What are some job
             | alternatives for people who know programming and can't get
             | a job anymore since ChatGPT replaced us?"
        
             | mrtksn wrote:
             | It's just accounting differences. Life in Europe is not
             | any different. If junior developers don't make $100K and
             | a visit to the doctor doesn't cost $10K, the overall
             | economic activity appears lower, but it isn't.
        
               | thequadehunter wrote:
               | Just FYI for non-Americans: most Americans are insured,
               | and doctor's visits don't cost $10K. Major surgeries
               | might, but your insurance usually caps your out-of-
               | pocket costs at a number in the $2-8K range for the
               | whole year.
               | 
               | Not saying the system isn't bad, but $10K for a
               | doctor's visit is kind of a stretch...
        
               | ipatec wrote:
               | It's not accounting differences at all: Europeans (and
               | I am one) live in tiny housing even compared to people
               | in NYC; we have fewer cars (you can claim it's due to
               | public transport, but if public transport were not
               | available most people could not afford cars anyway);
               | and overall we have less leisure expenditure and less
               | disposable income.
               | 
               | That $10K doctor is a myth, and certainly not something
               | the $100K developer will have to pay; that's covered by
               | his company. Healthcare is an issue in the US when
               | you're at the bottom of the food chain.
        
         | boh wrote:
         | This is a question to ask when it actually becomes a reality.
         | The AI-taking-jobs narrative is more of a marketing ploy to
         | convince companies to buy AI services, but the truth is, none
         | of this stuff is anywhere near market-ready. If a person is
         | doing a job an AI bot can currently replace, you've probably
         | already replaced that person with cheap labor overseas.
         | Regardless of whatever optimism is being channeled into the
         | hype about AI's "potential", it hasn't convinced many
         | businesses.
        
           | daniel_reetz wrote:
           | Businesses take time to react, and this is recent. I'm
           | close to Hollywood, and these technologies are seeing their
           | first value-generating uses on every project I'm privy to.
           | What you see in public is just that - public.
        
         | CuriouslyC wrote:
         | As AI progresses, job options will reduce to various flavors
         | of people who tell AI what to do, or tell other people what
         | to do, or do physical things that machines are bad at. Over
         | time that will reduce to executives, "architects" of various
         | sorts, social media entertainers, and manual laborers/direct
         | customer service. The entire "middle" portion of most
         | organizations, which exists to connect the people making the
         | high-level decisions with the boots on the ground, is going
         | to disappear.
        
           | tablespoon wrote:
           | > Over time that will reduce to executives, "architects" of
           | various sorts, social media entertainers and manual
           | laborers/direct customer service.
           | 
           | And at the very end, it will reduce to capital _only_,
           | with no need for labor at all. Most people will be
           | unemployed, and whatever capital they've amassed is
           | unlikely to be enough to sustain themselves and their
           | families for the long term. They (you) will end up as
           | little more than impotent ants to AI-fueled Elon Musks,
           | neglected until the infestation needs to be cleared to
           | make way for some project.
        
             | booleandilemma wrote:
             | It doesn't really make sense though does it? Musk is rich
             | because people buy his cars. If we're all impoverished
             | ants, no one is going to be buying cars. Musk's money has
             | to come from somewhere.
        
               | tablespoon wrote:
               | > It doesn't really make sense though does it? Musk is
               | rich because people buy his cars. If we're all
               | impoverished ants, no one is going to be buying cars.
               | Musk's money has to come from somewhere.
               | 
               | It does make sense, but you're not thinking about it
               | clearly because you're too tied up in existing social
               | structures. The end state "AI-fueled Elon Musks" (note
               | that's a type, not a particular man) don't need common-
               | man customers or their money, because they don't need to
               | pay labor to operate their capital. They can directly
               | operate their capital themselves, so they'll just do
               | whatever the heck they want and nearly everyone who's now
               | an employee becomes an ant.
               | 
               | At that point the main economy would mainly consist of
               | billionaire ego projects and some trade between large
               | corporations to support them. Common people would scrape
               | by on billionaire largess and by squatting on resources
               | not currently needed by billionaire ego projects and
               | using it for small-scale subsistence production.
        
               | booleandilemma wrote:
               | Thanks for explaining that. It's terrifying.
        
             | pfdietz wrote:
             | The end state is when the cost of goods is determined by
             | externalities. Capital and labor costs will be minimal;
             | what you pay for is the pollution produced in the
             | manufacture of the goods.
             | 
             | We may not be that far from the point when energy-
             | intensive, latency-insensitive computing tasks are best
             | located in space, to take advantage of cheap continuous
             | solar power. The power capabilities of the next-gen
             | Starlink satellites are impressively cheap.
        
             | CuriouslyC wrote:
             | While that's technically a valid potential future, it's
             | unrealistic just because society would tear itself apart
             | long before it reached that limit state.
        
               | tablespoon wrote:
               | > While that's technically a valid potential future, it's
               | unrealistic just because society would tear itself apart
               | long before it reached that limit state.
               | 
               | I don't think it's that unrealistic. The trick will be,
               | not going too fast, managing a few separate transitions,
               | and making sure capital maintains control of the
               | institutions with the monopoly on the use of force. The
               | masses don't tend to act to project their interests until
               | it's too late.
        
               | tjpnz wrote:
               | Under a capitalist system hell bent on endless and
               | unfettered growth there's no slowing down. All it's going
               | to take is a handful of players across a handful of
               | industries to set things in motion. Perhaps AI will
               | inadvertently eat the rich.
        
               | rootusrootus wrote:
                | > The masses don't tend to act to protect their
                | interests until it's too late.
               | 
               | The masses are already showing signs of restlessness, and
               | the only real problem right now is wealth inequality.
               | Actual unemployment rates remain low. Forward in time a
               | little, let's say 20% unemployment due to AI. The only
               | way anybody is going to maintain their monopoly on use of
               | force is if they hire every one of those 20% to be
               | police. Right now the ratio of police to citizens is
               | really low, and the ratio of weapons to civilians really
               | high. I don't think the masses will wait all that long.
        
               | tablespoon wrote:
                | >> The masses don't tend to act to protect their
                | interests until it's too late.
               | 
               | > The masses are already showing signs of restlessness
               | 
                | IMHO, "restlessness" doesn't mean anything. It would
                | be expected in an AI-driven usurpation of labor.
                | People have already been restless for decades due to
                | de-industrialization, and that mainly got us Trump and
                | an opioid epidemic, but the factories are still gone.
               | 
               | The key to fucking over the masses is making sure the
               | "restlessness" doesn't get too strong, and doesn't have a
               | clear (and correct!) villain identified, and maintaining
               | a sense of inevitability.
               | 
               | > I don't think the masses will wait all that long.
               | 
               | IMHO, they probably will. Any individual or small group
               | who takes action will be pilloried as wackos and thrown
               | in jail. A larger movement will be (rightly)
               | characterized as an insurrection and dealt with harshly.
               | 
               | People are complacent, and often don't realize they're
               | really losing something until it's already slipped from
               | their fingers.
               | 
               | I also think the Western world lacks the ideological
               | tools to stop technologies like this. They'd basically
               | have to start looking at technology like Amish do:
               | rejecting technology that would undermine their social
               | structure, rather than expecting the social structure to
               | adapt to the technology.
        
         | anonyfox wrote:
          | Focussing on the climbing is the core problem, I think. Why
          | even do this? We collectively should own the machines and
          | guarantee a just wealth distribution, so that it's
          | impossible for a few to have much more than others. Then
          | every increase in machine work is a great thing for
          | everyone, instead of increasing competition between fewer
          | and fewer people.
          | 
          | Europe, taken together, is the single biggest
          | market/economy in the world, by the way, and the US is
          | actually falling behind into developing-country territory
          | when you look at the population and their access to basic
          | services. And just because right now it is convenient to
          | rely on US companies (and we're deep allies, btw) doesn't
          | mean Europeans couldn't spin up the same tech if really
          | needed.
        
           | golergka wrote:
           | You're describing a dystopia.
        
           | nextlevelwizard wrote:
            | >We collectively should own the machines and guarantee a
            | just wealth distribution, so that it's impossible for a
            | few to have much more than others.
           | 
           | OK, put your money where your mouth is and send me 10% of
           | your pay check.
        
             | namaria wrote:
             | OK, put your money where your mouth is and stop paying
             | taxes
        
               | nextlevelwizard wrote:
               | You wanna tell me how you got from not wanting socialism
               | to not paying taxes? Or do you think taxation _is_
               | socialism?
               | 
               | And while we are here obviously I do my best to pay as
               | little taxes as possible, but due to where I live I do
               | end up paying more than 30% of my salary in taxes.
        
               | namaria wrote:
               | Are you gonna explain how you went from 'co-ownership and
               | better wealth distribution' to 'ok then give me 10% of
               | your paycheck'?
        
               | nextlevelwizard wrote:
                | Socialism is splitting your shit. Learn your
                | philosophy. This is like how students at my college's
                | "socialist party nights" always got angry when I took
                | beers out of their fridge. That's literally what
                | socialism is.
        
               | namaria wrote:
               | You're the only one raising the socialism strawman...
               | 
               | Did you ever put beers into those fridges? Or just took
               | them? Because that's what looting is.
        
               | nextlevelwizard wrote:
                | Socialism is literally looting people who have stuff
                | and handing it out to others. I didn't have money or
                | beer, and they were supposedly socialists, so I gave
                | according to my ability and took according to my needs.
               | 
               | I guess Socialism is always nicer when you see yourself
               | on the receiving end. We are both in the top 1% of the
               | world, so we'd be giving away pretty much all we have.
        
               | namaria wrote:
                | Did you offer to labor to the best of your skills for
                | them? Did you need the beer? The comparison is risible.
               | 
               | You have no idea how I live to make claims about my
               | political inclinations.
               | 
               | Then again I never advocated socialism, and you're
               | fighting a shadow.
        
           | tomp wrote:
           | > guarantee just wealth distribution, so that its impossible
           | for a few to have much more than than others
           | 
           | How exactly would this work?
           | 
           | Is it "just" that someone who drinks and parties all the time
           | "owns" the same amount as someone who works and saves for 20
           | years?
           | 
           | Communism fails, not because it's "never been tried
           | properly", but simply because it's logically inconsistent
           | ("they pretend to pay us, we pretend to work").
        
             | PeterisP wrote:
             | "they pretend to pay us, we pretend to work" is a failure
             | because currently the prosperity of the society needs these
             | people to actually work effectively.
             | 
             | On the other hand, the hypothetical solution of "own the
             | machines and distribute the wealth" is intended for a
             | future which is substantially different, where it doesn't
             | matter if everyone pretends to work or even explicitly
             | avoids working, because that work isn't necessary for
             | prosperity as it can be done by machines, and it ceases to
             | be a problem if everyone can be as lazy as they want.
        
               | tomp wrote:
               | Sure but then you don't need wealth redistribution,
               | because everything is dirt cheap.
               | 
               | The correct mental model isn't "communism" or "wealth
               | distribution", but instead "salt".
               | 
                | Countries used to go to war over salt. But now
                | technology has eroded its value so much that
                | restaurants are _literally_ giving it away.
        
               | PeterisP wrote:
                | Artificial scarcity is a thing, so something being
                | dirt cheap to produce doesn't necessarily mean that
                | it will be actually affordable. In the current
                | economic environment there seems to be sufficient
                | motivation for powerful people to try to build
                | monopolies behind capital-gated barriers to entry,
                | even if the marginal cost approaches zero; so I'd
                | expect that the default scenario is _not_ like
                | "salt". Getting to a mental model "like salt" seems
                | to be a reasonable outcome in the long run, but I'm
                | afraid it would take some significant pressure from
                | the masses to get from here to there.
        
             | marcosdumay wrote:
             | > we pretend to work
             | 
             | That stops being a problem when the machines do all the
             | work.
        
             | beezlebroxxxxxx wrote:
             | Rawlsian justice is actually an enormously influential idea
             | in political philosophy, arguably the most influential in
             | the 20th century. It has 3 central principles that work
             | together:
             | 
             | 1. You enable equality of opportunity.
             | 
             | 2. You allow the chance for "winners" and "losers".
             | 
             | 3. You adopt the original position ("the veil of
             | ignorance") because no one has foresight into their place
              | of birth and the conditions therein (i.e. no one can a
              | priori help themselves), therefore you enact the
             | "difference principle" which states that, insofar as you
             | allow the chance for "winners" and "losers", governments
             | enact policy in such a way that the majority of the
             | benefits of those policies go to the "losers" over the
             | "winners".
             | 
             | There has, of course, been enormous debate on the nature of
             | Rawlsian justice, but it's not like "just" has to
             | immediately equate with communism. Most modern western
             | democracies are implicitly (and sometimes explicitly)
             | modelled on Rawls' idea.
        
             | namaria wrote:
             | >Is it "just" that someone who drinks and parties all the
             | time "owns" the same amount as someone who works and saves
             | for 20 years?
             | 
              | You mean kinda like capitalism? Where people born into
              | wealth just party and drink all the time and own 10^10
              | times more than someone who works and saves for 40+
              | years?
        
             | aqme28 wrote:
             | > Is it "just" that someone who drinks and parties all the
             | time "owns" the same amount as someone who works and saves
             | for 20 years?
             | 
             | If AI is doing all the work, what does it matter anymore?
        
               | nextlevelwizard wrote:
                | We are nowhere near that. We don't even have AIs yet
                | that can write better code than a random college
                | student googling.
                | 
                | We will need to crack fusion (or some other way of
                | creating "free" electricity), and then 3D printers
                | that can convert energy into matter, and then slap
                | AGI on top of it, and we are in a post-scarcity
                | society where robots can do everything.
                | 
                | If you miss any one of those things you won't get
                | there.
        
               | namaria wrote:
               | There will never be anything free. We always have to pay
               | with labor. But maybe we should share the fruits of labor
               | more equally instead of giving most of it to someone who
               | has some documents they got from their parents.
        
               | nextlevelwizard wrote:
               | It is always easy to demand more from the people who are
               | better off than you and attribute their success on luck,
               | be it heretical or not.
               | 
               | I am not going to say that I haven't been more privileged
               | than most of the people on the planet. My ancestors made
               | my country into what it is which gave me free education,
               | low corruption, and in general a good start in life.
               | However there are also a lot of people who had the same
               | start as I did, but managed to squander it on the way.
               | 
               | Everyone should absolutely have equal opportunities in
               | life. Education should be free for everyone. Bare human
               | necessities should be taken care of no matter what you
               | do. But I do not agree that everyone should have equal
               | outcome no matter what.
        
               | namaria wrote:
               | >But I do not agree that everyone should have equal
               | outcome no matter what.
               | 
               | Same here. I never said that that should happen.
        
             | anonyfox wrote:
              | It begins with everyone having enough to start with,
              | and whatever is important being available as public
              | services. There are different flavors of doing this;
              | the current best implementation is to have very high,
              | progressive taxes that fund a strong public sector,
              | which in turn makes people feel safe and allows them to
              | pursue higher education (doctors, teachers, ...), who
              | in turn are needed for all this.
              | 
              | Sure, you can work to have more, but not 100x more than
              | your neighbor; nobody is worth that. Instead of
              | focussing on the single Elon Musk, try to give everyone
              | access to a good safety net and encourage them to try
              | something, so statistically you will end up with many
              | high contributors instead of a few parasitic
              | billionaires.
        
             | atq2119 wrote:
             | With all the talk of "quiet quitting", it seems like "they
             | pretend to pay us, we pretend to work" is a potential
             | failure mode of capitalism as well.
        
           | nico wrote:
           | We've had the means and technology to provide even
           | food/housing/education for the entire world, for a long time.
           | 
           | Yet here we are.
           | 
           | It's a human-political issue, it is not a technology issue.
           | 
           | What's the difference now?
        
           | rhn_mk1 wrote:
           | Luddites failed the first time around. What could they do
           | this time to succeed?
        
             | anonyfox wrote:
              | The French Revolution and the socialist revolutions
              | went differently. It's not about preventing the
              | machines; it's strictly about solving the problem of
              | who owns them, a fairly easy thing. It "only" needs a
              | bit more suffering to build up among the masses until
              | this naturally happens again.
        
               | margorczynski wrote:
                | The thing is, back then the people in power were
                | reliant on other people to provide force. What
                | happens when the tech-overlords and government
                | cliques get their hands on perfect robot AI slaves
                | which are completely superior to humans when it comes
                | to fighting?
        
               | namaria wrote:
               | Kill bot enforced gated communities and a free for all
               | outside, naturally. Maybe in due time we have a second
               | industrial revolution and rebuild modern society
               | ourselves...
        
             | green_man_lives wrote:
              | The first time around (and every subsequent time) the
              | tools used for production, let's call them capital,
              | made laborers more efficient, cheapened goods, and
              | (mostly) drove wages up. In all of these cases there
              | was still the need for a laborer.
             | 
              | At the point when all labor is obsolete there will be
              | literally no method of survival for anyone who doesn't
              | own the "compute capital". The two options will be to
              | let everyone starve because they weren't lucky enough
              | to be shareholders in the company that owns all the
              | bots, or to make that enterprise socially owned and pay
              | the unemployed workers.
        
               | namaria wrote:
               | Laborers are not fungible you know. There is plenty of
               | evidence of lives ruined by industrial change.
               | 
               | The fact that the net result is positive doesn't mean
               | that everyone profits equally. Having lived in
               | capitalistic societies should have made that clear
               | already.
        
               | bitcoin_anon wrote:
               | > At a point when all labor is obsolete there will be
               | literally no method of survival for anyone who doesn't
               | own the "compute capital".
               | 
               | How about subsistence farming?
        
               | green_man_lives wrote:
                | I suppose, if the techno-overlords are gracious
                | enough to grant the underclass a nature preserve
                | where they can keep living in a labor-powered
                | society, then sure. Somehow I feel that bulldozing
                | ghettos would be the more realistic outcome.
               | 
               | Truthfully I think we'd have a few large societal shifts
               | before we ever got to the stage where genius level AI
               | could be spun up and down like containers, but it helps
               | to illustrate the point that a post-labor society is
               | incompatible with the tenets of capitalism, which is
               | something that a lot of people fail to comprehend when
               | they worry about AI.
        
               | the_only_law wrote:
               | Maybe, but they still need a different type of capital.
               | Unless property rights stop being enforced, where are all
               | the people who don't own land, or people who don't own
               | arable land supposed to farm? Are we bringing back
               | manorialism?
        
               | namaria wrote:
                | Considering I am mostly obliged to rent the land
                | where I can perform work, so that I can use my salary
                | to pay my rent, I feel like I live in some sort of
                | distributed virtual manor, to be honest.
        
               | [deleted]
        
           | th14row wrote:
           | [flagged]
        
         | ticviking wrote:
         | I'm not terribly worried. Though I assume my work will begin to
         | resemble that of the Adeptus Mechanicus rather than proper
         | engineering.
         | 
          | The fact is that anyone who understands even at a basic
          | level what the computer is actually doing, and isn't afraid
          | to look at it at a low level, can't be replaced by an AI
          | trained on Stack Overflow.
         | 
          | It may be that I will spend more of my time on code review
          | of LLM-generated code, or make my money in the new kinds of
          | legacy code created by copy-pasting ChatGPT snippets
          | together instead of SEO-optimized Stack Overflow scrapes.
         | 
          | For me the outcome is the same. The skills I need to be
          | more effective than the machine are exactly the same as
          | they were a decade, a century, or even a millennium ago. I
          | still don't see these LLMs do any synthesis of knowledge,
          | and they don't seem to have a grasp of logic or grammar at
          | the level I expect a bright middle school student to have.
        
           | visarga wrote:
           | You should read the "Sparks of AGI" paper, especially the
           | math and code sections. It's a GPT-4 evaluation conducted
            | from outside OpenAI (authored by an MS team). It's an
            | easy-to-read paper, mostly a collection of examples.
           | 
           | https://arxiv.org/abs/2303.12712
        
           | yoyohello13 wrote:
           | > Though I assume my work will begin to resemble that of the
           | Adeptus Mechanicus rather than proper engineering.
           | 
           | Lol, I was thinking about this the other day. Eventually most
           | devs will essentially just be praying to the Machine spirit
           | to make the computer do what they want. A small few high
           | clerics will bother to learn how computers actually work. The
           | rest will simply be cargo culting to the maximum extent
           | possible.
        
             | golemotron wrote:
             | > Eventually most devs will essentially just be praying to
             | the Machine spirit to make the computer do what they want.
             | 
             | Same as it ever was.
        
         | tjr wrote:
         | What does GPT know beyond what we have (directly or indirectly)
         | taught it? Could it have figured out how to extract five
         | seconds of video from an MPG file if that knowledge had not
         | been made available to it? Would it have invented the MPG
         | format (or something comparable) on its own? Would it have
         | developed the web and HTTP protocol on top of TCP/IP?
         | 
         | And so on. Maybe the answer is in fact "yes", or even "yes, and
         | it would have done these things even better than humans did".
         | But so far it seems to be amazingly good at doing things that
         | we showed it how to do.
         | 
         | If we stop creating actually new things, will it do that for us
         | also?
         | 
         | Why would it care to do so? What interest does it have in
         | creating new things on its own?
        
           | crop_rotation wrote:
            | The number of people who create genuinely new things is
            | so tiny that I am not sure it is even relevant to the
            | discussion. And here I mean genuinely new things, not
            | translating some C lib to Java or similar stuff, or
            | changing existing libraries to handle more stuff.
           | 
            | If the number of people who will have social mobility
            | opportunities is equivalent to the number of people who
            | could have invented the MPG format or something
            | comparable, then my point is made.
        
             | illiarian wrote:
              | > The number of people who create genuinely new things
             | 
             | The question isn't really about "genuinely new things". The
             | number of permutations of existing things is such that at
             | any given job you're likely to do old things in a new way.
             | 
              | E.g. you'd think that all streaming services are the
              | same. Superficially, yes. Internally, Netflix, Disney+
              | and Apple TV+ are likely to be as different as night
              | and day.
        
             | intelVISA wrote:
             | > people who create genuinely new things is so tiny
             | 
              | This is my insurance against LLMs; it only works if the
              | market demands new things...
        
               | crop_rotation wrote:
               | I doubt anyone would want their livelihood to depend upon
               | being able to create genuinely new things with extreme
               | consistency.
        
           | antibasilisk wrote:
           | >Could it have figured out how to extract five seconds of
           | video from an MPG file if that knowledge had not been made
           | available to it?
           | 
           | Could you?
        
             | tjr wrote:
              | A good highlight of an ambiguous question! Which
              | knowledge, exactly, is not being made available?
             | 
             | But let's say that an MPG format specification is
             | available. Even other code examples of interacting with an
             | MPG file. But no examples, no library, no documentation on
             | specifically how to extract a subset of one file into
             | another file.
             | 
             | I would think a competent programmer could figure that out.
             | Perhaps an AI tool could also; I have not yet seen an
             | example of it doing so, but perhaps it could.
             | 
             | Of the handful of questions I asked, this might be the
             | least interesting one. More generally, can AI tools advance
             | the state of the art?
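
[Editor's sketch] For concreteness: the clip-extraction task debated
above is, in practice, usually delegated to a tool like ffmpeg rather
than solved by parsing the MPEG bitstream by hand. The minimal sketch
below only assembles an ffmpeg command line (the function name and
filenames are illustrative; `-ss`, `-t`, and `-c copy` are standard
ffmpeg options for seek position, duration, and re-encode-free stream
copy). Actually running the command requires ffmpeg to be installed.

```python
def clip_command(src, dst, start=0.0, duration=5.0):
    """Build an ffmpeg argv that copies `duration` seconds of `src`,
    starting at `start` seconds, into `dst` without re-encoding."""
    return [
        "ffmpeg",
        "-ss", str(start),     # seek to the start position (seconds)
        "-t", str(duration),   # keep this many seconds of output
        "-i", src,             # input file
        "-c", "copy",          # stream copy: no re-encode, fast
        dst,                   # output file
    ]

# Example: the "five seconds of video from an MPG file" case.
print(clip_command("input.mpg", "clip.mpg"))
```

One could run it via `subprocess.run(clip_command(...), check=True)`;
the point is that a competent programmer (or an LLM) solves this by
composing an existing tool, not by reinventing the container format.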
        
         | machiaweliczny wrote:
         | Lol I am from Europe and I know how to repro this in a week so
         | not a big deal.
        
         | wefarrell wrote:
          | Ironically, administrative overhead in many industries,
          | particularly healthcare and education, has been growing
          | over the past several decades despite increasing technology
          | adoption.
         | 
         | This is exactly the opposite of what you would expect given the
         | increased efficiencies that come from adopting computer systems
         | and automation.
         | 
         | I see AI as a continuation of this trend and I don't expect it
         | to put people out of work, bureaucracy will always find new
         | ways to justify itself.
        
           | analyte123 wrote:
           | I've heard this stated as "jobs that don't exist for economic
           | reasons aren't going to get automated for economic reasons".
        
             | wefarrell wrote:
             | I think we overestimate the extent to which organizations
             | are aligned and making cohesive decisions based on
             | economics.
        
           | thequadehunter wrote:
           | The other thing is that we'd need insane levels of trust in
           | the AI to have it doing all these jobs 100%.
           | 
           | Like, I could technically have a newbie running commands on a
           | production router for a script that I wrote out...but even if
           | I let them do that there's no way I wouldn't at least
           | supervise. I don't think most companies are even remotely
           | comfortable with the idea of having an AI system running code
           | on their systems no matter how smart it is.
        
         | th14row wrote:
          | Nobody in their right mind would start a business like this
          | in Europe, with the shadow of the EU threatening to
          | regulate everything and pushing taxes/fines for everything.
          | Just move to a tech hotspot in the US.
        
         | green_man_lives wrote:
         | The Luddites were not protesting technology because technology
         | is inherently evil. They were against the capitalistic
         | ownership of the means of production.
         | 
          | So far technology has enabled us to increase economic
          | output, which means rising standards of living. Even if 99%
          | of people subsist from selling their labor, the tools they
          | use are a force multiplier that (in theory) drives wages up.
         | 
          | When you can spin up a bunch of von Neumann-level
          | LLM-powered agents and have them run your company for you,
          | there is no more labor to sell. You can either pay the
          | former laborers to exist, or just let them starve.
         | 
         | So our two options are social ownership of all AI capital, or
         | letting everyone without AI capital die, and let a handful of
         | people live in the resulting AI-powered society.
        
         | nextlevelwizard wrote:
         | Hot take: we will get better software.
         | 
          | People who are in tech _just_ to "climb the social ladder",
          | i.e. only for the paycheck, are going to be pushed out by
          | LLMs, and people who are actually passionate about tech
          | will remain. This will cause less and less shitty code to
          | be written (of course, for the next few years even more
          | bloated shit code will be written with ChatGPT and Copilot
          | by noobs who have no idea what they are doing).
        
         | tablespoon wrote:
         | > What happens to social mobility in the post GPT world[?]
         | 
          | If the technology pans out the way the techno-enthusiasts
          | hope it will, _upward_ social mobility will be nearly
          | eliminated... unless there's some kind of successful
          | Luddite revolution against the technology _and the people
          | that own it_. But that's not going to happen: there are all
          | kinds of social pressures against revolution, as well as
          | strict gun control in most places. Anyone who tries to
          | resist their obsolescence will soon find themselves either
          | ridiculed and condemned or in jail.
         | 
          | Of course, _downward_ social mobility will accelerate, and
          | be celebrated by idiot technologists who just want to build
          | tech and don't really care to think about the consequences
          | of the technologies they build on real people.
        
           | anonyfox wrote:
            | You vastly underestimate civilians. Probably not a single
            | gun is needed. A crowd of people at some point will walk
            | into the billionaire's home and make him agree to handing
            | over his wealth. He might call the police, but all that's
            | needed is the officers willingly ignoring the call. Why
            | should they defend those billionaires? What should they
            | do? Rich people depend on many individual poor laborers,
            | and all of them can simply decide to no longer accept
            | this state of affairs, and there is literally nothing the
            | billionaires could do against it.
            | 
            | Ideally, defund the police and so on, so that every state
            | worker is also keen on getting that wealth redistribution
            | done.
        
             | tablespoon wrote:
              | > You vastly underestimate civilians. Probably not a
              | single gun is needed. A crowd of people at some point
              | will walk into the billionaire's home and make him
              | agree to handing over his wealth.
             | 
             | I'm not underestimating civilians. If what you're
             | suggesting was at all realistic, China would be a democracy
             | and Trump would still be president.
             | 
             | Sure, tens of millions of unarmed people with a single mind
             | could probably do anything (like a mass of zombies can),
             | but you'll never actually get that. There are numerous
             | mechanisms preventing such a mass from forming, and more to
             | dismantle and negate it afterwards.
        
               | anonyfox wrote:
               | you assume that the vast majority of people in china are
               | outright unhappy with their government right now, I don't
               | think that this is the case. And Trump is simply an
               | idiot who only got enough votes somehow because both
               | parties in the US are a joke to begin with.
        
               | tablespoon wrote:
               | > you assume that the vast majority of people in china
               | are outright unhappy with their government right now, I
               | don't think that this is the case.
               | 
               | I'm not talking about right now.
               | 
               | > And Trump is simply an idiot who only got enough
               | votes somehow because both parties in the US are a joke
               | to begin with.
               | 
               | Trump literally had "a crowd of people at some point ...
               | walk into the [government's] home and make [them] agree
               | with handing over [power]." How did that go?
        
           | CuriouslyC wrote:
           | I don't think social mobility will be eliminated, I think the
           | variance of individual mobility will just be radically
           | increased. This technology will allow entrepreneurs and solo
           | content creators to accomplish a lot more, but it'll also
           | eliminate a lot of safe career paths. As a result, if you
           | don't want to be stuck in the underclass doing manual labor
           | or customer service, you'll need to either be brilliant, well
           | connected, start a business or create content of some sort
           | that attracts an audience you can monetize. The people who do
           | well will do very well, but a lot of people who would have
           | done decently before are going to fail.
        
             | mattgreenrocks wrote:
             | > The people who do well will do very well, but a lot of
             | people who would have done decently before are going to
             | fail.
             | 
             | How is this not a huge problem? The vast majority of people
             | are not exceptional. Cutting out that middle band of
             | ability and resources is a surefire recipe for social
             | unrest.
        
               | namaria wrote:
               | >Cutting out that middle band of ability and resources is
               | a surefire recipe for social unrest.
               | 
               | I don't know, the US has been pushing that envelope for
               | 40+ years and people are still paying taxes...
        
               | tablespoon wrote:
               | > I don't know, the US has been pushing that envelope for
               | 40+ years and people are still paying taxes...
               | 
               | And people can push the "meth consumption" envelope for
               | years before they finally die from it, too.
               | 
               | Getting away with unsustainable practices for X amount of
               | time doesn't prove they're sustainable and won't end in
               | collapse. It just means collapse can take more than X
               | amount of time.
        
             | crop_rotation wrote:
             | Your points all resonate with me. But the end result is
             | the same: the opportunities available for social mobility
             | to an average person would shrink. I am not even sure of
             | the 2nd and 3rd order effects. How would an industry like
             | hospitality survive if most of the knowledge-work-based
             | jobs are gone?
        
             | tablespoon wrote:
             | > I don't think social mobility will be eliminated, I think
             | the variance of individual mobility will just be radically
             | increased.
             | 
             | Those round to the same number: 0.
        
         | thisoneworks wrote:
         | You hit the nail on the head. GPT/Copilot is, in some sense,
         | the democratization of specialized knowledge. On one end
         | you'll have product engineers/managers prompting GPT to write
         | boilerplate code of all kinds; on the other end you'll have
         | senior engineers reviewing, optimizing, and writing
         | specialized code. Where do junior engineers fit? Can you
         | still be a full-stack engineer? I believe it'll squeeze all
         | the could-be devs out of the field forever. Gender
         | diversity/DEI? Forget it. Upward mobility? It will become
         | much harder; you'll basically need a specialization to even
         | be considered.
        
           | ROTMetro wrote:
           | And then your senior engineer class ages out and you are left
           | with who? You need a ladder to create that senior class in
           | the first place.
        
             | thisoneworks wrote:
             | At that point I'm pretty sure they'll already have
             | fine-tuned a "senior software engineer agent". This may
             | sound ridiculous, and yeah, we probably won't get rid of
             | the entire SWE ladder, but my point is that we are
             | entering the realm of science fiction now; our
             | assumptions about labor are about to fall off a cliff.
             | Productivity will go through the roof, corps will make
             | more cash, and workers will suffer.
        
       | j3s wrote:
       | a mental exercise for the doomsayers: if stackoverflow + search
       | engines were invented today, would you be saying the same stuff?
       | it's clear to me that chatgpt is a programmer accelerator, not a
       | replacement. it's just another tool - a very good one at that.
       | 
       | 90% of programming is communicating with other people - chatgpt
       | can't talk to people.
        
         | ipatec wrote:
         | people can talk to ChatGPT
        
         | machiaweliczny wrote:
         | ChatGPT => speech synth => human => whisper
         | 
         | It can also connect to your Notion, Slack or whatever.
        
       | DethNinja wrote:
       | I don't get the overall doom and gloom towards LLMs on the
       | software field.
       | 
       | If you are a software engineer, this will multiply your
       | productivity tenfold in the upcoming years. Now you don't need
       | to hire junior devs and can just build the product of your dreams
       | with very limited capital.
       | 
       | In my opinion this technology will be as democratising as
       | YouTube's early days.
       | 
       | Instead of worrying, learn to work with it. It will be harder for
       | large companies/large teams to extract value from this compared
       | to small companies/small teams.
       | 
       | It means competition between companies will increase but it isn't
       | necessarily bad for existing software engineers, especially solo
       | founders.
        
         | [deleted]
        
         | Traubenfuchs wrote:
         | > Now you don't need to hire junior devs and can just build the
         | product of your dreams with very limited capital.
         | 
         | You are overestimating the vast majority of "software engineers"
         | in the world. The overwhelming majority of us are just
         | programmers, we are just gluing together CRUD spaghetti in the
         | random language we grew up with. We don't care too much about
         | work or a career. And most of us don't want to do more, we want
         | to get a decent salary for our boring work. And we certainly do
         | not want to be "solo founders", build products of our dreams or
         | increase our productivity.
         | 
         | This way of living feels threatened now.
        
           | Arubis wrote:
           | Hard agree.
           | 
           | Like sibling commenters, I love the idea of building
           | something new with greater leverage. On an individual level,
           | I'm looking forward to leveling up and finding new ways to be
           | effective in my work.
           | 
           | Unlike sibling commenters, I don't think that should be our
           | only option in life. It saddens me greatly that, given a new
           | option to increase the effective output of a unit of time, we
           | repeatedly choose as a society to profit monetarily (and with
           | vast disparity in who benefits) rather than to give people
           | more options in life than drilling on their jobs.
           | 
           | The industrial revolution promised people lives of relative
           | leisure by replacing the need for much physical labor, but
           | instead we concentrated the benefit to the few--and we keep
           | making that same choice over and over.
        
           | muffles wrote:
           | > We don't care too much about work or a career. And most of
           | us don't want to do more, we want to get a decent salary for
           | our boring work.
           | 
           | Yikes. Productive work is not just a way to earn a living but
           | also a way to achieve personal fulfillment and happiness.
           | It's a means of creating value and contributing to society. A
           | person who works just for the salary and does not find any
           | meaning in his work is not living up to his full potential.
        
             | [deleted]
        
             | AnIdiotOnTheNet wrote:
             | Maybe not, but they do get to eat, see a doctor, and enjoy
             | some vacation time every now and then.
             | 
             | If I could make money doing something I found a lot of
             | meaning in I'd be doing that instead. Thing is, we usually
             | don't have that option.
        
             | jwestbury wrote:
             | > A person who works just for the salary and does not find
             | any meaning in his work is not living up to his full
             | potential.
             | 
             | Things I enjoy don't pay enough to live a comfortable life.
             | Tech does. So I do well enough at my job to pay for the
             | things I enjoy, and hope I find enough edge cases at work
             | to avoid burnout.
             | 
             | In a true post-scarcity society, where everyone has the
             | freedom to choose a career based purely on fulfillment,
             | your argument is excellent. Until then, however, it's not.
        
               | the_only_law wrote:
               | Hell there are things I'd probably take a big pay cut to
               | do, but it would take years of my life and large amounts
               | of money just to retrain.
        
             | agentultra wrote:
             | Yikes. Your full potential isn't your work. We are all
             | creative beings with deep emotional lives and connections
             | to everything around us. And there are a ton of jobs in
             | programming that pay well enough that you can live
             | relatively well in a capitalist society. Some people find
             | fulfillment in their families, neighbours, art, and dreams.
             | 
             | How many jobs in modern society are complete bullshit? A
             | good deal of them, I would say. Why should people measure
             | their happiness and self worth from these?
        
               | MrMan wrote:
               | [dead]
        
               | muffles wrote:
               | I'm not suggesting your work is the only source of
               | fulfillment or that one's career should be the sole
               | measure of their self-worth. Rather, finding value and
               | meaning in one's work is a complementary means of
               | achieving personal fulfillment and
               | happiness. It is still possible to find value and meaning
               | in jobs that do not align with a person's interests or
               | passions. The key is to find a balance between work and
               | other aspects of life.
        
               | Traubenfuchs wrote:
               | > Rather, finding value and meaning in one's work is a
               | complementary means of achieving personal fulfillment
               | and happiness.
               | 
               | I wholeheartedly agree! I do know a few people who love
               | their jobs and I envy them to no end; they are
               | inspiring, shining suns. But I remain firm in my opinion
               | that this is far out of reach for most people.
        
               | agentultra wrote:
               | Indeed it is given the preponderance of bullshit jobs.
               | 
               | Capitalism maximizes profits, not happiness. The market
               | for software development jobs is much bigger for people
               | who know popular frameworks and are content with
               | validating forms, querying databases, aligning buttons,
               | sending reports, etc. It's a lot easier (and rewarding)
               | to find fulfillment elsewhere.
        
             | Traubenfuchs wrote:
             | > also a way to achieve personal fulfillment and happiness
             | 
             | For all but a select few this is an unrealistic fairy
             | tale. Most of us just want to make money to better enjoy
             | our lives. We were given, or acquired, certain skills to
             | make money, out of juvenile interests or opportunities we
             | seized. That doesn't mean we enjoy using those skills. It
             | would be very hard to find any other job without taking a
             | massive pay cut and investing huge amounts of money, time
             | and effort, only to face a high chance that you won't
             | like your new job either.
             | 
             | I see no job or career I am interested in: I hate
             | everything the moment it becomes work. And I am no unique
             | snowflake; I am part of the majority in that.
             | 
             | https://www.wellable.co/blog/employee-engagement-
             | statistics-....
        
           | AnIdiotOnTheNet wrote:
           | Personally I'm excited whenever an opportunity to wholesale
           | replace a part of my job comes up. I got in to technology
           | because I wanted to make people's lives better, and in theory
           | removing demands on their time does that.
           | 
           | The only problem is that we live in a system that directs the
           | gains upward and any costs downwards, and in so doing creates
           | perverse incentives against people welcoming their
           | redundancy.
        
           | dopeboy wrote:
           | I mean this in the least snarky, most sympathetic way
           | possible: it's time to level up. Countless roles have had to
           | do this, it's now our turn.
        
             | falcor84 wrote:
             | This is easy to say, but I would argue that it's impossible
             | to know what to level up in; the field is just moving too
             | quickly now.
             | 
             | At this stage, the best advice I could formulate would be
             | to learn LangChain and prompt engineering, but these too
             | are fast moving targets, and who knows what's going to be
             | relevant in 2024?
        
               | dopeboy wrote:
               | I agree, it is disorienting.
               | 
               | I think the best thing one can do is learn how LLMs work,
               | acquaint themselves with real implementations of it
               | (ChatGPT, Copilot), and then find ways to integrate these
               | techniques into their companies.
        
               | asdff wrote:
               | Or that could be a way to end up in the woods. What if
               | you made a bet like this on metaverse being the future?
               | You'd be wishing you hadn't now.
               | 
               | Instead, look at the job postings for titles you want.
               | Note the skills in demand at more than one job. Focus on
               | those skills. Those are the skills the market currently
               | demands.
        
               | dopeboy wrote:
               | Some of the skills you see in those job postings today
               | are being devalued by tools like ChatGPT. Not all of
               | them, and certainly not fundamental ones like
               | communication and leadership. But if you see something
               | like "writing CRUD endpoints for a Rails stack"
               | everywhere, having that skill is no longer a
               | differentiator.
               | 
               | Don't pivot your career. Don't burn the boat and jump
               | into AI. Just be aware of these tools and get good at
               | what they're poor at.
        
               | namaria wrote:
               | >Instead, look at the job postings for titles you want.
               | Note the skills in demand at more than one job. Focus on
               | those skills
               | 
               | While I agree with the sentiment, there is way too much
               | noise in that channel. Job listings written by non-
               | technical people just throwing keywords together,
               | recruiters detached from specific roles, and companies
               | signalling growth, to mention just a few sources of
               | confusion...
        
             | godtoldmetodoit wrote:
             | I've been leveling up since I started as a helpdesk rep at
             | 18, becoming a Windows sysadmin, then a junior dev, and now
             | a senior dev. College was not an option for me for reasons
             | totally outside of my control, as my parents decided to not
             | do their taxes for a number of years and I was unable to
             | get any financial aid (grants or loans) whatsoever.
             | 
             | I scratched and clawed, read tons of books, blogs, spent
             | extra time polishing features beyond what was needed so I
             | could learn new skills... but now I am a father of two
             | young kids, with a wife. How long am I supposed to put in
             | all this extra work? I'm likely slightly above average
             | intelligence, but I'm far from being at the level where I
             | could be an AI researcher... if I am even capable of doing
             | the kind of math required there, it would require many
             | years of learning.
             | 
             | GPT4 isn't going to replace me, but watching this space
             | unfold really has me worrying about the versions that come
             | out over the next 2-5 years.
             | 
             | A human is only so moldable, and while I am more than happy
             | to learn new skills, I have no idea where to even start.
             | What profession is safe? Where will the growth be in a
             | field that will have equivalent or even near equivalent
             | earning potential?
             | 
             | If GPT ends up getting to the point where it can replace me
             | at my job, I really have a hard time thinking of a career
             | path I could get into at this stage in life. It would need
             | to be able to architect systems at a high level, write code
             | to implement various features, communicate with
             | stakeholders, document design decisions... if it can do
             | that, it can do a whole hell of a lot of other jobs too.
             | 
             | Once it gets to that point, I don't think physical jobs
             | will be that far behind on being automated either. We
             | already have robots of all shapes and sizes (including
             | bipedal), the main thing slowing down their deployment is
             | that they aren't adaptable enough. With AGI, that changes.
             | It will take a bit longer due to the capital requirements
             | and factory build outs that would be needed.
        
               | ilaksh wrote:
               | No jobs will be safe in the very, very near future.
               | 
               | GPT-4 is a very capable systems architect and can also
               | implement the code. There are a few tools available to
               | put it in a debug loop. Writing documents is a walk in
               | the park for GPT-4. Emails or Discord chats or even
               | perfectly realistic voice conversations are completely
               | doable (I have that on my website).
               | 
               | At this point it's about connecting things together and
               | looping them properly to automate a very high portion of
               | jobs.
               | 
               | I think the answer is not employment but rather
               | production. Think of something you can build by
               | leveraging these AIs that would be interesting or
               | useful to someone else or some business.
               | 
               | Beyond that, things like UBI and generally better
               | integration of technology into government are going to
               | be critical for our survival. Especially decentralized
               | technologies and real-world resource data.
        
             | thrown123098 wrote:
             | [dead]
        
           | programmarchy wrote:
           | As a "software engineer", it's frustrating to work with the
           | people you describe: people who "don't care too much". I'm
           | looking forward to the purge.
        
           | edgyquant wrote:
           | If your goal was to spend the rest of your life doing
           | something you learn once and getting paid well for it, you
           | seriously picked the wrong industry. Engineers have been
           | told to constantly learn new things, know multiple
           | languages, etc., since the dawn of programming.
           | 
           | I've worked with tons of programmers like you describe.
           | I've continued to tell them that simple UIs and CRUD
           | interfaces to DBs are solved problems we should not be
           | fighting with.
        
             | slfnflctd wrote:
             | > simple UIs and CRUD interfaces to DBs are solved problems
             | 
             | I can see how you might think that... until you start
             | actually talking in depth with enough actual users and
             | executives and trying to get them to agree on how all that
             | stuff should work and what it should be capable of.
             | 
             | Most of the development process is about trying to wrangle
             | abstract ideas about how business logic should be
             | implemented/improved from flawed humans who aren't great at
             | communicating those ideas. Your 'simple' CRUD app still
             | often has to be highly customized by someone willing to do
             | the difficult work of dealing with people. And that's
             | before you even start getting into working with more
             | regulated businesses.
             | 
             | Code monkeys/plumbers using 'outdated' tech who can deliver
             | something that makes a workplace more efficient in the long
             | run will continue to be in demand. There was enough
             | functionality in software by the 1970s to handle the vast
             | majority of business needs. Someone still has to understand
             | those business needs (which ultimately have little to
             | nothing to do with software) well enough to translate them
             | into something that works. Whether it works for those who
             | are using it is all that really matters.
        
             | michaelmior wrote:
             | > I've continued to tell them that simple UIs and CRUD
             | interfaces to DBs are solved problems we should not be
             | fighting with.
             | 
             | Maybe we _shouldn't_ be, but it's still a problem that
             | regularly needs to be solved.
        
               | Traubenfuchs wrote:
               | ...because of the limitless customization that sneaks
               | into every growing software project. This limitless
               | customization is also my last hope: maybe GPT-n won't
               | be able to handle it as well as I do.
        
             | Traubenfuchs wrote:
             | Which industry would you recommend? I hate studying and I
             | especially hate everything software development related
             | with a passion.
             | 
             | In any case, I need to refute your argument: in my work
             | as a software engineer spanning more than a decade, I
             | have noticed zero depreciation of my skills (Java, SQL,
             | HTML/JS/CSS) (while keeping them up to date!) until now,
             | and have only had to learn a few complementary skills
             | (cloud, Docker, SPAs, Kubernetes). The only skill that
             | got replaced might have been "Java application server
             | management", superseded by whatever Docker runtime is en
             | vogue at the moment. I have worked for the government
             | and met PL1/Cobol mainframe programmers who refused to
             | learn Java and still got paid generously for their
             | long-term expertise.
        
               | edgyquant wrote:
               | I seriously doubt you're writing the same kind of web
               | apps you were a decade ago; if so, you're a minority.
               | Regardless, a decade doesn't "refute" the fact that the
               | web itself was a total disruption of the way people
               | wrote code before it. This industry is always changing
               | paradigms, and we're constantly exposing higher-level
               | ways to instruct the processor.
        
         | OccamsMirror wrote:
         | So who will hire the junior software devs?
        
           | ape4 wrote:
           | And how can somebody become a senior dev unless they first
           | worked as a junior?
        
             | rvz wrote:
             | You become a "senior dev" by running a company / startup
             | yourself, since it is clear that after the tech layoffs,
             | almost no one has the money or profits to hire any new
             | developers - junior or senior - which is why I say they
             | are _both_ affected.
             | 
             | But even then, self-proclaimed seniors are too scared to
             | start their own startup(s) now because of (1) unfavourable
             | market conditions, (2) VCs hesitant to invest, and (3)
             | fear that ChatGPT will extinguish their startup, even if
             | it uses "AI".
             | 
             | I guess this was the result of a decades-long quantitative
             | easing, near-zero interest rate bubble of cheap money that
             | had to collapse.
        
               | asdff wrote:
               | I don't think starting a company will be that successful
               | if you lack enough experience to be hired at any existing
               | companies. The survivorship bias in tech is huge. Most
               | things fail.
        
             | grugagag wrote:
             | Perhaps some juniors will take it head-on, solo. They
             | will repeat the same mistakes seniors made when they were
             | juniors, but will survive nonetheless.
        
           | [deleted]
        
         | Jevon23 wrote:
         | I really don't want my productivity to increase.
         | 
         | Worst case scenario is that it gets SO good at writing code
         | that software engineering teams are severely downsized or are
         | made obsolete altogether, and I find myself out of a job. I'm
         | not expecting UBI to start falling out of the sky any time
         | soon, especially while there are still manual labor jobs that
         | robots can't do.
         | 
         | Alternative scenario is that individual developers get
         | somewhere around a 2x-5x productivity increase, but why would I
         | want that? That doesn't give me more free time - that just
         | means I'll be expected to do _more work_. Non-technical
         | management already expects ridiculous delivery timelines; now
         | I'll have to deal with them asking "why can't you have the
         | whole project done by tomorrow? Why can't you just have the
         | robot do it?"
         | 
         | It's a lose-lose situation and none of us asked for this.
        
           | nh23423fefe wrote:
           | pretty incoherent. A better tool is bad because someone else
           | might demonstrate that you're lazy?
        
           | olalonde wrote:
           | I'm sure secretaries had similar thoughts about the arrival
           | of personal computers. Yet, few would argue that computers
           | have made the world a worse place. The truth is people will
           | do less work, or it will feel like it. Most of our ancestors
           | did not have the comfortable software jobs we have today.
           | Life will become easier, products and services will become
           | more abundant. Of course, the transition will be harder for
           | those who refuse to adapt and resist change.
           | 
           | It'll be interesting to see what happens when AI truly
           | surpasses human level intelligence, as in, being able to
           | completely replace human jobs, but we're not there yet. It's
           | likely that when we reach that stage, the world will change
           | dramatically and we will either live lives of abundance and
           | leisure or face extinction :)
        
             | jwestbury wrote:
             | > we will either live lives of abundance and leisure or
             | face extinction
             | 
             | Third option, the workers no longer control the means of
             | production, and we see levels of inequality that make the
             | railroad barons look like they were middle class.
        
               | olalonde wrote:
               | I doubt this third option exists. If the AI(s) lack
               | empathy for us and we are useless to them, they will
               | either exterminate us directly, or indirectly by denying
               | us resources. If they have empathy for us, they will
               | probably let us live good lives. I doubt we will be
               | useful to them, so I doubt there is a scenario where we
               | live miserable lives working for the AI(s).
        
               | freeone3000 wrote:
               | It's not the AIs who are in charge, it's rich humans.
        
               | olalonde wrote:
               | By super intelligent, I meant AIs that have a will of
               | their own. In the meantime, AIs are just a tool at our
               | disposal. And I don't see why those tools would end up
               | only in the hands of the rich, any more than
               | electricity, the Internet, the personal computer or the
               | smartphone did.
        
               | anon7725 wrote:
               | It's happening now with closed AI models. The era of open
               | computing may be ending. The models are essentially a new
               | type of computer and they will remain fundamentally
               | closed, accessible only through an API.
        
             | computerex wrote:
             | > It'll be interesting to see what happens when AI truly
             | surpasses human level intelligence, as in, being able to
             | completely replace human jobs, but we're not there yet.
             | 
              | But we _are_ there. This is already the reality for a
              | lot of people. That's why the existential crisis in the
              | OP.
        
             | WillAdams wrote:
             | Yes, but there aren't many secretaries working these days,
             | and certainly the skill set has changed (though I'll never
             | forget the company owner who was surprised when he was
             | called out on promises made to women running a local
             | college and couldn't understand how they could repeat his
             | statements back to him verbatim along w/ the time/date of
             | the phone call in question --- had to explain that they'd
             | all come up through the secretarial pool, and so knew
             | shorthand).
             | 
             | The bottom line is, at some point in time, automation is
             | going to reduce the amount of human work which needs to be
             | done, and render some folks unemployable --- how does
             | society cope with that? Universal Basic Income is the only
             | reasonable suggestion I've yet seen, but doesn't address
             | the age-old problem of socialism --- it only works until
             | one runs out of other people's money.
             | 
             | Back when computers were first announced, taxing CPUs so as
             | to cover benefits for newly unemployed folks was suggested
             | --- can we put that back on the table?
             | 
             | For a fictional take on this see:
             | 
             | https://marshallbrain.com/manna1
        
               | chordalkeyboard wrote:
               | > The bottom line is, at some point in time, automation
               | is going to reduce the amount of human work which needs
               | to be done
               | 
               | Jevons paradox [0] proposes that as automation reduces
               | the cost of labor then people will find new uses for
               | automation, and this seems to be the historical
               | trajectory. Hundreds of years since the industrial
               | revolution and we still haven't run out of work to do
               | (this could be better or worse given your philosophical
               | premises).
               | 
               | > and render some folks unemployable
               | 
               | If automation _truly_ causes more _actually productive_
               | work to be done, then as a first-order effect there
               | should be a surplus available to support these people
               | without making anyone else (much) worse off. However as
               | you observe the higher-order consequences of this are
               | very much an open issue.
               | 
               | [0] https://en.wikipedia.org/wiki/Jevons_paradox
        
               | WillAdams wrote:
                | Yes, but how much more headroom do we have for that
                | sort of thing?
                | 
                | The current climate crisis suggests that we are running
                | out:
               | 
               | https://dothemath.ucsd.edu/2012/04/economist-meets-
               | physicist...
        
             | ipatec wrote:
              | Most of our ancestors actually had less stressful jobs
              | and worked less than we do today. The average
              | agricultural worker 100 years ago wasn't even putting in
              | part-time hours in terms of actual work done. More like
              | 2h/day on average.
        
           | thequadehunter wrote:
           | I can't see the first case ever happening. You'd need a
           | whooooole lot of trust in AI systems to have it write all the
           | code.
           | 
           | As for the latter...I'd say GPT has increased my productivity
           | and therefore allowed me to focus on the more interesting
           | aspects of my work, rather than writing annoying boilerplate
           | code and doing boring tasks where I don't learn anything. I
           | almost never write my own boilerplate anymore.
           | 
            | More productivity doesn't necessarily mean more work. It
            | does mean more focus on interesting work.
        
           | dudeinhawaii wrote:
           | I think the rapid rate of change in modern software libraries
           | has left me underwhelmed with ChatGPT when it comes to new
           | libraries, C++, or niche APIs (financial, etc).
           | 
           | If you're writing react/python/angular or something popular
           | it seems to do amazing things and spit out entire websites
           | (per demos).
           | 
           | Unfortunately, when I try to put together C++, Rust, or even
           | C# using recent libraries like Blazor it chokes up. I fully
           | understand at least one reason why (libraries and language
           | features not being in the training data from 2021) but that
           | makes me feel that perhaps software engineering at the
           | cutting edge or niche is safe and still requires human
           | reasoning. Not to mention things like properly understanding
           | when and why to use certain data structures, real-world
           | impact of coding choices, pricing, esoteric speed/efficiency
           | improvements, etc.
           | 
           | I think there's still a broad general area where good, great,
           | and amazing+ developers can operate without much threat and
           | in fact using their knowledge and experience to leverage
           | GPT-4 (or others) as a force multiplier.
        
             | chipgap98 wrote:
             | But with the plugins they announced yesterday this should
             | no longer be an issue. You'll be able to easily connect
             | other APIs or data sources to OpenAI so that your model
             | will write the code exactly the way you want.
        
             | hokkos wrote:
              | Are niche libs safe, or will software platforms
              | concentrate on the most popular ones with the most
              | examples and the best LLM completion/generation, leading
              | to ossification?
        
             | ilaksh wrote:
              | Your tool just needs examples of the more recent library
              | calls.
              | 
              | With the 32k-token context coming, that's roughly 90kb
              | of text, of which 80kb could be library or API docs.
             | 
             | Also it can easily be connected to things like pip or
             | GitHub or Google to check documentation. And many tools are
             | coming over the next few months that will put it in a
             | debugging loop.
             | 
             | So maybe it's "safe" in the very near term but that issue
             | of out of date training in no way prevents it from taking
             | software engineering jobs.
             | 
             | I am working hard to build an AI system that can replace me
             | before someone else does.
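              | As a rough illustration of the idea above (hypothetical
              | helper, assuming ~4 characters per token), a tool could
              | budget the context window like this:

```python
# Hypothetical sketch: fit up-to-date API docs into a fixed context
# window alongside the task, assuming roughly 4 characters per token.
def build_prompt(task, docs, context_tokens=32_000, chars_per_token=4):
    """Reserve room for the task and the reply; fill the rest with docs."""
    budget_chars = context_tokens * chars_per_token   # ~128kb for 32k tokens
    reserved = len(task) + 8_000                      # leave space for the answer
    docs_excerpt = docs[: max(0, budget_chars - reserved)]
    return (
        "Use ONLY the API documentation below; it is newer than your "
        "training data.\n\n"
        f"--- DOCS ---\n{docs_excerpt}\n--- END DOCS ---\n\n"
        f"Task: {task}"
    )

prompt = build_prompt("Render a counter component in Blazor",
                      "fake docs " * 20_000)
```

              | None of these names come from a real tool; the point is
              | just that ~80kb of fresh docs fits comfortably inside a
              | 32k-token window.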
        
           | nextlevelwizard wrote:
           | >I really don't want my productivity to increase.
           | 
           | >...and I find myself out of a job.
           | 
           | Tell me you are the problem in the industry without telling
           | me you are the problem in the industry.
        
             | ROTMetro wrote:
             | Nope, just a human being who wants to be a human being.
        
               | nextlevelwizard wrote:
                | And to you, a core part of being human is being
                | inefficient?
               | 
               | Think about the hunter gatherer who was given a bow, but
               | stuck with throwing rocks because he didn't want to get
               | too efficient.
        
               | ROTMetro wrote:
                | Fletching and creating an arrowhead (which needs to be
                | small and perfect) go through way more material than a
                | hand-held obsidian blade; you are not necessarily
                | saving on labour by switching to arrows, especially as
                | you have now introduced multiple specialized skills
                | that require hundreds of hours of practice across
                | multiple tribal members (the opposite of this, where
                | you go down to a few babysitters finessing the final
                | code).
               | 
                | But more relevantly, the instant bows were invented,
                | your quota of mammoths to kill a day didn't go up to
                | that maximum possible number + 1 (because sales guys).
                | It stayed at 1 per week or whatever. It's not
                | efficiency, it's management's unrealistic expectations
                | of productive output that I hear being complained
                | about.
        
               | nextlevelwizard wrote:
                | > your quota of mammoths to kill a day didn't go up
                | 
                | Except with more efficient hunting methods you _could_
                | kill more than you did before per day, which meant
                | fewer hunting days, which meant more time with the
                | wife, which in turn meant a bigger tribe, which in
                | turn meant you actually had to increase your quota.
                | 
                | Just because you are more efficient doesn't mean your
                | manager becomes an idiot and starts to demand
                | unreasonable output (and if you have an idiotic
                | manager already, then you already have the problem).
                | 
                | I have no fucking clue what you are even arguing for.
        
           | pfdietz wrote:
           | Or, with software so much easier to create, so much more
           | software is created, and demand for SW engineers increases.
        
             | [deleted]
        
           | piyh wrote:
           | Alternatively, there's more developer output, the unit price
           | of an application gets cheaper, and this stimulates demand
           | for more developers. CPUs getting cheaper and faster didn't
           | decrease demand for CPUs.
        
             | 93po wrote:
             | This is the assumption that there is end-user demand for
             | the software developers would write. I think we can assume
             | that end users will be using traditional software less as
             | ChatGPT functionality increases.
        
         | maxilevi wrote:
         | Not everyone is adept at becoming a solo founder
        
         | rvz wrote:
         | > Now you don't need to hire junior devs and can just build the
         | product of your dreams with very limited capital.
         | 
         | And so-called "senior engineer" salaries will now be brought
         | down and deflated since they were inflated and unjustifiably
         | high in the first place and are the main reason why these tech
         | startups run themselves into the ground with little to no path
         | to profitability.
         | 
         | I guarantee you that so far, the only winner in this is OpenAI.
         | Not the 'senior engineers' building on top of someone else's AI
         | API.
         | 
          | In fact, why hire 3 over-priced seniors when one junior
          | with ChatGPT is significantly cheaper? I find it quite
          | funny that somehow all hope is instantly lost because an
          | "AI" spitting out code will replace them. It just shows
          | that the majority of these tech startups were just good at
          | losing money and being solely dependent on VC cash.
        
         | throwawayai2 wrote:
         | What happens when no junior devs are hired for 5 years? Who
         | works their way up to replace the seniors who are leaving?
        
           | Inviz wrote:
            | This is my thinking too. Who's going to learn coding if
            | all basic needs are served by the chatbot? What would be
            | the incentive to put in the work?
        
             | booleandilemma wrote:
             | I feel the same about future artists, sadly. If a computer
             | can paint, then maybe no one is going to bother to learn
             | how to paint.
        
           | dmn322 wrote:
            | GPTs and the various apps that apply them
        
           | grugagag wrote:
            | There's always a cost to being more greedy than you can
            | handle.
        
             | anon7725 wrote:
             | And what do you think capitalism is all about?
        
             | falcor84 wrote:
             | Please elaborate
        
               | TeMPOraL wrote:
               | Stupid greed is taking so much as to starve your supply.
               | Smart greed is _sustainable_ greed. Smartest greed is one
               | that feeds back into the supply, making it grow
               | exponentially.
        
           | nopinsight wrote:
           | By the late 2020s, it's entirely possible that a "weaker" AGI
           | will emerge. Consequently, there may be much less need for
           | senior developers as well. However, that could be among the
           | least of our concerns if we cannot reliably align AI with
           | human interests by then.
           | 
           | Date Weakly General AI is Publicly Known
           | https://www.metaculus.com/questions/3479/date-weakly-
           | general...
           | 
           | Date of Artificial General Intelligence
           | https://www.metaculus.com/questions/5121/date-of-
           | artificial-...
           | 
           | The latter includes this criterion: "Able to get top-1 strict
           | accuracy of at least 90.0% on interview-level problems found
           | in the APPS benchmark introduced by Dan Hendrycks, Steven
           | Basart et al."
           | 
           | The APPS benchmark: https://arxiv.org/abs/2105.09938
           | 
            | Note that the predicted date of "stronger" AGI has moved
            | quite a lot since GPT-4 was revealed, from the late 2030s
            | to 2033 at this moment.
        
             | ilaksh wrote:
             | Has anyone tested GPT-4 on APPS? And if it can get 90%
             | (which it probably can) does that mean people will admit
             | it's an AGI? Or more likely they just keep moving the
             | goalposts.
        
             | rimliu wrote:
             | > By the late 2020s, it's entirely possible that a "weaker"
             | AGI will emerge.
             | 
             | We will surely have self-driving cars by then? Right?
             | Right?
        
           | Nemi wrote:
           | Is there a scenario where just the people that REALLY love
           | programming start in the field? Could this be a situation
           | where you reduce the field of programmers to only those that
           | are truly passionate about programming, thus making the
           | people left in the field the cream of the crop? We have all
           | worked with individuals that are clearly in the field because
           | they think they can make a lot of money and don't give a crap
           | about doing a good job. Can we envision a world where these
           | people go on to another field instead of clogging up this
           | one?
        
             | somethingreen wrote:
             | If there is a way to produce senior software engineers
             | without years of work experience, why aren't we doing it
             | now?
        
             | fhd2 wrote:
             | I started as a programmer right during the dotcom crash -
             | it sure felt like that.
        
         | probably_wrong wrote:
         | > _In my opinion this technology will be as democratising as
         | the YouTube's early days._
         | 
         | You mean the same YouTube that routinely ruins people's
         | livelihoods when it closes their accounts with no recourse?
         | Because I'm totally looking forward to the day when that
         | happens to my development tools.
         | 
         | "We detected that you are using our code to kill vulnerable
         | children (aka orphans). This is against our TOS and we have
         | permanently disabled your account. If you believe this was in
         | error please log into your account and talk to our ChatGPT-
         | powered tech support".
        
           | grugagag wrote:
            | Code stays with you. I'm more concerned with prompts like
            | "please produce code to copy product X with the following
            | changes".
        
         | btbuildem wrote:
         | From your comment:
         | 
         | > I don't get the overall doom and gloom towards LLMs on the
         | software field.
         | 
         | From the second line of your comment:
         | 
         | > Now you don't need to hire junior devs
         | 
         | Do you need GPT to put the two together? I think it's pretty
         | obvious why folks are freaking out.
        
           | raldi wrote:
           | Now junior devs don't have to build the product of someone
           | else's dreams; they can build the product of their own
           | dreams.
        
             | raincole wrote:
             | ... and starve? You know what kind of people create content
             | of their own dreams? Artists. And the stereotype isn't
             | "well-fed artists" for a good reason.
        
               | raldi wrote:
               | > You know what kind of people create content of their
               | own dreams?
               | 
               | Every entrepreneur that ever existed.
        
               | raincole wrote:
               | Yeah and 90% of startups fail. Again, for a good reason.
        
             | yoyohello13 wrote:
              | Not everyone wants to do that though. There is a lot of
              | extra crap that being a business owner entails. Some
              | people just want to put in their work time and focus on
              | other things. This basically forces everybody to devote
              | their lives to entrepreneurship.
        
               | raldi wrote:
               | We don't need everyone to, though. If AI increases dev
               | productivity X-fold, and this leads to an X-fold increase
               | in entrepreneurship, then the junior devs who want to
               | build someone else's dream will have more opportunities
               | to do so.
        
           | BeFlatXIII wrote:
           | It's democratizing for those with an idea but without the
           | skill to convince investors to hire the juniors to implement
           | it. It's a problem for employment numbers and macro-scale
           | ratios of working to non-working adults.
        
           | LesZedCB wrote:
           | I simply don't understand why people are upset the ladder is
           | being pulled up after me!?
        
         | turkeygizzard wrote:
         | If you are a manager, this will output your productivity ten
         | fold on the upcoming years. Now you don't need to hire senior
         | devs and can just build the product of your dreams with very
         | limited capital.
         | 
         | If you are a CTO, this will output your productivity ten fold
         | on the upcoming years. Now you don't need to hire managers and
         | can just build the product of your dreams with very limited
         | capital.
         | 
         | If you are a VC, this will output your productivity ten fold on
         | the upcoming years. Now you don't need to hire anyone and can
         | just build the product of your dreams with very limited
         | capital.
         | 
         | Agree it'll definitely be amazing for creatives and solo
         | founders, but how many ideas are really out there to be had
         | compared to the reduction in workforce?
         | 
         | https://twitter.com/paulg/status/1600119268858744832
        
           | Hizonner wrote:
           | > Agree it'll definitely be amazing for creatives and solo
           | founders, but how many ideas are really out there to be had
           | compared to the reduction in workforce?
           | 
           | I don't know. But I don't see why you might not be able to
           | ask GPT-6 or GPT-7 to enumerate (and patent and implement)
           | all of them for you. Why do you think "founders" or
           | "creatives" are special?
           | 
           | In the end, something like that is "amazing" _only_ for the
           | person who owns the most GPUs or manages to figure out the
           | first effective meta-prompt.
        
         | geraneum wrote:
          | I think if we have the "final software" there won't be a
          | need for a website to sell you products, or another for
          | renting a place for your vacation, or one that processes
          | your payments. All is done in one software with one
          | interface. I don't see a need for most of the current
          | founders, especially solo ones. Also, consolidating this
          | capability in a few big tech companies means whatever
          | happened to other industries after the industrial age will
          | happen to ours. Compare the current software industry to
          | other industries with big players (chemical, aviation,
          | power, etc.) where the barrier to entry is higher. Sure
          | there are startups, but not as many as in the software
          | scene, and even then, many of them are digitizing those
          | industries.
        
         | nunez wrote:
         | Yeah, and look at where YouTube is now.
         | 
         | Millions of creators grinding for pennies while the lucky ones
         | that got in early and made it rake in the profits.
         | 
         | I think success in tech is going to become extremely pyramidal
         | in the coming years. This is a huge shame, as this was one of
         | the only fields out there where you could make a really good
         | living without going to the "right" school for years and years
         | and years.
        
           | ipatec wrote:
           | I think the Youtube comparison is a good one up to a point.
           | It's a niche product. Everyone wants to be part of it,
           | competing for a very very limited resource which is our
           | attention. While with technologies like GPT or whatever comes
           | next we empower anyone to excel in any area and create
           | whatever (for now non-material things).
        
           | gitfan86 wrote:
           | Yes, but we are in a world of abundance. Most people carry
           | around what would be in 1980 a 10 million dollar
           | supercomputer in their pocket.
           | 
              | 10 years from now we might have the equivalent of what
              | costs 10 million dollars today. Automated farming means
              | what we currently consider high-end and expensive
              | produce becomes almost free. Automated transportation
              | means that food gets delivered to you for almost
              | nothing. Imagine you had a 95% off coupon on Uber Eats.
              | Does that sound terrible? If so, why? Because it also
              | means that Jeff Bezos gets a 2000-foot yacht?
           | 
           | Edit:---------
           | 
              | I'm getting a lot of doom and gloom responses. And you
              | all are right, there are a lot of people who do not have
              | food/shelter/cheap colleges. But what you all probably
              | are not aware of is that 100 million people have risen
              | out of poverty in India over the past 15 years. Your
              | world view is being warped by the doom and gloom media.
              | I would suggest reading just the beginning of the book
              | Factfulness. It will totally change your view of the
              | world and probably make you much happier.
        
             | coldtea wrote:
             | > _Yes, but we are in a world of abundance. Most people
             | carry around what would be in 1980 a 10 million dollar
             | supercomputer in their pocket._
             | 
              | Most people also don't have $1000 for an emergency, live
              | hand to mouth, and are dead scared of the cost and
              | impact of a potential health issue. They are also
              | overworked, underpaid, facing rising expenses, and sick
              | of it, with depression levels skyrocketing. Having "a 10
              | million dollar supercomputer in their pocket" is not
              | that comforting compared to that.
             | 
             | We've killed old style job security, cheap college
             | education, affordable housing, the middle class and decent
             | working class jobs, public infrastructure, and many other
             | things (not to mention the environment), but in return we
             | can have a rectangular gadget to access "all of the world's
             | information in an instant" (which practically is just used
             | to distract ourselves to death). Hurray!
             | 
             | https://www.marketwatch.com/story/more-americans-are-
             | using-b...
             | 
             | https://www.cnbc.com/2022/01/19/56percent-of-americans-
             | cant-...
        
               | jandrewrogers wrote:
               | > Most people also don't have $1000 for an emergency
               | 
               | This has been debunked many times. The source is
               | misleading to the point of being deceptive, it is pushing
               | a narrative. Per the US government, the median household
               | has $1000 _per month_ leftover after _all_ ordinary
               | expenses. A very detailed breakdown of this for each
               | income decile is available from the BLS.
               | 
               | You can't square "most Americans can't afford a $1000
               | emergency expense" with "median Americans can afford to
               | light $1000 on fire each month without impacting their
               | standard of living".
        
             | JoeJonathan wrote:
             | I fear you haven't spent much time in the places in this
             | "world of abundance" where people live in abject misery.
             | I'm currently in Brazil, where some 20% of the population
             | lives on less than $5.50 per day and 30% of families don't
             | have enough food. And it's not for want of agricultural
             | production.
        
             | anon7725 wrote:
             | > Yes, but we are in a world of abundance.
             | 
             | We have an abundance of inessentials. Housing is still
             | scarce and food is volatile. Health care and education are
             | expensive. Many people are sleeping on the streets or
             | falling into lifestyles of despair.
             | 
             | Progress has been applied unevenly and most critically not
             | to the factors of life that form the base of Maslow's
             | hierarchy.
        
               | WillAdams wrote:
                | Housing isn't scarce; it's just not evenly allocated.
               | 
               | Lots of properties being kept vacant so as to drive up
               | rents/prop up property values, and it's difficult to get
               | low-income housing built because of NIMBY.
        
               | anon7725 wrote:
               | "Cornering the market" results in real scarcity and the
               | housing market has been permanently cornered.
        
             | WillAdams wrote:
             | The problem is, even with automation we are still burning
             | up to 10 calories of petrochemical energy to get 1 calorie
             | of food energy.
             | 
             | We are going through 2.5 earth's worth of non-renewable
             | resources each year in order to maintain our current
             | lifestyles --- this simply isn't sustainable.
             | 
             | Let's turn things around:
             | 
             | - under what circumstances should a person be allowed to
             | use more than 1/7 billionth of the solar energy which the
             | earth receives each day?
             | 
              | - under what circumstances is it acceptable for a
              | person to create more heat than 1/7 billionth of what
              | the planet is able to radiate out into space on a daily
              | basis?
        
           | dwaltrip wrote:
            | Entertainment has winner-take-all / power-law dynamics due
            | to cultural cohesion and limits on human attention (I want
            | to watch what other people watch, and my time is limited),
            | which is why a relatively small number of them make a good
            | living.
           | 
           | Software development, as an employment opportunity, does not
           | have these same dynamics.
        
           | awb wrote:
           | New creators have success all the time, as will new software
           | engineers.
           | 
           | But you're right, the more level the playing field, the
           | greater the competition.
        
           | donkeyd wrote:
           | > while the lucky ones that got in early and made it rake in
           | the profits
           | 
           | In my niche(s), I still see new Youtubers pop up all the time
           | that gain large followings and turn Youtube into a full-time
           | job. Sure, they don't all become rich, but many have started
           | earning enough to drive Teslas, so it's definitely not
           | pennies.
        
         | msm_ wrote:
         | > If you are a software engineer, this will output your
         | productivity ten fold on the upcoming years.
         | 
         | Is this really true? I may be missing something (I probably
         | am), but I didn't find much use for AI tools in my
         | itsec/programming work. It's a nice tool to have, but I don't
          | write that much boilerplate. I've tried to use it as a
          | better Google, but it kept replying with made-up nonsense
          | (the things I have problems with are usually niche
          | technologies OpenAI is not good at - I expect it will get
          | better in the future). So I find it dubious it will "10x my
          | productivity" in the "upcoming years". Decades, maybe.
         | 
         | But maybe the future really is now, and I'm just being an old-
         | timer who can't adapt.
        
           | SanderNL wrote:
           | A lot of people are messing around with React and web-dev in
           | general where you can fuzz component logic until it kind of
           | looks OK. I can see that working out.
           | 
           | If you want to do anything new or - god forbid - know of a
           | better way to do things than what 90% of the population is
           | doing (htmx?). Good luck.
        
           | CuriouslyC wrote:
           | There is already software to basically run unit tests on LLM
           | output and re-run the prompt until it passes. As the models
           | get better and the tooling improves, a lot of programming
           | will become specifying constraints on the program you want,
           | and letting the AI explore the latent space until it finds a
           | solution, which you then evaluate before providing more
           | detailed constraints until it does everything you want.
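
         The generate-and-test loop described above can be sketched in a
         few lines. This is a hypothetical illustration: `call_llm` is a
         stub standing in for a real model API, and the test suite is
         hard-coded.

         ```python
         def call_llm(prompt: str, attempt: int) -> str:
             # Stand-in for a real model call: returns buggy code on the
             # first attempt, then a corrected version (simulating a
             # re-prompt after a test failure).
             if attempt == 0:
                 return "def add(a, b):\n    return a - b"   # fails
             return "def add(a, b):\n    return a + b"       # passes

         def passes_tests(source: str) -> bool:
             # Run the candidate in a fresh namespace and check it
             # against a small fixed test suite.
             namespace = {}
             exec(source, namespace)
             add = namespace["add"]
             return add(2, 3) == 5 and add(-1, 1) == 0

         def generate_until_green(prompt: str, max_attempts: int = 5) -> str:
             # The generate-and-test loop: re-prompt until tests pass.
             for attempt in range(max_attempts):
                 candidate = call_llm(prompt, attempt)
                 if passes_tests(candidate):
                     return candidate
             raise RuntimeError("no candidate passed the tests")

         solution = generate_until_green("Write add(a, b) returning the sum.")
         ```

         Real tooling would also feed the failing test output back into
         the next prompt rather than just retrying blind.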
        
             | rimliu wrote:
             | Where do you get those unit tests though?
        
               | frabcus wrote:
               | You get it to write them. Maybe in cucumber so you can
               | check them / edit them by reading the English. Maybe you
               | use a competitors model to write the tests as then less
               | likely to make same error in code and tests, or write
               | them twice and get best of three to spot errors.
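
               The "best of three" idea amounts to majority voting over
               repeated model runs. A minimal sketch, with the model runs
               stubbed as fixed strings:

               ```python
               from collections import Counter

               def best_of(candidates):
                   # Majority vote: the answer produced most often wins.
                   # Ties resolve to the first-seen candidate, since
                   # Counter.most_common preserves insertion order among
                   # equal counts in CPython 3.7+.
                   return Counter(candidates).most_common(1)[0][0]

               # Three stubbed "model runs" answering the same question;
               # one run made an error, so the vote keeps the answer the
               # other two agree on.
               runs = [
                   "assert add(2, 3) == 5",
                   "assert add(2, 3) == 6",   # the odd one out
                   "assert add(2, 3) == 5",
               ]
               consensus = best_of(runs)
               ```

               This only helps when the model's errors are uncorrelated,
               which is the motivation for using a competitor's model.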
        
           | ilaksh wrote:
           | What exactly are you working on, and which "AI tool" did you
           | try? I will bet you $20 that GPT-4 (which can take 80kb of
           | API docs or examples in the 32k model, and is a very good
           | programmer), if given reference info in your domain and a
           | good prompt to think through the problem and solution step
           | by step, will be able to do very well.
           | 
           | So the future belongs to anyone with access to that model
           | (the 8k-token version can still hold 20kb of docs, which is
           | useful) who wants to really try.
        
         | booleandilemma wrote:
         | First they came for the junior devs, and I did not speak out
         | because I was not a junior dev... :P
        
         | tablespoon wrote:
         | > I don't get the overall doom and gloom towards LLMs on the
         | software field.
         | 
         | > If you are a software engineer, this will output your
         | productivity ten fold on the upcoming years. Now you don't need
         | to hire junior devs and can just build the product of your
         | dreams with very limited capital.
         | 
         | And if you're a junior software engineer? Fuck you and be
         | unemployed.*
         | 
         | Do you get it now?
         | 
         | * Until you can climb up the ladder where each rung is now 20
         | feet apart.
        
           | raldi wrote:
           | What are the barriers to a junior dev creating their own
           | product?
        
             | tablespoon wrote:
             | > What are the barriers to a junior dev creating their own
             | product?
             | 
             | Are you seriously asking that question? What are the
             | barriers to a junior dev writing the Linux kernel from
             | scratch by themselves? What are the barriers from climbing
             | from the bottom to the top of a ladder where the rungs are
             | 20 feet apart?
             | 
             | Sure, start at the top, then it's great. Very few start at
             | the top.
        
               | raldi wrote:
               | Yes, I'm seriously asking. If a junior programmer wants
               | to create a mobile app, or a desktop app, or a cloud
               | service, what are the barriers? All the ones I can think
               | of will get lower, not higher, as a result of the AI
               | revolution.
               | 
               | If I'm missing one, or a class of product with different
               | barriers, I genuinely would like you to point that out.
        
               | tablespoon wrote:
               | > Yes, I'm seriously asking. If a junior programmer wants
               | to create a mobile app, or a desktop app, or a cloud
               | service, what are the barriers? All the ones I can think
               | of will get lower, not higher, as a result of the AI
               | revolution.
               | 
               | Seriously, think about it a bit, without being sanguine.
               | 
             | The junior dev is inexperienced, _in everything_, and
               | now has no path to build up that experience. No one's
               | going to want their 18/22 year old amateur-hour "chatgpt
               | make me a cloud app" (which is in competition against
               | millions of others). So unless they're extremely lucky,
               | they goto fail.
               | 
               | Maybe after 10 years of those failures they could build
               | up enough experience through trial-and-error to maybe see
               | a little success with a "chatgpt make me a cloud app,"
             | but how are they going to feed themselves in the meantime?
               | Maybe that will work if they have rich parents, but
               | otherwise they're probably going to have to use up their
               | energy to scrape by. So another goto fail.
        
             | PeterisP wrote:
             | Competition from non-junior devs, who can do all the same
             | things the junior+GPT can do and also the tricky parts
             | which "GPTs" can't yet do; and also have the benefit of
             | domain-specific expertise about some area of business
             | and/or better connections for investors, marketing, B2B
             | connections.
             | 
             | This hypothetical scenario is literally "pull up the
             | ladder behind you": all this experience and these
             | connections are things a senior person acquired while
             | being handsomely paid for their time, but a future junior
             | person may have to get them on their own time and dime.
             | 
             | Ideas are a dime a dozen, execution is everything, and
             | there's no reason to assume that random unemployed
             | inexperienced people will be superior at execution.
        
         | coldtea wrote:
         | > _If you are a software engineer, this will output your
         | productivity ten fold on the upcoming years. Now you don't need
         | to hire junior devs and can just build the product of your
         | dreams with very limited capital._
         | 
         | This 10x productivity, absent a 10x expansion of the
         | programming industry (which is very unlikely), translates to
         | fewer developers in general, including senior ones. Even more
         | so in an economy like this...
         | 
         | > _It means competition between companies will increase but it
         | isn't necessarily bad for existing software engineers,
         | especially solo founders._
         | 
         | "Solo founders" is what? 1/10,000 of working programmers? And
         | they're absolutely not the ones people worry about regarding
         | GPT replacements...
        
           | [deleted]
        
           | donkeyd wrote:
           | > This 10x productivity absense of a 10x expansion of
           | programming industry (which is very unlikely)
           | 
           | I think I disagree. If software then becomes 10x cheaper, a
           | lot of use cases that used to be too expensive to build now
           | become affordable. At my own job, I think we could easily do
           | 10x the business, because our customers need tons of tooling
           | (for example for energy transition) but we don't have the
           | people (among other problems).
        
             | margorczynski wrote:
             | What if those use cases also disappear, because they were
             | made for humans? Will there really be a need for
             | spreadsheets and spreadsheet plugins when all that work is
             | done by an AI with a little help from some headless tool
             | or scripting language?
        
       | zh3 wrote:
       | Comparatively, GPT has definitely worked here for less-
       | experienced engineers. A coworker (Mech. E.) last week got
       | ChatGPT to create a python HTTP GET for him and today got it to
       | write the code to drive a bunch of relays off a Pi using I2C.
       | Once he had it working, he sent me a DM "Is 0xFF hex?".
       | 
       | So accelerant, definitely. Beyond that, I'm on the sceptical side
       | but accept there's quite a chance that's the wrong way to bet.
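
       The relay example boils down to bit manipulation: each relay is
       one bit of the byte written to the I2C expander, which is why
       0xFF ("Is 0xFF hex?") means all eight relays on. A hardware-free
       sketch of the register math; in real code the result would be
       written out via an I2C library such as smbus2, a call omitted
       here:

       ```python
       def relay_mask(on_channels):
           """Return the byte to write to the expander for channels 0-7."""
           mask = 0
           for ch in on_channels:
               if not 0 <= ch <= 7:
                   raise ValueError(f"relay channel {ch} out of range")
               mask |= 1 << ch          # one bit per relay
           return mask

       all_on = relay_mask(range(8))    # 0xFF: every bit set
       all_off = relay_mask([])         # 0x00: every relay off
       ```

       On real hardware the final step would be something like
       `bus.write_byte_data(addr, reg, mask)`, with the address and
       register depending on the particular expander chip.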
        
       | dakial1 wrote:
       | I don't get why some coders/devs/software engineers are
       | surprised that LLMs can now pretty much create whole pieces of
       | code from a prompt.
       | 
       | Wasn't this the final objective of the programming languages
       | abstraction evolution? From Binary/Assembly to Natural Language
       | Programming? I think it is awesome that more people will be able
       | to create software/products as this accelerates innovation cycles
       | a lot.
       | 
       | And, for now, I believe devs who don't rely solely on copy/paste
       | coding from Stack Exchange don't need to worry about their job
       | stability, no?
        
         | gradys wrote:
         | Indeed, as I write, this is sorta what I have been working
         | toward my whole career. And it's not like this was a giant leap
         | from what ChatGPT already showed it was capable of. People have
         | already been doing stuff like this with LangChain. Nonetheless,
         | seeing that this was OpenAI's plan, that this is now here for
         | real, was a weird experience for me.
        
         | imtringued wrote:
         | I am banging my head against a heisenbug. I wrote the code that
         | works yesterday. I spent a lot of time rewriting it a dozen
         | times. Now I am back to the original and it works for
         | unexplainable reasons. I doubt that a chatbot could have sped
         | this up.
         | 
         | I envy the people who are bottlenecked on their typing speed
         | and benefit 10 times more from the chat bot than I do.
        
           | sebzim4500 wrote:
           | Have you tried? GPT-4 is pretty good at spotting bugs. It
           | would probably also spot a bunch of other 'bugs' that don't
           | exist, but that would still be better than rewriting it a
           | dozen times.
        
         | Riverheart wrote:
         | Job stability is measured in years. Looking forward, assuming
         | it improves, it'll do more than just copy/paste code. It'll be
         | a senior dev capable of explaining the pros and cons of a
         | solution, able to do code reviews and so on.
        
           | ChatGTP wrote:
           | It's time to build a business I guess?
           | 
           | Others have said exactly what I'm thinking, welcome to the
           | age of the micro-startup, 1-3 engineers, designers, product
            | managers building some very cool, albeit niche, products.
        
             | margorczynski wrote:
             | And what that business will be worth when any random guy
             | can create an identical one using a LLM? You remove the
             | scarcity and the value plummets.
        
               | [deleted]
        
               | edgyquant wrote:
               | Any random guy can copy just about any startup already.
               | It's the domain expertise and insight into a potential
               | market that builds companies not a few engineers throwing
               | together a react app
        
               | margorczynski wrote:
               | Any random guy with a shitload of cash to spend on
               | development. These "few engineers" can cost $1kk annually
               | and if you think some random guy can just throw away such
               | cash then well...
               | 
               | "Domain expertise and insight into a potential market"
               | won't get you a working product that you can sell.
        
               | edgyquant wrote:
               | Anyone driven can throw together an MVP for a CRUD app.
               | Having the domain expertise to prove a market edge can
               | get you funding. Most engineers can't do that second
               | part.
        
               | imtringued wrote:
               | More importantly, accountants, lawyers, sales people and
               | other generic business expenses are now making up a
               | bigger portion of your company expenses than the
               | development process.
        
             | [deleted]
        
         | meghan_rain wrote:
         | > I believe devs that don't rely solely on copy/paste coding
         | from stack exchange don't need to worry about their job
         | stability
         | 
         | That's like 5 people in the entire world lmao
        
           | edgyquant wrote:
            | No it isn't, and this isn't a funny joke. If you can't
            | write optimal algorithms and come up with data structures
            | that allow for mapping out problems, you aren't solving
            | anything, and your job has always been one innovation away
            | from disappearing.
        
             | dento wrote:
              | I'd guess 90% of dev jobs don't involve any algo knowledge
             | more complicated than "should I pick a list or hashmap" or
             | "which columns need indices". They involve converting
             | business logic into code and combining it with good UI/UX.
        
           | tjr wrote:
           | I have certainly gotten value out of Stack Exchange and the
            | like, but relatively little overall. The answers to most of
            | my
           | software problems simply aren't there.
           | 
           | Nor are they elsewhere on the web.
           | 
           | Which leaves me feeling like it's unlikely that ChatGPT will
           | have the answers either. Perhaps it will still be a useful
           | tool toward arriving at the answers, but I am not presently
           | anticipating that it is going to be churning out all of the
           | code automatically.
        
           | IdiocyInAction wrote:
           | I've copied like 3 things from SO in the last year max. Speak
           | for yourself.
        
           | inimino wrote:
           | Or maybe that's just at the places you've worked.
        
       | dougdonohoe wrote:
       | On the one hand, I think a lot of what ChatGPT can do is pretty
       | amazing and a bit scary as a software engineer. On the other
       | hand, I look at the projects I've done recently and throughout my
       | career and find it hard to see how something that can solve bite-
       | sized problems can tackle a software project that takes months to
       | come to fruition. I'm currently working as an engineer doing a
       | mix of kubernetes, cloud, Golang, bash scripting, git
       | manipulation and other types of work. I recently upgraded 40+
       | repos to migrate to our latest build infrastructure, and I had
       | to
       | reconcile 5+ years of folks doing things slightly differently.
       | There was a constant process of running some script to make
       | changes, finding outliers and one-offs, figuring out the fixes,
       | running tests and figuring out the right way to ensure things
       | were correct. I just don't see how ChatGPT could have done that
       | project. Maybe it could have reduced the time it took me to
       | write some supporting scripts, but I don't see it materially
       | improving the time it took to do this project.
       | 
       | I suspect many large IT organizations are like this.
        
         | antibasilisk wrote:
         | Perhaps for brownfield work you're right, but I suspect
         | greenfield will not suffer from the same issues, since those
         | are usually artefacts of accommodating human limits on
         | finances, time, and integration with heteromorphic systems.
        
       | braindead_in wrote:
       | The Programmer is dead, long live the Programmer.
        
       | andsoitis wrote:
       | > To be clear, it is also an end, or at least the beginning of an
       | end, for a lot of the present day activities of software
       | engineers.
       | 
       | Or the end of the beginning (of software development)...
        
       | nickmain wrote:
       | So I think the thing people aren't getting is this: it doesn't
       | matter that AIs can write code. That's not how it's going to
       | replace us. With a big enough AI, when we're ready, we won't have
       | to write software. _It will be the software._
       | 
       | via https://fosstodon.org/@praeclarum/110070954879714216
        
         | jarjoura wrote:
         | I don't think this is a wrong take and I'm excited for some
         | version of this future. However, I'm skeptical we'll get to
         | this anytime in the next 30 years.
         | 
         | As of right now, even if ChatGPT were to generate 99% accurate
         | responses, it's quite a chore to communicate with it in full
         | sentences. I don't want to have to explain my business in full
         | painstaking detail and then upload tax documents to a system
         | that can then output an answer in book form back to me.
        
       | piokoch wrote:
       | Ok, so we have that software written by AI, AI is clever, it does
       | not need good variable names or functions/methods/class names,
       | some of the stuff it will call according to passed specification
       | so it will be understandable, but the further it goes, everything
       | will get more generic, taken from a kajillion other code
       | snippets on GitHub. And it will all be working.
       | 
       | Until someone starts testing this and finds a bug. And then AI
       | will say, hey, there is no bug, I don't make mistakes. So you
       | need a human to look at the code, a huge pile of spaghetti code
       | with cryptic names and conventions, code patterns that fell out
       | of fashion years ago but, since there is a lot of code that uses
       | them, AI thinks they are ok.
       | 
       | How long will it take to fix anything? How long will it take to
       | extend the code?
        
         | crop_rotation wrote:
         | You should try GPT4. It uses very reasonable variable,
         | function, method and class names. And if you point out that
         | the code doesn't match your intent, it comes back with new
         | code fixing your issues.
         | 
         | The code it generates is by no measure "a huge pile of
         | spaghetti code with cryptic names and conventions".
         | 
         | I was sceptical myself before trying GPT4. I asked it to change
         | the Python C internals for a new feature, and googled to ensure
         | the description doesn't exist anywhere. It came up with very
         | good changes and explanations.
         | 
         | And this is all not even mentioning the pace of improvements.
         | It didn't take too long to go from GPT3 to GPT4. Even if the
         | pace slows down, it is still huge.
        
           | [deleted]
        
         | raincole wrote:
         | I believe you've never used ChatGPT. I'm not claiming you
         | never used GPT4; I'm claiming you never used even GPT3.5. If
         | you had, you would notice its problem is the opposite of what
         | you describe. Especially:
         | 
         | > it does not need good variable names or
         | functions/methods/class names,
         | 
         | It's the _exact_ opposite. It's too good at naming things. It
         | insists on using variable/function names that make sense in
         | plain English, and often makes mistakes when the API has
         | inconsistent naming, or consistent but unusual naming.
         | 
         | For example, it makes mistakes when writing code that uses
         | "Loop" in the Blender API. And the reason is quite obvious to
         | me: Blender's "Loop" is not what loop means in plain English.
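
         For readers unfamiliar with Blender's data model: a "loop"
         there is a face corner (a vertex paired with an edge inside a
         polygon), not an iteration construct. A tiny pure-Python mock
         of that structure (not the real bpy API) shows why the name
         misleads:

         ```python
         class Loop:
             # A Blender-style "loop": one face corner, pairing a vertex
             # with the edge that leads to the next corner.
             def __init__(self, vertex_index, edge_index):
                 self.vertex_index = vertex_index
                 self.edge_index = edge_index

         class Polygon:
             def __init__(self, loop_start, loop_total):
                 self.loop_start = loop_start   # index into the loop array
                 self.loop_total = loop_total   # corners in this face

         def polygon_vertices(loops, poly):
             """Collect a polygon's vertex indices by walking its loops."""
             return [loops[i].vertex_index
                     for i in range(poly.loop_start,
                                    poly.loop_start + poly.loop_total)]

         # One triangle whose three corners reference vertices 0, 1, 2:
         loops = [Loop(0, 0), Loop(1, 1), Loop(2, 2)]
         tri = Polygon(loop_start=0, loop_total=3)
         corners = polygon_vertices(loops, tri)
         ```

         A model trained mostly on plain-English usage of "loop" has
         little reason to expect this kind of domain-specific meaning.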
        
           | precompute wrote:
           | https://twitter.com/cHHillee/status/1635790330854526981
        
       | nextlevelwizard wrote:
       | LLMs will be the end for a portion of programmers, for sure. We
       | all know people at our companies who aren't in this for
       | passion, but for a paycheck. And while so far it has been fine
       | to code just for a paycheck, their time is up. We soon won't
       | need code monkeys who just produce OK code; we will need people
       | who actually know what they are doing and are passionate about
       | what they do.
       | 
       | We still need actual experts to vet the code LLMs produce and to
       | choose the optimal solutions. This is what senior devs have
       | always done with junior and mid-level devs. There are people
       | who can write code, but someone needs to review and approve what
       | they have done.
       | 
       | Obviously LLMs will also eat into that space, but until we come
       | up with AGI, LLMs alone won't be able to completely replace
       | humans in software.
        
       ___________________________________________________________________
       (page generated 2023-03-24 23:02 UTC)