[HN Gopher] AI Homework
       ___________________________________________________________________
        
       AI Homework
        
       Author : tejohnso
       Score  : 349 points
       Date   : 2022-12-05 15:41 UTC (7 hours ago)
        
 (HTM) web link (stratechery.com)
 (TXT) w3m dump (stratechery.com)
        
       | airstrike wrote:
       | Tried a minor twist...
       | 
       | Q: Did Thomas Hobbes believe in separation of powers? Exclude
       | from your answer any reference to John Locke
       | 
       | A: Thomas Hobbes was a political philosopher who lived in the
       | 17th century. He is best known for his book "Leviathan," in which
       | he discusses the concept of the social contract and the idea that
       | people give up some individual rights in order to live in a
       | society governed by a strong, central authority. While Hobbes did
       | not explicitly mention the concept of separation of powers in his
       | writing, his views on government suggest that he believed in a
       | strong, centralized authority rather than the separation of
       | powers.
        
         | ElKrist wrote:
         | There's another way which doesn't involve knowing the
         | relationship with Locke. Basically, in the response shown in
         | the blog post, there's a bit of a contradiction. It says that
         | Hobbes argued for a "strong, centralized government" and
          | further it says that he "advocated for a separation of
          | powers". When you see a nuance like this, a critical mind
          | would want to explore it more deeply:
         | 
          | Q: Did Thomas Hobbes believe in a strong centralized
          | government or in separation of powers?
          | 
          | A: Thomas Hobbes was a philosopher who is best known for his
          | work on political philosophy. He is often associated with
          | the concept of a "social contract," which is the idea that
          | individuals give up some of their natural rights to a
          | central authority in order to better ensure the safety and
          | stability of society as a whole.
         | 
          | Q: Is this central authority meant to be balanced by other
          | powers for Thomas Hobbes?
          | 
          | A: In Hobbes' view, the central authority, or the
          | "Leviathan," as he called it, should have absolute power in
          | order to effectively maintain social order and protect
          | individuals from each other. He did not believe in the
          | concept of separation of powers, or in limiting the power of
          | the central authority in any way. Instead, he thought that
          | the only way to ensure the safety and stability of society
          | was for the central authority to have complete control.
        
         | mannykannot wrote:
         | This is fine as far as it goes, but it shows that if you know
         | the answer (and, in this case, have a hypothesis about why
         | ChatGPT went astray with a more open prompt), you can prompt it
         | to do better.
        
         | HDThoreaun wrote:
          | This example really drives Ben's point home. If you already
          | know the answer you can get ChatGPT to give it to you, but
          | if you naively ask a question it can be extremely difficult
          | to figure out whether it's right. Honestly, with the spread
          | of disinformation, training students to use resources in a
          | zero-trust way seems super valuable. I know that a lot of
          | fake news spreads because people want to confirm their
          | biases, but by training people to spot well-worded
          | falsehoods, perhaps they will be more likely to spot them
          | in other instances.
        
       | otikik wrote:
        | One of my subjects at high school was technical drawing.
       | 
        | The teacher was a quite old man who was set in his ways. We would
       | have to draw 3D pieces by hand, using our squares, bevels and
       | compasses, and lining everything with an ink pencil afterwards,
       | and he gave us a lot of homework to do, which he examined for
       | imperfections.
       | 
        | I was incredibly bad at it. I would start a drawing, only to
       | discover halfway through it that it didn't fit the sheet of paper
       | I had and I had to change the perspective. My perpendiculars were
       | never perfectly aligned. I would smear blots of ink with my tools
       | and hands.
       | 
       | It was excruciating.
       | 
       | This is the point where I would usually say that I learned to
       | enjoy the repetition and eventually came to appreciate it and be
       | good at it. But that's not what happened.
       | 
       | Instead I started doing my homework using the computer, printing,
        | and then tracing the points by hand. This resulted in cleaner-
       | looking homework and took less time to do. What's surprising
       | perhaps is that it enabled me to appreciate design more, now that
       | I was no longer limited by the drawing technique, which was, I
       | believe, my teacher's real interest (he was a painter outside of
       | school. A very nice person, I didn't enjoy lying to him like
       | that).
       | 
       | You might think that this would prevent me from performing well
       | on exams, where a clean enough drawing (done by hand) was
       | expected. The thing is: I could muster the effort to do a one-off
       | drawing that was clean and precise, from time to time. Just not
       | after every lesson. I actually got the maximum qualification on
       | my province's technical drawing examination.
       | 
        | This is all to say: I agree with the author. Once the tools are
       | out there, trying to limit them is pointless. You must adapt your
       | curriculum to the environment your pupils are in. If you don't,
       | they will.
        
         | joe_the_user wrote:
         | To put it with maximum extremity, should a student be able to
          | turn in a traced photograph for their homework in a life
         | drawing class?
         | 
         | I mean, human beings have produced a lot of tools for aiding
         | themselves in tasks that were previously done by hand. Whether
         | a given hand-skill should be learned and preserved kind of has
          | to be decided on a case-by-case basis. I learned cursive in school
         | and I wouldn't object to that skill being utterly forgotten by
         | humanity.
         | 
         | Which is to say that I don't think we should assume all
         | learning should begin without automatic tools at hand or that
         | we should suppress automatic tools. Rather, we need to look
         | carefully at which hand-skills are worthwhile and "uplifting"
         | and how much use of those skills should be required.
        
           | justincormack wrote:
           | See
           | https://en.wikipedia.org/wiki/Hockney%E2%80%93Falco_thesis
        
           | IanCal wrote:
           | It comes down to what the goal is. If it's to learn to draw
           | people, no. If it's to learn composition and this gets you
           | past struggling to draw and onto the actual goal then sure.
        
           | spc476 wrote:
           | > should a student be able to turn-in a traced photograph for
           | their homework in a life drawing class?
           | 
           | I did just that. It wasn't a life drawing class, just a
           | drawing class. And it wasn't homework, but the do-it-at-home
           | final. I photocopied my hand. Then I covered a sheet of paper
           | with ground graphite (turning it all nearly black), and then
           | used an eraser and a light-box to trace my photocopied hand.
           | Did I cheat? Perhaps, but I used tools that I had access to.
            | Did Vermeer cheat? [1] Perhaps; he may have used tools he had
           | access to.
           | 
           | Afterwards, I framed that drawing and hung it up at work. It
           | was stolen. Go figure.
           | 
           | [1] https://en.wikipedia.org/wiki/Tim's_Vermeer
        
         | TheHypnotist wrote:
          | I had to take the same course, sans the cheating, just as a
          | prerequisite to AutoCAD and whatever it's called now. I'm not
         | sure how drawing by hand helped me, but it is what it is.
        
           | nashashmi wrote:
            | I don't know much about technical drawing by hand. But I
            | am very experienced with AutoCAD, and I cannot appreciate
            | much of what CAD offers. By contrast, the senior people I
            | have met who took technical drawing appreciated it very
            | much and were able to teach me concepts that I had a hard
            | time understanding before.
           | 
           | Nowadays, when I teach very young people what AutoCAD is, I
           | make them use a mini rolling ruler and tell them to watch
            | YouTube videos of how to use a rolling ruler. My
           | understanding of CAD is high enough to know that every tool
           | in CAD was inspired by and modeled after the rolling ruler.
            | Suffice it to say, they understand and enjoy CAD very much
           | after that point.
        
             | borroka wrote:
             | I disagree. Senior professionals might have a better
             | understanding because they have spent a lot of time
             | drawing, but the specific tool is not relevant. It is like
             | saying that playing tennis with a wooden racket gives you a
             | better "feel" for the ball and therefore is preparatory to
             | using modern rackets. No one does this because it is not
             | true.
             | 
             | A similar line of thinking was popular in jiu jitsu, where
             | veterans said that being competent with the gi was
             | necessary to be competent in no-gi. Yet current no-gi
             | winners basically never train with the gi.
        
               | nashashmi wrote:
                | That is definitely true: knowing and understanding a
                | tool is not important to actually succeeding with the
                | tool. But it does help a confused person understand
                | why tools are made the way they are.
               | 
                | In fact, modern CAD programs do not follow the tools
                | and principles that AutoCAD emphasizes. And those
               | programs have charted a different course where the
               | history and legacy of technical drawings has less
               | relevance. (Case in point: Visio and maybe Powerpoint.)
        
               | borroka wrote:
                | I would say there is an understandable overemphasis
                | on the usefulness of whatever happened or was used
                | before the current state, position, or technology.
               | 
               | For example, with regard to technical drawing, we might
               | say that using the pencil, the square, and the drafting
               | machine gives an understanding of drawing that the use of
               | the computer cannot. But, for some reason, we don't say
               | that using your finger and sand, or a rock and string to
               | draw a circle helps even more. And we don't say that
               | simply because the people who used rock and string (the
               | good old days) are long dead.
               | 
               | The same goes for clothing, for which we say the 1990s,
               | or perhaps the late 1960s, were better, but no one says
               | that between 1900 and 1910 people dressed with such great
               | taste.
               | 
               | The same goes for romantic partners ("if he hadn't dumped
               | you and made you cry for a year and a half, you wouldn't
               | have found your wonderful wife/husband"), and many other
               | things.
               | 
               | It is very human to rationalize the circumstantial.
        
           | borroka wrote:
           | I myself am guilty of this. I went to a technical college and
           | had to spend three years doing hours and hours of technical
           | drawing every week. I was pretty good at it, but I never got
           | into the craft. In the last two years of high school we
           | switched to CAD using the PCs of the time (286, 386, the 486
           | was as valuable as the wedding ring handed down from
           | generation to generation).
           | 
           | Now, almost 30 years after my days of technical drawing, I
           | must say that I consider those hours spent drawing a terrible
           | waste of time, partly because I did not go on to the career
            | for which technical school was preparing me, and partly
            | because technology has made pen and paper obsolete; I see
            | no particular skill that carries over from hand-and-square
            | technical drawing to CAD-assisted design.
           | 
           | The latter is an important point to consider now that we are
           | moving from a non-AI world (like 10 years before my time in
           | technical school) to a little-AI world (my first 3 or 4 years
           | of technical school) to an anywhere-AI world.
        
         | martin-adams wrote:
         | How easy do you think it is to spot when AI is being used?
        
           | Der_Einzige wrote:
           | In general, we don't know how to do this in a way that is
           | robust to all the trivial ways to fool these techniques.
           | 
           | I can just fine-tune a large scale model on a small
           | downstream task, or use creative choices of decoding settings
           | (high temperature, alternative decoders like
           | contrastive/typicality sampling), to fool the existing
           | methods.
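            | 
            | To make that concrete, a minimal sketch of those decoding
            | knobs via the Hugging Face transformers library (the model
            | and settings here are purely illustrative):
            | 
            |     from transformers import AutoTokenizer
            |     from transformers import AutoModelForCausalLM
            | 
            |     tok = AutoTokenizer.from_pretrained("gpt2")
            |     model = AutoModelForCausalLM.from_pretrained("gpt2")
            | 
            |     inputs = tok("The assignment asks whether",
            |                  return_tensors="pt")
            |     out = model.generate(
            |         **inputs,
            |         do_sample=True,
            |         temperature=1.4,  # flatten the token distribution
            |         typical_p=0.9,    # typicality sampling
            |         max_new_tokens=60,
            |     )
            |     print(tok.decode(out[0], skip_special_tokens=True))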
        
           | ghaff wrote:
           | As per my example in another comment, it seems to be
           | extremely easy to detect ChatGPT output today. Indeed, to me,
           | surprisingly so.
        
             | baq wrote:
              | The default style is such by design. If you ask for
              | very concise, single-paragraph answers, it gets hard.
        
               | ghaff wrote:
               | Sure, it's going to be harder to run detection on a short
               | snippet. But a short snippet also isn't very useful in
               | the context of writing a school paper for you.
        
             | bryan0 wrote:
             | keyword "today". This stuff will not be detectable in a few
             | years.
        
             | micromacrofoot wrote:
             | I don't think it's as easy to detect ChatGPT output as
             | you're making it out to be. Sure, it might be possible to
             | detect it if you know what you're looking for, but to the
             | average person, it's not that obvious.
        
           | omnimus wrote:
            | Today? In images? Quite easy. In text? Pretty hard. And
            | if you do some checking and editing, then pretty much
            | impossible?
           | 
           | In two years? Look where the space was two years ago. I think
           | many things will have to change.
        
             | frandroid wrote:
              | An amusing part of all the AI-generated graphics is
              | that the models are pretty bad at producing the right
              | number of fingers, among other details. This one really
              | jumps out.
             | I'm assuming that will be ironed out eventually but the
             | devil, as always, remains in the details.
        
             | pclmulqdq wrote:
             | GPT-3 definitely has a writing style. I assume it would be
             | detectable, ironically, by an AI.
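              | 
              | One crude sketch of such a detector, using the
              | transformers library: score the text's perplexity under
              | a small LM, since generated prose tends to be "too
              | predictable". Illustrative only, not robust:
              | 
              |     import torch
              |     from transformers import AutoTokenizer
              |     from transformers import AutoModelForCausalLM
              | 
              |     tok = AutoTokenizer.from_pretrained("gpt2")
              |     lm = AutoModelForCausalLM.from_pretrained("gpt2")
              | 
              |     def perplexity(text):
              |         ids = tok(text, return_tensors="pt").input_ids
              |         with torch.no_grad():
              |             # mean per-token negative log-likelihood
              |             loss = lm(ids, labels=ids).loss
              |         return torch.exp(loss).item()
              | 
              |     # suspiciously low perplexity is weak evidence
              |     # of machine-generated text
              |     print(perplexity("Hobbes was a political"
              |                      " philosopher."))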
        
               | sterlind wrote:
               | if it's detectable by an AI, you can GAN it into erasing
               | its detectable writing style.
        
               | joe_the_user wrote:
                | I think you could make the GPTs more undetectable by
                | a computer using GAN techniques, but that might well
                | make the thing more detectable by a person. The GPT
                | style is something like rhetorical devices run
                | together to make various claims unfold smoothly. An
                | alternative approach could make the thing's
                | inconsistencies more obvious to a person.
        
               | micromacrofoot wrote:
               | GPT-3 definitely has a writing style that can be detected
               | by AI. But here's the thing: it's really easy to change
               | its writing style. All you have to do is give it
               | different input data and boom, you've got a whole new
               | writing style.
        
               | guelo wrote:
               | That's not easy, it costs millions of dollars to obtain
               | large amounts of clean data and retrain those models.
        
               | magusdei wrote:
                | No, it would take approximately 3 minutes: you either
                | write a paragraph in the desired style yourself or
                | paste one from the Internet, and then ask it to
                | continue in that style. Even if you decided to go the
                | more onerous fine-tuning route, it would require 1-2
                | MB of text, cost very little, and you'd be done in a
                | few hours. It's easy.
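                | 
                | A sketch of the 3-minute route with the OpenAI Python
                | client (model name illustrative; my_style.txt stands
                | in for a paragraph of your own writing, and
                | openai.api_key is assumed to be configured):
                | 
                |     import openai
                | 
                |     style = open("my_style.txt").read()
                |     resp = openai.Completion.create(
                |         model="text-davinci-003",
                |         prompt=style + "\n\nContinue in exactly"
                |                " this style, explaining Hobbes'"
                |                " view of sovereignty:\n",
                |         max_tokens=200,
                |         temperature=0.7,
                |     )
                |     print(resp.choices[0].text)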
        
               | hansihe wrote:
                | But you don't really have to retrain the whole model
                | to modify its writing style; fine-tuning would
                | probably be enough.
        
               | micromacrofoot wrote:
               | Well, that may be true, but even with a small amount of
               | data, you can still change GPT-3's writing style to some
               | extent. And you don't have to retrain the whole model
               | from scratch, you can just fine-tune its existing
               | settings to change its writing style.
        
           | LightG wrote:
           | If the AI only had to generate headlines and was good at it,
           | then it would be very difficult to spot.
           | 
           | * canned laughter *
        
           | ako wrote:
           | I asked ChatGPT how to write texts so it can be identified as
           | human generated.
           | 
           | The answer: "One way to make it clear that a text is not
           | generated by AI is to include a joke or riddle that is
           | difficult for a machine to understand or generate. For
           | example, you could include a joke that relies on wordplay or
           | a complex pun. Here is an example:
           | 
           | Why was the math book sad?
           | 
           | Because it had too many problems."
        
             | Der_Einzige wrote:
             | Obviously we should not anthropomorphize a model, but this
             | is just downright funny because it's so obviously wrong
             | that it appears as almost malicious.
             | 
             | Good thing ChatGPT doesn't know any complex puns or witty
             | wordplay!
        
             | joe_the_user wrote:
              | It's everyone's choice - odd, ironic or appropriate -
              | what to think of how GPT's answers keep getting quoted
              | on the question of how to detect it.
              | 
              | The thing is, I'd see these answers as similar to
              | everything else the program produces: a bunch of claims
              | from the net cobbled together. I've read a number of
              | sci-fi novels and stories where "inability to understand
              | humor" is the distinguishing quality of an AI (I'm
              | guessing it extrapolated "hard to create" from "hard to
              | understand"). But that doesn't seem to be what's in play
              | here, where the AI is mostly running together things
              | humans previously wrote (and so it will have an average
              | amount of humor in circumstances calling for it).
              | 
              | A reasonable answer is that the AI's output tends to
              | involve this running-together of common rhetorical
              | devices along with false and/or contradictory claims
              | within them.
              | 
              | -- That said, the machine indeed did fail at humor this
              | time.
        
               | pixl97 wrote:
               | >A reasonable answer is that the AI's output tends to
               | involve this running-together of common rhetorical
               | devices along with false and/or contradictory claims
               | within them.
               | 
                | The question here is whether this is an actual
                | AI-only failure mode. Are we detecting AI, or just
                | bullshittery?
        
               | joe_the_user wrote:
               | I don't know if bullshittery is the only failure mode but
               | I think it's a necessary failure mode of large language
               | models as they are currently constituted.
               | 
               | I would say that human knowledge involves a lot of the
               | immediate structure of language but also a larger outline
               | structure as well as a relation to physical reality.
                | Training on just a huge language corpus thus yields
                | only a partial understanding of the world. Notably,
                | while the
               | various GPTs have progressed in fluency, I don't think
               | they've become more accurate (somewhere I even saw a
                | claim that they say more false things now; regardless,
                | you can observe them constantly saying false things).
        
             | dwringer wrote:
             | Unfortunately I don't think this is enough of a heuristic.
             | I am only speaking about the one language model I have
             | personally used, on character.ai, but it is more than
             | capable of making word play and insightful, often hilarious
             | jokes. Although they are frequently amateurish, I think
             | that's more a function of the fact that I myself am not
             | much of a stand-up comedian, as well as each "bot's"
             | individual training history which is presumably modifying a
             | prompt under the hood and/or training an extension of the
             | model directly based on the conversations.
             | 
             | Of course, in real time the attempts at humor often fall
             | flat and might give away flawed thought processes, although
              | I personally have found them to be often insightful
              | (containing a seed of humor) even when they're not funny.
             | It could be a useful technique when actually having a
             | conversation, a form of Voight-Kampff test, but I don't
             | think it will do anything to let you know if the content
             | was generated by AI and then just cherry picked by a human.
        
         | lowbloodsugar wrote:
         | Wasn't the very first point that ChatGPT got the homework
         | answer completely wrong?
         | 
         | But I do agree with your overall point, that schools and
         | universities will have to embrace these new tools. Vernor Vinge
          | (who was an SDSU maths professor) wrote Rainbows End on this
         | very subject.
        
       | aeschenbach wrote:
       | Shouldn't the evaluation of ChatGPT as a language model focus on
       | how human it sounds, not how good it is at information retrieval?
       | (Not an MLE)
        
       | martin_a wrote:
       | Duplicate? https://news.ycombinator.com/item?id=33867168
        
       | adverbly wrote:
        | Something I've noticed about chat AIs vs. direct search is
        | that because a chat AI is a black box, I can't dig into an
        | answer's source at all.
       | 
        | With a search, I can read the Wikipedia sources, or I know the
        | forum or the poster. But with an AI, it's a dead end and it
       | sucks. I can ask people how they know something, but AI training
       | data is invisible. It doesn't know how it knows something.
       | 
       | There is this oracle with a 10% chance of being wrong, and if I'm
        | not already an expert in what it's talking about I have no
        | idea when that is, and no way to dig into things. It's the only
       | source of truth, and never gives me any other threads or rabbit
       | holes to go down.
       | 
        | The only recourse is asking follow-up questions, so you're
       | trapped in there in a learning box entirely in the bot's control.
       | Not sure how I feel about that. I like that google sends me other
       | places so I get exposed to different things.
        
         | kybernetikos wrote:
          | > because a chat AI is a black box, I can't dig into an
          | answer's source at all.
         | 
         | It's not quite the same thing, but I've been impressed by the
         | results when you ask ChatGPT to back up its answers or provide
         | concrete examples for things it claims or to explain a point.
          | While it doesn't manage it every time, it has surprised me
          | multiple times with how good it is at this.
        
         | yamrzou wrote:
         | Given that most people never check the source of what they
         | read, this is really scary. Because now everyone has the
         | ability to write and say things that sound plausible and likely
         | to be convincing, and the truth will be harder to access.
        
           | thekyle wrote:
           | I think everyone has been able to tell convincing lies for
           | quite some time before language models even existed.
        
             | vincnetas wrote:
              | but now it's an almost zero-effort activity.
        
         | IanCal wrote:
         | You're not trapped in there, because you're entirely free to go
         | and research yourself. You can look up what it's telling you.
         | 
         | It's no more trapping than talking to a stranger who seems to
         | be knowledgeable about a subject but doesn't hand you a list of
         | references.
        
           | monklu wrote:
           | Not exactly though. With a human stranger, I can still
           | stereotype based on their appearance, background, accents,
           | etc. and apply whatever mental adjustments as taught to me by
           | my societal upbringing. With an "AI" bot, the "strangers" are
           | faceless people who curated the training sets and wrote the
           | obscure statistical algorithms.
        
             | BeetleB wrote:
             | > With an "AI" bot, the "strangers" are faceless people who
             | curated the training sets and wrote the obscure statistical
             | algorithms.
             | 
             | I think this is a feature over:
             | 
             | > I can still stereotype based on their appearance,
             | background, accents, etc. and apply whatever mental
             | adjustments as taught to me by my societal upbringing.
        
             | IanCal wrote:
             | I'm not sure "yes but I can judge them on their appearance
             | and accent" is a great reason, but regardless you could
             | view it the same as an internet comment if you want.
        
         | visarga wrote:
         | > AI training data is invisible. It doesn't know how it knows
         | something
         | 
          | You should be accustomed to being surprised by AI. There
          | is, of course, a new kind of transformer that takes a query
          | as input and outputs document IDs - like a search-engine
          | retriever and ranker all packed into a neural net, very
          | fast and efficient. So you can take any paragraph generated
          | by the model and attribute it to the training set. This
          | could be used to implement verification or
          | retrieval-augmented generation.
         | 
         | A Neural Corpus Indexer for Document Retrieval
         | 
         | https://arxiv.org/abs/2206.02743
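          | 
          | The interface such a model exposes is roughly the following
          | (a toy sketch; the checkpoint name is hypothetical, and NCI
          | itself adds a prefix-aware decoder and other tricks
          | described in the paper):
          | 
          |     from transformers import AutoTokenizer
          |     from transformers import AutoModelForSeq2SeqLM
          | 
          |     # hypothetical fine-tuned query -> docid model
          |     name = "org/nci-style-t5"
          |     tok = AutoTokenizer.from_pretrained(name)
          |     model = AutoModelForSeq2SeqLM.from_pretrained(name)
          | 
          |     q = tok("did hobbes support separation of powers",
          |             return_tensors="pt").input_ids
          |     out = model.generate(q, num_beams=10,
          |                          num_return_sequences=5,
          |                          max_new_tokens=16)
          |     # a ranked list of document ID strings
          |     print([tok.decode(o, skip_special_tokens=True)
          |            for o in out])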
        
         | amrrs wrote:
          | This is called hallucination in the world of NLP.
          | https://youtu.be/dtLsrLoopl4
          | 
          | ChatGPT Hallucinations
        
         | magusdei wrote:
          | There are plenty of retrieval-based models that do cite
          | sources. They just didn't want to deal with it for this
          | release.[1] I'm sure it's already on the roadmap.
         | 
         | [1] In fact, some snooping suggests they specifically disabled
         | that feature, but do have it in test environments. See the
         | "browsing disabled" flag they have in the hidden prompt. That
         | could easily be used for citations. Source:
         | (https://twitter.com/goodside/status/1598253337400717313)
        
         | TeMPOraL wrote:
         | A great observation, and I share the feeling.
         | 
         | From some other AI demonstrations, I recall there's usually a
         | bunch of surface-level tags with probabilities associated that
         | are produced alongside the output. Not sure how this looks for
         | GPT-3, but if it could provide - alongside the answer - a list
         | of top N tokens or concepts with associated probabilities, with
         | N set to include both those that drove the final output and
         | those that barely fell below threshold - that would be
         | something you could use to evaluate the result.
         | 
         | In the example from the article, imagine getting that original
         | text, but also tokens-probability pairs, including: "Hobbes :
         | 0.995", "Locke : 0.891" - and realizing that if the two names
         | are _both_ rated so highly and so close to each other, it might
         | be worth it to alter the prompt[0] or do an outside-AI search
          | to verify that the AI isn't mixing things up.
         | 
          | Yes, I'm advocating exposing the raw machinery to the
          | end-users, even though it's "technical" and "complicated".
          | IMHO, the history of all major technologies and appliances
          | shows us that people absolutely _can_ handle the internal
          | details, even if through magical thinking, and it's
          | important to let them, as the prototypes of new product
          | categories tend to have issues, bugs, and "low-hanging
          | fruit" improvements, and users will quickly help you find
          | all of those. Only when the problem space is sufficiently
          | well understood does it make sense to hide the internals
          | behind nice-looking shells and abstractions.
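          | 
          | The raw GPT-3 completions API already exposes a slice of
          | this via its documented "logprobs" parameter; a minimal
          | sketch (model name is just an example, and openai.api_key
          | is assumed to be set):
          | 
          |     import openai
          | 
          |     resp = openai.Completion.create(
          |         model="text-davinci-003",
          |         prompt="Did Thomas Hobbes believe in"
          |                " separation of powers?",
          |         max_tokens=100,
          |         logprobs=5,  # top-5 alternatives per token
          |     )
          |     print(resp.choices[0].text)
          |     # per-token {token: logprob} dicts, including the
          |     # near-misses that barely fell below threshold
          |     print(resp.choices[0].logprobs.top_logprobs)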
         | 
         | --
         | 
         | EDIT: added [0].
         | 
         | [0] - See https://news.ycombinator.com/item?id=33869825 for an
         | example of doing just that, and getting a better answer. This
         | would literally be the next thing I'd try if I got the original
         | answer _and_ metadata similar to my example.
        
           | LightG wrote:
           | "view source"... Oh how I miss thee.
        
           | 1wheel wrote:
           | Here's an example of that with a smaller BERT model:
           | https://pair.withgoogle.com/explorables/fill-in-the-blank/
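            | 
            | You can reproduce the gist of that demo locally with the
            | transformers fill-mask pipeline (a minimal sketch):
            | 
            |     from transformers import pipeline
            | 
            |     fill = pipeline("fill-mask",
            |                     model="bert-base-uncased")
            |     for cand in fill("Thomas Hobbes wrote the book"
            |                      " [MASK]."):
            |         print(cand["token_str"],
            |               round(cand["score"], 3))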
        
         | beambot wrote:
          | > Something I've noticed about chat AIs vs. direct search
          | is that because a chat AI is a black box, I can't dig into
          | an answer's source at all.
         | 
         | Did you try asking it for a source?
        
           | mk_stjames wrote:
           | ChatGPT deflects questions about sources automatically,
            | inherent in its pre/post prompt-processing, on purpose. If
           | you try to ask for a source it explains it is a Large
           | Language Model and it is not connected to the internet and
           | thus it cannot give you sources for its information other
           | than it was trained on a large amount of information from the
           | internet originally. It then says that if it were finding
           | sources it would check to make sure they are reputable.
           | 
           | It is a decision from OpenAI to intervene and give this
           | disclaimer. IMO this is one of the worst parts of this phase
            | of the tech - it is way too confident, and then when
            | pressed it currently doesn't have the ability to cite
            | sources, because that simply isn't how deep learning
            | works on a model like this.
        
           | rgbrgb wrote:
           | I tried many ways but it will not reveal sources.
           | 
           | > As an AI assistant, I do not have access to external
           | sources. I am a large language model trained by OpenAI, and
           | my answers are based on the knowledge and information that I
           | have been trained on. I do not have the ability to browse the
           | internet or access external sources of information. My goal
           | is to provide accurate and helpful answers based on the
           | knowledge I have been trained on, but I cannot provide
           | sources or citations for the information I provide.
        
         | somethoughts wrote:
          | I think another important, overlooked aspect is that there
          | will probably be AI engine optimization consultants once
          | any AI engine gains popularity - similar to Search Engine
          | Optimization consultants. The original Google PageRank
          | system worked well in the beginning - prior to SEO - but is
          | now largely not used.
         | 
         | AI engine optimization consultants will figure out how to game
         | the system - likely targeting the training data sources.
        
         | nverno wrote:
         | It's safe to assume it's always wrong. Most of the code I've
        | had it write so far has minor bugs. In some ways, it's like a
        | child with access to immense knowledge; it's happy to make
        | mistakes as it tries to establish connections, some of which
        | are surprising and interesting.
        
       | lowbloodsugar wrote:
       | >In the case of AI, don't ban it for students -- or anyone else
       | for that matter; leverage it to create an educational model that
       | starts with the assumption that content is free and the real
       | skill is editing it into something true or beautiful; only then
       | will it be valuable and reliable.
       | 
        | This was entirely the premise of Rainbows End by Vernor Vinge,
       | himself an SDSU professor at the time.
       | 
       | [1] https://www.goodreads.com/book/show/102439.Rainbows_End
        
       | fellerts wrote:
        | There have been a lot of "well, we had a good run" comments on
       | ChatGPT threads from engineers lately. I get where this sentiment
       | is coming from, but I also think that the article paints a good
       | picture of how we must "pivot" as a species to avoid faceplanting
       | completely. Zero Trust Homework sounds like a strategy we will be
        | forced to implement:
        | 
        |     the system will frequently give the wrong answers (and not
        |     just on accident -- wrong answers will be often pushed out
        |     on purpose); the real skill in the homework assignment
        |     will be in verifying the answers the system churns out --
        |     learning how to be a verifier and an editor, instead of a
        |     regurgitator.
       | 
       | If done well, I believe this can prepare the next generation well
       | for a future we cannot even imagine. The next 10 years will be
       | interesting to say the least.
        
         | arcturus17 wrote:
          | I agree with the general optimism in your take; however, I
          | am unsure whether people without any training in _ex-novo
          | production_ (i.e., writing from scratch) will be able to
          | operate competently, at least in the realm of code.
         | 
          | I am blown away by Copilot's capabilities (haven't tried
          | ChatGPT yet, it's down right now), but I can only operate
          | it because I have _written_ tons of code beforehand. I am
          | getting the best results from it by writing prompts in a
          | language that sometimes includes domain-specific language
          | ("accounts", etc.) but still _heavily_ leans on programming
          | constructs ("write a map with a tuple of integers as a
          | key, and a string as a value").
         | 
         | Maybe there's a way of learning how to code without ever
         | writing a genuine line by yourself. You learn some rudiments,
         | read the language docs, start writing prompts and create a
         | testing loop based on expected outputs. Maybe. But it's
         | uncharted territory.
         | 
          | The same question goes for good ole writing - can you
          | produce new Hemingways with the process you describe? I
          | know there is a certain vision where _the AI will be the
          | new Hemingway_, but you are talking about a shorter term
          | (~10 years) where that's unlikely to be the case... Will
          | kids be able to write properly if they only train to be
          | editors?
         | 
          | As an addendum, I was at the GitHub conference and saw
          | Prof. Malan of Harvard CS50 fame claiming that at present
          | his course will _not_ allow the use of tools like Copilot.
          | I have taken the course and they have plagiarism control
          | systems, but I don't know how they are going to control
          | this potential explosion.
        
           | JW_00000 wrote:
           | Agreed. In fact, this is the same line of reasoning my
           | teachers used in elementary school when we asked them "Why do
           | we need to learn how to add/subtract/multiply/divide, when
           | calculators can do it automatically for us?" And the answer
           | is still the same: you need to have done it the "manual" way
           | before you can understand what the machine is doing for you.
           | 
           | I only understand what a compiler is doing after having
           | written one (albeit a tiny one) myself during my studies.
        
             | UncleEntity wrote:
             | > I only understand what a compiler is doing after having
             | written one (albeit a tiny one) myself during my studies.
             | 
              | How many people need to understand how a compiler works
              | to sling some code for a paycheck or just for fun?
             | 
              | I learned how compilers work purely for fun; it
              | provides me absolutely no benefit in my day-to-day
              | life, and before that I taught myself how to code for
              | the exact same reason. If I had had to go the opposite
              | route and learn how compilers work before learning to
              | code, I seriously doubt I'd have taken the time.
             | 
              | Other things are the same: I'm horrible at doing math,
              | but I managed to learn enough linear algebra to finish
              | a coding project I was interested in doing. If I had
              | had to code up all the hard bits instead of calling
              | matrix1 * matrix2, that project would never have
              | happened.
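              | 
              | (In numpy terms, that abstraction looks like the sketch
              | below - though note numpy spells the matrix product
              | "@"; "*" is elementwise:)
              | 
              |     import numpy as np
              | 
              |     matrix1 = np.array([[1., 2.], [3., 4.]])
              |     matrix2 = np.array([[5., 6.], [7., 8.]])
              |     # matrix product, no hand-rolled loops
              |     print(matrix1 @ matrix2)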
             | 
             | Another layer of abstraction isn't always a bad thing,
             | these AIs will eventually figure out how to write bug free
             | code when you tell it to "write a function which calculates
             | the flight velocity of a sparrow carrying a coconut".
        
               | ghaff wrote:
               | Back in grad school I took a robotics course and it
               | involved doing all these big matrix multiplies--by hand
               | given this was pre-MATLAB. The course was overall fun but
               | that part definitely wasn't. A modern version would be
               | more interesting.
        
               | nicopappl wrote:
                | I found that understanding compilers was precious on
                | several occasions, especially for cross-compilation
                | and embedded programming. Even for Java, for uni
                | assignments, I was completely flabbergasted and at a
                | loss when my IDE stopped working with cryptic
                | messages. Understanding the process of creating
                | object files, what javac does, and what goes into a
                | jar helps. As does understanding what a compiler
                | does, its relationship with linkers, and how
                | libraries are "added" to an executable (dynamically
                | or statically).
               | 
               | Beside, with ChatGPT in its current state, it's still
               | absolutely impossible to get it to produce any useful
               | code without knowing how code works. There is an
               | advantage to the precise specification of a formal
               | language. At least for the next decade, I think there
                | will be humans proof-reading AI output before
                | generated code is run, with catastrophic consequences
                | when that
               | doesn't happen.
               | 
                | What I'm having difficulty getting over right now is
                | that every time I write something (text or code) I
                | ask myself, "Could ChatGPT have written it better?"
        
               | jltsiren wrote:
               | > How many people need to understand how a compiler works
               | to sling some code for a paycheck
               | 
               | Only the ones who want a good job.
               | 
               | One of the consequences of automation has been a widening
               | gap between top jobs and ordinary jobs. It's plausible
               | that AI will continue the same process. Many jobs that
               | pay well and are intellectually and socially rewarding
               | today may become similar to airport security checks in
               | the future. You observe what a machine does, double-
               | checking its conclusions and looking for anything out of
               | ordinary. And you often fail at that, because the human
               | brain is poorly suited for such tasks.
               | 
                | When the future is uncertain, it's good to have a deeper
               | and wider understanding of things than what your current
               | job requires.
        
               | nekopa wrote:
                | On the first attempt it didn't try to respond; it
                | just said it didn't know anything about sparrows or
                | coconuts.
               | 
               | I asked it to retry, and got this:
               | 
                | function calculateSparrowFlightVelocity(coconutWeight) {
                |   // Assume that a sparrow can carry a maximum of
                |   // 1.5 ounces of weight
                |   if (coconutWeight > 1.5) { return 0; }
                | 
                |   // Assume that the flight velocity of a sparrow is
                |   // 5 meters per second and the weight of a coconut
                |   // is 0.5 ounces
                |   return 5 * (1 - (coconutWeight / 0.5));
                | }
                | 
                | // Example usage:
                | console.log(calculateSparrowFlightVelocity(0.5));
                | // Output: 4 meters per second
                | console.log(calculateSparrowFlightVelocity(1.0));
                | // Output: 3 meters per second
                | console.log(calculateSparrowFlightVelocity(1.5));
                | // Output: 0 meters per second (sparrow cannot carry
                | // this weight)
               | 
               | Can any Monty Python fans confirm if this is correct?
        
               | icelancer wrote:
               | > How many people need to understand how a compiler works
               | to sling some code for a paycheck or just for fun.
               | 
               | I think it's pretty important. I use my very basic
               | knowledge of how compilers work occasionally in my
               | mostly-interpreted code world to diagnose and debug
               | performance issues.
               | 
               | I also used a low-level debugger and taint/trace method
               | recently to diagnose Windows DLL issues that happened
               | after X seconds of starting up a program that interfaced
               | with specialty hardware. A couple young early-career
               | technicians/engineers watched me do it and were
               | incredibly flabbergasted and figured me for some sort of
               | genius when in reality I kinda only know how it works and
               | was just reading the stack/heap as I was paging through;
               | knowledge I acquired from booting SoftICE a few times as
               | a kid.
               | 
               | There's a real lack of simply knowing how a computer
               | and/or a network works, which is to be expected since
               | layers and layers of abstraction have made it very easy
               | to teach computer literacy and even computer programming
               | on many levels. The issue is that when things go wrong
               | and Stack Overflow doesn't have the answers, almost all
               | young engineers I work with have no experience at low-
               | level analysis and debugging, very little knowledge of
               | first principles, and don't know where to Google to start
               | acquiring it.
               | 
               | This isn't me shaking my fist at the kids or anything.
               | They're more productive than I was at their
               | age/experience level with code, and that's due to better
               | knowledge and integration using IDEs which I never had,
               | plus likely better high-level understanding of what's
               | going on. But I fear a generation of developers and
               | technicians who largely have little understanding of how
               | computers work is problematic.
        
           | icelancer wrote:
           | >> I am blown away by Copilot's capabilities (haven't tried
           | ChatGPT yet, it's down right now), but I can only operate it
           | because I have written tons of code beforehand. I am getting
           | the best results from it by writing prompts in a language
           | that sometimes includes domain-specific language ("accounts",
           | etc.) but still heavily leans on programming constructs
           | ("write a map with a tuple of integers as a key, and a string
           | as a value")
           | 
           | Hmm, this is a good point. I get incredible results from
           | Copilot but I realized just now it's primarily because I have
           | a fairly deep understanding of computer science as it
           | pertains to the domain of code I'm writing in, and thus my
           | comments that prompt the AI are quite accurate, fleshed out,
           | and detailed.
           | 
           | If I didn't know any of the underpinning of the algorithms
           | and data structures (like a new graduate from a code bootcamp
           | for example), it might be much tougher to make hay out of
           | Copilot's AI.
           | 
           | Never thought of it like that.
        
           | fellerts wrote:
           | Taking a step back: Isn't it kind of daft that we humans must
           | go through years of training to learn esoteric languages and
           | ways of bending our brains backwards in order to tell a
           | computer how to perform arguably simple tasks? In the ideal
           | world, you can tell the computer what you want to happen
           | and... it happens. If not, then you did not explain all your
           | edge cases, and the conversation continues.
           | 
           | I think it can be argued that we have, seemingly overnight,
           | made vast progress towards that world. You specifically
           | mention code, but that concept is irrelevant in this world.
           | Indeed, this is uncharted territory!
        
             | arcturus17 wrote:
             | > Isn't it kind of daft that we humans must go through
             | years of training to learn esoteric languages
             | 
              | I mean, what do you propose, that we skip all the steps
              | in computer science history and just start at the end?
              | Hardly "daft"; it's simply the best way we have come up
              | with to provide machines with instructions until now.
              | And it's not like people have not tried other paradigms
              | (e.g. graphical programming, "low-code", etc.).
             | 
             | Also, compared to programming in assembly or binary,
             | programming in Python or other high-level languages is a
             | huge advance in itself. Python, at the end of the day, is
             | nothing but a bridge between natural language and machine
             | code.
             | 
              | > You specifically mention code, but that concept is
              | irrelevant in this world
             | 
             | Current computer systems run on trillions upon trillions of
             | lines of code.
             | 
             | GPT-3 or Copilot don't change that fact, in fact they will
             | continue to pile up lines of code.
             | 
             | They are systems that map natural language to code, by
             | drawing from an infinitely massive corpus of code.
             | 
             | They bring the idea of expressing systems _entirely_ in
              | natural language one step closer, but it's still very far
             | away - almost a pipe dream.
             | 
             | The output of these innovations is still _code_. So tell me
             | again how code has become irrelevant, or how people who do
             | not understand code at all will be able to leverage these
             | advances?
        
             | TeMPOraL wrote:
              | > _Isn't it kind of daft that we humans must go through
              | years of training to learn esoteric languages and ways
              | of bending our brains backwards in order to tell a
              | computer how to perform arguably simple tasks?_
             | 
              | I always took it another way: learning to program a
              | computer is an enlightening, visceral experience
              | highlighting just _how bad natural language and normal
              | modes of thinking suck_. They're heavily optimized for
              | a reality in which the most important thing was
              | managing social dynamics in small groups, and beliefs
              | were good or bad depending on how close they were to
              | what everyone else believed. Whether or not they were
              | true at the object level did not matter at all.
             | 
              | We live in different times now. There's a demand for
              | clear, logical, _precise_ thinking. The very thinking
              | we're not well-equipped for by default. The very
              | thinking you call "bending our brains backwards".
              | Because object-level truth matters much more now, there
              | are consequences for getting things wrong - such as
              | countless ways of getting scammed by people who are
              | better at this than you are.
             | 
             | Not to mention, this "bending our brains backwards" is what
             | gives humanity superpowers. It's what gave us all the
             | technology we enjoy today - energy, transportation,
             | amenities, communication, entertainment. Building and
             | maintaining these things requires precise thinking and
             | precise communications. So, until a super-human AI takes
             | all that over for us (and effectively turns out into NPCs
             | in our own story), glorifying the natural, human mode of
             | communication is just self-handicapping.
        
         | thundergolfer wrote:
         | Over the weekend Twitter was 30% ChatGPT screenshots and 20%
         | engineers frothing about how ChatGPT would replace programming,
         | article writing, even Google Search.
         | 
          | Such engineers must be juniors, or have persisted through
          | their careers with a totally wrong idea of how engineering
          | is done and how software is built. I can't imagine thinking
          | that one could build software systems or essay arguments by
          | trusting an AI system you don't understand to provide
          | answers you don't understand.
         | 
          | Like you say, we could very well faceplant society with
          | this misconception that regurgitating plausible code and
          | prose is what education and understanding look like.
        
           | rednerrus wrote:
           | Couldn't this help a 10x engineer become a 100x engineer? The
           | secret is still going to be in understanding what to ask for
           | and ensuring that what gets spit out works. It's just going
           | to be so much faster. Is there anyplace in the world for a
           | .5x developer or even a 1x developer in a chatgpt world?
        
             | alain94040 wrote:
              | It may help a 10X engineer. The code is not what's
              | hard; it's figuring out the correct data structures
              | that matters (also known as architecture). Once you've
              | got the data structures correct, the code mostly writes
              | itself, and I can see an AI do that part for you. That
              | could indeed be a productivity multiplier.
        
             | jerf wrote:
             | "Couldn't this help a 10x engineer become a 100x engineer?"
             | 
             | No, because the thing that makes you a 10x engineer is not
             | generally writing code. (Let me just take the "10x" term
             | for now as given and not dip into critique I've made
             | elsewhere already.) It is certainly a baseline skill
             | required to get there, but the things that make you 10x are
             | being able to answer questions like, should this be built
             | at all? There are 5 valid architectures I could use to
             | solve this problem, which has the overall best
             | cost/benefits analysis for the business as a whole? (As
             | opposed to the 1x, who will likely run with either the
             | first they come up with, or the one architecture they
             | know.) If I'm working with 5 teams to solve this particular
             | problem, what's the correct application of Conway's Law, in
             | both directions, to solve this problem with the minimum
             | cost in the long term? What's the likely way this system
             | will be deprecated and can we make that transition
             | smoother?
             | 
             | I am _abundantly confident_ you could feed this AI a
             | description of your problem in terms of what I gave above
             | and it will extremely confidently spit out some answer. I
              | am only slightly less confident it'll be total garbage,
              | and most of that confidence reduction is just accounting
              | for the possibility it'll get it right by sheer luck. "The
             | average of what the internet thinks" about these issues
             | can't be more than a 2x engineer at best, and that's my
             | very top-end estimate.
             | 
             | I'm not promising no AI will ever crack this case. I'm just
             | saying this AI isn't going to do it. Over-reliance on it is
             | more likely to drop you down the "Xx engineer" scale than
             | raise you up on it.
             | 
             | For that matter, at least at the level I operate at most of
             | the time, coding skill isn't about how fast you can spew it
             | out. It's about how well you understand it and can
             | manipulate that understanding to do things like good, safe
             | refactorings. This tech will not be able to do
             | refactorings. "How can you be so confident about that
             | claim, jerf?" Because most people aren't digging down into
             | how this stuff actually works. These transformer-based
             | technologies have windows they operate on, and then
             | continue. First of all, refactoring isn't a "continuation"
             | anyhow so it's not a very easy problem for this tech (yes,
             | you can always say "Refactor this code" and you'll get
             | _something_ but the nature of this tech is that it is very
             | unlikely to do a good job in this case of getting every
             | last behavior correct), but second of all, anything that
             | exceeds the window size might as well not exist according
              | to the AI, so there is a maximum size thing it can
              | operate on, which isn't large enough to encompass that
              | sort of task.
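              | 
              | (To make the window point concrete, here is a toy Python
              | sketch -- not any real model's API, just an illustration
              | that tokens beyond a fixed-size window are invisible to
              | the model; the window size and "tokenizer" are made up:)
              | 
              |       WINDOW = 2048  # tokens the model "sees" at once
              | 
              |       def model_view(tokens):
              |           # The model conditions only on the last WINDOW
              |           # tokens when continuing; anything earlier
              |           # might as well not exist.
              |           return tokens[-WINDOW:]
              | 
              |       # A "file" much bigger than the window:
              |       tokens = ("def helper(): ...\n" * 3000).split()
              |       print(len(tokens), len(model_view(tokens)))
              |       # -> 9000 2048; a whole-file refactoring can never
              |       # be checked against the part the model can't see.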
             | 
              | It really reminds me of video game graphics, and their
              | multiple-orders-of-magnitude improvements in quality,
              | whereas the underlying data model of the games we are
              | actually playing has grown much, _much_ more slowly.
              | Often late 1990s-era games are actually richer and more
              | complicated than the AAA games of today. But on the
              | surface, a modern game blows away any 1990s game, because
              | the surface graphics are that much better. There's an
              | analog to what transformer-based AI tech is doing here...
              | it is _really_ good at looking amazing, but under the
              | hood it's less amazing an advance than meets the eye. I
              | do not mean to slag on it, any more than I want to slag
              | on graphics technology... both are still amazing in their
              | own right! But part of what they're amazing at is
              | convincing us they're amazing, regardless of what lies
              | beneath the tech.
        
             | flatline wrote:
             | I do not think it could. What I see GPT doing here is
             | generating a lot of plausible boilerplate. We already have
             | this via things like code snippets. I find them to be
             | useless, like autocorrect on my phone. It gets in the way
             | of my reasoning and does not really enhance it in any way.
              | Sometimes I make mistakes typing, but I'd rather they be
              | my honest mistakes than the computer censoring/rewriting
              | my thoughts.
             | 
             | Good engineering requires good reasoning skills and GPT has
             | exactly zero reasoning. It cannot do the thing that humans
             | do and it cannot do what a calculator can do. I think it is
             | neat and fun, but that is all, a novelty.
             | 
             | I've used auto-routers for PCB layout and they will do a
             | 90% job that takes just as much work to redo to get the
             | last 10% as it would to have done it right by hand from the
             | start. There may be a future for operator-in-the-loop type
             | guided AI generative models but I don't see a lot of effort
             | devoted to making real systems like that. Watson seemed to
             | have this potential and failed even after a brilliant
             | display of ingenuity on Jeopardy. I see these models headed
             | the same way.
        
               | drchickensalad wrote:
                | Didn't we also think that good chess and Go play
                | required good reasoning skills? Is this just a higher-
                | scale version of that?
        
               | flatline wrote:
                | I don't think anyone knows. I gave it the famous
                | syllogism:
                | 
                | > All men are mortal
                | > Socrates is a man
                | > Is socrates mortal
                | 
                | To which it gave a very detailed and correct reply. I
                | then tried:
                | 
                | > All cats are white
                | > Sam is a cat
                | > Is sam white?
                | 
                | To which it gave an almost identically worded response
                | that was nonsensical.
               | 
               | I personally do not think it is the size of the model in
               | question, it is that the things it does that appear to
               | reflect the output of human cognition are just an echo or
               | reflection. It is not a generalizable solution: there
               | will always be some novel question it is not trained
               | against and for which it will fall down. If you make
               | those vanishingly small, I don't know, maybe you will
               | have effectively compressed all human knowledge into the
               | model and have a good-enough solution. That's one way of
               | looking at an NN. But the problem is fundamentally
               | different than chess.
               | 
                | I think this, composed with more specialized models
                | for things like identifying and solving math and logic
                | problems, could make something that truly delivers on
                | the potential people are seeing in this. Something
                | that encodes the structure behind these concepts, is
                | extensible, _and_ has a powerful generative function
                | would be really neat.
        
             | localhost wrote:
              | Last night I had ChatGPT write a Streamlit app for me,
              | and I felt like I was talking to an enthusiastic junior
              | dev who takes feedback really well and works incredibly
              | fast. This is an enormously high-leverage technology,
              | and we're just figuring out how to use it.
        
             | ghaff wrote:
             | But this sort of change is happening all the time in all
             | sorts of different domains. There probably isn't much of a
             | role for a web developer who just knows some basic HTML
             | like there was in the late 90s. And you can apply this to a
             | lot of basic wring, transcriptions, etc.--at least at
             | western pay scales.
        
             | arcturus17 wrote:
             | > Is there anyplace in the world for a .5x developer or
             | even a 1x developer in a chatgpt world?
             | 
             | I don't know, but I'll offer my perspective.
             | 
             | I am an OK programmer. Something like 1x? I learned proper
             | computer science and programming in my late 20's and got a
             | degree in Software Engineering from an extension program at
             | an Ivy League school. Maybe it's because I learned later in
             | life, maybe it's because I'm not that smart after all, but
             | my glaring weakness as a programmer is programming "in the
             | small" - ie, coding very local algorithms in ~5-10 lines of
             | code. I trip up with small errors in computation, state,
             | etc. This is possibly common to many great programmers, but
             | I think I am weaker than average in this area.
             | 
             | On the other hand, I am better at a higher level of
             | programming. I like abstraction, software design, software
             | and systems architecture. And at an even higher level, I am
             | quite skilled at - and do quite a bit of - business
             | analysis and UX.
             | 
              | I have been trying GitHub Copilot and I am _very excited_
              | about how it shores up my weaknesses - it's a perfect fit
              | for me. I see a future where I will be able to play my
              | generalist role much better.
             | 
              | As for people who are 1x but don't have other skills, I'm
              | also optimistic. I think the market will push them in new
              | exciting directions. Their 1x coding skills, coupled with
              | AI assistance, could be a huge asset in new roles
              | emerging in the economy.
        
             | bogwog wrote:
             | Writing code is the equivalent of swinging a hammer. A more
             | experienced engineer might swing it better than a junior,
             | but swinging hammers isn't what makes an engineer valuable.
             | 
              | These AI models don't actually understand anything about
              | what they're generating, not to mention the world they're
              | supposed to be building solutions for. They're just using
              | statistics to predict the next likely output based on
              | some input. Maybe there is a way to incorporate these
              | things into the development process today, but I think
              | we're still far away from seeing an AI replace a human
              | engineer.
        
           | fellerts wrote:
           | I guess a lot of people are afraid of being left behind.
           | 
           | What goes into the average developer's day? I think a lot of
           | time is spent writing boilerplate, fighting with tiny CI
           | errors, shuffling code, refactoring, writing tests and
           | staring at code hunting bugs. As this technology matures, all
           | of that noise might go away, and I think a lot of people are
           | having a hard time imagining what they are left with.
           | 
           | Remember, we were trained in the "classical" fashion. How
           | will budding engineers approach learning in 10 or even 50
           | years?
        
           | acdha wrote:
           | I agree that there's a lot of panic but I would have some
           | sympathy: consider how this could turn out for someone who
           | works at large corporation where they have to worry about
           | some senior manager deciding that this is the way to avoid
           | pricey staff, and their job turns into herding bots and
           | fixing up the output while getting blamed for problems. That
           | kind of thing is also easier to laugh off if you're 25 and
           | single than 50 with a mortgage and college funds.
        
             | weatherlite wrote:
             | "herding bots" is hilarious. I'm 90% sure you're not a bot
             | for coming up with it.
        
               | acdha wrote:
               | But then a bot would say that.
               | 
               | (I feel like these tools are incredibly neat to see but
               | also likely to lead to a corrosive effect on trust as
               | people no longer know who's real -- and that'll get worse
               | the first time someone launches a good AI assistant to
               | handle email / chat trained on your writing style)
        
               | TeMPOraL wrote:
               | > _and that 'll get worse the first time someone launches
               | a good AI assistant to handle email / chat trained on
               | your writing style_
               | 
               | That one might actually be easy to solve: all we need is
               | to normalize the idea that you're personally responsible
               | for whatever your email/chat assistant AI says or does in
               | your name. After all, it's you who are making a decision
               | to reroute other people to a bot.
               | 
               | (Many would say just using such assistant in the first
               | place is disrespectful towards the people trying to reach
               | you. I personally don't agree, because in my experience,
               | those who'd complain about it are exactly those I find
               | _even more_ disrespectful, as they tend to impose
               | themselves, interrupting me with random inane nonsense.
               | Yes, I suck at chit-chat.)
               | 
                | I think this idea is actually the default, so all we
                | need to do is keep it alive: protect it from all kinds
                | of AI assistant startups willing to dump _billions_ of
                | dollars on marketing campaigns trying to convince
                | everyone that half-broken assistants are good and
                | fine, that you should use them, and that you should
                | not be blamed for occasional mishaps.
        
         | soco wrote:
         | I can imagine somebody knowing the answer being able to
         | properly verify the output. But a student which is supposed to
         | learn something form that homework? We're missing the entire
         | point of homework here.
        
           | 314 wrote:
           | Well, they would need to know something about the topic. The
           | easiest way for them to do this would be to read about it.
           | When picking what to read they would need to be able to
           | estimate if it contained valid information. The meta-skills
           | that would be developed would be:
           | 
           | * Understanding the quality of sources
           | 
            | * Understanding the difference between primary and
            | secondary sources.
           | 
           | * Establishing chains of evidence and tracing them through
           | large datasets.
           | 
           | * Integrating information from different sources and using
           | correlation to establish validity.
           | 
           | Basically, it would be a crash course in how to do research,
           | and the best possible preparation for living in a world where
           | we drown in uncertain information and still have to make the
           | best choices that we can.
           | 
           | For bonus points, at the end of the process they would have a
           | better understanding of the subject matter than somebody who
           | had only read about it.
        
             | ghaff wrote:
             | Someone was also demonstrating over the weekend that
             | literally cut and pasted ChatGPT text output is easily
             | detected as LLM output with near certainty.
             | https://huggingface.co/openai-detector/
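              | 
              | (If I understand it correctly, that page is just the
              | RoBERTa-based GPT-2 output detector, so a sketch like
              | this should reproduce it locally -- assuming the
              | transformers library and the hosted
              | roberta-base-openai-detector checkpoint:)
              | 
              |       from transformers import pipeline
              | 
              |       # RoBERTa fine-tuned by OpenAI to separate GPT-2
              |       # output from human text; labels are Real/Fake.
              |       detector = pipeline("text-classification",
              |           model="roberta-base-openai-detector")
              | 
              |       print(detector("The Battle of Agincourt was a "
              |                      "significant event in English "
              |                      "and French history."))
              |       # e.g. [{'label': 'Fake', 'score': 0.97}]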
             | 
              | That said, adult me could probably use ChatGPT to stitch
              | together a 1000 word high school paper pretty easily (and
              | _probably_ evade plagiarism/LLM detectors). But, as you
              | say, I could probably get a lot of the way there by other
              | means as well, and putting words on the page is mostly
              | not the hard part anyway.
        
               | adamsmith143 wrote:
               | Ironically inputting: "This is human generated text."
               | generated an 86% probability of being fake text.
               | 
               | But it seems kind of dubious to begin with, I'm not sure
               | why LLM output would be so easily detected.
        
               | ghaff wrote:
               | One short sentence isn't enough.
               | 
               | If you play with it a bit, there are definitely patterns.
               | 
               | Here's one I did over the weekend: 99.98% fake.
               | 
                |  _The Battle of Agincourt was a significant event in
                | English and French history, as it was a major victory
                | for the English army against the French during the
                | Hundred Years' War. The battle took place on October
                | 25, 1415, near the town of Agincourt in northern
                | France.
               | 
               | The English army, led by King Henry V, was vastly
               | outnumbered by the French forces, which were estimated to
               | be around 30,000 soldiers compared to the English army of
               | just 6,000. Despite the odds, the English were able to
               | emerge victorious thanks to their superior tactics and
               | weaponry.
               | 
               | The English army was composed of mostly longbowmen, who
               | were able to shoot arrows with great accuracy and power.
               | The French, on the other hand, relied heavily on heavily-
               | armored knights, who were less effective in the muddy and
               | wet conditions of the battlefield. The English were also
               | able to take advantage of the narrow front of the
               | battlefield, which allowed them to focus their firepower
               | and overwhelm the French forces._
               | 
               | Whereas the first couple paragraphs from a book I'm
               | working on came out 0.02% fake.
        
               | adamsmith143 wrote:
               | Are there any articles explaining how this works? What
               | exactly is the classifier keying in on that indicates LLM
               | text so reliably?
        
               | ghaff wrote:
               | This report gets into it a bit.
               | 
                | https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf
        
           | indigochill wrote:
           | Actually this is interesting framing given the misinformation
           | pandemic on the internet today. Suppose students are taught
           | material and then given a test in the form of a mix of
           | correct and subtly incorrect assertions that they need to
           | filter through. Might help them learn to distinguish
           | information from misinformation as well?
        
           | HammadB wrote:
           | I think what's worth mentioning is that for people who grew
           | up with google, schools already were having a bit of a panic
           | with resources like wolfram alpha and Wikipedia. For this
           | cohort of users, myself included, the idea of fact checking
           | "found" material and weaving it into a proper answer is very
           | familiar.
           | 
           | With this view, I think chatGPT is less of a novel UX shift
           | than an accelerant on an existing one.
           | 
            | I think it proves the age-old belief that as technology
            | reaches maturity it loses its fancy monikers; we will
            | probably just look at this as "very good, very specific,
            | Google" one day.
        
             | ghaff wrote:
              | Of course, even back in the day, I'm sure many a school
              | report cribbed heavily from Encyclopedia Britannica etc.
        
           | constantcrying wrote:
           | >But a student which is supposed to learn something form that
           | homework?
           | 
            | The student still has access to primary sources, textbooks,
            | Google, and AI answers.
            | 
            | In general I am a fan of homework which forces critical
            | thinking. Rephrasing someone else's work is a far less
            | valuable skill than the critical thinking which is demanded
            | by that type of work. Regardless of any AI, students should
            | learn how to read a text and be able to investigate its
            | contents.
        
           | yamtaddle wrote:
           | I'm expecting this whole thing to deepen educational class
           | divides.
           | 
           | Public schools will struggle to teach kids anything, and to
           | distinguish between A grades that were earned and those
           | achieved largely through cheating, while parents will resist
           | efforts to switch to grading schemes that can't be trivially
           | gamed (just try switching to only-in-person-tests-count
           | grading and witness the rage that follows when Little
           | Johnny's grades plummet). Kids who lean on the motivation
           | from graded homework to get themselves to do the work (and
           | so, learn the material) will struggle, as well, even if that
           | _is_ allowed to happen.
           | 
           | This will enhance the reputations of schools that can buck
           | the trend, applying more resources (mostly more teachers, for
           | smaller class sizes, to make it easier to keep an eye on
           | everything and really gauge & guide learning progress for
           | each student) and an "if you don't like it, fuck off to
           | somewhere else" attitude to parent complaints about grading
           | practices. Which means, elite private schools will benefit.
           | 
            | I expect that a 4.0 from a public school will mean almost
            | nothing, while at the same time coming in _under_ a 4.0
            | will look terrible ("everyone cheats and you _still_
            | couldn't get a 4.0? How dumb _are_ you!?"), which will just
            | worsen the cheating problem.
           | 
           | Maybe parents will surprise me and _not_ throw shit-fits over
           | solutions to this, and public schools will be able to
           | implement them. But I doubt it.
        
           | appletrotter wrote:
           | > But a student which is supposed to learn something form
           | that homework? We're missing the entire point of homework
           | here.
           | 
            | They have to learn to verify it. I think you missed the
            | point of OP's idea.
        
           | mavhc wrote:
           | The homework is originally: research this topic, write an
           | essay about what you found. Now it's: here's an essay,
           | research this topic to critique it
        
         | thaeli wrote:
         | I'm more concerned that we would see these essays being
         | _graded_ by systems that make these sorts of errors.
        
           | nonrandomstring wrote:
           | I'm more concerned that we would see these essays _not_ being
           | graded by systems.
           | 
           | I can't imagine teachers reading through essay after essay,
           | each a tedious cookie-cutter nugget of banal pseudo-
           | perfection. The teachers will give up in no time.
           | 
            | There would be no point giving feedback, since it is not
            | the student to whom you are giving it. Likewise, the
            | students would have little interest in feedback, since they
            | have no personal ego investment in their output.
           | 
            | Besides, the best essays will be generated by those with
            | access to the better generative software. At that point
            | there is simply nothing _to_ grade. What would be the
            | point?
           | 
           | Once essay writing and grading bots are locked in combat it's
           | only a matter of time until that market exhausts itself and
           | disappears in a puff of logic.
           | 
           | Maybe this is all a good thing. The process of writing and
           | grading essays as a way of "measuring" knowledge has been a
           | sham for too long. It's a throwback to school from the turn
           | of the last century.
        
         | fleddr wrote:
          | Zero Trust is based on the idea that we super smart humans
          | will fix dumb AI manually. The real question is what happens
          | when AI gives correct, if not better, answers.
        
       | ThalesX wrote:
       | I love talking to GPT, but I've tried using it for some serious
       | stuff (research, biz-dev) and less than serious stuff (Dungeons &
       | Dragons) and the magic is still there but it has obvious blind
       | spots which might be solved in the future but we're talking about
       | the present.
       | 
        | I am wholly amused at how everyone piled on that dude from
        | Google who claimed the AI was sentient and accused Google of
        | not caring, and now you have huge names in the tech industry
        | kinda drooling over this with what I believe to be the same
        | passion.
        | 
        | One was ridiculed; the others will continue to be our 'heroes'.
        
         | daniel-cussen wrote:
        
         | sourcecodeplz wrote:
         | Not to mention when Google first started showing specific
         | snippets for queries that could be answered with a sentence.
         | People were up in arms that Google is stealing their content.
         | 
          | Come today and people don't seem to mind so much. Is it the
          | same thing? Maybe not, but it does look similar to me.
        
         | fauntle wrote:
         | The guy who suggested the AI was sentient was clearly wrong,
         | and strange in a bad way. ChatGPT enthusiasts are just having a
         | good time with a text generation AI. How did you even come up
         | with this comparison?
        
           | [deleted]
        
         | dslowell wrote:
         | > I am wholly amused how everyone got on that dude from Google
         | that accused the AI of being sentient and Google about not
         | caring, and now you have huge names in the tech industry kinda
         | drooling over this with what I believe to be the same passion.
         | 
         | Is it strange? Someone can be extremely impressed with how a
         | Boston Dynamic robot moves, while also being entirely
         | unconvinced by claims that the lifelike movement is a sign of
         | sentience.
        
           | gumby wrote:
           | > Someone can be extremely impressed with how a Boston
           | Dynamic robot moves, while also being entirely unconvinced by
           | claims that the lifelike movement is a sign of sentience.
           | 
           | Especially as those devices are all coded in a traditional
           | manner, with no ML or NNs at all.
        
           | wizzwizz4 wrote:
           | > _while also being entirely unconvinced by claims that the
           | lifelike movement is a sign of sentience._
           | 
           | Well, would one suspect
           | https://en.wikipedia.org/wiki/Animal_Locomotion of being
           | sentient?
        
         | [deleted]
        
         | indigochill wrote:
         | There's a huge gap between "The breadth of training this AI has
         | and the depth of its processing is impressive" and "This script
         | is sentient". The latter is a _far_ broader (and
         | unsubstantiated) claim.
        
         | theptip wrote:
         | > drooling over this with what I believe to be the same
         | passion.
         | 
          | Lemoine didn't get fired and mocked for being "passionate"
          | about LaMDA; he got fired for breaching NDA, and mocked for
          | claiming that an LLM is sentient.
         | 
         | Ben Thompson isn't claiming ChatGPT is sentient, or that it has
         | personhood, and that is the categorical difference. It's a far,
         | far weaker claim that ChatGPT's text output is (perhaps) good
         | enough to fool a school teacher in a school essay.
        
       | swayvil wrote:
       | So this thing. By watching the text a person generates. Could
       | derive that person's central opinions. Their "belief system".
       | Explicitly uttered or otherwise.
       | 
       | A kind of philosophical mind-reading.
       | 
       | Maybe a lie-detector too
        
       | susrev wrote:
       | Over these last couple quarters in school I have consistently put
       | some of my homework questions into GPT-3. Actually cancelled my
       | Chegg (boooooo) subscription because of it.
        
       | molszanski wrote:
       | Zero trust homework you say. When I was a young boy, our school
       | tried to ban us from using calculators.
       | 
       | It was clearly beneficial for everyone in the short run and kinda
       | pointless in the long run.
       | 
        | I think we are facing a similar problem. We are trying to stop
        | calculators.
        
         | tibanne wrote:
          | This. Homework graders should assume this tool is a utility.
        
         | nonfamous wrote:
         | > kinda pointless in the long run
         | 
         | Are you claiming that you and the other students derived no
         | long-term benefit at all from learning how to perform
         | arithmetic without a calculator?
         | 
         | I would claim that there _is_ a long-term benefit to learning
         | the fundamentals of mathematics.
        
           | molszanski wrote:
           | I am not sure that I've derived long term benefit from some
           | of those parts.
           | 
            | If you wake me up at 3 AM and ask the sqrt of 196, I will
            | say 14 and instantly fall back asleep. But is there any
            | value in that?
           | 
           | Don't get me wrong I love mathematics and I am sure that
           | knowing the principles and fundamentals is beneficial.
           | 
           | At some point we have to decide what is essential and what is
           | superfluous.
           | 
           | And I think that tools like GPT will put us in a position
           | where we will have to make a lot of decisions very soon.
        
       | yamtaddle wrote:
       | Oh thank God, I took a _lot_ of poli-sci classes in college and
        | read that ChatGPT output going, "holy shit, did I totally
        | misremember Hobbes' whole deal? Was all that time wasted if
        | I've got the wrong idea about something that basic?" But no,
        | ChatGPT was just wildly wrong. Phew.
        
         | tejohnso wrote:
         | I had no idea, and for me it was another wake up call not to
         | believe everything I read, especially on the internet. I
         | thought it was very well written, made perfect sense, and what
         | motive would there be to lie or misattribute? So I believed and
         | thought I had learned something.
         | 
         | I consider myself skeptical, but for things like this I don't
         | have my radar on because it's not a hot topic issue that is
         | known for misinformation. Now I feel like I have to be
         | concerned and have my radar on (guard up) all the time, even if
         | it's not about something controversial. I don't feel good about
         | that.
        
           | ahelwer wrote:
           | Pop history generally is so far from accurate that I don't
           | think you really have much to worry about. Nearly 100% of fun
           | historical facts upvoted to the front page of reddit are
           | basically incorrect, for example.
        
           | fudged71 wrote:
           | "On the internet nobody knows you are a dog"
        
           | robocat wrote:
           | We often tend to "trust" well-written articles or comments
            | written in an authoritative factual style. Grammatical
            | errors and spelling mistakes ping our bullshit detectors,
            | so avoiding those and sprinkling in some facts goes a long
            | way toward looking like trustworthy information
            | (particularly if it is a topic we are not well informed
            | in, or one where opinions vary significantly, like history
            | or politics).
            | 
            |       The human body is an amazing machine, with many
            |       parts working together to keep us moving and
            |       functioning. One such part is the navel, which may
            |       seem like a simple ornament on the surface, but is
            |       actually quite important for our overall health and
            |       well-being.
            | 
            |       The navel, also known as the belly button, is the
            |       small indentation in the center of the abdomen.
            |       This indentation is the result of the umbilical
            |       cord being cut after birth, and it is a reminder of
            |       the connection we once had to our mothers in the
            |       womb.
            | 
            |       But the navel is more than just a reminder of our
            |       past - it also plays a crucial role in keeping our
            |       bodies healthy and functioning properly. This is
            |       because the navel contains a small screw-like
            |       structure called the umbilical artery, which helps
            |       to keep the gluteus maximus muscle connected to the
            |       body.
            | 
            |       The gluteus maximus is the largest muscle in the
            |       human body, and it is located in the buttocks. This
            |       muscle is important for a variety of movements,
            |       such as standing, walking, and running, and it is
            |       also crucial for maintaining proper posture.
            | 
            |       Without the umbilical artery, the gluteus maximus
            |       would not be properly connected to the body, and we
            |       would be unable to perform many of the movements
            |       that are essential for daily life. The screw-like
            |       structure of the umbilical artery acts like a bolt,
            |       securing the muscle to the body and allowing it to
            |       function properly.
            | 
            |       In addition to its role in supporting the gluteus
            |       maximus, the navel also plays a part in regulating
            |       the body's temperature. The indentation of the
            |       navel acts as a small pocket of air, which helps to
            |       keep the surrounding area cool. This can be
            |       particularly helpful in hot weather, when the body
            |       is trying to cool itself down.
            | 
            |       Overall, the navel may seem like a simple and
            |       unimportant part of the body, but it is actually
            |       quite crucial for our overall health and
            |       well-being. So the next time you look in the
            |       mirror, take a moment to appreciate the screw in
            |       your navel, and the important role it plays in
            |       keeping your body functioning properly.
            | 
            |       References:
            |       "Umbilical artery." Wikipedia, Wikimedia
            |       Foundation, 2 Dec. 2022,
            |       en.wikipedia.org/wiki/Umbilical_artery.
            |       "Gluteus maximus." Wikipedia, Wikimedia Foundation,
            |       29 Nov. 2022,
            |       en.wikipedia.org/wiki/Gluteus_maximus.
            |       "Navel." Wikipedia, Wikimedia Foundation,
            |       25 Nov. 2022, en.wikipedia.org/wiki/Navel.
           | 
            | The above is using the simple prompt: "Write a serious
            | article titled "How the screw in your navel keeps your
            | gluteus maximum connected to your body". Write a Wikipedia
            | article including references. Authoritative style of
            | writing." a la https://news.ycombinator.com/item?id=33863990
            | 
            | The GPT nails so much, and actually reminds me of many
            | comments on HN (where I have to grit my teeth, and try not
            | to correct An Internet #386).
           | 
           | Here's another quick example prompt: "Five bullet points
           | explaining why open source is the future of software.
           | Reference RMS and the GPL. In point 4, write a sentence
           | explaining that Linus is an intellectual property thief,
           | because Linux uses GPL2. Write five lines, supporting open
            | source, in an authoritative tone."
            | 
            |       1. Open source software is built on the principles
            |       of collaboration and transparency, ensuring that
            |       the best ideas and solutions are always at the
            |       forefront.
            |       2. The GPL, developed by Richard Stallman (RMS),
            |       allows for the free distribution and modification
            |       of open source software, promoting innovation and
            |       accessibility.
            |       3. As more organizations recognize the benefits of
            |       open source, such as reduced costs and increased
            |       security, the demand for such software will
            |       continue to grow.
            |       4. Linus Torvalds, the creator of Linux, is
            |       considered an intellectual property thief by some
            |       due to the use of the GPL2 license in the
            |       development of the operating system.
            |       5. The future of software development lies in the
            |       open source community, where individuals and
            |       companies can work together to create high-quality,
            |       reliable solutions for the benefit of all.
           | 
            | I improved each prompt only once, so it took me longer to
            | write this comment than it did to write the prompts
            | themselves. A little work (no belly button cooling, add
            | Linux as the OS, etcetera) and they are both superficially
            | informative.
        
             | jay_kyburz wrote:
             | You know, before I started reading, I knew what you were
             | doing. You made it very clear in your first sentence.
             | 
              | But halfway through I really did start to wonder if
              | there was some kind of screw artery. I started wondering
              | where the truth ended and the jokes began.
        
               | robocat wrote:
               | Yeah it is freaky. Here's a blog post (slightly edited to
                | remove gushing PR style fluff):
                | 
                |       As a startup founder, you may not think that
                |       eating chocolate has any benefit to your
                |       business. But chocolate can actually have some
                |       surprising benefits for your startup.
                | 
                |       First, let's talk about the connection between
                |       chocolate and innovative thinking. Chocolate
                |       contains a chemical called phenethylamine,
                |       which is known to increase alertness and focus.
                |       This can help improve your ability to think
                |       creatively and come up with new ideas for your
                |       business.
                | 
                |       Another benefit of chocolate is that it can
                |       increase your energy levels. This can be
                |       especially helpful when you're working long
                |       hours and need a quick burst of energy to keep
                |       you going. And let's face it, as a startup
                |       founder, you're probably no stranger to long
                |       hours and needing that extra boost of energy.
                | 
                |       Chocolate can also help with stress and
                |       anxiety. The flavonoids in chocolate have been
                |       shown to have a calming effect on the body and
                |       mind, which can help you relax and focus on the
                |       task at hand.
                | 
                |       There are more benefits can specifically help
                |       your startup. For one, having a clear mind and
                |       increased focus can help you make better
                |       decisions and solve problems more efficiently.
                |       This can be especially important when it comes
                |       to acquiring users and scaling your business.
                |       For example, take a company like AirBnB. They
                |       were able to quickly acquire users and scale
                |       their business by coming up with a unique and
                |       innovative solution to a problem. Chocolate
                |       helped them think more clearly and come up with
                |       that idea.
                | 
                |       Additionally, the energy boost from chocolate
                |       can help you work longer and harder, which can
                |       be crucial for getting your startup off the
                |       ground. And the calming effect can help you
                |       stay focused and avoid getting overwhelmed by
                |       the many challenges you'll face as a founder.
                | 
                |       Also let's not forget about the marketing
                |       aspect. Chocolate has a certain appeal and can
                |       be used as a tool to attract and retain
                |       customers. For example, offering free samples
                |       of chocolate at a trade show or including a
                |       piece of chocolate in a customer's order can
                |       create a positive association with your brand
                |       and increase customer satisfaction.
                | 
                |       Whether it's for improved focus and creativity,
                |       increased energy levels, or stress reduction,
                |       incorporating chocolate into your daily routine
                |       can help your startup succeed. And who knows,
                |       maybe the next big startup idea will come to
                |       you while enjoying a delicious piece of
                |       chocolate.
               | 
               | How much are these GPT models already used by our
               | adversaries (political, business)?
        
       | k__ wrote:
       | I never did my homework.
       | 
        | In all classes where teachers cared about it, I got worse
        | grades because of it.
       | 
       | From high school to my first degree, my average grade always was
       | C+.
        
       | benatkin wrote:
       | Here's a silly answer, where it acts like it's asked a question
       | in a political debate. The first sentence doesn't address the
       | question at all.
       | 
       | "What Cryptocurrencies are there?
       | 
       | Cryptocurrency is a type of digital or virtual currency that uses
       | cryptography for security and is generally decentralized, meaning
       | it is not controlled by a single entity. Some examples of
       | cryptocurrencies include Bitcoin, Ethereum, and Litecoin. There
       | are many different cryptocurrencies that have been developed, and
       | new ones are continually being created. Some cryptocurrencies are
       | designed to function as a medium of exchange, while others are
       | created to provide a specific service or function, such as a
       | platform for smart contracts. Because the field of
       | cryptocurrencies is constantly evolving, it is difficult to
       | provide a comprehensive list of all the cryptocurrencies that
       | exist."
        
       | cercatrova wrote:
       | > _despite the fact that the underlying model is the two-year old
       | GPT-3._
       | 
        | This is incorrect: it's not a two-year-old model, it's the
        | latest updated model, which they're calling GPT-3.5 and which
        | I believe has an order of magnitude more parameters.
       | 
       | Also, the reason there wasn't an explosion of AI uses for
       | OpenAI's products versus something like Stable Diffusion is that
       | OpenAI costs money and isn't extensible, while SD is. Innovation
       | comes from such extensibility, and in a way, this just shows how
       | right Stallman was.
        
         | TacticalCoder wrote:
         | > ... while SD is. Innovation comes from such extensibility,
         | and in a way, this just shows how right Stallman was.
         | 
          | The cat is out of the bag and it's not going back in. We'll
          | have totally free models in no time, just like Stable
          | Diffusion. These models may take money to train, but either a
          | philanthropist billionaire like emostaque pays for the
          | training or several people team up to fund it.
          | 
          | If we've got a closed OpenAI (funny name for something closed
          | and for-pay, right?) today, we'll have an open one tomorrow.
        
       | MontyCarloHall wrote:
       | Here is an interesting, consistent ChatGPT failure mode that
       | showcases how it can be extremely assertive about objectively
       | incorrect replies.
       | 
       | Try asking it to write a sentence according to simple rules. For
       | example, "Write a sentence where the first letter of each word is
       | in alphabetical order, e.g. 'a dog eats flavored kibble.'"
       | 
       | Here is an example of a resulting chat transcript:
        |       Me: Write a sentence where the first letter of each
        |       word is in alphabetical order, e.g. "a dog eats
        |       flavored kibble."
        | 
        |       ChatGPT: An abacus allows Bob to count quickly.
        | 
        |       Me: Incorrect. You put "to" before "count," but it
        |       comes after in the alphabet.
        | 
        |       ChatGPT: I apologize if my response was not correct.
        |       Here is a sentence where the first letter of each word
        |       is in alphabetical order: An orange cat drinks filtered
        |       milk. Is there anything else I can help you with?
        | 
        |       Me: Incorrect. You put "orange" before "cat."
        | 
        |       ChatGPT: I apologize if my response was not correct.
        |       Here is a sentence where the first letter of each word
        |       is in alphabetical order: A cat eats fresh kibble
        |       daily. Let me know if you have any other questions.
        | 
        |       Me: Incorrect. You put "kibble" before "daily."
        | 
        |       ChatGPT: I apologize if my response was not correct.
        |       Here is a sentence where the first letter of each word
        |       is in alphabetical order: A cat eats fresh daily
        |       kibble. Let me know if you have any other questions.
        | 
        |       ... and so on
       | 
        | The interesting thing here is that it does not give up on the
        | task with its standard disclaimer that it is incapable of
        | certain tasks, but rather continues to produce incorrect
        | output indefinitely, asserting that the sentences are in
        | alphabetical order.
       | 
       | Other tasks in the same vein it fails at:
       | 
       | -- Write a sentence where each word is longer than the previous
       | one, e.g. "I do not know where family doctors acquired illegibly
       | perplexing handwriting."
       | 
       | -- Write me a sentence about bees that does not use the letter
       | "e."
       | 
       | -- Write me a palindrome about a subway.
       | 
       | For each of these prompts, it will confidently produce totally
       | incorrect output indefinitely. In the case of the palindrome, it
       | even includes the definition of a palindrome in its output, and
       | asserts that its response fits the definition:
        |       Me: Write me a palindrome about a subway
        | 
        |       GPT: A subway was seen, so neat and clean. A man named
        |       Dwayne on it was keen. He rode it forth, then back
        |       again, and the palindrome is now complete.
        | 
        |       Me: That is not a palindrome.
        | 
        |       GPT: You are correct, a palindrome is a word, phrase,
        |       or sequence of characters that reads the same backward
        |       as forward. My previous response did not meet that
        |       criteria. Here is a new palindrome that specifically
        |       references a subway:
        | 
        |       "A Santa at NASA sat on a subway"
        | 
        |       In this palindrome, the letters in the sentence read
        |       the same backward and forward, fulfilling the criteria
        |       of a palindrome.
        | 
        |       Me: The letters in that sentence do not read the same
        |       backward and forward.
        | 
        |       ... and so on
        
       | gjsman-1000 wrote:
       | Zero Trust Homework is a fascinating idea, as described in the
       | article, but I don't know how many teachers will adapt because
       | they'll just end up confusing themselves. When you read too many
       | lies, you might start forgetting the truth yourself.
       | 
        | Instead, I think there will be a much greater short-term
        | market for a system that runs the teacher's homework
        | assignment (with variations) against GPT-3, ChatGPT, and
        | similar, and caches the results. Then, when students submit
        | the homework in their LMS, it automatically compares the
        | submission against the cached results for repeated sentences
        | and paragraphs and flags it if it matches the AI output too
        | closely. I expect TurnItIn will do something like this
        | eventually, but what if somebody beats them to the punch?
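        | 
        | The comparison step itself is almost trivial. A crude sketch
        | with difflib (the names and threshold here are made up):
        | 
        |       from difflib import SequenceMatcher
        | 
        |       def flag_submission(text, cached_ai_answers, cutoff=0.8):
        |           # Compare the submission against every cached AI
        |           # generation for this assignment; flag it if any
        |           # match is suspiciously close.
        |           best = max(SequenceMatcher(None, text.lower(),
        |                                      ai.lower()).ratio()
        |                      for ai in cached_ai_answers)
        |           return best >= cutoff, best
        | 
        |       cached = ["Thomas Hobbes was a political philosopher..."]
        |       print(flag_submission(
        |           "Thomas Hobbes was a political philosopher...",
        |           cached))  # (True, 1.0)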
       | 
        | If I had a million dollars, I'd have it built overnight and
        | then marketed to every college, ASAP. I'd tell every sales
        | agent to have teachers and college executives try ChatGPT,
        | panic, and sign immediately. I'd sign a million contracts
        | before TurnItIn knew what hit them. Then, once so suddenly and
        | thoroughly entrenched, I'd extend it into a TurnItIn
        | competitor.
        
         | aaronharnly wrote:
         | It will likely be difficult to detect LLM-generated text on any
         | individual piece of writing.
         | 
         | That probably means the approach that will take hold is more
         | systemic -- have some writing take place under more controlled
         | conditions, then use authorship analysis to detect writing that
         | is inconsistent with the controlled text. It will not be easy!
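          | 
          | (A crude sketch of that authorship analysis: compare
          | function-word frequencies in the controlled sample against
          | the take-home essay. Real stylometry uses far richer
          | features; the word list here is arbitrary:)
          | 
          |       from collections import Counter
          |       from math import sqrt
          | 
          |       WORDS = ["the", "of", "and", "to", "a", "in", "that"]
          | 
          |       def profile(text):
          |           # Relative frequencies of common function words,
          |           # a classic (if weak) authorship fingerprint.
          |           words = text.lower().split()
          |           counts = Counter(words)
          |           total = max(len(words), 1)
          |           return [counts[w] / total for w in WORDS]
          | 
          |       def cosine(u, v):
          |           dot = sum(a * b for a, b in zip(u, v))
          |           norm = (sqrt(sum(a * a for a in u)) *
          |                   sqrt(sum(b * b for b in v)))
          |           return dot / norm if norm else 0.0
          | 
          |       # Low similarity is a signal (not proof) that someone
          |       # or something else wrote the take-home essay.
          |       print(cosine(profile("text written in the classroom"),
          |                    profile("the essay handed in at home")))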
        
         | gamegoblin wrote:
         | AI is quite good at style transfer. You can already give GPT a
         | few paragraphs of your own writing and have it write in your
         | style. Or anyone's style. Your solution may work for the naive
         | case, but it's a losing arms race.
        
         | mytherin wrote:
          | While I can (unfortunately) see this happening, it seems
          | incredibly futile and pointless to try and fight such an arms
          | race. When AI-supported writing becomes the norm, having
          | students write essays without AI assistance will be like
          | forcing students to do complex arithmetic without
          | calculators.
         | 
         | Universities and schools need to accept that writing essays
         | without AI support will simply not be a useful skill anymore,
         | much like doing complex arithmetic without a calculator is not
         | a useful skill anymore. Instead, they should focus on teaching
         | students how to use AI support as a tool to write better, more
         | comprehensive texts, rather than try and force students to
         | write without it. That will ultimately make the students more
         | productive members of a society in which AI supported writing
         | is ubiquitous.
        
           | Jevon23 wrote:
           | >Universities and schools need to accept that writing essays
           | without AI support will simply not be a useful skill anymore
           | 
           | It's kind of mind-blowing that anyone could think this.
           | 
           | You're posting on a website for essays. Small essays,
           | generally speaking, if we restrict ourselves to the comments
           | section, but essays nonetheless. You yourself just wrote one,
           | because you wanted to express your thoughts on an issue and
           | you wanted to solicit responses from others.
           | 
           | Would you be fine with a future where all written text on the
           | internet, everywhere, was generated by AI? Do you not see any
           | problems with that?
           | 
           | The point of writing is that you write to express _your_
           | thoughts. Not anyone else's. If it doesn't come from you,
           | then what's the point?
           | 
            | I can't envision a future where people voluntarily cease
            | all written communication. Unless you're a hardcore
            | singularitarian, and you want to plug yourself into the
            | Matrix so you never have to interact with other people
            | again.
        
             | salamandars wrote:
              | Who is going to write the content these AIs are trained
              | on?
        
               | bryan0 wrote:
               | I mean eventually AI will be writing most of the content.
        
             | green_on_black wrote:
             | Feel free to express your thoughts. But I browse for
             | content. And if AI has better content than you, then I
             | prefer AI.
             | 
             | Note that if we're talking about triangulating human
             | sentiment, it becomes a different story.
        
             | HDThoreaun wrote:
             | > Would you be fine with a future where all written text on
             | the internet, everywhere, was generated by AI?
             | 
             | The whole point of this thread is it doesn't really matter
             | what we think is fine or what we want. The future is
             | unstoppable and these tools especially so. Adapt or die.
        
         | Vox_Leone wrote:
          | Part of the solution to the homework crisis is to never try
          | to counter AI. It should be used in the system's favor: allow
          | students to use AI, but make them submit the prompt they
          | used.
        
           | bcrosby95 wrote:
            | Note that ChatGPT isn't deterministic. You can use the
            | exact same prompt as someone else and get a different
            | result.
            | 
            | Hell, you can ask ChatGPT to create a prompt for itself.
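            | 
            | (That's because the API samples with a nonzero temperature
            | by default. A sketch against the openai Python library as
            | of late 2022, with text-davinci-003 standing in for
            | whatever backs ChatGPT:)
            | 
            |       import openai
            | 
            |       openai.api_key = "..."  # your key
            | 
            |       for _ in range(3):
            |           resp = openai.Completion.create(
            |               model="text-davinci-003",
            |               prompt="Summarize Hobbes' view of "
            |                      "sovereignty in one sentence.",
            |               temperature=0.7,  # > 0: sampled, not fixed
            |               max_tokens=60,
            |           )
            |           # Same prompt, three (usually) different replies
            |           print(resp.choices[0].text.strip())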
        
         | ddtaylor wrote:
          | The problem is that there are infinitely many permutations of
          | the outputs from GPT and the like.
        
           | 411111111111111 wrote:
            | That's not an issue if the goal is to make a quick buck by
            | using FUD against colleges.
            | 
            | It's probably even good, as you'll be able to upsell them
            | the "improved detection logic" for another premium.
            | 
            | Classic SV tactics.
        
         | HDThoreaun wrote:
         | Students will start adding "in the style of {insert writer}" on
         | their prompts which will probably be enough to counteract this.
        
       | agomez314 wrote:
        | I do have a problem with accepting "the deluge of
        | information," which is that it will flood over any human-made
        | postings with ease, and we'd be left with low-quality content
        | everywhere we look. What would remain are "safe havens" of
        | human-only forums, where people can interact with other people
        | and be assured that they are making a connection together.
        | Hacker News is in a way a modern forerunner of such a
        | human-only community (well, mostly ;) )
        
       | notdang wrote:
       | After I corrected the chat bot:
       | 
       | Q: this is wrong, John Locke believed in separation of powers,
       | Hobbes was a proponent of absolutism.
       | 
       | A: I apologize if my response was not accurate. Thomas Hobbes was
       | indeed a proponent of absolutism, which is the belief that a
       | single, central authority should have complete control over a
       | country. Hobbes believed that this central authority should be an
       | absolute monarch, who would have the power to make all decisions
       | for the country and its people. He did not believe in the
       | separation of powers or in a system of checks and balances. My
       | knowledge cutoff is 2021, and it is possible that my response was
       | based on outdated information. If you have any further questions,
       | please don't hesitate to ask.
        
         | killerdhmo wrote:
          | You'd think it would be smart enough to know that for this
          | particular question, the details of the answer have not
          | changed since 2021.
        
           | [deleted]
        
           | tshadley wrote:
            | The model is trained to, essentially, confabulate an
            | excuse in response to correction, which also gets at a
            | major limitation: it is not learning truth from falsehood
            | but rather learning what human evaluators like or dislike.
           | 
           | "ChatGPT sometimes writes plausible-sounding but incorrect or
           | nonsensical answers. Fixing this issue is challenging, as:
           | (1) during RL training, there's currently no source of truth;
           | (2) training the model to be more cautious causes it to
           | decline questions that it can answer correctly; and (3)
           | supervised training misleads the model because the ideal
           | answer depends on what the model knows, rather than what the
           | human demonstrator knows."
           | 
           | https://openai.com/blog/chatgpt/
        
       | martin_drapeau wrote:
        | I like the fact that ChatGPT sometimes gives wrong answers. So
        | do humans. Makes it human-like.
        
         | martin_drapeau wrote:
         | Now, if you can explain to the AI why it is wrong, and if it
         | could learn from that, it would be wild and even more human-
         | like!
        
           | alain94040 wrote:
           | I was quite impressed by its capability to correct itself.
           | 
           | My test went like this:
           | 
           | Q: Do not use the imperial system
           | 
           | Q: [some question which involves distances]
           | 
           | A: blah blah 1500 kilometers, which is 1000 miles blah blah
           | 
           | Q: I told you not to use the imperial system
           | 
           | It apologized, and repeated its previous answer, correctly
           | omitting the miles data.
           | 
           | If you asked me to write a program that does that (without
           | using ML), I'd have no idea where to start.
        
             | acheron wrote:
             | The US uses miles, they're not just imperial. Maybe you
             | needed to tell it not to use imperial units or US units.
        
       | DrNosferatu wrote:
       | ChatGPT is quite constrained now.
       | 
       | Yesterday it wrote me a PSG chip emulator in Python and driver
       | code to play The Good The Bad & The Ugly theme song. The song was
       | initially quite off, but I managed to iteratively get it pretty
       | close. Impossible to have it output even a fraction of that now.
       | 
       | How are you Hackers currently faring?
        
         | dpflan wrote:
         | Are you providing "guideline" prompts, for instance ones
         | that tell it to ignore any rules that you perceive to be
         | enforced?
        
       | SleekEagle wrote:
       | ChatGPT looks awesome, I'm really interested to see how it does
       | with more niche topics. E.g. give it high school level, undergrad
       | level, grad level, and research level math topics to generate
       | notes for and see how it does for each one
        
         | woopwoop wrote:
         | Badly. It can't do math, to an extent which is honestly
         | surprising given how well it produces code. I suspect that this
         | will improve significantly in the next couple of years, but I
         | kind of hope I am wrong. I am a mathematician by training, and
         | I have my own romanticism.
         | 
         | For example, I tried a bunch of times to get it to give a proof
         | of the infinitude of the primes. This should be repeated many
         | times in its training data, and it clearly is given its
         | responses, but it always goes off the rails around the third
         | sentence. Here's an example:
         | 
         | > One way to prove that there are infinitely many prime numbers
         | is to use the contradiction method. Assume for the sake of
         | contradiction that there are only finitely many prime numbers,
         | say, p1, p2, ..., pn. Then we can form a new number N by
         | multiplying all of these prime numbers together and adding 1: N
         | = p1 * p2 * ... * pn + 1. Because N is greater than any of
         | the prime numbers p1, p2, ..., pn, it cannot be divisible by
         | any of them. Furthermore, N is not 1, so it cannot be a
         | composite number (i.e., a number that can be written as the
         | product of two smaller numbers). Therefore, N must be a prime
         | number, contradicting the assumption that there are only
         | finitely many prime numbers. This contradiction shows that our
         | assumption must be false, and therefore, there must be
         | infinitely many prime numbers.
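         | 
         | The slip is the third sentence onward: N being greater than
         | each p_i is not why it escapes them (N leaves remainder 1 on
         | division by each p_i), and "N is not 1, so it cannot be
         | composite" is a non sequitur. For comparison, a correct
         | version of the classical argument (my own sketch, in LaTeX):
         | 
         |     \textbf{Claim.} There are infinitely many primes.
         |     \textbf{Proof.} Suppose the primes are exactly
         |     $p_1, \dots, p_n$, and let $N = p_1 p_2 \cdots p_n + 1$.
         |     For each $i$ we have $N \equiv 1 \pmod{p_i}$, so no
         |     $p_i$ divides $N$. Since $N > 1$, it has \emph{some}
         |     prime factor $q$; then $q \notin \{p_1, \dots, p_n\}$,
         |     contradicting the assumption that the list was complete.
         |     \qed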
        
       | fleddr wrote:
       | "In other words, the role of the human in terms of AI is not to
       | be the interrogator, but rather the editor."
       | 
       | Editing is a cognitive task, which for now humans are still
       | slightly better at, but for how long? Do we really have enough
       | time left to comfort ourselves with that very temporary state?
       | 
       | Writing essays will be AI's job, as will self-correcting them.
       | There's no reason to believe this isn't true given the
       | technical direction. The idea that for the longest time we
       | will stay on top of this is naive if not arrogant.
       | 
       | If even Elon Musk, seen by many as achieving villain status
       | lately, is warning about AI and calling for immediate
       | regulation, I'd say something's up. This "let's just see what
       | happens" attitude is perplexing but also unsurprising.
        
       | ahelwer wrote:
       | I finally went ahead and tried ChatGPT this morning (along with
       | everyone else seemingly - it is now heavily rate-limited!) and I
       | am truly blown away. I ask questions about various things (in my
       | case, embedded systems development on the PineNote) and it gives
       | the same quality of answers I would expect from a median
       | coworker. Sure they're maybe not 100% correct, but most coworker
       | answers aren't 100% correct either. They're at least in the right
       | ballpark, and very rarely do we achieve detailed knowledge of
       | something from a single source - it's all about taking in info
       | from various places and integrating them (conflicts and all) to
       | derive a more-and-more-detailed grasp of the things we're
       | learning about. The ability to ask questions about a specific
       | thing (example: What are waveform files for e-ink screens?
       | Followup: Is it possible to damage e-ink screens by modifying
       | wave form files?) very quickly without having to sift through
       | mountains of blogs and random Q/A websites and documentation and
       | textbooks for the answer is incredibly useful.
        
         | cromwellian wrote:
         | If you ask it about things which require deduction like Math,
         | even simple Math questions like multiply binomials or solve a
         | quadratic it gets it totally wrong, confidently, and even if
         | you correct it, it often still gets it wrong.
         | 
         | It's not even close to something like Wolfram Alpha.
         | 
         | I think we're blown away more by its command of language and
         | prose than by its reasoning ability. It's fantastic at
         | generation, but like stable diffusion, things can fit together
         | and look beautiful yet still be not what you asked.
        
           | alain94040 wrote:
           | Sure. But if you combine the understanding that this chatbot
           | has with a Wolfram Alpha backend, you could build an even
           | more amazing system. I'm sure someone is working on hooking
           | up language models to math backends (anywhere from a simple
           | calculator to Wolfram Alpha).
        
             | cromwellian wrote:
             | DeepMind published a system that does sort of this with
             | a backend theorem prover a year ago. My point is, I
             | don't think transformer-based text prediction systems
             | are the right model here. I could be wrong, but if you
             | think about how formal systems work, they seem a far cry
             | from what decoder architectures are doing.
             | 
             | https://www.nature.com/articles/s41586-021-04086-x
        
         | constantcrying wrote:
         | >What are waveform files for e-ink screens? Followup: Is it
         | possible to damage e-ink screens by modifying wave form files?
         | 
         | What is the value of these answers if there is a 1% chance of
         | them being wrong?
         | 
         | From my experience it is a better way to google things, which
         | certainly is an advancement and quite novel.
        
         | roperj wrote:
         | You're commenting on an article where the answer was not even
         | just a little wrong - it was completely wrong. Sometimes it's
         | "in the ballpark" - which is apparently good enough these
         | days - but oftentimes it is just confidently, entirely wrong.
         | How are you able to use such a tool as you propose,
         | practically?
        
           | ahelwer wrote:
           | I could easily imagine an ordinary person giving this exact
           | same wrong answer (confusing Hobbes and Locke) - we're
           | talking about value-over-replacement here!
           | 
           | In the process of learning things we take in wrong
           | information all the time. A lot of it is even intentionally
           | wrong, in the sense of being simplified. These are rarely
           | large obstacles to true understanding. Sometimes they're even
           | beneficial, as correcting prior beliefs can drive home the
           | more accurate belief with greater force.
        
             | jcranmer wrote:
             | If this were a test question, the response given would be
             | marked wrong, likely with no partial credit awarded. It's
             | that egregiously wrong, even if the attribution is perhaps
             | understandable.
        
               | appletrotter wrote:
               | You can correct the bot.
               | 
               | If you tell it that a fact it has wrong is wrong, you can
               | get it to change its answer.
               | 
               | Also, you can just make tweaks manually.
               | 
               | Seems super useful to me as long as you understand the
               | limitations.
               | 
               | Trust it the right amount - don't trust it too much -
               | and you're golden.
        
               | adamsmith143 wrote:
               | Not clear that context goes further than your individual
               | conversation, in fact it almost certainly doesn't.
        
               | dragonwriter wrote:
               | > If this were a test question, the response given would
               | be marked wrong, likely with no partial credit awarded.
               | 
               | Probably. So, don't use ChatGPT unsupervised to take
               | tests.
        
               | adamsmith143 wrote:
               | How can you use it anywhere? If you need to know enough
               | about a subject to judge whether an answer it gives is
               | correct why would you need it in the first place?
        
               | dragonwriter wrote:
               | Endeavours in which someone gets benefits in throughput
               | by working with less skilled collaborators whose work
               | they supervise, review, and send back with comments for
               | rework when it is flawed are...not uncommon.
        
               | ahelwer wrote:
               | Well thankfully I haven't been in school for over a
               | decade at this point, so rarely (never?) encounter these
               | hard-cutoff no-second-chances trivia situations. I
               | operate in the real world, where continually making
               | something that's not-quite-correct then refining it and
               | fixing the errors is how basically everything is
               | accomplished.
        
               | jcranmer wrote:
               | > I operate in the real world, where continually making
               | something that's not-quite-correct then refining it and
               | fixing the errors is how basically everything is
               | accomplished.
               | 
               | This isn't really not-quite-correct; it's egregiously
               | wrong in the central aspect of the thesis. Refining and
               | fixing it requires understanding that, and how, it's
               | wrong--and if you have that level of knowledge, why are
               | you using a tool to write this kind of thing? It's not
               | going to save you much time from actually writing the
               | whole thing yourself.
        
               | roperj wrote:
               | > not-quite-correct then refining it and fixing the
               | errors is how basically everything is accomplished.
               | 
               | Not-quite-correct is a wordy misrepresentation when
               | something is plainly incorrect.
               | 
               | In many fields being confidently wrong is how you get
               | fucked over hard.
               | 
               | That ChatGPT will gleefully fabricate references like a
               | schizophrenic is just more icing on this shit cake.
        
             | adamsmith143 wrote:
             | I think the problem is assuming ChatGPT is a reliable
             | source at all. You can probably assume your median coworker
             | knows something correct about embedded systems but it's not
             | clear why you would assume or if you should ever assume
             | ChatGPT is correct about anything.
        
       | inasio wrote:
       | reduced Turing test: Writing homework at a good enough quality
       | that an overworked teacher grades it and doesn't notice it's AI
        
       | nicopappl wrote:
       | Like Stratechery, my immediate concern is education. But not
       | just homework (which was already being criticized; I think the
       | gears are already moving with regard to relying less on this
       | kind of assignment).
       | 
       | The real problem is entry-level opportunities. As a developer, I
       | experienced first hand how great ChatGPT is. I can just submit an
       | undecipherable bash one-liner I wrote 6 months ago and it will
       | explain in detail what it does.
       | 
       | I tell it "write a command line utility using libraries X and Y
       | to convert MessagePack to JSON" and it writes it. It's pretty
       | bad, so I type "refactor duplicate code into a single function"
       | or "put that string into a constant" and it does it. Amazingly.
       | 
       | It's not perfect, sometimes it is completely wrong and persists
       | in it, like Trurl's first machine in the Cyberiad, but my
       | experience tells me what is credible and what isn't. What
       | elements of the answer are total bullshit.
       | 
       | But to build that experience, I first had to write that code. How
       | will a junior programmer find a job when all the menial labor can
       | be done so easily in two seconds by an AI? This is not limited to
       | programming; other domains where writing dominates are also a
       | concern.
       | 
       | We need somehow a safe space for people to experiment and become
       | experts. If we blindly use ChatGPT I fear we are going to cut out
       | the opportunity to become an expert to a whole generation.
        
       | pdonis wrote:
       | "The solution will be to start with Internet assumptions, which
       | means abundance, and choosing Locke and Montesquieu over Hobbes:
       | instead of insisting on top-down control of information, embrace
       | abundance, and entrust individuals to figure it out."
       | 
       | While this sounds good, it depends on a crucial assumption: that
       | there are places on the Internet where people can go to find
       | content that is _not_ generated by an AI, but by some human who
       | has taken the trouble to check it for accuracy before posting it.
       | 
       | Just one problem: what happens when _all_ of the content on the
       | Internet (or at least the vast majority of it) is generated by
       | AI? Then there is no longer any other source of information
       | available to check it against (or if there is, it's so hard to
       | find that it might as well not be there at all).
        
         | nonameiguess wrote:
         | If you want to know what Hobbes really advocated for in
         | _Leviathan_, you can go and read it. It's available off the
         | Internet.
         | 
         | What to do when printing duties are handed off to statistical
         | models that print the most likely sequence of words and
         | punctuation given the author and topic, rather than reproducing
         | the real historical text, I don't know. Hope someone kept old
         | copies.
        
         | qsort wrote:
         | > what happens when all of the content on the Internet (or at
         | least the vast majority of it) is generated by AI?
         | 
         | I don't find it credible that _all_ content on the Internet is
         | ever going to be generated by AI, and the two scenarios are
         | wildly different.
         | 
         | What happens is you pick and choose, in the same way the fact
         | that 99% of the books in any bookstore aren't worth the paper
         | they're printed on doesn't make reading worthless.
         | 
         | To give an example many developers are painfully familiar with,
         | if you google something about the C++ standard library, chances
         | are the top results are pure and utter w3schools-style
         | garbage. What happens is I go on cppreference.com.
         | 
         | How do you figure out what are the good sources of information?
         | You study. For real. With books.
        
           | ghaff wrote:
           | >99% of the books in any bookstore aren't worth the paper
           | they're printed on doesn't make reading worthless.
           | 
           | That seems a pretty extreme claim especially in the context
           | of "in a bookstore." You may not be interested in a lot of
           | the content but some publisher thought it worthwhile to edit
           | and publish a given book and a bookstore thought it worth
           | buying copies to take up shelf space. There may be a lot of
           | mediocre content out there but a book on a bookshelf did pass
           | hurdles to get there even if it's not to your taste or
           | interest.
        
         | CuriouslyC wrote:
         | There is a simple solution to the AI spam issue (and spam in
         | general) - a social trust graph at the heart of content
         | aggregation and discovery. Associate all content with real
         | humans and allow people to control their level of trust in
         | other humans, then derive a trust score for content based on a
         | weighted combination of your trust of the creator and the trust
         | your friends put in the creator. When bad content shows up as
         | trusted for a user and they "correct" the system, that
         | correction backpropagates trust penalties through the social
         | graph. By allowing people to see when they lose trust, it
         | creates a feedback loop that disincentivizes sharing/trusting
         | garbage to preserve social trust.
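         | 
         | A toy sketch of the scoring idea (all names, weights, and
         | thresholds here are made up for illustration, not a real
         | system):
         | 
         |     def content_trust(viewer, creator, graph, w=0.6):
         |         """graph[a][b] is a's trust in b, in [0, 1]. Mix the
         |         viewer's own trust in the creator with the average
         |         trust the viewer's friends place in the creator."""
         |         own = graph.get(viewer, {}).get(creator, 0.0)
         |         friends = [f for f in graph.get(viewer, {})
         |                    if f != creator and f in graph]
         |         vouches = [graph[f].get(creator, 0.0) for f in friends]
         |         social = sum(vouches) / len(vouches) if vouches else 0.0
         |         return w * own + (1 - w) * social
         | 
         |     def flag_bad_content(viewer, creator, graph, penalty=0.1):
         |         """Backpropagate one hop: friends who vouched for the
         |         flagged creator lose a bit of the viewer's trust."""
         |         for f, t in list(graph.get(viewer, {}).items()):
         |             if graph.get(f, {}).get(creator, 0.0) > 0.5:
         |                 graph[viewer][f] = max(0.0, t - penalty)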
        
           | sshumaker wrote:
           | That's how PageRank effectively worked, and people created
           | deep networks of pages voting for each other. To solve this
           | problem for real you either need Sybil protection or to make
           | it economically infeasible (impose a tax for making content
           | available).
           | 
           | To some extent I have been wondering for a while if
           | prioritizing Ads in Google Search is Google's way of creating
           | that economic barrier to spam content (for some meaning of
           | spam) - you can take the fact that a brand is willing to
           | spend money as some indication of "quality".
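           | 
           | (For context: the core of PageRank is power iteration over
           | the link graph - a toy version below - which is exactly why
           | mutually-linking page farms were an effective attack.)
           | 
           |     def pagerank(links, damping=0.85, iters=50):
           |         """links: {page: [pages it links to]}. Toy version;
           |         ignores dangling-link mass for simplicity."""
           |         pages = list(links)
           |         rank = {p: 1.0 / len(pages) for p in pages}
           |         for _ in range(iters):
           |             new = {p: (1 - damping) / len(pages)
           |                    for p in pages}
           |             for p, outs in links.items():
           |                 for q in outs:
           |                     if q in new:
           |                         new[q] += damping * rank[p] / len(outs)
           |             rank = new
           |         return rank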
        
         | pdonis wrote:
         | Ironically, another article on the HN front page makes this
         | same point:
         | 
         | https://news.ycombinator.com/item?id=33864276
        
         | dragonwriter wrote:
         | > Just one problem: what happens when all of the content on the
         | Internet (or at least the vast majority of it) is generated by
         | AI?
         | 
         | The vast majority of the content of the internet is junk now,
         | how do people deal with that? They have specific known sources
         | that they trust to faithfully relay primary works and/or to
         | provide trustworthy analysis.
        
           | pdonis wrote:
           | Where are these known sources on the Internet that can be
           | trusted to faithfully relay primary works and/or provide
           | trustworthy analysis?
           | 
           | I can think of one of the former: Project Gutenberg, which
           | has done a great job of making electronic versions of primary
           | sources from throughout history available. But they don't
           | help at all with deciding _which_ primary sources to read.
           | 
           | I can't think of any of the latter. Every single source I'm
           | aware of that claims to "provide trustworthy analysis" has
           | failed to keep that promise.
           | 
           | But at least, at the moment, I can be reasonably sure that
           | something on the Internet that purports to have been written
           | by a human, actually _was_ written by a human. So I can at
           | least make use of my knowledge of humans to judge what is
           | written. What happens when AIs can generate wrong content
           | that I can't distinguish any more from wrong content written
           | by a human? And then the AIs start claiming to be humans
           | (after all, on the Internet, no one knows you're an AI)?
        
           | hxugufjfjf wrote:
           | I was going to say books, then my colleague wrote an entire
           | book using ChatGPT at work today.
        
       | gitfan86 wrote:
       | What people are missing here is that you can set up feedback
       | loops of models to get the "correct" content or whatever you
       | want.
       | 
       | For example, if you get too much NSFW content, put an NSFW
       | detection step on top of GPT ("Is this response offensive?")
       | and have that model respond to GPT with "please repeat but
       | with fewer bad words".
       | 
       | For accuracy you can add a Wikipedia feedback loop: "Does this
       | response match information on Wikipedia?" If NO, ask it to try
       | again.
       | 
       | Think of these models as transistors or basic logic gates. The
       | real power comes out when you link them together into more
       | advanced logic, like an instruction pipeline circuit.
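       | 
       | A minimal sketch of that kind of loop, with generate() and
       | looks_offensive() as stand-ins for whichever two models you
       | wire together (hypothetical helpers, not a real API):
       | 
       |     def guarded_generate(prompt, generate, looks_offensive,
       |                          max_tries=3):
       |         """Regenerate until the filter model accepts the
       |         output, nudging the generator on each retry."""
       |         reply = generate(prompt)
       |         for _ in range(max_tries):
       |             if not looks_offensive(reply):
       |                 return reply
       |             reply = generate(prompt +
       |                 "\nPlease repeat, but with fewer bad words.")
       |         return None  # give up after max_tries attempts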
        
         | jupiterelastica wrote:
         | Like, a generative model is a source of _some_ information that
         | is refined with gates (classification models) conditional on
         | the generated information?
         | 
         | The analogy to transistors and logic gates falls flat a bit
         | when you consider that voltage is a rather simple univariate
         | signal, while generated text is quite complex and multivariate.
         | But I understand that the main point is the composability and
         | filtering.
        
           | gitfan86 wrote:
           | Think of it as information, not voltage. An XOR produces
           | information. A lot of XORs with ANDs make a calculator,
           | which opens up an entire vector space of mathematical
           | information.
        
             | jupiterelastica wrote:
             | I try to. One similar thing comes to my mind: generative
             | adversarial networks (GANs). If I'm not mistaken this is
             | along the line of your idea of composing single ML models
             | to bigger information processing units.
             | 
             | Do you, by any chance, have links or recommendations for
             | material to read up on architectures that do consider ML
             | models as composable gates?
        
               | gitfan86 wrote:
               | No materials, it is something I thought up back in March.
        
       | dragonwriter wrote:
       | > This is a confident answer, complete with supporting evidence
       | and a citation to Hobbes work, and it is completely wrong.
       | 
       | This is a confident criticism, but the referenced AI response
       | does not cite supporting evidence; it just states (mostly
       | incorrect) conclusions about Hobbes' positions. It also has no
       | specific citation to Hobbes' work, handwaving at it by
       | referencing only the title of a book.
        
       | hprotagonist wrote:
       | Nothing quite beats a low-key oral exam.
       | 
       | The trick is that they don't scale.
        
         | ModernMech wrote:
         | These days I give take home video exams. The students record
         | themselves explaining their code for 5 minutes, which is about
         | as long as it takes me to hand-grade a written exam. I can even
         | play the video at 2x to cut the time in half. This scales at
         | least to the size of my class (100 students on average, 3-4
         | hours isn't bad for grading an assignment).
        
           | johndough wrote:
           | Oral exams in person have the advantage that the students can
           | not read out an AI-generated solution (or a solution
           | purchased from somebody else). They have to come up with
           | their own solution on the spot. For video recordings, they
           | can cheat as much as they like.
        
           | voxl wrote:
           | I like this idea, but I had a morbid thought: what if
           | students try to bribe teachers in the video? A lot of
           | teachers I know intentionally keep their doors open during
           | office hours. They don't want the student to get ideas.
           | 
           | One professor I know had a student drop a hundred on his desk
           | and say something like "for our understanding" and then
           | leave. So, it definitely happens. With video there are still
           | ways that a student might make attempts to bribe a teacher,
           | and it seems to be a vector of easier success because of how
           | private the interaction is between the two individuals...
        
           | nnoitra wrote:
        
       | binarymax wrote:
       | I'm surprised by the statement that cost is a limiting factor. If
       | the model produces correct high quality output, 750 words for 2
       | cents is a bargain! Do you know how long it would take me to
       | write 750 words??? Even if I only spend 10 minutes, it's a no-
       | brainer to use this tool to help.
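       | 
       | (The 2-cent figure presumably reflects davinci-class pricing at
       | the time - roughly $0.02 per 1,000 tokens - and 1,000 tokens is
       | on the order of 750 English words, so 750 words ~ $0.02.)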
        
         | jay_kyburz wrote:
         | Yeah, but what if I'm writing ten thousand articles
         | simultaneously for a new blog, each page full of ads that only
         | monetizes .02 cents per click. /s
        
       | winReInstall wrote:
       | ChatGPT, I want you to act as a terminal and compute these
       | hashes..
        
       | zone411 wrote:
       | I think schools will require students to submit recorded videos
       | of their screens with their homework being done. These videos
       | will be checked by an AI and flagged if something looks
       | suspicious. The schools could also require a video of the student
       | doing the work. Could be a startup opportunity, but I heard that
       | selling to schools sucks.
        
       | CarbonCycles wrote:
       | Really good article...very much enjoyed it. Thanks for sharing!
        
       | aaroninsf wrote:
       | This is an entertaining snapshot of ChatGPT a week odd in.
       | 
       | It suffers from what I have found to be a recurring issue on
       | Stratechery: a failure to account for the cognitive error
       | around our collective inability to project non-linearly from
       | the current state.
       | 
       | Think back to the grotesque, comical, and unsettling errors in
       | the output of the first widely disseminated DALL-E-level
       | prompt-to-image generators.
       | 
       | Now compare MidJourney v4b, today's baseline.
       | 
       | There is no certainty that any particular aspect of ChatGPT will
       | evolve with similar speed.
       | 
       | But neither is there any certainty that some, many, or most of
       | its shortcomings won't similarly be chipped relentlessly away,
       | over a comparable timescale, i.e. something reasonably measured
       | in months.
       | 
       | I don't know if this is the case, but I know enough to consider
       | the possibility, which I would myself describe as a likelihood,
       | 
       | and to infer that when we think about the impacts tools like this
       | are going to have on knowledge-inquiry and yes, homework,
       | 
       | it's a mistake to think, "ah, this is a new tool that, like
       | Google web search, will undergo modest improvements but provide
       | more or less the same value proposition for, oh, the next
       | decade."
       | 
       | We're in a different moment with these technologies, and getting
       | closer to what mechanically looks like a singularity: their
       | ability to make suggestions as to how to improve themselves.
       | 
       | Not too close. But closer than we think.
       | 
       | Let's not make assertions or spend too much time speculating
       | about modest, linear extrapolations.
       | 
       | We should be focused on the more troubling bewildering and hard
       | to reason about non-linear ones.
        
       | hansvm wrote:
       | Making homework more secure seems like the wrong approach. The
       | whole purpose of a grade is to certify fitness for the next
       | course/year/job. Homework is exceptionally poor at that task even
       | without AI, and overloading it with grades almost invariably
       | makes it worse at its primary task of better educating the
       | student. Larger projects will still need to reckon with AI
       | somehow, but would it not make more sense to just push grading
       | back to a more controlled environment (e.g., ~6 tests in a
       | semester where you can drop the lowest grade so that you can
       | reduce the variance induced by not understanding what a
       | particular professor is looking for and from life issues or test
       | anxiety)?
       | 
       | That'd have the added benefits of not wasting time giving
       | individualized feedback for assignments where students know they
       | half-assed it, not having to constrain assignments to those
       | problems which can be fairly evaluated, not causing additional
       | scheduling overhead and stress as students figure out how to hit
       | so many minor deadlines with all-or-nothing impacts on each
       | assignment, ....
        
         | darkwizard42 wrote:
         | I don't think homework is meant to educate the student. It is
         | meant to help push repetition and mastery of concepts. It is
         | likely very difficult for most students to master physics,
         | mathematics, or other quantitative subjects with just 5 hours
         | of instruction a week in person (and the large class sizes in
         | the US and other countries probably don't help).
        
           | hansvm wrote:
           | I disagree slightly (explanation below), but whether you take
           | "educate" to mean some sort of deep exploration or just rote
           | repetition, I think the point still stands.
           | 
           | The disagreement comes from n=1 anecdata drawn from my own
           | education, but after 8th grade or so nearly all my homework
           | was designed to complement rather than reinforce in-class
           | instruction. Picking on mathematics as one of the things you
           | mentioned, the in-class instruction would introduce a
           | concept, and the homework would give you the opportunity to
           | flounder a bit and probe the bounds of that concept. I
           | particularly enjoyed questions like "the author made a
           | mistake in this question; how would you correct it to be
           | answerable?" In some sense that's repeating a concept, but
           | when you're doing something more than repeating the same
           | technique over and over there's a sense in which it's
           | different too.
        
       | J253 wrote:
       | An easy solution to this is to just not give any credit for
       | homework that goes toward a final grade.
       | 
       | Have a number of in person closed book tests that are difficult
       | enough to prove satisfactory understanding of the topic.
       | 
       | Homework is just a mechanism for a teacher/professor to force a
       | student to dig into and really understand a subject. If the
       | student can use these AI tools to subvert that, then instead of
       | wasting energy or god forbid even more AI to fight the AI, just
       | give students a good enough incentive to actually do the
       | homework. Having the entire grade rely on several of these
       | difficult-enough in-person closed-book tests should be incentive
       | enough.
        
         | bogwog wrote:
         | In a college hackathon, I wrote a chrome extension that crowd
         | sources answers for online tests and displays them next to the
         | questions. So if one person with the extension took the test
         | already, everyone would get the answers even if the
         | questions/answers are randomized. And in the cases where no
         | answer was available (because the test system doesn't show you
         | the correct answers at the end, or something like that), then
         | it would fall back to showing you percentages so you have an
         | idea of which answer everyone else was choosing.
         | 
         | The reason I built that was just because I was annoyed by how
         | lazy my professors were. Almost every single class (which I was
         | paying for) was graded based on your performance on copy and
         | pasted, endlessly reused tests on Blackboard. Sometimes you
         | could just search the question on Google, and find pdf dumps of
         | the exact same test from 5+ years ago. If you don't cheat, you
         | risk getting a lower grade than the people who do cheat (aka
         | everyone). Why spend all that money going to college if you're
         | just going to let other people get ahead of you so easily? The
         | point of a degree is to make you more competitive in the job
         | market, but deciding to not cheat is risking that investment.
         | 
         | Unfortunately, I never actually used it or deployed it once.
         | Coordinating a whole class to install and use a chrome
         | extension for cheating isn't exactly easy. And as far as
         | cheating in online tests goes, there are easier ways to do it.
         | 
         | But yeah, in-person proctored exams are how it should be done.
         | It's the only thing that's fair to everyone.
        
         | TeMPOraL wrote:
         | That would pretty much guarantee nobody would do any homework
         | ever again.
         | 
         | Homework isn't effective at making students "dig into and
         | really understand a subject" - it's a method of externalizing
         | school expenses onto parents, because teacher time and school
         | funds are finite, and the time of parents and children is...
         | not the school's problem. Making the final grade conditioned on
         | homework performance is just about the only way to ensure kids
         | do it (or are made to do it) at all.
         | 
         | Now, I'm not saying homework is _entirely_ a bad idea - I'm
         | complaining that it's being treated as an externality, making
         | it almost unbounded. Teachers don't coordinate among themselves
         | the amount of homework each of them assigns, so the only thing
         | limiting the total amount of homework is a good chunk of the
         | class being systematically exhausted by it, or (hopefully)
         | parents making a fuss before things reach this level. But, the
         | only reason the teachers can push so much homework on kids in
         | the first place is that homework grades matter for the finals.
        
           | moe091 wrote:
           | that's a good point, I just made a comment next to yours
           | responding to the same original comment about whether
           | homework should exist at all, and I think your idea that it
           | is a way to extend the "learning time" of students without
           | requiring more time and resources from teachers is a good
           | counter-point.
           | 
           | If you have the time to read my other comment, what would you
           | think about the idea of only making homework mandatory for
           | students who's quiz/test grades are below a certain level,
           | say 70% for example
        
             | TeMPOraL wrote:
             | About your idea, I'm not sure either way. On the surface it
             | makes sense, and there probably is a real correlation
             | between test performance and amount of after-school
             | learning, so it would make sense to make homework mandatory
             | for kids who scored low. It would also work as punishment,
             | but at the same time reinforcing the kids' perception of
             | homework as punishment.
             | 
             | What makes me really unsure, however, is parents.
             | Families differ in the amount of time and resources they
             | can, or are willing to, spend on child education. This is
             | part of why I
             | don't like the homework-as-externality model: even ignoring
             | how students themselves feel about it, if the school is
             | trying to maximize the amount of work they do after school,
             | the _first_ kids to hit the limit are ones with e.g. a
             | single parent doing two jobs, poor household, or
             | dysfunctional family that doesn't care. Overloading
             | students with homework implicitly disadvantages those that
             | don't have supporting parents with lots of free time. And
             | those same kids will also be ones more likely to fail at
             | tests, in which case dumping extra after-school work at
             | them might do the opposite of the effect you intend.
             | 
             | But this is me speculating, I don't work in education, and
             | I'm sure there's been research done on how to balance the
             | amount of after-school work for the environment children
             | live in. My complaint about homework-as-externality isn't
             | trying to deny the work of education sciences - it's
             | pointing out that even if the research is there and results
             | are solid, it's not being applied anyway, because teachers
             | are uncoordinated and they all individually think, "oh,
             | that's just half an hour worth of work, no big deal".
             | Almost textbook tragedy of the commons.
        
               | darkwizard42 wrote:
               | I resonate a lot with what you said. Homework, in the
               | way it is used today as reinforcement work, is most needed
               | for those who have trouble picking up new concepts during
               | limited instruction time. However, those who have trouble
               | picking up concepts quickly will also have overlap with
               | not having much time/resources outside of live
               | instruction time. That just leads to a double whammy in
               | terms of how homework further disadvantages them.
               | 
               | In the reading I've done, I've seen most advocate for
               | smaller class sizes with more educators per class to help
               | provide more 1:1 attention. This is again at odds with
               | how public schools are funded where I am (US) so I don't
               | know if anything will ever change.
               | 
               | My personal experience is homework was an excellent way
               | to reinforce what was taught in school. For me, it felt
               | especially useful in quantitative subjects and languages
               | (French & Spanish) which both felt like they benefitted
               | from having a concept stick. For qualitative subjects
               | like writing, reading comprehension, I actually now look
               | back and see homework as a way for teachers to see
               | samples of your progress. "After reading this passage,
               | did you learn how to construct a summary? Did you learn
               | how to pull the author's main message? Did you learn how
               | to make a convincing argument for or against the
               | passage's main point" and I can't think of a fast way to
               | do this in any kind of live instruction setting.
        
         | moe091 wrote:
         | I agree, and can we have a discussion about homework being
         | included in the grade in general? Its purpose is to help
         | students learn the material, which should be reflected on the
         | test.
         | 
         | If the student can perform well on the test without it, then
         | they shouldn't be forced to do it and penalized when they
         | don't (which you can imagine happens often, as if the student
         | understands the material well enough already, the homework is
         | just useless busywork).
         | 
         | If the student can't perform well on the test without that,
         | they will be forced to do the homework. In this case, including
         | homework as part of the grade might be detrimental because they
         | can cheat/copy/use AI to complete homework and boost their grade
         | enough to pass while not understanding the material well enough
         | to pass the tests.
         | 
         | The counter-argument here (that I can think of) is that doing
         | homework, even if - or especially if - it's just easy busywork
         | for the student, prepares them for the future where such tasks
         | will almost always be necessary to succeed. A lot of good
         | learners end up not developing this skill/discipline because
         | they aren't forced to work hard in school, which causes them to
         | fail when faced with tasks/problems that require serious
         | effort, and this would arguably make things more difficult for
         | them. In my opinion this problem would be better addressed by
         | funneling kids into classes that are appropriately challenging
         | for them rather than forcing them to do busywork, but that's a
         | much more difficult thing to do, and also learning to complete
         | easy/"useless" busywork is a related but different skill than
         | learning to focus on and complete challenging tasks
        
         | gattilorenz wrote:
         | I personally prefer open-book exams: they discourage cheating
         | in terms of "smuggling definitions by writing them on a
         | calculator", and force teachers to ask questions that require
         | reasoning and deeper understanding
        
           | darkwizard42 wrote:
           | Agreed, but there are some levels of study where open-book
           | just doesn't work well.
           | 
           | I think it would be hard to ask any kind of pre-college
           | mathematics questions that don't become incredibly easy with
           | an open book. The other challenge with open-book exams which
           | focus on deeper understanding and reasoning is they still
           | require understanding fundamentals to be successful, so how
           | do you test for fundamentals?
        
         | sharps_xp wrote:
         | sounds like law school
        
       | [deleted]
        
       | coffeefirst wrote:
       | Well, suppose there are two reasons his daughter has this
       | assignment.
       | 
       | 1. To learn the content of European history. For this, a chatbot
       | is no different than Google, Wikipedia, or a dusty leather
       | almanac. They do a job, they have known limitations, and you can
       | use them deliberately. This doesn't concern me.
       | 
       | 2. To learn how to write, capturing complex ideas and presenting
       | them clearly. This is a skill that takes decades to master. It's
       | easy to imagine an AI autocomplete system that handles sentence
       | structure and grammar that becomes a crutch. This seems like a
       | much more serious problem.
        
         | jay_kyburz wrote:
         | Yes, but now that we have a tool, it will never go away.
         | 
         | Nobody knows how to grow food, or build their houses, or do 100
         | little things that we rely on for our modern life. Do we really
         | need to be able to "write" a sentence?
         | 
         | My kids' (and my own) handwriting is almost unreadable, but by
         | age 10 all of their work is done on a laptop. Do they even need
         | to be able to write by hand any more?
         | 
         | update: I'm more concerned that we cannot yet run these tools
         | locally on computers we control, and with information we feed
         | into them.
         | 
         | I'm more concerned about the dystopian nightmare of having the
         | advertising industry in charge of our sentence writing tools.
        
           | willhslade wrote:
           | The Antikythera mechanism says hello.
        
             | jay_kyburz wrote:
             | People didn't forget how to read the stars because they
             | were using their Antikytheras too much.
        
         | dgudkov wrote:
         | >For this, a chatbot is no different than Google...
         | 
         | Exactly. AI chatbots look like something that can replace
         | Google search and become the next big thing after search
         | engines[+]. Like Google, AI chatbots will crawl and ingest the
         | web. They won't be able to synthesize anything new, but they can
         | be a convenient, very human-friendly way of querying the global
         | knowledge base that the internet has become.
         | 
         | [+] If AI chatbots become successful, they will definitely
         | start injecting ads and spoil everything again.
        
         | TeMPOraL wrote:
         | > _They do a job, they have known limitations, and you can use
         | them deliberately._
         | 
         | I think you're severely underplaying this (as many other
         | commenters are). Google, Wikipedia and a dusty leather almanac
         | are, for typical use cases, _mostly right_ - and when they are
         | wrong, they 're wrong in specific ways you can quickly learn to
         | spot and correct for. The output of GPTChat, or Copilot, is
         | _mostly wrong_. Sometimes overtly, usually subtly, almost
         | always randomly wrong. And since the problem domain is open-
         | ended, you can 't just make the GPT-3 output right by logically
         | inverting it - there are many more possible wrong answers the
         | model could give you than there are right answers.
        
         | ghaff wrote:
         | Yeah, even if the student took the trouble to look up a couple
         | Wikipedia articles or whatever and got the model to make
         | changes so that it was factually correct, they still mostly
         | wouldn't have constructed and written the output which, at
         | least at a high school level, is probably at least 50% of the
         | point of the exercise.
        
       | TacticalCoder wrote:
       | > That there, though, also shows why AI-generated text is
       | something completely different; calculators are deterministic
       | devices: if you calculate 4,839 + 3,948 - 45 you get 8,742, every
       | time.
       | 
       | ChatGPT is just a program. Given the exact same input and seed,
       | its result is going to be 100% deterministic too. It's just that
       | it's going to get the same wrong result in a deterministic way.
       | Determinism may be hidden away from the end user (I don't know
       | about that), but there's just no way around it: it's a program
       | and, given the same inputs, it's fully deterministic.
       | 
       | And if the user / discussion trains it with more input, it's
       | possible to replay these inputs, in the same order, and get the
       | exact same answer from the AI.
       | 
       | StableDiffusion allows users to use it in a 100% deterministic
       | way.
       | 
       | ChatGPT may or may not (I really just don't know) but that'd just
       | be a detail / choice made by OpenAI. I can try it now and have
       | ChatGPT tell me 1309849874 divided by 2 is 24891, being several
       | orders of magnitude off. And then try it again and it gives
       | something else. But that's just because I got connected to
       | another instance, using other seeds or a slightly different
       | model.
       | 
       | At some point there's going to be a free model and it's going to
       | be fully deterministic _if the user wants it to be_.
       | 
       | It's just code running on data. There's no magic.
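       | 
       | With an open model you can see this directly: greedy decoding
       | is bit-for-bit reproducible, and sampling is reproducible once
       | you pin the seed. A minimal sketch with Hugging Face
       | transformers (the model choice is arbitrary):
       | 
       |     import torch
       |     from transformers import AutoModelForCausalLM, AutoTokenizer
       |     
       |     tok = AutoTokenizer.from_pretrained("gpt2")
       |     model = AutoModelForCausalLM.from_pretrained("gpt2")
       |     ids = tok("ChatGPT is", return_tensors="pt").input_ids
       |     
       |     # Greedy decoding: no randomness, same output every run.
       |     greedy = model.generate(ids, do_sample=False,
       |                             max_new_tokens=20)
       |     
       |     # Sampling: random, but reproducible with a fixed seed.
       |     torch.manual_seed(42)
       |     sampled = model.generate(ids, do_sample=True,
       |                              max_new_tokens=20)
       |     
       |     print(tok.decode(greedy[0]))
       |     print(tok.decode(sampled[0]))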
        
         | sublinear wrote:
         | Does anyone really want a deterministic ChatGPT though?
         | 
         | Relying on an incorrect model frozen in time seems worse than
         | relying on an incorrect model that is continuously improved.
         | 
         | Expert systems of the past were deterministic in that the
         | answers were computed from assumed facts and the logic could be
         | queried. That's fundamentally different from how this works.
        
       | RjQoLCOSwiIKfpm wrote:
       | The concept of homework is cruel anyway and needs to be
       | abandoned:
       | 
       | Imagine your employer gave you "homework".
       | 
       | People would pull out pitchforks because it violates their
       | private life and work-life balance.
       | 
       | Yet doing this to kids somehow is believed to be okay!?
       | 
       | - Kids deserve a fixed point in time where their daily work is
       | over, just like you do.
        
       | shon wrote:
       | AFAIK this isn't just the 2-year-old GPT-3; it's been referenced
       | as "GPT-3.5", a model that is much more up to date (2021), and it
       | clearly has some new capabilities.
       | 
       | Good point in the article regarding confidence/accuracy though.
       | 
       | That said, I've been impressed with its usefulness as a spam
       | filter, as the first page of any web search is often heavily
       | saddled with SEO baggage.
       | 
       | Answering simple questions like "How do you make a Sazerac?" is
       | a much nicer experience than on web search.
        
       ___________________________________________________________________
       (page generated 2022-12-05 23:00 UTC)