[HN Gopher] Ask HN: Does the HN commentariat have a reductive vi...
       ___________________________________________________________________
        
       Ask HN: Does the HN commentariat have a reductive view of what a
       human being is?
        
       I've been very struck during most of the AI discussions recently
       how little weight comments seem to give to the subtlety and rich
       contextual knowledge that humans bring to even quite simple
       activities.  I know we often over-estimate the value of our
       contributions. I know we often find that our functions can
       ultimately be automated in some respect. But I find in aggregate
       that the leading comments reflect a very arid conception of being a
       human connected to other humans.  For example, in the discussion
       about AI Lawyers there was very little sense of the moral aspect
       of another human acting on behalf of a human client. In the
       discussions about the replacement of programming jobs by this
       kind of technology, there was not a great deal of confidence in
       the importance of human judgement in building human-focused
       systems.  Is this just reflective of our context as people who
       streamline and automate, or do HN readers
       just think a human isn't such a complex entity?  For me this is
       somewhat like the T-Shirt that says "I went outside once, but the
       graphics were crap"...except nobody's joking.
        
       Author : scandox
       Score  : 38 points
       Date   : 2023-02-01 21:51 UTC (1 hours ago)
        
       | tremon wrote:
       | Does the questioner have a reductive view of the HN commentariat?
        
       | tptacek wrote:
       | Careful. There is no "view of the HN commentariat". Many
       | thousands of different people write on HN, and they have
       | differing views. There isn't even usually a meaningful prevailing
       | view: cognitive biases will make unusual or conflicting views
       | stand out more to you, and there's an intrinsic (and topic-
       | specific) bias on what controversies people will wade into and
       | when. Even the initial conditions on threads can radically alter
       | what the "commentariat" will appear to be saying.
        
       | nicoburns wrote:
       | > do HN readers just think a human isn't such a complex entity
       | 
       | I'm someone who has made these kinds of comments before. It may
       | help you to place such comments in the context that I am _not_
       | someone who works in AI, but I am someone who studied philosophy
       | and has both read the scientific literature on and thought
       | deeply about the nature of the mind.
       | 
       | While we're not yet close to understanding the mind in entirety,
       | something I was struck by as I read about the parts of the mind
       | we do understand is just how many human capabilities do seem to
       | be explainable on a physical neural network (as in an actual
       | network of physical neurons, not the AI thing) basis without
       | requiring any notion of consciousness or uniquely human (or even
       | animal) capability.
       | 
       | My view is not that AIs are currently anywhere close to the
       | capabilities of humans. But:
       | 
       | - I am somewhat agnostic on the question of whether they could
       | match them in future. And I think other people should be too.
       | We're not really in a position to know this yet.
       | 
       | - I think a lot of the limitations of AIs are limitations in IO
       | capabilities: AIs can typically only consume text or images, and
       | they can't typically influence the world themselves at all (one
       | of the things that has come out of research into (human)
       | perception is that it's generally very much an active process -
       | activities that might naively seem passive like vision actually
       | involve tight feedback loops and actively interacting with the
       | world).
       | 
       | - To me the way modern "deep learning" models work _does_ seem
       | like computers genuinely learning from experience, and it's
       | possible that it differs from human learning largely in scale
       | and complexity rather than being fundamentally different (it is
       | of course possible that this is not the case, but I don't think
       | it is obviously the case).
       | 
       | I would also agree with another commenter that part of the
       | purpose of such comments is to provoke thought and break people
       | out of their assumptions. Many people take the idea that human
       | cognition is fundamentally different from machine cognition (or
       | even animal cognition!) for granted. And while that may
       | ultimately end up being the case, I think it's valuable to
       | question that belief.
        
       | alphazard wrote:
       | There's definitely some AI hype going around right now, so it's
       | important to filter that from these conversations. We aren't as
       | advanced as people say, and we aren't advancing as fast as people
       | say either. We are advancing.
       | 
       | Most HN readers will be receptive and maybe even in agreement
       | about statements concerning the hardness of these problems, but
       | not the magicalness of these problems. In your post, you used a
       | lot of magical words, which the commentariat is correct to
       | identify as non-constructive. Phrases like "human connected to
       | other humans", "human judgement", "moral aspect".
       | 
       | There is nothing about humanness that makes these problems any
       | less tractable. If they are hard and we don't know how to build
       | machines that solve them as well as humans do, so be it. But they
       | aren't hard for magical reasons relating to poorly defined terms
       | like "morality" or "connectedness". At least that is the opinion
       | of most scientifically minded people, and probably the
       | commentariat.
        
       | sublinear wrote:
       | > do HN readers just think a human isn't such a complex entity
       | 
       | Maybe I'm wrong, but it seems like the majority of the comments
       | of this type that I see are written by accounts created minutes
       | beforehand, whether meant as throwaways or not.
        
       | RGamma wrote:
       | * * *
        
       | morganf wrote:
       | Why are Marxists Marxist?
       | 
       | Or said differently: when you have a hammer, everything is a
       | nail.
       | 
       | And our beloved HN, as amazing and as addictive as it is, is a
       | community by and for "the software developer-entrepreneur". By
       | definition, wielding the hammer of "your mind tries to reduce
       | everything to algorithms" (the personality type which is
       | attracted to writing software for that very reason!), of course
       | they will do that to humans as well.
       | 
       | Of course, I'd love an HN of poets, but that would have the
       | problem of the other extreme: so much empathic emoting that it
       | would be hard to turn it into clear, concise, cutting, and
       | actionable insights...
        
         | godelski wrote:
         | I think there is something fairly distinctive about
         | programming that makes them/us believe we know more than we
         | do (not strictly unique to programming, but not every job
         | does this). The thing is that we are often
         | jack-of-all-trades types and work with a large range of
         | domains. This gives us insight into those domains but does
         | not make us experts. That insight can trick us into thinking
         | we understand the domain, but expertise comes from
         | understanding nuance and having an intimate grasp of the
         | vernacular. Programmers are often the middle part of that
         | Gaussian meme[0]: enough knowledge to think you understand a
         | subject but not enough knowledge to really know it. It
         | happens because we're human.
         | 
         | [0]https://i.imgflip.com/5gfpyc.jpg
        
       | godelski wrote:
       | I think the issue here is that many of these things are extremely
       | complicated but _look_ simple. If you aren't in the weeds of that
       | technology you're primed to attach yourself to the simple answer.
       | This is often exacerbated as our technical vernacular overlaps
       | with English, as well as other technical vernaculars. This can
       | make someone believe that they have more understanding of a topic
       | than they really have. This is very obvious in AI/ML research
       | (hint, usually when people say "AI" they aren't a researcher)
       | because there is a lot of hype around the research and products.
       | But I have a great example of this miscommunication on HN from a
       | comment I made yesterday[0]. I said
       | 
       | > [Confidence] indicate[s] how confident the model is of the
       | result, not how likely the prediction is to be accurate.
       | 
       | The problem here is likely how "confidence" and "likelihood" are
       | used. The words are overloaded. Maybe I should have said "not how
       | probable the prediction is", but this could be even less clear.
       | Most people think likelihood and probability are the same thing.
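       | 
       | To make that concrete, here's a toy sketch (hypothetical
       | numbers I'm making up, not output from the actual model in
       | question): a classifier can report high softmax "confidence"
       | on every prediction while being right less than half the time.
       | 
       |     import numpy as np
       | 
       |     # Hypothetical softmax "confidence" the model reports for
       |     # its predicted class on five examples, and whether each
       |     # prediction was actually correct.
       |     confidence = np.array([0.95, 0.97, 0.93, 0.96, 0.94])
       |     correct    = np.array([1,    0,    1,    0,    0])
       | 
       |     print("mean confidence:", confidence.mean())  # ~0.95
       |     print("accuracy:       ", correct.mean())     # 0.4
       | 
       |     # Confidence here is the model's own internal score, not
       |     # the probability of being correct; measuring and closing
       |     # that gap is what calibration work is about.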
       | 
       | So there's a lot to why this is happening. Misreadings, ego,
       | fooling ourselves, and more. I think there's only a few solutions
       | though. First, we need to recognize that there's nothing wrong
       | with being wrong. After all, we are all wrong. There is no
       | absolute truth. Our perceptions are just a model of the world,
       | not the world[1]. Second, we have to foster a culture that
       | encourages updating our opinions as we learn more. Third, maybe
       | we don't need to comment on everything? We need to be careful
       | because we might think we know more than we do, especially since
       | we might know more than the average person and want to help this
       | other person understand (but this doesn't mean we're an expert or
       | even right!). Fourth, we need to recognize that language is
       | actually really complicated and that miscommunication is quite
       | frequent. The purpose of language is to communicate an idea from
       | one brain and pass it to another brain. But this is done through
       | lossy compression. Good faith speaking is doing our best to
       | encode in a fashion that is most likely to be well interpreted by
       | our listener's decoder ("speak to your audience" is hard on the
       | internet. Large audiences have a large variance in priors!). Good
       | faith listening is doing our best to make our decoder align with
       | the _intent_ of the speaker's message. Good faith means we need
       | to recognize the stochastic nature of language and that this is
       | more difficult as the diversity of our audience increases
       | (communication is easy with friends but harder with strangers).
       | 
       | I'm sure others have more points to make and I'd love to hear
       | other possible solutions or disagreements with what I've claimed.
       | Let's communicate to update all our priors.
       | 
       | (I know this was originally about ML, which I research, but I
       | think the question was keyed on a broader concept. If we want to
       | discuss stochastic parrots or other ML stuff we can definitely do
       | so. Sorry if this was in fact a non sequitur)
       | 
       | [0] https://news.ycombinator.com/item?id=34608009
       | 
       | [1] https://hermiene.net/essays-trans/relativity_of_wrong.html
        
       | debacle wrote:
       | HN is overwhelmingly made up of young men of above average
       | intelligence in a high-risk segment of an industry that has a
       | reputation for lower than average social skills.
       | 
       | Go into every thread with that understanding.
        
       | Connor_Creegan wrote:
       | Surprise: most people in this community have a theologically-
       | impoverished view of mankind. Not sure what you were expecting
       | from an aggregator site run by venture capitalists.
        
       | kerpotgh wrote:
       | [dead]
        
       | mxkopy wrote:
       | HN is not the place for anthropological or humanistic insight,
       | unfortunately.
        
       | tpmx wrote:
       | (Not working in AI.)
       | 
       | I feel like many of these reductive views are expressed in order
       | to provoke unusual thoughts. This is useful.
        
       | gizajob wrote:
       | Such reductivism in AI has been going on since Turing. The
       | linguistic outputs that the test is measured in are a small
       | subset of what human beings do, and a more recent subset at that,
       | in evolutionary terms. Much of our intersubjective attunement to
       | one another happens below the level of language (i.e. you know
       | when your wife is cross or in a bad mood...). What's worse is
       | that we only have language, in the form of computer languages,
       | with which to capture and describe the full extent of the
       | mindedness of a human being in order to create an AI, yet
       | language is a higher order phenomenon than the complete mind that
       | we're looking to replicate.
        
         | ryandrake wrote:
         | On the Internet, all the output we have is text on a page. We
         | may be quickly approaching a world where it is impossible to
         | distinguish a human on the Internet from a bot on the Internet.
         | If the output cannot be used to distinguish bot vs. human, does
         | it really matter who created that output?
         | 
         | Did ChatGPT generate the above output?
        
       | im3w1l wrote:
       | A worker can be a very complex human being even as the work they
       | do is simple and easily automated.
        
         | staindk wrote:
         | 100%
         | 
         | I think I understand what OP is getting at, but if an AI lawyer
         | proves to be both cheaper to hire and more effective at
         | defending me... it's a no-brainer IMO.
         | 
         | Then after my AI lawyer and I win my jaywalking hearing or
         | whatever, I can meet up with friends and talk about things like
         | humans do.
        
       | jdthedisciple wrote:
       | Perhaps you are approaching it from a normative POV, whereas the
       | average among the HN crowd is looking at it from a descriptive
       | POV.
       | 
       | In other words, maybe those acquainted with software and AI see
       | the things you mentioned - AI Lawyers and AI developers - as
       | inevitabilities that we will simply have to face. This in turn
       | leads HN'ers to think in terms of entrepreneurship or "how can
       | this make me money in the future?", which means _adopting_ those
       | trends rather than rejecting them, because if you reject them,
       | someone else will adopt them. Thus, the whole techno-
       | entrepreneurial spirit of this forum leaves little space for
       | viewpoints that offer no technological or entrepreneurial
       | benefit or advancement, such as rejecting AI.
        
       | version_five wrote:
       | It's the same sort of idea as people who think being able to look
       | stuff up in Wikipedia is the same as knowing it (and my pet
       | peeve: thinking they're contributing to a discussion by reading a
       | Wikipedia article out loud).
       | 
       | I'm not sure where it comes from; I suspect it's just immaturity.
       | I've seen it here but also in the real world, and I'm not sure HN
       | over-indexes on it. Maybe it's even the opposite.
        
         | sph wrote:
         | When you have a hammer, everything looks like a nail.
         | 
         | After the failed AI hype of the turn of the millennium, we have
         | developed a niche of machine learning that produced significant
         | results, so there is a push to see if this impressive yet very
         | limited piece of technology is just a few layers and GPUs away
         | from AGI.
         | 
         | Sorry, but intelligence is more than a glorified, generalised
         | Markov chain. And even if you solved that problem, you'd have
         | something smarter, but only about as versatile as a gnat.
         | 
         | To create mammal levels of complexity, you need to implement
         | consciousness and sentience, and we still have no clue what
         | the hell those are or how they work.
        
       | type0 wrote:
       | I'm fairly certain you could create an AI bot that would
       | indistinguishably mimic your regular HN user, in the same style
       | as GPT-4chan: https://www.youtube.com/watch?v=efPrtcLdcdM
       | 
       | The only problem is that it might be banned for spewing too many
       | falsehoods.
        
       | titzer wrote:
       | > rich contextual knowledge that humans bring to even quite
       | simple activities.
       | 
       | I feel like the continual TikTok reduction of attention span
       | and the high-speed memetics of it all are massively reducing
       | our "rich contextual knowledge", and we're becoming a bunch of
       | flippant oafs.
        
       | [deleted]
        
       | AlexandrB wrote:
       | The one that constantly grinds my gears is commenters comparing
       | how AI systems are trained to human learning as if they are the
       | same. E.g. "How is *GPT taking in data and producing an output
       | different than a human learning a skill and making
       | prose/code/art?"
        
         | rowanG077 wrote:
         | I think it's an incredibly important question to be able to
         | explain how an AI creating novel work is different from a
         | human creating novel work. Why does this grind your gears?
        
           | kube-system wrote:
           | I'm not the parent commenter, but it grinds my gears because
           | the answer is obvious. Humans value human creativity because
           | of emotion, shared experience, and the value we place on each
           | other as humans.
        
           | sillysaurusx wrote:
           | I suspect it's the same reason it grinds my gears that it's
           | called a "learning rate" instead of "step size" in ML.
           | 
           | Not only is it a less precise term, but it also gives the
           | wrong implications.
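           | 
           | As a toy sketch of what I mean (my own illustration, not
           | taken from any particular framework): in plain gradient
           | descent the "learning rate" is literally just a scalar
           | step size.
           | 
           |     # One step of gradient descent on f(w) = (w - 3)^2.
           |     w = 0.0
           |     step_size = 0.1           # aka the "learning rate"
           |     grad = 2 * (w - 3)        # gradient of the loss at w
           |     w = w - step_size * grad  # the scalar just scales the step
           |     print(w)                  # 0.6 -- nothing "learned", just moved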
           | 
           | Personally, I'm on the side of releasing training data. Let
           | everybody train on everything. But it's always felt absurd to
           | say that the ML models are "learning" things.
           | 
           | But hey, none of us know how learning works anyway, right? So
           | maybe it's not such a big distinction. As you say, none of us
           | can pinpoint _why_ a model isn't learning vs why we are.
        
           | bonsaibilly wrote:
           | To me it seems to imply a stunningly nihilistic point of view
           | vis-a-vis human writing (or art, where it also gets repeated
           | a lot here).
           | 
           | It seems almost definitionally obvious that what an LLM does
           | is not the same as what a human does - both on the basis that
           | if all human writing were merely done by blending together
           | other writing we had seen in the past, it would appear to be
           | impossible for us to have developed written communication in
           | the first place, and on the basis that when I write
           | something, I _mean_ something I am then attempting to
           | communicate. An LLM never _means_ to communicate anything,
           | there is no _there_ there; it simply reproduces the most
           | likely tokens in response to a prompt.
           | 
           | To insist that we're just a bunch of walking, breathing
           | prompt-reproducers essentially seems like it's rooted in a
           | belief that we have no interior lives, and that meaning in
           | writing or art is utterly illusory.
        
             | hprotagonist wrote:
             | see: http://www.jaronlanier.com/zombie.html
             | 
             | It's not said very much, but this style of dehumanization
             | is really corrosive in a way that directly benefits the
             | worst forms of human governments and structures, and this
             | fact, I think, goes genuinely unrecognized too often in
             | tech-land.
             | 
             | if we really are p-zombies, then those people aren't really
             | suffering, right, so it's fine ...
        
         | zzzzzzzza wrote:
         | as a "reductivist" I feel I stand on the side of freedom, and
         | that the other side wants to own our souls as intellectual
         | property, under the guise of extending and protecting their
         | current contractual relationships with media companies
         | 
         | (I don't believe intellectual property is a morally legitimate
         | concept, since it comes from an exploration of a pre-existing
         | space of ideas (also, I am a Georgist, so I don't believe
         | physical space can be morally owned either))
         | 
         | naturally this strongly held belief can result in sharp words
         | against perceived enemies.
        
         | godelski wrote:
         | > How [is an AI artist] taking in data and producing an output
         | different than a human learning a skill and making
         | prose/code/art?
         | 
         | There may be a misalignment in intent of the claim and
         | interpretation of the claim. As someone who researches
         | generative modeling, I actually think there is an important
         | aspect to this question, but I do not think that this question
         | has anything to do with how the brain or the machine learn art.
         | It has to do with legality and morals.
         | 
         | So I'll break it down. We believe that it is morally and
         | legally acceptable for a human to look at copyrighted artwork
         | and even mimic it in the process of learning how to become a
         | better artist (sales are where the morality breaks down,
         | especially with impersonation). The question is "where is the
         | nuanced difference between a machine using that data and a
         | human using that data to learn?" This doesn't depend on the
         | learning techniques just like how no one cares if one person
         | learns differently than another person. Obviously no one thinks
         | AI art should impersonate real artists nor do they think people
         | should sell this work if it contains copyrighted material.
         | That's in line with the human artist values (fine to draw
         | Mickey Mouse, not fine to sell a drawing of Mickey Mouse and
         | worse to sell that drawing and claim it is official Disney
         | art).
         | 
         | This is a very important question because we need to create
         | laws about how we can train these systems and how we handle the
         | data that they produce (two very different things!). The line
         | between human and machine is a lot thinner than people think
         | (think digital painting and CGI), and it doesn't matter that
         | stochastic algorithms learn differently than humans learn. The
         | question is about how/if learning material can be used and if
         | machines should be treated differently. And if so, why.
         | 
         | But this is way more than one sentence.
        
         | krona wrote:
         | Didn't you just state the Chinese room thought experiment?
         | 
         | It's an important observation that humans are just as capable
         | of doing tasks without understanding them, and so it's no
         | surprise that the computer doesn't understand them either.
        
       ___________________________________________________________________
       (page generated 2023-02-01 23:01 UTC)