[HN Gopher] Neural nets are not "slightly conscious," and AI PR ...
       ___________________________________________________________________
        
       Neural nets are not "slightly conscious," and AI PR can do with
       less hype
        
       Author : andreyk
       Score  : 96 points
       Date   : 2022-02-20 20:16 UTC (2 hours ago)
        
 (HTM) web link (lastweekin.ai)
 (TXT) w3m dump (lastweekin.ai)
        
       | andygroundwater wrote:
       | It's stretching credulity beyond the usual exaggerated hype
       | associated with AI. What we have now is semi-OK forecasting at
        | scale, nothing more. We (as in the researchers, platforms and
        | technology) can get a system to select what looks to be a valid
        | response to a host of stimuli, e.g., chess moves, patient
        | diagnostics, vehicle driving, etc.
       | 
        | None of this "thinks for itself", nor is it remotely near to such
        | levels of conscious self-awareness. I'm sick of this hype; it's
        | been going on since the 1950s, with hucksters promising robot
        | household domestics and all sorts of kooky weirdness that was
        | swallowed up by the popular media.
        
         | csee wrote:
         | The people who say it might be slightly conscious are just
         | appealing to a functional, substrate-independent requirement
         | for consciousness. I happen to agree with them that it's
         | feasible and plausible.
         | 
         | Let me ask you. If we invented an AGI that was as smart as us
         | based on much larger nets (perhaps with one or two algorithmic
         | tweaks on current approaches) trained on much more data,
         | running on commodity hardware, would it be conscious? If yes,
         | why can't our current nets be slightly conscious?
        
           | new_guy wrote:
           | 'slightly conscious' is just word salad, it doesn't actually
           | mean _anything_.
        
             | csee wrote:
              | I am slightly conscious when I am extremely drunk and can
              | barely think and feel, yet still have some modicum of
              | conscious experience. That's what it means.
             | 
             | If you don't agree that consciousness exists on a spectrum,
             | and instead think that something is either conscious or
             | not, then simply replace the words 'slightly conscious'
             | with 'conscious'.
        
               | tedunangst wrote:
               | But why would I want to put an extremely drunk computer
               | in charge of making decisions?
        
               | csee wrote:
               | I was attempting to give an example of what a 'slightly
               | conscious' state is to show that it isn't completely
               | incoherent. Admittedly it was far from rigorous.
        
             | bondarchuk wrote:
             | You could say the same about "conscious" in general.
             | There's not a single coherent definition of the word, not
             | even in academic debates.
        
               | Animats wrote:
               | Maybe we should just say "Shut up and program", similar
               | to how some physicists say, "Shut up and calculate", when
               | the philosophical wrangling gets out of hand. Copenhagen
               | interpretation vs. many-worlds? Does it matter? Is there
               | any way to find out? If not, back to work.
               | 
               | My comment on this for several decades has been that we
               | don't know enough to address consciousness. We need to
               | get common sense right first. Common sense, in this
               | context, is getting through the next 30 seconds without
               | screwing up. Automatic driving is the most active area
               | there. Robot manipulation in unstructured environments is
               | a closely related problem. Neither works well yet. Large
               | neural nets are not particularly good at either of these
               | problems.
               | 
               | We're missing something important. Something that all the
               | mammals have. People have been arguing whether animals
               | have consciousness for a long time, at least back to
               | Aristotle. Few people claim that animals don't have some
               | degree of common sense. It's essential to survival. Yet
               | AI is terrible at implementing common sense. This is a
               | big problem.
        
               | YeGoblynQueenne wrote:
               | Indeed, common sense is one of the foundational problems
               | of the field. This is John McCarthy, in 1959, fresh out
               | of the Dartmouth workshop:
               | 
               |  _Programs with common sense_
               | 
               |  _Interesting work is being done in programming computers
               | to solve problems which require a high degree of
               | intelligence in humans. However, certain elementary
               | verbal reasoning processes so simple that they can be
               | carried out by any non-feeble minded human have yet to be
               | simulated by machine programs._
               | 
               | http://jmc.stanford.edu/articles/mcc59/mcc59.pdf
               | 
               | Again, that's 1959. Ouch.
               | 
               | I wonder, who started talking about consciousness in
               | machines? Turing talked of "thinking", McCarthy of common
                | sense, lots of people of "intelligence", Drew McDermott of
               | stupidity, even, but who was the first to broach the
               | subject of "consciousness" in machines?
        
               | csee wrote:
               | There isn't, but the response to this lack of definition
               | shouldn't be to simply terminate the discussion.
               | 
               | We know it's probably a real thing because we experience
               | it, and it's an extremely important open question whether
               | an AGI on hardware will have "it" too.
               | 
               | The answer to the question will have large ethical
               | implications a few decades into the future. If they can
               | suffer just like animals can, we really need to know that
               | so we don't accidentally create a large amount of
               | suffering. If they can't suffer, just like rocks probably
               | can't, this doesn't have to be a concern of ours.
        
               | grumbel wrote:
                | The response to the lack of definition should be
                | investigation into what that definition could look like,
                | not arguing about whether we or something else has it.
                | Without a definition and criteria to test, you're never
                | going to make progress.
        
               | csee wrote:
               | Philosophers have been trying for decades to define it
               | rigorously and have failed decisively. It really looks
               | intractable at the moment. Given we are in this quagmire,
               | I think it is ok to explore/discuss a bit further despite
               | the shaky foundations of only having fuzzy definitions of
               | "qualia" or "consciousness" to rely on.
        
               | mannykannot wrote:
               | Quite a lot of the philosophical debate has been tied up
               | in the effort to show that minds cannot be the result of
               | purely physical processes or will never be explained as
               | such, which does not tell us anything about what they
               | are.
               | 
               | We are not going to be able to say with any great
               | precision what we are trying to say with the word
                | 'consciousness' until we have more information. In the
                | absence of that, what we can do is say what phenomena
                | seem to be in need of explanation before we can compile
                | a definition.
               | 
               | At this point, opinions that human-level consciousness is
               | either just more of what has been done so far, or cannot
               | possibly be just that, are just opinions.
        
               | sesm wrote:
               | Which probably means that someone with "chief scientist"
                | title shouldn't be using it when making public claims. Of
                | course, he can do it for his own profit, but he is
                | ruining the credibility of his research field, which is
                | why people working in this field object to it.
        
           | oneoff786 wrote:
            | There's no evidence that neural nets can form an AGI, so
            | it's a moot point. AGI is an ill-defined inflection point.
        
             | Galaxeblaffer wrote:
              | I'd consider brains and other biological neural systems to
              | be neural nets. So to me there's pretty convincing evidence
              | that neural nets can form an AGI.
        
               | oneoff786 wrote:
               | Well you shouldn't. They are not the same. Brains are not
               | (ML) neural networks. Neural networks are just a
               | mathematical approximation of one part of how the mind
               | works
        
       | defenestration wrote:
        | You can argue that consciousness has an important role in
        | evolution: that creatures which are aware of their own existence
        | have a greater chance of reproduction and survival. If we
        | created an AI and gave it the goal of maximum reproduction,
        | would it be more effective if it could 'think' about itself?
        
       | Veedrac wrote:
       | So this article does not actually defend its claim, it just gets
       | mad that some AI researcher expressed their opinion, makes an
       | unsourced (albeit probably correct) claim that the criticized
       | opinion is a minority position, and wishes really strongly that
       | people would be less excited about this thing that they are not
       | as excited about.
       | 
       | Meanwhile in academic philosophy, it's totally OK to conjecture
       | that, say, subatomic particles are 'slightly conscious' and
       | nobody tries to tell them that they are not allowed to have an
       | opinion.
       | 
       | Here's a hint, if you want to refute an idea, refute the actual
       | idea, don't just tell the person in so many words that they don't
       | have the social status to say it. Yes this post wound me up a
       | bit, how could you tell?
        
         | abeppu wrote:
         | You're asking for a refutation, and ordinarily that's a good
         | thing to aim for, but in this case was the original claim clear
         | enough to be refuted? I think we don't even have a good
         | definition for consciousness and we certainly don't have
         | agreement over what would constitute evidence for it from the
         | view of an outside observer, and the original claim doesn't
         | attempt to provide any evidence, and so doesn't even imply an
         | epistemic position. How can one refute something which is so
         | vague?
        
         | jstx1 wrote:
         | > Here's a hint, if you want to refute an idea, refute the
         | actual idea,
         | 
         | Science doesn't work that way -
         | https://en.wikipedia.org/wiki/Falsifiability
        
         | melony wrote:
         | The difference is that claiming consciousness may kill the
         | entire field of ML research when non-experts decide to wade in
         | and start lobbying for regulation. You don't want misguided
         | groups like PETA meddling with regulating neural network
         | research. FAANG won't be affected much either way but your
         | average university will.
        
         | andreyk wrote:
          | Hmm, I am not sure this is a fair assessment; the portion on
          | "Experts largely agree that current forms of AI are not
          | conscious, in any sense of the word" provides sources and a
         | brief argument. Sure it's not a super long defense of the
         | stance, but then again this is mostly an overview of what
         | happened with all this twitter drama and not a full argument
         | about this topic.
         | 
         | Also, it outright states "Granted, the claim could also be
         | reasonable, if a particular definition of consciousness was
         | specified as well."
        
           | andai wrote:
           | Do we even have a consensus on which animals are conscious?
        
             | burrows wrote:
             | Do you believe it makes sense to even claim that other
             | humans are conscious?
             | 
             | If conscious is defined as "to have subjective
             | experiences", then I don't believe "other people are
             | conscious" is coherent.
             | 
              | The argument I usually hear is that other bodies are
              | constructed like my body and I'm conscious, therefore they
              | are probably conscious too.
             | 
             | But I think this completely misses the point. The issue is
             | the proposition itself. How can that proposition be
              | translated into empirical claims? If the answer is just
              | that other bodies are like my body, then 'conscious' is
              | just a fancy synonym for "is a human being".
        
             | danaris wrote:
             | It is possible to define an upper boundary for "this is not
             | conscious" and a lower boundary for "this _is_ conscious "
             | with grey area in between them.
             | 
             | Thus, even if we cannot clearly state for any given animal
             | whether it is or is not conscious, we can still clearly
             | state that, say, a coffee maker is not conscious, even if
             | it has rudimentary processing capability, or that a person
             | _is_.
             | 
             | As I implied in another comment[0], I believe it would be
             | both possible and valuable to construct a set of conditions
              | that we collectively feel are _necessary_, if not
              | _sufficient_, to define consciousness. That way, we could
             | at least rule it out as long as no AI meets those minimum
             | standards.
             | 
             | [0] https://news.ycombinator.com/item?id=30409569
        
           | jeffparsons wrote:
           | > "Experts largely agree that current forms of AI are not
           | conscious, in any sense of the word."
           | 
           | Experts in what? There _are_ no experts in consciousness, at
           | least in the lay sense.
        
             | mcherm wrote:
             | Experts in AI.
             | 
             | The statement "Experts largely agree that current forms of
             | AI are not conscious" connects to two pieces of expertise:
             | expertise in consciousness and expertise in AI. It is
             | plausible an expert in AI might have the background to
             | state with confidence that AI is not "conscious" in any
             | meaningful sense of the word.
        
             | OneLeggedCat wrote:
             | Looks like you are getting downvoted, but it's true.
             | Defining exactly what it is to be "conscious" is a nearly
             | impossible problem to solve, even having spent your life
              | studying it. I'm not personally convinced that
              | "cogito, ergo sum" is even correct.
        
               | OJFord wrote:
               | 'cogito' is a stumbling block in itself really.
        
               | Traster wrote:
               | It's true that "conscious" may be difficult to define,
                | but it's almost impossible to come up with a definition
                | for which there aren't existing experts.
        
             | joshuamorton wrote:
             | Perhaps this is the point. If you don't have an agreed upon
              | definition of the word, it is not a useful tool. A claim
              | of consciousness, if that claim is meaningless, isn't
              | useful.
             | 
             | But aside from that, there is a lot of philosophy on what
             | consciousness is
             | (https://en.wikipedia.org/wiki/Consciousness has some of
              | it). And those people, especially philosophers in the
              | crossover of computer systems/intelligence and general
              | philosophy, are "experts".
        
         | nope_42 wrote:
         | The burden of proof is on the person claiming something is
         | true.
        
           | darawk wrote:
           | So if I claim it is true that neural nets are not conscious,
           | the burden of proof is now on that claim?
           | 
           | The burden of proof is on the person making an assertion. The
           | original claim was not an assertion, it was that they "may be
           | slightly conscious". The article linked here is the only one
           | that made an actual assertion, which is that the original
           | claim was categorically false.
           | 
           | In short, I agree with you. The burden of proof is on this
           | article to demonstrate that neural nets are not conscious.
        
             | nope_42 wrote:
              | The null hypothesis is that things aren't conscious until
              | there is evidence that they are.
        
               | darawk wrote:
               | And what evidence is that, specifically?
        
       | chasing wrote:
       | There are actual living things we have difficulty proving are
       | "conscious" and you get into really tricky territory trying to
       | establish what might be "conscious" (or even "alive") in the
       | world even without bringing AI into the mix. Even the people
        | around us we can't _prove_ to be conscious, except in that we
        | are also human and assume they have a similar first-person
        | (subjective) view of the world and aren't just biological
        | robots running equations. Yes, you could literally be the only
        | conscious being in the universe, and the universe would be
        | indistinguishable from one with many consciousnesses.
        
         | freemint wrote:
          | If something can only "think" when it is fed data, it cannot
          | reflect on itself of its own volition.
        
           | OJFord wrote:
           | (in an ironic demonstration of what I mean) I'm not sure I
           | understand what you're saying - but is it that you can only
           | call a process 'thinking' if it's producing some output
           | that's not derivable from its input, nor hard-coded in the
           | definition of the function, as it were?
           | 
           | Perhaps that's way off, I was just starting to think along
           | those lines as I came to your comment, and it seemed it might
           | fit, that we might be thinking along the same lines.
        
           | joe_the_user wrote:
           | This especially,
           | 
           | I'd be very skeptical of claims that, say, some complex
           | ongoing control system, like a self-driving car, could be
           | "conscious". But there's an argument someone could make.
           | 
           | But an artifact that has no data storage seems to fail any
           | reasonable definition immediately. Maybe it could be part of
           | something else that you can claim is conscious if you add
           | storage, output controls, aims or whatever. But by itself the
           | claim just seems preposterous.
        
         | visarga wrote:
         | > Yes, you could literally be the only conscious being in the
         | universe and the universe would indistinguishable from one with
         | many consciousnesses.
         | 
         | No, you think that's possible but it's naive. Your existence
         | depends on environment and self replication of information.
         | Your structure depends on it, your cognition. Are you saying
         | you can be separate from the tree, and everything just a fake
         | around you?
        
         | igorkraw wrote:
          | Edit: Shoot, I meant to reply to another comment. Leaving this
          | here and linking it.
         | 
         | ---
         | 
         | IIT is the most coherent definition of consciousness I'm aware
         | of
         | https://en.m.wikipedia.org/wiki/Integrated_information_theor...
         | 
          | People dismiss it often, and it has valid academic criticisms,
          | but most "lay" critiques I've seen seem to dismiss it because
          | it ascribes consciousness to things they don't consider to be
          | "conscious like them".
         | 
          | I think in general, we exaggerate qualities that humans have
          | as uniquely special and important (intelligence, sentience,
          | consciousness), even if the definition of them is fuzzy and any
          | non-fuzzy definition is "too inclusive". I wonder if this is
          | because we are collectively identity-building and creating an
          | "other" out of nature, because otherwise we'd
          | 
          | 1. Be much less special than we want to feel
          | 
          | 2. Have to consider a lot of inconvenient externalities for
          | our reasoning (moral and otherwise) to be consistent
          | 
          | And both are slowdowns when you are bootstrapping a
          | post-scarcity society out of scarcity, so it's culturally
          | valuable to reify special qualities that we just determine
          | "we" possess and "they" don't - because it's easier to unify
          | on and allows more actions than making sure everyone can deal
          | with the unfiltered reality of human... unremarkableness in an
          | uncaring world, another social species like so many others
          | with a temporary oligarchy on the planet (without wanting to
          | drone on too philosophically, I do think this aspect of
          | Weltschmerz is underappreciated, especially seeing the anxiety
          | amongst my peer group when the topic comes up)
        
           | mannykannot wrote:
           | You have acknowledged that there are valid academic
           | criticisms of IIT, and for those wanting to get an idea of
           | what those might be, a good place to start is with Scott
           | Aaronson's responses in his blog:
           | https://scottaaronson.blog/?p=1799
           | 
           | Note that the issue that you are concerned with here and the
           | usefulness of IIT are separate concerns, and critics like
           | Aaronson are not taking that position on the basis of the
           | attitudes you claim are behind many 'lay' critiques.
        
       | dificilis wrote:
       | "Conscious" is a word that has no objective scientific
       | definition.
       | 
       | It follows that "slightly conscious" is not well defined.
       | 
       | In practice "conscious" just means "anything that thinks and
       | makes decisions like I do".
       | 
       | Also, nobody actually understands how their own brain works when
       | they are thinking and deciding, which makes it very difficult for
       | anyone to determine if some particular AI software thinks and
       | decides the same way that their brain does those things.
        
       | bondarchuk wrote:
       | I don't think "conscious" as a binary or scalar variable will
       | ever be a coherent concept. "Raw" consciousness without content
       | has never been demonstrated or even sensibly theorized. At the
       | very least, we should add another term, that which the entity is
       | conscious of. Then, I don't see why you would so vehemently deny
       | that an image recognition net is conscious of the images it
       | recognizes.
       | 
       | Last I heard lobsters are supposed to be conscious and they only
       | have about 100k neurons.
        
       | skilled wrote:
        | I mean, whatever the comments say here, it doesn't change the
        | fact that news outlets pick up these kinds of tweets and
        | proclaim them as gospel.
        
       | jowday wrote:
       | Discussions of "Consciousness" in the context of ML or AI
       | research always seem to devolve into navel-gazing futurist
       | pseudointellectualism. I don't think it's possible to have a
        | meaningful conversation about something as ill-defined as
       | consciousness. This isn't to malign the OpenAI researcher behind
       | the original tweet - I just feel that AI researchers bringing up
        | consciousness is a good signal to tune the conversation out.
       | 
       | Bonus points if psychedelics are somehow brought up.
        
       | danaris wrote:
       | To be "conscious" in the sense that we generally understand it,
       | any AI would need, at minimum, two things that are not commonly
       | part of it.
       | 
       | First, it needs to be _continuously active and taking data
       | input_.
       | 
       | Second, and closely related, it needs to be _continuously
       | learning_.
       | 
       | The neural nets we use today, in the main, are trained in one big
       | lump, then fed discrete chunks of data to process. The neural
       | nets themselves exist simply as static data on a disk somewhere.
       | Some, I believe, have multiple training stages, but that's not at
       | all the same thing as true _continuity_.
       | 
       | I'm sure there are other aspects to being conscious, but I
       | suspect that some of them, at least, are emergent behaviours, and
       | I further suspect that they are mostly or all dependent upon
       | these two.
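        | 
        | To make that contrast concrete, here is a minimal sketch of the
        | two modes (assuming PyTorch; the stream() feed, shapes, and
        | training details are made-up illustrations, not any real
        | system's API). Deployed nets today run like the first half;
        | "continuous" operation would look more like the second:
        | 
        |     import torch
        |     import torch.nn as nn
        | 
        |     model = nn.Linear(16, 4)  # stand-in for any trained net
        | 
        |     # Typical deployment: train once, freeze, then only ever
        |     # read the weights at inference time.
        |     model.eval()
        |     with torch.no_grad():
        |         answer = model(torch.randn(1, 16))  # chunk in, answer out
        | 
        |     def stream():  # hypothetical stand-in for an endless feed
        |         while True:
        |             x = torch.randn(1, 16)
        |             yield x, torch.tanh(x[:, :4])  # observation, feedback
        | 
        |     # "Continuously active, continuously learning": an endless
        |     # loop where every observation also updates the weights.
        |     model.train()
        |     opt = torch.optim.SGD(model.parameters(), lr=1e-3)
        |     for observation, feedback in stream():
        |         loss = nn.functional.mse_loss(model(observation), feedback)
        |         opt.zero_grad()
        |         loss.backward()
        |         opt.step()
        | 
        | Nothing about the second loop makes it conscious, of course; it
        | just satisfies the two conditions above, which the first mode
        | plainly does not.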
        
       | user3939382 wrote:
       | It's not just PR it's entire companies. I had a job interview
       | with a guy who wanted me to do sales for his ML company and was
       | bragging he had "AI" to predict who was going to win the Academy
        | Awards. He had convinced someone with deep pockets that it was
        | going to work. If you go look at tech jobs on LinkedIn you see
       | countless new companies with similar mud foundations that are
       | somehow raising capital.
        
         | bonoboTP wrote:
         | I believe the good experts are already distancing themselves
         | from the AI term. It will backfire and will go out of fashion
         | once again. There are important tools and skills in this space,
         | but "AI" has been used more for deception than for clarity.
        
       | belval wrote:
        | This is part of Microsoft and OpenAI's marketing/branding
        | strategy. Similar wording was used during the acquisition, when
        | OpenAI used "pre-AGI" in their press release:
       | 
       | > Instead, we intend to license some of our pre-AGI technologies,
       | with Microsoft becoming our preferred partner for commercializing
        | them.
       | 
        | It's mostly arguing about semantics, which is fine and common in
        | research circles. Sam Altman is pretty out of line with his
        | comment saying LeCun lacks vision because he doesn't adhere to
        | their hype-based (my opinion) wording. Aside from that it's just
        | business as usual; no need to stop every time academics argue.
       | 
       | https://openai.com/blog/microsoft/?utm_source=angellist
        
       | xiphias2 wrote:
        | AI researchers with huge salaries at huge companies are
        | incentivized to hype what tasks machine learning can do while
        | underestimating how much centralized power it gives to the
        | companies that have the infrastructure to train huge NNs.
       | 
       | It doesn't matter whether AI is conscious or not, only whether
       | it's centralizing or decentralizing power as it gets more
       | powerful than human thinking (even if it's not conscious).
        
       | bonoboTP wrote:
        | The whole discussion is pure fluff and a Twitter boxing match.
        | You'll do yourself a favor by keeping all this noise out and
        | concentrating on actually valuable books and writings.
       | 
       | Any doofus and their cat can have an opinion on whether machines
       | are conscious. We've been having this debate since Turing and
       | even earlier.
       | 
        | Also, any time a Twitter storm comes up around AI, you will
        | predictably have certain blocs forming and flinging excrement
        | at each other over various latent political disagreements.
       | 
       | For Sutskever, it's a way to get into the news cycle, to get lots
       | of engagement. Do you want to reward these? It's like Musk
       | tweets. You can probably have more "impact" with a well optimized
       | two-line off-hand tweet than with an actual book where you
       | explain some novel idea.
        
         | visarga wrote:
         | > Any doofus and their cat can have an opinion on whether
         | machines are conscious.
         | 
          | Please try to read the source a little before commenting.
         | The originator of this opinion is Ilya Sutskever, co-founder at
         | OpenAI and cited 269k times. He's one of the top people in the
         | field. https://twitter.com/ilyasut/status/1491554478243258368
         | 
          | I take Ilya's tweet more as a musing, an invitation to think
          | "what if", rattling the box to get interesting reactions.
         | 
         | In my opinion he's not necessarily right or wrong. Today's
         | large neural networks might be conscious if they didn't lack
         | some special equipment - a body, senses and action organs, and
         | a goal. They need to be able to do causal interventions in the
         | environment, not just reply to simple text inputs. I think
         | embodiment is not out of reach.
         | 
         | Look at Yann LeCun's strong reply:
         | 
          | > Nope. Not even true for small values of "slightly
         | conscious" and large values of "large neural nets". I think you
         | would need a particular kind of macro-architecture that none of
         | the current networks possess.
         | 
         | https://twitter.com/ylecun/status/1492604977260412928
         | 
         | The neural nets need the 4Es of cognition: embodied, embedded,
         | enacted and extended.
         | 
          | > The four E's of 4E cognition initialize its central claim:
          | cognition does not occur exclusively inside the head, but is
          | variously embodied, embedded, enacted, or extended by way of
          | extracranial processes and structures... they constitute a
          | form of dynamic coupling, where the brain-body-world
          | interaction links the three parts into an autonomous,
          | self-regulating system.
         | 
         | (MJ Rowlands, The New Science of the Mind: From Extended Mind
         | to Embodied Phenomenology)
        
         | [deleted]
        
       | burtonator wrote:
        
       | theferalrobot wrote:
       | I feel like everyone taking a hardline stance on this is being
        | disingenuous - consciousness as it is used in pop culture is a
        | largely non-scientific (and, in my opinion, useless) term.
       | 
        | If you claim 'consciousness' is just an emergent phenomenon of
        | complexity (something I happen to agree with) then sure, neural
        | nets are potentially slightly conscious, but that isn't how most
        | people view consciousness, unfortunately.
       | 
       | Most people view 'consciousness' as some 'pie in the sky'
       | component of biological life that has yet to be discovered by
       | science, but this line of inquiry is completely outside the realm
       | of useful dialog, so it seems pointless to debate such things.
       | 
        | These are the two general views of consciousness; the first at
        | least provides a useful framework for discussion, but the two
        | camps will always vehemently disagree with each other.
       | 
       | > "The question of whether a computer can think is no more
       | interesting than the question of whether a submarine can swim." -
       | Edsger Dijkstra
        
         | fsckboy wrote:
         | "I think consciousness is just emergent from complexity, so
         | what I have to say is valid, but people who suspect that's not
         | the full story, well that's pointless to debate, they should
         | use my framework for discussion"
         | 
         | sheesh. weak.
        
           | jjcon wrote:
           | It is pointless to debate with someone that invokes ideas
            | outside the realm of scientific inquiry... that is
            | definitionally true, isn't it?
        
             | vba616 wrote:
             | If I claim "no advanced automobile will ever develop the
             | ability to run like a horse as an emergent phenomenon"
             | 
             | Does that mean that I regard running as outside the realm
             | of scientific inquiry?
        
           | theferalrobot wrote:
           | >but people who suspect that's not the full story, well
           | that's pointless to debate, they should use my framework for
           | discussion
           | 
           | You can believe whatever you want, use whatever framework you
           | want (religion, spirituality, science etc). I'm just pointing
           | out that it is pointless to debate between the two because
           | they fundamentally disagree about how to inquire about the
           | world and answer questions like this. Everyone in this debate
           | is talking past each other without acknowledging that they
           | are starting from two very different positions and sets of
           | definitions.
        
             | fsckboy wrote:
             | it sounds like you put yourself in a class of people who
             | are perfectly rational, and therefore anything that you
             | can't think of doesn't exist, and anybody who thinks about
             | those things is a mystic.
             | 
             | you are making a mistake like physicists who believed "God
             | does not play dice with the world" at the dawn of quantum
             | mechanics or "time is a constant, not the speed of light"
             | at the dawn of relativity.
             | 
             | You have no idea where consciousness comes from, stop
             | assuming you do, it's poor science.
             | 
             | (For the record, I'm sure the integral of my history of
             | atheism is strictly greater than yours, mentioning since
             | that seems to be the subtext of your argument.)
        
         | igorkraw wrote:
          | I meant to reply to this comment, since IIT is the main
          | coherent treatment I've seen, plus my thoughts about why
          | people have this pie-in-the-sky tendency:
         | 
         | https://news.ycombinator.com/item?id=30409369
        
         | vba616 wrote:
         | I don't get the (perennial) dichotomy.
         | 
         | It seems to me the question of whether a submarine can swim is
         | well-formed and relevant.
         | 
         | I feel confident that no advanced propeller driven craft will
         | ever develop flippers and fish-like swimming as an emergent
         | phenomenon.
         | 
         | I also feel confident that an artificial device that _does_
         | swim like a fish is entirely within the realm of engineering,
         | let alone science.
         | 
         | It has never made any sense to me that, by analogy, there is a
         | conflict between those two beliefs.
         | 
         | Economic forces may preclude fish-machines, but it might just
         | mean they will be delayed for a long time because they are
         | difficult.
        
         | [deleted]
        
       | DantesKite wrote:
       | The original tweet was very innocuous and seemed more like a
       | thought experiment than a proclamation.
       | 
       | Furthermore, nobody has come up with any conclusive evidence that
       | the statement is incorrect.
       | 
       | It's possible neural networks are slightly conscious, because we
       | fundamentally do not understand what consciousness entails.
       | 
       | If anybody can prove that statement is wrong, Nobel Prize to
       | them.
        
       ___________________________________________________________________
       (page generated 2022-02-20 23:00 UTC)