[HN Gopher] Why philosophers should care about computational com...
       ___________________________________________________________________
        
       Why philosophers should care about computational complexity (2011)
        
       Author : lordleft
       Score  : 101 points
       Date   : 2021-11-16 16:31 UTC (6 hours ago)
        
 (HTM) web link (arxiv.org)
 (TXT) w3m dump (arxiv.org)
        
       | dang wrote:
       | Past threads:
       | 
       |  _Why Philosophers Should Care About Computational Complexity
       | (2011) [pdf]_ - https://news.ycombinator.com/item?id=17573142 -
       | July 2018 (22 comments)
       | 
       |  _Why Philosophers Should Care About Computational Complexity
       | (2011) [pdf]_ - https://news.ycombinator.com/item?id=11913825 -
       | June 2016 (54 comments)
       | 
       |  _Why Philosophers Should Care About Computational Complexity
       | [pdf]_ - https://news.ycombinator.com/item?id=9061744 - Feb 2015
       | (43 comments)
       | 
       |  _Why Philosophers Should Care About Computational Complexity_ -
       | https://news.ycombinator.com/item?id=2897277 - Aug 2011 (10
       | comments)
       | 
       |  _Why Philosophers Should Care About Computational Complexity_ -
       | https://news.ycombinator.com/item?id=2861825 - Aug 2011 (36
       | comments)
        
       | photochemsyn wrote:
       | Question: "How many computable numbers could a computer compute
       | if a computer could compute all computable numbers?"
       | 
       | Woodchucks are curious about this.
        
         | sumtechguy wrote:
         | You owe the oracle 32GB of RAM. His game rig is low on memory
         | and hungry.
        
         | jazzyjackson wrote:
          | I wanted to incorporate this question into an Arduino lesson
          | plan based around key breaking, basically demonstrating that
          | 10,000 combinations might seem like a lot when you're doing it
          | by hand, but a computer can brute force it in no time. So how
          | big does a number have to be before it starts taking serious
          | time for a computer to count to it?
         | 
         | That's when I learned compilers will look at your for loops and
         | just predict the result of incrementing a number a million
         | times and skip to the end condition. Was quite a shock to find
         | out the program I write is not the program that runs. I think I
         | added something simple to the loop, like flipping a bit on a
         | digital output to force the loop to actually run.
        
           | kamray23 wrote:
           | -O0 turns off optimizations on most reasonable C compilers.
        
           | lordnacho wrote:
           | Stick random() in there somewhere for each iteration, print
           | it out? Then the optimizer won't be able to just jump to the
           | end.
        
             | jazzyjackson wrote:
             | Actually yes I think that was what we went with, instead of
             | counting from 0 to MAX_INTEGER we generated random numbers
             | that many times, brute forced integer "keys" just fine.
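              | 
              | Roughly what the two loops look like, as a plain C sketch
              | (not the actual lesson code; exact timings will vary):
              | 
              |     #include <stdio.h>
              |     #include <stdlib.h>
              |     #include <time.h>
              | 
              |     int main(void) {
              |         const long N = 100000000L;
              | 
              |         /* No observable side effect: an optimizer may
              |            fold this whole loop into "count = N". */
              |         long count = 0;
              |         clock_t t0 = clock();
              |         for (long i = 0; i < N; i++)
              |             count++;
              |         double s1 = (double)(clock() - t0) / CLOCKS_PER_SEC;
              |         printf("count=%ld in %.3fs\n", count, s1);
              | 
              |         /* Each pass draws a random "key" and feeds it
              |            into the output, so the loop must really run. */
              |         unsigned long sink = 0;
              |         t0 = clock();
              |         for (long i = 0; i < N; i++)
              |             sink ^= (unsigned long)rand();
              |         double s2 = (double)(clock() - t0) / CLOCKS_PER_SEC;
              |         printf("sink=%lu in %.3fs\n", sink, s2);
              |         return 0;
              |     }
              | 
              | Built with -O0 (as mentioned above), even the first loop
              | runs in full; at -O2 it typically collapses.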
        
       | beepbooptheory wrote:
       | This is only redeeming to me really because generally its the
       | philosophers that are presumptuous enough to tell other fields
       | what they should do or think. Nice to see the tables turned on
       | them!
        
         | at_a_remove wrote:
         | Goodness yes. When I was in physics, there were a lot of these
         | philosophers who had some kind of real _problem_ with (special)
         | relativity, like it had swerved and driven over their dog on
          | purpose. I'm not talking about the spackling of idiots who
         | were under the impression that Michelson-Morley was performed
         | precisely once, but folks who just found it antithetical to
         | their personal outlooks, aside from the space opera types whose
         | dreams are cruelly crushed underfoot by _c_ (no exotic matter
         | has yet called Lazarus forth from his tomb). Normally, one
          | could dispel them with some holy water and a copy of Dr.
          | Will's _Was Einstein Right?_ but some had a tenacity to them
          | which had long transformed from a virtue to a vice.
        
           | [deleted]
        
           | ChrisSD wrote:
           | It's funny because of the number of physicists who loudly
           | proclaim philosophy is either dead or at least useless (while
           | also extolling the virtues of their own philosophy of
           | science, with bonus points for mangling Popper).
        
           | quantum_mcts wrote:
           | Also physicist. My beef is more with philosophy-of-science
           | folks in particular. They think that we, physicists, really
           | need them and their insights about how to do physics
           | properly. For example they really like this meme:
           | 
           | reddit.com/r/badphilosophy/comments/gjz24v
           | 
           | Which is totally dishonest. (I tried to counter that with
           | this https://imgur.com/a/zwDhfxJ , but that's not memey
           | enough.)
        
             | bangkoksbest wrote:
             | It's not a coincidence that the left side (more positive to
             | philosophy) and Born (whose quotes are more neutral) are
             | also continental European, while the three critical
             | examples (can add in Krauss, Hawking, and many others) come
             | from Anglo-Americans. There's too often a pettiness in the
             | latter (to an extent, also in your post) in which what look
             | like fairly limited and contemporaneous gripes about
              | academics in another field are given the trappings of some
             | historically unbound and universal insight.
             | 
             | There's no inherent tension between science and philosophy;
              | even Mermin's "shut up and calculate" is a philosophical
             | position (instrumentalism), to say nothing of the
             | philosophical, scientific and mathematical contributions of
             | e.g. Poincare, Helmholtz, Mach, Duhem, Peirce, Leibniz, Von
             | Neumann, Jaynes etc. Even modern Anglo-American physicists
             | contribute to and draw a lot from philosophy, e.g. Wheeler
              | and Philip Anderson.
        
         | bakuninsbart wrote:
          | That's not my experience at all; rather, I see a lot of
          | philosophers giving _good_ pointers where their field borders
          | on others, and then being shot down or ignored. I'd actually
         | be interested in an example of this, as I don't think I've
         | encountered it so far, although I'm admittedly not that engaged
         | either.
         | 
         | By far the worst offenders of presumptuousness must be old
         | physicists though, followed by old computer scientists.
          | Thinking of Michio Kaku talking about philosophy still makes
          | me physically cringe.
        
           | btilly wrote:
           | Can you provide examples of your "good pointers"?
           | 
           | My experience is that the feedback provided by philosophers
           | sounds good to the philosophers, but is actually useless to
           | practitioners. And in technical subjects, like math, computer
           | science and physics, it is important to develop a really good
           | BS filter.
           | 
           | That said, I am also sympathetic to
           | http://www.paulgraham.com/philosophy.html, so must admit some
           | hostility to the current practice of philosophy.
        
             | voidhorse wrote:
             | I think there's an inherent tension between what
             | philosophers are trying to do and what practitioners in
             | specific fields are trying to do that leads to the bad
             | blood.
             | 
             | If you're a philosopher, you're generally trying to look at
             | things from a general enough perspective that you might
             | argue for a radical shift in the way we do things and call
             | out the assumptions that are otherwise taken as axioms of
             | the field--the water we swim around in.
             | 
             | If you're a practitioner, you're focused on swimming. Core
             | axioms and prevailing theory are your presuppositions--
             | you're just focused on getting things done within the
             | confines of the prevailing theory/framework, perhaps
             | nudging it every so often in particular directions based on
             | new discoveries--its quite rare that a working scientist
             | sparks an actual theoretic revision and becomes the next
             | Einstein (see thomas Kuhn).
             | 
             | So yes, philosophers are totally useless when it comes to
             | getting practical stuff done because that's generally not
             | the space they are attempting to help with or illuminate.
        
         | iworshipfaangs2 wrote:
          | > generally it's the philosophers that are presumptuous enough
         | to tell other fields what they should do or think
         | 
          | I'm sorry, but in my experience this is not at all a one-way
          | street. It is very common for me to hear engineers and other
          | STEM-types complain about modern art or about the supposedly
          | obscurantist, sophistic style of various disciplines of the
          | liberal arts. However, these complaints rarely come from
          | engineers who take an active interest in the fields they
          | criticize.
         | 
         | Admittedly I'm mainly speaking about general people I work with
         | or whom I (sadly often) encounter on the internet. But there
         | are some eminent names who are just as guilty. In addition to
         | Kaku, as bakuninsbart mentioned, I could also bring up, off the
         | top of my head, Stephen "philosophy is dead" Hawking and
         | Richard "Shakespeare would have been better if he were
         | educated" Dawkins.
         | 
         | I think I could come up with many more examples if I started
         | looking. On the other hand, we fortunately also have people
          | like Murray Gell-Mann, who has gotten us to quote from the most
         | experimental book in all of English-language literature every
         | time we talk about elementary sub-atomic particles.
        
           | a9h74j wrote:
            | Based upon minimal but seemingly sufficient observation, I'd
            | add Krauss to the list of philistines.
        
           | normac2 wrote:
           | > supposedly obscurantist, sophistic style of various
           | disciplines of the liberal arts.
           | 
           | Would you say there is absolutely no kernel of truth to this?
           | Check out, say, the abstract to this paper [1]. Is there
           | nothing obscurantist about it? If you acknowledge that it's
           | obscurantist to some degree, would you say that it's rare and
           | I just cherry-picked a bad one?
           | 
           | I'm a STEM person, and I have trouble understanding why some
           | people find this stuff to be just reasonable academic work
           | with nothing dysfunctional, pedantic or sophistic about the
           | writing style.
           | 
            | It just seems so extremely obvious to me that it makes me
           | wonder if the people into this stuff simply have nervous
           | systems that are wired a bit differently, and I'm falling
           | prey to the typical mind fallacy. It's hard to believe that
           | if I studied this stuff deeply enough and with an open mind,
           | it would no longer seem obscure.
           | 
           | [1] https://scholarworks.umass.edu/dissertations/AAI3193887/
        
             | voidhorse wrote:
             | While one could argue that the abstract you linked is not
             | well written, I do not think it's obscurantist by any
             | means. It's a dissertation, so it's not surprising they are
             | using the jargon of the field and citing important works.
             | 
             | All academic literature is specialist literature. If you
             | aren't trained in the field you likely won't understand it.
             | It's totally reasonable to me that a STEM person would have
             | no idea what this abstract is saying just as a Humanities
             | person probably couldn't make heads or tails of the
             | abstract of a dissertation on category theory or on a
             | particular branch of computer science.
             | 
             | I find it funny that STEM folks always go after humanities
              | academics for being obtuse when it's just a matter of the
             | pot calling the kettle black--dense STEM research and
             | theory uses language that'd be considered equally obtuse to
             | the untrained reader.
        
               | normac2 wrote:
               | To me, it goes beyond being hard to read, and I take it
               | as obscurantist in the strictest sense of someone going
               | out of their way to be hard to understand.
               | 
               | I have a theory that most STEM people simply don't think
               | like most humanities people, literally at a neurological
               | level. (I edited my post to add some thoughts around
               | that, possibly after you replied.)
               | 
               | STEM work rarely comes off that way to me. The only time
               | it looks to me like the person is going out of their way
                | to be obtuse and technical is some higher math stuff
               | (which is a known thing and acknowledged even by some
               | mathematicians). This includes the stuff from entirely
               | different parts of STEM that I don't understand at all.
        
               | voidhorse wrote:
               | That's fair. I would agree there is a certain "big words
               | == more intellectual == smarter" or "more difficult ==
               | smarter" fallacy that arises somewhat frequently in
               | contemporary humanities papers.
               | 
               | I think part of it might originate from the fact that the
                | abstractions used for talking about things in the
               | humanities aren't fixed as well as they are in science.
               | Take the abstract in question for example--the writer
               | uses the palimpsest as a sort of visual analogue and
               | abstraction to try to describe interactions and
               | relationships between texts/narratives--while it's not an
               | absurd metaphor, it's difficult to grok, because there is
               | no real standardized metaphor for describing this set of
               | relationships. You could argue the object of study isn't
               | as well defined as it is in the sciences where we have
               | fairly standardized abstractions like "waveform" etc.
                | that make it a lot easier to talk about things clearly.
        
       | mensetmanusman wrote:
        | The Chinese Room has been on my mind since I recently learned
        | about it:
       | 
       | https://en.m.wikipedia.org/wiki/Chinese_room
       | 
       | Highly recommend trying to grasp the argument, because in a sense
       | it connects complexity to 'time to compute'. If it takes a room
       | of robots following simple instructions a billion years to act as
       | an AI that reads Chinese, are those robots a collective
       | intelligence? Can it be scaled up to consciousness?
        
         | skulk wrote:
         | Why stop at a computer, why not extend the argument to a full
         | simulation of a human brain? What if I sat in my room for 3
         | million years simulating 3 seconds of a human body including
         | the brain? Assuming that my simulation is faithful, who's to
         | say that my simulation doesn't have some form of consciousness
         | or subjective experience?
        
           | AnIdiotOnTheNet wrote:
           | You can go even deeper: given that your simulation follows a
            | set of known rules from an initial state, why does actually
           | performing those steps even matter?
           | 
           | Is the consciousness given existence because you perform
           | calculations on paper, or does it exist already within the
           | fabric of reality and you're merely exposing it?
        
             | 3np wrote:
             | I think a common mistake is to conflate "consciousness"
             | with "intelligence" or at least "thought".
             | 
             | Just because we don't tend to recognize one without the
             | other doesn't mean they necessarily have a causative
             | relationship.
        
             | tshaddox wrote:
             | I'm not sure what you mean. Just listing the steps of an
             | algorithm is quite different than executing the steps of an
             | algorithm. As an obvious example, with an algorithm for
             | summing two numbers together, the difference between
             | listing the steps and executing them is that when you
             | execute them you are provided with the sum.
             | 
              | I don't see any sense in which these _aren't_ different,
             | unless perhaps you're introducing some separate
             | cosmological "Boltzmann brain"-style argument that all
             | possible finite results of all halting algorithms will
             | arise or have arisen due to random fluctuations. But in
             | that case you've got a whole new set of assumptions to
             | address.
        
               | skulk wrote:
               | The idea of a fundamental difference between the result
               | of "performed" and "un-performed" computation always
               | leads me to imagining the null universe, where nothing
               | gets computed ever but all math "exists" as we'd expect.
                | It raises the question: why is the universe non-null?
               | Allowing the mere existence of the possibility of
               | computation to answer that question is very comforting.
               | But yes, it requires approximately as much faith as any
               | monotheistic religion.
        
           | Der_Einzige wrote:
            | This is why I find it so strange that philosophers seem to by
            | and large accept "Cogito Ergo Sum" or "I think therefore I
            | am" as a valid proof of ontology. What if the appearance of
            | thought is not actually thought?
            | 
            | Did philosophers not watch Blade Runner? Sure, Descartes
            | didn't - but contemporaries sure as shit do!
        
             | jean_tta wrote:
             | This is essentially the philosophical zombie thought
             | experiment:
             | https://en.wikipedia.org/wiki/Philosophical_zombie
        
             | cuspycode wrote:
             | But how do you even distinguish appearance of thought from
             | actual thought, objectively? Of course you can always argue
             | that actual thought is something that is subjectively
             | experienced, but that leads to solipsism, doesn't it?
        
               | Der_Einzige wrote:
                | Solipsism is what you get from accepting the Cogito: "I
                | know I exist for sure, and only my own existence can be
                | verified by the cogito."
               | 
               | To answer your question, I claim that you can't. Radical
               | skepticism ("we live in the universe of the evil demon"),
               | which is what I advocate for, means that you can't even
               | claim that you exist. See the works of Max Stirner for
               | further elaboration of the implications of this for
               | Philosophy.
        
             | shadowgovt wrote:
             | Not to downplay Descartes (his philosophy, specifically the
             | process he undertakes, is worth a study), but something
             | that the pop summaries of his writing really gloss over is
             | that his entire premise hinges on something he takes as
             | assumed true that need not be, and if it isn't, the rest
             | goes straight off the rails.
             | 
             | Paraphrasing from memory: he pins "cogito ergo sum" on the
             | assumption that objective knowledge is possible. That
             | assumption is built atop imagining an alternative: an evil
             | demon has full control over his senses and feeds him
             | whatever it wants to (essentially, the 'Matrix
             | hypothesis'). His solution to this concern? "A loving God
             | wouldn't create a universe that banal and meaningless."
             | 
             | It's only a proof if you accept the postulate that
             | objective reality is correlated with experience.
        
               | [deleted]
        
         | karpierz wrote:
         | Where does computational complexity factor into the Chinese
         | room?
         | 
         | Whether the Chinese room responds quickly or slowly doesn't
         | really factor into whether the room understands Chinese.
        
           | Strilanc wrote:
           | The linked article discusses this.
        
         | kordlessagain wrote:
         | > To all of the questions that the person asks, it makes
         | appropriate responses, such that any Chinese speaker would be
         | convinced that they are talking to another Chinese-speaking
         | human being.
         | 
         | This is a fallacy, regardless of the language. Some comms
         | between conscious entities must always contain metaphoric
         | language, including signaling through body language. A shrug is
         | a metaphor for uncertainty. This is why the premise of the
         | Chinese Room is true (we can't build a mind in a box) but also
         | why conscious machines will necessarily have to have bodies.
         | 
         | Anything that is conscious and can hold a (real) dialog with a
          | human will have emotional output tied to the body and its
         | collective experience.
        
           | michaelmrose wrote:
           | Why couldn't you simulate the body too?
        
           | tshaddox wrote:
           | > Some comms between conscious entities must always contain
           | metaphoric language, including signaling through body
           | language.
           | 
           | What do you mean? Humans can certainly communicate and
           | maintain meaningful relationships solely through writing.
        
         | achr2 wrote:
         | It seems to me that the argument is kind of silly. The "Chinese
         | Room" has failed to account for the actual processes of the
         | system. The 'spoken response' is only a small part of the
         | 'program' of a mind; the rest is the internal processes that
         | constitute consciousness that can infer, compare, reinforce,
         | and associate. Consciousness is the whole of the system, not a
         | functional IO.
        
           | new_guy wrote:
            | It's basically a red herring (a fallacious argument); the
            | robots/workers etc. are just homunculi, and the entire thing
            | falls apart without them.
        
         | arketyp wrote:
        | Yeah, this is related to Stephen Wolfram's principle of
        | computational equivalence and his re-take on "the weather has a
        | mind of its own". Turing machines occur everywhere in nature and
        | emerge from the simplest rules, Wolfram demonstrates, so maybe
        | consciousness is everywhere too. It's basically a question of
        | where to put the scope of computing processes. Searle rejects
        | it, saying it's nonsensical to contend that a concrete wall
        | could be conscious. I don't know, who knows.
        
           | np- wrote:
           | It doesn't seem particularly nonsensical to contend a
           | concrete wall has some form of consciousness. Concrete walls
           | don't spring up out of nowhere, they're built by humans who
           | have a need to separate things (either for good or for bad).
           | A single concrete wall might not seem like much, but in the
           | greater scope of things, the rising and falling of all
           | concrete walls seems to represent some sort of plane of
           | humanity and its consciousness. Even think of the extremely
           | complicated emotions evoked by different types of walls: i.e.
           | the Berlin Wall, the U.S.-Mexico Wall, the Great Wall, etc. I
           | know I'm probably reading way too much into this specific
           | example, but I think this could generally apply to a lot of
           | things.
        
         | dragonwriter wrote:
         | > Highly recommend trying to grasp the argument
         | 
         | It's not really, IMO, all that valuable.
         | 
         | > because in a sense it connects complexity to 'time to
         | compute'.
         | 
          | No, it doesn't. It basically just rejects _by definition_ the
          | idea that a programmed response system which exhibits the
          | behavior of an actor that understands material actually
          | understands it (and then presents an illustration which either
          | usefully supports that definition, if you accept it, or shows
          | the definition is wrong, if you don't), and from that rejects
          | the idea that brains (which produce minds, which have actual
          | understanding) are equivalent to programmed response systems.
        
         | [deleted]
        
         | Yajirobe wrote:
         | > If it takes a room of robots following simple instructions a
         | billion years to act as an AI that reads Chinese, are those
         | robots a collective intelligence? Can it be scaled up to
         | consciousness?
         | 
          | No, per the article you linked to. The Chinese room argues
         | AGAINST the existence of consciousness/intelligence arising
         | from following symbolic instructions.
         | 
         | Here is a quote I like by John Searle:
         | 
         | > Computational models of consciousness are not sufficient by
         | themselves for consciousness. The computational model for
         | consciousness stands to consciousness in the same way the
         | computational model of anything stands to the domain being
         | modelled. Nobody supposes that the computational model of
         | rainstorms in London will leave us all wet. But they make the
         | mistake of supposing that the computational model of
         | consciousness is somehow conscious. It is the same mistake in
         | both cases.
        
           | hackinthebochs wrote:
            | Searle's argument misses the mark. A simulated rainstorm
            | won't leave us wet, but a simulation of a disordered state
            | is (or contains) a disordered state. If consciousness is a
           | relational property of a system, then a simulation of such a
           | relational property will instantiate that relational property
           | without qualification. The Chinese room argument can't
           | address this conception of consciousness.
        
             | mjburgess wrote:
             | The argument is against this view of consciousness.
             | 
             | More intuitively: our words are things which do things in
             | the world. When we acquire them we do so by being embedded
             | in the world. When we use them we use them to change the
             | world. (Largely).
             | 
              | If a child in a cupboard reading a textbook out loud could
              | fool a chemist into thinking the child understood
              | chemistry, it doesn't mean they do. In the end, you need to
              | turn on that bunsen burner.
              | 
              | The Chinese room needs to be taken super-literally: does
              | the person just doing a "hash-table lookup" on "the right
              | answers to pre-posed questions" _actually understand_?
              | 
              | To me it's incredibly obvious that they do not. This does
              | not establish that, e.g., cognition does not involve
              | computational hashtable-like processes. I think it does
              | establish that proponents of a naive computationalism have
              | an incredibly big challenge.
             | 
             | The challenge is to specify the environment
             | computationally, since that is the only means by which the
             | computer itself can be sufficiently embedded and actually
             | _understand_ anything.
             | 
              | The issue here is that if "computer" isn't just a trivial
              | property, it has to include, e.g., measurability,
              | determinism, etc. -- properties that reality lacks. Reality
              | isn't a computer.
              | 
              | Starting with the Chinese room one /can/ supplement
              | sufficiently until we reach Searle's conclusion.
        
               | hackinthebochs wrote:
               | > does the person just doing a "hash-table lookup"
               | 
               | But this is Searle's sleight-of-hand. The question isn't
               | whether the person in the Chinese room understands
               | Chinese, but whether the system as a whole understands
               | Chinese. The man is analogous to the CPU in a computer,
               | but the CPU is embedded in a larger system and it is the
               | entire system that performs operations of the computer.
               | You can't focus on one component to the exclusion of the
               | rest of the system and expect to derive valid conclusions
               | about the system.
               | 
               | >The challenge is to specify the environment
               | computationally
               | 
               | I agree, but this presents no _in principle_ challenge to
               | a computational system reaching such a detailed
               | description of the environment.
               | 
                | >The issue here is that if "computer" isn't just a
                | trivial property, it has to include, e.g., measurability,
                | determinism, etc. -- properties that reality lacks.
                | Reality isn't a computer.
               | 
               | I don't follow. The issues of measurability and
               | determinism in QM (assuming that's what you are referring
               | to) aren't necessarily problems at the scale of physical
               | systems. Computational systems can certainly make
               | distinctions to within some margin of error, which is
               | sufficient as a substrate for gathering information about
               | the external environment.
        
               | mjburgess wrote:
               | No, there's no sleight of hand. It seems you just agree
                | that the man doesn't understand Chinese.
               | 
               | > at the scale of physical systems.
               | 
               | Alas, they are. Even classical mechanics isn't
               | deterministic. Most systems are chaotic and require
               | infinite precision in measurement to be deterministic.
               | Since QM precludes this, most chaotic systems literally
               | can only be predicted over short horizons.
               | 
                | E.g., there is a moon in our solar system, I believe,
                | with a chaotic orbit. At maximum possible measurement
                | fidelity we can predict its motion about 20 years out.
               | 
               | Consider that all the particles of our body are just
               | these little chaotic moons, and their aggregate chaos
                | dramatically diminishes this 20-year horizon.
               | 
               | There is, quite literally, not enough information present
               | in this moment to predict the next moment. Most systems
               | are sufficiently chaotic in a sufficiently large number
               | of parameters to preclude this. This reaches all the way
               | up to classical scales.
               | 
                | Reality isn't a computer.
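                | 
                | A minimal numerical sketch of that sensitivity (logistic
                | map, a textbook chaotic system; the 1e-10 offset and the
                | step counts are illustrative, not a model of any real
                | moon):
                | 
                |     #include <stdio.h>
                |     #include <math.h>
                | 
                |     int main(void) {
                |         /* Two starting points differing by 1e-10, far
                |            finer than any physical measurement. */
                |         double a = 0.4, b = 0.4 + 1e-10;
                |         for (int step = 0; step <= 50; step++) {
                |             if (step % 10 == 0)
                |                 printf("step %2d  diff %.2e\n",
                |                        step, fabs(a - b));
                |             a = 4.0 * a * (1.0 - a);  /* x -> 4x(1-x) */
                |             b = 4.0 * b * (1.0 - b);
                |         }
                |         return 0;
                |     }
                | 
                | By around step 40 the two trajectories disagree
                | completely, so finite-precision knowledge of the present
                | only fixes the future over a short horizon.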
        
               | burkaman wrote:
               | Everyone agrees that the man doesn't understand Chinese.
               | The sleight of hand is that Searle compares a full
               | computer to a component of the Chinese Room, when he
               | should be comparing a computer to the room as a whole.
               | The man in this thought experiment is analogous to an
               | internal component of a computer, not the whole thing.
               | 
               | Put another way, Searle separates a "computer" from the
               | instructions it is running, but there is no such
               | separation. It's like separating a brain from the
               | electrical brain activity. It's true that a brain without
               | electrical activity cannot understand Chinese just like a
               | computer without a program cannot, but that's not a very
               | interesting observation.
        
               | mjburgess wrote:
                | A computer is just an implementation of a function (in
                | the mathematical sense), i.e., a finite set of pairs
                | (IN, OUT) with IN, OUT in {0,1}^N -- i.e., it's an
                | implementation of a map {010101, ...} -> {010101010,
                | 01010...}.
               | 
                | If we all agree that the function the man performs can be
                | set to be whatever you wish (i.e., any {input sentences}
                | -> {output sentences}) and we agree that the man doesn't
                | understand Chinese... then the question arises: what is
                | missing?
               | 
                | I don't think the "systems reply" actually deals with
                | this point -- it rather just "assumes it does" by
                | "assuming some system" to be specified.
                | 
                | If using a language isn't equivalent to a finite
                | computable function -- what is it?
               | 
               | The systems reply needs to include that the _reason_ a
               | reply is given is that the system is _caused_ to give the
               | reply by the relevant causal history  / environment /
               | experiences. Ie., that the system says "it's sunny"
               | _because_ it has been acquainted with the sun, and it
               | observes that the day is sunny.
               | 
               | This shows the "systems reply" prima facie fails: it is
               | no reply at all to "presume that the system can be so-
               | implemented so as to make the systems reply correct". You
               | actually need to show it can be so-implemented. No one
               | has done this.
               | 
                | There are lots of reasons to suppose it can't be done,
                | not least that most things aren't computable (i.e., via
                | non-determinism, chaos, and the like). Given the
                | environment is chaotic, it is a profoundly open question
                | whether _computers_ can be built to "respond to the right
                | causes", and computational systems may be incapable of
                | doing this.
                | 
                | If they cannot, then Searle is right. That man, and
                | whatever he may be a part of, will never understand
                | Chinese. It is insufficient to "look up the answers";
                | "proper causation" is required.
        
               | burkaman wrote:
               | What is missing is the man's ability to memorize the
               | function and carry it out without the book. If he could,
               | and the function truly produced a normal response to any
               | Chinese phrase in existence, then he would speak Chinese.
               | 
               | Edit: He would speak Chinese, but he wouldn't understand
               | it. What is missing in order to understand Chinese is an
               | additional program that translates Chinese to English or
               | pictures or some abstract structure independent of
               | language. Humans have this, but the function in this
               | thought experiment is a black box, so we don't know if it
               | uses an intermediate representation. Thanks to
               | colinmhayes for pointing this out.
               | 
               | Could he give a reasoned answer to "what is the weather"?
               | That depends on whether we include external stimuli as
               | part of the function input. If not, then neither a human
               | nor a computer could give a sensible answer beyond "I
               | don't know".
               | 
               | I see now that your issue is really with the function -
               | could a function ever exist that gives responses based on
               | history/environment/experience. My understanding is that
               | such a function is the premise of the thought experiment;
               | it's a hypothetical we accept in order to have this
               | discussion. Searle claims that even if such a function
               | exists, the computer still doesn't truly understand. But
               | if that's what we're asking, my answer would be yes, as
               | long as history/environment/experience are inputs as
               | well. Of course a computer locked away in a room only
               | receiving questions as input can never give a real answer
               | to "what is the weather", just like a human in that
               | situation couldn't. But if we expand our Chinese Room
               | test to include that type of question and also allow the
               | room or computer to take its environment as input, then
               | it can give answers caused by its environment.
               | 
               | > You actually need to show it can be so-implemented. No
               | one has done this.
               | 
               | I mean, fair enough. It's fine to say "I won't believe it
               | until I see it", but that pretty much ends the
               | discussion. If we want to talk about whether something
               | not yet achieved is possible, then we need to be willing
               | to imagine things that don't exist yet.
        
               | colinmhayes wrote:
               | > If he could, and the function truly produced a normal
               | response to any Chinese phrase in existence, then he
               | would speak Chinese
               | 
               | I don't necessarily disagree, but it's not so simple.
                | Just because the man speaks Chinese doesn't mean he
                | understands it. He could have figured out the proper
                | output for every possible input without knowing what any
                | of it means. If the next person asks in English "what did
                | you talk about with the last person?" what would he
                | answer, assuming he also speaks English? Really the
                | question comes down to whether the computer is able to
                | write the book by observing, I guess, but even then you
                | could conceive of a different book with instructions on
                | how to write the translation book.
        
               | burkaman wrote:
               | You're right, I will edit my comment. Thanks for this
               | framing.
        
               | tshaddox wrote:
               | The point is that the human in the Chinese room is just
               | the hardware, and no one really thinks that any computer
               | hardware on its own understands Chinese. It would
               | obviously be the entire computational system that
               | understands Chinese. The only reason this can even appear
               | to be confusing is that we often casually use "computer"
               | to refer both to an entire computational system (your
               | "implementation of a function") as well as a physical
               | piece of hardware (like the box on the floor with a Dell
               | sticker on it).
        
               | hackinthebochs wrote:
               | >I dont think the "systems reply" actually deals with
               | this point
               | 
               | To be clear, the point of the systems reply isn't to
               | demonstrate how a computational system can understand
               | language, it is to point out a loophole such that
               | computationalism avoids the main thrust of the Chinese
               | room.
               | 
               | >If using a language isnt equivalent to a finite
               | computable function -- what is it?
               | 
               | In the Chinese room, the man isn't an embodiment of the
               | computable function (algorithm). The man is simply the
               | computational engine of the algorithm. The embodied
               | algorithm includes the state of the "tape", the unbounded
               | stack of paper as "scratch space" used by the man in
               | carrying out the instructions of the algorithm. So it
               | remains an open question whether the finite computable
               | function that implements the language function must "use
               | language" in the relevant sense.
               | 
               | What reason is there to think that it does use language
               | in the relevant sense? For one, a semantic view of the
               | dynamics of the algorithm as it processes the symbols has
               | predictive power for the output of the system. Such
               | predictive power must be explained. If we assume that the
               | room can _in principle_ respond meaningfully and
               | substantively to any meaningful Chinese input, we can go
               | as far as to say the room is computing over the semantic
               | space of the Chinese language embedded in some
               | environment context encoded in its dictionary. This is
               | because, in the limit of substantive Chinese output for
               | all input, there is no way to duplicate the output
               | without duplicating the structure, in this case semantic
               | structure. The algorithm, in a very straightforward
               | sense, is using language to determine appropriate
               | responses.
        
               | mannykannot wrote:
               | I agree that one should take Searle's argument literally,
               | and I doubt there are many people on either side who
               | suppose that the room's operator understands either the
               | questions being posed to the room or the answers it
               | delivers. The dispute is whether there is anything
               | significant in this assumption - and, more specifically,
               | whether Searle is right in supposing that if Hard AI is
               | possible, then it would follow that the room's operator
               | _must_ understand the questions, their answers, and why
               | they are appropriate answers.
               | 
               | You may recognize that, by distinguishing between the
               | operator and the room, I am prefiguring the so-called
               | "systems reply", which is that the system as a whole is
               | demonstrating whatever level of understanding is manifest
               | here, and there is no assumption, in so supposing, that
               | this understanding is manifest in any single component,
               | _even if one of those components has the ability to
               | understand some things on its own._ By deliberate
                | construction, Searle is using the room's operator to
               | mechanistically implement an algorithm, a task that does
               | not require knowledge of the language in which the
               | questions are posed (in fact, he is using the language
               | barrier to _prevent_ the operator answering the questions
               | himself.) Searle has not presented an argument for the
               | operator needing to do anything more, and it is obvious
               | that people can be trained to perform tasks they do not
               | have any understanding of beyond the specific actions
               | involved.
               | 
               | Searle's attempt to refute this reply seems to show that
               | he cannot even conceive it fully, as that response is to
               | modify the scenario such that the operator memorizes the
               | rule book, as if where it is stored makes any difference.
               | 
               | Searle's stance here seems to me to be a version of the
               | homunculus fallacy, supposing that our minds are
                | _implemented by_ (as opposed to _being_) a conscious
               | entity.
        
               | mjburgess wrote:
               | I think it is significant that the room, even as a
               | system, does not _use_ language.
               | 
               | A hashtable lookup cannot be what it is to "act in the
               | world with words".
               | 
               | When I say, "do you like what I'm wearing?" -- regardless
               | of what has been written in any book, at any time, I do
               | not want _that_ reply.
               | 
               | I may want those _words_. But the reply I want is to use
               | _your_ judgement of taste and experiences of the world to
                | tell me _your opinion_. The words don't answer the
               | question if the _reason_ they are used is incorrect.
               | 
               | A hashtable lookup can, basically, never be a reason to
               | use words.
               | 
                | And hence the Chinese room reveals much, much more than
                | simply a circular point. And the systems reply says much,
                | much less than it needs to.
                | 
                | The systems reply needs to explain "by what system" the
                | system's use of language counts as a use -- "by what
                | system" words are born of understanding, not "mere
                | retrieval".
               | 
                | To do this, you need to specify the whole world,
                | experience, etc. as "computational systems". And thus the
                | systems reply simply fails against this example. It isn't
                | sufficient to say "maybe"; you have to say, "and this is
                | how!".
                | 
                | And worse, the world is clearly not a computer -- as any
                | non-trivial definition (e.g., a universal Turing machine)
                | cannot simulate properties the world has (non-
                | determinism, chaos, etc.).
        
               | mannykannot wrote:
               | In your first post in this thread you wrote "the Chinese
               | room needs to be taken super-literally" - a sentiment I
               | agreed with - but since then (starting immediately
               | afterwards, in fact) you seem to think that Searle is
               | restricting the capabilities of the room to those of a
               | hash table. I am pretty sure that a close reading of his
               | paper will fail to reveal any such restriction, and, in
               | fact, if it did, that would count _against_ his argument.
               | Similarly, nothing you say, here or elsewhere among these
               | replies, about the limitations of hash tables, supports
                | Searle's argument (or more general arguments against
               | Hard AI.)
               | 
               | Nor is Searle arguing that the Chinese room, if it could
               | be implemented, must use language (beyond the
               | capabilities that he does require of it, which is
               | answering questions from an ill-defined domain.) And as
               | far as I can tell, he does not require it to, as you
               | somewhat vaguely put it, "act in the world with words"
               | beyond this.
               | 
               | The systems reply does not need to do anything more than
               | show that Searle is making some unargued-for assumptions
               | about what Hard AI would imply - and Searle's response to
               | it so completely misses the point of that reply that he
               | is actually helping make it clear! (For more details, see
               | the second paragraph of my previous post; as far as I can
               | tell, no argument has been raised against the points I
               | made there.)
               | 
               | The limitations of Turing machines are similarly beside
               | the point. A computer, such as those involved in the
               | composition and delivery of this reply, is not just a
               | Turing machine (not even a universal one), even though it
               | is capable of implementing one (up to the physical limits
               | of memory). In particular, a computer (as opposed to a
               | Turing machine) can incorporate environmental inputs,
               | including physically-derived entropy, and can use these
               | in simulating physical processes.
               | 
               | You are evidently fond of the phrase "[Reality/the world]
               | is not a computer"; that may be so, but rather more
               | definitively, I think, one can say that the mind _is_ an
               | information processor.
               | 
               | I understand that it seems inconceivable to you that a
               | mind could be produced by a suitably-programmed computer
               | of any power, and I am aware that I can only offer hand-
               | wavy arguments for that proposition. I think you are
               | mistaken, however, to read Searle's Chinese Room argument
               | as being about, let alone supporting, the broader
               | intuitions you have expressed in your reply to me and to
               | others. Searle, who is, after all, a philosopher of
               | considerable stature, is well aware that you need
               | something more than just one's intuitions to make a
               | respectable argument; that alone would not pass peer
               | review.
        
               | 3np wrote:
                | > The Chinese room needs to be taken super-literally:
                | does the person just doing a "hash-table lookup" on "the
                | right answers to pre-posed questions" actually
                | understand?
                | 
                | That's not how it's phrased. The Chinese Room needs to
                | pass the Turing Test with arbitrary input of Chinese
                | characters. So a convincing Chinese ELIZA, basically.
               | 
               | > The issue here is that if "computer" isnt just a
               | trivial property it has to include, eg., measurability,
               | determinism, etc. Properties that reality lacks. Reality
               | isnt a computer.
               | 
               | Reality isn't a human brain either.
               | 
               | Or is it?
               | 
               | Either way I don't see how that is relevant.
        
               | burkaman wrote:
                | > does the person just doing a "hash-table lookup" on
                | "the right answers to pre-posed questions" actually
                | understand?
               | 
               | The person doesn't, but the (person + hash-table) system
               | does. This is not weird - presumably whoever wrote the
               | hash-table did understand Chinese. The whole point of
               | using a room for this thought experiment is that we're no
               | longer talking about a person, we're talking about a
               | "machine" consisting of a person, a hash-table, and some
               | kind of input/output system.
               | 
               | The hypothetical computer is the same. The CPU does not
               | understand Chinese, but the whole computer as a system
               | does, because part of that system is actual knowledge
               | that came from someone that understands Chinese. When
               | people ask if a computer is intelligent, they are not
               | asking about one particular component, they're asking
               | about the computer as a whole. Just like when you talk
               | about a human's knowledge and abilities, you don't ask
               | about particular sectors of their brain, you ask about
               | the person as a whole.
        
               | mensetmanusman wrote:
                | The person is frivolous in the Chinese room though;
                | would you say (robot + hash table) understands Chinese?
                | I.e. does GPT-3 understand English...
        
               | burkaman wrote:
               | GPT-3 isn't there yet, but yes, a robot with the book
               | from the Chinese Room experiment would form a system that
               | understands Chinese.
               | 
               | I agree that the person is frivolous, that's why it's
               | strange that Searle asks whether the person in his
               | thought experiment understands Chinese. That's
                | irrelevant; what matters is whether the room as a whole
               | does, and clearly it does.
        
               | [deleted]
        
               | colinmhayes wrote:
                | I'm not sure. Say the person has translation tables for
                | English and Chinese. If the second tester asked in
                | English "what did you talk about with the last person?"
                | would the man be able to answer? Clearly the man + hash
                | table speaks Chinese, but I don't think that's the same
                | as understanding it.
        
               | burkaman wrote:
               | That depends on the premise of this new thought
               | experiment. Are the program-books allowed to include side
               | effects of writing down records of interactions, and take
               | those records as input? If not, it's not a fair
               | comparison with a computer. If so, then yes I think the
               | room could be able to answer.
               | 
               | Why did I have to add something, if the room already
               | understood Chinese before? Because we're essentially
               | adding a second program now and trying to share
               | understanding between them. The room did understand
               | Chinese, but that understanding will not be accessible to
               | a new component unless we design with that in mind. An
               | analogy would be asking a person "which muscles did you
               | use to digest that food?" Clearly some part of the human-
               | system knows this, because it activated the right muscles
               | and successfully digested the food, but the part of the
               | system that hears and responds to English doesn't have
               | access to that knowledge. We would need to redesign the
               | system and programs in order to share understanding
               | between these different parts of the system.
               | 
               | I realize we were using the term "hash-table" in this
               | thread and what I'm talking about wouldn't be possible
               | with just tables, but such a simple program is not a
               | requirement for the thought experiment. The idea is we
               | just have some black box function that takes inputs and
               | produces outputs and we don't know how it works.
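               | 
               | A rough sketch of that distinction, with invented
               | strings throughout: a stateless table only maps input
               | to output, while a room whose program may write a
               | notebook and read it back can also answer questions
               | about its own history.
               | 
               |   # Sketch: the same black-box room, but the program-
               |   # book may write records of interactions and read
               |   # them back as input.
               |   RULES = {"ni hao": "ni hao!"}   # the opaque book
               |   notebook = []                   # the side effect
               | 
               |   def respond(message):
               |       if message == "what did we talk about?":
               |           reply = "; ".join(notebook) or "nothing yet"
               |       else:
               |           reply = RULES.get(message, "ting bu dong")
               |       notebook.append(message + " -> " + reply)
               |       return reply
               | 
               |   respond("ni hao")
               |   print(respond("what did we talk about?"))  # recalls it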
        
               | mananaysiempre wrote:
               | ... I'm finding it very difficult to phrase this argument
               | about _actually understanding_ in a way that wouldn't
               | also entail that chairs made of atoms (and--
               | overwhelmingly--empty space) don't _actually exist_,
               | they are mere simulacra of real chairs (which don't exist
               | anywhere). Which might be a consistent position, but I'd
               | instead say that it is simply a largely useless
               | interpretation of the word _actually_ and as good
               | philosophers we should search for a better one. (See also
               | Anderson's "More is different".)
               | 
               | Or is there such a phrasing?
        
               | mjburgess wrote:
               | Approximately,
               | 
               | You "actually understand" if, in saying words, w, about
               | a situation, s, the reason, r, that you say those words
               | counts as an understanding of s.
               | 
               | So if you're asked, "what is the weather today?" and a
               | system replies, "it's sunny" -- it only understands if
               | it replied because its saying "sunny" came about from an
               | understanding of sunny situations, and it therefore has
               | a justified reason on this occasion to say that this
               | occasion is sunny.
               | 
               | If I ask an NN trained on a trillion documents, "do you
               | like what I'm wearing?" it cannot _answer_. It can say
               | the words, "yes I do!" but it cannot have a reason to
               | say those words. It's just a "weighted average retrieval
               | across a compressed dataset of a trillion documents".
               | 
               | The question isn't "on average, what -- historically --
               | would a generic person reply to the question: do you like
               | what I'm wearing?"
               | 
               | The question is whether the system _likes_ what I'm
               | wearing. Its replies are only "actual replies" if it can
               | see what I'm wearing, has experiences/preferences for
               | taste, has some aesthetic judgement, has a disposition
               | to like/dislike -- etc.
               | 
               | No ML system on the planet, in this sense, has any
               | understanding whatsoever. Interpolation over historical
               | data is just a means of compressing history into
               | parameters, i.e., it's a compressed lookup table. That
               | isn't ever a /reason/ -- exactly as, if a person simply
               | used such a table, they wouldn't mean what they said.
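               | 
               | To caricature that claim in code (the corpus, similarity
               | measure, and replies below are all invented): the reply
               | is whatever scores highest under a similarity weighting
               | over stored history, with no reference to what is
               | actually in front of the system.
               | 
               |   # Caricature of "interpolation over historical data":
               |   # the reply is a similarity-weighted vote over a
               |   # stored corpus, nothing more.
               |   from collections import Counter
               | 
               |   CORPUS = [                  # invented historical pairs
               |       ("do you like my hat", "yes I do!"),
               |       ("do you like my coat", "yes I do!"),
               |       ("do you like mondays", "not really"),
               |   ]
               | 
               |   def similarity(a, b):       # crude word overlap
               |       wa, wb = set(a.split()), set(b.split())
               |       return len(wa & wb) / len(wa | wb)
               | 
               |   def reply(prompt):
               |       votes = Counter()
               |       for past, answer in CORPUS:
               |           votes[answer] += similarity(prompt, past)
               |       return votes.most_common(1)[0][0]
               | 
               |   print(reply("do you like what I'm wearing"))  # "yes I do!"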
        
               | tshaddox wrote:
               | Okay, but how do you test whether any speaker (human or
               | otherwise) _understands_ something without asking them to
               | explain it? If you have some other test (like perhaps
               | some analysis of what the human brain is doing when it is
               | understanding something), great! But even then, why
               | can't the computer do the same thing that the human
               | brain is doing which constitutes understanding? Or if the only
               | test is to ask the speaker to explain something to you,
               | well then, the Chinese room can do that too! Of course
               | you end up with an infinite regress where you can ask the
               | speaker to explain the previous explanation forever, but
               | that's true for human speakers as well.
        
               | mjburgess wrote:
               | The question is whether what _causes_ us to understand
               | things is a computational process.
               | 
               | See my comment to the other reply, and others in this
               | thread about chaos/non-determinism, to see why I doubt it.
               | 
               | I.e., I don't think our organic growth and adaptation to
               | our environment, being profoundly chaotic (and, via QM,
               | therefore non-deterministic), is likely to be describable
               | as a computable function.
        
               | vlowther wrote:
               | Given that your squishy brainmeats also consist of neural
               | networks that have been trained directly by years of
               | experience and indirectly by billions of years of
               | evolution, I can write off your response as just a
               | weighted average retrieval across a compressed dataset of
               | trillions of your experiences.
        
               | mjburgess wrote:
               | If, on the occasion I say, "I like you" my saying it is
               | _caused_ by _my liking you_ -- then you can describe this
               | process however you wish.
               | 
               | Since my liking you is caused by my immediate
               | environment, it isn't reducible to a weighted average of
               | my history.
               | 
               | Another way of putting it: the historical positions of
               | all the molecules in some water aren't sufficient to
               | determine its present state. Its state depends on its
               | container (i.e., the pressure & temp of its environment).
               | And there are a very, very large number of states of
               | water, many still being discovered.
               | 
               | In this sense my state in any moment is a point in an
               | infinite space of states -- not determined by my history
               | alone, but also, _extremely complexly_, by my container
               | -- my social, etc. environment. The world hitting my
               | senses is doing more to me than the air on the water. It
               | induces in me a state which cannot be "averaged" from my
               | history.
               | 
               | Thus, no, we are not weighted averages of our histories.
               | We are profoundly chaotic and organic organisms whose
               | growth in our environments enables us to respond to our
               | environments by entering a near infinite number of states.
               | These states aren't in our history; they are how our
               | biophysical structure -- via history -- responds to the
               | near infinite depth of the here-and-now.
               | 
               | We are more like water than a computer. A computer is a
               | deterministic machine which is a deterministic function
               | of its deterministic inputs. Water is a chaotic system
               | whose state "isn't up to it". Water's state is /in/ its
               | container, and water itself is a non-deterministic
               | chaotic soup.
               | 
               | The chaos of water is the least of what one nanometer of
               | a cell has; a cell is a trillion times that adaptive and
               | responsive. And we are a trillion of those.
               | 
               | We are a cascade of chaotic state changes provoked by an
               | infinitely rich environment acting on our bodies, shaped
               | by a long history of organic growth.
               | 
               | We don't have "neural networks"; we have cells. That
               | some of them form "networks" has nothing to do with what
               | we are. A complete misdirection.
        
               | mannykannot wrote:
               | > Since my liking you is caused by my immediate
               | environment, it isn't reducible to a weighted average of
               | my history.
               | 
               | It is not clear to me that this cannot be the case of a
               | weighted average very heavily weighted to the immediate
               | past.
               | 
               | > The historical positions of all the molecules in some
               | water aren't sufficient to determine its present state.
               | Its state depends on its container (i.e., the pressure &
               | temp of its environment).
               | 
               | AFAIK, given that you could determine the momenta of the
               | molecules from a history of their positions, this would
               | be sufficient to determine its state (maybe you need
               | their spin as well?) The relevant information about the
               | container has been impressed on the motion of the
               | molecules.
               | 
               | Similarly, we can suppose that the history of your
               | environment becomes manifest in your mental states (and a
               | predisposition towards certain state transitions) -
               | though, on account of the complexity of that environment,
               | in a compressed form.
               | 
               | > We are more like water than a computer. A computer is a
               | deterministic machine which is a deterministic function
               | of its deterministic inputs. Water is a chaotic system
               | whose state "isn't up to it". Water's state is /in/ its
               | container, and water itself is a non-deterministic
               | chaotic soup.
               | 
               | And yet we can usefully model fluid dynamics on a
               | computer, _even though the mathematical representation of
               | the problem is analytically intractable._ This line of
               | argument does not appear to be leading in the direction
               | you think it does.
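               | 
               | For what it's worth, the kind of modelling meant here
               | can be sketched in a few lines. This is only a toy
               | explicit finite-difference step for 1-D diffusion, a
               | much simpler cousin of the fluid equations, with
               | arbitrary parameters -- not a real CFD solver.
               | 
               |   # Toy model: u_t = nu*u_xx, stepped with an explicit
               |   # finite-difference scheme. Grid, time step and
               |   # diffusivity are arbitrary (nu*dt/dx**2 <= 0.5 is
               |   # stable).
               |   import numpy as np
               | 
               |   nx, dx, dt, nu = 50, 1.0, 0.2, 1.0
               |   u = np.zeros(nx)
               |   u[nx // 2] = 1.0              # initial spike
               | 
               |   for _ in range(100):
               |       u[1:-1] += nu*dt/dx**2 * (u[2:] - 2*u[1:-1] + u[:-2])
               | 
               |   print(u.round(3))             # the spike has diffused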
               | 
               | > We don't have "neural networks", we have cells. That
               | some form "networks" has nothing to do with what we are.
               | A complete misdirection.
               | 
               | I am generally distrustful of these ontological arguments
               | - quite often, it seems, things that were once thought of
               | as being completely different turned out to be similar in
               | some relevant way.
        
             | jimbokun wrote:
             | > If consciousness is a relational property of a system
             | 
             | But first you have to establish this is true.
        
               | hackinthebochs wrote:
               | Not really. The point is that as an argument against
               | computationalism about the mind, Searle's Chinese room
               | can't rule out computationalism if mind is merely a
               | relational property of a system.
        
           | mananaysiempre wrote:
           | The Chinese room was _originally_ an argument against the
           | consciousness of automatons, but then EPR was an argument
           | against (fundamental) entanglement, Bell more or less
            | expected his inequality to hold, Michelson-Morley was
            | expected to detect the luminiferous aether, and the
           | Poisson (aka Arago) spot was a _reductio ad absurdum_ of the
           | wave theory of light. All of these things aren't obsolete,
           | they are still useful pieces of insight, but the state of the
           | art regarding how they should be conceptualized has moved on
           | from what their originators thought.
           | 
            | Which is not an argument _for_ any particular interpretation
           | of the Chinese room, it's only to say that the _bare_ fact
           | that the original author thought of it in a particular way
           | doesn't mean we should do so.
        
           | retrac wrote:
           | The Chinese Room is useful as an idea or model, even if you
           | don't agree with the original interpretation. I am fond of
           | the view that the room is potentially conscious, with human
            | or machine worker. The worker should no more expect to
            | understand the conversation they are mechanically carrying
            | out than your individual brain cells ought to.
        
       | guerrilla wrote:
       | They do [1][2]*.
       | 
       | 1. https://plato.stanford.edu/entries/computational-complexity/
       | 
       | 2. https://plato.stanford.edu/entries/computability/
       | 
       | * Note that the OP is even cited there.
        
         | jimhefferon wrote:
         | I approached people in the Phil Dept where I work, and they
         | expressed only polite interest (i.e., disinterest). So from my
         | (very limited) survey, there may be folks willing to write
          | entries, but, as far as I can tell, there is not broad interest.
        
       | sharker8 wrote:
       | Are we talking continental or analytic philosophers?
        
       | nathias wrote:
        | We should and do, but the Chinese room is a philosophically
        | unsound analogy.
        
       | hristov wrote:
       | So they can get better jobs? Rimshot.
        
       ___________________________________________________________________
       (page generated 2021-11-16 23:01 UTC)