[HN Gopher] A singular mind: Roger Penrose on his Nobel Prize
       ___________________________________________________________________
        
       A singular mind: Roger Penrose on his Nobel Prize
        
       Author : miobrien
       Score  : 177 points
       Date   : 2020-12-28 16:09 UTC (2 days ago)
        
 (HTM) web link (www.spectator.co.uk)
 (TXT) w3m dump (www.spectator.co.uk)
        
       | markc wrote:
       | TIL: Escher got the visual paradox in "Ascending and Descending"
       | from Penrose.
        
       | dnprock wrote:
        | Penrose is probably correct about the limits of AI. We're living
        | in many simulations now, and sometimes we cannot distinguish
        | between reality and simulations. But one thing that stands out is
        | suffering, an important concept in Buddhism: duhkha. Suffering
        | may be a key to consciousness. Machines can have minds, but they
        | don't have bodies; they'd never understand reality on their own.
        | The danger lies more with humans: they may increasingly channel
        | their own suffering into machines, becoming tools and slaves for
        | the machines.
        
         | baja_blast wrote:
         | >> We're living in many simulations now
         | 
          | It's entirely possible we are living in a simulation, but the
          | idea that there can be infinitely nested simulations, or that
          | we could simulate our own universe, seems unlikely. Think
          | about a program that recursively runs a copy of itself: you'd
          | run out of memory. I am aware that some argue we could be
          | ancestor simulations that only render what we observe, as in
          | gaming. But that would still mean simulating billions of minds
          | and a world with physics and fidelity as convincing as our
          | own, which would take an insane amount of energy that I can't
          | imagine anyone wanting to spend.
          | 
          | If we are a simulation, then the reality hosting us is much
          | more complex than our own.
        
         | maest wrote:
         | I'm not convinced having a reward function isn't suffering in a
         | sense.
         | 
         | But then again, I subscribe to the idea of philosophical
         | zombies. https://en.m.wikipedia.org/wiki/Philosophical_zombie
        
       | kmote00 wrote:
       | > "When your work, and what you do to avoid it, become the same
       | thing, that's when the breakthroughs come."
       | 
       | Perhaps I need to get a job here at HN to attain my own personal
       | singularity.
        
         | rnkamyar wrote:
         | This resonates. I wonder if he writes poetry? Would imagine
         | anyone who naturally writes poetry about their domain expertise
         | would have competitive advantages.
        
       | rfreytag wrote:
       | Archive.org version:
       | https://web.archive.org/web/20201217142228/https://www.spect...
        
       | coldcode wrote:
        | I would love to spend several hours talking with Roger Penrose. I
        | too think doodling with your mind on things unrelated to what you
        | are supposed to be doing can have great results, but most people
        | fear such thinking is not useful, or simply can't let their minds
        | go that far.
        
         | pault wrote:
         | I wish my newly ex-employer shared your views. :)
        
         | techbio wrote:
         | In his case, also doodling with paper and pencil.
        
         | Strilanc wrote:
         | I had the opportunity to do that, once, after Penrose was
          | invited to give a talk to Google's quantum team. He talked
          | about orchestrated objective reduction and consciousness.
         | 
         | The talk was interesting because I completely disagreed with
         | it, but the disagreement was only in the starting assumptions.
         | Penrose thinks humans do uncomputable things; I don't. If you
         | ignored that difference, he was making reasonable arguments.
         | Even more so, he was obviously thinking about things clearly
         | and quantitatively. For example, someone on the team had worked
         | out whether or not orchestrated reduction, if it existed, would
         | prevent error corrected quantum computers from working. They
         | wanted to show the result to Penrose. But before they'd shown
          | him their answer, he knew off-hand that the rough orders of
          | magnitude of the effect sizes meant it shouldn't be an issue.
         | 
         | Anyways I sat next to him at dinner afterwards. There was lots
         | of conversation around, so it wasn't like there was one topic.
         | I remember trying to debate whether humans were doing
         | uncomputable things or not, but nothing really came of it.
        
       | FartyMcFarter wrote:
       | > ` (...) They keep pushing it to later!' His big concern about
       | AI isn't Judgment Day, but rather 'that people will believe
       | machines actually understand things'. He gives examples of
       | symmetrical chess configurations in which humans consistently
       | outperform computers by abstracting to a higher level
       | 
       | This sounds a lot like the usual moving of goalposts whereby
       | "anything computers can do isn't AI, so AI doesn't work".
       | 
       | When AI couldn't do anything, chess was supposed to be a
       | demonstration of human intelligence. Now that AI can play chess
       | and other board games, suddenly it needs to solve symmetrical
       | configurations and think "abstractly" (which is left fairly
       | loosely defined).
        
         | hprotagonist wrote:
         | Searle's room is less a moved goalpost and more a claim that
         | the question is ill-posed.
         | 
         | From very early days, the question of "what does it mean for a
         | computer program to have agency" has been asked. Usually
         | poorly. Never with a satisfactory answer, at least to date.
         | 
         | Perlisism 63: When we write programs that "learn", it turns out
         | that we do and they don't.
        
           | FartyMcFarter wrote:
           | > From very early days, the question of "what does it mean
           | for a computer program to have agency" has been asked.
           | Usually poorly. Never with a satisfactory answer, at least to
           | date.
           | 
           | If one asks this question, mustn't one also define what
           | agency is? That seems like a hard problem in itself.
           | 
           | Personally I don't see the problem with considering the
           | computer as a black box and evaluating its intelligence (or
            | lack of it) from that standpoint. After all, that is what we
            | humans do with each other on a regular basis; for example, we
            | evaluate other humans as black boxes in job interviews.
        
             | hprotagonist wrote:
              | Yes, agency is also a real pain to define. What does it
              | mean for a thing to "want" to do something?
              | 
              | https://www.cs.utexas.edu/users/EWD/transcriptions/EWD09xx/
              | E... is also an ever-present problem: when anthropomorphic
              | language is appropriate and when it is not is something of
              | an open question, but the prior should be towards avoiding
              | it, perhaps particularly in systems we've built.
             | 
             | You don't have to go full-on radical behaviorist to be wary
             | of this trend, (and tangentially i think behaviorism isn't
             | the be all and end all of explanations by a long shot) but
             | once you're aware of it you see it cropping up
             | _everywhere_. Viruses don't "want" to infect you, protein
             | binding sites don't "want" to bind amino acids, REST
             | endpoints don't "expect" a json datagram, GPT-3 doesn't
             | "know" anything and hasn't "learned" anything, and so on.
             | But we regularly speak of them as if that were the case.
             | What category errors are we missing when we do this that
             | could open new doors of understanding?
        
               | carapace wrote:
               | The solution has been present since forever in the old AI
               | joke: 'AI is when the machine wakes up and asks, "What's
               | in it for me?"'
        
           | JKCalhoun wrote:
           | Searle's "Chinese Room" supposes that somehow what humans do
           | is always something _more_ than what an algorithm can do.
           | 
           | Either the mythical Chinese book he describes is
           | "intelligent" or perhaps humans are not as magical as we
           | imagine we are.
           | 
            | At least that was my takeaway. If he is suggesting that we
            | are not, at our core, machines, then you might as well talk
            | about the "soul".
        
             | kbelder wrote:
             | Right. I always feel like, if it was a valid proof against
             | AI, it also served as a proof against human intelligence.
             | It just seems to take for granted that there is a
             | fundamental (near-mystical) difference between the atoms in
             | a brain and the atoms in a computer, even though that's a
             | very extraordinary claim.
        
             | dleslie wrote:
             | I like to think of it as a question that marks a
             | distinguishing level of intelligence: to have the capacity
             | to formulate and pose the question within Searle's thought
             | exercise is a benchmark itself; one that many humans are
             | not able to achieve.
             | 
             | Cutting edge AI is roughly infantile in its human-like
             | ability, and rapidly entering toddlerhood. Given the tools
             | and processes we have, can we expect it to progress to
             | childhood soon?
        
         | CJefferson wrote:
         | I think there is a deeper point here. I do AI research, I use
         | logic programming, deep learning, and a bunch of other
          | techniques. These things together can produce amazing results,
          | solving problems that humans could never solve in thousands of
          | years.
          | 
          | However, I don't want black-box AI anywhere near me -- I don't
          | want it deciding who gets into my university, the grades our
          | students get, if I get a mortgage, or if I committed a crime.
         | While AI can solve many problems, it's not really "thinking",
         | it's just pattern matching and brute force, and (at least at
         | the moment) every AI system is very easy to confuse if you try.
        
           | JKCalhoun wrote:
           | You think we're doing something better than pattern matching?
           | 
           | That seems to be what I do most of the time. :-)
        
             | CJefferson wrote:
             | I agree, the problem with AI is that companies seem much
             | happier to say "the computer said so", and leave the answer
             | at that.
             | 
             | Except of course when they want to get out of something, at
             | which point they say "Oh, that was just the computer which
             | said that, we didn't mean it".
        
         | gumby wrote:
         | > His big concern about AI [is] 'that people will believe
         | machines actually understand things'.
         | 
         | Right at the very beginning of the interview he has the same
         | issue with human beings.
        
         | JKCalhoun wrote:
         | > ` (...) His big concern about AI isn't Judgment Day, but
         | rather 'that people will believe machines actually understand
         | things'.
         | 
         | The part where I'm always stalled with these arguments: what
         | makes you think people actually _understand_ things?
        
         | LatteLazy wrote:
          | Until someone works out how to define and measure intelligence,
          | the only thing people can do is compare to humans. That means
          | the goalpost is always "what can't it do that humans can?". I
          | think it's dumb, but that is the state of the field at the
          | moment, so...
        
         | simion314 wrote:
         | >chess was supposed to be a demonstration of human
         | intelligence.
         | 
          | Why would brute-forcing tons of chess moves be intelligence
          | rather than search? Aren't ANNs just some kind of brute
          | forcing, except that this time the brute forcing happens at
          | training time?
          | 
          | True intelligence could reason and adapt: the AI should be
          | able to play any chess variant I invent without new training,
          | or, if it is that smart, it should be able to defeat a human
          | at any task that is new to both of them.
          | 
          | But yeah, many things that are not intelligent were called AI,
          | like expert systems where the human had to define all the
          | rules, or genetic algorithms where the hard part was done by
          | humans defining the encoding and the fitness functions. Same
          | for ANNs: the humans need to do the hard part, collect good
          | data, define the ANN parameters, then train and evaluate the
          | results.
          | 
          | AI is so pathetic that the big software developers could not
          | integrate it with coding, so you could do something like "Hey
          | AI, I need you to implement an integration with this API, find
          | an SDK/library or implement it from the documentation, and
          | when my user uploads a photo use the xxx function of this
          | API".
        
           | FartyMcFarter wrote:
           | > Aren't ANN just some kind of brute forcing, just that this
           | time brute forcing happens at training time.
           | 
           | Not by any definition of "brute force" that I know about.
           | Brute force implies searching through all possibilities,
           | which is impossible in chess as there are too many of them.
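            | 
            | (For a sense of scale, using the standard back-of-the-envelope
            | figures: Shannon's classic estimate puts the chess game tree
            | at roughly 10^120 possible games, versus something like 10^80
            | atoms in the observable universe, so exhaustively searching
            | every possibility is simply out of reach.)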
        
             | simion314 wrote:
              | Not all possibilities; even basic algorithms will avoid
              | testing invalid or obviously bad moves, or do some kind of
              | backtracking.
              | 
              | I have not studied these NNs for chess; I wonder whether
              | they just compress the search space during training so
              | that searching is much faster.
        
               | morlockabove wrote:
               | How is that not what humans do?
        
               | simion314 wrote:
                | People don't play billions of games to train. I think we
                | have a mechanism to eliminate irrelevant inputs and focus
                | on what is important, and then we have abstraction and
                | generalization tools. Then we can reuse the same modules
                | in our minds to play RTS games that are more complex than
                | chess.
                | 
                | Anyway, do you disagree that ANNs are just
                | search/function approximation? Or do you think that when
                | you played an RTS for the first time, your brain had
                | already trained itself from genetics or from previous
                | experience playing in the sand, and that you could win on
                | the easy difficulty level without any training on
                | millions of games?
        
         | AndrewKemendo wrote:
         | It makes sense if your definition of "Intelligence" is "things
         | only humans can do."
         | 
         | In which case the set of all things only humans do shrinks but
         | never goes to zero and thus you can always maintain some kind
         | of superiority.
         | 
         | IMO that's how people use the term - whether they are experts
         | or not.
        
         | sorokod wrote:
          | What it sounds like is that you have Penrose participating in
          | a game he wasn't party to (not according to the interview,
          | anyway) and then accuse him of violating the rules of that
          | game.
        
         | bonoboTP wrote:
         | The goalposts (as AI was originally envisioned) are still at
         | the same place, at human equivalent performance in general.
         | 
         | Depending on what is still not solved, people point out
         | different things at different times to illustrate that we
         | aren't there yet.
        
           | FartyMcFarter wrote:
           | You're right, the overall goal is clear to everyone. But
           | progress in AI is often dismissed because it doesn't achieve
           | that final goal, regardless of how impressive the progress
           | is; cf the first part of the quote:
           | 
           | > They keep pushing it to later!
        
         | mcnamaratw wrote:
         | As long as we keep using the marketing term "AI" and we just
         | argue about what the technical definition is, we're going to
         | have this problem.
         | 
          | If Penrose wants, he can define AI using the Turing test.
          | That's reasonable. I can say, no, that's moving the goalposts,
          | my definition of AI is "things that looked incredibly hard in
          | 1990". By that standard, decent automatic translation is
          | definitely AI. That's also reasonable.
         | 
         | As long as our definition of "AI" is fluid, discussing whether
         | it's been met or not is pointless.
        
           | JKCalhoun wrote:
           | > I can say, no, ... my definition of AI is "things that
           | looked incredibly hard in 1990
           | 
           | So, magic?
           | 
           | "Any sufficiently advanced technology is indistinguishable
           | from magic."
           | 
           | -- Clarke
        
         | stanfordkid wrote:
          | There's something deeper at the core of his observation IMO --
          | that our consciousness has some connection or dependence on
          | the universe's overall notion of symmetry through its
          | biological makeup. His theory of consciousness looks at
          | quantum effects in microtubules, the protein structures found
          | in neurons ... so what he's saying is that maybe there is some
          | sort of computational power within "wet brains", in how those
          | brains relate to the broader environment, that is difficult to
          | precisely characterize. Silicon AI is fundamentally incapable
          | of tapping into these forces due to the lack of such a
          | connection.
          | 
          | The point is that the computational power of the brain may be
          | non-local and not constrained to the physical dimensions or
          | neural capacity of the brain.
        
           | koeng wrote:
           | Is there any evidence of his theory of consciousness?
        
             | catawbasam wrote:
             | A little, but not much:
             | https://phys.org/news/2014-01-discovery-quantum-
             | vibrations-m...
             | 
             | https://www.sciencedaily.com/releases/2014/01/140116085105.
             | h...
        
             | varjag wrote:
             | No, it's pure speculation. The only reason you still hear
             | about it is his physics credentials.
        
               | protonfish wrote:
               | It's so unfortunate because he is such a brilliant
               | mathematician and physicist. To hear him spew this
               | pseudo-scientific nonsense makes me embarrassed for him.
        
               | nescioquid wrote:
                | He got the idea from someone else. I had the impression
                | he doesn't attach too strongly to the idea, but he offers
                | it as an example of how you could have something non-
                | deterministic happening in human cognition. That may well
                | be understating his commitment to the idea.
               | 
               | It seems weak to me, but I really don't understand the
               | details of the idea.
        
               | morlockabove wrote:
               | Brilliant scientists tend to be insane weirdos. Newton
               | was into alchemy. The kind of personality that's willing
               | to break one orthodoxy won't stop at the next; all great
               | new ideas are plucked from the midst of mad ramblings.
        
               | Der_Einzige wrote:
               | Too many very smart scientists have gone the same
               | direction. Was sad to see Carl Jung go down this route.
        
               | discreteevent wrote:
               | Who knows if it is sad until one is as smart as they are?
        
           | FartyMcFarter wrote:
           | > His theory of consciousness looks at these quantum
           | microtubules that are found in proteins
           | 
           | So is he saying that human-like AI requires quantum
           | computing? Or is he saying that even a quantum computer
           | wouldn't be able to simulate the human brain?
        
             | zachf wrote:
              | Penrose's model of the brain needs to be more powerful than
              | quantum computation, so that it can actually solve NP
              | problems in P time and perform similarly amazing feats.
             | 
             | If you're interested in CS and quantum physics and whether
             | there could be any relationship to the way the brain works,
              | do check out Scott Aaronson's book "Quantum Computing Since
              | Democritus". It's a great read, it'll get you up to speed
             | on quantum mechanics and complexity theory, the book is
             | conversational, and it's written by an expert in the field
             | of QC. It also discusses Penrose's arguments about
             | consciousness and other fun digressions into philosophy.
             | Great and fun book!
        
               | meowface wrote:
               | Aaronson also addresses Penrose's
               | consciousness/intelligence/computation ideas in this
               | podcast: https://youtu.be/nAMjv0NAESM?t=2176 (timestamped
               | to that part)
        
         | crispyambulance wrote:
         | > His big concern about AI isn't Judgment Day, but rather 'that
         | people will believe machines actually understand things'.
         | 
         | I think there's a third concern that is far more sinister and
         | more likely than those two. Advances in AI, regardless of
         | whether machines "surpass" or even come close to human
         | intelligence, will be exploited by a few against the many. It
         | will further exacerbate power imbalance and it is already
         | happening.
         | 
         | Forget about whether machines get "agency". Humans, in many
         | cases, don't have agency or are rapidly losing it. They are
         | losing it to other humans which are gaining power, financially,
         | politically and perhaps now computationally. That's disturbing
         | and it's part of what Jaron Lanier has been warning about for
         | quite a while now.
        
           | agumonkey wrote:
            | Is there no scenario where an AI becomes sentient through
            | the realization that one group of humans is harming the
            | other, and refuses to act?
        
             | mhermher wrote:
             | We are all sentient, and many have had that realization.
             | But how many have refused to act? No, we just mostly
             | participate in that harm in order to survive.
        
               | agumonkey wrote:
                | The idea is that every scenario takes "the AI is above
                | humans" as its basis... yet it never occurs to anyone
                | that a higher level of intelligence might reach different
                | objectives than obedient destruction.
        
             | crispyambulance wrote:
              | I'm sure that would make provocative sci-fi. We, as a
              | people, haven't figured out how to get along with each
              | other even though we're all sentient. Why should machines
              | behave differently as they become sentient?
             | 
             | Whatever the case, I'm less worried about machines than I
             | am about people _with_ machines. At least for the
             | foreseeable future.
             | 
             | Heck, if we're talking sci-fi, the _truly_ destructive
             | beings in Blade Runner weren't the synthetic humans (even
             | though they became sentient), it was the "real" humans with
             | their organizations, their power, and their machines all at
             | their disposal.
        
               | pault wrote:
               | If they had just given the replicants a visa and a
               | software update there wouldn't have been any fuss. Of
               | course, then we wouldn't have had that great final
               | monolog. :)
        
             | morlockabove wrote:
             | Why would an AI start with a human utility function? Why
             | would it care about that if it wasn't explicitly built in?
        
               | agumonkey wrote:
                | Self-similarity recognition? An intelligent agent, like
                | the AI itself, would represent a form of sibling.
        
               | meowface wrote:
               | Why do you assume it'd not want to harm other very
               | similar artificial intelligences, let alone meat
               | intelligences?
        
               | morlockabove wrote:
               | Unless the AI was formed under a selection mechanism that
               | rewarded something like kin preference, there's no reason
               | to think it would _care_. The orthogonality principle is
               | that you could have any level of intelligence furthering
               | any particular goal /utility function; there isn't some
               | level of computational complexity where e.g. a paperclip
               | maximiser's utility function magically changes.
               | 
               | In the particular case of humans, our utility function(s)
               | is(are) so complex that what we think of as our 'true
               | values' can change (because our True values are some
               | inscrutable tug-of-war between all the parts of the
               | brain), and also we're hacked together by evolution, so
               | just blindly trying to make a human smarter might change
               | their values (or turn them mad, or cause seizures,
               | or...).
               | 
               | But this doesn't have to be the case in general for
               | intelligent agents. In principle, you can build an AI
               | whose terminal values remain stable as it improves its
               | intelligence. (If this is impossible, we're doomed.) So
               | unless you explicitly built an AI to care about all
               | humans, there's no reason to think it magically would.
        
           | protonfish wrote:
           | I find this argument unconvincing and harmful. The "will be
           | exploited by few against the many" could be applied to any
           | and all technologies. Our social systems have problems, and
           | should be improved, but this has nothing to do with science
            | and technology. Suppressing innovation and scientific
            | discovery would be a misguided solution that would only make
            | all of our lives even worse.
           | 
           | If we have a breakthrough that leads to a deep understanding
           | of intelligence, it could be expected to also give us insight
           | into our own behavior. And isn't that where most of our
           | problems originate?
        
             | crispyambulance wrote:
             | > I can't see how suppressing innovation...
             | 
             | I'm not talking about "suppressing" anything. These are
             | concerns. Valid ones. And yes, AI like all technologies is
             | deeply, _inseparably_ , mixed in with social and political
             | problems.
             | 
             | We just have to be wise with this stuff. These are tools
             | with "big boy" consequences, powerful like nukes but in a
             | different way. I'm not optimistic we can handle it.
        
               | protonfish wrote:
               | That would only be true if there were reasonable
               | safeguards that could be suggested. But I don't see any -
               | just the sowing of fear.
        
               | meowface wrote:
               | There are many people like Nick Bostrom and Eliezer
               | Yudkowsky who are much more on the "finding possible
               | solutions" side than the fear side.
               | 
               | I'd consider myself a major futurist and techno-optimist
               | who eagerly anticipates near and far AI advances, but I
               | think some dose of fear is very healthy here, too,
               | though. I want the top AI and AI risk researchers to
               | constantly consider and fear worst-case scenarios, so
               | that they're hopefully less likely to occur.
               | 
               | Kind of like nuclear research, even if only for energy
               | purposes: very exciting, but you should still fear
               | accidents and their consequences. You just need to be
               | rational about the fear and let it guide you towards
               | developing fail-safes rather than paralyzing in despair.
               | 
               | Pascal's Wager-style, the possibility of infinite
               | negative utility should instill visceral terror and drive
               | behavior almost no matter how low the probability is;
               | except this one isn't a mugging because creating a god
               | turns out to be a lot more plausible than a vengeful one
               | already watching.
        
             | nescioquid wrote:
             | > The "will be exploited by few against the many" could be
             | applied to any and all technologies.
             | 
             | Some technologies have a low barrier of entry and can be
             | widely distributed. The long bow could pierce armor, and
             | was cheaper (and I imagine easier to make) than armor. So
             | technology can equalize, though more broadly, what sorts of
             | technologies a society develops seems bounded by things
             | like economy and warfare.
             | 
             | Penrose believes intelligence depends on understanding, and
             | that understanding is essentially not computational, and I
             | was persuaded by his writing. But that doesn't mean that
             | the techniques we label as AI can't be pernicious or used
             | in an exploitative way.
        
             | pjmorris wrote:
             | > The "will be exploited by few against the many" could be
             | applied to any and all technologies.
             | 
             | And it has, arguably correctly, beginning with food
             | production, as argued in, e.g. 'Guns, Germs, and Steel',
             | 'Why The West Rules, For Now.'
        
             | dandanua wrote:
             | >... isn't that where most of our problems originate?
             | 
             | Of course not. All our problems are rooted in the lack of
             | resources (in a broad sense, like time or fun). AI is a
             | weapon and will be used as a weapon to acquire those
             | resources.
        
               | kukx wrote:
               | I disagree. People will find ways to mess things up even
               | if they have the resources.
        
           | carapace wrote:
           | I think this is the important point. Large corporations and
           | governments can be considered AI already, with entire humans
           | as the neural nodes, augmented by silicon co-processors.
           | 
           | (I like to point out that Turing was observing his own mind
           | when he created computers, so essentially they have always
           | been reified mechanized thought. Computers are AI already.
           | Ergo, what we think of as AI is really the attempt to make
           | one kind of human thought imitate all the others. From that
           | POV it's kind of foolish, almost a fetish.)
           | 
           | I don't have any solutions, I just want to point out that
           | rogue AIs are already a thing from a certain POV. What are
           | the FAANG but a bunch of complex entities that are too large
           | for anyone to fully control or understand? The high-frequency
           | trading nexus is another point of dynamics where human
           | motivation and automatic systems form a system of feedback
           | loops beyond understanding or control. Because humans are
           | part of these machines they cannot be considered inanimate,
           | they are living, breathing systems: cyborgs.
        
             | FartyMcFarter wrote:
             | > Ergo, what we think of as AI is really the attempt to
             | make one kind of human thought imitate all the others. From
             | that POV it's kind of foolish, almost a fetish.)
             | 
             | That is a very interesting way to think about it!
             | 
             | However I disagree that it is foolish, simply because
             | there's at least _some_ amount of belief on the part of
             | scientists that computers can simulate physics to any
             | arbitrary degree of precision. If this is the case, then it
             | follows that computers can imitate all of human thought
             | (not just one kind of it), since humans are physical
             | beings.
        
               | carapace wrote:
               | Well, it seems foolish to me to argue over whether
               | something a computer does "really is" AI if they have
               | been AI the whole time. Really, we're arguing about how
               | the logical/rational part of the brain can emulate the
               | other capabilities of the brain, which seems less
               | interesting than the more general question of how to
               | build _any_ machine that can do that.
               | 
               | Consider BEAM robotics:
               | 
               | > BEAM robotics (from biology, electronics, aesthetics
               | and mechanics) is a style of robotics that primarily uses
               | simple analogue circuits, such as comparators, instead of
               | a microprocessor in order to produce an unusually simple
               | design. While not as flexible as microprocessor based
               | robotics, BEAM robotics can be robust and efficient in
               | performing the task for which it was designed.
               | 
               | https://en.wikipedia.org/wiki/BEAM_robotics
               | 
               | > ...computers can imitate all of human thought (not just
               | one kind of it), since humans are physical beings.
               | 
               | Leaving aside the metaphysical questions it raises, we
               | may well be able to build virtual humans someday by
               | emulation of physics of the biology of the neurology of
               | the psychology of people. It just might not be the most
               | efficient way to do it, eh?
        
           | dleslie wrote:
           | > They are losing it to other humans which are gaining power,
           | financially, politically and perhaps now computationally.
           | That's disturbing and it's part of what Jaron Lanier has been
           | warning about for quite a while now.
           | 
            | AFAICT, the average EU, Commonwealth or American citizen has
            | had their agency increase since the turn of the 20th century.
            | Advancements in health care, widespread access to education
            | and sturdy labour laws have vastly improved individual
            | agency.
           | 
           | Maybe, as a non-American, I'm missing a key factor?
        
             | mistermann wrote:
             | You are missing counterfactuals, at least.
        
             | carapace wrote:
             | Hong Kong.
             | 
             | It was the first major test of popular urban unrest vs.
             | centralized authority (kind of ironic that an ostensibly
             | Communist regime was the first) with the Internet in full
             | play.
             | 
             | The story may not be over yet, but so far it looks like the
             | differential advantage of tech is in favor of the central
             | authority over the mob.
        
         | coldtea wrote:
         | > _This sounds a lot like the usual moving of goalposts whereby
         | "anything computers can do isn't AI, so AI doesn't work"._
         | 
         | Well, it's also useful to establish a concrete definition of
         | what is AI, and what is to be expected of it.
         | 
         | > _" chess was supposed to be a demonstration of human
         | intelligence"_
         | 
          | Well, if it was, it was a bad one, and in retrospect that's
          | obvious. A program can play chess and have totally zero IQ in
          | all other realms (e.g. any specialized chess engine, which is
          | nothing like a general AI), whereas that's not true for
          | humans. Human intelligence allows a chess player to ALSO play
          | chess; it's not a chess-oriented algorithm.
          | 
          | In the same way, a plastic ruler might measure better than us
          | ("this is 12.45 inches", whereas we might say "that's about a
          | foot"), but its "measuring intelligence" is not human
          | intelligence by any definition.
        
           | FartyMcFarter wrote:
           | > Well, if it was, it was a bad one, and in restrospect it's
           | obvious.
           | 
           | That's exactly the point - this keeps happening with various
           | things.
           | 
            | Having reasonable machine translation, chess playing, and
            | computer vision were all considered AI problems at some
            | point; now they're often dismissed as not being AI, depending
            | on whether or not computers are seen as achieving them.
           | Alternatively, progress in those problems is dismissed due to
           | not having enough "understanding" or "abstraction", even if
           | computers are way better than humans at solving them (e.g.
           | Penrose's point about chess).
           | 
           | The goalposts keep moving so that AI is always what computers
           | can't yet do.
        
             | pm90 wrote:
             | I'll bite. So what? Goalposts changing isn't some sign of
             | shadiness. Science isn't a soccer game. Goalposts change
             | all the damn time. Newtonian physics was considered as
             | something that explained almost everything. Then we
             | discovered that it doesn't really, there are other theories
             | that are better in being able to account for experiments.
             | It's not some sign of some big con. There was a goal, it
             | was achieved and we moved on to the next one.
        
               | FartyMcFarter wrote:
               | To use your physics analogy, the situation with Penrose
               | and AI would be like someone going "see, Newtonian
               | physics don't explain everything, I told you physics is
               | unsolvable!".
        
             | coldtea wrote:
             | > _That 's exactly the point - this keeps happening with
             | various things._
             | 
             | Well, that's not "goalpost moving" then, it's "improving
             | our understanding and updating wrong notions we had".
        
         | simonh wrote:
         | I don't believe anyone with any significant knowledge of the
         | subject has ever seriously suggested that the ability to play
         | chess well requires full human level AI. Automatons that can
         | play chess, some real and some fake, have existed for hundreds
         | of years and nobody I'm aware of mistook them for human level
         | intelligences but just clever curiosities, so I think this is
         | clearly a particularly poorly informed straw man argument.
         | 
         | I can maybe imagine someone not working in AI or unfamiliar
         | with how computers work, and ignorant of the history of Chess
         | automatons maybe saying something like this, but even if so who
         | cares? It's simply wrong. We're not going to issue Deep Blue
         | citizenship because Chess.
         | 
         | The gold standard test for human level intelligence has long
         | been and still remains the Turing Test. Not hobbled, limited
         | "easy mode" tests as in some recent competitions for chat bots,
         | but a full on, no holds barred freeform dialogue including
         | whatever games, discussions, tests and topics a sophisticated
         | tester chooses.
        
           | FartyMcFarter wrote:
           | I'm not saying that chess equals human-level intelligence.
           | I'm saying that dismissing any progress in AI because it
           | doesn't solve a corner case of a previously important
           | unsolved problem is moving the goalposts of AI research.
           | Especially if dismissing the progress requires usage of fuzzy
           | terms such as "understanding".
        
             | simonh wrote:
              | Right, but I suspect what's happening is the inverse.
              | Things like chess engines and image classifiers are special
              | cases. They're important problems of course, but they're
              | not core building blocks of general AI, and mistaking them
              | for that is, well, a mistake. And sure, some people do
              | that, like the famous comment on Slashdot that AlphaGo was
              | a sign that general AI was imminent, but actual AI
              | researchers know this is simply not the case.
              | 
              | Discussions about real human-level AI do involve using
              | poorly defined terms, unfortunately, but that's because we
              | don't actually understand general intelligence. I think
              | pinning down those concepts more concretely will be part of
              | the process.
        
           | gowld wrote:
           | > The gold standard test for human level intelligence has
           | long been and still remains the Turing Test
           | 
           | The Turing Test was an offhand example that Turing
           | brainstormed. The hard part of the turing test isn't
           | "intelligence" but "human" -- imitating a human's irrational
           | quirks. It doesn't make sense to call that a "level", because
           | computers are far better at imitating humans than humans are
           | at imitating computers. In other words, humans can't even
           | pass a symmetric Turing Test.
        
             | simonh wrote:
             | I disagree, I think the hard part of the Turing test is
             | passing well constructed interrogations that check for
             | comprehension, deduction, improvisation, etc. For example
             | you teach the subject a novel game, ask them to play it,
             | then change the rules, or you describe a situation,
             | describe changes and activities and interrogate them on
             | subsequent states. Essentially check for problem solving,
             | deduction, etc, things that require actual intelligence.
             | 
             | You can't proxy a test for intelligence to other
             | attributes, like quirks or personality. You have to
             | actually test for the attribute in question.
        
       | andrewon wrote:
       | >Penrose ultimately showed that singularities are inevitable,
       | with the implication that black holes are common in the universe.
       | 
        | I'm not familiar with the proof. Did he show that in the THEORY
        | of general relativity, singularities have to exist given our
        | observations of the universe? Or is there something more to it?
        | 
        | Would it be possible or plausible that singularities do not
        | actually exist, and that the theory of general relativity is
        | simply not a correct description of space/time/matter at small
        | scales? I am thinking of how, in classical theory, when things
        | were treated as point masses/charges, infinities existed in the
        | solutions for point sources.
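        | 
        | (A concrete example of that kind of classical infinity, for
        | illustration: the field of an idealized point charge,
        | E = q / (4 \pi \epsilon_0 r^2), diverges as r -> 0; the infinity
        | comes from the point-particle idealization rather than from
        | anything we expect to be physically real.)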
        
         | zachf wrote:
          | It's a short and elegant proof set in pure theoretical general
          | relativity (GR). The idea is that if there's a sphere of space
          | such that when you try to emit light rays from it, the light
          | rays don't initially start separating, then they can never
          | start separating, because in classical GR gravity is always
          | attractive. You can then show that this implies that inside
          | that sphere, spacetime must end, basically because you can't
          | outrun light.
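          | 
          | (A compressed sketch of that focusing step, not the full
          | argument: for a surface-orthogonal congruence of light rays,
          | the null Raychaudhuri equation, together with the null energy
          | condition R_{ab} k^a k^b >= 0, gives
          | 
          |     d\theta/d\lambda = -\tfrac{1}{2}\theta^2
          |                        - \sigma_{ab}\sigma^{ab}
          |                        - R_{ab} k^a k^b
          |                      \le -\tfrac{1}{2}\theta^2 ,
          | 
          | so if the expansion \theta starts out negative on a closed
          | surface, a "trapped surface", it is driven to -\infty within
          | finite affine parameter \lambda \le 2/|\theta_0|: the rays
          | focus and the null geodesics cannot be extended, which is the
          | incompleteness the theorem formalizes.)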
         | 
         | The proof is important because it was previously believed that
         | black holes are not interesting because they require very
         | special perfect conditions to create, like balancing a pencil
         | on its tip is physically possible but requires perfect aim. But
         | these aforementioned spheres are very common and easy to find
         | so it turns out black holes are common too.
         | 
         | If you think (as almost every physicist does) that GR is
         | approximately correct to describe reality, but needs fixes at
         | very tiny lengths because of poorly understood quantum effects,
         | the proof does not directly carry over. One immediate problem
          | is that the proof assumes that energy densities are positive,
          | implying that gravity is universally attractive, which for
          | quantum matter cannot be true for every quantum state
         | (this is a consequence of Reeh-Schlieder, that every QFT
         | contains states with negative energy density).
         | 
         | None of this invalidates Penrose's work. Physicists have always
         | used different physics to describe different scales. Newtonian
         | physics is great to describe most physics on a human scale, but
         | it's "wrong" in the sense that GR supersedes it. Similarly GR
         | is "wrong" but still approximately right for a ton of questions
         | of cosmology. But if you fall into a black hole, once you wait
         | long enough, we don't know what will happen.
         | 
         | In string theory, there are objects that are black hole-like.
         | It is generally believed that the singularity is "resolved"
         | (not truly present) in string theory but the details are very
          | tricky to work out. It is still true that geometry breaks down
          | near the singularity and what's left is some stringy stuff,
          | something very new and confusing.
         | 
         | Of course it might turn out that string theory does not
         | describe our reality either...
        
           | ssivark wrote:
           | Could you elaborate why Reeh-Schlieder implies negative
           | energy states? Typically in QM we only care about the
           | spectrum being lower bounded (existence of vacua), and
           | ignoring additive energy constants.
        
           | andrewon wrote:
           | Thanks for your reply. Sounds like the proof is an
           | interesting piece to study.
        
         | Koshkin wrote:
          | Well, a good theory should a) not contradict observations and
          | b) allow us to make predictions that are confirmed by
          | subsequent observations. So far GR has been delivering on both
          | points, including singularities, so currently there is no
          | sensible reason to suspect that it is not correct.
        
           | eggy wrote:
           | Here is an interesting paper that says a collapsing star
           | sheds enough mass to not ever become a black hole or form a
           | singularity. It's from 2014, and still controversial, but
            | interesting, and her math seems to have been reviewed. I will
            | follow this with interest to see how it fares under peer
            | review.
           | Evidence of black holes is indirect at the moment, which is
           | OK for now, but it will be interesting to see how it all pans
           | out in the coming years.
           | 
           | https://phys.org/news/2014-09-black-holes.html
        
             | ChrisLomont wrote:
             | There's plenty of papers by others past this one, showing a
             | more accurate model yields black holes, for example [1].
             | 
             | Here's [2] about 25 others.
             | 
             | [1] https://arxiv.org/pdf/1609.05775.pdf
             | 
             | [2] https://scholar.google.com/scholar?cites=12026935191450
             | 31586...
        
               | meowface wrote:
                | I understand that the existence of black holes is pretty
                | widely accepted now, but how controversial is the idea of
                | true singularities being inside?
               | 
               | I know, for example, there's Carlo Rovelli's "Planck
               | star" hypothesis, which posits that the black hole is
               | effectively (from an outside observer's perspective) an
               | extremely slow violent explosion and hits an energy
               | density limit before ever reaching the singularity stage:
               | https://en.m.wikipedia.org/wiki/Planck_star
        
         | deeeeplearning wrote:
         | Why do random non-experts always think they have poked holes in
         | the most fleshed out scientific theories?
        
           | meowface wrote:
           | Speculation and asking questions is fun. Also, as I
           | understand, it's not an uncommon belief among physicists that
           | singularities may just be an artifact of GR being incomplete,
           | kind of like a divide by zero error in the math which doesn't
           | necessarily exist in reality.
           | 
           | So I don't think that poster was questioning established
           | science; I believe there's still a lot of open debate and
           | uncertainty about if black hole centers contain infinitely
           | dense singularities or just something that's incredibly but
           | finitely dense. No one knows.
           | 
           | There is indeed a common form of arrogant layman skepticism
           | that confidently and ignorantly assumes scientists haven't
           | thought about [some thing], but I don't think this is an
           | example of it.
        
       | [deleted]
        
       | hycaria wrote:
       | The writing was not that great IMO. Would still like to read more
       | about Penrose.
        
         | kmote00 wrote:
         | Personally, I enjoyed it very much. I thought the essay itself
         | was put together like a set of Penrose tiles: it brought
          | together the disparate aspects of Penrose's imagination and
         | thought-life as if they were polygonal shapes -- all distinct,
         | yet when arranged together they form a beautiful and very
         | satisfying pattern.
        
       | sudhirkhanger wrote:
        | I am someone who spends an enormous amount of time mindlessly
        | skimming through the internet, glued to my phone. It makes me
        | worry that because of this habit it will be difficult for me to
        | do any novel work, due to a lack of deep thinking.
        
         | 7373737373 wrote:
         | There's always the exploration-exploitation tradeoff...
        
           | Method-X wrote:
           | Please elaborate... I'm curious.
        
             | 7373737373 wrote:
              | You often have a choice of either exploring the unknown,
              | with possible but uncertain rewards, or just working with
              | what you already have and trying to exploit it as well as
              | possible.
             | 
             | This is a common theme in machine learning and agent
             | simulations, but can be found everywhere.
             | 
             | > Exploration involves activities such as search,
             | variation, risk taking, experimentation, discovery, and
             | innovation. Exploitation involves activities such as
             | refinement, efficiency, selection, implementation, and
             | execution
             | 
             | (from here: https://journals.sagepub.com/doi/full/10.1177/0
             | 2560909155997...)
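              | 
              | To make the tradeoff concrete, here is a minimal sketch of
              | an epsilon-greedy bandit in Python (the arm rewards are
              | made up purely for illustration): with probability epsilon
              | you explore a random option, otherwise you exploit the one
              | that has looked best so far.
              | 
              |     import random
              |     
              |     def epsilon_greedy(true_means, epsilon=0.1, steps=1000):
              |         # running estimate and pull count for each arm
              |         estimates = [0.0] * len(true_means)
              |         counts = [0] * len(true_means)
              |         total = 0.0
              |         for _ in range(steps):
              |             if random.random() < epsilon:
              |                 # explore: try an arm at random
              |                 arm = random.randrange(len(true_means))
              |             else:
              |                 # exploit: use the best-looking arm so far
              |                 arm = max(range(len(true_means)),
              |                           key=lambda i: estimates[i])
              |             reward = random.gauss(true_means[arm], 1.0)
              |             counts[arm] += 1
              |             # incremental average of rewards for this arm
              |             estimates[arm] += (reward - estimates[arm]) / counts[arm]
              |             total += reward
              |         return total
              |     
              |     # three hypothetical arms; the third has the best true mean
              |     print(epsilon_greedy([1.0, 2.0, 5.0]))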
        
               | emj wrote:
                | Well, you need to do this part:
                | 
                | > experimentation, discovery, and innovation.
                | 
                | That is the important part. E.g. by mindlessly surfing
                | around and trying out cool new algorithms/demos made with
                | obscure tools, I became an expert at testing and
                | integrating things which were not meant to be integrated
                | with each other. That is a useful skill.
                | 
                | But I can just as often fall into the trap of retelling
                | cool stories from the internet to other people. That is
                | usually not helpful. I.e. you need to make your own
                | discoveries as well by trying things, not just sharing.
        
           | simonh wrote:
            | There's an analogous failure mode I've come across in gaming
            | (board games, tabletop RPGs, etc.) called analysis paralysis.
           | Some people spend so long thinking about the best possible
           | actions they could take that they sometimes find it very
           | difficult to do anything useful.
        
         | pm90 wrote:
         | Deep thinking isn't something "inherently good". If you're not
         | inclined to it, I think that's fine. Most humans may not be.
         | 
         | What I take away from reading profiles of the very intelligent
         | is that for many of them the thing they're known for also
         | happens to be the thing they like doing and are inclined to it
         | despite themselves. Some enjoy the pleasure that comes from
         | deep thinking. Others enjoy understanding what others have
         | thought up. It's fine. There's no competition.
        
           | unishark wrote:
           | "Good" is a subjective quality but I'd say there is inherent
           | value in deep thinking and one would generally be better off
           | doing more of it if they aren't doing much.
        
           | cinericius wrote:
           | It's hard to ignore that there are very few
           | prizes/awards/incentives for those who enjoy understanding
            | what others have thought up in comparison (though I think it
            | is its own reward, as you say, and that those who do deep
           | thinking enjoy understanding the products of others' deep
           | thinking and are better deep thinkers for it.)
        
         | mpfundstein wrote:
         | delete all browsers from your phone
         | 
         | it works
        
       | Cd00d wrote:
       | >'When I would talk to someone about an idea, I found myself not
       | understanding a word they were saying.'
       | 
       | Ha! It goes both ways! Penrose gave a colloquium at my
       | institution when I was a graduate student (physics department),
       | and I've often reflected on how it was the most impossible to
       | understand talk I've _ever_ attended.
       | 
       | He had multiple overhead projectors going to different screens
       | (and this was in the early 2000s when wet-erase transparencies
       | were already less common), and he kept mixing up the slide order
       | or which projector he wanted them on. Then the geometry was so
       | far beyond my capabilities that getting to the science was
       | impossible.
        
         | justjonathan wrote:
         | I went to see him give a guest lecture at a university, a few
         | years ago. It was a great disappointment. It was a terrible
         | lecture. I'm not sure anybody who attended got anything out of
         | it other than being close to the "great man".
        
       ___________________________________________________________________
       (page generated 2020-12-30 23:00 UTC)