[HN Gopher] Past Performance is Not Indicative of Future Results...
       ___________________________________________________________________
        
       Past Performance is Not Indicative of Future Results (2020)
        
       Author : olvy0
       Score  : 255 points
       Date   : 2021-07-31 15:40 UTC (7 hours ago)
        
 (HTM) web link (locusmag.com)
 (TXT) w3m dump (locusmag.com)
        
       | lamebitches wrote:
       | Covid is a bio-weapon. Fauci is the dealer.
        
       | 7357 wrote:
       | C. Doctorow is one of these (admittedly few) famous people I'd
       | like to meet IRL.
        
       | radu_floricica wrote:
       | I think he's doing a bit of bait and switch there. Knowing
       | reliably whether arrests are genuinely racist or if winks are
       | flirtatious is superhuman intelligence.
       | 
       | > But the idea that if we just get better at statistical
       | inference, consciousness will fall out of it is wishful thinking.
       | 
       | I'm a mostly disinterested spectator in current AI research, and
       | even I know that it's not all about that. Just google "AI
       | alignment" for an example, and god only knows what's going on in
       | private research.
        
         | akomtu wrote:
         | I think the definition of racism in this context can be simple.
         | If the rate of false positives for blacks is significantly
         | higher than the average across the nation, then it's racism.
         | Significantly higher can mean "one stddev higher".
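          | 
          | A minimal sketch of that check in Python (the group names and
          | counts are made up; it assumes you already have per-group
          | false-positive and true-negative counts):
          | 
          |     import statistics
          | 
          |     # hypothetical per-group (false_positives, true_negatives)
          |     counts = {"group_a": (120, 880),
          |               "group_b": (300, 700),
          |               "group_c": (100, 900)}
          | 
          |     # false-positive rate per group
          |     rates = {g: fp / (fp + tn) for g, (fp, tn) in counts.items()}
          |     mean = statistics.mean(rates.values())
          |     stddev = statistics.pstdev(rates.values())
          | 
          |     # flag any group more than one stddev above the average
          |     flagged = [g for g, r in rates.items() if r > mean + stddev]
          |     print(rates, flagged)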
        
       | vijucat wrote:
       | > Let's talk about what machine learning is...it analyzes
       | training data to uncover correlations between different
       | phenomena.
       | 
       | The author seems to have missed or excluded reinforcement
       | learning and planning algorithms in this definition.
       | 
       | My criticism of AI criticism in general is that no one admits
       | that at the root of it, we do not understand thinking (or
       | "consciousness"). We are merely the "recipient" or enjoyer of the
        | process, which is opaque. Just as AlphaGo, even if it is just a
        | facsimile of a Go player, could beat a human at Go, it is
        | probable that an AI could produce a passable facsimile of
        | thinking at some point. Its mechanisms would be as opaque as
        | human thinking (even to itself), but the results would be
        | undeniable. AGI is a possibility.
        
       | atty wrote:
       | Unfortunately it's pretty clear from the article that Cory does
       | not have much familiarity with the research going on in the field
       | of machine learning, and is creating a straw man. Quite a lot of
       | work is being done on causal inference, out-of-distribution
       | generalization, fairness, etc. Just because that is not the focus
       | of the big sexy AI posts from Google et al does not mean that the
       | work isn't being done. I'd also point out that humans can infer
       | causality for simple systems, but for any sufficiently complex
       | system we also can't reason causally. But that does not mean we
       | can't infer useful properties and make informed, reasonable
       | decisions.
       | 
       | I'd also point out that not all models are "theory-free", as he
       | describes it. I specifically do work in areas where we combine
       | "theory" and machine learning, and it works very well.
       | 
       | And finally, his point about comprehension does not really fly
       | for me. There is no magical comprehension circuit in our brain.
       | It's all done via biological processes we can study and emulate.
       | Will that end up being a scaled up version of current neural
       | nets? Will it need to arise from embodied cognition in robots?
       | Will it be something else? I don't know, but it's certainly not
       | magic, and we'll get there eventually. Whether that's 10 years or
       | 1000, who knows.
       | 
       | Are current paradigms going to lead to AGI? Frankly, I'd just be
       | guessing if I even tried to answer that. My gut instinct is no,
       | but again, that's just a guess. Can current methods evolve into
       | better constrained systems with more generalizable results and
       | measurable fairness? Absolutely.
        
         | version_five wrote:
         | I'm not sure what issue others had with your comment. You're
         | quite correct that he ignores vast swaths of current ML art and
         | attacks a narrow conception of what ML is. Many of his
          | criticisms are legit with the right caveats, but he leaves out a
         | lot of information that could lead to a different thesis.
         | 
         | My read of the discussion here is that there is lots of idle
         | speculation by people who don't have any real experience with
         | ML research / engineering, that overwhelms a minority who
         | actually know what they are talking about and are calling CD
         | out on this, or at least challenging aspects of his arguments.
        
         | belter wrote:
         | Do you have an example/reference of the type of work you are
          | thinking about?
        
         | jmull wrote:
         | > Are current paradigms going to lead to AGI? Frankly, I'd just
         | be guessing if I even tried to answer that. My gut instinct is
         | no
         | 
         | I'll just note that while you start off saying Doctorow has no
         | idea what he's talking about, you finish by pretty much fully
         | agreeing with the essay.
        
       | jstx1 wrote:
       | > I am an AI skeptic. I am baffled by anyone who isn't. I don't
       | see any path from continuous improvements to the (admittedly
       | impressive) 'machine learning' field that leads to a general AI
       | 
       | - I share the skepticism towards any progress towards 'general
       | AI' - I don't think that we're remotely close or even on the
       | right path in any way.
       | 
       | - That doesn't make me a skeptic towards the current state of
       | machine learning though. ML doesn't need to lead to general AI.
       | It's already useful in its current forms. That's good enough. It
       | doesn't need to solve all of humanity's problems to be a great
       | tool.
       | 
       | I think it's important to make this distinction and for some
       | reason it's left implicit or it's purposefully omitted from the
       | article.
        
         | darkwater wrote:
         | > I think it's important to make this distinction and for some
         | reason it's left implicit or it's purposefully omitted from the
         | article
         | 
          | I beg to disagree. They clearly state your opinion at the end
          | of the piece, using the metal-beating analogy. Great things
          | were done by blacksmiths beating metal, but beating metal
          | never produced an ICE.
        
         | bhntr3 wrote:
         | > I don't see any path from continuous improvements to the
         | (admittedly impressive) 'machine learning' field that leads to
         | a general AI
         | 
         | > I share the skepticism towards any progress towards 'general
         | AI' - I don't think that we're remotely close or even on the
         | right path in any way.
         | 
         | This isn't how science works though. Quoting the wikipedia page
          | for Thomas Kuhn's "The Structure of Scientific Revolutions"
          | (https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Re...):
         | 
         | "Kuhn challenged the then prevailing view of progress in
         | science in which scientific progress was viewed as
         | "development-by-accumulation" of accepted facts and theories.
         | Kuhn argued for an episodic model in which periods of
         | conceptual continuity where there is cumulative progress, which
         | Kuhn referred to as periods of "normal science", were
         | interrupted by periods of revolutionary science."
         | 
         | I think this is the accepted model in the philosophy of science
         | since the 1970s. That's why I find this argument about AI so
         | strange, especially when it comes from respected science
         | writers.
         | 
         | The idea that accumulated progress along the current path is
         | insufficient for a breakthrough like AGI is almost obviously
         | true. Your second point is important here. Most researchers
         | aren't concerned with AGI because incremental ML and AI
         | research is interesting and useful in its own right.
         | 
         | We can't predict when the next paradigm shift in AI will occur.
         | So it's a bit absurd to be optimistic or skeptical. When that
         | shift happens we don't know if it will catapult us straight to
         | AGI or be another stepping stone on a potentially infinite
         | series of breakthroughs that never reaches AGI. To think of it
         | any other way is contrary to what we know about how science
         | works. I find it odd how much ink is being spent on this
         | question by journalists.
        
           | GeorgeTirebiter wrote:
           | This seems akin to Asimov's "Elevator Effect":
            | https://baixardoc.com/preview/isaac-asimov-66-essays-on-the-...
            | starting p. 221.
           | 
           | I agree that one would think that Science Fiction writers
           | would have enough of an imagination to be able to consider
           | alternate futures (Cory CYA's by saying such a scenario would
           | make a good SF story) - but there are already promising
           | approaches to AGI: Minsky's "Society of Mind", Jeff Hawkins'
            | neuro-based approaches, the fairly new Hinton idea GLOM:
            | https://www.technologyreview.com/2021/04/16/1021871/geoffrey...
           | 
           | "By 2029, computers will have human-level intelligence,"
           | Kurzweil said in an interview at SXSW 2017.
           | 
            | Time to get to work, eh?
            | https://www.timeanddate.com/countdown/to?msg=Kurzweil%20AGI%...
        
             | simonh wrote:
              | 1960s - Herbert Simon predicts "Machines will be capable,
              | within 20 years, of doing any work a man can do."
             | 
             | 1993 - Vernor Vinge predicts super-intelligent AIs 'within
             | 30 years'.
             | 
              | 2011 - Ray Kurzweil predicts the singularity (enabled by
             | super-intelligent AIs) will occur by 2045, 34 years after
             | the prediction was made.
             | 
              | So until his revised timeline for 2029, the distance into
              | the future before we achieve strong AI and hence the
              | singularity was, according to its most optimistic
              | proponents, receding by more than 1 year per year.
             | 
              | I wonder what it was that led him to revise his timeline
              | so aggressively. I think all of those predictions were
              | unfounded; until we have a solid concept for an
              | architecture and a plan for implementing it, an informed
              | timeline isn't possible.
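              | 
              | A quick back-of-envelope on those horizons (Python; using
              | 1965 as a rough stand-in for the "1960s" prediction):
              | 
              |     predictions = [("Simon", 1965, 1965 + 20),
              |                    ("Vinge", 1993, 1993 + 30),
              |                    ("Kurzweil", 2011, 2045)]
              |     for who, made, due in predictions:
              |         print(who, due, f"({due - made} years out)")
              |     # the predicted date slid from 1985 to 2045 (60 years)
              |     # while only 46 years passed: > 1 year per year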
        
             | dctoedt wrote:
             | Elevator effect:
              | https://indianapublicmedia.org/amomentofscience/elevator-eff...
        
           | simonh wrote:
           | >So it's a bit absurd to be optimistic or skeptical.
           | 
            | We skeptics aren't skeptical that AI is possible, we're
           | skeptical of specific claims. I think it's perfectly
           | reasonable to be skeptical of the optimistic estimates, since
           | they really are little more than guesses with little or no
           | foundation in evidence.
        
           | dcow wrote:
           | I think you're misunderstanding Kuhn slightly. He invented
           | the term paradigm shift. What he means by normal science with
           | intertwined spurts of revolution is more provocative. He
           | means that in order to observe periods of revolution, the
           | "dogma" of normal science must be cast aside and new normal
           | must move in to replace it. Normal science hits a wall, gets
           | stuck in a "rut" as Kuhn describes it.
           | 
           | I think, in a way, Doctorow is making that same argument for
           | the current state of ML: _" I don't think that we're remotely
           | close or even on the right path in any way"_. In other words,
           | general thinking that ML will lead to AGI is stuck in a rut
           | and needs a new approach and no amount of progressive
           | improvement on ML will lead to AGI. I don't think Doctorow's
           | opinion here is especially insightful, he's just a writer so
           | he commits thoughts to words and has an audience. I don't
            | even know whether I agree or not. But I do think this piece
           | comes off as more in the spirit of Kuhn than you're
           | suggesting.
           | 
           | And of course you can interpret Kuhn however you want. I
           | don't think Kuhn was saying you shouldn't use/apply the tools
           | built by normal science to everyday life. But he, subtly,
           | argues that some level of casting off entrenched dogmatic
           | theories, in the academic domain, is a requirement for
           | revolutionary _progress_. Kuhn agrees that rationalism is a
           | good framework for approaching reality, but also equates
           | phases of normal science to phases of religious domination
           | that predated it. Essentially truly free thought is really
           | really hard because society invents normals (dogma) and makes
           | it hard to deviate. Academia is no exception. Science, during
           | periods of normals, is (or can become) essentially over-
           | calibrated and over-dependent on its own contemporary
           | zeitgeist. If some contemporary theory that everyone bases
           | progressive research off of is not quite right, it kinda
           | spoils the derivative research. Not always true because
           | sometimes the theories are correct.
        
           | gmadsen wrote:
           | is this related to Foucault? in an old debate with Chomsky,
           | Foucault spends a lot of time on a concept similar to what
           | you are talking about
        
           | coldtea wrote:
           | > _I think this is the accepted model in the philosophy of
           | science since the 1970s._
           | 
           | Perhaps, but "philosophy of science" has never been something
            | the majority of practicing scientists consider relevant, care
           | about, or are influenced by, since forever.
        
         | cratermoon wrote:
          | There's good reason to be skeptical of AI as it is. Here are a
          | couple of reasons:
         | 
         | Racial bias in facial recognition: "Error rates up to 34%
          | higher on dark-skinned women than for lighter-skinned males."
         | "Default camera settings are often not optimized to capture
         | darker skin tones, resulting in lower-quality database images
         | of Black Americans"
         | https://sitn.hms.harvard.edu/flash/2020/racial-discriminatio...
         | 
         | Chicago's "Heat List" predicts arrests, doesn't protect people
          | or deter crime:
          | https://mathbabe.org/2016/08/18/chicagos-heat-list-predicts-...
        
           | pbhjpbhj wrote:
           | I'm curious how the physics of light is termed racial bias,
           | it's skin-colour bias if anything -- you can be "black" and
           | be lighter skinned than a "white" person, for example -- but
           | surely it's a consequence of how cameras/light works rather
           | than a bias.
           | 
           | Of course if you don't take account of the difficulties that
           | come with using the tool then you might be acting with racial
           | bias, but that's different. Or, all cameras/eyes/visual
           | imaging means are "racist".
        
           | mgraczyk wrote:
           | It's very easy to fix these problems though. There's nothing
           | inherently broken about the models or direction that prevents
           | error rates from being made more uniform. In fact newer
           | facial recognition models with better datasets do perform
           | approximately equally well across skin tones and sex
        
         | wffurr wrote:
         | Isn't that what's meant by "admittedly impressive"?
        
         | SavantIdiot wrote:
          | I'm both.
         | 
         | Why I'm pro-AI: Neural nets.
         | 
         | I worked on object detection for several years at one company
         | using traditional methods, predating TensorFlow by a few years.
         | We had a very sophisticated pipeline that had a DSP front end
         | and a classical boundary detection scheme with a little neural
         | net. The very first SSDMobileNet we tried blew away 5 years
         | worth of work with about two weeks of training and tuning.
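          | 
          | For anyone curious, the "off the shelf" route today can be as
          | short as the rough sketch below, using torchvision's pretrained
          | SSDlite/MobileNetV3 detector as a stand-in (the file name and
          | the 0.5 score cut-off are just illustrative):
          | 
          |     import torch, torchvision
          |     from torchvision.transforms.functional import to_tensor
          |     from PIL import Image
          | 
          |     detect = torchvision.models.detection
          |     model = detect.ssdlite320_mobilenet_v3_large(pretrained=True)
          |     model.eval()
          | 
          |     img = to_tensor(Image.open("frame.jpg").convert("RGB"))
          |     with torch.no_grad():
          |         out = model([img])[0]  # keys: 'boxes', 'labels', 'scores'
          | 
          |     keep = out["scores"] > 0.5  # crude confidence threshold
          |     print(out["boxes"][keep], out["labels"][keep])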
         | 
         | Other peers of mine work in industrial manufacturing, and
         | classification and segmentation with off the shelf NN's has
         | revolutionized assembly line testing almost overnight.
         | 
         | So yes, DNNs _absolutely_ do some things vastly better than
          | previous technology. Hands down.
         | 
         | Why I'm Anti-AI: hype
         | 
         | The class of problems addressed by recent developments in
          | NN/DNN software has failed horribly in scaling to even
         | modestly real-world, rational multi-tasking. ADAS level 5 is
         | the poster child. When hype master Elon Musk backs away, that
         | is telling.
         | 
         | We're on the bleeding edge here, IMHO we NEED to try
         | everything. There's no telling which path has fruit. Look at
         | elliptic curves: half a century with no applications, now they
         | are the backbone of the internet. Yes, there will be BS, hype,
         | snake oil, vaporware, but there will also be some amazing tech.
         | 
         | I say be patient and skeptical.
        
         | shreyshnaccount wrote:
         | I'm in favor of changing the terminology from AI and ML to
         | something along the lines of 'prediction model' so that the
         | idea of machines 'thinking' is replaced with them 'predicting'.
          | It's just easier for our mushy meat brains to think that AI
          | and ML mean it'll lead to general AI, or as I like to call it,
          | a 'general purpose decision maker'. It's all about the
          | language!
        
           | kzrdude wrote:
           | ML seems to be an ok term to me? It's the "intelligence" part
           | in AI that needs a disclaimer.
        
           | esfandia wrote:
           | We already have "Pattern Recognition", not sure why it got
           | absorbed by Machine Learning (the two terms seemed to co-
           | exist with some overlap on what they covered), and then ML
           | got absorbed by AI.
        
             | jstx1 wrote:
             | ML is still widely used and is much more common than AI as
             | a term. So I wouldn't say that it has been absorbed by AI
             | but their use sometimes overlaps depending on the target
             | audience.
        
           | coddle-hark wrote:
           | I like the term "data driven algorithm". It makes it clear to
           | everyone involved that what we're doing is just adjusting an
           | algorithm based on the data we have. No-one in their right
           | minds would confuse that with building a true "A.I.".
        
             | spockz wrote:
             | What about "data derived algorithm"? The algorithm itself
             | isn't really driven by data after it has been designed
             | anymore.
        
               | skohan wrote:
               | I mean if we want to be really accurate, we could say
               | something like "highly dimensional data-derived function"
        
               | shreyshnaccount wrote:
               | why stop at that? 'high dimensional matrix parameterised
               | data derived non linear function optimisation and unique
               | hypothesis generation' just rolls off the tongue doesn't
               | it xD
        
             | tw04 wrote:
             | To be frank: that very much does not make it clear to
             | everyone involved. If you told the average Joe you had a
             | "data driven algorithm" instead of "AI" you would likely
             | get a blank stare in return.
        
               | [deleted]
        
               | shreyshnaccount wrote:
               | confusion is better than wrongful understanding?
        
             | falcor84 wrote:
             | I'm sorry to say that I don't see any clear line separating
             | "data driven algorithms" from the embodied minds that we
             | are.
        
             | gmadsen wrote:
             | why? we don't understand the architecture, but the brain
             | certainly uses electrical signals in an algorithmic way
        
           | zoomablemind wrote:
            | In the not so long past, there was another popular expression
            | - "computer-aided ..." - which fit practical use quite well
            | (like CAD for design, CAT for translation, etc.)
           | 
           | Perhaps, CAI for inference or insight would express it more
           | fairly.
           | 
           | Alternatively, AI could've stood for 'automated inference',
           | but sure it's all too late to rebrand.
           | 
            | We humans are still not clear about the nature of our own
            | intelligence, yet we already claim to be able to manufacture
            | it.
        
             | skohan wrote:
             | I think inference isn't the right term either. I think
             | current ML is more like automated inductive reasoning.
        
               | aidenn0 wrote:
               | Automated inductive reasoning sounds a lot like
               | artificial intelligence to me...
        
               | skohan wrote:
               | Idk maybe it's semantics, inference to me sounds more
               | like a logical leap is happening, whereas in my mind the
               | simplest form of inductive reasoning is just expecting a
               | pattern to repeat itself.
        
           | JohnJamesRambo wrote:
           | Do I think or predict?
        
             | quickthrower2 wrote:
             | I predict therefore I will be
        
           | MR4D wrote:
           | I propose "heuristic optimization".
        
           | akomtu wrote:
           | Iirc, predictive coding is a well known branch of math that's
           | said to be the next big step towards AI.
        
         | skohan wrote:
         | Yeah I agree - during undergrad, I spent a few years studying
         | neuroscience, and I was very let down by my first ML/AI course.
         | Compared to what I had learned about the brain, what we called
         | an "ANN" just seemed like such a silly toy.
         | 
         | The more you learn about neurobiology, the more apparent it is
         | that there are _so many_ levels of computation going on -
         | everything from dendritic structure, to cellular metabolism, to
         | epigenetics has an effect on information processing. The idea
         | that we could reach some approximation of  "general
         | intelligence" by just scaling up some very large matrix
         | operations just seemed like a complete joke.
         | 
         | However, as you say, that doesn't mean what we've done in ML is
         | not worthwhile and interesting. We might have over-reached
          | thinking ML is ready to drive a car without major forthcoming
         | advancements, but use-cases like style transfer and DLSS 2 are
         | downright magical. Even if we just made marginal improvements
         | in current ML, I'm sure there is a ton of untapped potential in
         | terms of applying this tech to novel use-cases.
        
           | fossuser wrote:
           | I'm not sure I buy that - biology is often messier because of
           | nature related constraints, it gets optimized for other
           | things (energy, head size, etc.)
           | 
           | The way a plane flies is quite different than the way a bird
           | flies in complexity - they share an underlying mechanism, but
           | planes don't need to flap wings.
           | 
           | It's possible that scaling up does lead to generality and
           | we've seen hints of that.
           | 
            | - https://deepmind.com/blog/article/generally-capable-agents-e...
           | 
           | Also check out GPT-3's performance on arithmetic tasks in the
           | original paper (https://arxiv.org/abs/2005.14165)
           | 
           | Pages: 21-23, 63
           | 
            | Which shows some generality: the best way to accurately
            | predict an arithmetic answer is to deduce how the
            | mathematical rules work. That paper shows some evidence of
            | that, and that's just from a relatively dumb
            | predict-what-comes-next model.
           | 
           | It's hard to predict timelines for this kind of thing, and
           | people are notoriously bad at it. Few would have predicted
           | the results we're seeing today in 2010. What would you expect
           | to see in the years leading up to AGI? Does what we're seeing
           | look like failure?
        
             | dtech wrote:
             | > It's hard to predict timelines for this kind of thing,
             | and people are notoriously bad at it. Few would have
             | predicted the results we're seeing today in 2010. What
             | would you expect to see in the years leading up to AGI?
             | Does what we're seeing look like failure?
             | 
             | Few have predicted a reasonably-capable text-writing engine
             | or automatic video face replacement, but many have
             | predicted self-driving cars would have been readily
              | available to consumers by now and semi-intelligent helper
              | robots would be around.
             | 
             | Just because unforeseen advancements have been made, does
             | not mean that foreseen advancements come true.
        
             | skohan wrote:
             | I've heard this airplane argument before, and while I do
             | consider it plausible that AGI might be achievable with
             | some system which is fundamentally much different than the
             | human brain, I still don't think it can be achieved using
             | simple scaling and optimization of the techniques in use
             | today.
             | 
             | I think this for a couple reasons:
             | 
             | 1. The current gap in complexity is _so huge_. Nodes in an
             | ANN roughly correspond to neurons, and the brain has
             | somewhere on the order of 100 billion of them.
             | 
             | Even if we built an ANN that big, we would only be
             | scratching the surface of the complexity we have in the
             | brain. Each synapse is basically an information processing
             | unit, with behavioral characteristics much more complicated
             | than a simple weight function.
             | 
             | 2. The brain is highly specific. The structure and function
             | of the auditory cortex is totally different to that of the
             | motor cortices, to that of the hypothalamus and so on. Some
             | brain regions depend heavily on things like spike timing
             | and ordering to perform their functions. Different brain
             | regions use different mechanisms of plasticity in order to
             | learn.
             | 
             | Currently most ANN's we have are vaguely inspired by the
             | visual cortex (which is probably why a lot of the most
             | interesting things to come out of ML so far have been
             | related to image processing) and use something roughly
             | analogous to net firing frequency for signal processing. I
             | would consider it highly likely that our current ANNs are
             | just structurally incapable of performing some of the types
             | of computation we would consider intrinsically linked to
             | what we think of as general intelligence.
             | 
             | To make the airplane analogy, I believe we're probably
             | closer to Leonardo da Vinci's early sketches of flying
              | machines than we are to the Wright Brothers. We might have
             | the basic idea, but I would wager we're still missing some
             | of the key insights required to get AGI off the ground.
             | 
             | edit: it looks like you added some lines while I was
             | typing, so to respond to your last points:
             | 
             | > it's hard to predict timelines for this kind of thing,
             | and people are notoriously bad at it. Few would have
             | predicted the results we're seeing today in 2010. What
             | would you expect to see in the years leading up to AGI?
             | Does what we're seeing look like failure?
             | 
             | I totally agree that it's hard to predict, that technology
             | usually advances faster than we expect, and that tremendous
             | progress is being made. But the road to understanding human
             | intelligence has been characterized by a series of periods
             | of premature optimism followed by setbacks. For instance,
             | in the 20th century, when dyes were getting better, and we
             | were starting to understand how different brain regions had
             | different functions, it may have seemed like we were close
             | to just mapping all the different pieces of the brain, and
             | that completing the resulting puzzle would give a clear
             | insight into the workings of the human mind. Of course it
             | turns out we were quite far from that.
             | 
             | As far as what we can expect in the years leading up to
             | AGI, I suspect it's going to be something that comes on
             | gradually - I think computers will take on more and more
             | tasks that were once reserved for humans over time, and the
             | way we think about interfacing with technology might change
             | so much that the concept of AGI might not seem relevant at
             | some point.
             | 
             | As to whether the current state of things is a failure - I
             | would not characterize it that way. I think we're making
             | real progress, I just also think there is a bit of hubris
             | that we may have "cracked the code" of true machine
             | intelligence. I think we're still a few major revelations
             | away from that.
        
           | version_five wrote:
           | So you took an undergrad ML course and you're using this as
           | the basis for your conclusions about how ML can scale? You
           | understand modern neural networks as large matrix operations
           | and then attack that idea leading to intelligence as a joke?
           | 
           | I also find it improbable that intelligence will emerge from
           | modern ML without some major leap. But you have added nothing
           | to the discussion, beyond some impressions from undergrad,
           | when we are talking about something that is a very active and
           | evolving research area. It's insulting to researchers and
           | practitioners who have devoted years to studying ML to just
           | dismiss broad areas of applicability because you took a
           | course once.
        
             | skohan wrote:
             | I'm sorry, I don't mean to insult or offend anyone. I'm
             | just recounting my observations based on my understanding
             | of the subject - and that is really not to disparage the
             | amazing work that's being done, but rather to highlight the
             | scale of the problem you have to solve when you're talking
             | about creating something similar to human intelligence.
             | It's entirely possible I'm wrong about this, and I would
             | love to be proven so.
             | 
             | Do you disagree substantively with anything I have said, or
             | do you just think I could have phrased it better?
        
               | version_five wrote:
               | Thanks for your reply. I suppose a quick way to summarize
               | my criticism is that it reads to me like you've dismissed
               | the strengths of ML on technical grounds, while you imply
               | you don't have any real technical experience in the
               | field. You make a superficial comparison between the
                | complexity of biology and ML, without providing any real
               | insight, just saying one has lots going on and the other
               | is matrix multiplication.
               | 
               | If your conclusion is that current gradient based methods
               | probably won't scale up to AGI, you're probably right.
               | But if you want to get involved in the discussion of why
               | this is true, what ML actually can and can't do, etc. I
               | would encourage you to learn more about the subject and
               | the current research areas, and draw on that for your
               | discussion points.
               | 
               | Otherwise, it comes across as "I once saw a podcast that
               | said..." type stuff that is hard to take seriously.
               | 
               | No doubt I come across as condescending, please take what
               | I say with the usual weight you'd assign to the views of
               | a random guy on the internet :)
        
         | jmull wrote:
         | > it's left implicit or it's purposefully omitted from the
         | article
         | 
         | It's explicitly right there in the essay...
         | 
         | > Machine learning has bequeathed us a wealth of automation
         | tools that operate with high degrees of reliability to classify
         | and act on data acquired from the real world. It's cool!
         | 
         | > Brilliant people have done remarkable things with it.
         | 
         | You seem to be in agreement with the article but don't realize
         | it.
        
       | okareaman wrote:
       | > But the idea that if we just get better at statistical
       | inference, consciousness will fall out of it is wishful thinking.
       | It's a premise for an SF novel, not a plan for the future.
       | 
        | My impression of Silicon Valley types like Ray Kurzweil in "The
        | Age of Spiritual Machines" is that if we wire up enough
        | transistors, consciousness will somehow arise out of the
        | material world. The "somehow" is not explained. Materialism is a
        | dead end in
       | my opinion. I am more interested in theories about consciousness
       | as a field and our brains as receivers.
        
         | naasking wrote:
         | Everyone I've ever spoken to who has insisted that materialism
         | is a dead end, has never been able to provide a compelling
         | explanation for why they believe that. It's not as if
         | materialistic progress in neuroscience and ML/AI has stalled.
         | If anything, it's accelerating.
         | 
         | I have no doubt that Kurzweil's timelines and outcomes are
         | wrong, as have the predictions of just about every prior
         | futurist. I don't see what that has to do with materialism
         | being a dead end.
        
         | Trasmatta wrote:
         | If our brains are receivers to a field of consciousness, why
         | would it be impossible to replicate one of those receivers with
         | a machine?
         | 
         | You also seem to have just kicked the can down the road.
         | "Consciousness arises from a field somehow, and the brain acts
         | as a receiver somehow. The somehow is not explained."
        
           | okareaman wrote:
           | I didn't say I knew how. I said I believe materialism is a
            | dead end, by which I mean I doubt that consciousness arises
           | out of atoms configured as neurons. How those neurons receive
           | a conscious field seems a more productive line of inquiry,
           | but for some reason people resist this idea. Not sure why.
        
             | Trasmatta wrote:
             | My main point was that you seemed to be criticizing
             | materialism for not yet having a solid answer for "how",
             | which is the same issue any alternative theory has.
        
             | akomtu wrote:
             | Imho, materialism and non-materialism mesh well together.
              | It's just that the two camps, materialists and occultists, are
             | too arrogant to recognize that the other camp might
             | understand certain things better.
             | 
             | A self aware intelligent organism or machine needs three
             | key components: a material foundation that's sufficiently
             | organized (a large net of neurons, a silicon crystal,
             | etc.), a material fluid-like carrier to control the
             | foundation (that's always electricity and magnetism) and
             | the immutable immaterial principle to constrain the carrier
             | (math rules, physical laws, software algorithms). That's
             | the core idea of occultism rephrased in today's
             | terminology.
             | 
             | The "conscious field" would be identical with the magnetic
             | field here and neurons don't need any magical properties to
             | receive this field: they just need to be conductive, like
             | transistors. I think the reason the AI progress has stalled
             | is because 0-1 transistors are too primitive and too rigid
             | for the task. I guess that superintelligence is only
             | different in the performance and connectivity degree of the
             | material foundation: instead of slow neurons with 10k of
             | connections it would be fast quasi crystal like structure
             | with billions of connections that needs to move very little
             | matter around (but it has to be material and consist of
             | atoms of some sort).
        
             | dane-pgp wrote:
             | By studying the atoms configured as neurons, we've managed
             | to develop machines that can learn to play board games and
             | Atari games better than humans, and can write prose and
             | poetry at a convincingly human level. Those skills may not
             | require consciousness, but it's not clear that these
             | machines would be more useful if they could "receive a
             | conscious field".
             | 
             | Do you think that animals receive a conscious field? Could
             | we create an accurate representation of a mouse's brain
             | just from modelling its neurons? If a mouse brain can't
             | receive a conscious field, but a human brain can, then what
             | relevant physiological differences are there between the
             | two, other than size?
        
       | nlh wrote:
       | This is a well-written and well-reasoned argument - BUT - I tend
       | toward the materialist philosophy, so the argument doesn't really
       | hold there.
       | 
       | Yes, an ML model that infers B from A might not "understand" what
       | A or B are....yet. But what is it to "understand" anyway? Just a
       | more complex process in a different part of the machine.
       | 
       | If the human brain is just a REALLY large, trained, NN, there's
       | no reason that we won't be able to replicate it given enough
       | computing power.
        
         | jaredklewis wrote:
         | > If the human brain is just a REALLY large, trained, NN,
         | there's no reason that we won't be able to replicate it given
         | enough computing power.
         | 
         | I think one clear sign that the human mind is more than just a
         | big NN is how large neural networks are already.
         | 
          | Take GPT-3, which was trained on 45 terabytes of text and
         | has 175 billion parameters. Contrast that with the human brain,
         | which has around 86 billion neurons and is able to do much of
         | what GPT-3 can do with only a tiny fraction of the training
         | data. And it has to be said that while GPT-3 has more
         | competency than an average human at some text generation
         | related tasks, the average human brain is vastly more capable
         | than GPT-3 at any non-text related task.
         | 
         | So for neural networks to approach human level capability we
         | would need a whole stack of GPT3-ish size networks for all the
         | other non-text related things the human brain can do: speech,
         | vision, motor control, social interactions, and so on. By that
         | point the amount of training data and parameters is so
         | astronomical, there can be no question that the functioning of
         | human brains must be significantly different than that of
         | contemporary computer neural networks.
         | 
         | To be clear, I am also a materialist and subscribe to the
         | computational theory of mind, but just based on the size of
         | training data alone, it seems obvious that human brains work
         | differently than neural networks.
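          | 
          | For a sense of scale, a back-of-envelope comparison (the
          | synapses-per-neuron figure is a commonly cited rough estimate,
          | not a measurement):
          | 
          |     gpt3_params = 175e9        # parameters, per the GPT-3 paper
          |     brain_neurons = 86e9       # rough neuron count
          |     synapses_per_neuron = 1e4  # order-of-magnitude estimate
          | 
          |     brain_synapses = brain_neurons * synapses_per_neuron
          |     # roughly 5,000x more synapses than GPT-3 has parameters
          |     print(brain_synapses / gpt3_params)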
        
       | yarg wrote:
        | Past performance is not indicative of future results across
       | distinct domains.
       | 
       | Within a single problem space (or sub-space) past performance can
       | generalise quite well.
       | 
       | There's a problem with scaling solutions and expecting
       | performance to continue to increase in a continuous exponential
       | manner: growth that we perceive as exponential is often only on a
       | long-life S-Curve.
       | 
       | We've seen this in silicon, where what appears to the layman to
       | have been exponential growth has in fact been a sequence of more
       | limited growth spurts bound by the physical limits of scaling
       | within whatever model of design was active at the time.
       | 
       | The question of where the bounds to the problem domains are, and
       | when new ideas or paradigms are required is much more difficult
       | in AI than it has been in microprocessors.
       | 
       | It's easy enough to formulate the question "how small can this be
       | before the changes in physical characteristics at scale prevent
       | it from working?", if rather more difficult to answer.
       | 
       | AI is so damned steeped in the vagaries of the unknown that I
       | can't even think of the question.
        
       | Reimersholme wrote:
        | I feel like this would have felt more relevant maybe five to ten
        | years ago, when there was more of a feeling that deep neural
        | nets were the be-all and end-all. He mentions correlation vs
        | causation but seems
       | to have missed that causal inference is one of the most active
       | and interesting fields of research today.
        
       | rob_c wrote:
        | Someone buy this man a beer. Couldn't have phrased most of that
        | better had I tried, and I've been arguing these points with
        | staff for years.
        
       | swayvil wrote:
       | I see no path from "observation" to "model" that does not involve
       | an arbitrary (aesthetic? Nonrational, human-necessitating?)
       | choice.
       | 
       | This would suggest that "general" AI is impossible.
       | 
       | ON THE OTHER HAND
       | 
       | There is a variety of general AI, called an "optimizer". It
       | starts with something better than a void. Maybe that's the path
       | we should be looking at.
        
         | Reimersholme wrote:
         | Well, human thinking relies on prior models/filters for
         | understanding the world as well so that would invalidate us as
         | having general intelligence too?
        
           | username90 wrote:
           | Human thinking includes building new models/filters for
           | understanding the world, not just applying old ones. And that
           | isn't used for learning, we do it all the time when solving
           | any kind of challenging problem or even for simple problems
           | like trying to recognize a face. Computer models might never
           | compete with human performance unless they can learn how to
           | solve a problem as it is solving it, because that is what
           | humans do.
        
             | swayvil wrote:
             | I am on the same page.
             | 
             | To talk about the models some more...
             | 
             | There's this big mass of models. And it's got all kinds of
             | sections. Special sections that we learn about in school.
             | Special sections called "science". Sections that we invent
             | ourselves. Sections that we inherit from our parents,
             | religion, etc. It's partially biological. Partially
             | cultural. A massive library of models, mostly inherited.
             | 
             | You move in relationship with the mass in different ways.
             | 
             | You can create new models. That's what basic science is.
             | Extending the edge of the mass. Naming the nameless.
             | 
             | You can operate freely from the mass. Creating your own
             | models or maybe operating model-less. Artists, mystics,
             | weirdos.
             | 
             | You can operate completely within the mass. Never really
             | contending with unmodelled reality. The map and territory
             | become one. Like in a videogame. I think that's the most
             | popular way.
        
           | swayvil wrote:
           | Those relied-upon models may be acquired nonrationally.
           | 
           | Via aesthetics etc.
           | 
           | Or, in the case of the optimizer, I think the human
           | equivalent would be desire.
        
       | marcinzm wrote:
       | I find his comment about hallucinating faces in the snow amusing
       | given that humans hallucinate faces in things all the time. And
       | then either post it to Reddit or have a religious experience.
        
         | starmftronajoll wrote:
         | Yes, that is explicitly part of the point Doctorow is making.
         | It's why the essay mentions the fact that humans see faces in
         | clouds, etc. Humans typically know when they are
         | "hallucinating" a face, and ML algorithms don't. When humans
         | see a face in the snow, they post it to Reddit; they don't warn
         | their neighbor that a suspicious character is lurking outside.
         | This is the distinction the essay draws.
        
           | kzrdude wrote:
           | Well, we seem to experience such things in a split second,
           | _and then we correct ourselves_. We use some kind of
           | reasoning to double-check suspicious sensory experiences.
           | 
           | (I was thinking of this when I was driving in a new place.
           | Suddenly it looked like the road ended abruptly and I got
           | ready to act, but of course it didn't end and I realized that
           | just a split second later.)
        
           | marcinzm wrote:
           | People perceive nonexistent threats all the time and call the
           | police. The threshold is simply higher than current AI but
           | that's a question of magnitude rather than inherent
           | difference. Fine tune a reinforcement model on 5 years of 16
           | hours a day video and I'm sure it will also have a better
           | threshold.
        
             | kortilla wrote:
             | There is general knowledge about the world for humans to
             | know that there isn't a giant human in the sky no matter
             | how good the face looks.
             | 
             | Train it with as many images as you want and as long as a
             | good enough face shows up, the model is going to have a
             | positive match. The entire problem is it's missing that
             | upper level of intelligence that evaluates "that looks like
             | a face, could it actually be a human?"
        
               | marcinzm wrote:
               | >There is general knowledge about the world for humans to
               | know that there isn't a giant human in the sky no matter
               | how good the face looks.
               | 
               | Is there? Humans used to think the gods were literally
               | watching them from the sky and the constellations were
               | actual creatures sent into the night. So this seems
               | learned behavior from data rather than some inherent part
               | of human thinking.
               | 
               | >Train it with as many images as you want and as long as
               | a good enough face shows up, the model is going to have a
               | positive match.
               | 
               | So will a human if something is close enough to a face. A
               | shadow at night for example might look just like a human
               | face. Children will often think there's a monster in the
               | room or under their bed.
        
             | user-the-name wrote:
             | But very seldom do they do that because of a hallucination.
        
         | itisit wrote:
         | Those humans don't typically believe those hallucinated faces
         | belong to people though nor do they call the cops.
        
           | version_five wrote:
           | You don't think a person has ever called the police because
            | they heard a noise they thought was an intruder, or saw
           | someone or something suspicious only in their mind? People
           | make these kind of mistakes too.
        
             | itisit wrote:
             | Of course, but the consistency of the false positive is the
             | issue. An able-minded person can readily reconcile their
             | confusion.
        
               | version_five wrote:
               | An ML system generally can reconcile (and also avoid)
               | this kind of confusion, with present technology. The
               | example is more a question of responsible implementation
               | than of a gap in the state of the art.
        
               | marcinzm wrote:
               | Then that's a question of training data.
        
               | foobiekr wrote:
               | The problem with this line of reasoning is that it can be
               | used as a non-constructive counter to any observation
               | about AI failure. It's always more and more training data
               | or errors in the training set.
               | 
               | This really is a god-of-the-gaps answer to the concerns
               | being raised.
        
               | marcinzm wrote:
                | No, my point is that if two systems show very similar
                | classes of errors but at different thresholds, with one
                | trained on significantly more data, then the more likely
                | conclusion is that there isn't enough data for the other.
        
               | Dylan16807 wrote:
               | Don't most high-end machine learning solutions have more
               | training data than a human could consume in a lifetime?
        
               | version_five wrote:
               | I don't think there is a realistic way to make that
               | comparison.
               | 
               | For consideration, our brains start with architecture and
               | connections that have evolved over a billion years (give
               | or take) of training. Then we are exposed to a lifetime
               | of embodied experience coming in through 5 (give or take)
               | senses.
               | 
               | ML is picking out different things, but it's not obvious
               | to me that models are actually getting more data then we
               | have been trained on. Certainly GPT has seen more text,
               | but I don't think that comparing that to a person's
               | training is any more meaningful than saying we'll each
               | encounter tens of thousands of hours of HD video during
               | our training.
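                | 
                | Rough numbers for that video comparison (waking hours
                | and GB-per-hour are loose assumptions, just to show the
                | order of magnitude):
                | 
                |     years, hours_per_day = 20, 12
                |     hours = years * 365 * hours_per_day  # ~87,600 h
                |     gb_per_hour = 1.0   # compressed HD, very roughly
                |     total_tb = hours * gb_per_hour / 1000
                |     # ~88 TB of video vs GPT-3's ~45 TB of text
                |     print(hours, total_tb)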
        
               | username90 wrote:
                | They aren't very similar errors. ML solutions are about
                | as accurate as humans at a glance, but over longer
                | inspection humans clearly win. I'd say that the system is
                | similar to humans in some ways, but humans have a system
                | above that which is used to check whether the results
                | make sense or not; that higher-level system is completely
                | lacking from modern ML theory, and it doesn't seem to
                | work like our neural net models at all (the brain isn't a
                | neural net).
        
         | legrande wrote:
         | > Given that humans hallucinate faces in things all the time
         | 
         | Pareidolia: https://en.wikipedia.org/wiki/Pareidolia
        
       | dvt wrote:
       | What a confused and muddled post, trying to touch on psychology,
       | philosophy, and mathematics, and missing the mark on basically
       | all three. I'm quite bearish on AI/ML, but calling it a "parlor
       | trick" is like calling modern computers a parlor trick. I mean,
       | at the end of the day, they're _just_ very fast abacuses, right?
        | Let's face it: what ML has brought to the forefront -- from
       | self-landing airplanes to self-driving cars, to AI-assisted
       | diagnoses -- is pretty impressive. If you insist on being
       | reductive, sure, I guess it's "merely" statistics.
       | 
       | Bringing up quantitative vs qualitative analysis is just silly,
       | since science has had this problem way before AI. Hume famously
       | described it as the is/ought problem+. And that was a few hundred
       | years ago.
       | 
       | Finally, dropping the mic with "I don't think we're anywhere
       | close to consciousness" is just bizarre. I don't think that any
       | serious academic working in AI/ML has made any arguments that
       | claim machine learning models are "conscious." And Strong AI will
       | probably remain unattainable for a very long time (I'd argue
       | forever). This is not a particularly controversial position.
       | 
       | + Okay, it's not the same thing, but closely related. I suppose
       | the fact-value distinction might be a bit closer.
        
         | flyinglizard wrote:
         | > what ML has brought to the forefront -- from self-landing
         | airplanes to self-landing cars
         | 
         | I am not aware of any ML in flight controls. Being black box
         | and probabilistic by nature, these things won't get past
         | industry standards and regulations (at least for a while).
        
           | dvt wrote:
           | > I am not aware of any ML in flight controls. Being black
           | box and probabilistic by nature, these things won't get past
           | industry standards and regulations (at least for a while).
           | 
           | (Hah, I accidentally wrote "self-landing cars," fixed). But
           | yeah, I guess I was thinking more of drones, I'm not exactly
           | sure what ML (if any) is in the guts of a commercial or
           | military airplane.
        
       | 3gg wrote:
       | I found this to be a very succinct, sober analysis of ML ("AI")
       | techno-solutionism. Cory is a great writer and knows how to
       | explain ideas in a simple, no-nonsense way. This article reminded
       | me of Evgeny Morozov's "To Save Everything, Click Here", where
       | you can find many more examples of how focusing on the
       | quantitative aspect of a problem and ignoring the social,
        | qualitative context around it often goes wrong.
       | 
       | https://bookshop.org/books/to-save-everything-click-here-the...
        
       | Hacktrick wrote:
       | I just read one of his books for school.
        
       | iamnotwhoiam wrote:
       | Are there any approaches to artificial intelligence that do
       | involve qualitative data or don't rely entirely on statistical
       | inference?
        
         | Ericson2314 wrote:
          | Not really, at least nothing adjacent to what we do today.
         | 
         | I view A.I. as dual to "neoliberal M.B.A. culture". Just as the
         | business schools taught that managers should be generalists
         | without craft knowledge applying coarse microeconomics, A.I.
         | that we have created is the ultimate pliant worker that also
          | knows nothing deep and works from statistics. In a business
          | ecosystem where analytics and presentations are more important
          | than doing things, they are a perfect match. Of course, a
          | bunch of statistician-firms chasing each other in circles is
          | going to exhibit the folly, not wisdom, of crowds.
         | 
          | I think the solution is to face the reality that more people
          | need to learn programming, and more domain knowledge needs to
          | be codified old school. I thus think
          | https://www.ma.imperial.ac.uk/~buzzard/xena/ is perhaps the
          | best application of computing, ever.
         | 
         | Training A.I. to be a theorem prover tactic is a great way to
         | make it better: if we can't do theory and empiricism at the
          | same time, we can at least do meta-empiricism on theory
         | building!
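          | 
          | A minimal sketch of that "codified old school" idea in Lean
          | (a toy statement, not from the Xena project itself): the
          | theorem is the codified knowledge, and the tactic step is
          | the kind of move an ML-guided prover could learn to suggest.
          | 
          |     -- Lean 4: a small, machine-checked piece of arithmetic
          |     -- knowledge, closed by a single tactic step.
          |     theorem add_comm_example (a b : Nat) : a + b = b + a := by
          |       exact Nat.add_comm a b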
         | 
         | I think once we've codified all the domains like that and been
         | running A.I. on the theories, we'll be better positioned to go
          | back to the general A.I. problem, but we might also decide the
          | "manually programmed fully automated society" is easier to
          | understand and steer, and thus less alienating, and we won't
         | even want general A.I.
        
         | dr_dshiv wrote:
         | Cybernetics and control theory, broadly speaking, involve the
         | design of data feedback loops to govern simple machines or
         | complex socio-technical systems. For instance, an organization
         | might instrument a feedback loop to use qualitative survey data
         | to inform decision-making. That isn't ML, but it is
         | cybernetics. And, based on Peter Norvig's definition, it is a
         | form of AI.
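          | 
          | A minimal, hypothetical sketch of such a non-ML feedback
          | loop (a bang-bang thermostat; the setpoint and band are
          | made up):
          | 
          |     # A hand-written rule over a measured quantity. No
          |     # learning involved, yet it "governs" a simple machine.
          |     def thermostat_step(temp_c, setpoint_c=20.0, band=0.5):
          |         if temp_c < setpoint_c - band:
          |             return "heater on"
          |         if temp_c > setpoint_c + band:
          |             return "heater off"
          |         return "hold"
          | 
          |     for t in (18.0, 19.8, 20.4, 21.0):
          |         print(t, "->", thermostat_step(t))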
         | 
         | Consider that "autopilot" was invented in 1914, long before
         | digital computers. From this perspective, Artificial
         | Intelligence might even be seen as an ancient human practice--
         | present whenever humans have used artifacts to govern complex
         | systems.
        
         | jon_richards wrote:
         | Does qualitative data actually exist? Named colors are
          | considered qualitative, but RGB and CMYK are quantitative. Does
         | converting from one to the other switch whether it is
         | qualitative or quantitative?
         | 
         | Surely semantic meaning is qualitative, but look at word
         | replacement in Google search. That's entirely based on
         | statistics, thesaurus graphs, and other ultimately quantitative
         | data.
         | 
         | The neat thing about neural nets is that they are ultimately
         | making a very, very complicated stepwise function. Brains are
         | not neural nets, but are they doing anything other than create
         | a very complex, entirely numerical, time and state dependent
         | function? No matter which way you try to understand something,
         | ultimately you are relying entirely on statistical inference.
        
           | RandomLensman wrote:
           | Kind of does exist even with colors: try to map "brown" into
           | an RGB or CMYK data point.
           | 
           | I think the real difference is that in qualitative data the
           | numerical representation does not mean anything. Sure, the
           | names of the archangels can be represented digitally
           | (quantitative) but that is just a change of representation -
           | the bit strings' numerical value carries no theological
           | meaning.
        
             | jon_richards wrote:
             | Brown is (165,42,42). You can argue about false precision,
             | but the term "brown" has false precision as well. The
             | likely variation in interpretations can be described by
             | error bars. Your understanding of someone saying "brown" is
             | informed entirely by statistical inference of your past
             | experience with "brown".
             | 
             | Changing the representation of the names doesn't matter,
             | but attempting to understand the meaning behind the names
             | is ultimately quantitative. The numbers are run in the
             | giant black box that is your brain and then your
             | consciousness receives other qualitative answers.
             | 
             | Asking for an AI without statistical inference or
             | quantitative data is asking for consciousness without a
             | brain.
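              | 
              | A minimal sketch of that claim (the CSS named-color
              | value for "brown" really is (165, 42, 42); the
              | per-person samples below are hypothetical):
              | 
              |     import statistics
              | 
              |     # The "qualitative" name indexes a quantitative
              |     # convention...
              |     CSS_BROWN = (165, 42, 42)
              | 
              |     # ...and disagreement about "brown" can itself be
              |     # summarized statistically (a mean plus error bars
              |     # per channel).
              |     samples = [(165, 42, 42), (150, 75, 0),
              |                (139, 69, 19), (160, 82, 45)]
              |     mean = tuple(round(statistics.mean(c))
              |                  for c in zip(*samples))
              |     stdev = tuple(round(statistics.stdev(c), 1)
              |                   for c in zip(*samples))
              |     print(CSS_BROWN, mean, stdev)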
        
               | RandomLensman wrote:
               | What is quantitative in understanding the meaning of a
               | name? We don't know that the brain runs on "numbers" (and
                | no, it's not just like a "computer").
               | 
               | To respond to your edit: That is not brown... there is a
               | whole science of color perception, have a look.
        
               | jon_richards wrote:
               | It's not a computer, but it is quantified.
        
               | username90 wrote:
                | Numbers imply you can do mathematical operations on
                | them that make sense.
                | 
                | So how would you quantify "good" or "bad"? You can't
                | unless you also answer what "good" + "bad" should be. In
                | psychology they just assume that mapping those onto 5 and
                | 1 makes sense, so "good" + "bad" = 5 + 1 = 6, but that
                | doesn't make sense, since it would imply that "good" is
                | the same as "bad" + "bad" + "bad" + "bad" + "bad". You
                | get similar but different issues if you start including
                | negative numbers, or if you just use relative measures
                | without a proper zero. No matter what you do,
                | numbers don't properly represent feelings as we know
                | them.
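                | 
                | A tiny illustration of that mismatch, using the same
                | hypothetical coding ("bad" -> 1, "good" -> 5):
                | 
                |     codes = {"bad": 1, "good": 5}
                |     # The arithmetic runs fine, but the "facts" it
                |     # produces say nothing sensible about feelings.
                |     print(codes["good"] + codes["bad"])       # 6
                |     print(codes["good"] == 5 * codes["bad"])  # True?!
                |     # Ordinal data supports ordering, not sums or
                |     # ratios.
                |     print(codes["good"] > codes["bad"])  # meaningful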
        
               | RandomLensman wrote:
               | That touches on the really tricky point that some things
               | can be quantified but not computed, so again, we don't
                | know how that measurable representation relates to the
               | way results are derived.
        
         | m0rphy wrote:
         | Maybe if we could invent quantum DNA computing + ML =
         | artificial intelligence that would be perceived and understood
         | by humans.
        
         | mooneater wrote:
         | Well causal inference is considered distinct from statistical
         | inference, and accounts for part of the gap here. (Not sure I
         | would call that "qualitative" though.)
        
       | version_five wrote:
       | This article is mostly a straw man, while still containing some
       | valid ML criticism. I am a ML s(c|k)eptic too, in that popular
       | conceptions of what ML is currently overpromise, often don't even
       | understand what ML actually is, and are often just some
       | layperson's imagination about what "artificial intelligence"
       | might do.
       | 
       | This article is the opposite. He's treating ML as basically a
       | simple supervised architecture that doesn't allow any domain
       | knowledge to be incorporated and simply dead-reckons, making
       | unchecked inferences from what it learned in training. Under
       | these constraints, everything he says is correct. But there is no
       | reason ML has to be used this way, in fact it is extremely
       | irresponsible to do so in many cases. ML as part of a system
       | (whether directly part of the model architecture and learned or
       | imposed by domain knowledge) is possible, and is generally the
       | right way to build an "AI" system.
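        | 
        | A minimal sketch of that kind of system (the names, ranges and
        | threshold here are hypothetical, not from the article): a
        | learned score proposes, a hand-written domain rule disposes.
        | 
        |     # Python sketch: the model's anomaly score is only acted
        |     # on after a physically-motivated sanity check.
        |     def decide(value_c, model_score, threshold=0.9):
        |         if not (-40.0 <= value_c <= 125.0):
        |             # Domain knowledge: the sensor cannot physically
        |             # report this; treat it as an instrumentation
        |             # fault rather than an "anomaly".
        |             return "discard: implausible reading"
        |         if model_score >= threshold:
        |             return "flag for human review"
        |         return "ok"
        | 
        |     print(decide(900.0, 0.99))  # caught by the domain rule
        |     print(decide(85.0, 0.95))   # flagged; a person decides
        |     print(decide(21.0, 0.10))   # ok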
       | 
        | I think ML has its limitations and would be surprised to see
       | current neural networks evolve into AGI. But I also don't think
       | the engineers working in this space are as out to lunch as the
       | author seems to imply, and would not write off the possibilities
       | of what contemporary ML systems can accomplish based on the flaws
       | pointed out in relation to a very narrow view of what ML is.
        
         | karaterobot wrote:
         | > This article is mostly a straw man, while still containing
         | some valid ML criticism.
         | 
         | I don't think this is an example of a straw man, given that his
         | audience is readers of Locus, a science fiction magazine. While
         | researchers and practitioners in ML understandably hold a more
         | nuanced, informed view, the position he's arguing against is
         | pretty common among the general public, and certainly common in
         | science fiction.
        
         | mistrial9 wrote:
         | I like your comment here starting with "straw man" .. and agree
         | with some of the statements.. I have seen lengthy, detailed and
         | authoritative reports that say some of the same things, but in
         | a formal, long-winded way with more added..
         | 
         | This meta-comment of restatement in various contexts, with
         | various amounts of story-telling and technical detail, brings
         | up the educational burdens of communication -- to be effective
          | you have to reach a reader where they are today .. in terms of
         | assumptions, technical learning, and focus of topic.. since
          | this is such a fast-moving and wide subject area, it's super
         | easy to miss the distinction between "low value, high volume
         | audio clips recognition" and "life and death medical diagnosis
         | for less than 100 patients". hint - that matters a lot in the
         | tech chain AND the legal structure, and therefore combined, the
         | "do-ability"
        
         | ziggus wrote:
         | Agreed. The article reminds me of the arguments that religious
         | fundamentalists make against evolution: "there are still
         | monkeys, so how could it be that we evolved from monkeys,
         | wouldn't all the monkeys have evolved as well?"
         | 
         | Clearly, no biologist claims that humans evolved from modern
         | primates, just like no modern AI researcher seriously thinks
         | that current machine learning methods will lead to "True AI".
        
         | 3gg wrote:
         | > But I also don't think the engineers working in this space
         | are as out to lunch as the author seems to imply.
         | 
          | Are you at all close to this space? It sounds like you may be
         | underestimating corporate politics and the lack of rigour and
         | ethical thought with which these systems are applied. The
         | example Cory puts on policing -- and the many other examples
         | you can find in Evgeny Morozov's book or "The End of Trust" --
         | are solid proof of this.
        
           | bhntr3 wrote:
           | > Are you at all close to this space?
           | 
           | I am.
           | 
           | > The example Cory puts on policing
           | 
           | My most upvoted comment on this website was discussing this
           | exact scenario. https://news.ycombinator.com/item?id=23655487
           | 
           | Could you perhaps clarify the generalization you're making
           | about me and people like me so I can understand it?
        
             | 3gg wrote:
             | Excellent. One problem in my mind that I don't see
             | discussed enough -- and also not in your other post -- is
             | that there is a large divide between those who use the
             | technology (the cops in this case) and those who supply it,
              | and there is no accountability in either of the two groups
             | when something goes wrong. Like you write in your other
             | post, "the system works (according to an objective function
             | which maximizes arrests.)", and that is as far as the
             | engineer goes. On the other hand, the cop picks up the
             | technology and blindly applies it. To make any improvement
             | to the system would require both groups to work together,
             | but as far as I know, that is not happening. A recent
             | example can be found in the adventures of Clearview AI. So
             | from that perspective, I do think that the engineers (and
             | the cops, and everybody else) are out to lunch, each doing
             | their own work in a bubble and not paying enough attention
             | to (or caring about) the side effects of the applications
             | of this technology.
             | 
             | Also, the lack of thought and accountability that I mention
             | above I think is fairly general from my experience, even
             | outside of policing. That is why I don't generally agree
             | with the lunch statement. Guys are having a hell of a party
             | as far as I can tell -- at the expense of horror stories
             | suffered by the victims of these systems.
        
               | salawat wrote:
               | I second this. I spend a great deal of time digging
               | through where we've positioned big data models to steer
               | population scale behavior, and very infrequently do the
               | implementers of the system ever stop to analyze the
               | changes they are seeding or think beyond the first or
               | second degree consequences once things take off.
               | 
               | That is all part of engineering to me, so by definition,
               | I think many in the field are in fact, out to lunch.
        
               | 3gg wrote:
               | Yes, thank you. Analyzing the effects of our technology
               | should be part of the engineering process. The physicists
               | back where I studied all go through a mandatory ethics
               | class. Us software crowd, well...
        
               | skmurphy wrote:
               | "Don't say that he's hypocritical        Say rather that
               | he's apolitical        'Once the rockets are up, who
               | cares where they come down?        That's not my
               | department!' says Wernher von Braun             Some have
               | harsh words for this man of renown        But some think
               | our attitude        Should be one of gratitude
               | Like the widows and cripples in old London town
               | Who owe their large pensions to Wernher von Braun"
               | 
               | Tom Lehrer "Wernher von Braun"
        
             | dundarious wrote:
             | 3gg was replying to version_five. You're bhntr3. There is
             | no generalization being made about you or even people like
             | you, in a post that is a specific response to an account
             | that is not yours.
        
               | bhntr3 wrote:
               | I believe they are disagreeing whether "engineers working
               | in this space are out to lunch" and since I have been "an
               | engineer working in this space" I was asking for more
               | clarification about what it meant to be "out to lunch".
        
           | vletal wrote:
           | My first thought was that I'm not the target audience of this
            | article. I'm an ML practitioner. This seems more like an
            | overstated, opinionated wake-up call to mgmt and sales
            | people, isn't it?
        
             | 3gg wrote:
             | If you are an ML practitioner and you think you're not part
             | of the target audience, then you're probably part of the
             | target audience.
        
             | version_five wrote:
             | Agreed. What I called a straw man in the OP could also be
             | characterized as a simplification to get his point across
             | to lay-audiences. (Personally I dont agree with the
             | simplification, per my other post). It's meant for popular
             | audiences (as someone else points out, this is from a sci-
             | fi magazine)
        
           | foobiekr wrote:
           | There are three entirely different groups at work here.
           | 
           | The deepmind team etc type of group who actually know what
           | they're doing and the boundaries of what they are working
           | with
           | 
           | the "AI-washing" startups, corporate groups who know they are
           | faking it and that what they're doing is extremely limited
           | 
            | the corporate project team types who are just doing random
            | tool play and honestly don't understand what they are doing,
            | or that they are absolutely clueless, with no self-awareness
            | at all
           | 
           | I've worked with all three and they really are just totally
           | different things that are all being lumped together. They
           | also are listed in terms of increasing proportion. For every
           | self-aware AI-washer team I've seen 50 "we are doing AI" Corp
           | team types spinning out one trivial demo after another to
           | execs who know zero.
        
             | out0fpaper wrote:
              | The same thing is happening in academia.
        
             | 3gg wrote:
             | Where does Google Vision Cloud sit in your categorization?
             | 
             | https://algorithmwatch.org/en/google-vision-racism/
        
               | foobiekr wrote:
               | First group.
               | 
               | You're observing that they aren't doing a perfect job,
               | which is true, but my grouping isn't related to
               | perfection of results.
        
               | 3gg wrote:
               | > The deepmind team etc type of group who actually know
               | what they're doing and the boundaries of what they are
               | working with.
               | 
               | You claim that they "know what they are doing and the
               | boundaries of what they are working with" -- and yet they
               | recklessly make public a racist vision product?
        
               | spacedcowboy wrote:
               | I have a PhD in neural networks, haven't used it in many
               | a year, but some of the knowledge is still there. Some of
               | the memories of racking my brains to understand what the
               | hell is going on are still there, too.
               | 
               | It is easy to have a theory of what is going on, to model
               | the processes of how things are playing out inside the
               | system, to make external predictions of the system, and
               | to be utterly wrong.
               | 
               | Not because your model is wrong, but because either the
               | boundary conditions were unexpected, or there was an
               | anti-pattern in the data, or because the underlying
               | assumptions of the model were violated by the data (in my
               | case, this happened once when all the data was taken in
               | the Southern Hemisphere...)
               | 
               | In all these cases, you can know what you're doing, you
                | can know the boundaries of what you're working with,
               | and you can get results that surprise you. It's called
               | "research" for a reason.
               | 
               | The model can also be ridiculously complex. Some of the
               | equations I was dealing with took several lines to write
               | down, and then only because I was substituting in other,
               | complicated expressions to reduce the apparent
               | complexity. It's easy to make mistakes - and so you can
               | know what you're doing, and the boundaries that you're
               | working with, and still have a mistake in the model that
               | leads to a mistake in the data ... garbage in, garbage
               | out.
               | 
               | In short, this shit is hard, yo!
        
               | foobiekr wrote:
               | Your argument is that knowing what you are doing means
                | error-free output.
        
               | 3gg wrote:
               | It's more like applying the technology with caution and
               | accountability when you already know beforehand that the
               | output is not error-free.
        
               | username90 wrote:
                | They never promised that the output would be error-free;
                | having output with errors is still useful for many
                | applications. And the issues you are talking about got
                | fixed as soon as they were discovered, and since then
                | Google has made sure to always diversify their datasets
                | by race. Nowadays it is common knowledge that you need
                | to do this, but back then it wasn't obvious that a model
                | wouldn't generalize across human races, and it is much
                | thanks to that mistake that everyone now knows it is an
                | issue.
        
               | 3gg wrote:
               | It was discovered by others, not them; they fixed the
               | issue only retroactively when it was called out in
                | public. This lack of oversight is part of what I mean
                | by applying things with caution.
               | 
               | And why would they have assumed in the first place that
               | the model _would_ generalize across human races, or any
               | other factor for that matter?
        
               | [deleted]
        
             | Quarrelsome wrote:
             | I feel like we're missing the point here. The dangerous
             | groups are those execs you mention who will have the
             | decision about whether to move something into production or
             | not.
             | 
             | When this technology gets into their hands with a dev leash
             | it will be recklessly implemented and people will die.
        
         | coding123 wrote:
         | That's how I felt too. Most of the article is trying to pull us
         | with an emotional attachment (mostly to racist things a
         | computer will do if tasked to do important things). While that
         | criticism is welcome, it's not specifically meaningful towards
          | an argument against AGI. The only such part seemed to be the
          | claim that statistical inference is not a path to AGI, which
          | is somehow backed up by the emotional stuff.
         | 
         | What deep learning seems to step into more and more is time-
         | based statistical inference.
         | 
         | AGI is not:
         | 
          | seeing that a girl has a frown on her face.
         | 
         | seeing that a girl has a frown, because someone said "you look
         | fat"
         | 
         | seeing that a girl has a frown because her boyfriend said you
         | look fat
         | 
         | seeing that Maya has generally been upset with her boyfriend
         | who also most recently told her she is fat.
         | 
         | But keep going and going and going and we might get somewhere.
         | Do we have the computer power to keep going? I don't know.
        
           | salawat wrote:
           | AGI is that capability to orchestrate layering of topical
           | filters and feature detections in order to create an
           | actionable perception. Note that it isn't anything to do with
           | the implementations of said filters and detectors, but with
           | the ability to artistically arrange them to satisfy a goal,
           | and very possibly, must be coupled with the capacity to
           | synthesize new ones.
           | 
           | That executive and arranging function is the unknown. From
           | whence cometh that characteristic of Dasein? That
           | preponderance of concern with the act of being as Being?
           | 
           | It's a tough nut to crack, even in philosophical circles. To
            | think that we're going to artificially create it by any means
           | other than accident or luck is hubris of the highest order.
        
       | mark_l_watson wrote:
       | I like the term "AI" and the classic definition of achieving
       | human like performance in specific domains. I don't think that
       | there is much confusion about the term for the general
       | population, and certainly not in the tech community.
       | 
       | The term "AGI" is also good, "artificial general intelligence"
       | describes long term goals.
        
       | m12k wrote:
       | The first AI winter came after we realized that the AI of the
       | time, the high level logic, reasoning and planning algorithms we
       | had implemented, were useless in the face of the fuzziness of the
       | real world. Basically we had tried to skip straight to modeling
       | our own intellect, without bothering to first model the reptile
       | brain that supplies it with a model of the world on which to
       | operate. Being able to make a plan to ferry a wolf, sheep and
       | cabbage across the river in a tiny boat without any of them
       | getting eaten doesn't help much if you're unable to tell apart a
       | wolf, sheep and cabbage, let alone steer a boat.
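        | 
        | For concreteness, that planning step really was the easy part;
        | a brute-force search solves the puzzle in a few lines (a
        | minimal sketch, not from the article):
        | 
        |     from collections import deque
        | 
        |     ITEMS = {"wolf", "sheep", "cabbage"}
        | 
        |     def unsafe(bank):
        |         # Unattended: the sheep eats the cabbage, the wolf
        |         # eats the sheep.
        |         return ({"wolf", "sheep"} <= bank
        |                 or {"sheep", "cabbage"} <= bank)
        | 
        |     def solve():
        |         start = (frozenset(ITEMS), "left")
        |         queue, seen = deque([(start, [])]), {start}
        |         while queue:
        |             (left, farmer), path = queue.popleft()
        |             if not left and farmer == "right":
        |                 return path  # everything ferried across
        |             here = left if farmer == "left" else ITEMS - left
        |             for cargo in [None] + sorted(here):
        |                 new_left = set(left)
        |                 if cargo:
        |                     (new_left.discard if farmer == "left"
        |                      else new_left.add)(cargo)
        |                 new_farmer = ("right" if farmer == "left"
        |                               else "left")
        |                 alone = (new_left if new_farmer == "right"
        |                          else ITEMS - new_left)
        |                 state = (frozenset(new_left), new_farmer)
        |                 if unsafe(alone) or state in seen:
        |                     continue
        |                 seen.add(state)
        |                 queue.append((state, path + [(cargo or
        |                               "nothing", new_farmer)]))
        | 
        |     for cargo, bank in solve():
        |         print("take", cargo, "to the", bank, "bank")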
       | 
       | That's what makes me excited about our recent advances in ML.
       | Finally, we are getting around to modeling the lower levels of
       | our cognitive system, the fuzzy pattern recognition part that
       | supplies our consciousness with something recognizable to reason
       | about, and gives us learned skills to perform in the world.
       | 
       | We still don't know how to wire all that up. Maybe a single ML
       | model can achieve AGI if it is adaptable enough in its
       | architecture. Maybe a group of specialized ML models need to make
       | up subsystems for a centralized AGI ML-model (like a human's
       | visual and language centers). Maybe we need several middle layers
       | to aggregate and coordinate the submodules before they hook into
       | the central unit. Maybe we can even use the logic, planning or
       | expert system approach from before the AI winter for the central
       | "consciousness" unit. Who knows?
       | 
       | But to me it feels like we've finally got one of the most
       | important building blocks to work with in modern ML. Maybe it's
       | the only one we'll need, maybe it's only a step of the way. But
       | the fact that we have in a handful of years not managed to go
       | from "model a corner of a reptile brain" to "model a full human
       | brain" is no reason to call this a failure or predict another
       | winter just yet. We've got a great new building block, and all
       | we've really done with it so far is basically to prod it with a
       | stick, to see what it can do on its own. Maybe figuring out the
       | next steps toward AGI will be another winter. But the advances
       | we've made with ML have convinced me that we'll get there
        | eventually, and that when we do, ML will be part of it to some
       | extent. Frankly I'm super excited just to see people try.
        
       | coldtea wrote:
       | > _The problems of theory-free statistical inference go far
       | beyond hallucinating faces in the snow. Anyone who's ever taken a
       | basic stats course knows that "correlation isn't causation." For
       | example, maybe the reason cops find more crime in Black
        | neighborhoods is because they harass Black people more with
       | pretextual stops and searches that give them the basis to
       | unfairly charge them, a process that leads to many unjust guilty
       | pleas because the system is rigged to railroad people into
       | pleading guilty rather than fighting charges. (...)
       | 
       | Being able to calculate that Inputs a, b, c... z add up to
       | Outcome X with a probability of 75% still won't tell you if
       | arrest data is racist, whether students will get drunk and
        | breathe on each other, or whether a wink is flirtation or grit in
       | someone's eye._
       | 
       | Except if information about what we consider racist etc. also
       | passes through the same inference engine (feeding it with
       | information on arbitrary additional meta levels).
       | 
        | So, sure, an AI which is just fed crime stats to make
        | inferences can never understand beyond that level.
        | 
        | But an AI which is fed crime stats plus cultural understanding
        | about such data (e.g. which is fed language, like a baby is, and
        | which is then fed cultural values through osmosis - e.g. news
        | stories, recorded discussions with people, etc.) could go
        | further.
       | 
       | In the end, it could also be through actual socialization: you
       | make the AI into a portable human-like body (the classic sci-fi
       | robot), and have it feed its learning NN by being around people,
       | same as any other person.
        
         | [deleted]
        
       | nkozyra wrote:
       | > It's not sorcery, it's "magic" - in the sense of being a parlor
       | trick, something that seems baffling until you learn the
       | underlying method, whereupon it becomes banal.
       | 
       | I think part of the problem is the belief that human or animal
       | intelligence is somehow more mystical.
       | 
       | People who think like this will see an ML implementation solve a
       | problem better and/or faster than a human and counter "well, it's
       | just using statistical inference or pattern recognition" and my
       | response is "so?" Humans use the same processes and parlor tricks
       | to understand and replay things.
       | 
       | Where humans excel is in generalizing knowledge. We can apply
       | bits and pieces of our previous parlor tricks to speed up
       | comprehension in other problem spaces.
       | 
       | But none of it is magic. We're all simple machines.
        
         | xnyan wrote:
         | >simple machines.
         | 
         | Ooof. Premed dropout here, so admittedly not an expert in human
         | biology but this is a wild statement. A neuron is simple in the
         | same way a transistor is simply a silicon sandwich doped with
         | metals.
         | 
         | A parlor trick is something that once you understand, is
         | straightforward to implement on your own. Are you arguing that
         | anyone now or in the foreseeable future could simply recreate
         | the abilities of a human? If so, what evidence could you show
         | me to support that?
        
           | nkozyra wrote:
           | I'm arguing that animal or lesser intelligence is built
           | around hundreds of thousands of parlor tricks operating in a
           | complex ensemble.
           | 
           | There's a bias toward the marvel of human intelligence that
           | causes some people to dismiss ML for the same underlying
           | reasons we don't try to put a square peg in a round hole
           | after infancy.
           | 
           | Side note: disagree all you like but starting a rebuttal with
           | "oof" is the kind of dismissive language that lets people
           | know you'll be taking a very reductionist approach in your
           | reply.
        
             | nicoffeine wrote:
             | > I'm arguing that animal or lesser intelligence is built
             | around hundreds of thousands of parlor tricks operating in
             | a complex ensemble.
             | 
             | Until ML/AI can perform a single one of those parlor tricks
             | without the constant direction of human intelligence,
             | there's no reason to stop marveling.
        
         | heavyset_go wrote:
         | Obligatory "Your brain is not a computer"[1] reference.
         | 
         | [1] https://aeon.co/essays/your-brain-does-not-process-
         | informati...
        
         | staticman2 wrote:
         | We are not "simple machines" we are the result of 3.7 billion
         | years of evolution. We are the most complex known thing in the
         | universe. We are far more complicated than anything we can hope
          | to make in the foreseeable future, if ever.
        
           | sorokod wrote:
            | You, and every living organism around you, were hammered
            | out by the same evolutionary process.
        
         | jmull wrote:
         | > We're all simple machines.
         | 
         | Great. Prove it. Build the simple machine that acts as a human
         | does. Should be simple, right?
         | 
         | Personally, I don't think there's any magic. But it's not
         | "simple" either.
        
       | dundarious wrote:
       | I'm tired of the Norvig vs. Chomsky style debates about what is
       | cognition/intelligence/learning. I think this piece does rehash
       | that debate somewhat, but it's not at all the focus.
       | 
        | Its key contributions are about the mainstream domination of
        | quantitative over qualitative methods, especially in this
       | paragraph:
       | 
       | > Quantitative disciplines are notorious for incinerating the
       | qualitative elements on the basis that they can't be subjected to
       | mathematical analysis. What's left behind is a quantitative
       | residue of dubious value... but at least you can do math with it.
       | It's the statistical equivalent to looking for your keys under a
       | streetlight because it's too dark where you dropped them.
       | 
       | and also of note is the "veneer of empirical facewash that
       | provides plausible deniability", for discrimination, and for
       | doing a poor job but continuing to be rewarded for it.
       | 
       | If I had to summarize it would be:
       | 
       | - The ML/AI community, which includes the researchers,
       | practitioners, and the evangelists, are broadly utopian in what
       | they think they can achieve. They are overconfident even in the
       | domain of detecting the face of potential burglars in a home
       | security camera, never mind in terms of creating new life with
       | AGI. I think Doctorow's critique equally applies to "algorithms"
       | even only as complex as a fancy Excel sheet, but he focuses on
       | ML/AI as the most common source of this excess of optimism, that
       | recording data and running it through a model is almost certainly
       | the _most sensible thing to do_ for any given problem.
       | 
       | - If there is a manufactured consensus that the almost purely
       | quantitative approach is the _most sensible thing to do_, then
       | any failures or short-comings can be hand-waved away. Say sorry,
       | "the model/algorithm did it", and just ignore the issue or apply
       | a minor manual fix. This is a huge benefit for decision-makers
       | wishing to maintain their status/livelihoods in both the public
       | and private sector. Crucially, this excuse works if you're just
       | ineffective, or if you're a bad actor.
       | 
       | Note that this is a critique of CEOs and government officials,
       | more than of engineers -- we would only be complicit by
       | association. If there is a critique for engineers, it's that we
       | provide fodder for the excess of optimism in summary point 1
       | because we love playing with our tools, and that we allow
       | ourselves to be the scapegoat for summary point 2.
        
       | shannifin wrote:
       | > I don't see any path from continuous improvements to the
       | (admittedly impressive) 'machine learning' field that leads to a
       | general AI any more than I can see a path from continuous
       | improvements in horse-breeding that leads to an internal
       | combustion engine.
       | 
       | While I also don't expect that AGI will emerge solely through
       | optimizing statistical inference models, I also don't think
       | "improvements to the machine learning field" consist _only_ of
       | such optimizations. Surely further insights, paradigm shifts,
       | etc., will continue to play a role in advancing AI.
       | 
       | Perhaps it's more a matter of semantics and a bad analogy;
       | "machine learning" seems far more broad a field than "horse-
       | breeding." Horse-breeding is necessarily limited to horses.
       | Machine learning is not limited to a specific algorithm or data
       | model.
       | 
       | Even calling it a "statistical inference tool", while not wrong,
       | is deceptive. What exactly does he or anyone expect or want an
       | AGI to do that can't be understood at some level as "statistical
       | inference"? One might say: "Well, I want it to actually
       | _understand_ or actually _be conscious_. " Why? How would you
       | ever know anyway?
        
         | mirekrusin wrote:
          | It gets philosophical quickly: is "consciousness" repeatedly
          | modifying a cloud of random floats?
        
       | MAXPOOL wrote:
       | For a short and very non-technical article, this is well written.
       | 
       | The current approach to machine learning is not going to go
       | towards general-purpose AI with steady steps and gradual
       | innovations. Things like GPT-3 seem amazingly general at first.
       | But even it will quickly plateau towards the point where you need
       | a bigger and bigger model, more and more data, and training for
       | smaller and smaller gain.
       | 
        | There need to be several breakthroughs, similar to the original
        | Deep Learning breakthrough, that move away from statistical
        | learning. I
       | would say it's 4-7 Turing awards away at a minimum. Some expect
       | less, some more.
        
         | mirekrusin wrote:
          | Strange that you're saying that; the unexpected outcome from
          | GPT-3 was specifically that it did not plateau as they were
          | expecting. Quite the opposite: deeper understanding emerged in
          | different areas.
        
       | [deleted]
        
       | taylorwc wrote:
       | Typo in the title, ought to be "Skeptic." Unless, that is, his
       | skepticism is also directly tied to handling sewage.
        
         | 3gg wrote:
         | Even if you look up "skeptic" on dictionary.com, it will
         | suggest the alternative spelling.
         | 
         | https://www.dictionary.com/browse/skeptic
         | 
         | English is not just spoken in 'murica.
        
         | stan_rogers wrote:
         | No, both spellings are good. The sewage thing would be
         | "septic".
        
           | [deleted]
        
       | a-dub wrote:
        | they say that those who ignore the past are doomed to repeat it;
        | data-driven algorithms provide statistical guarantees of
        | repeating it.
        
       | m0rphy wrote:
       | ML or not, at the most fundamental level, classical computers
       | simply do not possess the type of logic that's truly reflective
       | of our reality. Its binary nature forces it to always resolve any
       | single statement to either a true or false answer only.
       | 
       | A very simple example. If we ask our classical computer this
       | question "are people currently supportive of COVID-19 vaccines?",
       | then it would probably give us a straight answer of either a
       | "yes" or "no" based on statistical inference of the percentage of
       | total people who have received vaccinations at this point.
       | 
       | At its most fundamental level, classical computers just cannot
       | comprehend a reality that could resolve that answer to both "Yes"
       | and "No" in a single statement, which btw is possible in a
       | quantum computing environment under its superposition state.
       | 
        | In our reality, some people may not be fully supportive of
        | the vaccines, but under special circumstances they may be forced
        | to receive them because of workplace requirements, pressure from
        | their loved ones, etc...
        
       ___________________________________________________________________
       (page generated 2021-07-31 23:00 UTC)