[HN Gopher] Our field isn't quite "artificial intelligence" - it...
       ___________________________________________________________________
        
       Our field isn't quite "artificial intelligence" - it's "cognitive
       automation"
        
       Author : vo2maxer
       Score  : 182 points
       Date   : 2020-01-07 12:43 UTC (10 hours ago)
        
 (HTM) web link (twitter.com)
 (TXT) w3m dump (twitter.com)
        
       | YeGoblynQueenne wrote:
       | >> Our field isn't quite "artificial intelligence"
       | 
       | True, but so what? We call it AI and that's that, really. We've
       | been calling it that for 70 years now and it's never been a
       | problem.
       | 
       | And let's be absolutely clear that it's not the _name_ that's
       | confusing the public but the way that industry luminaries promise
       | autonomous cars and robotic maids in the next -3 years, or the
        | way that the technology press -the _technology_ press- can't get
       | its shit together to figure out the difference between "machine
       | learning", "deep learning" and "AI" as fields of research and as
       | category labels. Of _course_ the lay public is going to be
       | confused if people who are paid to elucidate complex concepts
       | make a mess of it.
        
         | 6gvONxR4sf7o wrote:
         | Probably most of the things being marketed as AI have been
         | around and called statistics for a long time, and it's never
         | been a problem.
        
         | joe_the_user wrote:
         | > " _True, but so what? We call it AI and that 's that, really.
         | We've been calling it that for 70 years now and it's never been
         | a problem._"
         | 
         | That isn't even ... _true_. AI became  "machine learning" in
         | the late 90s/early 2000s and that change happened because the
         | chorus of criticism of "artificial intelligence" had become
         | extremely loud and a less ambitious term served as a refuge.
        
           | YeGoblynQueenne wrote:
           | AI was renamed into many things in the '80s and '90s, for
           | example "Intelligent Systems" or "Adaptive Systems" etc, and
           | that indeed was done to dissociate research from the bad rep
           | that had accrued for AI. But "machine learning" has been the
           | name of a sub-field of AI since the 1950's and it's never
           | stood for the whole, at least not in conferences, papers or
           | any kind of activity of the field.
           | 
            | For example, two of the (still) major conferences in the
            | field are AAAI and IJCAI: the conference of the "Association
            | for the Advancement of Artificial Intelligence" and the
            | "International Joint Conferences on Artificial
            | Intelligence". Neither of those is in any way, shape or
            | form a conference for machine learning only, and neither
            | uses machine learning as a byname for AI. By contrast,
            | machine learning has its own journal (journals, actually)
            | and there are specific conferences dedicated to machine
            | learning and deep learning (NeurIPS and ICLR).
           | 
           | Additionally, there are many sub-fields of AI that are not
           | machine learning, in name or function: intelligent agents,
           | classical planning, reasoning, knowledge engineering etc etc.
           | 
           | The only confusion between "AI" and "machine learning" exists
           | in the minds of tech journalists and the people who get their
           | AI news exclusively from the tech press.
           | 
           | P.S. As a side note, the name for what the tech press is
           | doing, referring to the field of AI as "machine learning", is
           | "synecdoche": naming the whole by the name of the part.
        
           | sgt101 wrote:
           | No.
           | 
            | Some people started saying things like that, mostly around
            | 2013, but all along many people have been working on topics
            | like MAS, answer sets, causal logic and other stuff.
           | 
           | At that time the big trend was actually rebranding maimed
           | logical inference as The Semantic Web.
        
       | [deleted]
        
       | jariel wrote:
        | I don't really agree, and think the misnomer should be applied
        | in the opposite direction: AI should be called 'adaptive
        | algorithms' and it should be just another tool in the toolbox
        | of CS people.
        | 
        | We're not doing anything that we weren't doing before.
        | 
        | There is no new paradigm shift. There is no AI. There's just a
        | slightly new approach to solving problems. That's it. There are
        | some really nice improvements in computer vision ... and a few
        | other things ...
       | 
       | ... but all this talk of 'intelligence' etc. should be brushed
       | aside, it's misleading to everyone.
       | 
       | There will be no 'general AI' with our current approaches for a
       | whole variety of reasons.
       | 
       | I'm embarrassed at how so many intelligent colleagues drink the
       | kool-aid on this.
       | 
       | Take classical ML: it was hyped for a while, now it's not as
        | exciting as 'Deep Learning'. Well, in a few years, I think that
        | DL will be there as well: just a tool in the toolbox.
        
       | LoSboccacc wrote:
       | not even, it's multivariate regression analysis optimization
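        | 
        | A minimal numpy sketch of what that amounts to (the toy data,
        | squared-error objective and step size are purely
        | illustrative):
        | 
        |     import numpy as np
        | 
        |     # Toy data: 100 samples, 3 features, known weights
        |     rng = np.random.default_rng(0)
        |     X = rng.normal(size=(100, 3))
        |     true_w = np.array([2.0, -1.0, 0.5])
        |     y = X @ true_w + 0.1 * rng.normal(size=100)
        | 
        |     # Multivariate regression, fit by gradient descent
        |     # on the mean squared error
        |     w = np.zeros(3)
        |     for _ in range(500):
        |         grad = 2 * X.T @ (X @ w - y) / len(y)
        |         w -= 0.1 * grad
        | 
        |     print(w)  # converges toward true_w
        | 
        | Swap the linear map for a stack of nonlinearities and the loop
        | for a fancier optimizer, and you get something that looks a
        | lot like deep learning.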
        
         | anticensor wrote:
          | It is automated (not _automatic_) cognition using
          | multivariate regression techniques; after all, there is
          | something to automate.
        
           | airstrike wrote:
           | I think many take issue with the use of the word "cognition"
           | in that definition.
        
       | sebastianconcpt wrote:
       | Brilliant observation.
       | 
       | And it's even harder than this.
       | 
        | The problem is not that we have a problem. The problem is that
        | we have problems. So the solution is not finding a solution to
        | a problem. The solution is finding a metasolution that is valid
        | across time and tribes. Bam! That's the challenge of being an
        | intelligent being in this universe. No way we can automate
        | that. We can only mimic a portion of it, and calling that
        | intelligence doesn't make it really intelligent.
        
         | carapace wrote:
          | FWIW, check out the Gödel machine:
         | 
         | https://en.wikipedia.org/wiki/G%C3%B6del_machine
         | 
         | http://people.idsia.ch/~juergen/goedelmachine.html
         | 
         | > No way we can automate that.
         | 
          | It's a very, very interesting question. Personally I believe
          | that "automaton" is _almost_ the opposite of "being". But
          | that's just me, not science or other authority. Certainly,
          | somewhere between virus and human, _something_ comes into
          | being (no pun intended). I don't know of any non-metaphysical
          | argument that we couldn't find some _other_ way to create
          | non-biological general AI.
         | 
         | I think we could genetically engineer human DNA to create
         | wetware G"A"I but I put the "artificial" in quotes to indicate
         | that I'm not saying whether that would count as AI or not. I
         | know of a few efforts to create "Daleks" out of human brain
         | organoids, but I don't think anyone has gone beyond the
         | speculative/hype stage with it so far.
        
       | eanzenberg wrote:
       | Call it whatever you want, I don't care. It's working and
       | improving year over year.
        
       | mcculley wrote:
       | I have been wondering why we don't use the term "synthetic
       | intelligence": https://enki.org/2019/08/18/artificial-
       | intelligence-is-a-dum...
        
         | UncleOxidant wrote:
         | That just replaces the first word with what is essentially a
         | synonym. It's the second word "intelligence" that's the issue.
        
           | mcculley wrote:
           | "artificial" and "synthetic" aren't exactly synonyms in my
           | mind. If I synthesize glucose, there is nothing artificial
           | about it. It just didn't come from a process developed by
           | evolution. Conversely, artificial leather is nothing like
           | real leather.
           | 
           | I'll have to think about that some more.
        
             | kevin_thibedeau wrote:
             | The root artifice has a broader meaning than "fake".
             | Synthesizing something is an application of artifice.
        
             | TheRealPomax wrote:
             | But again, it's the "intelligence" part that's the
             | misnomer. Except for John Carmack, no one's trying to
              | invent general intelligence. Every single bit of work is
              | merely automating tasks that, _when performed by humans_,
              | require intelligence... except that too is a misnomer,
              | because as humans we literally can't do anything, no
              | matter how mundane, without it "requiring intelligence".
        
       | ratsmack wrote:
       | I like this comment:
       | 
       | >At the end of the day, "AI" is just glorified statistics
       | (running on increasingly powerful computers).
        
       | godelski wrote:
        | I think it is also important to remember that intelligence
        | isn't clearly defined. A lot of people seem to interpret it in
        | different ways, and the working definition is closer to that of
        | pornography (I know it when I see it).
        | 
        | I often see two camps: one defines intelligence as something
        | human-like, limiting it to, really, cetaceans and hominids,
        | maybe including ravens. The other gives too vague a definition.
       | 
        | Personally, I do not see a problem with having lots of bins. I
        | don't think many disagree that intelligence is a continuum, so
        | why restrict it to a few very high-level bins? Because that's
        | the vernacular usage? I for one vote for the many-bins-on-a-
        | continuum approach. On that view you could say that ML has some
        | extremely low-level form of intelligence, though I would
        | generally put it lower than that of an ant. In that respect, a
        | multi-agent system with intelligence surpassing that of ants
        | would, I believe, be extremely impressive.
        
       | proc0 wrote:
        | Well said. The definition of intelligence is bastardized for
        | virtually all current AI applications. They are glorified
        | statistical heuristics / stochastic gradient descent, as has
        | been mentioned before. The key to approaching actual
        | intelligence as we know it will be a system that can
        | dynamically model its environment and the actors in it, since
        | even insects are able to do this to some extent.
        
       | qwerty456127 wrote:
        | I'd rather call it cognition imitation, if you insist on
        | associating it with cognition or intelligence. In fact it's
        | just brute-force statistics.
        
         | savanaly wrote:
         | Are we so sure human cognition isn't this too?
        
       | dr_dshiv wrote:
        | One thing I find strange is how much we emphasize the
        | artificial nature of the intelligence. AI and automation always
        | occur in the context of human processes. Nothing is truly
        | autonomous, so
       | why design it as if human involvement is a failure? We can easily
       | design artifacts to enhance human intelligence or team
       | intelligence. Why the focus on the machine part and not the
       | overall system that functionally accomplishes the desired work?
        
         | joe_the_user wrote:
         | > _One thing I find strange is how much we emphasize the
         | artificial nature of the intelligence._
         | 
          | We really don't know what intelligence (sans qualifications)
          | is. AI has been a term for the effort to emulate what we
          | roughly think of as "intelligent" behavior. It's far from
          | successful so far, and the lack of a "theory of intelligence"
          | is probably part of that. But it's pretty clear that what
          | "AI" researchers and systems are doing now is far from
          | intelligence.
         | 
         | > _AI and automation always occurs in the context of human
         | processes. Nothing is truly autonomous, so why design it as if
         | human involvement is a failure?_
         | 
          | This argument makes as much sense as "we'll never exceed the
          | speed of light, so why act like faster transportation
          | matters". An automated factory still requires some
          | maintenance, but its creation certainly is significant.
         | 
         | > _We can easily design artifacts to enhance human intelligence
         | or team intelligence. Why the focus on the machine part and not
         | the overall system that functionally accomplishes the desired
         | work?_
         | 
         | Both approaches matter and since there's really nothing keeping
         | people from doing both of these, people pursue each separately.
         | Moreover, I'd say AI research could do well to cross-pollinate
         | with human-computer interaction theory.
         | 
          | But overall, you seem to just not understand why automation
          | matters - automation has brought vast productivity gains in a
          | variety of fields. It may or may not be possible in further
          | fields, but if it is, it will transform the world
          | equivalently.
        
         | carapace wrote:
         | "Intelligence Amplification" (IA) is a thing:
         | https://en.wikipedia.org/wiki/Intelligence_amplification
         | 
          | FWIW, I think that AI offers to _offload_ thinking (whether
          | it delivers or not is another thing) while IA appeals to
          | people who want to improve their own intelligence. Maybe I'm
          | too cynical, but the former seems more popular than the
          | latter.
        
       | cjauvin wrote:
        | Following the recent "AI Debate" between Yoshua Bengio and Gary
        | Marcus [0], there was a lot of discussion about the exact
        | definition (or even redefinition, as some argued) of labels
        | like "deep learning" and "symbol" (what do we mean exactly by
        | these?). I find it quite relevant to this discussion.
       | 
       | [0] https://www.youtube.com/watch?v=EeqwFjqFvJA
        
       | [deleted]
        
       | choonway wrote:
       | Nope. It's just pattern recognition.
        
         | sgt101 wrote:
          | What is? I assume you mean machine learning? OK... What about
          | one-shot learning, like Lake and Tenenbaum's BPL? What about
          | optimal resource allocation in auctions? Is this recognising
          | patterns in event spaces larger than the number of atoms in
          | the universe?
        
       | amrrs wrote:
       | Francois Chollet's discussion with Lex Fridman (first half) is an
       | interesting one on AGI - Video - https://youtu.be/Bo8MY4JpiXE
        
       | dsr_ wrote:
       | All programming is, is the reification of decision making.
        
       | shmerl wrote:
        | I think the better term to contrast it with is "artificial
        | mind", not "artificial intelligence".
        
       | liamcardenas wrote:
       | In my opinion, even calling it "cognitive" is too generous.
       | 
       | What makes it "cognitive" instead of just "normal" automation?
       | Because it's dealing with information rather than the physical
       | world?
       | 
       | I think a better term is statistical or digital automation.
        
       | GuB-42 wrote:
       | From the Merriam-Webster dictionary.
       | 
        | Definition of cognitive
        | 
        | 1 : of, relating to, being, or involving conscious intellectual
        | activity (such as thinking, reasoning, or remembering)
        | 
        | 2 : based on or capable of being reduced to empirical factual
        | knowledge
       | 
       | Using "cognitive" instead of "intelligence" puts the emphasis on
       | data processing rather that adaptability, which may be a bit more
       | in line with how things are done today. However, it doesn't
       | addresses the core of the debate. The usual "[technology] isn't
       | [AI/cognitive automation] because it can't do [thing humans do],
       | it is just [thing computers do]". Both terms relate to
       | consciousness, and are generally considered fundamentally human
       | qualities.
       | 
       | I think there is simply no way out of that debate. Maybe use a
       | term that it sounds completely unrelated to human activity, maybe
       | something like "Big Data Statistical Matching".
        
       | knolan wrote:
       | It's curve fitting.
        
         | gfodor wrote:
          | It seems unclear whether your brain is also curve fitting.
          | Time will tell, hopefully.
        
       | 0xdeadbeefbabe wrote:
       | it's automation
       | 
       | Edit: artificial automation
       | 
       | Edit: computer science
       | 
       | Edit: pseudo science?
        
       | jokoon wrote:
        | Intelligence doesn't have a lot of scientific ground either.
        | It's pretty hard to define what intelligence is, or at least to
        | have a scientific definition that is precise enough. The Turing
        | Test is only a measure; it doesn't help us reach a definition.
        | 
        | Practical research will always hit a ceiling if scientists
        | cannot try to define what they're looking for.
        | 
        | Even "machine learning" is not a good definition. There are
        | other attempts, like "sophisticated statistics" or "statistical
        | prediction".
       | 
       | Kudos for this tweet.
        
       | hprotagonist wrote:
       | As usual, the monks know what the laity doesn't, and aren't
       | particularly afraid to talk about it. Also as usual, there's
       | still a yawning gap between what domain experts are up to and
       | what non-domain experts think they're up to.
       | 
        | That this is true in AI is not surprising; humility comes from
        | knowing that my domain expertise in some fields (and thus a
        | clearer picture of 'what's really going on') is guaranteed to
        | be crippled in other fields. Knowing that being in some
        | knowledge in-groups requires me to also be in some knowledge
        | out-groups is the beginnings of a sane approach to the world.
        
         | Invictus0 wrote:
         | The author is just correcting a misnomer. It is not really
         | accurate to say that machine learning is intelligent at all, so
         | why label it as such? It's confusing for everyone and leads to
         | great misunderstandings.
        
           | pmelendez wrote:
           | I don't think there are many definitions of machine learning
           | that claim the models to be intelligent. Most of them limit
           | the term to models that can be built from data.
           | 
            | Learning is a skill that doesn't necessarily come with an
            | "intelligent" label attached to it.
        
             | kkwak wrote:
              | Have we even defined what 'intelligent' might mean? As
              | in, we had the Turing test as a bar, and we are close to
              | that already. What is intelligence, then? Last I checked,
              | there wasn't a definitive answer to it. We'll need one so
              | that we can properly label AI as I - or maybe we don't
              | care so much... if it's close enough...
        
               | hprotagonist wrote:
               | there are several hundred competing definitions of
               | "intelligence". No consensus, as they say, has been
               | reached.
        
               | aSplash0fDerp wrote:
               | Once we start seeing cheaply made, imported yes/no
               | engines (masquerading as AI or knowledge) flooding the
               | market, the definition of intelligence will be lost on
               | marketing anyways (unlimited data, superfood, etc)
        
           | PeterisP wrote:
            | Machine learning is a particular, narrow result of studying
            | the wider field of artificial intelligence. Just like
            | expert systems, RDF knowledge representation, first-order
            | logic reasoners, or planning systems - none of them are
            | 'intelligent', but all of them are research results coming
            | from (and being studied in) the discipline of studying how
            | intelligence works and how something like it can be
            | approached artificially.
           | 
           | There's lots in the field of AI that is _not_ 'cognitive
           | automation' - many currently popular things and use cases
           | are, but that's not correcting a misnomer, that's a separate
           | term for a separate (and more narrow) thing - even if that
           | narrower thing constitutes the most relevant and most useful
           | part of current AI research.
           | 
            | A classic definition of intelligence (Legg & Hutter) is
            | "Intelligence measures an agent's ability to achieve goals
            | in a wide range of environments". That's a worthwhile goal
            | to study even if (obviously) our artificial systems are not
            | yet even close to human level by that criterion; and while
            | it points roughly in the same direction as 'cognitive
            | automation', it's less limited and not entirely the same.
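            | 
            | For reference, Legg & Hutter make that definition formal as
            | a weighted sum over environments; in LaTeX, their universal
            | intelligence measure reads roughly:
            | 
            |     \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V^{\pi}_{\mu}
            | 
            | where E is the set of computable environments, K(mu) is the
            | Kolmogorov complexity of an environment (so simpler
            | environments carry more weight) and V is the expected
            | reward the agent pi accumulates in environment mu. It is
            | uncomputable in practice, but it makes "a wide range of
            | environments" precise, and it is broader than automating
            | any one fixed task.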
           | 
           | For example, 'cognitive automation' pretty much assumes a
           | fixed task to execute/automate, and excludes all the nuances
           | of agentive behavior and motivation, but these are important
           | subtopics in the field of AI.
           | 
           | But I am willing to concede that very many people _are_
           | explicitly working only on the subfield of  'cognitive
           | automation' and that it would be clearer if these people (but
           | not all AI researchers) explicitly said so.
        
             | joe_the_user wrote:
             | > _Machine learning is a particular narrow result of
             | studying the wider field of artificial intelligence._
             | 
              | I beg to differ, at least as far as terms go now. Neural
              | networks lived in the "field" of machine learning along
              | with kernel machines and miscellaneous prediction systems
              | circa the early 2000s. Neural networks today are known as
              | AI because ... why? Basically, the histories I've read
              | and remember say that the only difference is that now
              | neural networks are successful enough that they don't
              | have to hide behind a more "narrow" term - or,
              | alternately, the hype train now prefers a more ambitious
              | term. I mean, the Machine Learning reddit is one go-to
              | place for actual researchers to discuss neural nets.
              | Everyone now talks about these as AI because the terms
              | have essentially merged.
             | 
             | > _A classic definition of intelligence (Legg &Hutter) is
             | "Intelligence measures an agent's ability to achieve goals
             | in a wide range of environments"._
             | 
              | Machine learning mostly became AI through neural nets
              | looking really good - but none of that involved them
              | becoming more goal-oriented; if anything, less so. It was
              | far more that high-dimensional curve fitting can actually
              | get you a whole lot, and when you do well, you can call
              | it AI.
        
               | PeterisP wrote:
               | What do you mean by "today are known as AI" and "became
               | AI" ?
               | 
               | Neural networks have always been part of AI, machine
               | learning has always been a subfield of AI, all these
               | things are terms within the field of AI since the day
               | they were invented, there never was a single day in
               | history when those things had not been part of AI field.
               | 
                | Neural networks were part of the AI field also back
                | when neural nets were _not_ looking really good - e.g.
                | Minsky's 1969 book "Perceptrons", which was a
                | description of the neural networks of the time and a
                | big critique of their limitations - that was an AI
                | publication by an AI researcher on AI topics.
               | 
               | Your implication that an algorithm needs to do well so
               | that "you can call it AI" is ridiculous and false. First,
               | _no_ algorithm should be called AI, AI is a term that
               | refers to a scientific field of study, not particular
               | instances of software or particular classes of
               | algorithms. Second, the field of AI describes (and has
               | invented) lots and lots of trivial algorithms that
               | approximate some particular aspect of intelligent-like
               | behavior.
               | 
                | Lots of things that have now branched into separate
                | fields were developed during AI research in, e.g., the
                | 1950s - e.g. all decision-making studies (including
                | things that are now ubiquitous, such as the minimax
                | algorithm in game theory), planning and scheduling
                | algorithms, etc. all are subfields of AI. The study of
                | knowledge representation is a subfield of AI;
                | probabilistic reasoning such as Kalman filters is part
                | of AI; automated logic reasoning algorithms are one
                | more narrow subfield of AI, etc.
        
               | ozim wrote:
                | I think what the parent poster means is that for people
                | who don't know better, "neural networks === AI". For
                | people who know a bit more, there is a bunch of other
                | stuff besides neural networks, and neural networks are
                | not some god-sent solution for AI.
        
             | corporateslave5 wrote:
              | The thing with differentiating machine learning from AI
              | is that nothing in the AI world works except machine
              | learning. It's just a bunch of old theories and ideas,
              | none of which have panned out.
        
           | cgriswald wrote:
           | > so why label it as such?
           | 
           | Great misunderstandings are often profitable for those who
           | understand.
        
             | Accujack wrote:
             | Or put more simply... marketing.
             | 
              | With every new discovery ever, people wanting to exploit
              | it have done whatever was necessary to use people's
              | honest interest in new technology and good feelings about
              | human progress to get money or power.
        
         | aSplash0fDerp wrote:
         | > Also as usual, there's still a yawning gap between what
         | domain experts are up to
         | 
         | The best homophone for AI is "beyond be yawned".
         | 
          | Comparative analysis against refined/biased datasets with
          | Kiptronics (knowledge is power electronics/devices) is going
          | to change the world, but spectacular fodder is to be
          | expected.
        
       | nickpinkston wrote:
       | I see a lot of AI engineers who seem concerned with this
       | particular issue, which I never really understand.
       | 
        | Is it because of a perception that most regular people are
        | likely overestimating the speed at which AI is going to
        | overtake human intelligence? Or is it more about corporate
        | management wanting miracles that aren't possible?
       | 
       | Why does this matter and always seem to be talked about?
        
         | UncleOxidant wrote:
         | Because there's a history of overhyping ML/AI (whatever you
         | want to call it) leading to AI winters. Winter in this case
         | being kind of like a recession in economic terms - most
         | research funding dries up, etc. We essentially had one of those
         | winters from the late 80s until about a dozen years ago. A lot
         | of laymen now think of AI as being "magic" that can do anything
         | and that's not a good thing when the reality turns out to be
         | different.
         | 
         | At this point I don't think we'll see an AI winter as deep as
         | some of the previous ones. But we could certainly see an AI
         | Fall.
        
           | 0xdeadbeefbabe wrote:
           | The name is overhyped and pretentious by itself, and history
           | bears this out. Who cares if it's an AI fall or winter if
           | it's an AI stupid, because of all the credulous students.
           | 
            | Edit: Russell and Norvig's book is good though
            | http://aima.cs.berkeley.edu/
        
           | YeGoblynQueenne wrote:
           | >> Because there's a history of overhyping ML/AI (whatever
           | you want to call it) leading to AI winters.
           | 
           | Note that past AI winters have not occurred because of
           | overhyping _machine learning_. They occurred because of
           | overhyping of _symbolic AI_ that had nothing to do with
           | machine learning. For example, the last AI winter at the end
            | of the '80s happened because of the overhyping of expert
            | systems, which of course are not machine learning systems.
           | 
           | Machine learning is not all, not even most, of AI,
           | historically. It's the dominant trend right now, but it was
           | not the dominant trend in the past. The dominant trend until
           | the 1980's was symbolic reasoning.
        
             | radarsat1 wrote:
              | But symbolic reasoning mostly worked, did it not?
              | However, its Achilles heel was that for it to be useful,
              | it's necessary to distill a lot of domain knowledge into
              | a format that can be processed by an expert system. That
              | means writing tens upon thousands of rows of "if this
              | then that".
             | 
              | Machine learning is different in that it is more amenable
              | to distilling those rules from the data automatically. It
              | is successful where symbolic reasoning failed because it
              | can go from the raw data. A good portion of machine
              | learning research is in new ways to preprocess and format
              | data into a structure that can be further consumed by
              | linear algebra, which turns out to be a lot easier and
              | more practical than figuring out a huge database of
              | sensible first-order predicate logic statements.
             | 
             | If ML techniques can be used to feed symbolic systems, the
             | latter would show promise again, which is already happening
             | in recent trends in causal inference and graph networks.
             | The marriage of these two fields is inevitable, and has
             | already started.
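              | 
              | A minimal sketch of that "learn the rules from data,
              | then read them off symbolically" direction (assuming
              | scikit-learn; the dataset and tree depth are purely
              | illustrative):
              | 
              |     from sklearn.datasets import load_iris
              |     from sklearn.tree import (DecisionTreeClassifier,
              |                               export_text)
              | 
              |     # Fit a classifier from raw data; no hand-written
              |     # rules required
              |     iris = load_iris()
              |     tree = DecisionTreeClassifier(max_depth=2)
              |     tree.fit(iris.data, iris.target)
              | 
              |     # Read the learned model back as if-then rules a
              |     # symbolic system could consume
              |     print(export_text(
              |         tree, feature_names=iris.feature_names))
              | 
              | Not causal inference or graph networks, of course, but
              | the same spirit: the statistics produce the symbols.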
        
             | UncleOxidant wrote:
             | There was a neural net popularity surge in the late 80s,
             | early 90s. Of course, the hardware wasn't there yet to be
             | able to deliver on the promises. I was in a Goodwill book
             | section about a year ago and there were a couple of NN
             | books from that era on sale for $3, one titled "Apprentices
             | of Wonder: Inside the Neural Network Revolution" from 1990
             | and the other was for programmers and included C code for a
             | NN to predict the stock market from 1989. Anyway, that all
             | had died out by about '92 or '93 and NNs were a pretty dead
             | academic topic until about 2005 or so when they figured out
             | that GPUs could be used to accelerate them.
        
         | twblalock wrote:
          | It seems analogous to Searle's "Chinese Room" argument:
          | automated responses to predefined stimuli aren't the same as
          | "intelligence" or "understanding".
         | 
         | The OP suggests modern AI is a fancy way of teaching systems to
         | effectively hardcode or automate their behavior themselves.
         | 
         | I'm not sure why that matters, as long as the results are what
         | we aim for. It's not like most AI researchers are trying to
         | create sentient artificial life-forms.
        
           | taurath wrote:
           | Interestingly though, at the population level it can seem a
           | lot closer.
        
           | mattkrause wrote:
           | It matters because the hard-coded behavior is brittle and
           | often doesn't do exactly what we want (or think).
           | 
           | For example, GPT-2 has been ascribed nearly magical powers:
           | it's a knowledge base, it can play chess, it does calculus,
           | it's a dessert topping AND a floor wax!
           | 
           | When you look closer, however, it doesn't do any of those
           | things particularly well. It can regurgitate something that
           | looks like a true fact--or its negation with equal
           | probability. It doesn't quite know the rules of chess. It
           | needs a solver to check that the solution to an integral is,
           | in fact, a solution.
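            | 
            | A tiny sympy sketch of that last point, checking a proposed
            | antiderivative with a solver (the candidate here is made up
            | for illustration):
            | 
            |     import sympy as sp
            | 
            |     x = sp.symbols('x')
            |     # Suppose a model proposes this for integral(log x)
            |     candidate = x * sp.log(x) - x
            |     # Differentiate and compare to verify the claim
            |     residual = sp.simplify(sp.diff(candidate, x)
            |                            - sp.log(x))
            |     print(residual == 0)  # True: the answer checks out
            | 
            | The model supplies the guess; the verification is plain old
            | computer algebra.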
        
             | whatshisface wrote:
             | All of those caveats apply to human intelligence, but to a
             | lesser degree. Kids can play chess without exactly knowing
             | the rules, and come on, everybody needs to check their
             | integrals.
        
         | TallGuyShort wrote:
         | Personally, when a lay person asks what I do, I like telling
         | people I work on "Artificial Intelligence software" because
         | it's the most accurate term that doesn't (a) get an immediate
         | request to implement their app idea for them and (b) require
         | explaining what machine learning / deep learning is.
         | 
         | But beyond that I hate the term within the industry because I
         | think artificial intelligence gets equated with a Jarvis-like
         | general AI that will talk to you like a superhuman servant. I
         | get the desire to better define the current state of the art.
         | But for most people, I agree it's going to seem like pedantry.
        
           | radarsat1 wrote:
           | > artificial intelligence gets equated with a Jarvis-like
           | general AI that will talk to you like a superhuman servant
           | 
            | To be fair, for a lot of researchers, that _is_ the
            | ultimate end goal, even for those who admit we are not even
            | close to it. I for one first got interested in AI from an
            | '80s movie, can't remember which, with a character who
            | talked to his computer, which talked back. Since those
            | early years, I haven't spent even one second _working_ on
            | actual AGI, seeing the plethora of subgoals needed to get
            | there, but.. _thinking_ about it.. plenty. That dream is a
            | driving force behind more ML/AI researchers than maybe you
            | think. Particularly in the RL community, I would guess.
        
           | laichzeit0 wrote:
            | There's already a term you can use: "statistical learning".
            | There's even a well-known, important book with that title:
            | The Elements of Statistical Learning.
        
             | TallGuyShort wrote:
             | There are books well known in the field with "machine
             | learning" in the title. I don't think that's any clearer to
             | a lay person.
        
         | wayoutthere wrote:
         | It's because this kind of hype inevitably leads to a trough of
         | disillusionment -- the methods we collectively call "AI" today
         | are never going to lead to a general-purpose artificial
         | intelligence. People are disappointed we don't have self-
         | driving cars yet, but it's not clear whether that problem
         | domain is constrained enough for deep neural networks to solve.
         | 
         | What we have developed are ways to automate complex tasks
         | within a constrained input domain that can be easily
         | quantified. It seems like magic, which leads people to say that
         | it's "AI" but in reality it's just a complex automation built
         | through reinforcement techniques that leverage some clever math
         | tricks. Throw an unexpected input or new set of circumstances
         | at the model and you get interesting results.
         | 
          | It's not that people are overestimating the speed with which
          | AI is going to overtake human intelligence -- it's that the
          | techniques we're using today that we call "AI" are not
          | capable of doing anything of the sort.
        
           | hunter-gatherer wrote:
            | This. Working in big corporations and government, I have
            | seen how far this disillusionment can take an organization
            | down the wrong road. How to articulate complex technical /
            | scientific topics to a bureaucracy, I am learning, is a
            | very valuable and needed skill amongst engineers.
        
             | wayoutthere wrote:
             | It's a fine line you have to walk. They usually are looking
             | for a person who will tell them what they want to hear, so
             | it's usually a matter of starting with "the art of the
             | possible" (aka a bunch of bullshit they heard on NPR) and
             | working them over to something more realistic.
             | 
              | I've found it helps if you can frame it in the context of
              | the other options (i.e. agree with where they want to go
              | and present multiple ways to get there); they're more
              | receptive. Leaders know about these hype cycles too, but
             | they often have to play along for political reasons and
             | they'll be thankful if you work with them rather than
             | against them.
        
         | liamcardenas wrote:
         | Andrew Yang is a serious contender for the US presidency whose
         | entire platform rests on assumptions about AI. He wants to
         | fundamentally reshape welfare in the country and implement an
         | entirely new tax. Thinking clearly about AI is therefore very
         | important, as it is having real and substantial political
         | implications.
        
           | chillacy wrote:
           | I remember him a year ago saying stuff that wasn't
           | particularly mainstream, like warning about fast food
           | cashiers being replaced by kiosks, malls closing due to
           | competition with Amazon, call center workers being automated,
           | etc.
           | 
           | These are all things that are coming, I have peers working on
           | some of them, but they aren't particularly mainstream, even
           | though the most accessible jobs in the economy fall under
           | those categories.
        
           | chrshawkes wrote:
           | Luckily he won't be elected and shouldn't if he really thinks
           | AI (in its current form) will solve these problems.
        
             | jimbokun wrote:
             | He doesn't think it will solve them, he thinks it is going
             | to cause them, and that we need to be ready with solutions.
             | 
             | Take Universal Basic Income. He is predicting far more jobs
             | are going to be automated in the near future than most
             | people expect, and something like UBI will be needed to
             | keep the people out of work from starving or rioting.
        
       | deesep wrote:
        | When machines transcend their programmed limitations to shape
        | their environment in their own image, then they become truly
        | intelligent.
        
         | ForrestN wrote:
         | Why would a non-human intelligence necessarily have a drive to
         | "shape their environment?" Maybe a non-human intelligence would
         | discover the inevitable end of the habitable universe and opt
         | to just do nothing?
        
           | whatshisface wrote:
            | Nobody's going to pay for AWS hours for a lazy robot.
            | They'll keep changing it until it does something. The human
            | drive, which is not essentially rational, will give birth
            | to the machine drive, which won't be rational either.
        
       ___________________________________________________________________
       (page generated 2020-01-07 23:00 UTC)