[HN Gopher] Artificial Intelligence - The Revolution Hasn't Happened Yet
       ___________________________________________________________________
        
       Artificial Intelligence - The Revolution Hasn't Happened Yet
        
       Author : seagullz
       Score  : 102 points
       Date   : 2020-12-24 18:47 UTC (4 hours ago)
        
 (HTM) web link (medium.com)
 (TXT) w3m dump (medium.com)
        
       | klenwell wrote:
       | _The problem had to do not just with data analysis per se, but
       | with what database researchers call "provenance" -- broadly,
       | where did data arise, what inferences were drawn from the data,
       | and how relevant are those inferences to the present situation?
       | While a trained human might be able to work all of this out on a
       | case-by-case basis, the issue was that of designing a planetary-
       | scale medical system that could do this without the need for such
       | detailed human oversight._
       | 
       | I'm not a data scientist and I've never encountered that term
       | "provenance" before but I've encountered the problem he talks
       | about in the wild here and there and have searched for a good way
        | to describe it. His ultrasound example is a great, chilling
        | illustration of it.
       | 
       | I also like the term "Intelligence Augmentation" (IA). I've
       | worked for a couple companies who liberally sprinkled the term AI
        | in their marketing content. I always rolled my eyes when I came
        | across it or when it came up in, say, a job interview. What we
        | were
       | really doing, more practically and valuably, was this: IA through
       | II (Intelligent Infrastructure), where the Intelligent
       | Infrastructure was little more than a web view on a database that
       | was previously obscured or somewhat arbitrarily constrained to
       | one or two users.
        
         | jamesblonde wrote:
         | We address the problem of adding provenance without rewriting
         | your tensorflow/scikit-learn/pytorch/pyspark application by
         | adding CDC support in the ML stack and collecting all events in
         | a metadata layer, building an implicit provenance graph. It's
         | now part of the open-source Hopsworks platform. See this USENIX
         | OpML'20 talk on it: https://www.youtube.com/watch?v=PAzEyeWItH4
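          | This is not the actual Hopsworks API - just a hedged Python
          | sketch of what an implicit provenance graph built from captured
          | events might look like; all names here are hypothetical:

```python
# Hypothetical sketch: build an implicit provenance graph from
# change-data-capture-style events collected in a metadata layer.
# Names and structures are illustrative, not the Hopsworks API.
from collections import defaultdict

class ProvenanceGraph:
    def __init__(self):
        # artifact -> set of upstream artifacts it was derived from
        self.edges = defaultdict(set)

    def record_event(self, output_artifact, input_artifacts):
        """Record that output_artifact was derived from input_artifacts."""
        for src in input_artifacts:
            self.edges[output_artifact].add(src)

    def lineage(self, artifact):
        """Walk upstream edges to collect everything an artifact depends on."""
        seen, stack = set(), [artifact]
        while stack:
            node = stack.pop()
            for parent in self.edges.get(node, ()):
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

g = ProvenanceGraph()
g.record_event("features_v1", ["raw_table"])
g.record_event("model_v1", ["features_v1", "labels"])
print(sorted(g.lineage("model_v1")))  # -> ['features_v1', 'labels', 'raw_table']
```

          | The point is that the graph is built as a side effect of
          | normal pipeline events, so the application code never has to
          | declare its lineage explicitly.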
        
         | mlthoughts2018 wrote:
         | Data provenance is a standard term of art in machine learning
         | and data science, a "data 101" kind of thing, with many OSS and
         | vendor tools built up to solve provenance problems, like DVC,
         | Pachyderm, kubeflow, mlflow, neptune, etc.
        
           | ACow_Adonis wrote:
           | worked with stats, machine learning and data science for 10+
           | years now. never heard the term used until now. (that's not
           | to say I'm not familiar with the things the term refers to,
           | indeed, most of the intellectual frameworks I've worked with
           | break each of the things that make up provenance into far
           | more fine grained concepts).
           | 
           | course, I've also never heard of or touched the software you
           | listed there either, but that may be because I don't view the
           | data science and machine learning I'm interested in as being
           | about specific software or vendor software...
           | 
            | sounds more like database lingo to me...
        
             | mlthoughts2018 wrote:
             | It's shocking if you've worked professionally in statistics
             | and not heard about data provenance.
             | 
             | A few publications from ~2011-2015 period:
             | 
             | http://ceur-ws.org/Vol-1558/paper37.pdf
             | 
             | https://ieeexplore.ieee.org/document/5739644
             | 
             | https://link.springer.com/chapter/10.1007/978-3-642-53974-9
             | _...
             | 
             | Add a variety of additional links dating back a bit further
             | (note the emphasis in this case on _research_ data and
             | tracking state of an experiment).
             | 
             | https://nnlm.gov/data/thesaurus/data-provenance
             | 
             | Data provenance is not a database / data warehouse term. It
             | is uniquely and specifically a basic "101" concept of
             | statistical science and ML / data science, where the
             | custody and tracking of data are specifically tied to
             | iterations of experiments, prototypes and research, for the
             | sake of reproducibility.
             | 
              | If I were interviewing an experienced statistical researcher
             | and they didn't at least have a working knowledge of the
             | core concepts, that would be a huge red flag.
        
             | renjimen wrote:
             | I've also worked as a data scientist for a few years and
             | have never heard or used the word "provenance" in a DS
             | context. Some people used it in the oil & gas industry when
             | talking about where reservoir sands came from, but that
              | usually garnered an eye-roll and mental translation to more
             | everyday language.
        
         | btilly wrote:
         | Provenance is an idea that shows up in multiple fields. I first
         | encountered it in discussions of archeology. But then it showed
         | up in, for example,
         | https://www.ralfj.de/blog/2020/12/14/provenance.html discussing
         | how improper handling of pointer provenance can cause code to
         | get miscompiled.
         | 
         | https://en.wikipedia.org/wiki/Provenance gives more on the term
         | and the way it shows up.
        
           | jjeaff wrote:
           | You'll hear the term provenance used quite a bit on PBS's
           | long running Antiques Roadshow.
        
             | kodah wrote:
              | Provenance is also used in wine and art, where the value
              | largely hinges on a chain of custody through trustworthy
              | people or institutions.
              | 
              | More interestingly, both wine and art have had their
              | provenance widely exploited for massive profit while posh
              | people think they're enjoying something exclusive.
        
         | ape4 wrote:
         | Wikipedia says "Provenance is conceptually comparable to the
         | legal term chain of custody."
         | https://en.wikipedia.org/wiki/Provenance
        
           | thedudeabides5 wrote:
           | If you (ever) need to update your data, you need to know
           | where you got it from, what was wrong with it originally, and
           | how to pull it again.
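            | A sketch of the minimal record this implies, with
            | illustrative field names (not from any particular tool):

```python
# Hedged sketch of the minimal provenance record the comment implies:
# where the data came from, what was wrong with it originally, and how
# to pull it again. Field names and values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    source: str                   # where the data arose
    retrieved_at: str             # when it was pulled
    known_issues: list = field(default_factory=list)  # what was wrong with it
    refresh_cmd: str = ""         # how to pull it again

rec = ProvenanceRecord(
    source="s3://example-bucket/scans/2020-12.csv",
    retrieved_at="2020-12-24",
    known_issues=["resolution differs across machines"],
    refresh_cmd="aws s3 cp s3://example-bucket/scans/2020-12.csv .",
)
print(rec.known_issues[0])
```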
        
         | agency wrote:
         | The IA terminology brings to mind the classic "Augmenting Human
         | Intellect"[1] essay by Doug Engelbart (famous for giving "The
         | Mother of all Demos"[2])
         | 
         | [1] https://www.dougengelbart.org/content/view/138
         | 
         | [2] https://en.wikipedia.org/wiki/The_Mother_of_All_Demos
        
           | bigbubba wrote:
           | It reminds me of the memex essay:
           | https://www.theatlantic.com/magazine/archive/1945/07/as-
           | we-m...
           | 
           | https://en.wikipedia.org/wiki/Memex
        
       | ksec wrote:
        | While real AI hasn't really happened yet, Machine Learning has
        | definitely made a big impact, with lots of potential. I think we
        | are still in the middle of the S-curve in ML.
       | 
       | And AI is like.... Fusion? We are always another 50 years away.
        
       | xmo wrote:
       | Cross posted medium link:
       | https://medium.com/@mijordan3/artificial-intelligence-the-re...
        
         | dang wrote:
         | Since the original URL
         | (https://rise.cs.berkeley.edu/blog/michael-i-jordan-
         | artificia...) is responding slowly and points to the medium.com
         | URL as the original source anyhow, we've changed to the latter.
         | Thanks!
        
       | ipnon wrote:
       | So what do we name this new emerging engineering discipline?
       | 
       | AI engineering?
       | 
       | Cybernetic engineering?
       | 
       | Data engineering?
        
         | beaconstudios wrote:
          | Cybernetics and systems engineering certainly have to make a
          | comeback if we are to solve coordination problems like this at
         | planetary scale. It deeply saddens me that we almost reached a
         | popular acceptance of cybernetics in the 60s, but it passed us
         | by - we'd be in a much better position now if it had become a
         | mainstream science in the way that other, much less useful
         | sciences have.
        
           | thx2099100 wrote:
           | I agree.
           | 
            | As I see from reading a little about the field's history and
            | literature, it suffered the same fate as other endeavors that
            | are complex and still have a lot left to solve.
            | 
            | People become interested in it, look for simpler 'popular'
            | formulations, and then the watered-down versions become more
            | popular than the original, more complex version that needs
            | more rigor and discipline.
            | 
            | But without that rigor and discipline, you can argue for and
            | conclude anything and its opposite with these tools.
            | 
            | So people on the outside see the field as yet another fad,
            | and the whole field dies down, taking the original version
            | with it.
            | 
            | It's much like AI today, with everyone labeling their stuff
            | as AI, diluting the term more and more as time passes.
            | 
            | What cybernetics and systems engineering need is a rebranding
            | and a separation from the 'softer' side that developed later.
            | 
            | This is where I think some researchers in category theory,
            | like Jules Hedges, might help: defining dynamical and more
            | general systems in a loose but still formal way, say with a
            | computer proof assistant sort of tool.
        
       | dhairya wrote:
       | Part of the challenge of pursuing this comprehensive type of AI
       | infrastructure is that it requires massive coordination and
       | collaboration. Unfortunately the incentives in both industry and
       | academia make it difficult to even start such a project. As a
       | result we're stuck with incremental work on narrow problems.
       | 
        | I've been on both sides of the table (I started in industry
        | developing AI solutions and am now in academia pursuing a PhD in
        | AI). When I was on the industry side, where the information and
        | infrastructure were there to build such a system, you had to
        | deal with the bureaucracy and institutional politics.
       | 
       | In academia, the incentives are aligned for individual production
       | of knowledge (publishing). The academic work focuses on small
       | defined end-to-end problems that are amenable to deep learning
       | and machine learning. The types of AI models that emerge are
       | specific models solving specific problems (NLP, vision, play go,
       | etc).
       | 
        | It seems that to move towards developing large AI systems, we
        | need a new model of collaboration. There are existing models in
        | the world of astrophysics and medical research that we can look
        | to for inspiration. Granted, they have their own issues of
        | politics, but it's interesting that projects of similar scope
        | haven't emerged on the AI side yet.
        
       | boltzmannbrain wrote:
        | This post should (1) reflect the 2018 posting date, and (2) link
        | to the main hosting site:
       | https://hdsr.mitpress.mit.edu/pub/wot7mkc1/release/9
        
       | nextos wrote:
       | Dead link for me, but archive.org has a snapshot:
       | https://web.archive.org/web/20201224185231/https://rise.cs.b...
        
       | Ericson2314 wrote:
       | The reason we don't just have great expert systems from the last
       | 30 years is because Capital is more interested in cutting wages
       | than increasing productivity.
        
       | tucnak wrote:
       | >Artificial Intelligence - The Revolution Hasn't Happened Yet
       | 
       | No shit
        
       | yalogin wrote:
        | The phrase AI has always bothered me. What we have is a generic
        | way to do "curve fitting" on a large amount of data. Nothing
        | more. The one difference is that the "curve" is a black box, but
        | it still strictly adheres to the input used.
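        | A minimal sketch of the "curve fitting" view: fit a flexible
        | function to data, then observe that it only behaves sensibly on
        | inputs like the ones it was trained on (all numbers here are
        | illustrative):

```python
# Fit a flexible "model" (a high-degree polynomial) to noisy data and
# compare its behavior inside vs. outside the training range.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)

coeffs = np.polyfit(x, y, deg=9)   # the black-box "curve"
inside = np.polyval(coeffs, 0.5)   # within the training range: close to sin
outside = np.polyval(coeffs, 2.0)  # far outside it: extrapolation degrades

print(abs(inside) < 0.5)           # True: near the true value sin(pi) = 0
```

        | The fitted curve tracks the data where data existed; away from
        | that range it says nothing trustworthy, which is the sense in
        | which it "strictly adheres to the input used".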
        
       | MichaelRazum wrote:
       | Actually I think the first example was a really simple case,
       | where statistics would expose the error. So even the doctor said,
       | that they experienced an uptick in Down syndrome diagnoses. So
        | basically they just didn't investigate it properly. From my
        | experience, every advanced ML system has proper monitoring, and
        | such anomalies would be detected very fast, especially when you
        | change the machines. It is a shame that the doctors couldn't
        | figure it out by themselves, or at least investigate it properly.
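        | A hedged sketch of the kind of monitoring that would catch such
        | an uptick: a simple two-proportion z-test comparing the flag
        | rate before and after the machine change (all numbers invented):

```python
# Compare the positive-flag rate after a change against the historical
# baseline; a large z statistic flags the uptick as unlikely to be chance.
import math

def two_proportion_z(pos_a, n_a, pos_b, n_b):
    """z statistic for the difference between two proportions."""
    p_a, p_b = pos_a / n_a, pos_b / n_b
    p = (pos_a + pos_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Baseline: 20 flagged of 10,000 scans; after the new machine: 60 of 10,000.
z = two_proportion_z(20, 10_000, 60, 10_000)
print(z > 3)  # True: far beyond what chance variation would explain
```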
        
       | drevil-v2 wrote:
       | I wonder what is the end game in the reality where we do achieve
        | Artificial General Intelligence? It seems like an ethical
        | minefield to me.
       | 
       | You have companies like Uber/Lyft/Tesla (and presumably the rest
       | of the gig economy mob) waiting to put the AI into bonded/slave
       | labor driving customers around 24/7/365.
       | 
       | If it truly is a Human level intelligence, then it will have
       | values and goals and aspirations. It will have exploratory
       | impulses. How can we square that with the purely commercial tasks
       | and arbitrary goals that _we_ want it to perform?
       | 
        | Either we humans want slaves that will do what we tell them to,
        | or we treat them like children who may or may not end up as the
        | adults their parents think/hope they will become. I doubt it is
        | the latter, because why else would billions of dollars of
        | investment be pumped into AI? They want slaves.
        
         | WitCanStain wrote:
         | I don't think the claim that human-level intelligence entails
         | human ambitions has been substantiated. Why could you not have
         | a system that does things as intelligently as a human but
         | without a will of its own? It would only make sense if having
         | human values and goals is necessary to having intelligence but
         | I don't see how that could be true.
        
         | root_axis wrote:
         | There's no reason to believe that future AGIs will necessarily
         | have values, goals, and aspirations.
        
         | goatlover wrote:
         | To avoid paying employees, creating greater profit margins.
        
         | coddle-hark wrote:
         | The robots will gain civil rights the same way humans did,
         | either by means of violence or swaying public opinion.
         | Hopefully the latter. This isn't a guess as to how future
         | robots will work, this is an observation about how humans work.
        
       | lifeisstillgood wrote:
       | >>> in Down syndrome diagnoses a few years ago; it's when the new
       | machine arrived
       | 
       | Hang on - uptick in _diagnosis_ (ie post amniocentesis) or uptick
       | in _indicators_. One indicates unnecessary procedures, one
       | indicates a large population of previously undiagnosed downs ....
       | 
        | One assumes the indicator - and greatly hopes there is improved
        | detection, as I had at least one of these scares with my own kids.
        
         | gwern wrote:
         | Presumably what he is leaving out is that the increase in
         | white-spots led to more amniocentesis, which then confirms the
         | Down syndrome. If you did amniocentesis on all babies, it would
         | of course increase the diagnosis rate even more.
         | 
         | Whether this is a bad thing, as he claims, depends on whether
         | you believe screening was being done optimally before, and that
         | will depend quite a bit on things left out like the utility of
         | not having a Down baby. (He doesn't present his working out the
         | entire scenario, as it's just an aside, but hopefully before
         | Jordan went around telling people how to change their prenatal
         | screening systems, he did work it out a little bit more than
         | back-of-the-envelope.)
        
         | aaron-santos wrote:
         | More false positives from ultrasounds could lead to more
         | amniocentesis true positives simply by increasing the number of
         | amniocentesis performed. Without more information it's not
         | possible to tell.
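          | The base-rate point can be sketched with invented numbers: a
          | more sensitive machine flags more cases, triggering more
          | amniocenteses and therefore more confirmed diagnoses, even
          | when the underlying prevalence is unchanged:

```python
# Invented numbers illustrating how diagnoses can rise with no change
# in true prevalence: the flagged cases get a (near-perfect) amnio,
# so more flags mean more confirmations and more procedures.
prevalence = 0.002            # assumed true rate in the screened population
n = 100_000                   # pregnancies screened
cases = n * prevalence        # 200 true cases
non_cases = n - cases

def screen(sensitivity, false_pos_rate):
    """Confirmed diagnoses and total amnios if every flagged case gets one."""
    flagged_true = cases * sensitivity
    flagged_false = non_cases * false_pos_rate  # amnio'd but test negative
    return flagged_true, flagged_true + flagged_false

old_dx, old_amnios = screen(0.5, 0.01)  # old machine
new_dx, new_amnios = screen(0.9, 0.05)  # sharper machine, more noise too

print(new_dx > old_dx and new_amnios > old_amnios)  # True
```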
        
       | joe_the_user wrote:
       | How would one put it?
       | 
        | "Adaptive intelligence" might be described as the ability to be
        | given a few instructions, gather some information, and take
        | actions that accomplish the instructions. It's what "underlings"
        | or "minions" do.
        | 
        | But if we look at deep learning, it's almost the opposite of
        | this. Deep learning begins with an existing stream of data, a
        | huge stream, large enough that the system can just extrapolate
        | what's in the data, including which data leads to which
        | judgments. And that works for categorization and for decision
        | making that duplicates the decisions humans make, or even
        | duplicates what works, what wins, in a complex interaction
        | process. But none of that involves any amount of adaptive
        | intelligence. It "generalizes" something, but our data
        | scientists have no idea exactly what.
       | 
        | The article proposes an "engineering" paradigm as an alternative
        | to the present "intelligence" paradigm. That seems more sensible,
        | yes. But I'm doubtful this could be accepted. Neural network AI
        | seems like a supplement to the ideology of unlimited data
        | collection. If you put a limit on what "AI" should do, you'll put
        | a limit on the benefits of "big data".
        
       | [deleted]
        
       ___________________________________________________________________
       (page generated 2020-12-24 23:00 UTC)