[HN Gopher] AI thinks like a corporation (2018)
       ___________________________________________________________________
        
       AI thinks like a corporation (2018)
        
       Author : samclemens
       Score  : 84 points
        Date   : 2020-02-03 20:30 UTC (1 day ago)
        
 (HTM) web link (www.economist.com)
 (TXT) w3m dump (www.economist.com)
        
       | ld-50-agi-v3 wrote:
       | My advanced image recognition matched that picture to:
       | 
        | 1) Rectilinear beehive (95% confidence)
        | 2) Rectilinear jail (5% confidence)
        | 3) Ancestral forest home (0% confidence)
        
       | aaron-santos wrote:
        | For another take on this corporations-as-AI idea, I cannot
        | recommend Charles Stross' 34C3 talk on this very topic enough.
       | https://www.youtube.com/watch?v=RmIgJ64z6Y4 (picks up approx
       | 10mins in).
        
       | Barrin92 wrote:
        | There's a great article by Ted Chiang about this [1], comparing
        | the psychology of the paperclip-maximizer AI scenario to the
        | single-minded "increasing shareholder value" mentality of
        | business, in particular Silicon Valley startups.
        | 
        | He points out that the companies of the very people who fear
        | the AI apocalypse already do what they accuse AI of: pursuing
        | growth and 'eating the world' at all costs to maximize narrowly
        | defined, singular goals.
       | 
       | [1]https://www.buzzfeednews.com/article/tedchiang/the-real-
       | dang...
        
         | bryanrasmussen wrote:
         | So the corporations project their own immorality on AI.
        
           | tomrod wrote:
           | Only if embedded in the cost function or the specific
           | implementation.
           | 
           | AI has no concept of morality.
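            | 
            | A toy sketch of what "embedded in the cost function" could
            | look like (hypothetical harm_score term, purely
            | illustrative):
            | 
            |     def loss(profit, harm_score, ethics_weight=0.0):
            |         # With ethics_weight == 0 the objective is pure
            |         # profit; any "morality" exists only because we
            |         # chose to add the penalty term.
            |         return -profit + ethics_weight * harm_score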
        
         | joe_the_user wrote:
         | Great quote: _" There are industry observers talking about the
         | need for AIs to have a sense of ethics, and some have proposed
         | that we ensure that any superintelligent AIs we create be
         | "friendly," meaning that their goals are aligned with human
         | goals. I find these suggestions ironic given that we as a
         | society have failed to teach corporations a sense of ethics,
         | that we did nothing to ensure that Facebook's and Amazon's
         | goals were aligned with the public good. But I shouldn't be
         | surprised; the question of how to create friendly AI is simply
         | more fun to think about than the problem of industry
         | regulation, just as imagining what you'd do during the zombie
         | apocalypse is more fun than thinking about how to mitigate
         | global warming."_
         | 
          | The narrative that the danger is that corporations might
          | accidentally let AIs gain power is appealing, but it ought
          | to be challenged.
         | 
          | The paperclip/strawberry monomania fear masks the way things
          | are actually shaping up now. AI is far from being able to
          | plan intelligently over a time horizon - maintaining a real-
          | world process all the way to even a smallish goal requires
          | real-world corner-case handling that the best AIs haven't
          | got (self-driving cars have been five years away for 10-15
          | years, etc).
          | 
          | But AIs as "intelligent" filters are here now. These allow
          | "better" decision making along with unaccountable decision
          | making, both of which fit well with the agenda of the modern
          | corporation. And that's the thing - the modern corporation
          | is already short-sighted; the habit of ignoring climate
          | change took hold well before automated decision making
          | appeared. But the modern corporation is still more far-
          | seeing than a deep learning network, and still theoretically
          | and legally accountable to society. Deep learning networks
          | that maximize only the qualities these corporations want
          | maximized, and ignore all other considerations, suit those
          | institutions' short-term preferences all too well.
        
         | madaxe_again wrote:
         | I'd actually turn both this and the original article around,
         | and say that corporations behave like AIs. They are artificial.
         | They are (variably) intelligent.
         | 
         | The functions of processing inputs to create meaningful outputs
         | are carried out by neurons made of meat (human employees) and
         | of silicon. These processes are governed by rules and
         | metaprocesses, which the entity continuously optimises and
         | improves, in order to further its own growth and to improve its
         | fitness for its purpose - typically, enhancing shareholder
          | value. I cannot think of a single criterion for intelligence or
         | even life that a corporation does not possess, from independent
         | action to stimulus response to reproduction to consumption to
         | predation.
         | 
         | I think the first true hard AI will emerge accidentally, and it
         | will be a corporation that has largely optimised humans out of
          | the loop - but even with humans as components of the
          | system, a gestalt, an artificial supra-intelligence, can
          | still emerge.
         | 
         | This also neatly sidesteps the whole question of "should AIs
         | have rights?", as corporations are already legally persons.
        
           | qznc wrote:
           | I agree: http://beza1e1.tuxen.de/companies_are_ai.html
           | 
           | It leads to an interesting question: Is a corporation
           | controlled by its CEO?
        
             | madaxe_again wrote:
             | Nice - that overlaps pretty much perfectly with my
             | reasoning!
             | 
              | In my view, no. The CEO is at best a race-condition-
              | breaking function; at worst, a parasitic infection.
             | 
             | If corporations are self-optimising AIs, then they will
             | optimise CEOs out of existence if they aren't conducive to
             | fitness.
        
       | Animats wrote:
       | ML definitely doesn't. Not a hierarchical system at all. There
       | have been AIs which work like hierarchical control systems, but
       | ML systems are almost the anti-pattern to that.
        
         | jeromebaek wrote:
          | ML is hierarchical. Layers near the input categorise
          | specific low-level features, and layers near the output
          | categorise more abstractly.
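          | 
          | A minimal sketch of that layered structure, assuming PyTorch
          | (purely illustrative, not anyone's production model):
          | 
          |     import torch.nn as nn
          | 
          |     # Early layers respond to low-level features; later
          |     # layers compose them into more abstract categories.
          |     net = nn.Sequential(
          |         nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # edges
          |         nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # motifs
          |         nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # parts
          |         nn.AdaptiveAvgPool2d(1), nn.Flatten(),
          |         nn.Linear(64, 10),  # 10 abstract categories
          |     )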
        
           | FreakLegion wrote:
           | Not all deep learning is hierarchical and not all machine
           | learning is deep learning.
        
       | tartoran wrote:
        | This could go wrong in so many ways. It's a perfect recipe
        | for unaccountability: the algorithm decided, and we don't
        | know why, because it isn't transparent.
        | 
        | But there's some good potential too. It is a tool; it depends
        | on how we wield it, or more precisely, on who wields it.
        
         | oneplane wrote:
          | Depending on what tool it is comparable to and how it is
          | used, an algorithm, ML model, or "AI" would probably end up
          | in the same dual-use discussions we have now.
          | 
          | A hammer that hammers is a good hammer. So if an ML/AI
          | system were optimised to hammer well (and 'be' a hammer),
          | that is fine. If someone then decided to use that hammer to
          | attack someone and break their bones, would that be the
          | hammer's fault? Probably not. But when you get a hammer
          | that is optimised for bone-breaking, the discussion
          | changes. The same goes for a kitchen knife vs. a hunting
          | knife vs. a KA-BAR, which are all knives but are all
          | 'named' with different intentions. Suddenly it's no longer
          | dual-use or 'who is at fault' but the very grey area of
          | 'what was the intention'. And that brings us back to
          | 'transparency', which loops back into "what if it is just a
          | tool". Darn circles.
        
       | dr_dshiv wrote:
       | I think the common-man conception of AI is really flawed.
       | Consider that autopilot -- clearly a form of AI -- was invented
        | in 1914. AI is far older than computers, and corporations are
        | a clear and excellent example. It's a slippery slope -- and I
        | think it is worth sliding all the way down. Once we realize
        | how completely pervasive AI is, we will have a much better
        | understanding of how to govern it in the future.
       | 
       | Personally, I find cybernetics to have a much stronger
       | philosophical basis than AI; I particularly like how Norbert
        | Wiener described how cybernetic feedback loops can lower local
       | entropy.
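        | 
        | A toy sketch of such a feedback loop, in the spirit of the
        | 1914 gyroscopic autopilot (illustrative numbers only):
        | 
        |     # Proportional feedback: sense the deviation from a
        |     # setpoint, actuate against it, repeat. The error
        |     # shrinks each cycle, i.e. local entropy goes down.
        |     heading, setpoint, gain = 30.0, 90.0, 0.2
        |     for _ in range(20):
        |         error = setpoint - heading   # sense
        |         heading += gain * error      # correct
        |     print(round(heading, 1))         # ~89.3, nearing 90.0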
        
         | djs070 wrote:
          | I can't think of a definition of AI you could be using to
          | call autopilot AI. Autopilot didn't teach itself to fly the
          | plane - it was explicitly programmed to do so.
        
           | drdeca wrote:
           | Did Deep Blue teach itself to play chess?
           | 
           | I'm not sure that "self-teaching" should be considered the
           | defining criterion for AI.
        
       | mamon wrote:
        | This is understandable: both AI and corporations are
        | psychopaths - they have no emotions to get in the way of
        | rational thinking, with the ultimate goal of maximizing their
        | personal gain.
        | 
        | The question remains: how do we teach AI empathy?
        
         | Ididntdothis wrote:
         | There is a wide range of what people think empathy is and how
         | to express it in their actions. I think the problem with AI is
         | that it allows people to hide behind it so they don't have to
         | take responsibility for the actions of the AI they designed and
         | paid for.
        
         | unishark wrote:
         | It is basically hacking the algorithm. You seek to bias its
         | output toward some other goals versus what the optimal math
         | would give. One might argue that developing such algorithms for
         | good intentions runs the risk of providing technologies for
         | negative intents. But I think the research on the negative side
         | is generally way ahead anyway, as security is so important.
        
         | frandroid wrote:
          | We teach AI empathy by programming it with the heuristics
          | of care - optimizing for human well-being instead of
          | profit. That is a difficult thing to achieve when the
          | entities paying for the design of AIs are themselves driven
          | by the profit motive, and for-profit AIs are self-
          | perpetuating.
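          | 
          | A toy sketch of that swap (hypothetical metrics, purely
          | illustrative):
          | 
          |     def act(options, objective):
          |         # The "values" live entirely in which objective
          |         # the designer hands the optimizer.
          |         return max(options, key=objective)
          | 
          |     options = [{"profit": 9, "wellbeing": 2},
          |                {"profit": 4, "wellbeing": 8}]
          |     print(act(options, lambda o: o["profit"]))     # 1st
          |     print(act(options, lambda o: o["wellbeing"]))  # 2nd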
        
         | lukifer wrote:
         | I think there's an argument that "empathy" (or even its wider-
         | scoped sibling "compassion") has a potentially sociopathic
         | element: a sort of cynical manipulation to appear virtuous to
         | the tribe, one which is more effective if it convinces our own
         | brains, so as to more effectively convince others. See Tim
         | Wilson's "Strangers to Ourselves", Hanson & Simler's "Elephant
         | in the Brain", etc.
         | 
         | Supposedly, there's also an empathic element to being an
         | effective hunter, both in the human and the animal kingdom. To
         | deeply understand and empathize with your prey helps you
         | capture and devour them. Indeed, the marketing, advertising,
         | and product development divisions of corporations can be deeply
         | "empathic" to the desires of the buyer, without necessarily
         | being to their benefit.
         | 
         | At any rate, I don't disagree; corporations are agnostic to
         | societal externalities, and highly incentivized to create
         | habit-forming relationships with customers, hence often leading
         | to behavior indistinguishable from sociopathy. To some extent,
         | I think the best we can hope for is aligned incentives;
         | sometimes what's best for a company's bottom line really is to
         | make people's lives better. But we shouldn't be naive to
         | exploitative relationships (even when nominally voluntary), nor
         | should we lean on "The Market" as the singular societal
         | organizing principle.
         | 
         | Extrapolating to AI, we should be very cautious as to what
         | metrics we optimize any particular algorithm for. Corporations
         | optimizing for stockholder returns are ultimately a subset of
         | "paperclip maximizing"; what we really want is a balance of
         | multiple leading indicators of success, and to constantly be
         | tweaking those success conditions as we discover new metrics
         | for measuring human flourishing.
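          | 
          | A toy sketch of that balancing act (hypothetical indicators
          | and weights, purely illustrative):
          | 
          |     # Score actions against several leading indicators
          |     # instead of one paperclip-style metric; the weights
          |     # are what society keeps tweaking over time.
          |     weights = {"returns": 0.4, "wellbeing": 0.4,
          |                "environment": 0.2}
          | 
          |     def score(action):
          |         return sum(w * action[k] for k, w in weights.items())
          | 
          |     print(score({"returns": 0.9, "wellbeing": 0.2,
          |                  "environment": 0.1}))  # ~0.46
          |     print(score({"returns": 0.6, "wellbeing": 0.7,
          |                  "environment": 0.6}))  # ~0.64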
        
       | metabagel wrote:
       | Obligatory plug for The Corporation (book & film)
       | 
       | https://thecorporation.com/
       | 
       | The similarity I see between AI and corporations is single-
       | mindedness. They ignore anything outside the scope of their
        | interest, "externalities" in the parlance of The Corporation.
       | Those externalities can have very real consequences.
        
       | avocado4 wrote:
       | AI doesn't think. It's just a tool based on math.
        
         | MiroF wrote:
         | Humanity doesn't think. It's just a bunch of neurons governed
         | by basic physical laws giving rise to some emergent outcome.
        
         | lukifer wrote:
         | "Just" is something of a sneaky word:
         | https://alistapart.com/blog/post/the-most-dangerous-word-in-...
         | 
         | But by some measure, human thinking is itself math: emergent
         | pattern-recognition traveling through cortical hierarchies and
         | deeply nested layers of abstraction and metaphor. While it's
         | indisputable that there's a qualitative leap between something
         | like computer vision, and the basic reasoning capacity and
         | perceptual systems of a human (even a toddler), there isn't
         | anything intrinsically magical about atoms rather than bytes.
         | It appears to simply be a factor of scale, and the billion-
         | year-old "legacy code" gifted to our wetware by iterated
         | selection pressures.
        
       | blackbear_ wrote:
       | Research needs money.
       | 
       | Big money tends to go towards what generates great returns.
       | 
       | Thus a lot of research is devoted to dull tasks that bring
       | business value.
       | 
       | No need to involve AI, or research for that matter.
        
       | snidane wrote:
        | We humans like to think that we are the ultimate organisms
        | and everything else is either the material that composes us
        | (cells and tissue) or our products (corporations, cities).
        | 
        | Organisms are scale-free systems. It's only our human bias to
        | see humans as important. If aliens came to visit the Earth,
        | they would most likely see this planet covered with massive
        | algae which glow at night (i.e. our cities). Cities at night
        | are more noticeable from space than the cells they are
        | composed of - tissue cells (houses) or blood cells (cars and
        | pedestrians).
        | 
        | For some reason all aliens in movies are pictured as humanoid,
        | human-sized creatures, which is most likely due to the same
        | human-centric bias.
        
         | geddy wrote:
          | >We humans like to think that we are the ultimate organisms
          | and everything else is either the material that composes us
          | (cells and tissue) or our products (corporations, cities).
         | 
          | And yet if we perished from the planet, literally every
          | single other species - animal, mineral, or vegetable - would
          | benefit from our absence. Everything would grow back at an
          | amazing rate and it would be like we were never here.
         | 
         | Meanwhile, if the bees disappear, we're all screwed.
        
         | keiferski wrote:
         | I strongly recommend the book version (the films by Tarkovsky
         | and Soderbergh are okay but the book is better) of Solaris. It
         | explores this idea of an alien life form that is decidedly un-
         | humanoid.
         | 
         |  _[Lem, the author]... wrote that he deliberately chose the
         | ocean as a sentient alien to avoid any personification and the
         | pitfalls of anthropomorphism in depicting first contact._
         | 
         | https://en.wikipedia.org/wiki/Solaris_(novel)
        
           | AnIdiotOnTheNet wrote:
            | Lem was good at having _alien_ aliens in a way I haven't
            | seen any other author pull off. I also recommend Fiasco
            | [0].
           | 
           | [0]https://en.wikipedia.org/wiki/Fiasco_(novel)
        
         | david-cako wrote:
         | "Individual" and "collective" is a fundamental/metaphysical
         | concept. General purpose AI probably will have a concept of
         | individual influence/veto among its "nodes" so that it is
         | capable of self organizing and reorganizing.
         | 
         | It remains to be seen if corporate-like structure is something
         | it surpasses, but complex systems seem to usually have orders
         | of influence, and multiple hands on any given wheel.
        
         | thfuran wrote:
         | Even in books, where the limitations of costumes and sets are
         | less relevant, most aliens are at least somewhat humanish.
         | Though there are certainly some more exotic ones like the
         | pattern jugglers, First Person Singular, or the Prime.
        
           | bwi4 wrote:
           | An aside, but Adrian Tchaikovsky's Children of Ruin/Time
            | center on intelligent spiders and mollusks. Earth-centric,
            | as they were bio-engineered from Earth-native species, but
            | an interesting take on non-human intelligence.
        
             | livueta wrote:
             | If we're name-dropping, the Vernor Vinge pack-mind dog
             | things are another fun example of non-human races with
             | actually non-human modes of cognition, even if they're not
             | as far removed as Solaris/Revelation Space sentient oceans.
             | In contrast, although I really liked A Deepness in the Sky
             | for other reasons, the spiders ended up feeling way too
             | much like humans in funny bodies rather than any kind of
             | actual non-human intelligent species.
        
           | gowld wrote:
           | Also https://en.wikipedia.org/wiki/Solaris_(novel)
        
           | donatj wrote:
           | Really comes down to a lack of imagination. I imagine there's
           | a good chance life elsewhere is unrecognizable to us as life.
           | No id, no ego, just a vivacious electrochemical reaction bent
           | on survival. Think viruses on a larger scale.
        
             | reroute1 wrote:
             | "Good chance" based on what?
        
               | oarsinsync wrote:
               | The size of the universe, and our relative size within
               | it.
        
               | pfdietz wrote:
               | That's a non sequitur.
        
               | reroute1 wrote:
               | Color me unconvinced
        
         | [deleted]
        
       | defanor wrote:
       | There's an article with a similar premise by Charlie Stross [1].
       | Though both seem to apply to basically any software, not just
       | statistical (or whatever currently falls under "AI"), and perhaps
       | (AIUI) can be generalized to just (over)simplified models.
       | 
       | [1] http://www.antipope.org/charlie/blog-static/2018/01/dude-
       | you...
        
       | rotrux wrote:
        | AI thinks like a corporation in the sense that neither really
        | thinks. One is a misleading buzzword for modern computer-
        | driven pattern recognition, and the other is a legal label
        | for a group of people trying to make money together. This
        | premise is kind of silly.
        
       ___________________________________________________________________
       (page generated 2020-02-04 23:00 UTC)