[HN Gopher] Anthropic AI
       ___________________________________________________________________
        
       Anthropic AI
        
       Author : vulkd
       Score  : 75 points
       Date   : 2021-05-28 17:25 UTC (5 hours ago)
        
 (HTM) web link (www.anthropic.com)
 (TXT) w3m dump (www.anthropic.com)
        
       | gavanwilhite wrote:
       | This looks quite promising!
        
         | mark_l_watson wrote:
         | I agree! Love the public benefit aspect of Anthropic.
        
       | Animats wrote:
       | Their paper "Concrete problems in AI safety"[1] is interesting.
       | Could be more concrete. They're run into the "common sense"
       | problem, which I sometimes define, for robots, as "getting
       | through the next 30 seconds without screwing up". They're trying
       | to address it by playing with the weighting in goal functions for
       | machine learning.
       | 
       | They write "Yet intuitively it seems like it should often be
       | possible to predict which actions are dangerous and explore in a
       | way that avoids them, even when we don't have that much
       | information about the environment." For humans, yes. None of the
       | tweaks on machine learning they suggest do that, though. If your
       | constraints are in the objective function, the objective function
       | needs to contain the model of "don't do that". Which means you've
       | just moved the common sense problem to the objective function.
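        | 
        | As a minimal sketch (mine, not from the paper) of what
        | "constraints in the objective function" means: the "don't do
        | that" model has to live inside the penalty term, so the common
        | sense problem just moves into unsafe() below. The names and the
        | penalty scheme are made up for illustration.
        | 
        |     def unsafe(state, action):
        |         # Hypothetical safety model; building this is the hard
        |         # part, not wiring it into the reward.
        |         return action == "drive_off_cliff"
        | 
        |     def shaped_reward(base_reward, state, action, penalty=100.0):
        |         # Penalty-shaped objective: flagged actions lose reward,
        |         # but anything unsafe() fails to cover is still fair
        |         # game for the optimizer.
        |         r = base_reward(state, action)
        |         if unsafe(state, action):
        |             r -= penalty
        |         return r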
       | 
       | Important problem to work on, even though nobody has made much
       | progress on it in decades.
       | 
       | [1] https://arxiv.org/pdf/1606.06565.pdf
        
       | ansk wrote:
        | I can't find any mention of who currently comprises the core
        | research team. The site mentions Dario Amodei as CEO, and their
        | listed prior work suggests some others from OpenAI may be
        | tagging along.
       | However, the success of this group is going to be highly
       | dependent on the caliber of the research team, and I was hoping
       | to see at least a few prominent researchers listed. I believe
       | OpenAI launched with four or five notable researchers as well as
       | close ties to academia via the AI group at Berkeley. Does anyone
       | have further info on the research team?
        
         | chetan_v wrote:
          | Seems you can see some of them on their company LinkedIn page:
         | https://www.linkedin.com/company/anthropicresearch/about/
        
           | ansk wrote:
            | LinkedIn authwall, we meet again. Could someone list the
            | researchers (if there are any, and assuming there are only a
            | few)? Frankly, it's not a great sign that the Anthropic site
           | isn't touting the research team itself and LinkedIn sleuthing
           | is even necessary.
        
             | Qworg wrote:
             | Current list (in LI order):
             | 
             | * Dario Amodei
             | 
             | * Benjamin Mann
             | 
             | * Kamal Ndousse
             | 
             | * Daniela Amodei
             | 
             | * Sam McCandlish
             | 
             | * Tom Henighan
             | 
             | * Catherine Olsson
             | 
             | * Nicholas Joseph
             | 
             | * Andrew Jones
        
               | ansk wrote:
               | Thank you.
        
         | phreeza wrote:
         | Chris Olah posted that he is involved.
        
       | n1g3Jude wrote:
        | Complete waste of money... Better to burn cash directly, because
        | that at least generates heat... This will generate nothing.
        
         | etaioinshrdlu wrote:
         | Since this is Hacker News, I'll point out that training on GPUs
         | produces plenty of heat.
        
         | m4t3june wrote:
          | That's not true; they might generate some heat with the GPU
          | training.
        
       | joe_the_user wrote:
       | Looks like an interesting project. The thing is, I don't think
       | ideal qualities like "reliable, interpretable, and steerable" can
       | really be simply added "on top of" existing deep learning systems
       | and methods.
       | 
        | Much is made of GPT-3's ability to sometimes do logic or even
        | arithmetic. But that ability is unreliable and, worse, spread
        | through the whole giant model. Extracting a particular piece of
        | specifically logical reasoning from the model is a hard problem.
        | You can do it, but at N times the cost of the base model. And in
        | general, you can add extras to the basic functionality of deep
        | neural nets (few-shot, generative, etc.) but at a cost of, again,
        | N times the base (plus decreased reliability). But the "full"
        | qualities mentioned initially would require many, many such
        | extras, each comparable to one-shot, and would need them to
        | happen on the fly. (And one-shot seems fairly easy: take a system
        | that recognizes images by label ("red", "vehicle", etc.). Show it
        | thing X; it uses the categories thing X activates to decide
        | whether other things are similar to thing X, as sketched below.
        | Simple, but there's still lots of tuning to do here.)
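        | 
        | A rough sketch of that one-shot scheme, assuming the recognizer
        | exposes a per-label score (the helper names, cosine similarity,
        | and the threshold are my choices, not anything from the OP):
        | 
        |     import numpy as np
        | 
        |     def label_activations(model, image, labels):
        |         # Hypothetical interface: model(image) returns a
        |         # score per label ("red", "vehicle", ...).
        |         scores = model(image)
        |         return np.array([scores[l] for l in labels])
        | 
        |     def similar_to_x(model, x, candidate, labels,
        |                      threshold=0.8):
        |         # One-shot: compare which labels x activates with
        |         # which labels the candidate activates.
        |         a = label_activations(model, x, labels)
        |         b = label_activations(model, candidate, labels)
        |         denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-9
        |         return (a @ b) / denom >= threshold  # the tuning knob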
       | 
       | Just to emphasize, I think they'll need something extra in the
       | basic approach.
        
         | Der_Einzige wrote:
          | Go check out the whole Captum project for PyTorch. I assure
          | you that gradient-based explanations can simply be added to
          | existing deep learning systems...
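          | 
          | For example, roughly (a from-memory sketch of Captum usage;
          | the toy model and shapes are made up):
          | 
          |     import torch
          |     import torch.nn as nn
          |     from captum.attr import IntegratedGradients
          | 
          |     # Made-up toy classifier, just something to attribute.
          |     model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(),
          |                           nn.Linear(8, 3))
          |     model.eval()
          | 
          |     inputs = torch.randn(1, 4)
          | 
          |     # Per-feature attributions for the predicted class,
          |     # layered on top of the unmodified model.
          |     ig = IntegratedGradients(model)
          |     target = model(inputs).argmax(dim=1)
          |     attributions = ig.attribute(inputs, target=target,
          |                                 n_steps=50)
          |     print(attributions)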
        
           | joe_the_user wrote:
            | All sorts of explanation schemes can be, and have been,
            | added to existing processes. They just tend to fail to be
            | what an ordinary human would take as an explanation.
            | 
            | Note - I never argued that "extras" (including formal
            | "explanations") can't be added to deep learning systems. My
            | point is that you absolutely can add such steps, at
            | generally high cost. The argument is that those sequences of
            | small steps won't get you to the ideal of broad flexibility
            | that the OP landing page outlines.
        
       | chetan_v wrote:
        | Looking at the team, it seems to be all ex-OpenAI employees, and
        | one of the cofounders worked on building GPT-3. It will be
        | exciting to see what they are working on and whether it will be
        | similar to OpenAI's work but more commercialized.
        
       | andreyk wrote:
        | Excited for this! While OpenAI has generated plenty of overhyped
        | results (imo as an AI researcher), their focus on large-scale
        | empirical research is pretty different from most of the field
        | and has yielded some great discoveries. And with this being
        | started by many of the safety and policy people from OpenAI, I
        | am pretty optimistic about it.
        
       | strin wrote:
       | https://techcrunch.com/2021/05/28/anthropic-is-the-new-ai-re...
        
       ___________________________________________________________________
       (page generated 2021-05-28 23:00 UTC)