[HN Gopher] An Open Letter to the Communications of the ACM
       ___________________________________________________________________
        
       An Open Letter to the Communications of the ACM
        
       Author : alokrai
       Score  : 45 points
       Date   : 2020-12-29 21:33 UTC (1 hours ago)
        
 (HTM) web link (docs.google.com)
 (TXT) w3m dump (docs.google.com)
        
       | 323454 wrote:
       | Is there a specific incident that this letter is protesting?
        
         | smitty1e wrote:
         | I understood the point directly.
         | 
          | The culture at large is going to do what it does. There is no
          | way to support liberty and deny the right to be a ninny.
         | 
          | But technology should be about the code. It is better to _be_
          | diverse, because being a cool, gentle, adult human being is
          | simply the Golden Rule applied, than to _flog_ diversity.
         | 
         | We see endless codes of conduct, statistical analyses, and
          | innocents getting thrashed for some purely Kafkaesque
         | infraction.
         | 
         | I would say: "to hell with that" except that in many ways, hell
         | has arrived.
        
         | DoofusOfDeath wrote:
         | I'm confused as well. I think a letter like this would be
         | improved by adding some supporting references or examples.
        
         | navait wrote:
          | Yeah - I'm confused about whether this is a general sentiment
          | or a response to a recent incident. I did like "Quantum
          | Computing Since Democritus".
        
         | AlanYx wrote:
          | It seems to be at least partially in response to the blacklist
          | of researchers/students that Anima Anandkumar prepared and
          | circulated and then eventually retracted about three weeks ago.
        
           | threwaway4392 wrote:
           | Related discussion
           | https://news.ycombinator.com/item?id=25419844
        
           | incompatible wrote:
            | And presumably because she is one of many on the ACM
            | Communications Editorial Board:
            | https://cacm.acm.org/about-communications/editorial-board/
        
           | [deleted]
        
         | Jtsummers wrote:
         | Yeah, I'm trying to find some context but it's difficult. Many
         | keywords paired with "Communications of the ACM" just result in
          | articles _from_ CACM instead of _about_ CACM. I'm checking out
         | the blogs of some of the signatories but haven't seen anything
         | yet.
        
       | [deleted]
        
       | say_it_as_it_is wrote:
       | Many industries are having the "and then they came for me"[1]
       | moment, but it is particularly pronounced in academia.
       | Intellectual giants such as Steven Pinker are attacked regularly
       | by fellow academics on the basis of not conforming. People such
       | as the group here must take a stand to stop this behavior.
       | 
       | [1] https://en.wikipedia.org/wiki/First_they_came_..
        
       | threwaway4392 wrote:
       | Important context: https://news.ycombinator.com/item?id=25419844
       | 
       | The author of the blacklist retracted the list and apologized.
        
         | itronitron wrote:
         | Might be the right time for any associated professional
         | associations to review their code of conduct.
        
         | google234123 wrote:
          | Is she still trying to cancel Steven Pinker? I found it ironic
          | that she associated a Jewish scientist with neo-Nazism.
        
         | sjcoles wrote:
         | Funny, I get the feeling someone who looked different would be
         | immediately dismissed from their position.
        
       | gfodor wrote:
       | This stuff is just a cold religious war, between the believers
       | and non-believers. The sooner we recognize it as such, and
       | properly cap the blast radius as we have with prior religious
       | differences, the better. Until then this category error will
       | plague us all.
       | 
       | https://newdiscourses.com/2020/09/first-amendment-case-freed...
        
       | rosstex wrote:
       | The only reference to this I can find:
       | https://twitter.com/pmddomingos/status/1344028996850180097
       | 
        | Notably, Pedro Domingos recently fought against NeurIPS'
        | ethics/social-impact requirement for research papers. The
        | argument here is likely related: that science should be
        | presented in a vacuum and not be diminished based on possible
        | societal harm or political biases.
        
         | joshuamorton wrote:
          | Which is truly an exceptionally dumb position to take. Many
          | other sciences require IRB and human-factors approvals for
          | experiments that involve or may involve humans. Computer
          | science, and specifically ML (a lot of HCI research _does_
          | involve human-subjects approval), ignores all of the potential
          | for harm here.
         | 
         | We've seen that go terribly wrong in other fields, and in the
         | ML field.
        
           | croissants wrote:
           | I agree with the general conclusion that machine learning
           | (just like any way of making decisions) involves ethics, but
           | I disagree with the conclusion that every NeurIPS submission
           | should have a concluding 1-2 paragraphs about the ethics of
           | the submission. I reviewed for NeurIPS this year, and almost
           | all of the ethics sections consisted of vacuous stuff along
           | the lines of "uh, I guess...being efficient is good, might
           | save some energy? and our method is pretty efficient, so
           | that's nice".
           | 
           | I think a system where area chairs skim abstracts and solicit
           | these ethics paragraphs where they seem necessary -- so, not
           | for the billionth paper obtaining pretty bounds for some
           | variant of convex optimization -- makes more sense.
           | 
           | I know that "think of the machine learning researchers!" is
           | not a take that engenders sympathy, but we're probably
           | talking thousands of human hours spent writing these things.
           | It's not nothing.
        
             | joshuamorton wrote:
              | Fair enough (I'm not a NeurIPS reviewer or author). I can
              | accept the idea that convex optimization papers likely
              | don't need to pre-write a societal impact statement,
              | although I think that, as controversial as the paper may
              | be, work like the Gebru et al. paper on the ethical
              | concerns of large language models shows that there
              | probably is room for more interesting thought than just
              | "ehhh, efficiency" in many places.
             | 
             | But the subtext of this letter goes fairly deep (and to be
             | clear this isn't meant as a response to you specifically):
             | 
             | - While it doesn't specifically say it, this letter does
             | appear to be at least somewhat anti-ethical concerns in
             | general, which isn't good.
             | 
              | - At least one of the signatories of this letter has not,
              | in my opinion at least, followed the guidelines in the
              | letter. Insinuating that a colleague holds an opinion
              | because they watch too much online porn is in no way
              | civil, and is absolutely a personal attack.
             | 
             | - Given the "disagreement" between Domingos and Anandkumar,
             | who as others mention, sits on the ACM editorial board,
             | this could be an attempt to censure or "cancel" her for her
             | personal views. This is antithetical to the values held in
             | the letter itself, and leaves a sour taste.
        
           | defen wrote:
           | > Many other sciences have IRB and human factors approvals
           | for experiments that involve or may involve humans.
           | 
           | As far as I know, those are for experiments that directly
           | interact with humans as part of the experiment itself. They
           | exist because of a long history of unethical researchers
           | testing interventions (or lack thereof) on people without
           | their knowledge or informed consent - e.g. Tuskegee Syphilis
           | Experiment, MK-ULTRA, Stanford prison experiment (there are a
           | lot). The closest equivalent in CS of directly-interacting
           | experiments would be HCI research as you said. I could also
           | see a strong argument being made for research that uses the
           | creative or copyrighted output of a person without their
           | consent - for example, their face as part of training facial
           | recognition software, their written words as part of training
           | GPT-3, their voice as part of training voice recognition
           | software, etc.
           | 
            | However, the incident in question is really about a
            | different kind of thing - it asks researchers to speculate
            | on the future ramifications of their research as it
            | pertains to progressive ideals; in practice this means,
            | "How could this negatively affect minorities or the
            | environment?" These aren't inherently bad things to think
            | about, but as you get further and further away from
            | concrete applications of ML, it begins to look less like
            | something that is actually trying to address the stated
            | problems and more like a religious ritual.
        
         | Jtsummers wrote:
         | [deleted]
         | 
         | Parent comment has been edited to address my comment.
        
           | rosstex wrote:
           | Oops! Retracted, thanks for checking this.
        
             | Jtsummers wrote:
             | I can no longer delete my comment, but since you edited
             | your original I'll edit my prior comment to [deleted].
        
               | rosstex wrote:
               | Much appreciated. Only on HN.
        
         | throwawaysea wrote:
         | Pedro Domingos, for those who don't know, wrote "The Master
         | Algorithm" and is a professor at the University of Washington.
          | Recently, Anima Anandkumar (Director of AI at Nvidia) tried to
         | get him cancelled/blacklisted. I wrote more about the incident
         | in this comment from a past HN discussion:
         | https://news.ycombinator.com/item?id=25419871
        
       | mellosouls wrote:
        | Hmmm. I'm not sure how successful this can be, since the 3rd
        | principle, about not discriminating on identity etc. (though
        | righteous), has often been the one used to suppress research in
        | the past.
        | 
        | Without some Asimovian-style "except where that counters the
        | 1st principle" appended to the "lower" principles, it will
        | continue to be abused by those who would constrain science that
        | offends them.
        
         | incompatible wrote:
         | Asimov himself has been accused of sexual harassment, although
         | I haven't yet seen a sign that his three laws are now guilty by
         | association.
        
       | forrestthewoods wrote:
       | Google Docs on iOS Safari is unreadable garbage. Here's what this
       | post looks like on my phone. I wish I could hide docs.google.com
       | links from HN.
       | 
       | https://i.imgur.com/7cuifN3.png
        
       ___________________________________________________________________
       (page generated 2020-12-29 23:00 UTC)