[HN Gopher] 230, or not 230? That is the EARN IT question
       ___________________________________________________________________
        
       230, or not 230? That is the EARN IT question
        
       Author : jbegley
       Score  : 125 points
       Date   : 2020-04-08 17:37 UTC (5 hours ago)
        
 (HTM) web link (signal.org)
 (TXT) w3m dump (signal.org)
        
       | ocdtrekkie wrote:
       | EARN IT is pretty disingenuous in how it is designed, of course,
       | but I am all for making it harder and harder to retain Section
       | 230 immunity: It's a mistake that we allow it in the first place.
       | 
       | We should indeed continue to erode the eligibility for Section
       | 230 to the point that either the limitations of remaining
       | eligible for immunity makes it easy for competitors to produce
       | better offerings without immunity, or that these companies accept
       | legal responsibility for their actions as a cost to doing
       | business the way they want to. Perhaps this is a vehicle upon
       | which we gradually sunset immunity-reliant platforms.
       | 
       | Section 230's supporters constantly push hilariously insane
        | narratives about its importance, suggesting that without it
       | companies would be inherently violating the law any time one of
       | their users violated the law, or that taking reasonable measures
       | to prevent platform abuse is "impossible" at the scale Big Tech
       | operates at.
       | 
       | It's more than past time that we regulate tech companies and hold
       | them responsible for massive abuses permitted by their platforms
       | just as we regulate every other sector of business.
        
         | IAmEveryone wrote:
         | > suggesting that without it companies would be inherently
         | violating the law any time one of their users violated the law,
         | 
         | Well, that's almost literally what immunity means in this case.
         | I've read that post of yours you linked downthread, and you're
         | basically just saying "courts will be wise enough and make
         | reasonable decisions".
         | 
         | I'm somewhat sympathetic to some expansion of liability.
         | Revenge porn, for example, shouldn't exist. That's real harm
         | being done every day to real people. And the tube sites are not
         | just unwilling to spend money on moderating uploads. They
         | obviously know that a large percentage of uploads are made
         | without full consent, and that content represents a significant
         | chunk of their revenue.
         | 
          | BUT Sec 230 is specifically aimed at indemnifying websites
          | that do try to moderate content. Before Sec 230 there was a
          | brief period when the theory everyone on the internet believes
          | in, even though it is completely stupid, was actually true:
          | namely, that the act of moderating _some_ content somehow
          | creates an obligation to moderate _all_ content.
        
         | marcinzm wrote:
         | Regulation of this sort generally just helps the incumbent
         | players create a better moat around themselves. They can pay
         | for the AI and humans to moderate things while newcomers can't.
          | So it's a question of trading off user benefit against giving
          | even more power to Big Tech.
        
           | ocdtrekkie wrote:
           | This is the standard scream of incumbent players when they
            | want to discourage regulation. It ignores both the fact that
            | what's "reasonable" for an incumbent monopoly and for a small
            | startup are different things, and the fact that the law
            | generally accounts for scale.
        
             | saferalt wrote:
             | Does it "generally account for scale"? Citation needed. The
             | GDPR has a fine structure of up to 4% of world-wide
             | turnover or EUR20 million. Whichever is HIGHER. That means
             | for any company doing less than say, EUR20 million in
             | revenue and found to be non-compliant, GDPR gives the legal
             | authority to fine them out of existence. I only mention
             | GDPR as a specific example because of the familiarity here,
             | but the general pattern of non-scaling regulation that
             | results in regulatory capture and monopolization is the
             | norm, not the exception.
        
               | IAmEveryone wrote:
               | You're conveniently ignoring that the percentage-based
               | fine structure in itself is almost literally "accounting
               | for scale".
               | 
               | The minimum (of the maximum) set by the "whichever is
               | higher" clause is needed to remain effective with non-
               | and low-revenue entities. Something like Clearview
               | (universal face recognition but startup with little
               | revenue) would otherwise be free to ignore the law.
               | 
               | If your small company does enough damage to warrant a 20
               | million fine, it probably deserves to die. These fines
               | also aren't assessed arbitrarily: there's a specific list
               | of factors to take into account, and all decisions are
               | subject to judicial review under the established
               | principles of proportionality.
        
               | novok wrote:
               | > These fines also aren't assessed arbitrarily: there's a
               | specific list of factors to take into account, and all
               | decisions are subject to judicial review under the
               | established principles of proportionality.
               | 
                | You hope bureaucrats will not act mechanistically and
                | will apply proportionality, but over and over again in
                | recent history you see exactly the opposite behavior.
                | Which is why no business trusts a statement of 'they
                | might be nice, but nothing is effectively stopping them
                | from not being nice other than some platitudes'!
        
               | TheSpiceIsLife wrote:
               | > You hope bureaucrats do not act ... nothing is
               | effectively stopping them from not being nice other than
               | some platitudes
               | 
               | One argument goes something like this:
               | 
               | The _state_ always reserves the right to _extinguish_
               | you, either by execution or permanent non-judicial
               | incarceration, regardless of laws.
               | 
                | We can all only hope state-level actors don't abuse
                | their powers.
               | 
               | Whether they are or aren't at any particular time is
               | somewhat subjective.
        
               | TheSpiceIsLife wrote:
               | Context matters:
               | 
               |  _The fines must be effective, proportionate and
               | dissuasive for each individual case. For the decision of
               | whether and what level of penalty can be assessed, the
               | authorities have a statutory catalogue of criteria which
               | it must consider for their decision. Among other things,
               | intentional infringement, a failure to take measures to
               | mitigate the damage which occurred, or lack of
               | collaboration with authorities can increase the
               | penalties. For especially severe violations, listed in
               | Art. 83(5) GDPR, the fine framework can be up to 20
               | million euros, or in the case of an undertaking, up to 4
               | % of their total global turnover of the preceding fiscal
               | year, whichever is higher. But even the catalogue of less
               | severe violations in Art. 83(4) GDPR sets forth fines of
               | up to 10 million euros, or, in the case of an
               | undertaking, up to 2% of its entire global turnover of
               | the preceding fiscal year, whichever is higher._
               | 
               | https://gdpr-info.eu/issues/fines-penalties/
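
                A worked example of that "whichever is higher" cap; a
                minimal, illustrative sketch only, using the thresholds
                quoted above (real fines are set case by case somewhere
                below these ceilings):

                    def gdpr_fine_ceiling(turnover_eur, severe=True):
                        # Art. 83(5): 20M EUR or 4% of worldwide annual
                        # turnover; Art. 83(4): 10M EUR or 2%.
                        # The higher figure is the ceiling.
                        if severe:
                            return max(20_000_000, 0.04 * turnover_eur)
                        return max(10_000_000, 0.02 * turnover_eur)

                    # A 5M EUR/year business still faces a 20M EUR
                    # ceiling; a 10B EUR/year one faces 400M EUR.
                    print(gdpr_fine_ceiling(5_000_000))
                    print(gdpr_fine_ceiling(10_000_000_000))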
        
             | dfee wrote:
             | this is an interesting counter-counter-argument i've not
             | seen before. does discussion of this sort of derivative
             | behavior exist elsewhere? i.e. is there an established
             | narrative of incumbents pushing against regulatory capture,
             | or examples of this behavior?
        
               | ocdtrekkie wrote:
               | It's just a general behavioral trend I (and plenty of
               | others) have noticed in arguments against regulation
               | coming from monopolies. When a big tech company claims a
                | regulation it dislikes would keep newer players from
                | competing with it, you have to ask... why are they so
                | opposed then?
               | 
               | Is it out of the goodness of their hearts that large
               | companies complain about regulation hurting small
               | businesses? Or is it because the regulation will cost
               | them a ton of money they'd rather keep in the bank, and
               | they know they already have enough market capture to
               | continue to obliterate small businesses either way?
               | 
               | When someone says that regulations on large companies
               | will actually hurt small businesses, the first thing you
               | should do, is look who is claiming that, and see where
               | they get their funding from. It's almost always a think
               | tank funded by the biggest player in the market being
               | discussed.
        
               | marcinzm wrote:
                | There is a difference between short-term and long-term
                | impact. Short term, regulation costs money, and a large
                | player doesn't want that since it'd hurt their stock
                | price. Long term, they probably benefit from it, but
                | Wall Street doesn't care as much about that.
        
               | btreecat wrote:
               | Why not look at who wrote the regulations?
        
               | cookie_monsta wrote:
               | Because lobbyists tend not to appear in the credits
        
             | novok wrote:
              | The GDPR does not explicitly account for scale in practice,
              | but pretends you're a $300 million business with the
              | resources to do proper GDPR compliance, which is why the
              | response of many small businesses is to stop serving
              | Europeans. And these businesses had nothing to do with
              | privacy invasion: classes you pay for, paid note-taking
              | apps, and so on.
        
             | marcinzm wrote:
             | >the law generally accounts for scale.
             | 
              | Not in a useful way, I've noticed, because a small company
              | can service hundreds of thousands of users easily thanks to
              | the power of the internet. CCPA, for example, essentially
              | sets the cutoff at 50,000 users, which you can reach pretty
              | quickly with a consumer startup. The cutoff helps the local
              | pizzeria, I guess, but not any actual competitor to
              | incumbents.
        
         | seibelj wrote:
         | Uhh... disagree? Even a 5 person startup should be responsible
         | for every single thing their users post? Or do you want some
         | arbitrary line of employees above which it's illegal and below
         | which it's fine?
        
           | ocdtrekkie wrote:
            | I'll refer you to my response here, on why the claim that
            | that's the alternative isn't really accurate:
           | https://news.ycombinator.com/item?id=22815922
           | 
            | But no, there shouldn't be an arbitrary line. Judges can make
            | fair determinations about when a company is or is not doing a
            | reasonable job of controlling abuse on its platform, and
            | about the profit motivations behind those decisions.
        
         | ori_b wrote:
         | Do you think that any blog with a comments section should be
         | legally responsible for spammers posting on it?
        
           | kevingadd wrote:
           | The fact that your proposed scenario is scary / unpleasant /
           | difficult does not _automatically_ make the alternatives
           | better, as we have learned over the last two decades. We need
           | to seriously consider that perhaps these things are not
           | actually simple unless you ignore the consequences.
        
           | tigerstripe wrote:
           | It would certainly change the face of the modern web. In the
           | early 2000's, many blogs and news sites did not have comment
           | sections.
           | 
           | Maybe a service like a third-party Disqus would come out that
           | would split the user-generated content from the actual sites
           | (and source the content via P2P networking).
        
           | ocdtrekkie wrote:
           | No, and they wouldn't be by any informed understanding of the
           | law. That's not how the law has ever worked in any developed
           | society.
           | 
            | Generally, the law has both the concept of intent and the
            | concept of reasonableness. As such, when a company
            | inadequately polices malicious and abusive content because
            | that content is wildly profitable (hi Google and Facebook),
            | we should have the legal ability to fine it into oblivion,
            | because its behavior is not reasonable and the intent behind
            | it can be divined from its records.
            | 
            | Meanwhile, if you, an individual with a blog, see someone
            | making a bad comment on your blog and you ban that person,
            | the law would recognize that as a pretty reasonable
            | moderation practice.
        
             | root_axis wrote:
             | How would the law distinguish between reasonable moderation
             | and unreasonable removal?
        
               | ocdtrekkie wrote:
               | "Unreasonable removal" isn't actually much of a concern
               | here under our current legal doctrine: As these companies
               | are private entities, they can decide that they simply
               | don't want this or that on their platform, and that can
               | be as unreasonable as they like.
               | 
                | Presumably, platforms which profit off user content
                | already have a financial incentive to allow user content
                | as much as they can; Section 230 only removes the
                | financial incentive to remove bad content. Removing
                | Section 230 will restore balance: Companies will still be
                | motivated to keep as much non-abusive content as they
                | can, but will face legal challenges if they fail to
                | remove abusive content.
               | 
                | (There's an argument to be made that Facebook and Google
                | represent "public spaces" in the modern Internet era, but
                | we currently have no legal precedent for applying First
                | Amendment rights to privately owned properties. Either
                | we'd need a huge legal shift to apply the First Amendment
                | to private spaces or we'd need to nationalize online
                | platforms.)
        
               | IAmEveryone wrote:
                | > Section 230 only removes the financial incentive to
                | remove bad content.
               | 
               | Please go read the actual law. It's neither long nor
               | complicated.
               | 
               | Section 230 corrected a problem in other law that made it
               | dangerous to even attempt to moderate content. Before it
               | became law, websites basically had to choose between not
               | moderating at all, or assuming liability for all content.
               | 
               | Hell, I'll just quote the relevant part in full:
               | 
               |  _(2) Civil liability_
               | 
               |  _No provider or user of an interactive computer service
               | shall be held liable on account of--_
               | 
               |  _(A) any action voluntarily taken in good faith to
               | restrict access to or availability of material that the
               | provider or user considers to be obscene, lewd,
               | lascivious, filthy, excessively violent, harassing, or
               | otherwise objectionable, whether or not such material is
               | constitutionally protected;_
        
             | danShumway wrote:
             | > No, and they wouldn't be by any informed understanding of
             | the law.
             | 
             | You are misinformed about the history of 230. 230 was
             | proposed exactly because the law was interpreted the way
             | you're saying it wouldn't be.
             | 
             | From Wikipedia below, added emphasis mine:
             | 
             | > This concern was raised by legal challenges against
             | CompuServe and Prodigy, early service providers at this
             | time. CompuServe stated they would not attempt to regulate
             | what users posted on their services, while Prodigy had
             | employed a team of moderators to validate content. Both
             | faced legal challenges related to content posted by their
              | users. In Cubby, Inc. v. CompuServe Inc., _CompuServe was
              | found not to be at fault_ as, by its stance of allowing all
             | content to go unmoderated, _it was a distributor and thus
             | not liable for libelous content_ posted by users. However,
             | Stratton Oakmont, Inc. v. Prodigy Services Co. found that
             | _as Prodigy had taken an editorial role with regard to
             | customer content, it was a publisher and legally
             | responsible for libel committed by customers._
             | 
             | > [...]
             | 
             | > United States Representative Christopher Cox (R-CA) had
             | read an article about the two cases and felt the decisions
             | were backwards. _" It struck me that if that rule was going
             | to take hold then the internet would become the Wild West
             | and nobody would have any incentive to keep the internet
             | civil"_, Cox stated.
             | 
             | ---
             | 
             | It's become increasingly popular for people to say that
             | Section 230 was a mistake. Usually they support that with
             | claims that concerns about its repeal are purely
             | theoretical fearmongering, despite the fact that we
              | literally have case precedent on the books right now about
             | what the Internet would look like without Section 230, and
             | how the existing laws were being interpreted.
             | 
             | When people raise concerns that without Section 230 the
             | Internet would be divided up into completely unmoderated
             | platforms and aggressively curated gatekeepers, that's not
             | fearmongering. It's history.
             | 
             | Ironically, the only websites that wouldn't be affected by
             | a repeal of Section 230 are the completely unmoderated
             | hellholes we want to discourage online, because they have
              | CompuServe's precedent and the 1st Amendment to hide
             | behind.
        
               | ocdtrekkie wrote:
               | But in a world where we feel it was backwards that
               | moderators were punished and unmoderated platforms
               | weren't... Congress decided "let's just make everyone
               | immune" was the right way to go?
               | 
               | And again, I think the examples here are missing the same
               | concept that Section 230 fails to recognize: Profit, as I
               | discussed here:
               | https://news.ycombinator.com/item?id=22816016 It seems
               | like the author of Section 230 failed to recognize we're
               | in a capitalist society when this regulation was drafted.
               | 
               | When platforms are taking a cut out of illegal activity,
               | as Big Tech platforms do when they operate ad networks,
               | courts would have to agree that any platform party,
               | regardless of whether or not they currently moderate,
               | should be held to some manner of responsibility.
               | 
               | Right now, when an old lady clicks a Google search result
               | for "mapquest", clicks the top link for "Maps Quest"[0]
               | because Google ads aren't distinguishable from real
               | search results to the untrained eye, is pushed to install
               | a browser extension (from the Chrome Web Store) that
               | hijacks her browser's new tab and search, injects
               | malicious ads, and scrapes her private info to relay to
               | an attacker, Google makes money. And is wholly protected
               | by Section 230 for that activity and unable to be held
               | responsible for refusing to delist the malicious ad.
               | 
               | In what world is that the right legal position?
               | 
               | [0] (This is a very real world example, I've done a lot
               | of senior citizen tech support, and this is how 90% of
               | them get owned.)
        
               | IAmEveryone wrote:
               | That example has absolutely nothing to do with Sec 230.
               | Google's ad design is all on Google. If it were illegal,
               | Sec 230 wouldn't protect them. And while Google might be
               | protected against liability for Mapquest's business
               | practices, Mapquest isn't. If their behavior is harmful
               | and illegal, they are liable.
        
               | ocdtrekkie wrote:
               | MapQuest did nothing wrong in this example. The problem
               | is the fake sites that are taking the top spot in search
               | results above the legitimate MapQuest link when you
               | search Google for MapQuest, and Google refuses to delist
               | them. And of course, Google lets people buy ads for other
               | companies' trademarks, which is a whole different ball of
               | issues.
               | 
               | (MapQuest is a popular one for malicious sites to pretend
               | to be because most of the people searching for it are
               | seniors... they heard about it twenty years ago and then
               | never moved on from searching for it when they want
               | directions somewhere.)
        
               | spaced-out wrote:
               | What does this have to do with Section 230?
        
               | ocdtrekkie wrote:
                | The part where Google makes a huge amount of money on
                | scams and malware and, due to Section 230, can't really
                | be held responsible for it.
        
               | danShumway wrote:
               | To follow up, we've also tried going in the opposite
               | direction from 230 more recently with SESTA/FOSTA.
               | 
               | From that Wikipedia page, some of the current effects
               | (again, emphasis mine):
               | 
               | > Craigslist ceased offering its "Personals" section
               | within all US domains in response to the bill's passing,
               | stating _" Any tool or service can be misused. We can't
               | take such risk without jeopardizing all our other
               | services."_ Furry personals website Pounced.org
               | voluntarily shut down, citing increased liability under
               | the bill, _and the difficulty of monitoring all the
               | listings on the site for a small organization._
               | 
               | > _The effectiveness of the bill has come into question
               | as it has purportedly endangered sex workers and has been
               | ineffective in catching and stopping sex traffickers._
                | The sex worker community has claimed the law doesn't
               | directly address issues that contribute to sex
               | trafficking, but instead has drastically limited the
               | tools available for law enforcement to seek surviving
               | victims of sex trade. Similar consequences of the law's
               | enactment have been reported internationally.
               | 
               | > A number of policy changes enacted by the popular
               | social networks Facebook and Tumblr (the latter having
               | been well known for having liberal policies regarding
               | adult content) to restrict the posting of sexual content
               | on their respective platforms have also been cited as
               | examples of proactive censorship in the wake of the law,
               | _and a wider pattern of increased targeted censorship
               | towards LGBT communities._
               | 
               | ----
               | 
               | Now, this kind of effect doesn't get as much mainstream
               | attention because people are primed not to think of sex
               | censorship as "real" censorship. But again, we have
                | examples on the books of what happens to legitimate
               | services (both large and small) when laws like this get
               | passed. It's not fearmongering, it's history.
               | 
               | People have these assumptions that laws are going to be
               | reasonably applied -- that's not a safe assumption to
               | make if you pay attention to the history of these laws.
               | 
               | I'm largely unsympathetic to those arguments for the same
               | reason that I'm unsympathetic to all of the lawmakers
               | saying, "well this time we regulate encryption it will be
               | different." We have a number of examples of how this can
               | go wrong (and has gone wrong). If somebody wants to
               | propose that it'll be different the next time we weaken
               | 230 or add exceptions, then I think the onus is on them
               | to provide some kind of compelling evidence as to _why_
                | it's going to be different this time.
               | 
               | What makes you certain that the policies you propose
               | won't have the same effect as FOSTA/SESTA?
               | 
               | ----
               | 
               | As to why these laws primarily affect platforms that are
               | already trying to moderate and not free-for-all
               | hellholes, that's in part because of existing case law
               | around the difference between a publisher and a
               | distributor.
               | 
               | From Wikipedia's entry on Compuserve's case (once again,
               | emphasis mine):
               | 
               | > The court held that "CompuServe has no more editorial
               | control over such a publication [as Rumorville] than does
               | a public library, book store, or newsstand, _and it would
               | be no more feasible for CompuServe to examine every
               | publication it carries for potentially defamatory
               | statements than it would be for any other distributor to
               | do so._ "
               | 
               | Bills like SESTA/FOSTA have managed to pass without a lot
               | of opposition because, again, people are primed to think
               | that sex censorship isn't real censorship. But where more
               | mainstream content is concerned, you should understand
               | that proposing punishments for distributors is a pretty
               | big change to existing libel/speech laws. Big enough that
               | I don't even feel comfortable speculating on what the
               | legal challenges or possible effects would be. That's a
               | radical departure from how we currently think about
               | speech in the US, not just on the Internet but in
               | physical/print spaces as well.
        
               | drenvuk wrote:
                | I don't like this malware example. Yes, Section 230
                | protects Google from that, and yes, Google is in a
                | position of trust for the content they serve up, but
                | there's something wrong with your stance.
               | 
               | The point in your old lady's chain of actions where a law
               | was and should be considered broken was when the malware
               | ads were injected, not before. You can't go that far up
               | the chain, there are too many proxies, too many people
               | with intents that are not obviously malicious. People
               | should be given the benefit of the doubt in most cases.
               | 
               | In addition, in your profit explanation that you linked
               | to you stated that if the service can't scale up human
               | interactions to match with complaints then that service
                | shouldn't exist. That's laughable. To do so would make
                | service owners so vulnerable to automated complaints that
                | legitimate ones would never make it through; that goes
                | for businesses up and down the scale. What your proposal
                | ends up doing is creating a non-anonymous internet by
                | necessity.
        
         | akersten wrote:
         | > Section 230's supporters constantly push hilariously insane
          | narratives about its importance, suggesting that without it
         | companies would be inherently violating the law any time one of
         | their users violated the law,
         | 
         | Can you explain in your own words what you think Section 230
         | actually does? Because yes, without it, that was very much the
         | case (see Stratton Oakmont, Inc. v. Prodigy Services Co.)
         | unless the company decides not to moderate _at all_ , which is
         | not an Internet that most of us want.
        
         | ptudan wrote:
         | I can understand companies not being protected from profiting
         | off of ads that come before viral lies (facebook, youtube),
         | especially when the companies have a hand in spreading them
         | with algorithms promoting addiction.
         | 
         | But if they're not promoting the content, and aren't profiting
         | from it in a different way than other content, we can't hold
         | them responsible. These providers create platforms. Would you
         | hold CVS responsible for selling me the tape/sharpie/poster
         | board to make a racist sign?
         | 
         | If I create a twitter clone, post it online, and it somehow
         | blows up overnight with child porn and terrorism, why do I
         | deserve to be punished?
        
           | ocdtrekkie wrote:
           | "Would you hold CVS responsible for selling me the
           | tape/sharpie/poster board to make a racist sign?"
           | 
           | No, but I'd hold CVS responsible for displaying the sign in
           | their stores.
           | 
           | "But if they're not promoting the content, and aren't
           | profiting from it in a different way than other content, we
           | can't hold them responsible."
           | 
           | That's the biggest issue Section 230 fails to account for:
           | These companies are _profiting off it_. When Google or
           | Facebook take down content, they still keep the profits they
           | got from advertising it. Some of the most long-running ads on
           | high traffic search terms on Google distribute malware, and
           | they refuse to delist them due to the amount of money they
           | make. Facebook refuses to restrict blatant lies on political
           | ads because those political ads make it a huge amount of
           | money.
           | 
            | Section 230 is a failure because Section 230 removes any
            | financial incentive for platforms to moderate responsibly.
           | If we were to replace Section 230, rather than removing it
           | entirely, we would need a solution that makes it inherently
           | expensive to host bad content, such that platforms are
           | strongly incentivized to hire qualified staff to moderate and
           | manage content.
           | 
           | If I report harmful content on Twitter or Facebook or Google,
           | we need a system that ensures I receive a non-automated,
           | competent response, and that the company is legally
           | responsible for the decision they just made, such that they
           | can't pawn it off on an algorithm or someone making 5 cents
           | an hour.
        
             | akersten wrote:
              | > Section 230 is a failure because Section 230 removes any
              | financial incentive for platforms to moderate responsibly.
              | 
              | What in the world does "moderate responsibly" mean? It's
             | their site, they get to decide what goes on it as long as
             | it's legal. If it's not legal, it has to be removed anyway!
             | 
             | > If I report harmful content on Twitter or Facebook or
             | Google, we need a system that ensures I receive a non-
             | automated, competent response, and that the company is
             | legally responsible for the decision they just made, such
             | that they can't pawn it off on an algorithm or someone
             | making 5 cents an hour.
             | 
             | Yeah okay, fight for that then. This legislation isn't
             | that.
        
               | ftvy wrote:
               | >If it's not legal, it has to be removed anyway!
               | 
                | Isn't that the original intent of Section 230? Because
                | these websites couldn't possibly moderate all possible
                | user submissions for illegal content, when illegal
                | content is discovered the liability is held by the
                | user and not by the website hosting it?
        
               | akersten wrote:
               | Yes, that's the point of 230. It doesn't make anything
               | legal that wasn't before, or illegal that was legal
               | before. It simply assigns the responsibility of illegal
               | content to the party that created it. Which is just a
               | reasonable application of common sense.
               | 
               | I simply do not understand the motives behind people who
               | want to abolish 230 - they would turn the internet into a
               | stark split between heavily moderated websites, looking
               | out only for their own liability because should they lay
               | a finger on anything, they are culpable for everything -
               | and unmoderated hellholes. Maybe they enjoy the hellholes
               | and want more sites like that? Misery loves company.
               | 
               | I suspect most of the posters arguing against 230 are:
               | 
               | * Uninformed about what the law actually does
               | 
               | * Purposefully antagonistic and contrarian, or part of a
               | coordinated troll campaign to sow discord
               | 
               | * Folks who have a bone to pick with big tech and will
               | support any law, no matter how ridiculous, thinking it
               | would cause big companies grief
               | 
               | * Spiteful that their post got moderated off a popular
               | platform, and want websites to be forced to broadcast
               | their content (despite this being a clear 1A violation of
               | the company's rights)
               | 
               | * Really, truly, think that sites on the Internet should
               | be either a wasteland or approval-only-posting, and you
               | have to pick one
               | 
                | In any case, this kind of discussion around 230 is kind
                | of burying the lede of the EARN IT act, which is a
                | desperate attempt not only to further erode 230
                | protections after the monstrosities of FOSTA/SESTA, but
                | to allow the government to take away these common-sense
                | protections from a site unless it capitulates to
                | government spying.
               | 
               | Which really should be the focus here, but somehow we're
               | all distracted in the comments dismantling the faulty
               | "platform or publisher, pick one!" argument again.
        
             | gknoy wrote:
             | > If I report harmful content on Twitter or Facebook or
             | Google, we need a system that ensures I receive a non-
             | automated, competent response
             | 
             | That seems extremely prone to being DOS-ed by bots or a
             | brigade of complaint-heavy users.
        
               | ocdtrekkie wrote:
               | If a platform can't scale to handle content moderation
               | requests, it shouldn't exist at scale. Presumably a
               | company shouldn't be responsible to respond to bot
               | submissions, and could potentially ban complainants who
               | abuse the system. (Although doing so would potentially
               | open them to legal recourse if they were banning someone
               | for filing legitimate reports they just didn't want to
               | deal with, for example.)
               | 
               | There are reasonable controls that can be put in place,
               | but ultimately, Big Tech companies' responsibility needs
               | to be seated in the legal system, and there needs to be a
               | way to escalate to the legal system when these companies
               | operate in a societally harmful fashion.
               | 
               | "We're just a platform, it's not our fault" should never
               | be a conclusive answer to conversations about these
               | companies' operations.
        
               | cookie_monsta wrote:
               | > If a platform can't scale to handle content moderation
               | requests, it shouldn't exist at scale.
               | 
               | Agreed. This whole "we got so big chasing crazy growth
               | that making us responsible for cleaning up our own mess
               | would make us lose money" argument is very tiresome and
               | one that I can't see holding any water outside of tech.
        
               | ocdtrekkie wrote:
                | Exactly. There's a mindset, exclusive to tech, that it's
               | okay to automate human problems and then just say there's
               | nothing they can do when automation isn't adequate. Other
               | businesses have huge percentages of their workforce
               | tackling problems that tech companies just say they're
               | not responsible for, like content moderation, customer
               | service, etc.
        
               | yuliyp wrote:
               | No human review system can scale to automated reporting;
               | the number of attackers you have is not bounded by your
               | legitimate user base.
               | 
                | The system you describe would basically mean every online
                | service that allows human interaction would always run
                | the risk of trolls being able to permanently take it
                | down by abusing content moderation requests.
        
             | ftvy wrote:
             | Begs the question: who defines "bad content"?
        
       | reggieband wrote:
       | I thought it was interesting when Twitch partners started talking
       | about a Twitch policy that seems to hold the partner responsible
       | for moderating their own chat. That is, if you are a partner and
       | you have community members posting prohibited content into your
       | Twitch chat then you stand to pay the penalty through a ban or
       | losing your partnership. You are forced to moderate your own chat
       | thereby relieving Twitch of having to do so (and presumably
       | giving them some plausible argument that they are enforcing some
       | level of site wide moderation).
       | 
       | It was interesting to see the reactions of these streamers since
       | they aren't typical business people or legal experts. There was
       | quite some debate amongst them about the fairness of the streamer
       | being held responsible for random trolls that entered their
       | chats. When I considered the viewpoint of individuals instead of
       | corporations it did expand my view of
       | responsibility/accountability.
        
       | seemslegit wrote:
        | Is Signal still demonstrating their commitment to my privacy by
        | notifying everyone who has my number in their phone contacts and
        | is also on Signal when I join?
        
         | spacephysics wrote:
         | That's a good question I'd like to know, too. Perhaps they hash
         | the phone numbers then compare to their master list for
         | matches?
        
           | tialaramex wrote:
           | Specifically they use the first ten bytes of
           | SHA1(phoneNumber) where phone number is like +12345553215 for
           | the US phone number 1-234-555-3215 or say +4424061184 for the
           | number I had as a child in a village in England.
           | 
            | This form of number is also the one your (mobile) phone
            | actually uses, although phones let you type in any sloppy
            | human attempt at a phone number and translate it.
           | 
           | Signal apps periodically reach out to Signal's servers to do
           | two things: Confirm that this specific user does still have
           | Signal (and so messages to them should be accepted) and
           | optionally upload a set of these hashes for their contacts to
           | see if any of those have Signal and so messages to those
           | numbers can go securely via Signal instead.
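
            A minimal sketch of the truncated hashing described above,
            assuming the number is already an E.164 string hashed as
            plain UTF-8 bytes (the exact wire encoding isn't given in
            the comment):

                import hashlib

                def contact_hash(phone_e164):
                    # First ten bytes of SHA1 over the E.164 number.
                    return hashlib.sha1(
                        phone_e164.encode("utf-8")).digest()[:10]

                print(contact_hash("+12345553215").hex())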
        
           | seemslegit wrote:
           | That's hardly helpful if I just don't want people who happen
           | to have my number to know I'm using signal.
        
             | doomrobo wrote:
             | How would a messaging app work without contact discovery?
             | You try a friend's number, and see if the message goes
             | through? Well if that's what you want, then you can do this
             | for all your phonebook numbers, and all the ones that go
             | through are on Signal, and all the ones that error are not.
             | Oops, you've reinvented contact discovery.
        
               | seemslegit wrote:
               | A. You can design it in such a way so that sending a
               | message to a non-user is indistinguishable from having a
               | user see it and not reply/acknowledge it.
               | 
                | B. You can exchange identifiers with the people with whom
                | you want to communicate, just like with any other non-
                | phone-based system: "Hey I'm @username on signal", "Cool,
                | I'm @username2" - composes well with method A.
        
               | doomrobo wrote:
               | Solution A does not compose well with how Signal does
               | encryption. In order to make this indistinguishable,
               | Signal would basically have to man-in-the-middle all non-
               | existent users. And if one of those users signed up for
               | Signal it would have to stop man-in-the-middling them,
               | causing all the people who were talking with their ghost
               | to observe a key change. It's complicated at best, and
               | sketchy at worst.
               | 
               | Solution B ties in to a bigger argument I won't address
        
               | seemslegit wrote:
                | There would be no key change because there would be no
                | initial key. Signal facilitates contacts anyway; the only
                | difference is that the sides have no ability to control
                | with whom it takes place. Messages to non-contacts would
                | not be sent because there would be no one in your
                | contacts to send them to, hence indistinguishable.
        
               | doomrobo wrote:
               | I don't follow. I'm sending (or trying to send) messages
               | to my contacts. If I know their phone number, I'm going
               | to try to initiate a Signal conversation with them. So I
               | ask the Signal server for a signed prekey. Your argument
               | is that Signal should not respond with "I don't know this
               | person" and should instead respond with something
               | indistinguishable from a "real" response. So they must
               | send me something that looks like a signed prekey, right?
               | Well then I would use that in order to do a key exchange
               | and now we're in the situation I described above.
        
               | seemslegit wrote:
               | Signal server should respond with "I either don't know
               | this person or they have not approved to be contacted by
               | you". You should only get a prekey if they are in fact a
               | signal user and have opted in to be discoverable by
               | everyone or only by select people including you.
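
                A rough sketch of the lookup policy proposed here, with
                hypothetical names rather than Signal's actual server
                API: a prekey comes back only when the target exists and
                has approved discovery by the requester; every other
                case gets the same reply.

                    def lookup(requester, target, users):
                        # users maps a phone number to a dict with
                        # "prekey", "allow_all", and "allow_list".
                        user = users.get(target)
                        allowed = user is not None and (
                            user["allow_all"]
                            or requester in user["allow_list"])
                        if allowed:
                            return {"status": "ok",
                                    "prekey": user["prekey"]}
                        # Non-users and users who have not approved
                        # the requester look identical.
                        return {"status": "unknown_or_not_approved"}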
        
               | andrewzah wrote:
               | > How would a messaging app work without contact
               | discovery?
               | 
               | "Hey, add me on telegram, my username is @andrewzah".
               | This isn't a hard problem.
               | 
               | I don't know why we decided apps hoovering up our contact
               | lists in exchange for convenience was so important. For
               | an app that touts itself as private and secure, I still
               | had to explain to my brother why giving it his contact
               | list wasn't a good idea.
        
               | UncleMeat wrote:
               | This is a hard problem. The evidence for this is the
               | decades of failed attempts to get people to use pgp and
               | other systems where I need to have a freaking party in
               | order to figure out who I can message and how before I
               | actually start communicating.
        
               | lotyrin wrote:
               | I guess I just don't value "contact discovery" as a
               | feature?
               | 
               | I just don't see it as a casual thing the way the target
               | users of these apps apparently do.
               | 
               | I want to explicitly control, per any form of
               | communication, each person who is to be made to know that
               | I operate that form of communication and whether or not
               | that form of communication with me is open to them.
               | 
               | I do not want to open up a new app and have a large
               | populated list of past acquaintances appear, I absolutely
               | do not want to appear in such a list, I would rather not
               | use a given app than risk having someone show up
               | messaging me uninvited.
        
               | TheAdamAndChe wrote:
               | That's great! Signal may not be for you then, which is
               | okay.
        
               | cortesoft wrote:
               | or....make this an opt in feature?
        
               | UncleMeat wrote:
               | In order to do this, the signal server must maintain a
               | list of who is allowed to talk to who, otherwise the list
               | of people available on signal can be obtained through
               | enumeration.
               | 
               | This is a privacy trade off. Some services chose to keep
               | this list. Signal chose to use phone numbers.
        
               | cortesoft wrote:
               | At least that requires the other person to have my
               | contact saved, and actively try to reach me. I don't
               | clear out my contact list frequently, so I don't want old
               | contacts to be PROACTIVELY messaged about me joining
               | Signal... if they are looking for me, fine.
        
               | seemslegit wrote:
               | How does it require them to actively try to reach you ?
        
               | cortesoft wrote:
                | Because it requires the other person to actually try to
                | send a message to me. They would have to continuously do
                | that if they wanted to be notified when I joined.
               | 
               | This is very different than Signal doing it automatically
               | when I join.
        
               | leetcrew wrote:
               | personally, I have admitted defeat and accepted that most
               | people will not use my favorite messaging app. so I just
               | default to sms and ask people who I talk to often if they
               | have (or would consider installing) my favorite app. at
               | this point, it's only slightly less convenient to
               | exchange usernames.
               | 
               | the contact prepopulation isn't even that useful.
               | telegram in particular seems to drop notifications if you
               | don't exclude it from power management on Android. if you
               | don't know the user has notifications turned on and has
               | jumped through the hoops to make them actually work, you
               | may as well send messages into a black hole. this is part
               | of why I never assume it's a good way to contact someone
               | without explicitly asking.
        
           | tlb wrote:
           | Hashing phone numbers isn't useful for privacy. You can test
           | the entire space of 10^10 phone numbers against a list of
           | hashes in hours, and you only have to do that step once.
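
            A sketch of why: with ten-digit numbers the whole space can
            be enumerated once and every truncated hash inverted by a
            lookup. (Hypothetical code; in practice you would stream the
            table to disk rather than hold 10^10 entries in a dict.)

                import hashlib

                def reverse_table(prefix="+1", digits=10):
                    # Map truncated hashes back to numbers by brute
                    # force over the whole number space.
                    table = {}
                    for n in range(10 ** digits):  # ~10^10, done once
                        number = "%s%0*d" % (prefix, digits, n)
                        h = hashlib.sha1(
                            number.encode()).digest()[:10]
                        table[h] = number
                    return table

                # Afterwards, matching a leaked hash is a dict lookup.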
        
           | ajconway wrote:
           | They use Intel SGX to keep themselves from being able to
           | access their users' contact lists.
           | 
           | Still, one has to possess and reveal a valid phone number to
           | use the service. This must help to grow the user base.
        
             | seemslegit wrote:
              | This misses the point: just because someone at some point
              | added me to their phone contacts and uses Signal does not
              | mean I want them notified when I start using Signal too.
        
               | Macha wrote:
               | This has been my blocker to using Signal or Telegram
               | also.
        
               | 333c wrote:
               | With Telegram at least, you do not have to share your
               | contacts with the app. You can build up a Telegram-
               | specific list of contacts based on who you message on the
               | platform.
        
               | Macha wrote:
               | This is the inverse of the issue. I don't want everyone
               | that has my phone number to be able to see/add me on
               | telegram. That would require those users not to upload
               | their contacts, which is out of my control.
        
               | 333c wrote:
               | I agree with your point, and I suspect that part of this
               | comes from the Signal android app, which can be used as a
               | replacement for all messaging (including SMS). Like
               | Apple's Messages, it automatically upgrades from SMS to
               | Signal when the recipient supports receiving messages on
               | that platform.
        
         | TechBro8615 wrote:
         | Telegram does the same thing. In fact, so does Instagram, which
         | I find most egregious, since it asks for your number for 2FA
         | purposes then notifies anyone who has your number saved that
         | you've joined.
         | 
         | Every coach, recruiter, drug dealer or one night stand I've had
         | in my life doesn't need to know when I sign up to Instagram.
         | Some of them might not have even had my real name until they
         | got that notification.
         | 
         | IMO this should be illegal, now that I think about it.
        
           | BubRoss wrote:
           | That's almost worse than having no volume control in their
           | interface, even for their desktop site.
        
             | [deleted]
        
         | AndyMcConachie wrote:
         | Why is this downvoted?
        
           | dublinben wrote:
           | It's off topic of the submitted article.
        
           | [deleted]
        
           | eximius wrote:
           | Contact Discovery is not seen as privacy invasive, I guess.
           | It says, X has Signal, but that's it. So I see how it is,
           | strictly speaking, broadcasting 'private' information, but it
           | is hard to care terribly. I'm far more concerned by the
           | privacy of my conversations than the fact I at one point
           | installed signal.
        
             | ape4 wrote:
             | But it means they have slurped all your contacts. How are
             | they stored? Who are they shared with? etc
        
               | ajconway wrote:
                | It's really not a secret:
                | https://signal.org/blog/secure-value-recovery/
        
               | eximius wrote:
                | They DO NOT slurp your contacts.
               | 
               | They invented a way to do contact discovery in a secure
               | way: https://signal.org/blog/private-contact-discovery/.
               | From the article:
               | 
               | "Using this service, Signal clients will be able to
               | efficiently and scalably determine whether the contacts
               | in their address book are Signal users without revealing
               | the contacts in their address book to the Signal
               | service."
               | 
               | This is why Signal gets so much benefit of the doubt from
               | the cryptography/security/privacy community. Their
               | default approach to these problems is conservative in
               | favor of the user until they can invent the technology
               | needed to support a feature with security/privacy.
        
               | ape4 wrote:
               | Thanks. That's actually great, I was wrong.
        
               | propinquity wrote:
               | The Signal App, which is open source, periodically sends
               | truncated cryptographically hashed phone numbers to the
               | Signal server, which is also open source. The server does
               | not store the truncated hashes that the app sends to the
               | server. So, they only temporarily have partial hashes
               | which they do not store or share with anyone.
        
             | seemslegit wrote:
             | See, there's an apparently archaic concept in software
             | called user preference - they could ask people upon joining
             | if they want to be contact-discoverable or not.
        
               | eximius wrote:
               | And that's fair! It's not perfect. And, ignorantly, I
               | would assume it'd be easy to add, so they probably
               | should.
               | 
               | But I can understand why this is a trade-off they'd make
               | in terms of your comfort level (relatively rare to care
                | about this) vs. massive usability and onboarding gains.
        
               | seemslegit wrote:
               | What you call "comfort level" is in fact the primary
               | value proposition of a tool like signal and the fact that
               | they have chosen to compromise on it to drive adoption is
               | symptomatic of the many things wrong with the setting in
               | which this decision was made. Not unlike calling out the
               | EARNIT senators on using child abuse justifications in
               | bad faith to promote the agenda of censorship and
               | surveillance.
        
           | clarkevans wrote:
           | Perhaps the parent is downvoted not because of the underlying
           | content of the message, but because of the black/white
           | framing.
           | 
            | Every tool has a barrier to use, and even with a strong
            | commitment to security/privacy, Signal seems to have decided
            | that letting their users know which of their friends are on
            | Signal is more beneficial than the potential for abuse is
            | harmful. Further, there may come a time when this calculus
            | changes, for example, if spammers arrive on the Signal
            | platform and start decreasing usability.
        
             | rapnie wrote:
              | > Signal seems to have decided that letting their users
              | know which of their friends are on Signal is more
              | beneficial
             | 
             | I feel the reason might be more as a way to promote more
             | widespread use of Signal itself, rather than directly
             | serving their users.
        
               | dublinben wrote:
               | The creators of Signal surely believe that 'promoting the
               | more widespread use of Signal itself' benefits their
                | users. They're not a for-profit company that exists for
                | some other purpose.
        
       | hawkice wrote:
        | Plenty of people have complex opinions about 230, but it's a law
        | that says: if you see a comment that defames you, sue the person
        | who made it; it's got nothing to do with, e.g., whoever runs the
        | skateboarding forum. Who opposes this? It's just codifying the
        | common-sense understanding of the internet.
        
         | core-questions wrote:
         | OK, so please do tell me how to sue `sk8rboy2020` on the forum
         | then?
        
           | Macha wrote:
           | How do you sue the guy that shouted at you as they left the
           | restaurant? Who stuck a defamatory sign on the electric pole?
           | 
           | I don't see why the issues these present should move
           | responsibility to the restaurant or electric company,
           | however.
        
           | hawkice wrote:
            | I mean, 230 doesn't entitle them to immunity to subpoenas for
           | records about which IP address made a comment, and internet
           | providers routinely translate those to real identities in
           | response to valid legal requests.
           | 
            | But that might work. Maybe they don't keep records. How do
            | you sue over defamatory information scrawled on a bathroom
            | wall? Sometimes we don't have records for things. That's
            | life.
        
           | toast0 wrote:
           | Sue John Doe, and ask the court to issue a subpoena to the
           | forum for identifying information, and then to the ISP, and
           | once you have that, add the account holder as a defendant to
           | the suit.
           | 
           | It's not fast, and it's not easy, but such is life.
        
             | core-questions wrote:
             | I'm really coming more from the perspective of valuing
             | anonymity. I'd prefer a world where you simply can't sue
             | the guy, and have to suck it up that people say things you
             | don't like.
        
             | Der_Einzige wrote:
             | Proxy server usage would skyrocket if this became common -
             | as it should.
        
               | hawkice wrote:
                | Whoever is running the proxy can also get a subpoena. If
               | you run it yourself the ISP will know who you are.
               | Someone is paying to access the internet, so they
               | probably have records.
        
       ___________________________________________________________________
       (page generated 2020-04-08 23:00 UTC)