[HN Gopher] Section 230 Explained
       ___________________________________________________________________
        
       Section 230 Explained
        
       Author : gok
       Score  : 139 points
       Date   : 2020-10-16 19:11 UTC (3 hours ago)
        
 (HTM) web link (arstechnica.com)
 (TXT) w3m dump (arstechnica.com)
        
       | [deleted]
        
       | throwawaysea wrote:
       | This is one take, but another take is what Section 230 _should_
       | be, putting aside legal technicalities. We have expectations on
       | how big tech companies should operate, and they are not being
       | met. Facebook is bigger than any country. Twitter is bigger than
       | most. They are the digital public square.
       | 
       | Even if US law permits them to act as they will, it is dangerous
       | for our society to have organizations that are essentially
       | utilities provide a non-neutral platform. It doesn't make a
       | difference if a private company is censoring you - the
       | distinction is just cosmetic. The impact is as real as a
       | government censoring you, since any alternative avenue of speech
       | is significantly less effective and for most intents and
       | purposes, simply doesn't exist.
        
         | esoterica wrote:
         | Facebook and Twitter are absolutely not even close to being
         | utilities. Utilities are essential services AND have monopoly
         | power. Facebook and Twitter are neither essential services nor
         | do they have monopoly power (there are a billion other websites
         | you can post on, and the barrier to entry to creating your own
         | website is close to zero).
         | 
         | Being big is not the same as being a monopoly. McDonald's is
         | big, but they are not a monopoly because they have a lot of
         | competitors. Regulating Facebook or Twitter as utilities would
          | be as dumb as regulating McDonald's as if it were a utility.
        
           | j4nt4b wrote:
           | I think that Twitter and Facebook are a lot closer to the
           | United States Postal Service than they are to McDonald's.
           | Imagine getting banned from sending/receiving mail and having
           | your home or business address "delisted" by a private
           | company, and then you get closer to what is going on.
        
             | esoterica wrote:
             | The USPS is
             | 
             | 1. A government organization
             | 
             | 2. An essential service
             | 
             | 3. A de facto monopoly in many rural areas that are not
             | profitable for private companies to serve.
             | 
             | Facebook/Twitter are none of these things.
             | 
             | HN: Facebook and Twitter are a stupid, pointless waste of
             | time and you should delete your account and leave those
             | platforms.
             | 
             | Also HN: Facebook and Twitter are essential services and
             | banning people from Facebook and Twitter is a fundamental
             | violation of their human rights.
        
             | ceejayoz wrote:
             | If you get banned from Twitter, you can go to Facebook,
             | just like if you get banned from McDonald's you can go to
             | Burger King.
             | 
             | If you make a habit of shitting in the dining rooms, you'll
             | likely find yourself banned from _all_ the restaurants
             | eventually.
        
               | j4nt4b wrote:
               | The false equivalence between the right to due process in
               | accessing the world's de-facto global communication
               | platform and eating junk food is honestly sickening to
               | me. The power to say anything you want to anybody in the
               | world has fundamentally different consequences for every
               | society on Earth than the power to slowly poison yourself
               | in the colorfully branded plastic seat of your choosing.
        
               | ceejayoz wrote:
               | > the world's de-facto global communication platform
               | 
               | Sorry, was that Twitter? Or Facebook?
               | 
               | It's kinda hard to argue it's a monopoly when I can't
               | figure out which one you're referring to and you're
               | saying "Twitter _and_ Facebook ".
               | 
               | Meanwhile, this is why the USPS isn't a great comparison:
               | https://en.wikipedia.org/wiki/Private_Express_Statutes
        
               | typenil wrote:
               | If the person you responded to had said "monopoly," your
               | comment would only be pedantic (and maybe bad faith), but
               | they didn't say that - it's easy to argue with a straw
               | man. Don't demean yourself.
        
               | coldpie wrote:
               | If your problem is the power and reach that these
               | companies have, then _fix that problem._ Break them up;
                | mandate open communications protocols; create a
                | gov't-owned communications platform. Destroying UGC on the
               | Internet, or passing blatantly unconstitutional laws,
               | isn't going to fix the problem.
        
       | cblconfederate wrote:
       | > Section 230 should be revoked immediately
       | 
       | No, instead it should be expanded to every publication.
        
       | cwhiz wrote:
       | I genuinely don't mind if the internet is fundamentally reset. I
       | am completely unconvinced that Facebook, Twitter, Reddit, and
        | other such services are a net benefit for humanity. Being able
        | to yell at each other in giant anonymous forums is not something
        | that I think is worth protecting. Perhaps we should go back to
        | peer-to-peer communication.
       | 
        | So to those who keep saying "the internet as we know it is at
       | stake", I say... so what? Maybe we got it wrong.
        
       | EdJiang wrote:
       | If you haven't read Section 230, go do so now. It's enabled the
       | development of the modern internet as we know it, and the meat is
       | only 3 sentences. The rest is preamble or interactions with other
       | laws.
       | 
       | > (c) Protection for "Good Samaritan" blocking and screening of
       | offensive material
       | 
       | > (1) Treatment of publisher or speaker
       | 
       | > No provider or user of an interactive computer service shall be
       | treated as the publisher or speaker of any information provided
       | by another information content provider.
       | 
       | > (2) Civil liability
       | 
       | > No provider or user of an interactive computer service shall be
       | held liable on account of-
       | 
       | > (A) any action voluntarily taken in good faith to restrict
       | access to or availability of material that the provider or user
       | considers to be obscene, lewd, lascivious, filthy, excessively
       | violent, harassing, or otherwise objectionable, whether or not
       | such material is constitutionally protected; or
       | 
       | > (B) any action taken to enable or make available to information
       | content providers or others the technical means to restrict
       | access to material described in paragraph (1).
       | 
       | https://uscode.house.gov/view.xhtml?req=(title:47%20section:...
        
         | gowld wrote:
         | Are anonymous trolls "information content providers"?
         | 
         | An easy fix is to say that an "information content provider"
         | must be a legal person who is liable for their content. Then
          | it's easy to find where the buck stops for a tweet, a Ripoff
          | Report post, or a piece of revenge porn.
        
         | makomk wrote:
         | What makes Section 230 a complicated and contentious issue
         | isn't the actual details of the law - as you say, that's quite
         | simple - it's the consequences of such a broad, powerful,
          | simple thing as protecting "interactive computer services"
          | from almost all kinds of legal action for content created by
          | others that they keep up, regardless of what they remove, with
          | few caveats, across a vast swathe of causes of action,
         | business models, moderation policies, etc.
         | 
         | For example, suppose you're an online service Twitbook used by
         | a vast swathe of the world to communicate, and you decide that
         | you want to allow calls to murder politicians you dislike but
         | not (obviously) ones you like. Section 230 gives you pretty
         | good protection from liability over your decisions as to which
         | political figures get threatened with murder. Probably even if
         | one of your users gets inspired and puts a bullet in the head
         | of someone you'd like to see dead.
         | 
         | Or suppose you've got a nice legalized extortion racket seeking
         | out negative claims about people or businesses, getting them to
         | rank highly in Google, not allowing the original posters to
         | remove them, and demanding money from the targets to take them
         | down. Section 230 offers pretty much ironclad protection for
         | your business model by making it nearly impossible to get a
         | court order forcing you to take the content down, meaning you
         | can ensure the only way to make it go away is to pay up, and
         | you can even literally call the fee a charge to remove
         | libellous or defamatory content and there's not a damn thing
         | the court system will do about it. There's a long-running
         | website Ripoff Report that has this as their business model,
         | and they've won every case trying to get them to remove
         | defamatory content without paying them money for the privilege
         | thanks to Section 230. There's also plenty of imitators going
         | after individuals, seeking out (say) claims they've cheated on
         | their partners and charging money to remove them - again,
         | solidly protected by Section 230.
        
           | gowld wrote:
            | Why wasn't backpage.com protected by Section 230?
        
             | curryst wrote:
              | Subsection (e)(5) of Section 230 is a specific carve-out:
              | the protections don't apply to sex trafficking content. I
             | believe they used that to argue that Backpage wasn't
             | protected by section 230.
        
           | curryst wrote:
           | > For example, suppose you're an online service Twitbook used
           | by a vast swathe of the world to communicate, and you decide
           | that you want to allow calls to murder politicians you
           | dislike but not (obviously) ones you like. Section 230 gives
           | you pretty good protection from liability over your decisions
           | as to which political figures get threatened with murder
           | 
            | That's not true on two fronts. First, Section 230 requires
            | good faith. That scenario would almost certainly fail the
            | good-faith test, assuming it can be demonstrated that it was done
           | intentionally. So civilly, they would likely still be liable.
           | In addition, Section 230 has no bearing on criminal law (it's
           | specifically called out in subsection e). So in the event
           | someone was killed, there would likely be a host of people
           | from Twitter facing charges for being complicit in the death.
           | They are effectively Charles Manson in this scenario, and I
           | think they would have a hard time arguing that selectively
           | filtering messages to expose users to messages encouraging
           | them to kill someone does not count as speech.
           | 
           | I don't see why the second is a terrible issue. They're
           | effectively a tabloid at that point, well known for spreading
           | libelous content. I would be surprised if Queen Elizabeth is
           | overly concerned that the tabloids say she's a lizard person.
            | And also, RipoffReport is the wrong party to sue here, which
           | is why that isn't working. If the content is libelous and you
           | want it taken down, sue the person who wrote it, and have the
           | judge issue a takedown order to Ripoff Report. Section 230
           | only protects them from civil liability, it doesn't make them
           | immune to takedown requests.
           | 
            | As a funny aside, I wonder if you could upload a copyrighted
           | image and then file a DMCA request against the page and have
           | it delisted by Google. Their terms say you grant them a
           | copyright, but if you upload a work that you don't own the
           | copyright for you can't give them a copyright. Technically
           | you're violating DMCA for the upload, and again by lying on
           | the DMCA form you fill out (since you have to own the
           | copyright) but as long as you pick something nobody is likely
           | to sue you for, it should be fine (copy the credits from a
           | book or something). Or if you want to get clever, you could
           | have a friend make a painting of a stick figure in Paint and
           | slap a copyright logo on it, then upload it and have your
           | friend file the DMCA complaint. For bonus points, do it to
           | every single page. They aren't liable because of section 230,
           | but you could probably still force them to play a game of
           | whack-a-mole with Google.
        
         | TimPC wrote:
         | There is good reason to reform section 230. Right now courts
         | are applying the liability so broadly that companies aren't
         | liable even after they are notified about illegal behaviours on
         | their site. In a court case involving Grindr refusing to take
         | down a profile created by someone's ex-bf that was being used
          | to harass him, their refusal to do so even after being
          | contacted by lawyers was protected under Section 230 and the
          | case was thrown out. I'd be all for a modified version of
          | Section 230 that
         | required sites to have a contact email and made them liable if
         | they don't address certain issues in an appropriate time
         | period.
         | 
          | It's also worth mentioning that before Section 230, if you
          | didn't moderate you weren't liable; so in a certain sense it's
          | a censorship bill rather than a free-speech one, since it
          | protects removing speech. That being said, I do understand the need to
         | moderate sites and remove some content, hence my proposal of
         | the modified version rather than a call for its elimination
         | entirely.
        
           | s__s wrote:
           | To me that problem can be dealt with between the user, their
           | ex-bf and local law enforcement. Grindr need not be involved,
           | and no special internet laws need apply.
           | 
           | That's a case of harassment and possibly some form of
           | identity theft.
           | 
           | I don't find your proposal tenable and it would obviously be
           | prone to abuse.
        
             | gowld wrote:
              | That's only true if Grindr is obligated to positively ID a
             | US resident who provided the content.
        
           | curryst wrote:
           | Grindr didn't refuse to take them down; the ex-bf kept
           | creating new ones. That's a whole different problem. Grindr
           | claims they were monitoring for new profiles, but that some
           | slipped through their checks.
           | 
           | In that scenario, I don't know what a reasonable level of
           | effort for Grindr to exert is. It seems infinitely
           | unreasonable to make them liable for any failure; there is a
           | determined person on the other end that will probably
           | eventually find some way of adding spaces or using symbols
           | instead of letters, or using weird UTF-8 symbols or
           | something.
           | 
           | I don't see Grindr as failing there; while they probably
            | could have done more, they seem to have made a good-faith
            | effort to stop it. The police should have intervened and
           | filed charges against the boyfriend for stalking and
           | harassment. Even failing that, I would have filed a civil
           | case so I could subpoena the logs from Grindr and used them
           | as evidence in a restraining order.
           | 
           | Grindr is not the appropriate party to resolve this. I don't
           | call Ford when people drive their trucks like assholes. I
           | don't call Glock when somebody shoots someone. If you're
           | going to call Grindr, you might as well call their ISP and
           | Google too, see if you can get the ISP to block Grindr or get
            | Google to route Grindr to localhost. They're complicit in
           | enabling this too.
           | 
           | > Right now courts are applying the liability so broadly that
           | companies aren't liable even after they are notified about
           | illegal behaviours on their site
           | 
           | This, to a degree, makes sense. They haven't been notified
            | about illegal behavior on their site; they have been notified
           | of allegedly illegal behavior on their site. Grindr is well
           | within their rights to say that they don't believe that the
           | profile violates any laws. For example, it says that he
           | attempted to file for a restraining order and was denied. So
           | that court either found that what the ex-bf was doing wasn't
           | illegal, or that he failed to meet the requirement of a
           | preponderance of evidence. So he failed to convince a judge
           | that his ex was more likely than not stalking him. Should
           | Grindr be required to take action on a claim that is more
           | likely false than true?
           | 
           | > I'd be all for a modified version of section 230 that
           | required sites to have a contact email and made them liable
           | if they don't address certain issues in an appropriate time
           | period.
           | 
           | That's fraught with issues. What counts as addressing the
           | issue? Is it banning the profiles as people identify them? Is
           | it banning the personal info from appearing in profiles? Do
           | they have to hire a group of people to memorize all the bits
           | of bad data, and check new profiles and profile updates for
           | those snippets, as well as any clever encodings that a
           | computer wouldn't recognize?
           | 
           | What is an appropriate time period? Is it some flat period,
           | like a week, regardless of what changes are required? Does it
           | vary, and if so, who decides what's a reasonable amount of
           | time?
           | 
           | This is not to mention that literally none of this goes
           | through a court, which is terrifying and exceptionally prone
           | to abuse. Of course, it could go through a court, but we
           | already have laws and remedies for this situation in court.
           | 
           | Cases like that make it seem really cut and dry, like there
           | would never be a grey area. Even ignoring cases of outright
           | fraud, what do you do in situations where one side feels
           | victimized but it doesn't actually meet any legal standards?
            | Like if person A always replies to and argues with person B's
           | tweets. When person B blocks person A, they make a new
           | account. Person B says they feel harassed and wants to force
           | Twitter to do something about it. Person A says that Twitter
           | is a public forum, and that if people don't want other people
           | to disagree, they should use a more private forum. It never
           | goes further than that. No threats, no doxxing, no real life
           | interactions. Person A is probably an asshole, sure, but I
           | don't think section 230 grants you immunity from assholes. I
           | don't think it counts as stalking or harassment either
           | (though I could certainly be wrong, not a lawyer). Should we
           | really allow Person B to force Twitter to do something
           | without having a judge involved? I would really rather not
           | give the Twitter lynchmobs yet another way to dispense their
           | own vigilante justice.
        
           | eli wrote:
           | _> a modified version of section 230 that required sites to
           | have a contact email and made them liable if they don't
           | address certain issues in an appropriate time period._
           | 
           | So recreate the DMCA Takedown process but for speech? Do you
           | think the DMCA is working well for copyright holders and
           | users?
           | 
           | The abuse of this would be massive. Let's say I don't like
           | the comments you wrote so I email the host of the forum
           | they're on and say they're defamatory. Now the host has to
           | decide if they are defamatory (which is often a tough call
           | even for lawyers) and also weigh the risk that I might file a
           | costly lawsuit anyway. Or they just delete the comment.
        
         | wahern wrote:
         | I don't think it would be world ending to get rid of Section
         | 230. I _almost_ would like to see it happen, if only because it
         | would have precisely the opposite effect expected by all the
          | people whining about being censored. Though, I suppose you
          | can't be censored if the channel itself is extinguished.
         | 
         | More practically, the technically literate would go back to the
         | world of Usenet, mailing-lists, and minimalistic forums like
         | HN, hopefully inventing distributed reputation systems in the
          | process. I have this vague idea for PGP web-of-trust-like
          | signing of Usenet posts (published as hidden posts when readers
          | +1/-1), which are then spam-scored based on the depth of the
         | attestation chain to the reader's own trusted posters, which
         | may have been seeded from one or more centralized databases of
         | group maintainers, similar to the current registration system
         | for moderated Usenet groups except you could freely choose
         | alternative registrars.
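
          A toy sketch of the attestation-chain scoring described above,
          assuming a simple vouching graph and a per-hop decay (the
          function name, decay factor, and data layout are all
          hypothetical, not from any real implementation):

          ```python
          from collections import deque

          def trust_score(attestations, seeds, author, decay=0.5):
              """Score a poster by the shortest attestation chain linking
              them back to the reader's seed set of trusted posters.

              attestations: dict mapping poster -> set of posters they
                            vouch for (e.g. derived from signed +1 posts)
              seeds:        posters the reader trusts directly (depth 0)
              author:       the poster being scored
              decay:        multiplicative penalty per hop in the chain
              """
              if author in seeds:
                  return 1.0
              seen = set(seeds)
              queue = deque((s, 0) for s in seeds)
              while queue:  # breadth-first search finds shortest chain
                  poster, depth = queue.popleft()
                  for vouched in attestations.get(poster, ()):
                      if vouched == author:
                          return decay ** (depth + 1)
                      if vouched not in seen:
                          seen.add(vouched)
                          queue.append((vouched, depth + 1))
              return 0.0  # no chain back to the seeds: treat as spam

          # Reader trusts alice directly; alice vouches for bob, bob for
          # carol, so carol is two hops out.
          graph = {"alice": {"bob"}, "bob": {"carol"}}
          print(trust_score(graph, {"alice"}, "carol"))    # 0.25
          print(trust_score(graph, {"alice"}, "mallory"))  # 0.0
          ```

          An author with no chain back to the seed set scores zero, i.e.
          is spam by default, which is where the centralized registrar
          seeding mentioned above would come in.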
        
           | heavyset_go wrote:
            | > _I don't think it would be world ending to get rid of
           | Section 230_
           | 
           | If you're running a start up, how would you feel knowing that
           | if a user uploaded illegal content to your servers, you could
           | be raided in the middle of the night and imprisoned for it?
           | 
           | Only those with billions of dollars to throw at moderation
           | would be able to comply with the law. Everyone else would
           | need to block user content by necessity, or risk having their
           | lives ruined by malicious users.
           | 
           | The net result is that hosting free speech on the internet
           | would be too risky for anyone other than giant corporations.
           | The liability to host users' speech would be far too high for
           | anyone else.
        
           | [deleted]
        
           | rrobukef wrote:
           | Interesting. However, any manual action (choosing trusted
           | posters, maintained database) is bad for adoption. Facebook
            | and Twitter take care of it; you should too.
           | 
           | Perhaps you should use karma and comment interactions to
           | automatically attest the people you interact with. Add a
           | "report" button to disavow certain users. Now there is a
           | positive and negative feedback loop to reduce the workload of
           | attestation.
           | 
           | Caveat: attestation must be stabilized. The existing
           | hierarchies of admin/(super-)moderator work well as trusted
           | posters. On the other hand, picking and choosing your
           | moderator(s) is interesting and will birth new flame-wars and
           | division.
           | 
           | Caveat (2): Adding more crypto explodes the amount of data
           | which must be handled. Especially when every comment and
           | upvote is signed.
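
            A minimal sketch of that automatic-attestation feedback loop,
            assuming trust weights in [-1.0, 1.0] (the function and
            constants here are hypothetical): ordinary interactions nudge
            an attestation upward, while the "report" button disavows
            outright.

            ```python
            def update_attestations(weights, actor, target, event, step=0.1):
                """Adjust actor's attestation of target from organic
                interactions, instead of asking users to curate trust
                lists by hand.

                weights: dict of (actor, target) -> trust in [-1.0, 1.0]
                event:   "upvote"/"reply" nudge trust up;
                         "report" disavows hard.
                """
                w = weights.get((actor, target), 0.0)
                if event in ("upvote", "reply"):
                    w = min(1.0, w + step)  # positive feedback, capped
                elif event == "report":
                    w = -1.0                # explicit disavowal overrides
                weights[(actor, target)] = w
                return w

            w = {}
            update_attestations(w, "alice", "bob", "upvote")
            update_attestations(w, "alice", "bob", "upvote")
            print(w[("alice", "bob")])  # 0.2
            update_attestations(w, "alice", "bob", "report")
            print(w[("alice", "bob")])  # -1.0
            ```

            The hard -1.0 on report is one way to "stabilize" attestation
            as the second caveat asks: a single disavowal can't be undone
            by accumulated upvotes.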
        
           | throwawaygh wrote:
           | The plan is not to repeal Section 230. The plan is to make
           | protection contingent on appeasing political appointees at
           | the FTC.
           | 
           | Whoever controls the FTC will be able to (and will) pressure
           | the major social media networks into acting as a propaganda
           | arm for their political party.
           | 
           | As dystopian as FB and Twitter are today, in this case, the
           | medicine is poison.
           | 
            | See
            | https://www.hawley.senate.gov/senator-hawley-introduces-legi...
        
             | zalkota wrote:
             | Well it's either Facebook chooses their party or the
             | government chooses it for them...
        
             | eli wrote:
             | Hawley's plan is just one of them.
             | 
             | Some people, including both Joe Biden and Donald Trump,
             | have called for a complete repeal of Section 230 at various
             | times in the last year.
        
               | dragonwriter wrote:
               | > Some people, including both Joe Biden and Donald Trump,
               | have called for a complete repeal of Section 230 at
               | various times in the last year
               | 
               | AFAICT, that characterization of Biden's position is
               | based entirely on a single oral interview response, which
               | quite arguably was not saying that the law should be
                | repealed but that, on the facts of Facebook's specific
               | conduct, and that of some unspecified other platforms,
               | their conduct should be excluded from Section 230
               | protections because they were knowingly engaging in
               | misinformation.
               | 
               | Note that Section 230 protections in case law are broader
               | than what is provided on the face of the statute; in
               | addition to the "publisher or speaker" protection in
               | Section 230(c)(1); courts have extended it to also
               | prevent liability as a distributor for content, IIRC by
               | synthesizing 230(c)(1) and the good-faith blocking rule
               | in Section 230(c)(2) and some legislative history to add
               | the not-express-in-statute rule that sites are also not
               | liable even as a distributor for the material they don't
                | block, with some exceptions. Biden's statement is consistent
                | with restricting Section 230 to what it says on its face,
                | which would be _only_ removing publisher/speaker
               | liability, not distributor liability (which comes about
               | when the distributor has knowledge or legal notice of the
               | legal problem with the content.)
        
               | eli wrote:
               | I think that's a very generous reading of what Biden
               | said, especially considering the followup question and
               | the fact that he's declined to clarify his position in
               | the intervening 8 months. Search "230" on this page to
                | see it
                | https://www.nytimes.com/interactive/2020/01/17/opinion/joe-b...
               | 
               | Anyway my point was that there have been calls to repeal
               | 230 from across the ideological spectrum.
        
               | dragonwriter wrote:
               | > I think that's a very generous reading of what Biden
               | said, especially considering the followup question
               | 
               | In the followup, Biden reiterates the conduct condition
               | and the knowing falsehood criteria, which reinforces
               | rather than weakens the impression that he is calling for
               | the protections of Section 230 to be inapplicable to the
               | actor/action in question due to their knowledge, a
               | distributor-like standard, and not for the law itself to
               | be repealed generally.
               | 
               | I suppose you could read the first line of his response
               | to the second followup ("He should be submitted to civil
               | liability and his company to civil liability, just like
               | you would be here at The New York Times") as calling for
               | publisher-like liability if you ignore the _explicit_
               | references to actual knowledge as the basis for
               | nonprotection in both the original response and the first
               | followup, but I do think that that is the _more_ strained
               | interpretation, not the less strained.
               | 
               | > and the fact that he's declined to clarify his position
               | in the intervening 8 months.
               | 
               | Why would you assume that he doesn't want to clarify
                | because he wants a full repeal? It's not as if there isn't
               | a constituency for a full repeal, especially on the
               | right, and a key part of Biden's strategy is holding
               | together a Bernie Sanders-to-Bill Kristol left-right
                | alliance against Trump. Keeping disagreements about the details
               | of his position on the issue (which is clearly peripheral
               | to his platform, on the grand scheme of things) out of
               | the reasons for people to not feel comfortable with him
               | is as plausible a motivation for that _regardless_ of
                | which side of the full-repeal-vs.-reform divide his
                | preference on 230 sits on.
        
               | throwawaygh wrote:
               | Do you have links to actual plans from other folks?
               | Hawley's is the only one I can find an actual draft bill
               | for.
        
               | eli wrote:
               | Brian Schatz and John Thune have one:
               | https://www.schatz.senate.gov/imo/media/doc/OLL20612.pdf
               | 
                | The DOJ has one:
                | https://www.justice.gov/opa/pr/justice-department-unveils-pr...
               | 
                | The White House has a somewhat bogus EO:
                | https://www.whitehouse.gov/presidential-actions/executive-or...
               | 
               | There have been a bunch of attempts to rewrite and at
               | least a few attempts to just repeal it from both
               | Democrats and Republicans.
        
               | throwawaygh wrote:
               | Thanks.
        
             | adamiscool8 wrote:
             | That's nonsense. Showing "their algorithms and content-
             | removal practices are politically neutral" is not an
             | insurmountable bar. It's just inconvenient for Big Tech's
             | supporting interests.
        
               | throwawaygh wrote:
               | Really? You think we can here in this thread all agree to
               | what it means for an algorithm or content-removal
               | practice to be "politically neutral"?
               | 
               | If so, please go ahead! But I seriously doubt it. This is
               | a thing political philosophers argue about in journals to
               | this day, that lawyers argue about in SCOTUS cases to
               | this day, and that has been litigated to death in
               | thousands of HN threads over the years.
               | 
               | The question of what "politically neutral" means is
               | perhaps the MOST political question there is. The
               | delineation of political speech from non-political speech
               | defines the playing field.
               | 
               | And even setting aside genuine disagreement, politics
               | does not operate on good faith. It operates on power. In
               | practice, the bill does not outline specific criteria. So
               | "politically neutral" will mean whatever the FTC wants it
               | to mean. Which means it will mean whatever the appointees
               | of the FTC chair want it to mean.
               | 
               | Josh Hawley, of course, knows and understands how power
               | works. He would not be proposing this bill if the big
               | tech companies were right-biased. Democrats also
               | understand how power works. So, in this counter-factual
               | world of right-biased social media, it would be Democrats
               | clamoring for federal intervention and Hawley decrying
               | the "Democrat attack on the most successful American
               | companies". Do you really believe otherwise?
        
               | dragonwriter wrote:
               | > You think we can here in this thread all agree to what
               | it means for an algorithm or content-removal practice to
               | be "politically neutral"?
               | 
               | Well, there's a simple answer, but I doubt we'll agree on
               | it. It is impossible for a content-removal practice,
               | algorithmic or otherwise, to be politically neutral. Any
                | such practice will involve (whether implemented case-by-
                | case or encoded into the design of the algorithm)
               | judgements of a political nature and with political
               | impacts.
        
               | throwawaygh wrote:
               | _> It is impossible for a content-removal practice,
               | algorithmic or otherwise, to be politically neutral._
               | 
               | Right. My point is that Hawley's whole premise of a
               | "politically appointed political neutrality committee" is
               | absurdly transparent.
        
               | judge2020 wrote:
                | Exactly. Imagine a Republican-backed FCC arguing that,
                | because more voters in the US are Democrats, it's not
                | politically neutral to show a news article to everyone
                | in the US, and that the algorithm should instead show
                | it only to an equal number of Republicans and Democrats.
        
               | JoshTriplett wrote:
                | Exactly. Also, people have a stronger negative reaction
                | to news they don't like than a positive reaction to news
                | they agree with, and extremists think
               | even neutral descriptions of reality are biased against
               | them, so if you show a neutral selection to a non-neutral
               | person they're likely to see it as biased because it
               | doesn't align with their perception of what the
               | proportions _should_ be. Nobody will _ever_ agree on what
               | "neutral" means, which means most likely it'll mean
               | "biased in favor of who currently has political power".
        
               | adamiscool8 wrote:
               | Very simple. No primary moderation action should be made
               | based on human input. Automated moderation should look
               | for identifiable harms (i.e. illicit content, directed
               | threats, terrorism), and absolutely nothing should be
               | removed or blocked based on vague and nebulously defined
               | concerns over "misinformation". Voila -- political
               | neutrality in moderation.
        
               | throwawaygh wrote:
               | _> No primary moderation action should be made based on
               | human input_
               | 
               | 1. That means no HN.
               | 
               | 2. I normally don't have to remind people of this at
               | places like HN, but... algorithms are written by...
               | humans! Supervised algos use data labeled by... humans!
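                | A toy sketch of that point (entirely made-up data and a
                | deliberately naive "model"): an automated moderator
                | trained on human-provided labels simply replays the
                | labeler's choices.

```python
# Hypothetical training data: the human labeler flagged every post that
# mentions "protest", regardless of the post's actual content.
from collections import Counter

training = [
    ("join the protest downtown", 1),
    ("protest planned for saturday", 1),
    ("great recipe for banana bread", 0),
    ("my cat learned a new trick", 0),
]

# "Train": count how often each word appears in flagged posts.
flagged_words = Counter()
for text, label in training:
    if label == 1:
        flagged_words.update(text.split())

def automated_moderator(post):
    # Flag any post containing a word seen in the flagged training data.
    return any(word in flagged_words for word in post.split())

# A perfectly innocuous news post gets flagged -- the "automation"
# encodes the human labeler's rule, it doesn't remove it.
print(automated_moderator("peaceful protest coverage on the news"))  # -> True
print(automated_moderator("banana bread recipe thread"))             # -> False
```

                | The code itself is neutral; the politics came in with
                | the labels.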
               | 
               |  _> Automated moderation should look for identifiable
               | harms (i.e. illicit content, directed threats,
               | terrorism)_
               | 
               | Why do you list terrorism separately from directed
               | threats?
               | 
               | What is the line/difference between "terrorism" and an
               | "undirected threat"?
               | 
               | Are militia groups that don't make directed threats
               | terrorists? Are radical religious groups that don't make
               | directed threats terrorists? What if they are run by
               | actual terrorists but none of the speech amounts to a
               | directed threat?
               | 
               | Speaking of which, what is a terrorist organization? Is
               | the KKK? What about small white nationalist or black
               | power militia groups? What about QAnon? What about
               | antifa? What about BLM? What about Westboro Baptist? What
               | about the Black Panthers?
               | 
               | There are people -- elected officials -- who think each
               | of those is a terror organization.
               | 
               | So, defining terrorist organization is absolutely a
                | political fight. Maybe we avoid that and just talk about
                | directed threats? Ok. Does that mean that Al Qaeda is
                | allowed to operate on FB as long as they don't make
                | directed threats? In fact, that FB is prohibited from not
                | allowing Al Qaeda on as long as they don't make directed
                | threats? That doesn't seem like a solution anyone is
                | going to get behind.
               | 
               | We haven't even gotten past the "obviously terrorism=bad"
               | and we already have to declare whether BLM, QAnon,
               | Westboro, or militia groups are "terrorists". Which some
               | senators believe is the case and is a 100% political
               | question.
               | 
               |  _> illicit content_
               | 
               | Is Ginsberg's _Howl_ illicit? Is a picture of two women
               | kissing illicit? What about non-sexualized nude breasts?
               | What about nude male bodies? What about an erect penis
               | but in a non-erotic context? Will the dominant answers to
               | these questions be the same in 50 years?
               | 
               | Lots of people would say a site that allows pictures of
                | heterosexual kissing but not pictures of homosexual
               | kissing is obviously taking a political position, but
               | that was outside the realm of "political opinion" when I
               | entered adulthood! Any public homosexual display of
               | affection was _obviously_ illicit.
               | 
               |  _> absolutely nothing should be removed or blocked based
               | on vague and nebulously defined concerns over
               | "misinformation"._
               | 
               | What does vague mean? What does nebulously defined mean?
               | What is the difference between misinformation and libel?
               | What is the difference between misinformation and
                | dangerous information? Is it impermissible to remove a
               | video that's targeted at kids and encourages huffing glue
               | as a fun and harm-free activity?
               | 
               | Anyone who has moderated a forum knows that such an
               | algorithm is going to have all sorts of holes and
                | perceived biases. I've _never_ written an automod that
                | some user doesn't get pissed off about.
               | 
               | More generally: that's just straight-up moderation, it
               | has nothing to do with tweaks to recommendation algos.
               | 
               | What if Twitter realizes that people leave the site if
               | they see stuff about abortion but stay if they see stuff
               | about LGBT rights? Again, viewpoint-neutral, Americans
               | just one day start yawning about abortion and really
               | polarize on LGBT stuff. Can they prioritize posts about
               | LGBT rights over posts about abortion as long as the
               | content served up on the preferred topic is viewpoint-
               | neutral and the only algorithmic goal is more lingering
               | eyeballs?
               | 
               | If no to that, how about sports news vs. SCOTUS decision
               | news?
               | 
               | If yes to that, what about COVID case counts vs. Jobs
               | Report numbers?
               | 
               | Even more generally: anyone who's stayed up to date on
               | robust machine learning knows that defining good notions
               | of robustness -- and political neutrality is a type of
               | robustness -- is very much an open problem. So even if we
               | had a precise definition of political neutrality, which I
               | don't think we do, "simply create an algorithm that has
               | that property" is very much an open algorithmic problem.
               | 
               | In fact, there are even some impossibility theorems in
               | this space. So even if we can define neutrality in a
               | perfectly neutral way -- which we can't -- this might be
               | like passing a constitutional amendment that demands a
               | voting system has all of: Non-dictatorship, unrestricted
               | domain, monotonicity, IIA, and non-imposition. You can
               | legislatively demand "the perfect voting system", but the
               | universe is not obliged to ensure the existence of such a
               | thing. Same for some types of robust ML, and no one knows
               | which side of an impossibility theorem some precise-
               | enough-to-code notion of political neutrality might fall
               | on.
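                | A concrete toy instance of this kind of impossibility
                | (hypothetical ballots): a Condorcet cycle, where bare
                | majority rule already fails to pick any winner -- one of
                | the classic obstructions behind results like Arrow's
                | theorem.

```python
# Three voters, three candidates. With these rankings, pairwise majority
# preference is circular: A beats B, B beats C, C beats A.
from itertools import permutations

voters = [
    ["A", "B", "C"],  # voter 1 prefers A > B > C
    ["B", "C", "A"],  # voter 2 prefers B > C > A
    ["C", "A", "B"],  # voter 3 prefers C > A > B
]

def majority_prefers(x, y):
    # x beats y if a majority of voters rank x above y.
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

for x, y in permutations("ABC", 2):
    if majority_prefers(x, y):
        print(f"majority prefers {x} over {y}")
# No candidate beats all others, so "just follow the majority" cannot
# select a winner from these ballots at all.
```

                | Legislating "the system must be fair in all these ways
                | at once" doesn't make such configurations go away.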
               | 
               | Which also brings up the REAL question: are tweaks to
               | recommendation algorithms allowed? Obviously we can't ask
               | FB/Twitter to freeze their recommendation algos -- it's
               | their core product. So. If they notice an "obvious bias"
               | and tweak the algorithm to correct for it, who decides
               | whether that was a biased human intervention or a totally
               | appropriate bug fix? Oh, right, a politically appointed
               | FTC.
               | 
                | I think that "politically neutral" is impossible to
                | formalize in code because it is a fundamental
                | contradiction in terms. But even if it can be, I suspect
                | that any reasonable list of formal specifications might
               | be either mathematically impossible to train a classifier
               | to respect or else at least AGI-complete to actually
               | implement. But if you disagree, I'm happy to clone the
               | Github repo and mess around with your proposal.
        
               | adamiscool8 wrote:
               | >1. That means no HN.
               | 
               | No, it means less 230 protection for HN. Stop conflating
               | this with destruction of the platform, it's becoming like
               | "net neutrality". Remember when tweaking that killed the
               | internet?
               | 
               | >What is the line/difference between "terrorism" and an
               | "undirected threat"? Speaking of which, what is a
               | terrorist organization?
               | 
              | The government has a clear process to designate foreign
               | and domestic terrorist organizations. [0] Let the actual
               | politicians engage in that political fight. Social media
               | companies can use the result.
               | 
               | >What is the difference between misinformation and libel?
               | 
               | Actual malice? If the standard works for newspapers, why
               | can't it work for social media companies?
               | 
               | >More generally: that's just straight-up moderation, it
               | has nothing to do with tweaks to recommendation algos.
               | [...] If they notice an "obvious bias" and tweak the
               | algorithm to correct for it, who decides whether that was
               | a biased human intervention or a totally appropriate bug
               | fix?
               | 
               | None of this relates. Content should not be removed or
               | suppressed based on any political preference or
               | designation, and that includes a fig leaf of facial
               | neutrality. Whether it's recommended to some and not
               | others != suppression, and it's trivial to show that your
               | systems are based on user action not partisan interest.
               | 
               | These aren't sticky questions at all, they're just ways
               | to navel gaze and avoid the obvious solutions that are
               | inconvenient to certain actors.
               | 
               | [0] https://www.state.gov/terrorist-designations-and-
               | state-spons...
        
               | throwawaygh wrote:
               | _> >> No primary moderation action should be made based
               | on human input_
               | 
               |  _> > 1. That means no HN._
               | 
               |  _> No, it means less 230 protection for HN._
               | 
               | I'd be fascinated to hear what dang thinks about HN's
                | future existence if a hypothetical law requiring that "No
                | primary moderation action should be made based on human
                | input" applied to HN.
               | 
               | It seems impossible to (a) run a healthy forum or (b)
               | avoid lawsuits or even jail. E.g., can you link me to a
               | github repo that automatically catches 100% of libel? Or
               | even 100% of child porn (or I guess actual porn as a
               | proxy for that problem)? Removing libel and other illegal
               | content without "primary moderation action"s that are
               | based on "human input" is not currently possible.
               | 
               | (BTW: that's NOT what Hawley's bill does! It allows human
               | moderation, you just have to keep the political
               | appointees happy.)
               | 
               |  _> > What is the difference between misinformation and
               | libel?_
               | 
               |  _> Actual malice? If the standard works for newspapers,
                | why can't it work for social media companies?_
               | 
                | Because newspapers have a few journalists. Not hundreds of
               | millions of users.
               | 
                | This has to be done algorithmically or it's financially
               | reckless to allow free-form comments at all. If it's so
               | easy to algorithmically identify libel with 100.00%
               | accuracy, _go do it!_
               | 
                | Given that there are regularly court cases that hinge on
                | whether some statement rose to the level of libel --
               | cases that even get appealed and where highly trained
               | judges disagree -- I'm willing to bet the problem is AGI-
               | complete. And then some.
               | 
               |  _> The government has a clear processes to designate
               | foreign and domestic terrorist organizations. [0] Let the
               | actual politicians engage in that political fight. Social
               | media companies can use the result._
               | 
               |  _> Content should not be removed or suppressed based on
               | any political preference or designation_
               | 
               | So politicians get to define what terrorism means and
               | companies should suck it up and implement whatever the
               | politicians in power decide.
               | 
               | So, if some powerful GOP senator designates BLM a
               | terrorist organization, and social media companies then
               | remove all BLM content, is that not "removing or
               | suppressing based on political preference"? What about
               | pro-2A militias? What about QAnon?
               | 
               | By the way, what about "illicit content"? If some hard
               | core right-winger takes over Twitter tomorrow, can they
               | ban pictures of homosexuals kissing as "illicit content"?
               | 
               | Hawley -- whose bill doesn't even do what you suggest --
               | is just shifting power over content moderation decisions
               | from companies to political appointees. That's all. It's
               | not neutral, it is based on human input, and it's
               | primarily just a shift in decision making power.
               | 
               | Dressing this up as "neutral" is obvious bullshit. Hawley
               | wants Twitter to understand that his political party is
               | their ultimate master when they choose which speech to
               | amplify on their platform. This is his explicit and
               | openly stated goal. It is about power, not neutrality.
               | 
               | But anyways, this argument is easy to resolve in your
               | favor. You propose not Hawley's bill, but a hypothetical
               | different one where human input can't be a primary
               | consideration. So, you're claiming that a formal
                | specification of the political neutrality of an NLP
                | classifier exists. I've built a lot of classifiers, and I
               | don't believe you. Show me the code.
        
               | whimsicalism wrote:
               | I'm not a republican, nor a Trump supporter, but I
               | disagree. I think the courts would be able to create a
               | body of case law over whether a removal was due to a post
               | being "violent, obscene or harassing" or for some other
               | reason.
               | 
               | We can't be terrified of regulating platforms that have
               | massive amounts of control over what most people see or
               | hear about.
        
               | throwawaygh wrote:
               | _> I think the courts would be able to create a body of
               | case law over whether a removal was due to a post being
               | "violent, obscene or harassing" or for some other
               | reason._
               | 
               | 1. Maybe, but that's not what Hawley's bill does.
               | 
               | 2. Leaving inherently political questions up to the
               | courts invites politicizing the courts -- something
               | that's already happened and that, if it continues apace,
               | threatens to delegitimize and gridlock the entire federal
               | legal system.
               | 
               | 3. Given that you're not a Trump supporter or Republican,
               | perhaps you should review the last 20 years of federal
               | judicial appointments before placing so much faith in the
               | courts...
               | 
                |  _> We can't be terrified of regulating platforms that
               | have massive amounts of control over what most people see
               | or hear about._
               | 
               | Agreed. I think there are lots of reasonable approaches
               | toward regulation and/or self-regulation. The ability of
               | customers to choose from a marketplace of recommendation
               | algos (or implement their own) is the obvious market-
               | based solution.
               | 
               | However, I do not think a politically appointed committee
               | whose job is to define political neutrality is a
               | reasonable approach. And I think that leaving inherently
               | political moderation choices up to the courts would be
               | even worse -- at least FTC chairs aren't lifetime
               | appointments, and at least politicizing the FTC won't
                | erode public trust in the one portion of the
               | federal government that is not yet perceived as nakedly
               | partisan.
        
               | untog wrote:
               | > Showing "their algorithms and content-removal practices
               | are politically neutral" is not an insurmountable bar.
               | 
               | It is when political appointees are the ones who judge if
               | you've cleared the bar.
        
               | SpicyLemonZest wrote:
               | It seems pretty insurmountable to me. Can you go into
               | more detail about how they'd do it? I've seen a _lot_ of
               | fights where one side says  "putting this post up is
               | biased against me" and another says "taking this post
               | down is biased against me", and I'm not sure how Facebook
               | could resolve those disputes with confidence the FCC
               | won't say they did it wrong.
        
               | jcranmer wrote:
               | I think that's the point of this bill. They force
               | Facebook et al. to get certified bias-free in a manner
               | that basically makes it impossible to get that
               | certification. So then they get the headlines saying "FCC
               | finds Facebook is biased!!!111"
               | 
               | The bill would be more palatable to me if they simply
               | dropped the immunity, without any certification process.
               | But then demonstrating bias would require winning civil
               | lawsuits, which requires demonstrating damage suffered by
               | the bias and also convincing 12 members of the jury in a
               | unanimous vote... which is unlikely to happen, I think.
               | 
               | (Addendum: actually, the real point of the bill may be to
               | just say "Facebook/Twitter/Google is biased, and I'm
               | doing something about it!" and ignore any actual chance
                | of it becoming law or being reasonable. It's not like many
               | people actually read details of bills to understand what
               | it does and doesn't say.)
        
               | throwawaygh wrote:
               | _> The bill would be more palatable to me if they simply
               | dropped the immunity, without any certification process.
               | But then demonstrating bias would require winning civil
               | lawsuits, which requires demonstrating damage suffered by
               | the bias and also convincing 12 members of the jury in a
               | unanimous vote... which is unlikely to happen, I think._
               | 
               | No. Killing 230 entirely would allow Twitter and Facebook
               | to be as politically biased as they want.
               | 
               | However, if one of their users libels you, then you could
               | sue Facebook in addition to that user.
               | 
               | And if any Facebook user posts child porn, even for a
               | short period of time, relevant parties at Facebook could
               | face criminal charges for distribution.
               | 
               | You couldn't sue Facebook for being politically biased,
               | but Facebook would be responsible for actual crimes that
               | its users commit.
               | 
               | Hawley's bill says "you won't be responsible for the
               | illegal stuff your users do (i.e., you get 230
               | protections), but only as long as you keep my political
               | appointees happy."
        
               | bosswipe wrote:
                | The hypocrisy of the same FCC that said network
                | neutrality was too much regulation now demanding
                | "political neutrality" is outrageous.
        
               | jcranmer wrote:
               | Note that the test actually proposed (not the press
               | release blurb) says:
               | 
               | > The moderation practices of a provider of interactive
               | computer services are politically biased if the provider
               | moderates information provided by information content
               | providers in a manner that [...] disproportionately
               | restricts or promotes access to, or the availability of,
               | information from a political party, political candidate,
               | or political viewpoint
               | 
               | That means that any service that chooses to do something
               | like suppress known conspiracy theories is going to fall
               | afoul of the proposed changes.
        
               | JoshTriplett wrote:
               | For that matter, a policy of restricting hate speech will
               | currently restrict one party more than another. A policy
               | of prohibiting disinformation that could lead to voter
               | suppression will currently restrict one party more than
               | another. A policy of prohibiting misinformation about the
               | ongoing pandemic will currently restrict one party more
               | than another.
        
               | klyrs wrote:
               | What does "politically neutral" mean, anyway? What
               | happens if I establish a political party whose solitary
               | goal is to torture babies, kittens and puppies to death?
               | Is that suddenly a "political opinion" which must be
               | protected?
        
           | NationalPark wrote:
           | Would forums like HN survive? I can think of a few incidents
           | where malicious information about people made the front page
           | then turned out to be false. Is HN prepared to defend against
           | lawsuits about that? Is HN prepared to _lose_ lawsuits about
           | that?
           | 
           | It sounds like you're basically suggesting that making the
            | internet useless is a good thing, because maybe something
            | cool will come out of the ashes and there's a
           | chance it could be even better after a bunch of extremely
           | hard and broad problems are solved. I don't like those odds.
        
             | ColanR wrote:
             | I think the odds are pretty good. There's a lot of smart &
             | motivated people who really like the internet, who would
             | probably go a long way to replace it.
        
               | gowld wrote:
               | Why aren't those people interested in working on that
               | today?
        
             | wahern wrote:
             | HN already has moderators who do a very good job of
             | filtering posts in a timely manner. HN's exposure to
             | liability for libel would be rather minimal. People and
             | companies are exposed to legal risk all the time,
             | everywhere they go, and somehow they don't curl up into a
             | ball and die of starvation in their basements.
             | 
             | Big, diverse sites like Facebook and Twitter need Section
             | 230 because they can't effectively use human moderators to
             | sift through the content. They have to rely on machine
             | learning, which has false negative rates magnitudes higher
             | than a human. Yet at the same time, they're constantly
             | trying to shape and edit and, basically, narrate the user
             | content, as part of their monetization strategy. That's
             | their dilemma.
             | 
             | Moreover, the distinction between publisher and distributor
             | will still exist. The alternative to strong moderation is
              |  _no_ moderation--you're just a distributor, like a Usenet
             | node or the telephone company. But that's more difficult to
             | monetize. (Of course, the legal landscape would be more
             | nuanced than that--traditional libel law wouldn't demand a
             | simple dichotomy between moderation and no moderation.)
             | 
             | Without Section 230 companies would have a more difficult
             | time trading profit potential for legal liability, but it
             | would still be done. Newspapers, write-in columns, bulletin
             | boards, and other forums were around for centuries, all the
             | same exposed to libel law. Even the internet was around for
             | decades prior to Section 230.
        
               | eli wrote:
                |  _> HN's exposure to liability for libel would be rather
               | minimal._
               | 
               | I don't understand how you reached that conclusion. They
               | could be sued over any comment that appears for any
               | amount of time. There are definitely comments that have
               | appeared on HN that are libelous.
               | 
               | Moreover even if they pre-screened every comment before
               | it was posted with a team of lawyers who never make any
               | mistakes, they'd STILL have to worry about defending
               | against frivolous lawsuits. Would it even be possible to
               | buy liability insurance for a forum in this world? It
               | would cost a fortune.
               | 
               | And this is for a site that has the resources to have
               | full time moderators. Smaller sites are even worse off.
               | 
               | I don't see how anyone could practically operate any
               | forum or discussion board or comments section that
               | allowed people to post messages in real time.
               | 
               |  _> Even the internet was around for decades prior to
               | Section 230._
               | 
               | Sure and sometimes your ISP got successfully sued because
               | someone didn't like a comment posted on a message board
               | they hosted.
        
               | nullc wrote:
               | > have appeared on HN that are libelous
               | 
               | Not just have appeared, but which are still on display.
               | 
               | Sometimes the only difference between a libellous
               | statement and a critical statement protecting the public
               | is whether the statement is true.
               | 
               | This is not something a moderator is necessarily in a
               | position to be able to judge, but it's critically
               | important to a community that its members can communicate
               | true negative facts about other members.
        
               | eli wrote:
               | Worse yet: sometimes the difference is just who happens
               | to be on your jury that day!
        
               | [deleted]
        
               | NationalPark wrote:
               | We won't know the actual exposure until it ends up in the
               | courts. Remember, we're talking about removing the good
               | faith liability protections. Maybe a few thousand views
               | of a libelous comment is enough, even if it was
               | eventually removed. Either way, someone has to hire
               | lawyers to go defend this, so it's not free.
               | 
               | We should also expect new bad actors to take advantage of
               | this. As long as they can spam libel faster than
               | moderators can delete it, they can force the site to shut
               | down or risk the lawsuits. While I'm sure YC has its
               | share of enemies deserved or not, even perfectly innocent
               | people are attacked online every day for no reason at
               | all.
        
               | gowld wrote:
               | There's no reason to assume that an operator would be
               | liable for libel spam, since they lack the requisite mens
               | rea.
               | Libel would only be in play if they intentionally refused
               | to take down content or tried to extort people with it.
        
       | jb775 wrote:
        | I feel like this guy still doesn't get the actual reason
        | politicians are talking about section 230. They aren't directly
        | rebutting the language of the law; they're threatening to
        | amend/repeal section 230 because they know the social media
        | giants depend on it to exist in their current form.
        
       | bosswipe wrote:
       | They're using revocation of 230 as extortion to try to abridge
       | the freedom of the press.
        
         | splintercell wrote:
         | Do you think conservatives are being censored by the big tech
         | platforms? Would your answer pass the veil of ignorance?
        
           | bosswipe wrote:
           | Everybody is being censored on all these platforms, it's
           | called moderation. Most people appreciate it, including you
           | or you would be commenting on 4chan instead of the heavily
           | moderated HN.
        
             | pyronik19 wrote:
             | That's really a manipulation of the term "moderation". They
             | aren't removing swear words or nipples... They are deciding
             | what the "correct" information is and then deciding what we
             | all get to see. If they go out of business because they
             | abused the immunity grant they were given, so be it. I won't
             | miss them. All of social media can burn for all I care.
        
               | [deleted]
        
         | [deleted]
        
       | ReptileMan wrote:
        | Can we leave section 230 as is for non-profits and small
        | businesses, and treat the FAANGs differently, with different
        | rules? Scale matters.
        
         | eli wrote:
         | How about we leave Section 230 completely alone and if we have
         | problems with FAANG we pass new laws to address those problems
         | directly.
        
       | dilly_li wrote:
       | Why are so many replies under this post shown in faded gray?
        
         | eranimo wrote:
         | Because there are a lot of conservative Trump-supporting HN
         | posters in these comments downvoting everybody who doesn't want
         | the Internet destroyed.
        
         | judge2020 wrote:
         | This is from other users (with a certain amount of karma)
         | voting down these comments. See "Why don't I see down arrows?"
         | on
         | https://news.ycombinator.com/newsfaq.html#:~:text=Why%20don'...
        
         | jb775 wrote:
         | Those are comments from non-democrats
        
         | kibwen wrote:
         | Comments with net negative votes get faded out. Remember that
         | HN is just as much of a political battleground as any other
         | forum (don't fall into the common trap of thinking that tech
         | folks are "above politics"), and that those who benefit from
         | the exceedingly effective propaganda mentioned in the OP aren't
         | going to quietly yield control of the narrative.
        
       | chemeng wrote:
        | Though the law makes no distinction between publisher and
        | platform, I
       | think people's intuitive sense that something is "wrong" is valid
       | even though they may not be able to express it clearly. Publisher
       | vs platform is just the easiest/closest way to express it for
       | them.
       | 
       | For me, the problematic/key question and example is Facebook's
       | (News) Feed. When content is collected, curated (algorithmically)
       | with specific intent, published/presented in a particular order
       | and layout to communicate and derive revenue, at what point is it
       | a creative work with authorship?
       | 
       | If I prompt 100 people to comment on a topic by placing
       | information in front of them, and then take portions of those
       | comments, reorder and present them to you shaped by an
       | overarching narrative of "what you may find interesting related
       | to this topic" and place it on the front page of my website, in
       | what way is this different than a newspaper?
       | 
       | A newspaper can be sued for defamation, however, Section 230
       | (c)(1) shields Facebook from any liability in the case where this
       | selective curation and display of information contains known
       | falsehoods or defamations. If any reasonable curator of facts
       | (reporter) or newspaper editorial board would identify and reject
       | these falsehoods or otherwise be sued, does Facebook get a pass
       | because it was a computer curating?
       | 
       | *Edit: The reason I think this may be problematic is that it
       | removes any check on purposeful misinformation that has
        | traditionally existed for our previous methods of speech
       | amplification (newspapers, tv, radio). Facebook has no incentive
       | not to publish the most engaging information even if it is false,
       | as it cannot be sued. If it could be, you would see it actively
       | prevent misinformation. The standard would be what the courts
       | would find it responsible for under existing libel laws, which is
       | a difficult bar to clear, particularly for public persons, but is
       | the only restraint on yellow journalism we've traditionally had.
        
       | partiallypro wrote:
       | I have not yet read this piece, but Ken has turned into a
       | completely different person since ~2016 and I'm not sure he's
       | quite an unbiased legal source at this point.
       | 
       | Edit: I read the piece and generally agree, but Ken has become
       | increasingly partisan over the past few years to the point where
       | I have had to stop following him.
        
       | zaroth wrote:
       | The problem is not that Section 230 clearly requires platforms to
       | be neutral. The problem is that it doesn't.
       | 
       | Section 230 isn't in the Bill of Rights. It's a legislative gift
       | that was given to internet companies to help them grow by
       | granting them a special legal shield for all the highly
       | problematic content that they host and monetize.
       | 
       | If they want to editorialize on the back of that content, then I
       | don't see why they should have such special status.
       | 
       | We do not need to eliminate Section 230. But the definition of
       | 'Good Samaritan' blocking and 'good faith efforts' should
       | explicitly not include the editorial decisions of a publisher.
        
         | klyrs wrote:
         | Would you also advocate that we bring back the FCC's Fairness
         | Doctrine, so that news sources would be required to present
         | contrasting viewpoints? Arguments against that were based on
         | the first amendment; that a free press should have complete
         | editorial control over the content that they distribute.
        
           | stale2002 wrote:
           | > that news sources would be required to present contrasting
           | viewpoints
           | 
           | Personally, I don't have a problem with publishers engaging
           | in publishing. So I wouldn't support that.
           | 
           | But there is a large, philosophical difference, IMO, between
           | a publisher and a platform.
           | 
           | And our laws need to be changed to further clarify this.
        
             | klyrs wrote:
             | I'd love to see the Citizens United ruling overturned. It
             | would be _great_ for our society to not treat corporations
             | as people. But as it stands, corporations are found to have
             | free speech rights. Social media companies are both
             | publishers _and_ platforms. Many news sites now have
              | comments sections, too -- they're both publishers _and_
             | platforms. What is the large, philosophical difference that
             | separates them?
        
               | stale2002 wrote:
               | > But as it stands, corporations are found to have free
               | speech rights
               | 
                | Sure. They have those rights, just like a newspaper has
               | those rights.
               | 
               | But these companies and newspapers are also liable for
               | their speech.
               | 
               | And the companies that are not liable for the speech, are
               | companies like phone companies.
               | 
                | Phone companies are required to follow certain
                | restrictions, and the courts have found those
                | restrictions to be perfectly legal.
               | 
               | > What is the large, philosophical difference that
               | separates them?
               | 
                | Take the phone network as an example. The courts already
                | treat the phone network differently than they do a
                | newspaper.
        
           | zaroth wrote:
           | I think that libel laws could be eased just a bit. As it
           | stands, to sue a newspaper for libel you need to prove that
           | they knew what they were reporting was false, or that they
           | showed a "reckless disregard" for the truth. It's a very high
           | standard, but occasionally you will see cases settle (e.g.
           | Covington High student).
           | 
           | I do not think the government should be intervening on
            | content decisions. I think:
           | 
           | (1) publishers & platforms should be legally responsible for
           | content they host if they are going to editorialize on it
           | 
           | (2) from a 'net neutrality' standpoint, that utilities and
           | platforms should be mostly blind and entirely blameless for
           | the packets they carry, and
           | 
           | (3) we should allow some level of packet and/or content
           | classification in the middle of #1 and #2 without making the
           | utility/platform fully liable for the packets/content they
           | are carrying, if that classification is based on fairly
           | protecting the network/platform from "attack".
           | 
           | The only reason I can see for a fairness doctrine would be
           | based on a theory of anti-trust. To the extent that Twitter,
           | Facebook, Apple, Google, etc. are monopolies, their ability
           | to censor non-obscene viewpoints on their platforms _should_
            | be limited... and that's a spectrum, not a binary switch.
        
         | tzs wrote:
         | > If they want to editorialize on the back of that content,
         | then I don't see why they should have such special status.
         | 
         | They don't have special status for their editorializing. If
         | Facebook or Twitter or any other interactive computer service
         | produces editorial content that, say, libels someone, they
         | could be sued over that and would not have a section 230
         | defense.
        
           | Karunamon wrote:
           | Would you consider "this is dangerous" or "this is
           | misinformation" to be editorializing? What about "adding
            | context"[1], as Jack Dorsey has promised to do at Twitter?
           | 
           | Given that all are judgement calls, I'd say it's impossible
           | for it to not be. There's a difference between merely
           | removing something and giving your opinion on the contents.
           | 
           | [1]: https://twitter.com/jack/status/1317081843443912706
        
             | tzs wrote:
             | If they give their opinion on the content, they would be
             | liable for that, because (1) it is not information provided
             | by another information content provider, and so is out of
             | scope for section 230, and (2) even if it were in scope
             | they would be liable as the author--230 essentially says
             | "go after the author, not the host" and in this case they
             | would be sued as the author, not as the host.
        
         | madeofpalk wrote:
         | > should explicitly not include the editorial decisions of a
         | publisher
         | 
          | What happens when I want to run CatTalk.com, and it's against
          | the rules to talk about Dogs, and someone comes in talking
          | about Dogs? Shouldn't you have the ability to run and
          | moderate and host the content on your site as you decide?
        
           | zaroth wrote:
            | Of _course_ you have the ability and the right to host and
            | moderate your own content! You just do it without Section 230
            | protection.
           | 
           | That means you have a responsibility for that content; the
           | same legal liability that newspapers and magazines have when
           | publishing articles and editorials.
        
             | nullc wrote:
             | In practice this means that it would be extremely
             | irrational to run a not-for-profit or nearly not-for-profit
              | venue that accepted posts from anyone but close friends.
             | 
             | Is that the world you want?
        
             | tzs wrote:
             | Newspapers and magazines run each article past editorial
             | review before publishing it, and usually fact check
             | anything that looks like it might be libelous or
             | controversial. They don't just accept what the writer
             | submitted and go to press.
             | 
             | It's hard to see how you could run a site that provided
             | near real time broadcast communication to the general
             | public if you had to do that level of vetting of each post
             | to make sure nothing slipped through that might get you
             | sued.
        
             | eli wrote:
             | Newspapers and magazines have a staff of editors (and
             | sometimes lawyers!) review each issue before it's
             | published. That's the standard you want to apply to anyone
             | running any forum anywhere on the internet? That every
             | comment has to be carefully prescreened for legal
             | compliance? And even then you have to operate under a
             | substantially increased risk of lawsuit.
             | 
             | Maybe you prescreen every comment but you make a bad call
             | and something defamatory gets through. Or maybe someone
              | sues you in bad faith and you have to pay to defend it
             | or settle.
             | 
             | This is a very bad future for the internet. It's an
             | internet where the powerful, who have the full support of
             | publishers, will get to have their voices heard loudly. And
             | the less powerful, who do not have teams of lawyers willing
             | to fight on their behalf, will have a very hard time
             | getting their voices heard.
        
         | dragonwriter wrote:
         | > It's a legislative gift that was given to internet companies
         | to help them grow by granting them a special legal shield for
         | all the highly problematic content that they host and monetize.
         | 
          | No, it was given to them to encourage them to engage in "good-
          | faith" censorship of things that the government doesn't like,
          | by ensuring that such censorship would not move them from the
          | relatively weak distributor-liability regime, where they were
          | only liable based on a responsibility to stop distributing
          | content once they had notice of its unlawfulness, to the any-
          | oversight-is-on-you publisher-liability regime.
         | 
         | It wasn't to "help them grow", which it was assumed they would
         | do anyway, it was to encourage them to try to restrict "bad"
         | content as they grew (it is the one surviving part of the
         | internet censorship Communications Decency Act, and like the
         | rest of that act was, in fact, directed at promoting internet
         | censorship.)
        
         | roywiggins wrote:
         | If you tried to enforce a "neutrality" requirement, you'd slam
         | face-first into the First Amendment. It would be government
         | mandated viewpoint discrimination.
        
           | LegitShady wrote:
           | which is why the legal exemption should be tossed entirely,
            | and new publishers like Twitter and Facebook, who have
            | demonstrated control of what gets seen on their platforms,
            | should be liable for the content on their platforms.
        
             | madeofpalk wrote:
              | You're aware that this will result in _more_ content being
              | moderated off the sites, right?
        
               | Karunamon wrote:
                | Full devil's advocate mode here... I think it would
                | likely result in a severe scaling back of the operations
                | of Facebook and Twitter (and Reddit and [...]). There
                | are simply not enough people to hire and coordinate as
                | moderators, at the scale they're running at right now,
                | to operate in a manner that won't get them sued on the
                | regular.
               | 
               | I also think this would result in a mad dash for
               | anonymous, distributed, decentralized communication
               | methods, i.e. things that can't be the target of a
               | subpoena.
               | 
                | Given the toxic influence of social media on society,
                | and the severe centralization we're operating under...
                | both of those things look _very_ tempting.
        
               | madeofpalk wrote:
               | I think it would be a net negative for the internet,
               | attacking the very thing that let it get to this place.
        
             | roywiggins wrote:
             | That would make Hacker News liable for any defamation
             | posted in the comments section. Well done, you just killed
             | HN.
        
               | LegitShady wrote:
                | Shrug. If that's the cost of stopping technocratic
                | control of the flow of information in society and open
                | censorship, it's cheap.
        
               | roywiggins wrote:
               | Throwing out all online discussion fora is rather
               | throwing the baby out with the bathwater. The replacement
               | would be centralized media companies that have total and
               | absolute control over information flow.
               | 
               | Goodbye Mastodon (every Mastodon instance is liable for
               | all toots). Goodbye Internet Archive (can't host content
               | that might be defamatory, they'll be liable). Goodbye
                | GitHub pull requests (GitHub would be liable for any
               | defamation contained in them). And so on.
        
               | LegitShady wrote:
               | I would happily see all of those things go away to make
               | sure Silicon Valley can't control who sees what.
               | 
               | It's not that I don't understand the value of those
                | things, it's that I see the value of not having
               | information in society be controlled by a few companies
               | as having far larger value.
        
               | roywiggins wrote:
               | If you choke off user-generated content, the remaining
               | content will be directly produced by Disney and other
               | media conglomerates. That's not better, that's worse.
               | 
               | The only people able to blog, for instance, would be
               | people who have the technical chops to completely self-
               | host. Everyone else would be reduced to handing out
               | flyers on the corner like the bad old days.
               | 
               | The behemoth old-media companies would be fine, because
               | they can afford lawyers to go over everything they
               | publish.
               | 
               | Not only would it not be worth it, repealing section 230
                | would _consolidate_ behemoth media companies' control,
               | not break it. It would do the absolute opposite of what
               | you want.
        
           | throwawaygh wrote:
           | Not if you have a 6-3 majority on the court... 1A is whatever
           | SCOTUS says it is.
        
             | ardy42 wrote:
             | > Not if you have a 6-3 majority on the court... 1A is
             | whatever SCOTUS says it is.
             | 
             | That's true. Like much of the Constitution, the text is
             | pithy and doesn't specify its definitions. The current
             | legal interpretation of the First Amendment and free speech
             | rest on a particular philosophical traditions that the
             | court adopted in the last century, and especially after the
             | 60s.
             | 
             | A lot of people have absolute faith in the functioning of a
             | "marketplace of ideas," but it's not at all clear to me
              | that such a market can work well when it's flooded by
             | disinformation, just like a market of goods can't work well
             | when it's flooded by counterfeits.
             | 
              | https://www.nytimes.com/2020/10/13/magazine/free-speech.html
        
         | nahtnam wrote:
          | I agree with this. From a libertarian standpoint, the gov
          | stepped in to help (rightfully, sometimes we need it) but the
          | law was originally written in a way that is now being abused.
          | I don't see a problem with changing it so that if a company
          | gets to decide what it wants to show and not show, it should
          | be liable just like news sites.
         | 
         | I'm open to hearing the opposite side if anyone has any
         | arguments on why it shouldn't change
        
           | dlp211 wrote:
           | Well they aren't news sites. They are private entities
           | providing general platforms. Speech, per the US legal system,
            | is more than just what one says; it includes the actions one
            | takes. These companies have not only a right, but a duty as
           | publicly traded companies to protect their platforms as they
           | see fit.
           | 
           | I fail to see how moderators on vBulletin boards in 2002 are
            | any different than moderators/admins/algorithms on Twitter,
           | Facebook, YouTube, etc in 2020. The scale is different, sure,
           | but you are not entitled to the amplification of these
            | platforms just because they are bigger, the same way you
           | weren't entitled to the amplification on those old vBulletin
           | board systems.
        
           | triceratops wrote:
           | From a libertarian standpoint forced "neutrality" (scare
           | quotes intended) is an infringement on the property and free
           | speech rights of the platforms. You are not entitled to use
           | my property against my will to endorse viewpoints that I
           | disagree with, and the government should not force me to
           | comply.
        
           | TameAntelope wrote:
           | Can you help me understand why selectively removing content
           | from a privately owned website is considered "abuse"?
        
           | roywiggins wrote:
           | From a libertarian standpoint, the government regulating
           | "neutrality" is a nightmare.
           | 
           | Should Hacker News be required to treat all links the same?
           | If not, exactly how do you think a government "neutrality"
           | mandate would work?
           | 
           | Because such a mandate means that if the HN moderators are a
           | little biased, the whole shebang would become liable for any
           | defamatory comments posted here. That's a government-mandated
           | sword of Damocles hanging over every single moderation
           | decision made here.
        
         | shadowgovt wrote:
         | Neutral is very much in the eye of the beholder. To start
         | considering this scenario, we'd have to ask "Who will be the
         | arbiter of whether a site's content is 'neutral?'"
        
           | kyleblarson wrote:
           | Seems like a pretty easy call if the site in question blocks
            | people's accounts for posting an article from the 4th largest
           | print news publication in the USA.
        
             | eranimo wrote:
             | It's actually the 8th largest, not 4th largest.
        
             | bananabreakfast wrote:
             | It was an article that blatantly violated the posting
             | medium's terms.
             | 
             | Doesn't matter if the government itself posted that
             | article. Doxxing info is immoral and should be blocked.
        
             | mthoms wrote:
             | It's only "a pretty easy call" when you cherry pick
             | extremely easy, outlying examples.
             | 
             | What about more nuanced cases? Who will be arbiter of
             | "neutral" then?
        
           | joshuaheard wrote:
           | Follow the First Amendment.
        
             | kodablah wrote:
             | But the first amendment prevents the government from
             | telling me or my company that I have to follow the first
             | amendment like they do.
        
               | roywiggins wrote:
               | Precisely. Enforcing "neutrality" on private parties
               | would be unconstitutional viewpoint discrimination.
        
             | eli wrote:
             | The First Amendment is in conflict with the government
             | regulating what viewpoints companies allow on their
             | platforms.
        
         | nomdep wrote:
          | They don't even need to be "neutral", just to have written
          | rules that apply to everyone, e.g. no spam.
          | 
          | Those rules could include biases like "no news that favors
          | Trump's reelection", but they should be stated explicitly in
          | their terms of service.
        
         | root_axis wrote:
         | Explicit deliberate bias is protected speech.
        
       | convery wrote:
       | You can't claim you're not a publisher when you start promoting
       | stories that push one narrative while blacklisting stories
       | questioning that narrative. Imagine a news site that accepts
        | submissions from the public but filters which content is
        | visible based on its own views, claiming no responsibility for
        | the BS it allows on the frontpage.
        
         | basch wrote:
          | Yes you can; that's what 230 protects. A private company's
          | ability to censor reach does not turn it into a speaker. You
          | can selectively choose what to allow and what to deny on your
          | platform. Like a spam filter, but for anything.
        
           | LegitShady wrote:
            | right, which is why section 230 should be repealed: they're
            | censoring politically and controlling what people see,
            | shaping public opinion in large ways. They should be liable
            | for the content of their platforms since they demonstrated
            | the ability to control what's on them.
            | 
            | They'd get freedom to control their platform, just without
            | a special legal exemption that gives them immunity from the
            | results of their control of their platform.
        
             | darkwizard42 wrote:
             | The results of the control of their platform are... that
             | you can choose not to use it.
             | 
             | Section 230 says you have a right to speech, not a right to
             | a platform giving you reach.
        
               | LegitShady wrote:
               | I don't use it, I just dont want them to have legal
               | exemptions other publishers don't get while they control
               | what information people are or aren't allowed to see.
        
             | basch wrote:
              | >They'd get freedom to control their platform, just without
              | a special legal exemption that gives them immunity from the
              | results of their control of their platform.
             | 
             | Exactly, that's the point. It's a market economy of
             | platforms.
             | 
             | >they're censoring politically and control what people see,
             | shaping public opinion in large ways.
             | 
             | Companies are allowed to choose what they say and project
             | outward.
             | 
             | The real problem here is a lack of interoperability
             | between platforms: institutional silos. What's needed is
             | standardized data export and import, and cross-platform
             | messaging and subscription.
             | 
             | 230 exists precisely to prevent the world you are
             | advocating. It's not a carelessly written law with an
             | accidental loophole; its intent is to protect private
             | companies' First Amendment rights.
             | 
             | >They should be liable for the content of their platforms
             | since they demonstrated the ability to control whats on
             | them.
             | 
             | Says who? I don't think they have demonstrated at all
             | that they are good at controlling what's on their
             | platforms. They are OK at it, at best.
        
       | h2odragon wrote:
       | One of his links (EFF) mentions "fair and transparent
       | moderation"... I'd be happy to see that, but aside from it
       | being something societies have struggled with in the past, can
       | anyone actually suggest that's what's happening now?
       | 
       | Are twitter and facebook being fair and transparent?
       | 
       | Isn't the very fact that we're having this argument evidence
       | that the problem is so difficult we need some interim solution
       | while the perfect AI algorithm gets worked out? Some solution
       | that can last for a while if the perfect AI moderators never
       | come.
        
       | jtchang wrote:
       | Having read it over and thought about it I don't think I'd want
       | to give up all the freedoms we have on posting whatever the heck
       | we want. If it means Facebook ends up with hate speech and
       | antivax groups so be it. I think the alternative is worse: where
       | if I were going to build some type of new forum I'd have to be
       | liable for everything anyone posts. I want to see new ideas being
       | tried without fear of litigation.
        
         | originalvichy wrote:
         | That scary thought is the reason why I sympathize with big tech
         | even though they don't always do the right thing with bans and
         | restrictions. They are literally on new ground. We haven't had
         | this world of technological discussion available to us ever
         | before. Most things are being done for the first time ever
         | right now.
         | 
         | I remember how tough it was to moderate IRC channels that
         | grew beyond a certain number of users. Imagine having to
         | wrangle HUNDREDS OF MILLIONS of users by trying to outmaneuver
         | all the bad faith, harmful actors.
         | 
         | I'd rather live in a world where people can create websites and
         | moderate them as they wish, since the alternative is probably
         | no website at all since you are bound to run into bad faith
         | actors in life.
         | 
         | As long as we can still freely create websites online, there
         | is little reason for anyone to be against moderation.
        
       | iron0013 wrote:
       | Moderators, stop editorializing via title manipulation! The
       | title of this article is "Section 230 Is The Subject of The
       | Most Effective Legal Propaganda I've Ever Seen", and the
       | article only makes sense when read in light of its actual
       | title as given by the author, not the edited, manipulated
       | title you have given it on HN!
        
         | judge2020 wrote:
         | Not sure if this is satirical, but HN aims to prevent clickbait
         | and other 'spam' - see
         | https://news.ycombinator.com/item?id=6572466
        
         | dang wrote:
         | It was an obviously baity title and that is bog standard HN
         | moderation. More at
         | https://news.ycombinator.com/item?id=24805143
         | 
         | We've since changed the URL (and the title) in keeping with
         | another HN principle, of favoring original sources. The Popehat
         | article is really just a list of links with a bunch of extra
         | Popehatness.
        
       | reggieband wrote:
       | I'm having a moment here where I am considering this from another
       | viewpoint. While I hate the idea of siding with Ted Cruz on
       | literally anything I want to take a moment to really consider
       | something.
       | 
       | Section 230 makes sites like Facebook, Twitter, Reddit, Youtube,
       | Instagram and even Hacker News possible. Revoking Section 230
       | could expose those platforms to the possibility of liability for
       | content posted on them. This might cause a re-shaping of the
       | Internet in general.
       | 
       | Part of me seriously wonders - would that necessarily be a bad
       | thing? I am not convinced, by any stretch of the imagination,
       | that these social media opinion aggregation platforms are
       | universally positive. Everyone keeps acting like the existence of
       | Facebook somehow democratizes content publishing for the masses,
       | even when we are faced with clear evidence that this isn't the
       | case. The centralized nature of Facebook actually allows for
       | larger scale manipulation of the narrative.
       | 
       | And how would this affect Uber, Airbnb, Amazon, Netflix and other
       | sites? I suppose opening them up to liability for negative
       | reviews could be a problem.
       | 
       | I'm thinking on the fly here, but if Facebook just disappeared
       | off of the Internet tomorrow - I'm not really sure I would mourn
       | that. And if new Internet companies were burdened with stricter
       | moderation requirements (or the need to stand behind every piece
       | of content posted onto their site), maybe that would actually be
       | good? Maybe that would drive people to create their own websites
       | once again.
       | 
       | I'm sure I haven't thought deeply enough on this but I definitely
       | feel the tide here is a knee-jerk protection of Section 230. Yet
       | the companies it protects the most are the ones I feel are the
       | worst.
        
         | originalvichy wrote:
         | >Section 230 makes sites like Facebook, Twitter, Reddit,
         | Youtube, Instagram and even Hacker News possible. Revoking
         | Section 230 could expose those platforms to the possibility of
         | liability for content posted on them. This might cause a re-
         | shaping of the Internet in general.
         | 
         | >Part of me seriously wonders - would that necessarily be a bad
         | thing? I am not convinced, by any stretch of the imagination,
         | that these social media opinion aggregation platforms are
         | universally positive.
         | 
         | Here's where you are wrong. It doesn't protect only the
         | social media giants, Hacker News, and news websites. It
         | protects literally everyone on the internet in the USA.
         | 
         | Thanks to this law, you can't be held liable if you have a
         | blog with a comment section. Anyone can post anything there,
         | and without it you could be at serious risk of legal trouble
         | if someone posted something that breaks the law on your
         | website. Any communal website would either a) move out of the
         | US or b) require some very strict controls on who can post
         | and what.
         | 
         | The law doesn't protect American giants, it protects everyone
         | that uses the internet to discuss.
         | 
         | I suggest you read this blog post on the issue, especially this
         | part:
         | 
         | "If you said "Section 230 is a massive gift to big tech!"
         | 
         | Once again, I must inform you that you are very, very wrong.
         | There is nothing in Section 230 that applies solely to big
         | tech. Indeed, it applies to every website on the internet and
         | every user of those websites."
         | 
         | https://www.techdirt.com/articles/20200531/23325444617/hello...
        
           | cfmcdonald wrote:
           | > Thanks to this law you can't be held liable if you have a
           | blog with a comment section.
           | 
           | Even absent a comment section, most blogs are hosted by
           | someone else (e.g. WordPress), who does not vet the content
           | on that blog. That won't be possible anymore in a post-
           | Section 230 world.
        
             | NovemberWhiskey wrote:
             | Right. So WordPress has no protections if you post bad
             | content. So probably WordPress goes away.
             | 
             | No problem, you say, I'll self-host my content on AWS and
             | use CloudFlare as a CDN. But AWS and CF no longer have
             | Section 230 protections either, and probably won't do
             | business with you unless you indemnify them under some
             | kind of insurance policy (which you won't be able to
             | afford as a small blog).
             | 
             | Even if you run your own server and install it at home, the
             | lack of section 230 protections will probably make your ISP
             | responsible for content you publish (remember: your ISP is
             | not a common carrier - thanks FCC) so you're probably going
             | to find that all consumer ISPs are going to have terms of
             | service that prohibit publication, and technical
             | implementations that enforce that.
             | 
             | I mean, in the non-Internet world, if I want to make a
             | newsletter and mail it out to a subscriber list; I can at
             | least do that. All I need is a laser printer, and a stack
             | of stamped envelopes. The postal service is a common
             | carrier, at least.
             | 
             | Today's internet is a composition of platforms, all of
             | which are really only possible due to the existence of
             | section 230. It blows my mind that people are so blase
             | about the idea of tossing it out or reworking it in a naive
             | way.
        
           | reggieband wrote:
           | > It protects literally everyone on the internet in the USA.
           | 
           | I think this is the kind of knee-jerk hyperbole I want people
           | to really think deeply about.
           | 
           | > Thanks to this law you can't be held liable if you have a
           | blog with a comment section.
           | 
           | Maybe that should not be protected. If I am unable or
           | unwilling to moderate the comment section of a blog I host,
           | then maybe I shouldn't have one. I do not believe a
           | completely open free-speech comment section is a
           | requirement of a good or successful blog. Also, there is a
           | business opportunity for those who want comment sections to
           | pay for moderation services.
           | 
           | > I must inform you that you are very, very wrong.
           | 
           | This kind of patronizing is neither useful nor conducive to
           | mature discussion. Aggregate user-generated-content sites
           | aren't some kind of holy thing, whether forums, comment
           | sections, or otherwise. I want people to consider the fact
           | that we may all
           | be fine without them at all. Nothing stops people from
           | posting whatever content they want on their own site. It just
           | discourages aggregation of other people's content.
        
             | originalvichy wrote:
             | >Maybe that should not be protected. If I am
             | unable/unwilling to moderate the comment section of a blog
             | I host then maybe I shouldn't have one. I do not believe a
             | completely open free-speech comment section is a
             | requirement of a good or successful blog.
             | 
             | It's not about having to moderate a completely free speech
             | comment section. It's about being able to even have one and
             | to be able to host good people and good comments without
             | having to be liable for the time when people act terribly.
             | 
             | Where I'm from I don't think a bar owner can be held liable
             | if a patron starts a fight and wounds another patron. You
             | don't open a bar with the explicit intent of it being an
             | amateur boxing arena, but a nice place for people to enjoy
             | drinks and conversation.
             | 
             | If a user online decides to breach trust and common
             | courtesy by posting vile stuff on my site I shouldn't be
             | held liable for their actions as I did not force or coerce
             | them to do it.
             | 
             | >Nothing stops people from posting whatever content they
             | want on their own site. It just discourages aggregation of
             | other people's content.
             | 
             | Yes, nothing is stopping them. But 230 is allowing you to
             | do more with the internet other than just post things on
             | your own site. That blog even describes an instance where
             | replying/forwarding an email - in which you are repeating
             | what rule/law breaking thing another person said to be able
             | to reply/comment on said thing - you are protected against
             | liability.
             | 
             | If you dismiss these liability rules you are effectively
             | removing everything from chatrooms to comment sections from
             | the internet, effectively making the internet into a
             | snailmail/bulletin board service.
        
               | reggieband wrote:
               | > If you dismiss these liability rules you are
               | effectively removing everything from chatrooms to comment
               | sections from the internet, effectively making the
               | internet into a snailmail/bulletin board service.
               | 
               | Yes, I am pondering exactly this. How much of the
               | Internet do I consider valuable that would be lost? I
               | mean, lost in the sense it would be completely
               | irreplaceable without Section 230 protection. To be fair
               | to my position, the vacuum of unmoderated spaces would be
               | filled one way or another.
               | 
               | Maybe people are surprised by such a position, but I
               | don't actually see enough inherent value in unmoderated
               | comment sections on private blogs or even all of the
               | 1990's era phpbb forums to worry about their loss. In
               | fact, when I consider the negative effects of the massive
               | companies hiding behind Section 230 they really seem to
               | heavily outweigh whatever positive effect comments on my
               | personal blog could ever bring.
               | 
               | It seems these things keep coming up: public comments on
               | personal blogs and anonymous forums. I want people to
               | deeply think about whether or not these things are really
               | valuable and, even more so, whether or not they are
               | irreplaceable without Section 230.
        
             | save_ferris wrote:
             | > If I am unable/unwilling to moderate the comment section
             | of a blog I host then maybe I should't have one.
             | 
             | I completely agree, it's clear that a successful business
             | model around content moderation cannot exist in the
             | internet's current form, and there are very few
             | consequences for people who post malicious content.
        
         | klyrs wrote:
         | Would the forums of yesteryear survive? The early 2000s saw a
         | veritable utopia of information sharing, as people with special
         | interests came online and formed groups to discuss and
         | collaborate on those interests.
         | 
         | Would the open source software community survive? My world
         | revolves around github more than any other site; would ticket
         | discussions be forced back to mailing lists? Would mail readers
         | pick that slack up, and re-implement social media over email?
        
           | reggieband wrote:
           | > Would the forums of yesteryear survive?
           | 
           | I know this is questioning some fundamental assumptions of
           | most of our moral principles, but I am literally questioning:
           | should they survive? It seems everyone is assuming that they
           | should. Some seem to suggest they should for abstract "free
           | speech at all costs" philosophy. Others are assuming they
           | have some kind of positive effect, either on personal growth
           | or economic activity.
           | 
           | > Would the open source software community survive?
           | 
           | I'm not sure how Section 230 applies to code but I don't
           | think public forums like Facebook, Twitter or Youtube are
           | necessary for open source software to continue (any more than
           | they were necessary for it to start, which happened long
           | before they existed).
           | 
           | Besides, what I'm pondering is the centralized aggregation
           | points. People could still host blogs, they are just directly
           | liable for the content they post (as they likely are now).
        
             | giantrobot wrote:
             | You keep harping on Facebook et al but you're ignoring
             | everyone pointing out that Section 230 protects everyone at
             | all scales. Without Section 230 exceptions there's no
             | protection for _any_ type of user generated content.
             | 
             | That means no user reviews of _anything_. No user
             | contributed information so no more Wikipedia or
             | OpenStreetMaps. No Wikis of any kind in fact. No hosting of
             | public data sets for ML research. Forums would be a
             | liability nightmare so they would go away.
             | 
             | Not only would current/new instances of user generated
             | content make sites liable, but hosting any historical
             | content as well. So to avoid liability the web would have to
             | be scraped clean of user generated content.
             | 
             | Facebook and other large sites would be the _only_ ones to
             | survive because they could afford a moderator army. Your
             | extremely short-sighted position would basically leave
             | only large content producers. The Internet would regress
             | to the curated Online Service model, but worse, because
             | user communication would need to be disallowed or heavily
             | moderated.
             | 
             | You're advocating for the Internet to turn into broadcast
             | television. It's sad that you either can't or won't accept
             | that implication.
        
         | eli wrote:
         | I promise you that there are options between "do nothing" and
         | "remove the legal foundation crucial to the development of the
         | participatory internet."
        
         | cwhiz wrote:
         | I'm with you. I think we would all be better off without forums
         | for people to anonymously yell at each other.
         | 
         | If Facebook, Twitter, and Reddit all disappeared tomorrow it
         | would be an enormous win for American citizens. The path to
         | unification and healing does not run through big tech.
         | 
         | And yes, I understand that it would also be the end of HN. I
         | accept that.
        
         | spiffytech wrote:
         | Remember that Section 230 isn't just about the big, centralized
         | services. It protects everybody who's ever run a phpBB forum
         | for their little community, or allowed comments on their blog,
         | or set up a Mastodon instance they share with others. If you
         | host content written by someone else, Section 230 is what
         | allows you to moderate it, and what prevents you from being
         | legally liable for it. Proposals requiring "political
         | neutrality" would even add new burdens to comply with; your
         | friend on your Mastodon instance ranting about "that
         | politician" probably isn't "neutral" publishing.
         | 
         | Facebook et al. could be big enough to wrangle the regulatory
         | burden of existing without these protections. But many proposed
         | 230 "reforms" could scare off anyone smaller, creating a
         | regulatory moat that keeps Facebook at the top in perpetuity.
        
           | reggieband wrote:
           | Yes, believe it or not I am questioning whether or not I
           | should be free from liability if I host a public forum. And
           | my conclusion is: perhaps I should be liable if I stand up a
           | public forum.
           | 
           | You may suggest that would prevent me from creating a user-
           | generated-content application of massive scale without the
           | resources to sufficiently moderate it. And again, yes - maybe
           | that should be a requirement of me doing such a thing.
           | 
           | It isn't like the only kind of business a person could create
           | on the internet is one that surrounds the aggregation of user
           | generated content. If it killed that entire class of business
           | ... I am not sure we would lose very much of value that
           | couldn't be replaced by individuals hosting their own
           | content.
        
             | spiffytech wrote:
             | > of massive scale
             | 
             | Of even tiny, microscopic scale. Even your personal blog's
             | comments, or the forum for your local club.
             | 
             | And it _still_ won't stop Facebook. It'll only stop _you_.
             | That outcome doesn't sound like the outcome you're saying
             | you may be comfortable with. It sounds like the opposite.
        
               | reggieband wrote:
               | > That outcome doesn't sound like the outcome you're
               | saying you may be comfortable with.
               | 
               | Oh no, I'm considering exactly that. I am saying: if my
               | tiny personal blog has a comment section I would be
               | liable for comments posted there. If that is a burden I
               | can't handle then I should turn off comments. At least in
               | my experience those comment sections are a complete waste
               | of space anyway and the trend I've noticed from the large
               | blogs that are still around (e.g. daringfireball, kottke)
               | is that their comment sections are long gone anyway.
               | 
               | What I am pondering is: would this ruin the Internet? If
               | I couldn't host a public forum if it got beyond my
               | limited means to moderate? If I couldn't have a public
               | comment section? It doesn't seem clear to me the Internet
               | breaks if I am forced to own the responsibility for those
               | things that I allow to be made public through sites I
               | control.
        
       | eli wrote:
       | It's somewhat ironic that HN's moderators apparently changed the
       | title of this submission from "Section 230 Is The Subject of The
       | Most Effective Legal Propaganda I've Ever Seen" to the apparently
       | less objectionable "What Section 230 really means"
        
         | dang wrote:
         | That's just bog standard HN moderation to make titles less
         | baity. If we didn't do that, HN would be a completely different
         | site--and not in a way that most HN users would appreciate.
         | Having the titles on the front page be 'bookish' (to use pg's
         | original term) is one of the core principles here.
         | 
         | We've since changed both the URL and the title.
        
           | eli wrote:
           | Fair enough! It just feels like a funny meta example of why
           | one might want to "editorialize" content without also being
           | forced to moderate every single thing posted.
        
       | iron0013 wrote:
       | In case you haven't noticed, the willful misinterpretation of
       | section 230 that popehat is describing is so common on HN that it
       | has effectively become the party line, and any attempts to
       | correct such "misunderstandings" (lies) are met with heavy
       | downvoting.
        
       | ry454 wrote:
       | I believe that the debate around Section 230 is a sign of a
       | growing consensus among the ruling class that Project Internet
       | is complete: it has successfully deployed a worldwide
       | surveillance framework, no further growth for growth's sake
       | (which could destabilize that framework) is needed, and thus
       | it's time to close the gates and fortify the site against
       | competitors. One fact supporting this stance is the rapidly
       | growing popularity of the so-called "end-to-end encryption"
       | idea, which could crack the foundation of surveillance unless
       | hard measures are taken right now.
        
       | dionian wrote:
       | If platforms are going to start acting like publishers, they
       | should no longer get special treatment when compared to other
       | publishers.
       | 
       | Remember, if the info platform monopolies help the democrats
       | today, they can help the republicans tomorrow.
        
         | SpicyLemonZest wrote:
         | I'd suggest reading through some of the resources linked in the
         | article. The idea that Section 230 gives special treatment to
         | "platforms" as opposed to "publishers" is common but false. The
         | Techdirt one (https://www.techdirt.com/articles/20200531/233254
         | 44617/hello...) in particular I think helps clear up the
         | confusion.
        
           | luckylion wrote:
           | > No provider or user of an interactive computer service
           | shall be treated as the publisher or speaker of any
           | information provided by another information content provider.
           | 
           | If they don't get special treatment as opposed to
           | publishers, why is it explicitly mentioned that they shall
           | not be treated as the publisher? If there's no difference,
           | what's the point of that?
        
             | SpicyLemonZest wrote:
             | "Publisher" here doesn't refer to a special legal status.
             | This sentence just means that, if you post something on a
             | website, it's still you and not the website who said it.
             | This law is applicable to any website, even ones which make
             | no attempt to be a neutral platform.
        
             | judge2020 wrote:
             | Probably because publishers are, by default, more liable
             | (as liable as any other legal entity) and thus the law aims
             | to reduce liability for the parties in question (the
             | platform creators like twitter, facebook) so that the
             | positive economic impact attributed to these platforms
             | isn't burdened by the law.
        
           | tim44 wrote:
           | > _To be a bit more explicit: at no point in any court case
           | regarding Section 230 is there a need to determine whether or
           | not a particular website is a "platform" or a 'publisher.'
           | What matters is solely the content in question. If that
           | content is created by someone else, the website hosting it
           | cannot be sued over it._
           | 
           | > _Really, this is the simplest, most basic understanding of
           | Section 230: it is about placing the liability for content
           | online on whoever created that content, and not on whoever is
           | hosting it. If you understand that one thing, you 'll
           | understand most of the most important things about Section
           | 230._
           | 
           | The way I understand it, these big sites aren't simply
           | hosting content, they are themselves creators through
           | editorializing content and so should not enjoy a blanket
           | immunity.
        
         | jhayward wrote:
         | > _Remember, if the info platform monopolies help the democrats
         | today, they can help the republicans tomorrow._
         | 
         | Facebook has had its thumb on the 'balance' scale in favor of
         | ultra-right-wing sites for half a decade, at least.
         | 
         | - Zuckerberg calling a group of non-profit news sites "not real
         | news" [1]
         | 
         | - Zuckerberg ordering the algorithm of the news feed to be
         | biased toward promoting Breitbart et. al. [2]
         | 
         | [1] https://twitter.com/mathewi/status/1317124357873860609
         | 
         | [2] https://twitter.com/dnvolz/status/1317160066047479809
         | 
         | [3] https://www.wsj.com/articles/how-mark-zuckerberg-learned-
         | pol...
        
         | formerly_proven wrote:
         | > Remember, if the info platform monopolies help the democrats
         | today, they can help the republicans tomorrow.
         | 
         | They do?
        
           | dimitrios1 wrote:
           | Case in point: two town halls yesterday. One gets asked about
           | their most current scandal (tax returns), another doesn't get
           | a single question about their scandal (burisima). The media
           | is undoubtably carrying water for Joe Biden. That isn't even
           | arguable anymore. The question is why?
        
             | srdev wrote:
             | That just demonstrates that there isn't an easily
             | definable political "neutral." From my point of view, the
             | "Burisma
             | scandal" got all the attention it deserved. The reason the
             | media isn't harping on it is because it wasn't a scandal,
             | and one political party was desperately trying to make it
             | so.
             | 
             | In the same way, a lot of people would say it would be
             | neutral for the media to present arguments that global
             | warming is not man-made, but people who care about
             | scientific fact would claim that even presenting the
             | skeptic argument is non-neutral, since you are signal
             | boosting an argument with no basis in reality.
        
         | ChrisClark wrote:
         | I think this comes down to the fact that removing lies and fake
         | news hurts Republicans at this point in time.
         | 
         | If in the future the Democrats are the lying ones, then those
         | lies deserve to be removed too.
        
           | jedmeyers wrote:
           | How do you know who is actually lying if half the information
           | is removed from the sources you usually read?
        
             | ardy42 wrote:
             | > How do you know who is actually lying if half the
             | information is removed from the sources that you usually
             | read?
             | 
             | As far as I know, they're not broadly removing "half the
             | information" (which I'm taking to refer to conservative
             | viewpoints), but disinformation related to QAnon, voting,
             | covid, etc.
             | 
             | Disinformation is not something that will help anyone make
             | better judgements.
             | 
             | https://apnews.com/article/election-2020-media-social-
             | media-...
             | 
             | https://www.bbc.com/news/world-us-canada-54443878
             | 
             | https://www.washingtonpost.com/technology/2020/08/11/facebo
             | o...
             | 
             | https://www.cnet.com/news/facebook-twitter-block-trump-
             | post-...
             | 
             | https://www.reuters.com/article/us-facebook-election-
             | exclusi...
             | 
             | https://www.theglobeandmail.com/world/us-politics/article-
             | fa...
        
           | ojnabieoot wrote:
           | This tidbit from a WSJ story is rather revealing of the
           | actual issue going on here[1]:
           | 
           | > In late 2017, when Facebook tweaked its newsfeed algorithm
           | to minimize the presence of political news, policy executives
           | were concerned about the outsize impact of the changes on the
           | right, including the Daily Wire, people familiar with the
           | matter said. Engineers redesigned their intended changes so
           | that left-leaning sites like Mother Jones were affected more
           | than previously planned, the people said. Mr. Zuckerberg
           | approved the plans.
           | 
           | That is: Facebook decided to intervene to benefit the right.
           | I don't think this is just because of right-wingers at
           | Facebook: surely a large part of it is bad-faith attacks from
           | people like Ted Cruz.
           | 
           | The idea that Twitter and Facebook are conspiring to suppress
           | legitimate criticism of Biden and thereby defeat Trump is
           | plain ridiculous.
           | 
           | [1] Story is here: https://t.co/sjOYrLQdc3?amp=1 but I got
           | the blurb from this tweet:
           | https://twitter.com/patcaldwell/status/1317140564169625600
        
         | ChrisClark wrote:
         | Platforms and publishers are no different under Section 230.
         | Platforms are not getting anything special.
        
           | jtbayly wrote:
           | False. "No provider or user of an interactive computer
           | service shall be treated as the publisher..."
           | 
           | The whole point is that the provider of the interactive
           | computer service (ie "the platform") is not to be treated as
           | the publisher of anything anybody says on the platform.
        
         | ojnabieoot wrote:
         | From Popehat's post:
         | 
         | > Among the most common lies: Section 230 requires sites to
         | choose between being a "platform" or "publisher"
         | 
         | The idea that Twitter moderating its users' posts means it's
         | acting like a "publisher" is nothing but Republican
         | propaganda[1]:
         | 
         | >Furthermore, a number of senators have prominently criticized
         | Section 230. For example, Senator Ted Cruz (R-TX) repeatedly
         | (but completely falsely) claims that Section 230 only applies
         | to "neutral public forums."
         | 
         | Threatening Twitter and Facebook with liability for their
         | users' content is authoritarian suppression of free speech. The
         | US government should not be forcing Twitter or any other social
         | media service to carry the president's re-election propaganda -
          | _even if the propaganda is factual,_ let alone if it's full of
         | holes and lies like the NY Post story.
         | 
         | [1] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3306737
        
           | luckylion wrote:
           | From the comment:
           | 
           | > If platforms are going to start acting like publishers,
           | they _should_ no longer get special treatment when compared
           | to other publishers.
           | 
           | The GP doesn't argue that Section 230 says this or that,
           | they're arguing that internet companies who act like news
           | sites should be subject to the same laws as news sites.
        
             | typenil wrote:
             | The fact that you're being downvoted just shows the
             | partisan reflexes of those that did it.
             | 
             | Making a factual correction without any additional
             | commentary. Down the memory hole.
        
             | ojnabieoot wrote:
             | The problem is that Twitter and Facebook are not acting
             | like news sites because
             | 
             | 1) they don't write the stories or in any way pay the
             | journalists
             | 
             | 2) news is a minority of the content
             | 
             | 3) moderation is not the same thing as curation
             | 
             | 4) hosting a story is not the same thing as publishing it.
             | 
             | If I post an NBC News article to Twitter, NBC is the
             | publisher. If that article contains libel, NBC is the one
             | on the hook in court, not Twitter. (However, if Twitter
             | discovered the article was very likely libelous then it
             | would be both reasonable and responsible to restrict
             | sharing the article).
             | 
             | GP is really making one of two authoritarian arguments:
             | 
             | a) Platforms are not allowed to make broad decisions about
             | what sorts of content they want to host. Presumably GP
             | would then also agree that YouTube's ban on pornography
              | means that YouTube is a "publisher," and that every time
              | Reddit removes a racist subreddit it is acting like a
              | "publisher."
             | 
             | b) If a platform does not want to host the president's
             | dishonest re-election propaganda, they should expect to
             | face financial and legal consequences.
             | 
             | Of course nobody would really say "b" out loud, hence the
              | word games about "you see, Mastodon is a _platform_ but
              | Twitter is a _publisher_."
        
         | Analemma_ wrote:
         | This is exactly the misconception that Ken is trying to
         | correct. There is _no_ difference between platforms and
         | publishers according to Section 230, and there is no  "special
         | treatment". This meme has sprung up out of nowhere because
         | people are angry at the social media companies and want to
         | think that anger has legal backing, but it doesn't.
        
           | jtbayly wrote:
           | No. It's not what Ken is trying to correct, because this is
           | an _opinion,_ not an interpretation of the law.
           | 
           | Furthermore, as for the law, there _is_ a difference between
           | platforms and publishers. Section 230 says that platforms
           | will _not_ be treated as publishers.
        
             | jtbayly wrote:
             | For the people downvoting my comment, please note the
             | following from Section 230:
             | 
             | "No provider or user of an interactive computer service
             | shall be treated as the publisher..."
             | 
             | FB and Twitter are interactive computer services in this
             | case, and we call them "platforms." The law says they are
             | _not_ to be treated as the _publisher_ of the content that
              | users post on their site. Thus, there is a big difference
              | between a platform and a publisher. That's the whole
              | _point_ of the law.
             | 
             | Downvote all you want, but... that's the law.
        
               | coldpie wrote:
               | You have to read the whole sentence:
               | 
               | > No provider or user of an interactive computer service
               | shall be treated as the publisher or speaker of any
               | information provided by another information content
               | provider.
               | 
               | The law doesn't establish two categories, called
               | "publisher" and "platform." It says, if User writes
               | something on Website, then Website will not be treated as
               | the publisher of whatever User wrote. Instead, User will
               | be treated as the publisher. It defines who is
               | responsible for the content that is served by Website:
               | the User who wrote it, not the Website that served it. At
               | no point does it create a category called "publisher" who
               | is subject to different rules from a category called
               | "platform."
        
               | iron0013 wrote:
               | The text is very clear. It protects the rights of an
               | owner of a website to control that website, which is
               | their private property. It does not in any way heap
               | additional responsibilities or legal vulnerabilities upon
               | "publishers". It's straightforward if read and
               | interpreted in good faith.
        
       | InfiniteRand wrote:
       | How does this work for a "Letters to the Editor" type section in
       | a newspaper?
       | 
       | Or what separates a heavily moderated online forum from a
       | volunteer run online magazine? Is it the asking for submissions
       | part?
        
         | Mountain_Skies wrote:
         | Newspapers are publishers. They've never tried pretending
         | they're platforms. They have editorial control because they are
         | clear and forthright about who they are and what they do.
        
           | klyrs wrote:
           | There is no difference between a "publisher" and a "platform"
           | under section 230.
        
             | Karunamon wrote:
             | We're talking about amending section 230. Maybe there
             | should be.
        
       | dang wrote:
       | Url changed from https://popehat.substack.com/p/section-230-is-
       | the-subject-of..., which points to this and a bunch of other
       | articles, and doesn't add much otherwise (other than Popehat-
       | style snark).
       | 
       | If one of the other URLs is a better fit, we can change it again.
        
         | revnode wrote:
         | Isn't this an example of what we're talking about? Even if this
         | change is objectively better, there were many comments posted
         | here with the original URL as the intended target and now their
         | comments are directed at a different URL, all done without
         | their explicit consent?
        
       ___________________________________________________________________
       (page generated 2020-10-16 23:00 UTC)