[HN Gopher] Nirvana fallacy
       ___________________________________________________________________
        
       Nirvana fallacy
        
       Author : tomodachi94
       Score  : 68 points
       Date   : 2023-05-24 05:30 UTC (1 day ago)
        
 (HTM) web link (en.wikipedia.org)
 (TXT) w3m dump (en.wikipedia.org)
        
       | mock-possum wrote:
       | Oops, I've been calling this the 'utopia fallacy' for who knows
       | how long.
        
         | compiler-guy wrote:
         | I've always called it that too. I suspect it is a common
         | alternative name.
        
         | Brendinooo wrote:
         | Something I say often is "utopia means 'no place'" or "no such
         | place as utopia" - people too often focus on trying to build a
         | perfect world/thing/product/whatever rather than focusing on
         | how to exist in an imperfect world.
        
         | eikenberry wrote:
         | That at least is a name that somewhat represents the idea.
         | Nirvana was a poor choice and probably stems from a
         | misunderstanding of the idea by the economist.
        
         | mtraven wrote:
         | That's a much better name actually.
        
           | readthenotes1 wrote:
           | Right? I would say "utopia fallacy" is the perfect label for
           | this meaning over "nirvana fallacy" because it can never be
           | misconstrued as something associated with the musical band
           | Nirvana.
        
             | eikenberry wrote:
             | The band has more in common with the usage here than the
             | Buddhist idea.
        
             | tangent-man wrote:
             | As a Buddhist I agree. Heh.
        
       | tangent-man wrote:
       | I think you'll find Nirvana ≠ Fallacy
       | 
       | https://www.accesstoinsight.org/lib/authors/thanissaro/nibba...
        
       | freeopinion wrote:
       | What do you call the opposite fallacy? The one where any proposed
       | solution is worse than the status quo because of all the things
       | that could hypothetically go wrong?
       | 
       | What-if-ism?
       | 
       | Add: Example: A restaurant that throws away 10% of their supplies
       | each day proposes to donate them instead to a soup kitchen a day
       | before they would normally dispose of them. Then somebody asks,
       | "What if the soup kitchen holds on to them too long and then
       | somebody gets sick from the food we donated and we get sued?"
        
         | throwaway290 wrote:
          | By definition, a fallacy is something that looks correct but
          | is wrong due to a sneaky error in the reasoning itself. Your
          | case has to do with information, not reasoning.
         | 
         | "These anti-drunk driving ad campaigns are not going to work.
         | People are still going to drink and drive no matter what." =
         | fallacy
         | 
         | "These anti-drunk driving ad campaigns are not going to work
         | because a study from 50 years ago sponsored by Big Alcohol
         | definitely proves so." = many possible issues (too
         | lazy/stupid/malicious to check a better study) but no fallacy
        
         | minsc_and_boo wrote:
         | Possibly the _Slippery Slope_ fallacy.
         | 
          | There's also an _appeal to consequences_, where, if the
          | outcome of something is considered undesirable, that
          | something must be false.
        
         | MichaelZuo wrote:
         | The restaurant example doesn't seem to be a fallacy?
         | 
         | That is a real legal concern in US jurisdictions. I'm fairly
         | certain there's some on-the-record case law too.
         | 
          | Plus, a real system can be almost limitlessly decomposed; the
          | lower bound is the black hole limit.
         | 
         | So it doesn't seem like there could be an inverse fallacy.
        
           | fknorangesite wrote:
           | I think GP is trying to get at "unintended consequences",
           | "this was a good idea at the time but didn't scale", or "we
           | made totally-reasonable assumptions that turned out to be
           | incorrect" ... all of which I'm sure we've all experienced
           | first-hand in our lives.
        
             | MichaelZuo wrote:
             | I can't see how saying there were, or could be, unintended
             | consequences becomes a fallacy.
             | 
              | All systems more complex than two electrons can behave
              | unpredictably. That's just a fact that will always be
              | true in 100% of all possible scenarios.
             | 
             | There's of course a norm in day-to-day life to not quibble
             | about every possible combination of 3 electrons or however
             | many below a reasonable threshold, but that norm is based
             | on the differing opinions of individuals in society.
        
       | w10-1 wrote:
       | By contrast, transaction cost economics models make-or-buy
       | decisions as a rational choice between real, available
       | alternatives, imposing the reality principle at choice time.
       | 
       | In my experience of collective decision-making, it's often the
       | case that more aggressive, less-proven technologies are rejected
       | as unproven or unrealistic, largely because no one wants the
       | reputation in the group of having championed a mistake.
       | 
       | By contrast, people deciding alone will often take the more
       | optimistic choice. In technology, that can mean the
       | person/engineer who's now on the hook finds ways to make the new
       | technology work (and avoid its flaws).
       | 
       | That translates to high-achieving organizations giving
       | individuals the power to decide, but also holding them
       | responsible for the consequences. Whether the "move fast, break
       | things" permission to fail in service of learning new
       | technologies and the problem domain actually works depends on
       | some real capture of knowledge. Probably the job cuts in tech now
       | (particularly at Twitter) are driven by realizing this "real
       | capture" ain't happening.
       | 
       | So it's not enough to avoid the Nirvana fallacy. You also have
       | to get past decision paralysis in order to learn, and then show
       | that the lessons you learned are worth something to the company.
        
       | n4r9 wrote:
       | Reminds me of the classic parental rebuttal "life's not fair".
       | True enough, but it's still worth trying to be fair in the here
       | and now.
        
         | lr4444lr wrote:
          | That's usually a shorthand used when a child has a limited
          | understanding of complex factors and the parents are too
          | tired or unable to explain things better - not an actual
          | moral claim.
        
         | skulk wrote:
         | That's a great example of the is-ought fallacy. "Life's not
         | fair" is, but perhaps not ought to be.
        
       | TheAceOfHearts wrote:
       | Maybe related to this but with a different framing: I actually
       | think comparing the real world to the ideal can help us
       | prioritize and take steps towards making improvements.
       | 
       | For example: I think abortions should be legal, but in an ideal
       | world the number of abortions would be near zero because access
       | to social safety nets, birth control, and sex education is
       | plentiful.
       | 
       | The thing about reality is that it forces us to deal with
       | engineering constraints, and we have to carefully consider and
       | understand the tradeoffs being made.
        
       | pachico wrote:
       | I like this Wikipedia article; however, I would have preferred
       | its contents to be transmitted directly to my brain in real time
       | when I opened Hacker News. That would have been so much better.
        
         | tbm57 wrote:
         | This article is talking about 'unrealistic' solutions - what
         | you just said is going to be a neuralink plugin in 2030
        
           | jjeaff wrote:
           | Is neuralink even claiming to be working on input to the
           | brain? I thought it was just trying to read the human brain.
        
       | MontyCarloHall wrote:
       | This is a corollary of the fallacy of relative privation, aka the
       | "kids are starving in Africa so you have no right to complain
       | about anything less severe" fallacy. Both fallaciously dismiss
       | arguments by comparing them to unrealistic extremes.
        
         | atleastoptimal wrote:
         | Or "Do whatever I say because there's a slight chance if you
         | don't follow my arbitrary rules you will be tortured forever by
         | one of the characters in my arcane storybook"
        
           | mcphage wrote:
           | Isn't that just Roko's Basilisk?
        
         | mistermann wrote:
          | Only if the "so you have no right to complain" part actually
          | happened. Many people (including smart ones) throw around
          | popular memes with little regard for whether they are using
          | them legitimately.
          | 
          | This meme has excellent potential for that, as the definition
          | is subjective but not explicitly disclosed as such, creating
          | a dependence on the reader to realize this.
         | 
         | Another excellent point:
         | 
         | https://news.ycombinator.com/item?id=36076175
        
         | Loquebantur wrote:
         | Funny you would respond with a fallacious comparison.
         | 
          | Relative privation is not fallacious because the comparison
          | is unrealistic. Kids do starve (realistic), and there is even
          | worse in the world (so not an extreme either). But you need
          | to choose by some metric where to apply your abilities, if
          | you don't want to end up being an egotistic hedonist.
         | 
         | You should help to right wrongs in ways amenable to your
         | abilities, not more, not less. Honesty is key obviously, both
         | ways.
        
       | zokier wrote:
       | I think this is closely related to no true scotsman; both
       | involve comparison to an idealized version of something.
        
       | atleastoptimal wrote:
       | aka every single one of Elon Musk's product pitches, especially
       | the Hyperloop
        
       | davidw wrote:
       | Oh. That Nirvana... Whatever.
        
         | skyhvr wrote:
         | ......Nevermind.
        
       | klodolph wrote:
       | I see this a lot when people are looking for some library /
       | framework / programming language / game engine. You keep adding
       | requirements and assume that you can spend some additional time
       | evaluating alternatives to make up for the longer list of
       | requirements you have. Reality is, there are often only a few
       | serious alternatives in the first place. Adding more and more
       | requirements to your search is, in some way, a stubborn refusal
       | to prioritize among those requirements. Prioritization doesn't
       | just mean affirming that some of your priorities are important;
       | it means acknowledging that some of your priorities are
       | unimportant and can be discarded.
       | 
       | Related is the assumption that any custom-built library you write
       | is going to beat an existing, well-known library that doesn't
       | exactly match your needs. It's easy to come up with a list of
       | problems with existing libraries, but your theoretical custom-
       | built library can be perfect, because you're not imagining that
       | it has any serious bugs or design flaws. You end up building your
       | own solution and, in the process, rediscover _why_ the existing
       | library was built the way it is.
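       | 
       | A minimal sketch of that prioritization point (the candidate
       | libraries, features, and weights below are hypothetical, purely
       | for illustration): treating every requirement as a hard filter
       | tends to empty an already-small field, while weighting the
       | requirements ranks the alternatives that actually exist.
       | 
       |   # Hypothetical candidates; names and feature flags are made up.
       |   candidates = {
       |       "lib_a": {"fast": True, "typed": False, "small": True},
       |       "lib_b": {"fast": True, "typed": True, "small": False},
       |   }
       | 
       |   # Every requirement treated as a hard filter:
       |   hard = ["fast", "typed", "small"]
       |   feasible = [n for n, f in candidates.items()
       |               if all(f[r] for r in hard)]
       |   print(feasible)  # [] -- no candidate satisfies all of them
       | 
       |   # Prioritizing instead: weight the requirements and rank what
       |   # exists; a low weight is an admission the requirement barely
       |   # matters.
       |   weights = {"fast": 3, "typed": 2, "small": 1}
       |   best = max(candidates, key=lambda n: sum(
       |       w for r, w in weights.items() if candidates[n][r]))
       |   print(best)  # lib_b -- best fit among the real alternatives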
        
         | ozim wrote:
          | It hits home when you recall that people were saying 90% of
          | software projects were failures.
          | 
          | People wanted perfect solutions in one go, and everyone was
          | blaming software developers.
          | 
          | If one expects only perfect outcomes, then it is easy to get
          | a high failure rate.
        
         | remkop22 wrote:
          | This hits home. Many times I have come to appreciate a
          | library or technology only after delusionally attempting to
          | create a 'better' version. Mid-attempt I actually start to
          | understand the problem space, at which point I humbly and
          | thankfully start depending on said library or technology.
        
           | MichaelZuo wrote:
           | What kind of problem spaces need so much trial and error to
           | understand?
        
             | mcphage wrote:
             | https://xkcd.com/592/
        
         | austin-cheney wrote:
         | Due to libraries and frameworks I most typically see the
         | inverse of this fallacy. A team claims to want something
          | amazingly ideal and yet easily feasible, but then rejects the
         | premise outright if they cannot execute it in their favorite
         | library or framework in 30 seconds or less.
        
       | pessimizer wrote:
       | This is usually just used as a sneak attack on someone else's
       | suggestion, a way to call it unrealistic without actually making
       | a case that it's unrealistic. Rest assured, the people who tell
       | you not to let the perfect be the enemy of the good do not think
       | that what you suggested is either perfect or good; they just want
       | you to shut up.
       | 
       | The "fallacy" in this vein that I see is when, after Bob suggests
       | idea A to solve problem X, Mary says that idea A shouldn't be
       | done because idea B is better for problem X, but Mary _also
       | doesn't support idea B._ Mary actually supports problem X itself,
       | but if she admitted that, she would lose her influence on the
       | reaction to problem X.
        
       | thewataccount wrote:
       | My favorite is still the "Fallacy fallacy" aka "Argument from
       | fallacy".
       | 
       | From my understanding, it's very difficult to conduct a
       | good-faith debate without one of the bajillion fallacies being
       | applicable somewhere.
       | 
       | Is there a name for the difficulty of conducting a debate
       | without a single fallacy?
        
         | gpderetta wrote:
         | Fallacy Fallacy Fallacy?
        
         | avgcorrection wrote:
         | Most "fallacies" are informal and rhetorical and not direct
         | logical fallacies. Almost no one will say that X is not
         | perfect, therefore it can be discarded. But plenty will focus
         | their argumentation on how X is not perfect and leave the
         | implication on the table that X is not worth bothering with.
        
         | xg15 wrote:
         | I see the same problem with the various lists of cognitive
       | biases, rhetorical devices, etc.
         | 
       | I think the trick is to see them as patterns that should allow
       | you to more easily construct a counterargument - instead of
         | pretending that merely pointing out the pattern itself would
         | already be enough to disqualify the argument.
         | 
       | e.g., in the examples from the "perfect solution" section, they
       | didn't just shut down the discussion with "well, that's a
       | Perfect Solution Fallacy, so your argument is invalid!"; they
       | actually explained in each case _why_ a non-perfect solution is
       | still desirable.
         | 
         | You could compare it with chess: An opponent is absolutely
         | allowed to leave a piece vulnerable and you don't get an
         | automatic win by just pointing out a bad position - you only
         | get an advantage if you actually take the piece.
        
       ___________________________________________________________________
       (page generated 2023-05-25 23:00 UTC)