[HN Gopher] Why is it so hard to be rational?
       ___________________________________________________________________
        
       Why is it so hard to be rational?
        
       Author : ubuwaits
       Score  : 261 points
       Date   : 2021-08-16 12:57 UTC (10 hours ago)
        
 (HTM) web link (www.newyorker.com)
 (TXT) w3m dump (www.newyorker.com)
        
       | jsight wrote:
       | Isn't the answer obvious? Because doing otherwise involves a lot
       | more work, and people choose the easier path.
        
         | MarioMan wrote:
          | Sometimes I go on deep dives to try to find some truth in a
          | contentious issue. I think it's important not to take the easy
          | path; certainly not if you want a well-informed opinion. Any
         | sense of superiority this gives me is dashed when I realize:
         | 
         | 1) It's not reasonable to expect someone to dig so deeply, and
         | there isn't enough time to do it for every issue.
         | 
         | 2) Someone, somewhere, has done an even deeper dive into the
          | same issue. From their perspective, I'm the one who hasn't
          | done my research. Where "enough" lies is a fuzzy line.
        
         | esarbe wrote:
          | To quote evolution: why go for perfect when you can go for good
         | enough?
        
       | btilly wrote:
       | I maintain that it isn't just hard, it is computationally
       | impossible.
       | 
       | We should all know that given a belief about the world, and
       | evidence, Bayes' Theorem describes how to update our beliefs.
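        | 
        | (A minimal sketch of that single-belief update; my own toy
        | code, nothing from the article:
        | 
        |   # P(H|E) = P(E|H) * P(H) / P(E)
        |   def update(prior, p_e_given_h, p_e_given_not_h):
        |       p_e = (p_e_given_h * prior
        |              + p_e_given_not_h * (1 - prior))
        |       return p_e_given_h * prior / p_e
        | 
        |   # 1% prior; evidence 10x likelier if H is true:
        |   update(0.01, 0.50, 0.05)  # => ~0.0917
        | 
        | One belief, one piece of evidence: easy.)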
       | 
       | But what if we have a network of interrelated beliefs? That's
       | called a Bayesian net, and it turns out that Bayes' Theorem also
        | prescribes a unique answer. Unfortunately, it turns out that
        | working out that answer is NP-hard.
       | 
        | OK, you say, we can come up with an approximate answer. Sorry,
        | no, coming up with an approximation guaranteed to be within
        | 0.5 - e of the true probability, for any e > 0, is ALSO NP-hard.
        | It is literally true that under the right circumstances a single
        | data point logically should be able to flip our entire world
        | view, and finding which data point does it is computationally
        | intractable.
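        | 
        | (To see the blowup concretely: naive exact inference sums a
        | joint distribution over every assignment of n binary variables.
        | A toy sketch, not any real inference library:
        | 
        |   from itertools import product
        | 
        |   def marginal(joint, i, n):
        |       # P(X_i = 1): sum over all 2**n assignments
        |       total = 0.0
        |       for bits in product([0, 1], repeat=n):
        |           if bits[i] == 1:
        |               total += joint(bits)
        |       return total
        | 
        | At n = 30 that is already about a billion terms, and the exact
        | algorithms only beat this when the network happens to be
        | conveniently shaped.)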
       | 
       | Therefore our brains use a bunch of heuristics, with a bunch of
       | known failure modes. You can read all the lesswrong you want. You
       | can read _Thinking, Fast and Slow_ and learn why we fail as we
       | do. But the one thing that we cannot do, no matter how much work
       | or effort we put into it, is have the sheer brainpower required
       | to actually BE rational.
       | 
       | The effort of doing better is still worthwhile. But the goal
       | itself is unachievable.
        
         | snarf21 wrote:
         | Agreed. The world is too complicated. There is too much noise.
         | It might be possible to become _fairly_ knowledgeable about a
         | single issue but it would need to amount to an obsession. This
          | is why we are so keen to belong. We've always tried to apply
          | the "wisdom of the crowds". It is why people latch onto one
          | viewpoint or the other, e.g. Red/Blue, FOX/CNN, etc.: it takes
          | all the work out of it. Once you find a source that you agree
          | with on even _ONE_ issue, just blindly trust/agree with them
          | for everything. We'd rather spend our time streaming shows and
          | living life than investing in deep knowledge of any subject.
         | 
         | Take a non political example: How safe are whole tomatoes to
         | eat? What did the grocery store spray on them? Is it safe? Will
          | it wash off? What about the warehouse where they were stored for
         | months, what did they put on them to keep them from spoiling?
         | What about the farmer, what did they spray on them to protect
         | against pests? What is in the water, is it safe? Now we're
         | ready to eat: Does anyone in my family have any kind of
         | intolerance to raw tomatoes? And this is a pretty simple toy
         | example.... In general, we've collectively decided to trust in
         | the good in people. We hope that if something is
          | bad, a lie, or harmful, then someone in the know will raise
          | the alarm for the group.
        
         | jdmichal wrote:
          | In addition to _Thinking, Fast and Slow_, I'd recommend Annie
         | Duke's _Thinking in Bets_. It builds on literature such as
         | _Thinking, Fast and Slow_ to discuss the separation of
         | decisions and results. Specifically, thanks to luck, the
         | quality of the decision is not always represented by the
         | quality of the result. And in order to learn, one has to be
         | able to recognize good and bad decisions regardless of results.
         | 
         | This seems to be a pretty good overview:
         | 
         | https://www.athenarium.com/thinking-in-bets-annie-duke/
        
         | 6gvONxR4sf7o wrote:
         | That's true for arbitrary graphs, but I don't believe it's
         | practically relevant here any more than the fact that I can't
         | compute the first 10000 digits of pi in my head is. We are
         | _much_ worse than our computational limits.
        
         | AndrewKemendo wrote:
         | This nails it.
         | 
         | I'd go further to say that there are real world issues that
          | compound the variables. Namely, individual actions
          | increasingly have global consequences, e.g. individual
          | purchasing behaviors have externalities that the market is not
          | pricing in and that thus fall to the consumer to calculate.
          | 
          | Further, because these issues are global, these kinds of
          | calculations are game-theoretic by their nature, making them
          | even more complicated.
        
         | kazinator wrote:
          | > _I maintain that it isn't just hard, it is computationally
         | impossible._
         | 
         | I further maintain that it's definitionally impossible. Before
         | we find it computationally impossible, we will find that we
          | can't write a complete, detailed requirements specification
         | defining what rational is.
         | 
         | (Of course, we recognize egregious irrationality when we see
         | it; that's not what I mean; you can't just define rationality
         | as the opposite of that.)
         | 
         | People can behave rationally (or not) with respect to some
         | stated values that they have. But those can be arbitrary. So
         | the requirement specification for rationality has to refer to a
         | "configuration space", so to speak, where we program these
         | values. This means that the output is dependent on it; we can't
         | write some absolute test case for rationality that doesn't
         | include this.
         | 
         | Problem is, people with different values look at each other's
         | values and point to them and say, "those values are irrational;
         | those people should adopt my values instead".
        
           | UnFleshedOne wrote:
           | You can't say values are irrational -- they just are. If you
           | really like paperclips, no amount of logic can tell you
            | otherwise. What logic can tell you (and other people can)
            | is that your values conflict with each other and you have to
            | balance one against another. Turning the whole universe into
           | paperclips is counterproductive if you also value pins. If
           | you literally have no value the other person is basing their
           | arguments on, then they can't convince you to have it.
           | 
            | Luckily we get our values from a bunch of heuristics developed
           | through millions of years of biological and social evolution,
           | so we mostly have the same ones, just with different relative
           | weights.
           | 
            | That won't be true if we ever meet (or make) some other
            | sentient critters.
        
             | kazinator wrote:
              | > _You can't say values are irrational_
             | 
             | People basically do say that, though.
             | 
             | (Values can be contradictory/inconsistent. E.g. you say you
             | value self-preservation, but you also enjoy whacking your
             | head with a hammer. That would be a kind of irrational.
             | That's not what I'm referring to though.)
        
               | UnFleshedOne wrote:
                | I think they make a category mistake when they do, then.
                | Values tell you where you want to be; rationality is the
                | most accurate process for getting where you want to go,
                | and maybe for checking whether you want to be there
                | before actually getting there and checking in person. (I
                | think we basically agree btw, it is all those other
                | people who are wrong :))
        
         | varjag wrote:
          | The thing is, the common failures of rational thinking are
          | nowhere near any computational limits. Witness the dumbassery
          | of the past two years.
        
           | irrational wrote:
           | Past 5-6 years you mean.
        
             | jaredhansen wrote:
             | All past years you mean. It's not exactly a recent
             | phenomenon.
        
               | mcguire wrote:
                | Life would be easier if we could agree on one rational
                | decision in history and then just repeat it as
               | necessary.
        
               | irrational wrote:
               | It's both sides, right? Right....
        
               | mistermann wrote:
               | Logically, "both" sides seems like the correct answer to
               | me.
        
         | Tenoke wrote:
         | Nobody is disputing this. You can, however, clearly be more or
         | less 'rational', adopt better or worse heuristics, etc. which
         | is what you attempt to get from reading LessWrong or Kahneman.
        
           | nicoburns wrote:
           | But what constitutes a better heuristic is context dependent.
           | In particular, if I must make a decision in a time-
           | constrained manner then any heuristic that blows the time
           | budget is going to be worse even if it would be better given
           | more time. And one can't really know in advance how much time
           | spent thinking is optimal. So one has to pick a strategy. The
           | fact that humans have evolved to use a variety of strategies
           | fast and slow (depending on the human) suggests that there is
           | no single optimal strategy.
           | 
           | See also Gigerenzer's Ecological Rationality.
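            | 
            | (A toy sketch of one such fast heuristic, Gigerenzer-style
            | "take the best": stop at the first cue that discriminates
            | instead of weighing all of them. The cue functions are
            | invented for illustration:
            | 
            |   def take_the_best(a, b, cues):
            |       # cues: option -> 0/1, ordered by validity
            |       for cue in cues:
            |           if cue(a) != cue(b):
            |               return a if cue(a) > cue(b) else b
            |       return a  # nothing discriminates: guess
            | 
            | Under time pressure this can beat a full weighted model,
            | which is Gigerenzer's point.)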
        
             | Tenoke wrote:
             | Sure, but I doubt you actually think that everyone is
              | already operating as well as they can within the contexts
              | they are placed in. There's definitely room for
              | improvement. There are all sorts of scenarios where, even
              | knowing the nearly optimal techniques, outcomes can be
              | improved by going the TimSort way: tuning for the contexts
              | you most often find yourself in.
        
         | joe_the_user wrote:
          | Not to mention that we don't actually know with certainty the
          | probability of even simple things. The average person is
          | almost never reasoning about simple, repeatable sequences of
          | events. You know there's a chance of your car breaking down
          | each day, but you don't know the probability of that event,
          | and yet you still deal with that possibility.
        
         | yann2 wrote:
         | Correct. Rationality is Bounded. That fact won a Nobel Prize -
         | https://en.wikipedia.org/wiki/Herbert_A._Simon
         | 
          | The recommendation of the theory is: if you can't be rational
          | about a specific problem, pick another problem, preferably a
          | simpler one.
         | 
         | Unfortunately lots of chimps in the troupe are incapable of
         | doing that and therefore we shall always have drama.
        
           | ggm wrote:
           | I've made it a life rule to try to avoid conversations with
           | people who use "correct" to respond to statements, unless
           | they are in the role of a teacher.
           | 
            | Tell me, are you aware of the myriad of alternate words for
            | expressing your agreement with somebody else aside from
            | correct? You aren't here as a judge of right or wrong.
            | Semantically, philosophically, you're expressing agreement,
            | not correctness.
           | 
           | Or .. am I incorrect...?
        
           | WastingMyTime89 wrote:
           | > Correct. Rationality is Bounded. That fact won a Nobel
           | Prize.
           | 
            | It's a model, not a fact. As a model, it can't really be
            | correct, only more or less accurate.
        
             | zepto wrote:
              | > It's a model, not a fact. As a model, it can't really be
              | correct, only more or less accurate.
             | 
             | This is not true. Models of an external world may be only
             | more or less accurate, but models of other models may be
             | true or false. Mathematical proofs rely on this.
             | Rationality itself is a _model_ so models _of_ rationality
             | may be true or false.
        
               | WastingMyTime89 wrote:
                | Unless you have a very peculiar and idiosyncratic
                | definition of the word model, I am fairly confident that
                | what you are saying doesn't make much sense.
                | 
                | In economics, in a way that is not dissimilar to physics,
               | model has a precise meaning. To quote Wikipedia, it is a
               | simplified version of reality that allows us to observe,
               | understand, and make predictions about economic behavior.
               | You can't have a model of a model. That just doesn't
               | really make sense.
               | 
               | > Mathematical proofs rely on this
               | 
               | I'm confused by what you want to say here. Mathematical
               | proofs don't use models.
               | 
                | Every proved statement in mathematics can be built from
                | axioms which are presupposed true, applying logical rules
                | which are themselves part of the axiomatic system. Saying
               | that something is mathematically proved basically means
               | that given this set of rules we can build up to that
               | point.
               | 
               | > Rationality itself is a model
               | 
                | Once again I'm fairly lost as to what you are trying to
                | say. I'm fairly certain that for most accepted meanings
                | of the word model and the word rationality, rationality
                | is in fact not a model, in the same way that a dog is not
                | a theory.
        
               | md224 wrote:
               | > Mathematical proofs don't use models.
               | 
               | Maybe the person you replied to was taking a model
               | theoretic perspective?
               | 
               | https://plato.stanford.edu/entries/modeltheory-fo/
        
               | zepto wrote:
                | > Once again I'm fairly lost as to what you are trying to
                | say.
               | 
               | You may want to look up the difference between _formal_
               | models and _informal_ models.
               | 
               | Since both rationality and the paper showing that it is
               | bounded are based on _formal_ models, it is reasonable to
               | assume this is what we are talking about.
        
               | WastingMyTime89 wrote:
               | Sorry, I don't understand your argument. Are you actually
               | talking about rational choice theory?
               | 
               | > Since both rationality and the paper showing that it is
               | bounded are based on formal models
               | 
               | There is no paper showing that "rationality" is bounded.
                | Models used to consider actors making purely rational
                | choices in the sense that they are always optimizing
                | their utility functions using all available information.
                | Bounded rationality is a different way of modeling
                | actors' choice functions. It's just a different model.
                | There is no model of models.
               | 
               | Still I don't see what any of that has to do with the
                | difference between formal and informal models. "Informal
                | model" is a term I have never heard used outside of
                | policy discussions. It's basically dress-up for "because
                | the expert said so".
        
               | zepto wrote:
               | > Sorry, I don't understand your argument.
               | 
               | Understood.
               | 
               | It's worth noting that the definition of a model that you
               | said you were using doesn't match with typical
               | definitions of a formal model.
               | 
               | You aren't talking about formal models, and I accept that
               | you are only thinking in terms of economic models.
               | 
               | Perhaps that explains where the difference in
               | understanding lies.
        
             | abc_lisper wrote:
              | any model is better than naive intuition imo
        
             | loopz wrote:
             | It's a fact many people believe themselves rational and use
              | models expecting rational actors. Proof is lacking that it
              | actually works that way. The opposite is often more
              | probable, since you rarely have perfect knowledge in
             | practice. Exceptions can be games like tic-tac-toe and
             | chess.
        
               | WastingMyTime89 wrote:
                | > Proof is lacking that it actually works that way
               | 
                | I mean, everyone knows it doesn't really work that way.
               | 
               | The actual question is: does viewing the average actor as
               | trying to perfectly optimise their utility function using
               | all the information available constitute a good
               | estimation of how actors work in aggregate and does it
               | yield accurate and interesting predictions?
               | 
               | The real insight of Simon in _Models of Man_ is not that
                | actors are not in fact perfectly rational. It's that you
               | can actually model the limits of actors while keeping a
               | fairly rigorous and manageable formalization.
        
               | loopz wrote:
                | Sure, but such models are hackable/breakable, just by
               | breaking the rules or reinventing the game.
        
           | vendiddy wrote:
            | Could someone explain in layman's terms what "bounded"
           | rationality means?
        
             | glial wrote:
             | Rationality can be re-phrased as coming up with the optimal
             | solution to a problem. If you only have finite
             | compute/memory/time, your solution is 'bounded' by those
              | constraints - i.e. your job is now to find the best
              | solution possible _given the constraints_.
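              | 
              | (A toy illustration, not from any paper: search until a
              | deadline and keep the best answer found so far.
              | 
              |   import time
              | 
              |   def bounded_argmax(options, score, seconds):
              |       deadline = time.monotonic() + seconds
              |       it = iter(options)
              |       best = next(it)  # assume at least one option
              |       for c in it:
              |           if time.monotonic() > deadline:
              |               break  # budget spent: stop looking
              |           if score(c) > score(best):
              |               best = c
              |       return best  # best found, not best possible
              | 
              | The result is rational _given the budget_, even when it is
              | not the global optimum.)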
        
           | btilly wrote:
           | That Nobel was won in 1978, and is based on the fact that in
           | practice we can't be rational.
           | 
            | The demonstration that, in theory, updating a Bayesian
            | network is a computationally infeasible (NP-hard) problem
            | was G. F. Cooper's in 1990. The stronger result that
            | approximating the update is also computationally infeasible
            | was Dagum, P. & Luby, M., 1993.
           | 
           | So Simon's work relates to what I said, but isn't based on
           | it.
        
         | threatofrain wrote:
         | Daniel Kahneman has soured on his own System 1/2 theory, plus
         | his original theory discussed _bounded_ rationality and not the
         | kind of objective rationality which fell out of favor in econ
         | literature a long time ago.
        
         | [deleted]
        
         | heresie-dabord wrote:
         | > I maintain that it isn't just hard, it is computationally
         | impossible. [...] The effort of doing better is still
         | worthwhile. But the goal itself is unachievable.
         | 
         | The goal of rational thinking is not some conceit of perfection
         | [1] but debugging the runtime for a better result. Humans are
         | in fact very good at communication and at debugging language
         | errors. They have evolved a rational capacity. It can evidently
         | be developed but it needs to be exercised.
         | 
          | This is where the hypothesis of an educational system often
          | enters the discussion.
         | 
         | [1] Galef and others call the "Star Trek" Spock character a
         | Vulcan Strawman or Straw Vulcan.
         | https://en.wikipedia.org/wiki/Julia_Galef
        
         | didibus wrote:
         | That might be true given a single brain, but we as a species
         | have access to billions of brains.
         | 
          | The question is, can we organize and educate ourselves so we
          | can leverage that parallel power and let each person become an
          | expert in their area, with proper trust and incentives? And
          | manage to pass along the previous generation's computation to
          | the next, without corrupting the data?
         | 
         | Edit: And I forgot all the tools we've designed to help us
         | compute all that, of which I'd count math as a tool to help us
         | compute, and computers as another.
        
         | Jensson wrote:
         | Being rational includes being rational about computation power
         | and heuristics used on a specific choice. Therefore irrational
         | is when people make completely stupid choices that aren't
         | computationally hard to make, not that people can't solve NP-
         | hard problems.
        
         | lalaithion wrote:
         | That's why it's called "less wrong". The goal isn't to be
         | perfect, the goal is to do better. To be less wrong.
        
           | wombatmobile wrote:
           | > the goal is to do better
           | 
           | Why is that "the" goal?
           | 
           | Who sets "the" goal?
        
             | Nav_Panel wrote:
             | Yudkowsky and other prominent contributors set "the" goal
             | as a sort of revealed wisdom regarding their speculations
             | about what a super powerful AI will do.
             | 
             | Pragmatically, the goals themselves appeal to individuals
             | who want to maintain conventional (liberal) morality yet
             | also position themselves as superior, typically as a form
             | of compensation.
        
               | nitrogen wrote:
               | _position themselves as superior_
               | 
               | This is why we can't have nice things. Any time someone
               | tries to find more effective ways of making good
               | decisions or accomplishing their goals, someone has to
               | bring out the most tortured cynical interpretation to
               | tear them down.
        
               | Nav_Panel wrote:
               | Have you hung out much with rationalists?
        
               | JohnPrine wrote:
                | I consider myself a rationalist (or at least, an aspiring
                | rationalist), and people like hanging out with me.
                | Learning about this stuff has changed my life and
                | relationships for the better.
        
             | voxic11 wrote:
              | It's the goal of the LessWrong/Rationalist community.
        
               | analog31 wrote:
               | Less Wrong seems to be a manifestation of a thing that
               | comes in cycles: Something triggers the rise of a
               | "rationalist" movement, including possibly a new
               | evangelist or a new medium. Eventually, rational _ism_
               | and rational _people_ end up at a standoff. Then the
               | whole thing repeats itself after a period of time.
               | 
                | I'm probably rational _enough_ but also can't make sense
               | of much of the rationalist literature, so I simply follow
               | my own compass and hope for the best.
        
             | JohnPrine wrote:
             | The goal is to make decisions that are "better" as defined
             | by your own utility function given limited information.
              | This is also called "winning".
        
           | hanche wrote:
           | I wouldn't be surprised to learn that even being less wrong
           | is NP-hard.
        
             | whatshisface wrote:
              | The saving grace of being able to survive in the universe
              | is that it's possible to climb up NP-hard problems far
              | enough to get real results with hard work.
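              | 
              | (A sketch of that climbing on the traveling salesman
              | problem: 2-opt keeps reversing segments while doing so
              | shortens the tour. Toy code; the length() function is
              | assumed:
              | 
              |   def two_opt(tour, length):
              |       improved = True
              |       while improved:
              |           improved = False
              |           for i in range(1, len(tour) - 1):
              |               for j in range(i + 1, len(tour)):
              |                   cand = (tour[:i] + tour[i:j][::-1]
              |                           + tour[j:])
              |                   if length(cand) < length(tour):
              |                       tour, improved = cand, True
              |       return tour  # local optimum, no guarantee
              | 
              | Every accepted step is a real improvement; you just never
              | learn how far away the true optimum still is.)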
        
               | mensetmanusman wrote:
               | It also means that you might not know if the hard work is
                | climbing up or down (towards or away from) the solution.
        
               | whatshisface wrote:
               | No, you know if you're getting better or worse in an NP
               | problem because checking answers is in P.
        
               | mensetmanusman wrote:
               | If you find a solution, yes
        
               | whatshisface wrote:
               | Oh, you're thinking of NP-hard yes-no problems. Many if
               | not most NP-hard problems of practical importance,
               | including the traveling salesman, involve integer rather
               | than boolean scores.
        
               | mensetmanusman wrote:
               | Thanks for the additional clarification :)
        
               | karpierz wrote:
                | NP-hardness is by definition about yes-no decision
                | problems. The NP-hard formulation of Traveling Salesman
               | is "given the weighted graph G and an integer X, is there
               | a Hamiltonian cycle in G with total weight less than X?"
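                | 
                | (And verifying a claimed "yes" answer is easy, which is
                | what puts this formulation in NP. A toy checker,
                | assuming edge weights in a dict:
                | 
                |   def is_witness(w, verts, cycle, x):
                |       # every vertex exactly once?
                |       if sorted(cycle) != sorted(verts):
                |           return False
                |       edges = zip(cycle, cycle[1:] + cycle[:1])
                |       return sum(w[u, v] for u, v in edges) < x
                | 
                | Polynomial to check, believed exponential to find.)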
        
               | hanche wrote:
               | Indeed. And thanks! I needed a little morale booster now,
               | for reasons unrelated to this topic.
        
           | AndrewKemendo wrote:
           | I think this misses the point. Even being "less" wrong
           | requires an amount of work that even the best/smartest etc...
           | cannot consistently apply.
           | 
            | I do believe this is zero-sum in that improving on one set of
            | decisions means not applying the same rigor to others.
           | 
           | This is often seen in the form of very smart people also
           | believing conspiracy theories or throwing their hands up
           | around other massive issues. As an example, the "Rationalist
           | crowd" has de-emphasized work on climate change mitigation in
           | favor of more abstract work on AI safety.
        
             | ret2plt wrote:
             | > This is often seen in the form of very smart people also
             | believing conspiracy theories or throwing their hands up
             | around other massive issues. As an example, the
             | "Rationalist crowd" has de-emphasized work on climate
             | change mitigation in favor of more abstract work on AI
             | safety.
             | 
             | To be clear, the argument (in rationalist circles) is not
             | that climate change is no big deal, it's that there's
             | already a ton of people worrying about it, so it is better
             | to allocate some extra resources to underfunded problems.
        
           | kirse wrote:
           | Which is ironic because the pursuit of knowledge only
           | continues to increase the landscape of unknowns towards
            | infinity - the branches of the tree of undiscovered and
            | unknown knowledge continue to grow exponentially. It's as if
            | today we thought the choices were A or B, yet tomorrow we
            | discover there was a C, and the next day a D, and so forth.
            | If anything we are only discovering we are "more wrong"
            | every day.
        
             | tines wrote:
             | Actually I think this is the same fallacy as one of Zeno's
             | paradoxes, and has the same resolution. We are discovering
             | more wrong, as you say, but the "infinity" of wrongs is in
             | the direction of the infinitely small (or "infinitely
             | detailed"), not the infinitely large. In other words, every
             | time we fill in a gap in our knowledge, we create two more
             | gaps, so to speak, but nevertheless we know more than we
             | did before.
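              | 
              | (Zeno's arithmetic applies: even if every answered
              | question opens two new ones, the importance of the gaps
              | can shrink geometrically, and 1/2 + 1/4 + 1/8 + ... = 1,
              | not infinity.)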
        
         | nonameiguess wrote:
         | We should note the limitations of Bayes as well. I already
          | responded to another comment in this thread that an update
          | procedure based on seeing new evidence is necessarily bounded,
          | in how quickly it can get you to beliefs more likely to be
          | true, by your ability to actually gather that evidence, or
          | possibly even to generate it if it doesn't already exist. We
         | don't have any perfect algorithms for doing that, and it is of
         | course not a purely computational problem anyway. Take general
         | relativity. It was proposed in 1915 and only confirmed in the
         | very strong gravitational field limit in 2016, because that was
         | our first opportunity to observe a black hole merger, which is
         | not something we have the ability to recreate in a lab.
         | 
         | Even beyond the hard process bottleneck on creating or lucking
         | upon events that produce the evidence we need, however, there
         | is also the limitation that Bayes only gives you a probability.
         | It doesn't give you a decision theory or even a thresholding
         | function. For those, you need a whole lot of other things like
         | utility functions, discount rates, receiver operating
         | characteristics and an understanding of asymmetric costs of
         | false positives versus false negatives, that are often
         | different for each decision domain.
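          | 
          | (A toy sketch of that last step, with invented costs: the same
          | posterior implies different actions once false positives and
          | false negatives are priced.
          | 
          |   def should_act(p, cost_fp, cost_fn):
          |       # act iff expected cost of acting is lower,
          |       # i.e. p > cost_fp / (cost_fp + cost_fn)
          |       return p * cost_fn > (1 - p) * cost_fp
          | 
          |   should_act(0.10, cost_fp=1, cost_fn=100)   # True
          |   should_act(0.10, cost_fp=100, cost_fn=1)   # False
          | 
          | Bayes supplies p; everything else is a value judgment.)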
         | 
         | And, of course, to get a utility function meaningful for
         | humans, you need values. There is no algorithm that can give
         | you values. They're just there as a basic primitive input to
         | all other decision making procedures, yet they often conflict
         | in ways that cannot be reconciled even within a single person,
         | let alone across a society of many people.
        
         | polote wrote:
          | Well, it depends on the topic. Choosing rationally which of
          | two tomatoes is greener is easy. You are limited by your
          | ability to distinguish colors but you can still decide
          | rationally.
         | 
          | Not all questions have answers. If you want to be rational
          | when you are asked to answer those questions, you can just say
          | "I don't know".
         | 
          | At the beginning of the pandemic, politics were saying masks
          | don't work. You could just say, well, if we transmit covid by
          | air, then putting something in front of my mouth is going to
          | decrease the spread. That's what being rational is. Of course
          | that's not always going to be the right answer, but you have
          | still thought rationally.
         | 
          | I'm not really sure what you are trying to prove. Of course
          | being rational is possible. All people are rational for most of
          | their decisions.
        
           | not2b wrote:
           | A rationalist would recognize that we update our beliefs as
           | new evidence is available and not attack people for having
           | erroneous beliefs before that evidence was available. The
           | "masks don't work" advice was active for a short time in
           | March 2020 and almost immediately dumped. They thought at the
           | time that only n95 masks would be good enough, these masks
           | were in short supply and health care workers needed them,
           | this was the "politics" of it. But by mid March 2020 people
           | were already being encouraged to make cloth masks and how to
           | do it. That is when my daughter got out the sewing machine
           | and made a bunch, based on instructions from nurses.
        
             | polote wrote:
             | There was no new evidence. I'm not sure we have even
              | learned anything regarding the efficacy of masks through
              | the pandemic. All that we know was already known prior to
              | it.
        
               | btilly wrote:
               | First, there is active research and we demonstrably have
               | learned something. See https://aricjournal.biomedcentral.
               | com/articles/10.1186/s1375...,
               | https://www.nature.com/articles/s41598-020-72798-7, and
               | https://www.pnas.org/content/118/4/e2014564118 for
               | several examples.
               | 
               | Second, your simplistic analysis demonstrated that you,
               | personally, are ignorant of the real tradeoffs involved
               | in whether masks work.
               | 
               | Wearing a mask reduces how much virus leaves your mouth.
               | But when you breathe out, most of the virus is in larger
               | droplets that quickly hit the ground. However breathing
               | out through a mask creates perfect conditions to create
               | an aerosol, which can allow more of the virus to stay in
               | the air for an indefinite period of time. So there is a
               | tradeoff, and there were reasons to question whether
               | cloth masks were better than simple social distancing.
               | 
               | It turns out that what matters most is not that you get
               | exposed, but rather the initial viral load that you get.
               | You see, the virus will go on an exponential growth until
               | the relatively fixed time it takes the immune system to
               | figure things out and start shutting it down. If the
               | virus gets a solid head start, the odds of serious
               | illness go up. Therefore the lingering aerosol from a
               | mask is (except if it accumulates in poorly ventilated
               | indoor spaces) of less concern than an unmasked person
               | talking directly to you.
               | 
               | So the result is that masks work. Even crappy cloth masks
               | work.
        
               | varjag wrote:
               | ...and as they mentioned, we knew that masks work
               | already.
        
               | btilly wrote:
               | The quality of our evidence is easy to misjudge in
               | retrospect.
               | 
               | The last opportunity to study the effectiveness of
               | mandating low-quality masks in preventing community
                | spread during a pandemic was around a century ago.
               | (Literally, the Spanish Flu epidemic.) In the meantime a
               | lot of new and untried modeling tools were in use, as
               | well as updated disease models, and lots of reasons to
               | question old data.
               | 
               | See
               | https://www.albertahealthservices.ca/assets/info/ppih/if-
               | ppi... for an idea of what was reasonable for educated
               | specialists in public health to believe. Note phrases
               | like, _" There was agreement that although the evidence
               | base is poor, the use of masks in the community is likely
               | to be useful in reducing transmission from community
               | based infected persons, particularly those with
               | symptomatic illness."_
               | 
               | So it is accurate to say that we had reason to believe
               | that masks work. But it is easy to overstate how much we
               | "knew" it to be true at the time.
        
               | s1artibartfast wrote:
                | Most if not all of your linked papers are summaries of
                | earlier experiments, going back to the 1940s in some
                | cases.
               | 
               | Very little new knowledge was added.
               | 
               | >So the result is that masks work. Even crappy cloth
               | masks work.
               | 
                | I would agree if you changed "result" to "expert
                | conjecture" and "work" to "probably do something".
               | 
               | But again, this was always known.
        
               | kbelder wrote:
               | 'We' as the scientific community may not have, but 'we'
               | the unwashed public learned much.
        
               | polote wrote:
                | Being rational doesn't prevent you from being wrong. If
                | your assumption is that the government tells the truth
                | and you conclude that masks don't work, then you have
                | reasoned rationally.
                | 
                | But if you don't trust the government, and in this
                | specific case you followed them anyway, then that is not
                | rational.
        
               | mcguire wrote:
               | Aside: As a general rule of thumb, conspiracy theories
               | are not rational.
        
               | UnFleshedOne wrote:
               | 2 years ago I would agree with you 100%. Lately though
               | conspiracy theories become conspiracy facts with alarming
               | frequency. And the speed with which media reaches "We've
               | always been at war with Eastasia" zeitgeist on each shift
               | does not inspire confidence.
        
             | tunesmith wrote:
             | What a lot of people have forgotten is that in March 2020,
             | they thought COVID was droplets, not aerosol - remember all
             | the emphasis on washing hands and hand sanitizer? - and as
              | such, masks would be overkill for most. Combine that with
              | the worry that people would hoard masks when PPE was in
              | short supply for the people who would be interacting
              | directly with patients, and the initial discouragement of
              | masks seems more understandable.
             | 
             | As the science changed to suggest that COVID was aerosol,
             | scientific opinions on masks got updated as well.
             | 
             | It also didn't help that some hyper-rational people got
             | hung up on ranting about how masks weren't perfect, and how
             | the virus could still get through if you wore a mask. It
             | was as if they imagined they heard someone said "masks are
             | 100% effective" and really really wanted to register their
             | counterpoints. So they said "they don't work!" when they
             | meant "they're not 100% effective!", and other people heard
             | "they don't work!" and took it to mean "they're 0%
             | effective!" That's one of those patterns you start to see
             | all over the place when you know to look for it - people
              | confusing "there exists" and "for all".
        
           | notsureaboutpg wrote:
           | >You are limited by your ability to distinguish colors but
           | you can still decide rationally.
           | 
           | Rationally, the ability to distinguish colors varies between
            | human beings, so much so that with a sufficient number of
            | tomatoes (say 50), different people will give different
            | answers for which are the greenest.
           | 
           | Knowing that your ability to distinguish these colors of
           | tomatoes might not be as strong as, say, a tomato farmer's
           | (since he likely works with these specific fruits and colors
           | all the time), you may be rationally inclined to follow his
           | logic in choosing which are the greenest.
           | 
           | Do you follow your intuition or trust an expert? Your
           | contrived example is already difficult to actually make the
           | most rational decision for.
        
           | mcguire wrote:
            | > At the beginning of the pandemic, _politics_ were saying
            | masks don't work.
           | 
           | Are straw-man statements rational?
           | 
           | " _Then there is the infamous mask issue. Epidemiologists
           | have taken a lot of heat on this question in particular.
           | Until well into March 2020, I was skeptical about the benefit
           | of everyone wearing face masks. That skepticism was based on
           | previous scientific research as well as hypotheses about how
           | covid was transmitted that turned out to be wrong. Mask-
           | wearing has been a common practice in Asia for decades, to
           | protect against air pollution and to prevent transmitting
           | infection to others when sick. Mask-wearing for protection
           | against catching an infection became widespread in Asia
           | following the 2003 SARS outbreak, but scientific evidence on
           | the effectiveness of this strategy was limited._
           | 
           | " _Before the coronavirus pandemic, most research on face
           | masks for respiratory diseases came from two types of
           | studies: clinical settings with very sick patients, and
           | community settings during normal flu seasons. In clinical
           | settings, it was clear that well-fitting, high-quality face
           | masks, such as the N95 variety, were important protective
           | equipment for doctors and nurses against viruses that can be
           | transmitted via droplets or smaller aerosol particles. But
           | these studies also suggested careful training was required to
           | ensure that masks didn't get contaminated when surface
           | transmission was possible, as is the case with SARS.
           | Community-level evidence about mask-wearing was much less
           | compelling. Most studies showed little to no benefit to mask-
           | wearing in the case of the flu, for instance. Studies that
           | have suggested a benefit of mask-wearing were generally those
           | in which people with symptoms wore masks -- so that was the
           | advice I embraced for the coronavirus, too._
           | 
           | " _I also, like many other epidemiologists, overestimated how
           | readily the novel coronavirus would spread on surfaces -- and
           | this affected our view of masks. Early data showed that, like
           | SARS, the coronavirus could persist on surfaces for hours to
           | days, and so I was initially concerned that face masks,
           | especially ill-fitting, homemade or carelessly worn coverings
           | could become contaminated with transmissible virus. In fact,
           | I worried that this might mean wearing face masks could be
           | worse than not wearing them. This was wrong. Surface
           | transmission, it emerged, is not that big a problem for
           | covid, but transmission through air via aerosols is a big
           | source of transmission. And so it turns out that face masks
           | do work in this case._
           | 
           | " _I changed my mind on masks in March 2020, as testing
           | capacity increased and it became clear how common
           | asymptomatic and pre-symptomatic infection were (since
           | aerosols were the likely vector). I wish that I and others
           | had caught on sooner -- and better testing early on might
           | have caused an earlier revision of views -- but there was no
           | bad faith involved._ "
           | 
           | "I'm an epidemiologist. Here's what I got wrong about covid."
           | (https://www.washingtonpost.com/outlook/2021/04/20/epidemiolo
           | ...)
        
           | dahfizz wrote:
           | > You could just say, well, if we transmit covid by air, then
           | putting something in front of my mouth is going to decrease
          | the spread. That's what being rational is.
           | 
           | If I squint at a statement like this, I guess it could be
           | called rational, but it is certainly not rigorous or
           | convincing. You brush over too much and are making lots of
           | assumptions.
           | 
           | Are these statements rational?
           | 
           | The sun is warm, so if I climb a ladder I will be closer to
           | the sun and therefore warmer.
           | 
           | Masks impede airflow, so if I wear a mask I will suffocate.
           | 
           | Bleach kills germs, so drinking bleach will make me
           | healthier.
           | 
           | It is very easy to make an incorrect idea seem rational. You
           | should wear masks because rigorous science tells us that it
           | is effective. That is the only valid justification. "Common
           | sense" is used to justify a lot of junk science.
        
             | nonameiguess wrote:
             | I think yes, you can call those statements rational, but
             | that just gets at an additional level of difficulty here.
              | Bayes only gets you as far as holding the belief most
              | likely to be true, _given_ some level of seen evidence. To
              | actually get maximally probable beliefs
             | without the qualification, you need to actually gather more
             | evidence. In some cases, that may just mean accumulating
             | knowledge that other people already generated, but in some
             | cases, you may need to generate knowledge from scratch. The
             | ability to do that may be severely bounded by resource and
              | time constraints. One person can't personally do all
             | science, so now you need division of labor and assignment
             | of workers to efforts, so you need optimal matching and
             | scheduling algorithms. These are theoretically not
             | computationally intractable, but the algorithms rely upon
             | pre-existing accurate ability and preference ranking, so
             | now you need to go back to information gathering and
             | suddenly you have a bootstrapping problem here that feeding
             | your algorithm the data it needs to tell you how to gather
             | data in the first place requires you to gather data first.
        
             | clairity wrote:
             | > "You should wear masks because rigorous science tells us
             | that it is effective."
             | 
             | you've really just glossed over the hard part, which is
             | when and where masks work, which is in turn the difficult
             | political problem to solve.
             | 
             | simplifying, covid spreads mouth-to-mouth with a brief
             | stint in the air, not mouth-to-air-then-(much)-later-to-
             | mouth, which is the mediopolitical narrative that's being
             | pushed vehemently but irrationally, and upon which masking
             | policies are erroneously based.
             | 
             | what's always ignored in these narratives is that the virus
             | falls apart quickly all by itself outside the cozy confines
             | of the body, not to mention floats away to oblivion quickly
             | when outside.
             | 
             | if we're really concerned about masks working, we'd have to
             | force people to wear them among friends and family in
             | private spaces like homes, not outside and in grocery
             | stores where they have basically no effect.
             | 
             | "masks work" is a grossly overreaching blanket political
             | statement, not a summary of "the science". scientific
             | evidence suggests masks _reduce_ droplets (and aerosols,
              | with better masks) being ejected into the air. there's
             | less clear evidence that it reduces airborne viral
             | particles being inhaled through the mask. but there's
             | almost no evidence that the way we've deployed masks is
             | doing much other than signalling our fears and concerns.
             | 
             | i'd be open to supporting mask policies that are based on
             | actual evidence (e.g., wear them when socializing at home),
             | but not the mediopolitically fearmongering policies we
             | have.
        
         | tisthetruth wrote:
         | Not being jacked up on sugar and caffeine can help
         | tremendously.
         | 
         | I would still like to see some studies which delve into whether
         | sugar and caffeine are catalysts for biasing us towards system
         | 1 and how they affect system 2, mindfulness, patience, etc...
        
         | ulucs wrote:
         | Why bother reasoning with NP-hardness when you can just invoke
         | incompleteness? No brain power limitations are needed.
        
           | drdeca wrote:
           | Because incompleteness isn't really relevant here?
           | 
           | Are you just saying "people aren't logically omniscient, and
           | can't be because of incompleteness"?
        
         | yibg wrote:
         | Being NP-hard doesn't make it computationally impossible in all
         | cases though. So while it might be computationally impossible
         | to be rational in ALL cases, it could be computationally
         | possible to be rational in some (or even many) cases. I think
         | that's the goal to strive for.
        
         | garbagetime wrote:
         | > We should all know that given a belief about the world, and
         | evidence, Bayes' Theorem describes how to update our beliefs.
         | 
         | Should we? What of the problem of induction?
        
         | dwd wrote:
          | Memory and learning are additive - we don't have a delete key,
          | except where a model can be completely replaced with something
          | new, which is usually at the simple-fact level. What we learn
          | is then assimilated into the rest of what we believe (like a
          | wave function collapse), but this allows for discordant ideas
          | at a distance - irrationality!
        
         | strulovich wrote:
         | NP hard problems get abused for justifying things they cannot.
         | 
          | Even if an NP-hard problem cannot be approximated, that does
          | not mean the average input cannot be solved efficiently.
         | 
         | Examples:
         | 
         | - An NP hard problem is not sufficient for building crypto.
         | 
          | - Type solving for many programming languages is EXPTIME-
         | complete, yet those languages prosper and compile just fine.
         | 
         | Beware the idea of taking a mathematical concept and proof and
         | inducing from it to the world outside the model.
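          | 
          | (A concrete instance of that gap, as a toy sketch: 0/1
          | knapsack is NP-hard in general, yet the classic dynamic
          | program handles instances with modest capacities just fine.
          | 
          |   def knapsack(items, cap):
          |       # items: (weight, value) pairs; O(n * cap) time
          |       best = [0] * (cap + 1)
          |       for w, v in items:
          |           for c in range(cap, w - 1, -1):
          |               best[c] = max(best[c], best[c - w] + v)
          |       return best[cap]
          | 
          |   knapsack([(3, 4), (4, 5), (2, 3)], cap=6)  # => 8
          | 
          | Hardness proofs speak about the worst case, not about the
          | inputs you actually meet.)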
        
           | nostrademons wrote:
           | And human beings make approximate solutions for the average
           | input all the time. That's what gut feelings, instincts,
           | heuristics, and motivated reasoning are, along with all the
           | other shortcuts we take to function in daily life.
           | 
           | The article is asking why it's so hard to be _rational_
           | though, i.e. follow a logically-valid set of inferences
           | forward to an unambiguous conclusion. Assuming one of your
           | premises is that correct rationality implies reasoning
           | statistically about a network of interrelated beliefs, the
           | uncomputability of a Bayesian net is relevant to that.
        
           | btilly wrote:
           | You are correct. For example the worst and average cases for
           | the Simplex Method are dramatically different.
           | 
           | However, in practice, complex Bayesian nets do wind up being
           | computationally intractable. Therefore attempts to build real
           | world machine learning systems consistently find themselves
           | going to computationally tractable heuristic methods with
           | rather obvious failure modes.
        
           | strulovich wrote:
            | Also, adding on to my previous comment: for an interesting
            | take on the limitations of NP-hardness' applicability to
            | real-life problems, see Parameterized Complexity:
           | 
           | https://en.wikipedia.org/wiki/Parameterized_complexity
        
           | User23 wrote:
           | Similarly even mediocre programmers do a pretty good job
           | writing programs that halt.
        
           | DiggyJohnson wrote:
           | First of all I do see that you called it an example; I don't
           | think you're straw-manning or anything:
           | 
           | I think using chaos theory / Bayesian concepts is a
           | significantly better metaphor for "life as we experience it"
           | than it is for the examples you gave.
        
           | amelius wrote:
           | Ok, so what is the class of problems that is hard for any
           | input?
        
             | MichaelZuo wrote:
             | Reducing entropy.
        
         | lisper wrote:
         | Ironically, some of the most irrational people I know are the
         | ones who profess to hew to rationality, to the point where, in
         | certain circles, "rationality" has become a sort of cult. This
         | is particularly evident in militant anti-theism (whose
         | adherents insist that the only possible explanation for someone
         | believing in God is that they are idiots or otherwise mentally
         | deficient), hard-core libertarians (who, ironically, end up
         | politically aligned with hard-core fundamentalist Christians,
         | at least in the U.S.) and a particularly weird strain of this
         | disease that causes people to subscribe to (and actively
         | proselytize!) the many-worlds interpretation of quantum
         | mechanics. It's bizarre, and unendingly frustrating. Sometimes
         | I feel like I'm the only rational creature in the universe
         | because, of course, none of _my_ beliefs are anything at all
         | like theirs.
        
           | btilly wrote:
           | How do you know that they are the ones who are being
           | irrational here, and not you?
           | 
           | This is a serious question. We should always challenge our
           | preconceptions. To take your examples:
           | 
           | 1. Traditional Judeo-Christian religions all claim we should
           | believe because of claims made in holy books of questionable
           | provenance, held by primitive people who believed things like
           | (for example) disease being caused by demons. What rational
           | reason is there for believing these holy books to be
           | particularly truthful? (I was careful to not include
           | Buddhism, whose basis is in experiences that people have
           | while in altered states of consciousness from meditation.)
           | 
           | 2. The shortcomings of libertarianism involve various
           | tragedies of the commons. (My favorite book on this being,
           | _The Logic of Collective Action_.) However the evidence in
           | favor of most government interventions is rather weak. And
           | the evidence is very strong that well-intended government
           | interventions predictably will, after regulatory capture,
           | wind up creating severe problems of their own. How do you
           | know that the interventions which you like will actually
           | lead to good results? (Note, both major US parties are uneasy
           | coalitions of convenience kept together only by the electoral
           | realities of winner-takes-all voting. On the left, big
           | labor and environmentalism are also uncomfortable
           | bedfellows.)
           | 
           | 3. To the extent that the observer is described by quantum
           | mechanics, many-worlds is provably a correct description of
           | the process of observation. In the absence of concrete
           | evidence that quantum mechanics breaks down for observers
           | like us, what rational reason is there to advocate for any
           | other interpretation? (The fact that it completely violates
           | our preconceptions about how the world should work is an
           | emotional argument, not a rational one.)
        
             | lisper wrote:
             | I kind of intended that comment to be ironic self-
             | deprecating humor because, of course, I have no way of
             | knowing whether or not I'm being irrational. Irrational
             | people think they're rational, and so the fact that I think
             | I'm rational does not mean that I am. But it's likewise for
             | everyone. The real point is that everyone ought to have a
             | little more humility about their own rationality
             | (especially all the idiots who are downvoting my original
             | comments. Now _they_ are being totally irrational!)
        
               | lisper wrote:
               | Too late to edit the above comment, but just for the
               | record, this is my actual response to the many-worlders:
               | 
               | http://blog.rongarret.info/2019/07/the-trouble-with-many-
               | wor...
        
               | breuleux wrote:
               | Thanks, that was interesting :)
               | 
               | One thing I'm curious about: I haven't read the
               | literature all that well, but my personal understanding
               | of MWI, after trying to wrap my head around it, is that
               | there's probably no branching or peeling at all: every
               | possible configuration of the universe immutably exists
               | and is associated with a complex amplitude. What does
               | change are the amplitudes. When I make a choice at point
               | A and the universe "splits" into B and C, the only thing
               | that happens is that the amplitude in bucket A is split
               | into buckets B and C. But there's no reason to think A, B
               | and C were ever empty or will ever be empty: after all,
               | some other state Z might pour amplitude into A at the
               | same time A pours into B and C. We might even currently
               | be in a steady state where the universal wavefunction is
               | perfectly static, because every single "branch" is
               | perfectly compensated by a "join". If so, MWI would
               | challenge the very idea that existence is a binary
               | predicate (it's actually a continuous complex amplitude).
               | I'm honestly not sure how we're even supposed to reason
               | about that thing.
               | 
               | Does that make any sense, or am I way off base?
        
               | btilly wrote:
               | I read it, but from it you seem to be making three
               | points.
               | 
               | 1. Many worlds is indeed what QM predicts should happen.
               | 
               | 2. Popular descriptions are oversimplified and the full
               | explanation is very complicated.
               | 
               | 3. Even if many worlds is true, it doesn't change my
               | experience and should not rationally change how I act
               | when faced with quantum uncertainty.
               | 
               | If I am correct, then I'm in violent agreement with all
               | three points. And am left with, "So until more data, I
               | will provisionally accept many worlds as the best
               | explanation."
               | 
               | My impression is that you seem to be left with, "If it is
               | true, then it is irrelevant to my life, and so I don't
               | care about whether it might be true."
        
               | lisper wrote:
               | > Many worlds is indeed what QM predicts should happen.
               | 
               | No. Many-worlds is what the SE predicts should happen.
               | But the SE != QM. MW does not explain the Born rule,
               | which is part of QM's predictions. MW is also violently
               | at odds with subjective experience. So MW is not a good
               | explanation of what is observed.
        
           | hindsightbias wrote:
           | Watching the SSC and NYTimes drama was pretty eye-opening
           | about rationalists' rational discourse.
           | 
           | Even when SA himself eventually started questioning his
           | response/allegations, few of the mob (there really is no
           | other word for it) would accept it. All absolutist and
           | conspiracy-laden.
           | 
           | PG said keep your identity small. I've found few rationalists
           | or libertarians of any bent who meet that criterion.
        
       | legrande wrote:
       | I try to avoid _mind viruses_, or ideas that can hijack your
       | decisions and thought process and take over. Think of a mind
       | virus as a sort of dangerous meme that underpins everything you
       | do. This is why reasoning from first principles and making
       | decisions on sound foundations is better, absent some sort of
       | virulent dogma.
        
         | OnACoffeeBreak wrote:
         | Sci-fi novel "Lexicon" by Max Barry explores the idea of words
         | used for persuasion to the extent of actually hacking the brain
         | via spoken word to take control of the subject's thoughts and
         | actions.
        
           | FinanceAnon wrote:
           | I thought about something similar in the context of
           | "dangerous" AI. In a hypothetical scenario where super-smart
           | AI got control of the internet and all the devices, would it
           | be able to start controlling people?
        
         | jjbinx007 wrote:
         | Viruses. Virii isn't the plural of virus.
         | 
         | There's a YouTube channel (1) called Street Epistemology which
         | has a guy interview members of the public and ask them if they
         | have a belief they hold to be true such as "the supernatural
         | exists" or "climate change is real" or "x is better than y".
         | 
         | He then asks them to estimate how certain they are that it's
         | true.
         | 
         | Then they talk. The interviewer asks a question and makes
         | notes, then tries to summarise the reply. He questions how they
         | know what they think they know and at the end he asks them to
         | again say how confident they are that what they said is true.
         | 
         | It's fascinating to see people actually talk about and discuss
         | what are usually unsaid thoughts and it shows some glaring
         | biases and logical fallacies.
         | 
         | (1) https://youtube.com/c/AnthonyMagnabosco210
        
           | WhompingWindows wrote:
           | I may be wrong, but "Mind Virii" could be using the genitive
           | or possessive form of Virus, like "Mind of a Virus" or
           | "Virus's Mind".
        
           | legrande wrote:
           | > Virii isn't the plural of virus.
           | 
           | Thanks for correcting me. I will refrain from ever using
           | _virii_ again!
           | 
           | https://en.wikipedia.org/wiki/Plural_of_virus
        
             | digitalsushi wrote:
             | I knew what you meant. I feel like we almost have our own
             | culture, sometimes. Weird.
        
         | jklinger410 wrote:
         | Glad to hear you aren't the only person thinking of the mind
         | virus idea!
         | 
         | Exactly what you said. Once you accept one toxic thought, it
         | tends to branch out into other decisions. Unfortunately there
         | are many, many memes out there ready to cause an infection.
         | 
         | These things can be fatal.
        
       | _moof wrote:
       | It's impossible to be perfectly rational without perfect and
       | complete information. Crucially, for questions that affect us
       | personally, this includes perfect insight. I've yet to meet
       | anyone who qualifies.
        
         | jhgb wrote:
         | Why? Are you equating rationality with omniscience? Then why
         | have the separate word "rationality" in the first place?
        
         | rafaelero wrote:
         | What a ridiculous take. Rationality is not the same as
         | omniscience. Being rational is optimizing predictability by
         | using the best available evidence we have. No one is claiming
         | to know the answer for some future event, but trying to reach
         | the best way to aggregate the current information.
        
         | flixic wrote:
         | That's why I appreciate that a rationality website is called
         | LessWrong. Of course you can't be perfectly rational, but you
         | can be less wrong.
        
           | _moof wrote:
           | Thanks for the reply. I think what I was trying to say by
           | implication is that I think folks fall so far short of the
           | ideal that it's actually a regression. Related to this is
           | what I see as an implicit belief that "rationality" means
           | completely dismissing the lived experience of actual humans,
           | i.e. lots of people are suffering but hey, at least we
           | applied principles in a soulless and mathematical way,
           | because that's what's important.
        
             | UnFleshedOne wrote:
             | Soulless, mathematically applied principles are a good way
             | to actually reduce the number of people suffering.
             | Assuming that's what your goal was from the start. If you
             | only look at "lived experience" and then make a random
             | change you feel might help, but don't actually check if it
             | does, you can make things worse (see the outcomes of all
             | the aid to Africa for example).
        
       | rafaelero wrote:
       | I am seeing a lot of "institutions lied to us and are actively
       | keeping information from us" when people try to justify acting
       | irrationally. I don't agree with this premise at all. What
       | do you mean they keep information from you? This assumes that
       | information can be contained, which in most cases is impossible.
       | There is always leakage.
       | 
       | Now, to be more generous, I will assume that people are actually
       | criticizing how "institutions impose a mainstream view that is
       | difficult to replace even when the facts say it should". To that I
       | say: fine. But even in this case, there should be enough
       | resources to form a rational opinion on the matter (with
       | probabilistic reasoning). Hell, I have a lot of non-orthodox
       | opinions that are so far outside the Overton Window that I can
       | rarely discuss them. And even in these cases, the internet and
       | Google Scholar/Sci-hub were sources that helped me explore them.
       | 
       | So, I have no sympathy for this "institutions lied to us, let me
       | believe now whatever I want" bullshit.
        
       | HPsquared wrote:
       | It's irrational to pretend as if we are rational.
        
       | [deleted]
        
       | eevilspock wrote:
       | Rational thought is important, but not sufficient. For example,
       | moral conscience is a far more important trait to me. Some people
       | will argue that pure reason is enough to establish a sound moral
       | system; I don't agree but that is a debate for another time.
       | Looking at the end result, Greg is not someone I admire or would
       | want to be:
       | 
       |  _> Greg...became a director at a hedge fund. His net worth is
       | now several thousand times my own._
        
       | nathias wrote:
       | Ah yes, the modern rationalists, few things are as cringe as
       | modern adaptation of classical intellectual currents. Like reddit
       | atheism, it does a great disservice to the concept from which
       | they steal their name. They have no education beyond their narrow
       | limits, no interest in what lies beyond their time or their
       | common sense.
        
         | jgeada wrote:
         | And they are ever so full of themselves. They're a perfect
         | embodiment of Dunning-Kruger.
        
         | TheGigaChad wrote:
         | Idiot dumbass, get cancer and die squealing like a lab rat.
        
         | MrBuddyCasino wrote:
         | Sure, naive rationalism is intellectually dead, but post-
         | rationalism deserves a better endorsement, thus the downvotes.
         | I suspect most people aren't yet familiar with the discourse.
         | If I was more qualified I'd write it myself, but alas.
        
           | jhgb wrote:
           | What is this "naive rationalism" and "post-rationalism"? And
           | how is rationalism dead in the first place? Did science and
           | logic suddenly stop working without us noticing?
        
       | kerblang wrote:
       | My problem in everyday work is so often I have to deal with so-
       | called software engineers who fancy themselves quite the
       | scientific thinkers but whose irrationality borders on
       | delusional. In fact a lot of them believe "I'm very smart, so I
       | am therefore the most rational" which is obviously not true at
       | all. This will probably make a lot of so-called software
       | engineers angry, but I tend to think of the non-technical folk
       | as the rational ones, and much easier to deal with as a result.
       | Purely anecdotal though.
        
         | BurningFrog wrote:
         | You won't make any engineers angry.
         | 
         | We know you're talking about _other_ engineers, and we agree
         | about those fools!
        
           | kerblang wrote:
           | I appreciate the humor, but I'm not. There are bitter
           | disagreements based on different interpretations of the
           | facts, and there are bitter disagreements based on a complete
           | disregard for the facts, a refusal to verify assumptions, a
           | persistent use of arrogance as a substitute for competence,
           | blaming the tools for failures of the person using them, and
           | more. In fact there is nothing so maddening as dealing with a
           | delusional person and being told, "I don't know why you're
           | always getting in arguments with them!" as if it's just one
           | of those "personality conflicts" - I would describe it as
           | practically a personality disorder conflict.
        
         | DamnYuppie wrote:
         | I have observed that behavior in many other professions where
         | the participants view themselves as very smart. Physicians and
         | lawyers are at the top of that list.
        
       | marsven_422 wrote:
       | We are human, glorious humans.
        
       | okamiueru wrote:
       | It's going against entropy. There are few ways to be rational,
       | and infinitely many ways to be irrational.
        
       | alecst wrote:
       | It's really hard (for me, and I imagine, for everyone else) to
       | not put _myself_ into my views and opinions. Like, when someone
       | shows me that I'm wrong, it's natural for me to feel attacked,
       | instead of just taking it as a learning moment. Noticing when
       | this happens and working with it has been my main struggle in
       | learning how to be more rational. Those views and opinions really
       | don't need to be a part of what I consider "myself."
       | 
       | Rationality, to me, is really about an open-minded approach to
       | beliefs. Allowing multiple beliefs to overlap, to compete, to
       | adapt, without interfering too much with the process.
        
         | sjg007 wrote:
         | If you demonstrate an open mind when someone says you're wrong
         | you are more likely to open their mind. That's a win.
         | 
         | Focus on yourself and controlling your emotions. Be the calm.
        
         | polote wrote:
         | The basis of a rational decision is to work with hypotheses.
         | When someone shows you that you are wrong, just ask yourself:
         | which hypotheses is my belief based on? Did their points show
         | that the logic between my hypotheses and my opinion was
         | flawed? Did they show that my hypotheses were false?
         | 
         | If you want to be rational about an opinion, you have to think
         | first, "what are my hypotheses?" Most people start with the
         | opinion and then work down to the hypotheses. It can't work
         | like that. It's the hypotheses plus the logic that should
         | create an opinion, not the other way around.
        
         | XorNot wrote:
         | > Allowing multiple beliefs to overlap
         | 
         | This doesn't seem very rational. If your beliefs are in
         | conflict and you're content to not resolve that, then pretty
         | much by definition you're accepting a logical inconsistency.
         | 
         | If resolving the intersection doesn't lead to a new stable
         | belief system, then aren't you basically going with "whatever
         | I'm feeling that day"?
        
           | [deleted]
        
           | alecst wrote:
           | It's an ambitious and admirable goal to be completely
           | logically consistent, but I've given up on that. Sometimes
           | there are two different but consistent stories for the same
           | thing. I get that maybe it doesn't seem rational, but
           | sometimes there's no way to pick between stories.
           | 
           | And, also, sometimes you _think_ you've settled on the right
           | path, but then you later get a new piece of information and
           | have to reevaluate.
           | 
           | So to me it's not so cut and dry.
        
             | mindslight wrote:
             | Your thinking is most certainly rational. The
             | contrapositive of Godel's incompleteness theorem tells us
             | that any framework with sufficient explanatory power that
             | answers every question will necessarily contain
             | contradictions. Since we attempt to reason about
             | everything, our framework is necessarily large enough to
             | be full of contradictions. Since we've got to
             | deal with contradictions, they are not something to be
             | avoided but rather _acknowledged_. If you're not
             | acknowledging the contradictions and the "opposite side"
             | for the implications you visit, then you will miss when
             | that "other side" starts making more sense than the chain
             | you're following. Not doing this means ending up at a
             | nonsensical position while ignoring its contradictory
             | obvious truth, a result we call cognitive dissonance.
             | 
             | This dual-thinking is related to the computer security
             | mindset - you can't naively write code thinking your
             | assertions will simply hold as you intend, but rather you
             | need to be continually examining what every assertion
             | "gives away" to a hostile counterparty.
             | 
             | There are alternative systems of logic that attempt to
             | formalize reasoning in the presence of contradictions, to
             | keep a single contradiction from being able to prove
             | everything. For example, intuitionistic logic and
             | paraconsistent logic. These feel much more in line with
             | reasoning in an _open world_ where a lack of a negative
             | doesn't necessarily imply truth. The focus on a singular
             | "logic" that asserts that everything has some single
             | rational "answer" is a source of much of our modern strife.
        
           | karmakaze wrote:
           | People who gain knowledge by adding to a consistent/stable
           | belief system are the ones who have the most difficulty
           | adapting to new situations and processing new information
           | that may upend volumes of settled knowledge. You can
           | recognize them as the dogmatic types that remember the rules
           | but forget how/why they adopted them and are at a loss to
           | update them.
        
             | antisthenes wrote:
             | > People who gain knowledge by adding to a
             | consistent/stable belief system are the ones who have the
             | most difficulty adapting to new situations and processing
             | new information that may upend volumes of settled
             | knowledge.
             | 
             | That's such an incredibly rare occurrence, that having a
             | stable belief system far outweighs its potential drawbacks.
             | Not to mention that rationality itself encompasses the
             | ability to make such a switch anyway if the new information
             | actually does upend volumes of "settled" knowledge.
             | 
             | A much bigger problem, though, is people lacking critical
             | thinking skills to adequately assign probabilities to the
             | new information being valuable/useful/correct.
             | 
             | Hint: it's very low. (in the current stage of civilization,
             | there are definitely periods where it was different).
        
               | karmakaze wrote:
               | We may be in agreement and only categorizing 'stable'
               | differently. Of course you want a single-coherent world
               | view. What doesn't work well is if inferred or partial-
               | case knowledge is committed as rigid facts that are
               | incompatible with new information.
        
           | jcims wrote:
           | >This doesn't seem very rational. If your beliefs are in
           | conflict and you're content to not resolve that, then pretty
           | much by definition you're accepting a logical inconsistency.
           | 
           | This is just my perspective, but very few beliefs or values
           | map to the whole of reality...they tend to bind to certain
           | aspects of it with a variable priority along the spectrum of
           | that particular dimension, whether it's personal agency, the
           | color red, public health, spiders, etc.
           | 
           | However, reality rarely gives us the ability to take a
           | position purely on one factor...nearly every context in which
           | a decision is required operates at the nexus of an
           | uncountable number of these dimensions. Some you can feel
           | swelling to the fore as their slope in your mental 'values'
           | model increases, others stay dormant because you don't see
           | how they apply. This is how most of my decisions that might
           | look outwardly 'inconsistent' arise: there are confounding
           | factors that dominate the topology and steer me in a
           | different direction.
        
           | claudiawerner wrote:
           | I've personally come to see this as a more complicated issue.
           | Often, rational priorities contradict and overlap in scope -
           | for example, discrepancies between moral reasoning and
           | instrumental reasoning. Although I try to be reasonable about
           | these, it's not always possible or preferable to side with
           | one over the other.
           | 
           | However, the drive for total and pure consistency is also
           | misguided, in my judgement. One reason why we usually feel
           | so motivated and conflicted by inconsistency (to the point
           | where it can lead to depression) is the psychological effect
           | of cognitive dissonance. It's not clear to me that the only way
           | to quieten cognitive dissonance is to resolve the dissenting
           | thoughts.
           | 
           | Another way is to accept that not everything needs to be
           | resolved. This can be great for mental health - again, just
           | in my experience. Don't let the (sometimes irrational)
           | effects of cognitive dissonance override your decision
           | making. Resolution can work, but so can acceptance.
        
           | nvilcins wrote:
           | We all operate with abstractions and simplifications -
           | because it's impractical (and actually impossible given the
           | complexity of the world) to process and evaluate every single
           | detail.
           | 
           | Dealing with contradictions in our own beliefs (paradoxes) is
           | a part of life. The rational approach is to accept that and
           | "fuse" those beliefs carefully, not (a) accept one and reject
           | the others or (b) avoid the topic entirely.
        
             | lotsofpulp wrote:
             | The rational approach is to acknowledge that you do not
             | have sufficient information to proceed, or acknowledge the
             | various assumptions (a better word than "belief") that you
             | are using.
             | 
             | If you are using contradicting assumptions, then you should
             | probably check to see if you are doing so because you want
             | the conclusion that you are getting from the assumption.
        
               | amanaplanacanal wrote:
               | We make decisions based on imperfect information and
               | conflicting values every day. We generally can't wait
               | until we have sufficient information to proceed.
        
               | lotsofpulp wrote:
               | That does not require using conflicting assumptions
               | though.
        
         | bluetomcat wrote:
         | You can only be rational within a greater framework defined by
         | a set of beliefs. When society at large believes that market
         | capitalism is the only way for promoting prosperity, the
         | rational action for a single individual is to get a job, pay
         | the bills and have a life. Other possible actions might have a
         | stronger moral justification, but aren't as beneficial or
         | rational for the individual.
        
           | s1artibartfast wrote:
           | There is no division between moral action and rationality.
           | People just pick what they wish to optimize for. You can
           | rationally pursue any moral cause just as easily as personal
           | comfort.
        
       | MrPowers wrote:
       | Studying logical fallacies and behavioral economics biases has
       | been the best way for me to become more rational. I'm constantly
       | calling myself out for confirmation bias, home country bias, and
       | the recency effect in my internal investment thought process.
       | 
       | Learning about logical fallacies and identifying them in
       | conversations is great. Don't tell the counterparty about their
       | logical fallacies, though, because that's off-putting. Just note
       | them internally for a more rational inner dialogue.
       | 
       | Learning other languages and cultures is another way to learn
       | about how different societies interact with objective truth.
       | Living other places taught me a lot about how denial works in
       | different places.
       | 
       | Thinking rationally is quite hard and I've learned how to abandon
       | it in a lot of situations in favor of human emotions. How someone
       | feels is more important than how they should feel.
        
         | newbamboo wrote:
         | Some are grateful to have them pointed out, after a bit of
         | initial discomfort and resistance. Didn't work out so well for
         | Socrates of course, but we're more enlightened now.
        
           | Matticus_Rex wrote:
           | > but we're more enlightened now
           | 
           | We hope.
        
         | nostromo wrote:
         | The sunk cost fallacy is particularly important to learn about
         | and teach your children about.
         | 
         | I see it everywhere, from my own decision making process to
         | international politics. Just this morning I was thinking about
         | it as I read the news about the US leaving Afghanistan, and
         | last week talking with a friend who is staying at a bad job.
        
           | mcguire wrote:
           | Here's a question for you: what is the difference between the
           | sunk cost fallacy and persistence?
           | 
           | And here's the answer: Persistence is good when it is
           | successful. If the activity is unsuccessful, it's an example
           | of the irrational sunk cost fallacy. (Making decisions
           | without knowledge of future events is quite hard.)
           | 
           | And the important lesson: If you bail at the first sign of
           | adversity, no one can ever accuse you of being irrational. Of
           | course, as the old saying goes, all progress is made due to
           | the irrational.
        
             | clairity wrote:
             | that's not irrationality, that's decision-making under
             | uncertainty, which is the norm, not the exception.
             | probabilities are dynamic, information is imperfect, and so
             | decision-making must incorporate that uncertainty.
             | 
             | the sunk cost fallacy is simply considering existing loss
             | when deciding on continued investment (in time, money and
             | other resources), when you should only consider future cost
             | for future benefit. it's thinking erroneously that existing
             | loss is not already locked in, that it's salvageable
             | somehow. but no, it's already lost.
             | 
             | in a project with continuously updating probabilities of
             | success, and under imperfect information, the go-or-no-go
             | decision should only be based on the likelihood of future
             | gains exceeding future losses, not future+existing losses.
             | 
             | in this framework, persistence would be having credible
             | evidence (e.g., non-public information), not just belief,
             | of the likelihood of future net gain relative to
             | opportunity cost. it'd be irrational to be persistent
             | simply on belief rather than credible information and
             | probability estimation.
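             | 
             | a concrete toy version of the go-or-no-go test (hypothetical
             | numbers):
             | 
             |   # $80k already burned (sunk, locked in either way).
             |   # finishing costs $30k more, with a 60% chance of a $100k
             |   # payoff. the test compares only future gains to future costs.
             |   sunk = 80_000                # irrelevant to the decision
             |   future_cost = 30_000
             |   p_success, payoff = 0.6, 100_000
             | 
             |   ev_continue = p_success * payoff - future_cost   # +30,000
             |   ev_stop = 0                                      # walk away
             | 
             |   print("continue" if ev_continue > ev_stop else "stop")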
        
             | aidenn0 wrote:
             | The difference between sunk-cost fallacy and persistence is
             | that of motivation. If you keep doing something because
             | "you've worked so hard already" then that's sunk-cost
             | fallacy. If you keep doing something because "success is
             | just around the corner" then that's persistence.
             | 
             | You can't go back in time and not work hard on something,
             | so whether or not you should continue is purely a function
             | of whether or not you think you will succeed, not a
             | function of how much effort you've already put into it.
        
         | oldsklgdfth wrote:
         | In an attempt to catch myself in the act of committing logical
         | fallacies, I have a flash card app on my phone. One of the sets
         | I have is of logical fallacies. Educating myself has helped
         | make me more aware of them and of when I fall victim to them.
         | 
         | It's not an easy task. But 10 minutes a day can add up and
         | reinforce that information.
         | 
         | A related idea is cognitive distortion. It's basically an
         | irrational thought pattern that perpetuates negative emotions
         | and a distorted view of reality. One example many here can
         | relate to is imposter syndrome. But to feel like an imposter
         | you have to overlook your achievements and assets and cherry-
         | pick negative data points.
        
         | wyager wrote:
         | "Logical fallacies" are mostly Boolean/Aristotelian and
         | identifying them is completely useless and/or counterproductive
         | in 99% of real world scenarios. Most of your reasoning should
         | be Bayesian, not Boolean, and under Bayesian reasoning a lot of
         | "fallacies" like sunk cost, slippery slope, etc. are actually
         | powerful heuristics for EV optimization.
        
           | jitter_ wrote:
           | > under Bayesian reasoning a lot of "fallacies" like sunk
           | cost, slippery slope, etc. are actually powerful heuristics
           | for EV optimization.
           | 
           | Can you elaborate on that?
           | 
           | This really piqued my interest. I feel like logic is easy to
           | apply retrospectively (especially so for spotting fallacies),
           | but trying to catch myself in a fallacy in the present feels
           | like excessive second-guessing and overanalyzing. The sort
           | that prevents forward momentum and learning.
           | 
           | Would you by any chance have any recommendations on reading
           | on the topic?
        
             | wyager wrote:
             | Sure. Fallacies, as usually stated, tell you when something
             | that feels like a logical entailment isn't actually a
             | logical entailment.
             | 
             | Intuitively, people find "bob is an idiot so he's wrong" a
             | reasonable statement.
             | 
             | Technically, the implication does not hold (stupid people
             | can be correct) and this is an ad hominem fallacy.
             | 
             | However, if we analyze this statement from a Bayesian
             | standpoint (which we should), the rules of entailment are
             | different and actually bob being stupid is _evidence_ that
             | he's wrong. So maybe this is actually a pretty reasonable
             | thing to say! Certainly reasonable people should use
             | speakers' intelligence when deciding how much to trust
             | speakers' claims, even though this is narrowly "fallacious"
             | in an Aristotelian sense.
             | 
             | I'm not aware of any reading on this topic. It seems under-
             | explored in my circles. However I know some other people
             | have been having similar thoughts recently.
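             | 
             | To make that concrete with made-up numbers (a sketch, not a
             | claim about real base rates):
             | 
             |   # Suppose half of all claims are wrong, and wrong claims
             |   # come from idiots more often than right claims do.
             |   p_wrong = 0.5
             |   p_idiot_given_wrong = 0.6
             |   p_idiot_given_right = 0.2
             | 
             |   # Bayes' rule: P(wrong | claimant is an idiot)
             |   p_idiot = (p_idiot_given_wrong * p_wrong
             |              + p_idiot_given_right * (1 - p_wrong))
             |   print(p_idiot_given_wrong * p_wrong / p_idiot)  # 0.75
             | 
             | So "bob is an idiot" shifts the odds toward "he's wrong"
             | without proving it, which is exactly how evidence, rather
             | than entailment, behaves.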
        
       | PicassoCTs wrote:
       | I find the distinction between emotions and logic to be quite
       | synthetic. Emotion is nothing but logic, just hard-coded,
       | subconscious, and hard to trace back from the inside. A lot of
       | "rational" thought, though, falls into a similar category, as the
       | emotionally pre-chosen outcome is just decorated with "rational"
       | arguments. The reason ultimately is the same as everywhere in
       | life: economics. In this case, energy economics. Heuristics and
       | early-outs are more desirable than a long, energy-intensive
       | search of a complex space that comes to an indecisive conclusion
       | and wanders between local maxima.
       | 
       | The really interesting thing here is the answer to why emotions
       | work as they do, and what the patterns and bits are that trigger
       | them. To turn over that particular rock is to go to some deeply
       | disturbing places, and to lose the illusion that emotions make
       | one more "human" - meanwhile, if one's reaction is more hard-
       | coded, shouldn't it be considered more machine-like?
        
       | joelbondurant wrote:
       | USA members need the Fact-Check algorithm integrated into
       | permanent surgically installed face masks.
        
       | mncharity wrote:
       | Jim Keller (famous cpu designer; Lex Fridman interview)[1]:
       | "Really? To get out of all your assumptions, you think that's not
       | going to be unbelievably painful?" "Imagine 99% of your thought
       | process is protecting your self conception, and 98% of that's
       | wrong". "For a long time I've suspected you could get better
       | [...] think more clearly, take things apart [...] there are lots
       | of examples of that, people who do that". "I would say my brain
       | has this idea that you can question first [sic] assumptions, and
       | but I can go days at a time and forget that, and you have to kind
       | of like circle back to that observation [...] it's hard to keep
       | it front and center [...]".
       | 
       | [1] https://www.youtube.com/watch?v=Nb2tebYAaOA&t=4962s
        
       | tomgp wrote:
       | "Know things; want things; use what you know to get what you
       | want"
       | 
       | I think the hardest bit of this is in some ways the middle,
       | wanting things. How do we know we really want what we want, and
       | how do we know what will make us happy. That's the bit I struggle
       | with anyway.
        
       | andi999 wrote:
       | I believe it is also an evolutionary advantage. Let's assume
       | that, with all the information available, doing something looks
       | like the rationally best decision. Then, unexpectedly, that thing
       | kills you. A species is only left if not everybody did it.
        
       | damoe wrote:
       | Because there is a good chance reality is not rational.
        
       | karmakaze wrote:
       | Recognition of "motivated reasoning" can replace a whole lot of
       | recognizing logical fallacies in your own or others' thought
       | processes.
       | 
       | Here's a 20m audio interview[0] with the author of "The Scout
       | Mindset: Why Some People See Things Clearly and Others Don't"
       | 
       | It very well summarizes the way I like to gather information in
       | an area so that I can form an opinion and direction of movement
       | on a problem.
       | 
       | [0] https://www.cbc.ca/player/play/1881404483658
        
       | coldtea wrote:
       | For starters, who said it's better to be rational?
       | 
       | Not being rational - and instead going on gut feeling - has an
       | evolutionary advantage (it cuts through the noise, which, in the
       | past, could be a life-or-death matter).
        
         | dnissley wrote:
         | Intuition could be said to be the opposite of reason, but not
         | rationality. There are whole parts of the rationalist diaspora
         | that emphasize how important it is to be in touch with one's
         | intuitions / feelings and to integrate them successfully into
         | one's decision making process with an aim towards being more
         | rational.
        
       | linuxhansl wrote:
       | I read somewhere (truly forgot where, sorry) that we humans are
       | mostly just lazy, that we avoid thinking as best we can and
       | rather gravitate towards whatever (people, circles, or news)
       | confirms what we already believe, so that we do not have to think.
       | 
       | "Confirmation Bias" does not quite capture it. Really just
       | laziness. :)
       | 
       | The other part, being decisive... I can definitely relate to
       | that. I noticed that I often have a hard time making decisions
       | and realized it's because I tend to look at the world in terms of
       | what I can possibly lose instead of looking at something new in
       | terms of excitement.
        
         | SavantIdiot wrote:
         | Critical thought is actually really, really hard. Pre-internet
         | the problem was too little signal, post-internet the problem is
         | too much noise.
         | 
         | I would argue we've largely been anesthetized due to successful
         | Gish Galloping. I have great admiration for people who put the
         | effort in to sort out the issues, academics and journalists.
         | But just now everyone eye-rolled when I said those two terms.
        
       | esarbe wrote:
       | Because we didn't evolve to be rational. We evolved to reproduce
       | as often as possible, not to think as precisely as possible.
       | We're not thinking machines, we're reproduction machines.
       | 
       | That we are able to think somewhat rational-ish is only because
       | we adapted by developing extensive modeling simulations. The
       | fundamental function of these simulations is to simulate other
       | beings, primarily humans. And in that, our brainware is lazy as
       | hell, because - to quote evolution - why do perfect when you can
       | do good enough? It saves a ton of energy.
       | 
       | The wetware we employ was never expected to rationally solve
       | differential equations or do proper statistical analysis. At best
       | it was expected to guess the parabola of a thrown stone or spear,
       | or estimate the best way to mate without facing repercussions
       | from the tribe.
       | 
       | So, really, it's not that thinking is hard. It's just that we're
       | not equipped to do it.
        
       | raman162 wrote:
       | I particularly enjoyed the concepts presented in this article,
       | from recognizing how confident you are in a certain idea to
       | understanding the steps it takes for someone to be rational.
       | 
       | Being self-aware is something I've only started learning post-
       | college, and something I wish I had been taught more growing up.
       | As a child I was always informed that I should do x and y
       | because that's what you're supposed to do! Only now as an adult
       | am I taking the time to slowly ponder and analyze myself and be
       | more strategic with my future goals.
       | 
       | Side note: really enjoyed the audio version of this long-form
       | article.
        
       | mrxd wrote:
       | It's actually not hard.
       | 
       | Rationality is a form of communication. Its purpose is to
       | persuade other people and coordinate group activity, e.g.
       | hunters deciding
       | where they should hunt and making arguments about where the prey
       | might be. In that setting, rationality works perfectly well
       | because humans are quite good at detecting bad reasoning when
       | they see it in others.
       | 
       | Because of the assumptions of psychological individualism,
       | rationality is misunderstood as a type of cognition that guides
       | an individual's actions. To a certain extent, this is a valid
       | approach because incentives within organizations encourage people
       | to act this way. We reward individual accomplishments more than
       | collaboration.
       | 
       | But many cognitive biases disappear when you aren't working under
       | the assumptions of psychological individualism. For example, in
       | the artificial limitations of a lab, you can show that people are
       | unduly influenced by irrelevant factors when making purchase
       | decisions. But in reality, when a salesperson is influencing
       | someone to spend too much on a car, people say things like "Let
       | me talk it over with my wife."
       | 
       | We instinctively seek out an environment of social communication
       | and collaboration where rationality can operate. Much of the
       | advice about how to be individually rational comes down to
       | simulating those conditions within your own mind, like
       | scrutinizing your own thinking as if it was an argument being
       | made by another person. That can work, but the vast majority of
       | people adopt a more straightforward approach, which is to simply
       | use rationality as it was designed to be used.
       | 
       | Rationality is hard, but only for a small number of "smart
       | people" whose individualistic culture prevents them from using
       | it in the optimal way.
        
         | UnFleshedOne wrote:
         | I think you are confusing the original purpose of our thinking
         | apparatus (social proof first, discovering true facts a distant
         | second, unless the facts can eat you quickly) with rationality
         | as a system, running on that faulty hardware, for discovering
         | facts as true as possible within a given energy budget.
        
       | m3kw9 wrote:
       | Because a certain degree of emotion has a rational basis. Asking
       | humans to know which parts of their emotions are rational turns
       | into a multidimensional problem they can't just solve in the
       | heat of the moment.
        
       | adrhead wrote:
       | It is quite hard to be rational, as humans are emotional beings.
       | Sometimes emotions override rationality in decision making. This
       | is why people struggle to make wise decisions.
        
         | TuringTest wrote:
         | Conversely, it is impossible to be rational without emotions.
         | 
         | Reason needs axioms (beliefs) to build a rational discourse,
         | and without emotions, it is impossible to choose a limited set
         | of starting axioms to begin making logical inferences from.
         | 
         | I agree with the person above who said being rational is about
         | making post-hoc rationalizations. We know from cognitive
         | science that a majority of explanations are built that way:
         | after observing facts, we intuitively develop a story that is
         | consistent with our expectations about the facts, as well as
         | with our preconceived beliefs. "Being rational" in this
         | context would be limited to reviewing our beliefs when these
         | ad-hoc rationalizations become inconsistent with one another.
        
       | [deleted]
        
       | eevilspock wrote:
       | "Useless" https://xkcd.com/55/
        
       | [deleted]
        
       | achenatx wrote:
       | The ultimate issue is that underpinning every action is a value
       | system. Value systems are opinions and are fundamentally not
       | rational.
       | 
       | Virtually every political disagreement is based on values, though
       | most of the time people dont recognize it.
       | 
       | Values determine priorities and priorities underpin action.
       | 
       | For example some people feel that liberty (e.g. choice) is more
       | important than saving lives when it comes to vaccines.
       | 
       | Some people feel that economic efficiency is less important than
       | reducing suffering.
       | 
       | Some people feel that the life of an unborn child is worth less
       | than the ability to choose whether to have that child
       | 
       | Even in the article, is a stereo that sounds better actually
       | better than a stereo that looks better? That is a value judgement
       | and there is no right or wrong.
       | 
       | No one is actually wrong, since everything is a value judgement.
       | Many people believe in a universal view of ethics/morality, but
       | there is almost no universal set of ethics/morality if you look
       | across space and time.
       | 
       | However, some values allow a culture to outcompete other
       | cultures, causing the "inferior" values to disappear. New
       | mutations are
       | constantly being created. Most are neutral and have no impact on
       | societal survival. Some are negative and some are positive.
        
         | derbOac wrote:
         | I came to say something similar, that rational decision making
         | is really a poorly posed problem at some level.
         | 
         | Take money for example. You can create a theoretical decision-
         | making dilemma involving certain sums of money, and work out
         | what the most rational strategy is, but in reality, the
         | difference between sums of money is going to vary from person
         | to person, depending on different value systems and competing
         | interests. So then you get into this scenario where 1
         | unit of money means something different to different people
         | (the value you put on 1 EUR is going to be different from the
         | value I put on it; the exchange rates are sort of an average
         | over all these valuations), which might throw off the relevance
         | of the theoretical scenario for reality, or change the optimal
         | decision scenario.
         | 
         | The other issue beside the one you're relating to -- the
         | subjectivity of the weights assigned to different outcomes, the
         | Achilles' heel of utility theory -- is uncertainty not just
         | about the values in the model, but whether the model is even
         | correct at all. That is, you can create some idea that some
         | course of action is more rational, but what happens when
         | there's some nontrivial probability that the whole framework is
         | incorrect? Your decision about A and B, then, shouldn't just be
         | modeled in terms of whatever is in your model, but all the
         | other things you're not accounting for. Maybe there are other
         | decisions, C and D, which you're not even aware of, or someone
         | else is, but you have to choose B to get to them.
         | 
         | Just yesterday I read this very well-reasoned, elegant,
         | rational explanation by an epidemiologist about why boosters
         | aren't needed. But about 3/4 of the way through I realized it
         | was all based on an assumption that is very suspect, and which
         | throws everything out the window. There are still other things
         | their arguments were missing. So by the end of it I was
         | convinced of the opposite conclusion.
         | 
         | Rationality as a framework is important, but it's limited and
         | often misleading.
        
         | _greim_ wrote:
         | > is a stereo that sounds better actually better than a stereo
         | that looks better? That is a value judgement and there is no
         | right or wrong.
         | 
         | Disagree; value systems are the inputs to rationality. The only
         | constraint is that you do the introspection in order to know
         | what it is that you value. In that sense buying a stereo based
         | on appearance is the right decision if you seek status among
         | peers or appreciate aesthetics. It's the wrong decision if you
         | want sound quality or durability.
         | 
         | I think the real issue is that people don't do the necessary
         | introspection, and instead just glom onto catch-phrases or
         | follow someone else's lead. That's why so many people hold
         | political views that are contrary to their own interests.
        
         | mariodiana wrote:
         | Yes, and I think when people claim to be describing what a
         | "rational actor" would do, what they often leave out are the
         | normative assumptions inherent in their rational analysis.
         | Moreover, I suspect the omission at times is not accidental.
        
       | FinanceAnon wrote:
       | It's impossible to be absolutely rational. I feel like there are
       | so many different levels and viewpoints that there is no right
       | answer.
       | 
       | Simple example:
       | 
       | Let's say the same pair of shoes is available in two different
       | shops, but in one shop it's more expensive. It seems more
       | rational to buy it in the cheaper shop. However, what if you've
       | heard that the cheaper shop is very unethical in how it conducts
       | its business? Is it still more rational to buy the shoes there?
       | 
       | And then you might also start considering this situation "in the
       | grand scheme of things" - in the grand scheme of things does it
       | make any difference if I buy it in shop A or B?
       | 
       | And at which point does it become irrational to be overthinking
       | simple things in order to try to be rational? What if trying to
       | always be rational is stressing you out, and turns out to be
       | worse in the long run?
        
         | MisterBastahrd wrote:
         | Yeah, for example, let's say that I can buy from ShoeCo or big,
         | evil Amazon. But big, evil Amazon allows me to donate a portion
         | of their proceeds to a charity of my choice, and furthermore, I
         | am also within my rights as an individual to take the
         | difference between ShoeCo's price and Amazon's and donate it to
         | another cause as well.
         | 
         | Some will say that buying from Amazon simply perpetuates
         | Amazon... but Amazon is so large at this point that it doesn't
         | matter WHAT I do. So ultimately, is the world better off with
         | my two donations from my Amazon purchase or giving my money
         | away for the same product to ShoeCo?
        
           | SamBam wrote:
           | If Amazon is so big that your purchase is meaningless, then
           | the problems of the world are also so big that your donations
           | are probably meaningless.
           | 
           | If your donations have some tiny bit of meaning to them, then
           | removing a tiny bit of business from Amazon and paying your
           | local shopkeeper probably also has meaning.
        
             | notahacker wrote:
             | Don't think that follows automatically. My dollar - in
             | isolation - can feed someone tomorrow, even if it
             | doesn't feed others and they're all hungry next week.
             | The lack of my dollar alone won't change the ethics of
             | Amazon in the slightest, and much as the more ethical
             | shopkeeper won't mind the extra number in his bank
             | account, it's unlikely to let him displace unethical
             | companies or do anything else wonderful with it. The
             | difference between direct, tangible outcomes and perhaps
             | more significant outcomes which depend on a lot more
             | other people acting in a particular way is one of the
             | thornier questions about what's rational to prioritise.
             | Tbh, when I do boycott stuff it's mostly an emotional
             | response.
             | 
             | (Notwithstanding better objections to the original
             | example: in practice most donors' finances aren't so
             | tight that buying the $90 product rather than the $100
             | one is really necessary to free up funds for a worthy
             | cause, as opposed to salving the donor's conscience for
             | buying from an unworthy vendor...)
        
             | vdqtp3 wrote:
             | > removing a tiny bit of business from Amazon and paying
             | your local shopkeeper probably also has meaning.
             | 
             | It might be fair to say that removing business from Amazon
             | has no real impact but giving that business to a small
             | business does.
        
         | UnFleshedOne wrote:
         | Deciding when to stop overthinking is also a rational process.
         | Some choices truly don't matter, or don't matter enough to
         | spend time and energy on them.
         | 
         | If consumer ethics is important to you then it obviously
         | warrants some deliberation, weighted by an upper bound of your
         | potential impact. But identifying areas of meaningless choice
         | and simply choosing randomly (and not even caring if the choice
         | is sufficiently random) frees up a lot of mental energy.
        
       | 6gvONxR4sf7o wrote:
       | There are some good bits in here. I love the subtitle especially:
       | "The real challenge isn't being right but knowing how wrong you
       | might be." Knowing when not to provide an answer is hard. A big
       | part of my job is communicating statistical findings and giving a
       | good non-answer is much harder than giving a good answer, both
       | technically speaking and socially speaking.
       | 
       | One thing I'll add that drives me nuts is the fetishization of
       | Bayesian reasoning I sometimes see here on HN. There are times
       | when Bayesian reasoning is helpful and times when it isn't.
       | Specifically, when you don't trust your model, Bayes' rule can
       | mislead you badly (frequently when it comes to
       | missing/counterfactual data). It's just a tool. There are
       | others. It makes me crazy when it's someone's only hammer, so
       | everything starts to look like a nail. Sometimes, more
       | appropriate tools leave you without an answer.
       | 
       | Apparently that's not something we're willing to live with.
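
       To make the model-mistrust point concrete, here is a minimal
       Python sketch (the coin biases and sample size are invented for
       illustration): the data come from a fair coin, but the model
       only admits two wrong hypotheses, so Bayes' rule produces a
       confident posterior for a falsehood.

           # Minimal sketch: Bayesian updating under a misspecified
           # model. All numbers are invented for illustration.
           import random

           random.seed(0)
           posterior = {0.2: 0.5, 0.8: 0.5}  # only biases the model admits

           for _ in range(100):
               heads = random.random() < 0.5  # the coin is actually fair
               for h in posterior:
                   posterior[h] *= h if heads else (1 - h)
               total = sum(posterior.values())
               for h in posterior:
                   posterior[h] /= total  # renormalize after each flip

           # Typically prints near-certainty in 0.2 or 0.8: a confident
           # posterior for a hypothesis that is simply wrong.
           print(posterior)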
        
         | hinkley wrote:
         | _Thinking, Fast and Slow_ left me with a feeling of despair
         | about the human inability to reason effectively about
         | statistics.
         | 
         | I like to tell people that charts work better for asking
         | questions than answering them. Once people know you look for
         | answers there, the data changes - more so than it does when
         | you're merely asking questions (people will try to smooth
         | the data to avoid awkward questions).
        
           | belter wrote:
           | "Thinking Fast and Slow" left me with the same feeling but
           | not because of "Thinking Fast and Slow"
           | 
           | https://news.ycombinator.com/item?id=27261501
        
         | belter wrote:
         | I am with you :-) https://xkcd.com/1132/
        
           | tomjakubowski wrote:
           | Maybe I'm just missing the joke here, but "Bayesian
           | reasoning" is hardly needed to realize that if the sun did
           | explode, the $50 you'd lose in the bet is worthless anyway.
        
       | neonate wrote:
       | https://archive.is/7dAh5
        
       | swayvil wrote:
       | Rationality is a game of checkers played outside in a meadow.
       | 
       | So many distractions. Wind, rain, bees, rampant squirrels.
       | 
       | And what makes that game more interesting than a squirrel anyway?
        
       | newbamboo wrote:
       | My answer, in Jeopardy format: What is psychology? Every mind
       | is different; a feature, not a bug.
        
       | danans wrote:
       | I think there is a simpler explanation that draws from
       | evolutionary theory: being excessively rational is not a good
       | survival strategy, be it in the distant past or today.
       | 
       | If our ancestors had made the rational assessment that there
       | was unlikely to be a predator hiding behind the bush, that
       | would have worked only as long as it worked - until one day
       | they got eaten.
       | 
       | Irrationally overestimating threats and risks is not optimal
       | in any single encounter, but as long as you survive, it can be
       | optimal over the long term.
       | 
       | Using irrational stories to enable group cohesion and
       | coordination is similarly irrational, but it is an intrinsic
       | way of being that also provides an evolutionary advantage.
       | 
       | Rationality, however, is an incredible optimization tool when
       | operating in domains that are well understood, like the
       | example of stereo equipment that the author gives in the
       | article. It can also help in the process of expanding
       | knowledge, by helping us systematically compare and contrast
       | signals.
       | 
       | But it doesn't prevent the lion from eating you or the religious
       | or temporal authority from ostracizing you from the safety of the
       | settlement, and it may even make both of those outcomes more
       | likely.
        
         | lazide wrote:
         | Humans operate by doing, then rationalizing, and many of the
         | attempts at rational thought here demonstrate how easy it is
         | to fool ourselves into thinking we are being rational, when
         | really we are acting on feelings and delusions and then
         | constructing what feels like a rational argument we had all
         | along - one that falls apart upon analysis.
         | 
         | In the past, it _was_ a rational concern to be worried about
         | being jumped by a predator from behind a bush: if you don't
         | know whether or not there is a predator, it is perfectly
         | rational to be worried!
         | 
         | Same with diseases and their causes when you don't know what
         | is causing them, etc.
         | 
         | We tend to dismiss as irrational older concerns from a time
         | when there was a severe lack of information, but when you
         | know the limits of your knowledge and can see the results,
         | there is no rational way to behave except to be concerned
         | and to avoid those things. And while it's not rational to
         | believe clearly contradictory religious dogma that covers
         | the topic, it _is_ rational to follow or support it when it
         | encodes visibly effective methods for avoiding disease and
         | other problems.
        
           | danans wrote:
           | > In the past, it was a rational concern to be worried
           | about being jumped by a predator from behind a bush: if
           | you don't know whether or not there is a predator, it is
           | perfectly rational to be worried!
           | 
           | I think we agree, but I also think you are using "rational"
           | here in the colloquial sense to mean the "smartest" thing to
           | do.
           | 
           | The article, and my comment in response, uses the traditional
           | definition of "rational" as something derived from logic, and
           | _not_ from impulse or instinct.
           | 
           | The two definitions are not the same (not that one is better
           | than the other, they just mean different things).
        
             | lazide wrote:
             | Nope, I'm explicitly using logic. We didn't invent
             | thinking about things in the last hundred years, after
             | all.
             | 
             | If you don't know what is behind x, and one out of every
             | y times someone walks past a thing like x they get
             | jumped by a leopard, then only walk past x when the risk
             | is worth it. Which it rarely is.
             | 
             | If you're referring to formal logic, then sure - but
             | almost no one in this thread seems to be using that
             | definition either. Formal logic is incredibly expensive
             | (mentally), and only a few percent of folks even now can
             | afford to use it with any regularity.
        
         | wyager wrote:
         | This is also captured in the "midwit phenomenon", where people
         | who are just smart enough to start applying "rationality" make
         | worse decisions than stupid people. This is because stupid
         | people are operating off of hard-earned adaptations (encoded as
         | traditions, folk wisdom, etc.). Midwits are smart enough to
         | realize that the putative justifications for these adaptations
         | are wrong, and therefore they toss out the adaptations. People
         | who think about it even harder realize that these adaptations
         | were mostly there for good reasons, and getting rid of them
         | isn't a good idea even if the relevant just-so stories
         | explaining them don't hold up to "rational" scrutiny.
        
           | UnFleshedOne wrote:
           | Midwits (which we all are, to one degree or another) can
           | mostly be fixed by applying the Chesterton's Fence
           | principle, though. We just need a knock or two in both
           | directions to better estimate the relative weight of that
           | rule as a heuristic.
        
         | SamBam wrote:
         | > If our ancestors had made the rational assessment that
         | there was unlikely to be a predator hiding behind the bush,
         | that would have worked only as long as it worked - until one
         | day they got eaten.
         | 
         | That wouldn't have been a rational assessment, because it
         | wouldn't have been an accurate assessment of the risks of being
         | wrong, and the behavior required to avoid them.
         | 
         | If there's only a 1% chance that a predator is behind a
         | bush, and that predator might eat you, it's absolutely
         | rational to _act_ as though there is a predator. You'll be
         | seeing lots of bushes in your life, and you can't escape
         | from those 1% chances for long.
         | 
         | The same thinking is why it would have been rational to try
         | to avoid global warming 30 years ago. Even if the science
         | was not settled, in the worst-case scenario you'd have
         | "wasted" a bunch of money building out green energy
         | production. In the best-case scenario, you'd have saved the
         | planet.
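
       The "you can't escape those 1% chances for long" point is just
       compounding probability. A quick Python sketch (the 1% figure is
       the thread's assumption, not a real estimate):

           # Sketch: how a small per-encounter risk compounds.
           p_predator = 0.01  # assumed chance of a predator per bush

           for n in (1, 10, 69, 200):
               # chance of surviving n careless walks past a bush
               p_survive = (1 - p_predator) ** n
               print(f"{n:>3} bushes: {p_survive:.1%} survival")

           # 1 -> 99.0%, 10 -> 90.4%, 69 -> ~50.0%, 200 -> ~13.4%.
           # Over a lifetime of bushes, caution is the rational policy.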
        
           | fallous wrote:
            | It's not actually rational, let alone long-term optimal,
            | to act as though there is a predator behind every bush
            | given a 1% chance (in reality it's probably a couple of
            | orders of magnitude less likely, but we'll ignore that).
            | If you need water and head for the local watering hole,
            | avoiding bushes will most likely result in you not
            | getting water, since bushes tend to grow where there is
            | water. I may be 1% likely to get eaten by something
            | hiding behind the bush, but I am 100% likely to die if I
            | don't drink water.
            | 
            | Avoidance of all possible risk is a recipe for paralysis.
            | Part of being rational is the evaluation of risks vs.
            | rewards, as well as recognizing the danger of unintended
            | consequences and the fact that nearly all meaningful
            | decisions are made with incomplete information and under
            | time limits.
        
             | slingnow wrote:
             | Somehow you took their response to mean "the rational thing
             | to do is avoid all bushes, forever, no matter the
             | consequences".
             | 
             | The OP merely stated you should adjust your behavior to the
             | 1% chance. That would include weighing it against the risk
             | of dying from dehydration, in your example.
        
       | johnwheeler wrote:
       | Perfect rationality is impossible: to make correct decisions
       | you need all the facts, and a rational actor would do nothing
       | at all given that all the facts can't be had. The best you can
       | do is to be an odds maker; therefore, an odds maker would
       | spend their time looking for the lowest-effort ventures with
       | the highest chances of success and the biggest payoffs
       | relative to effort and risk. In their free time (time when no
       | reasonable opportunities were present), they would learn as
       | much as possible to increase their decision-making power and
       | thus their odds of success.
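
       A minimal sketch of that odds-maker heuristic (every venture and
       figure below is invented for illustration): rank opportunities
       by expected payoff per unit of effort and work down the list.

           # Sketch: rank ventures by expected payoff per unit effort.
           ventures = [
               # (name, probability of success, payoff, effort)
               ("venture A", 0.9, 10.0, 1.0),
               ("venture B", 0.2, 100.0, 5.0),
               ("venture C", 0.5, 30.0, 2.0),
           ]

           def expected_payoff_per_effort(venture):
               _, p_success, payoff, effort = venture
               return p_success * payoff / effort

           for v in sorted(ventures, key=expected_payoff_per_effort,
                           reverse=True):
               print(f"{v[0]}: {expected_payoff_per_effort(v):.1f}")

           # venture A: 9.0, venture C: 7.5, venture B: 4.0.
           # Take the low-effort, high-odds bets first.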
        
         | raldi wrote:
         | Your opening sentence makes no sense. If you and I don't know
         | the results of a coin toss, and I offer you a two-for-one wager
         | on the result, the rational choice for you would be to take
         | that bet, even without knowing the most relevant fact.
        
           | johnwheeler wrote:
           | ah, you should have read the second sentence
        
             | raldi wrote:
             | I don't see how the second sentence makes sense of the
             | first. A perfectly rational actor would not do nothing;
             | they would carry out the most reasonable action given the
             | information available.
        
               | johnwheeler wrote:
               | But then you're not being perfectly rational. You're
               | calculating the odds, which is what my second sentence
               | says.
               | 
               | Being perfectly rational is impossible.
               | 
               | See: perfect rationality vs bounded rationality
        
               | JohnPrine wrote:
               | I think you may have a confused definition of what it
               | means to be a rational actor. Being rational means making
               | the optimal decision given the information available
        
               | johnwheeler wrote:
               | No, you're confused. See
               | 
               | https://en.wikipedia.org/wiki/Bounded_rationality
        
               | raldi wrote:
               | Maybe I don't understand what you mean by "perfectly
               | rational". I'm using the definition from the article: A
               | perfectly calibrated individual will be right X% of the
               | time about statements in which they are X% confident.
               | 
               | Are you using a different definition?
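
       That calibration definition can be checked mechanically. A small
       Python sketch (the predictions are invented) compares claimed
       confidence against the actual hit rate in each bucket:

           # Sketch: measuring calibration. A well-calibrated
           # forecaster is right about X% of the time on claims held
           # with X% confidence. The predictions below are invented.
           from collections import defaultdict

           predictions = [  # (stated confidence, was it correct?)
               (0.9, True), (0.9, True), (0.9, False), (0.9, True),
               (0.6, True), (0.6, False), (0.6, True), (0.6, False),
           ]

           buckets = defaultdict(list)
           for confidence, correct in predictions:
               buckets[confidence].append(correct)

           for confidence, outcomes in sorted(buckets.items()):
               hit_rate = sum(outcomes) / len(outcomes)
               print(f"claimed {confidence:.0%}, right {hit_rate:.0%}")

           # claimed 60%, right 50%; claimed 90%, right 75%: this
           # forecaster is overconfident at both levels.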
        
               | johnwheeler wrote:
               | Perfectly rational means just what it sounds like:
               | making the correct decision because you have all
               | available data.
               | 
               | Being perfectly rational is impossible.
        
               | raldi wrote:
               | Would a perfectly rational person duck if there were a
               | 50% chance they were about to be punched? Or would they
               | do nothing?
        
       | throwaway9690 wrote:
       | I think part of the problem is that most people are
       | conditioned into many beliefs from a young age.
       | 
       | I know a guy who hates foo (using a place holder). In fact he's
       | downright foophobic. He is pretty convinced he has a natural
       | unbiased hate of foo and is being rational when he expresses it.
       | 
       | To me as an outsider it is pretty obvious that his hate of foo is
       | the result of cultural conditioning. To him it is perfectly
       | rational to hate foo and to me it is totally irrational,
       | especially since he can't give any concrete reason for it.
       | 
       | So who is right and who is being rational?
        
         | pessimizer wrote:
         | > natural unbiased hate
         | 
         | ...is a pretty silly phrase. If you don't have a reason for
         | something, it can't (by definition) be reasonable.
        
         | carry_bit wrote:
         | It could be a case of implicit vs. explicit knowledge. In
         | the context of evolved cultural beliefs, the foophobia may
         | serve some real purpose, even if most or all of the
         | enculturated individuals can't explicitly state what that
         | purpose is.
         | 
         | It could be that, like dietary restrictions meant to reduce
         | the spread of disease, the foophobia is no longer needed -
         | but keep Chesterton's fence in mind before you say it's
         | unneeded.
        
         | someguy321 wrote:
         | Value judgements exist in a separate domain from pure
         | rationality.
         | 
         | I like chocolate ice cream more than vanilla ice cream, and
         | you're not gonna convince me otherwise by debating the
         | flavor with me. It could entirely be the case that my
         | preference comes from cultural conditioning, but that's not
         | my concern.
         | 
         | If your friend has a mindset of "to each his own" there's no
         | problem.
        
         | teddyh wrote:
         | > _to me it is totally irrational, especially since he
         | can't give any concrete reason for it._
         | 
         | In my experience, people usually _can_ give 'concrete' reasons
         | for it, but what constitutes 'concrete' is a matter of opinion,
         | and I don't consider everybody's reasons to be valid. But of
         | course, they do.
        
         | dfxm12 wrote:
         | It really depends on what foo is. I don't think it's
         | rational to waste time on unimportant things. If foo is
         | eating red meat, then I don't think it's rational to really
         | worry about it one way or another.
         | 
         |  _I think part of the problem is that most people are
         | conditioned into many beliefs from a young age_
         | 
         | I think it's irrational not to consider new information
         | when it's presented. So, again, this depends on what foo is.
         | If it is obeying speed limits even when no one else is on
         | the road, and your friend learns the penalties for not
         | obeying road signs when they get their license, they would
         | probably find it irrational not to do the speed limit, even
         | if they hate it. They wouldn't want to risk the fines,
         | license suspension, etc.
         | 
         | However, let's say your friend's brother has stronger
         | beliefs and can afford any fines and legal action. He could
         | think about it and still decide that it's rational to not
         | obey the speed limit. This doesn't make it right; I think
         | right and rational are independent of each other.
        
           | throwaway9690 wrote:
           | When I mention conditioning, I mean from a very young
           | age.
           | 
           | For example: throw salt over your shoulder if you spill
           | some - or - green-skinned people are bad and you should
           | never trust them or allow them in your neighborhood.
           | 
           | Now the former is pretty harmless, but not so the latter.
           | In both cases the only explanation is "that's how I was
           | raised", which I don't find compelling or rational.
        
         | wizzwizz4 wrote:
         | Preferences do not need to be rationally justified; without
         | axiomatic preferences, we have no preferences at all.
        
           | throwaway9690 wrote:
           | I'm referring more to prejudices rather than preferences.
        
       | wizzwizz4 wrote:
       | > _In a recent interview, Cowen--a superhuman reader whose blog,
       | Marginal Revolution, is a daily destination for info-hungry
       | rationalists--told Ezra Klein that the rationality movement has
       | adopted an "extremely culturally specific way of viewing the
       | world." It's the culture, more or less, of winning arguments in
       | Web forums._
       | 
       | This matches my observations, too.
       | 
       | > _Cowen suggested that to understand reality you must not just
       | read about it but see it firsthand; he has grounded his priors in
       | visits to about a hundred countries, once getting caught in a
       | shoot-out between a Brazilian drug gang and the police._
        
         | SamoyedFurFluff wrote:
         | I agree that a culture of "winning arguments in Web forums"
         | often has bias in and of itself that requires going out and
         | diversifying your experiences. But I don't think that will
         | always require travel. Volunteering at a soup kitchen,
         | fostering a rescue animal, organizing a community event, and
         | talking to the elderly in care facilities will all expose
         | you to experiences outside of the internet, and none of them
         | requires travel.
        
           | skybrian wrote:
           | Sure, those are good for learning more about your own
           | community.
           | 
           | But you're not going to learn the same things you would from
           | travel. For example, you're not likely to learn another
           | language if everyone you talk to speaks English. Similarly
           | for learning about other cultures that aren't near you.
           | 
           | But I'm not sure how much brief travel to see the tourist
           | sites helps, and hanging out with expats might not help so
           | much.
        
         | kubb wrote:
         | One of my many pet peeves is people who travel to more than
         | 100 countries to get "experiences". It feels misguided,
         | wasteful, excessive, and done to impress others, as a sort
         | of status symbol. I bet he wouldn't be able to name all the
         | countries and cities he's been to. A deep and meaningful
         | experience requires far more than a superficial visit.
        
           | someguy321 wrote:
           | I read that fellow's blog (marginalrevolution.com), and
           | he goes out of his way to get the best authentic local
           | food he can, he's well read on the history of many
           | different countries, and he follows the economic
           | implications of their recent history (he's an academic
           | economist). He often does a brief blog writeup about the
           | particularly culturally unique bits of places after he
           | visits. Part of his job as an academic and popular econ-
           | culture writer is to understand cultures and economies
           | around the world.
           | 
           | I don't mind if part of his motivation is to impress
           | others, or if it's wasteful, etc. Why would his
           | motivations have to be pure for it to be meaningful for
           | him?
        
             | kubb wrote:
             | Don't get me wrong, gorging yourself on a variety of
             | foods from around the world can be pleasurable. It also
             | gives you zero insight into how people in that country
             | are different from people elsewhere.
             | 
             | You could understand more about a country by studying
             | it from home than by visiting it for a week.
             | 
             | I don't like that it's presented as a lifestyle that people
             | should strive to pursue. I know certain people here will
             | vehemently oppose this opinion, because in effect it's a
             | critique of them or that which they admire.
        
               | Retric wrote:
               | It goes both ways.
               | 
               | No, you really can't understand a culture from a week
               | of study the same way you can from being there for a
               | week. The issue is the millions of unknown unknowns
               | that you never really consider: how large people's
               | personal space is, where they stand and look in an
               | elevator, what traffic is like, how loud people are,
               | etc. Of course a week or three isn't that long, but
               | there are real diminishing returns here.
               | 
               | On the other hand, personal experience is very narrow
               | in scope. You're never going to find out country-wide
               | crime rates by wandering around for a week.
        
               | tonyedgecombe wrote:
               | >Of course a week or three isn't that long, but there are
               | real diminishing returns here.
               | 
               | I suspect you have to live and work in a place to
               | really understand it. If you are wealthy and visiting
               | a poor country there is virtually zero chance: you
               | will always be too insulated from the reality.
        
               | pessimizer wrote:
               | If you are wealthy _and born and raised_ in a poor
               | country, you will likely be quite ignorant of most of the
               | lifestyle of most of its people.
        
             | karmakaze wrote:
             | That actually sounds more resourceful than wasteful, as
             | readers can have vicarious experiences through his
             | writings.
        
           | zepto wrote:
           | The people you describe do seem to exist, but what makes
           | you think Cowen is one of them?
        
       | SMAAART wrote:
       | Nobody wants to deal with rational people.
       | 
       | Big business wants people to buy things they don't need, with
       | money they don't have, to impress people they don't like.
       | 
       | Politicians want people who will drink the Kool-Aid and follow
       | what they (the politicians) say (and not what they do).
       | 
       | Religions... well, same.
       | 
       | And so all messaging, from advertising to movies, TV, and
       | narrative, is about hijacking people's feelings and
       | suppressing rationality. Common sense is no longer common, and
       | doesn't make much sense.
        
         | DoingIsLearning wrote:
         | I don't disagree, but I have to say this absolutely reads
         | like a voice-over from an Adam Curtis documentary.
        
           | cortesoft wrote:
           | I think part of it is a quote from Fight Club
        
             | chromaton wrote:
             | Quote Investigator says it's from a 1928 newspaper
             | column:
             | https://quoteinvestigator.com/2016/04/21/impress/
        
           | zentropia wrote:
           | Fight Club, bus scene
        
         | marcod wrote:
         | I maintain that the concept of "common sense" is also quite
         | useless now :p
        
         | athenot wrote:
         | This sounds cynical but yes, unfortunately, there are many
         | incentives to _not_ be rational.
        
           | Siira wrote:
           | I think you're confusing group rationality with
           | individual rationality. There is never an individual
           | incentive not to be individually rational, by definition.
           | These are bad Nash equilibria, in game-theoretic terms.
        
         | jimbokun wrote:
         | I think this is connected to another reason why so many seem to
         | reject "rationality" today.
         | 
         | They are rejecting the authorities that in the past have tried
         | to associate themselves with "rationality". The political think
         | tanks. The seminaries. The universities. Government agencies.
         | Capitalist CEOs following the "invisible hand" of the market.
         | 
         | All of these so-called elites have biases and agendas, so of
         | course none of them should be accepted at face value.
         | 
         | I think what's missed is that rationality is not about
         | trusting people and organizations, but about trusting a
         | process. Trusting debates over lectures. Trusting well-
         | designed studies over trusting scientists. Trusting free
         | speech and examining a broad range of ideas over speech
         | codes and censorship. Trusting empirical observation over
         | ideological purity.
         | 
         | This is the value system of the so-called "classical
         | liberals", and they are an ever more lonely and isolated
         | group. A growing embrace of authoritarianism and defense of
         | tribal identity, on both the "left" and the "right", is
         | taking its place.
        
           | pessimizer wrote:
           | "Classical liberalism" has little or no relationship to any
           | sentiment you've expressed here, as far as I know.
        
         | ret2plt wrote:
         | It's worse than that. The problem is that being truly
         | rational is hard, unpleasant work that few people want to
         | do. If you read an article that makes your political
         | opponents look bad, you can't just feel smugly superior; you
         | have to take into account that you are predisposed to
         | believe convenient-sounding things, so you have to put extra
         | effort into checking the truth of the claim. If you follow
         | the evidence instead of tribal consensus, you will probably
         | end up with some beliefs that your friends and relatives
         | won't like, etc.
        
         | toshk wrote:
         | When all the experiences we have are grounded in meaning,
         | those emotional experiences might in some sense be more
         | "real" than a logical thought.
        
       | WhompingWindows wrote:
       | Can rationality exist outside of our minds? Is it just another
       | mental heuristic?
       | 
       | In meditation, a common teaching is to examine an object for a
       | long period, really just stare at it and allow your mind to focus
       | on it fully. I see a coffee mug, it has a handle and writing on
       | it, it's off-white and has little coffee stains. This descriptive
       | mind goes a mile-a-minute normally, but eventually you can break
       | through that and realize, this is just a collection of atoms,
       | this is something reflecting photons and pushing back
       | electrically against my skin's atoms. Even deeper, it's just
       | part of the environment, all the things I can notice, like
       | everything else we care about.
       | 
       | Such exercises can help reveal the nature of mind. There are many
       | layers of this onion, and many separate onions vying for our
       | attention at once. Rationality relies upon peeling back these
       | superficial layers of the thought onion to get towards "the
       | truth." That means peeling back biases, emotions, hunches,
       | instincts, and all the little mental heuristics that are nice
       | "shortcuts" for a biologically limited thinker.
       | 
       | But outside our minds, how is there any rationality left? It
       | feels like another program or heuristic we use to make decisions
       | to help us survive and reproduce.
        
       | paganel wrote:
       | The rational powers that be were saying out loud 3 days ago
       | that in an optimistic scenario the Afghan government would
       | hang on for another 90 days, and in a pessimistic scenario for
       | only 30 days. As we all know, it collapsed completely in just
       | 2-3 days.
       | 
       | Early on during the pandemic (the first half of February 2020)
       | the people writing on Twitter about covid in China were being
       | labeled as conspiracy nuts, with some of them outright having
       | their accounts suspended by Twitter. Covid/coronavirus was (I
       | think purposefully) kept out of the trending charts on
       | Twitter; the Oscars were seen as much more important.
       | 
       | And these are only two recent examples that came to mind where
       | the "rational" parts of our society (the experts and the
       | media) failed completely; as such, it's only rational not to
       | trust these pseudo-rational entities anymore. In a way I think
       | the post-modernists were right: (almost) everything is
       | negotiable or a social construct; there's no true or false,
       | apart from death, I would say.
        
       | morpheos137 wrote:
       | Because people have feelings. Because rationality is poorly
       | defined. For example, sometimes it may be rational to agree
       | with something that is factually wrong if it is popular or
       | serves one's self-interest.
        
       | myfavoritedog wrote:
       | Human irrationality will only get worse on average. There's
       | very little evolutionary disadvantage to being irrational in
       | our modern society.
       | 
       | Back in the day, not syncing up with reality would likely
       | have cost you your place in the gene pool.
        
       | jscipione wrote:
       | It is hard to be rational in the way the New Yorker intends
       | because we are constantly being lied to and having information
       | hidden from us by institutions, and so we have lost trust in
       | them.
       | 
       | President Dwight D. Eisenhower put it succinctly in his farewell
       | address to the nation:
       | 
       | "The prospect of domination of the nation's scholars by Federal
       | employment, project allocations, and the power of money is ever
       | present and is gravely to be regarded. Yet, in holding scientific
       | research and discovery in respect, as we should, we must also be
       | alert to the equal and opposite danger that public policy could
       | itself become the captive of a scientific technological elite."
        
       ___________________________________________________________________
       (page generated 2021-08-16 23:00 UTC)