[HN Gopher] Bentham's Mugging (2022)
       ___________________________________________________________________
        
       Bentham's Mugging (2022)
        
       Author : mattmerr
       Score  : 111 points
       Date   : 2023-10-12 07:16 UTC (13 hours ago)
        
 (HTM) web link (www.cambridge.org)
 (TXT) w3m dump (www.cambridge.org)
        
       | paulhart wrote:
       | I love this for three reasons:
       | 
       | 1: the dig at Effective Altruism;
       | 
       | 2: I went to UCL back in the days when you could hang out with
       | the Bentham In A Box;
       | 
       | 3: One of my (distant) colleagues is a descendant of Bentham.
        
       | kubb wrote:
        | It's amazing how contrived and detached from reality the
        | counterexamples to utilitarianism have to be to attack even
        | its most basic forms. It really makes you think that
        | utilitarianism is a solid principle.
        
         | nopassrecover wrote:
         | But aren't the counterexamples largely detached from reality
         | because in reality people adopt other ethical
         | systems/principles to avoid extreme outcomes?
         | 
         | I'm by no means opposing a general morality of optimising for
         | the greater good, and I think on the whole utilitarianism, like
         | other ideological/ethical systems, gets critiqued in comparison
         | to an impossible standard of perfection. My sense is there are
         | some more basic principles that underpin the success and
         | pragmatism of any ethical/ideological system, and that these
         | principles, to your implied point I think, would safeguard
         | utilitarianism as well as other systems.
         | 
         | I think this is implied in the critique some have against
         | utilitarianism, namely that it needs to introduce weighting in
         | order to adjust the morality towards palatable/sensible means
         | and outcomes. But I don't think any system could avoid those
         | same coping mechanisms.
        
           | kubb wrote:
           | People do adopt other systems, feel that utilitarianism must
           | be "wrong" for whatever reason, get research grants from
           | people who agree, and produce incredibly unimpressive work.
           | 
           | What basic principles are you thinking of? Even more basic
           | than hedonism, consequentialism, etc.?
           | 
            | Weighing is just one of the critiques of utilitarianism,
            | and it's a valid one. Maybe the extreme happiness of one
            | person isn't worth the mild suffering of 5 people. But
            | pretending that this upends the entire moral framework,
            | rather than just one of its building blocks (basically the
            | aggregation function), is kinda silly.
        
             | nopassrecover wrote:
             | Yeah I think we agree that utilitarianism is held to an
             | unreasonable standard. I think contributing to that is some
             | advocates suggesting it's a solid utopian model to guide
             | all decisions without further refinement and nuance (and I
             | don't think this is what you're arguing).
             | 
             | And because it hasn't been in practice widely adopted in
             | history (unlike e.g. liberalism or Catholicism) the rubber
             | hasn't hit the road to allow us to understand how it would
             | work practically. I think some other good ideas suffer the
             | same problem/preemptive attack. Indeed any social progress
             | seems to be attacked by a sort of whataboutism or false
             | slippery slope attack.
             | 
             | To your question on basic principles, I think they're
             | caught in exercises like the trolley problem or the
             | psychological experiments of the 60s: people on the whole
              | don't want to be responsible for causing harm, they don't
              | want to see people within their sphere of influence or
              | control harmed,
             | they don't want to feel bad about themselves, they don't
             | want to be judged/punished by others - even if convinced
             | it's for the greater good. I'm not saying some people won't
             | take a fiercely rational or ideological lens, but on the
             | whole people are influenced by some common psychology. And
             | I think actually this is probably good: as much as it
             | hinders "utopian" ideas being realised I think it ensures
             | humanity moderates ideology.
             | 
              | I think without this a strict utilitarianism, e.g. a robotic
             | approach, would lead to kinds of harm that I wouldn't
             | support, even if justified to some sort of ends that itself
             | is subjective. But I think with it, an elevation of the
             | greater good would probably be better than many approaches
             | today. For a practical example I think we should permit
             | more people to consensually enrol in promising but risky
             | cancer research and treatments.
             | 
             | To reiterate that same point I think that in practice those
             | factors would probably allow most systems to be successful,
             | and some/many might be better than what we have now.
        
         | MereInterest wrote:
         | This is known as a "reductio ad absurdum" argument, and isn't
         | contrived at all. It's easy to make a general rule that applies
         | in the majority of cases. To test whether a general rule has
         | flaws, and to improve upon a general rule, it must be tested by
         | applying it to edge cases. The same way that you test a
         | datetime library by picking potential edge cases (e.g. Leap
         | Day, dates before 1970, dates between Feb. 1-13 in 1918 in
         | Russia, etc), you test a philosophical theory by seeing what it
         | predicts in potential edge cases.
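          | 
          | A minimal sketch of that idea in Python (naive_add_year is a
          | hypothetical helper, made up for illustration):
          | 
          |     from datetime import date
          | 
          |     def naive_add_year(d):
          |         # naive rule: keep month and day, bump the year
          |         return d.replace(year=d.year + 1)
          | 
          |     naive_add_year(date(2021, 3, 1))   # fine for typical dates
          |     naive_add_year(date(2020, 2, 29))  # ValueError: 2021-02-29
          |                                        # does not exist
          | 
          | (Python's proleptic Gregorian calendar won't surface the 1918
          | Russian transition, though; that edge case needs a calendar-
          | aware library.)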
         | 
         | This also deliberately avoids introducing irrelevant arguments.
         | By framing it as a mugger who wants to gain money for purely
         | selfish reasons, we deliberately exclude complicating factors
         | from the statement.
         | 
         | * The argument could be framed around donating to the Susan G.
         | Komen Foundation, rather than a mugger. With the controversies
         | it has had [0], it could be argued that these donations may or
         | may not increase total utility, but donations to charities are
         | part of the best possible path. However, using the Susan G.
         | Komen Foundation as an example relies on accepting a premise
         | that it isn't using donations appropriately, and makes the
         | argument dependent on whether that is or isn't the case.
         | 
         | * The argument could be framed around allowing tax exemptions
         | for all self-described charitable foundations, with Stichting
         | INGKA Foundation [1], part of the corporate structure that owns
         | IKEA, playing the narrative role of the mugger. The argument
         | would be that the tax exemptions provided to charitable
         | foundations are necessary for bringing about the best outcomes,
         | but that they can be taken advantage of. Here, the argument
         | would depend on whether you view the corporate structure of
         | INGKA as a legitimate charity.
         | 
         | * Even staying with purely hypothetical answers, we could ask
          | whether the mugger is going to starve should the mugging be
         | unsuccessful. These could veer into questions of the local
         | economy, food production, and so on, none of which help to test
         | the validity of utilitarianism.
         | 
         | I've heard this described as crafting the least convenient
         | world. That is, whenever there's a question about the
         | hypothetical scenario that would let you avoid an edge case in
         | a theory, update the hypothetical scenario to be the least
         | convenient option. What if the mugger just needs a hug? Nope,
         | too convenient. What if the mugger isn't going to go through
         | with the finger-chopping? Nope, too convenient.
         | 
         | [0]
         | https://en.wikipedia.org/wiki/Susan_G._Komen_for_the_Cure#Co...
         | 
         | [1] https://en.wikipedia.org/wiki/Stichting_INGKA_Foundation
        
           | roenxi wrote:
           | The problem here is that the counterargument is contrived to
           | the point where it is stupid. This article isn't identifying
           | a problem in theory or practice.
           | 
           | In theory a utilitarian is likely comfortable with the in-
           | principle idea that they might need to sacrifice themselves
           | for a greater good. Pointing that out isn't a counterargument
           | against utilitarianism. In practice, no utilitarian would
           | fall for something this dumb. They'd just keep the money and
           | assume (correctly in my view) they missed something in the
           | argument that invalidates the mugger's position. Or, likely,
           | assume the mugger is lying about being an insane
           | deontologist.
        
             | MereInterest wrote:
             | > In practice, no utilitarian would fall for something this
             | dumb.
             | 
             | This is the penultimate conclusion of the dialogue as well,
             | that even Bentham would need to admit so many post-hoc
             | deviations from the general rules of Utilitarianism that it
             | ends up being a form of deontology instead. The primary
             | takeaway is then that Utilitarianism works as a rule-of-
             | thumb, but not as an underlying fundamental truth.
        
               | roenxi wrote:
                | No it isn't, the dialog is strawmanning and claims that
               | Bentham would have to abandon utilitarianism.
               | 
               | I'm claiming that the initial scenario where Bentham
               | caves is reasonable, but in practice will never occur. A
               | utilitarian could reasonably believe Bentham's response
               | was correct (I mean, seriously, would you look at someone
               | and refuse to spend $10 to save their finger? You'd be a
               | monster. As the article points out, we're talking
               | literally 1 person). There is no theoretical problem in
               | that scenario. Bentham has maximised utility based on the
               | scenario presented. It was a scenario designed where the
               | clear-cut utility maximisation choice was to sacrifice
               | $10.
               | 
                | The issue is that this scenario is an insane
                | hypothetical that cannot occur in practice. There are no
                | deontologists that strict and there are no choices that
                | binary. So we can conclude that, in alternate universes
                | we do not inhabit, utilitarianism would not work because
                | these muggers would end up with all the money. Alright.
                | Case closed. Not a practical problem. The first act
                | plays out, and then the article should end with the
                | conclusion "if that could happen then utilitarianism
                | would have a problem. But it can't, so oh well. Turns
                | out utilitarianism is a philosophy that works out really
                | equitably in this universe!"
        
             | empath-nirvana wrote:
             | > In practice, no utilitarian would fall for something this
             | dumb.
             | 
             | What you are saying is exactly what the article says, and
             | you are conceding the article's point, which is that nobody
             | actually practices pure utilitarianism.
        
           | Micaiah_Chang wrote:
           | Do we want to talk about a hypothetical world where
           | deontology was the underlying moral principle? Where, for
           | example, a large agency in charge of approving vaccines
            | decided to delay approval of a life-saving vaccine because,
            | even though they received the information on November 20th,
            | they scheduled the meeting for December 10-12th dammit, and
            | that's when it'll be done? Or by potentially delaying
            | several months because, instead of using challenge trials
            | to directly assess the safety of a vaccine by exposing
            | willing volunteers to both the supposed cure and the
            | disease, they gave the cure to a couple of tens of
            | thousands of people, and just waited until enough of them
            | got sick and died of a disease "that would have got them
            | anyway" to gather enough statistics for safety? Which is
            | definitely good, you see, because no one got directly
            | harmed by said agency, even if many more people in the
            | country were dying of this theoretical disease. [0]
           | 
            | Or, even better, what if distribution of this life-saving
            | cure were done based on the deontological concept of
            | fairness? Surely, this wouldn't result in limited and
            | highly demanded vaccines being literally thrown away [1] in
            | the name of equity, or in vaccine companies needing to seek
            | approval for something as simple as increasing the number
            | of doses in vials. [2]
           | 
           | You know, just all theoretically, since it would be a
           | terrible shame if any of these things happened in the real
           | world, since this is just one specific scenario and I'm sure
           | I can make up various [3] other [4] ways [5] in which not
           | carefully evaluating the consequences of moral actions would
           | turn out poorly, but hey!
           | 
           | I'm sure glad that utilitarianism isn't being entertained
           | more on the margin, since we already live in the best of all
           | possible moral universes.
           | 
           | (Footnote, I'm not going to justify these citations within
           | this post, because it's pithier this way. I recognize this is
           | not being fully honest and transparent, but I'd be happy to
            | fully defend the inclusion of any of these, if necessary)
           | 
           | [0] https://www.cdc.gov/mmwr/volumes/70/wr/mm7014e1.htm
           | 
           | [1] https://worksinprogress.co/issue/the-story-of-vaccinateca
           | ctrl f "On being legally forbidden to administer lifesaving
           | healthcare"
           | 
           | [2] https://www.businessinsider.com/moderna-asks-fda-approve-
           | mor...
           | 
           | [3] https://news.climate.columbia.edu/2010/07/01/the-
           | playpump-wh...
           | 
           | [4]
           | https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2641547
           | 
           | [5]
           | https://papers.ssrn.com/sol3/papers.cfm?abstract_id=983649
        
         | b450 wrote:
         | It's not really a reflection on utilitarianism. That's just
         | philosophical ethics, at least in the form that predominates in
         | Anglo-American philosophy departments.
         | 
         | The game of coming up with "counterexamples" to moral theories
         | is fun, but basically stupid. By definition it involves
         | "contriving" cases, however realistic really, which can make
         | whatever preposterous "stipulations" they please. The
         | underlying assumption is that moral theories are somehow like
         | scientific theories in that they are validated by "predicting"
         | the available observational "data", i.e. our moral intuitions,
         | i.e. the social values of the cultural/economic groups we're a
          | part of. Mysteriously, Christian conservative scolds engage
         | with philosophy and end up developing something a lot like
          | Christian social conservatism, and cosmopolitan liberal scolds
         | come up with something a lot like cosmopolitan social
         | liberalism, despite the fact that both are engaged in this
         | highly scientific form of inquiry. Very odd.
         | 
         | The whole game is also probably largely irrelevant to the kind
         | of stuff Bentham actually cared about, since he mainly wanted
         | to use utilitarianism to guide state policy, and (famously)
         | hard cases make bad law.
        
         | mannykannot wrote:
         | This is mostly an amusing logic puzzle of the sort Lewis
         | Carroll liked to write, but there is an unstated moral here:
         | utilitarianism requires a metric of utility, and it can be
         | gamed by people who are merely paying lip service (at best) to
         | utilitarianism, opening the door, in the worst cases, to Mill's
         | tyranny of the majority. The global news, on any given day,
         | contains several such cases.
        
       | nisegami wrote:
       | I feel like this statement hides something critical, "Here's the
       | thing: there is, clearly, more utility in me keeping my finger
       | than in you keeping your measly ten pounds."
       | 
        | My point is: is that so clear? Or is the utility function
       | being presumed here lacking?
        
         | Diggsey wrote:
         | Well the ten pounds still exists either way. You'd have to
          | argue that there's more utility in Bentham owning the £10 than
          | the mugger owning the £10, and that the difference in utility
         | between them is greater than the utility of a finger.
         | 
         | I imagine you could define utility that way, but presumably the
         | mugger could increase the cost (two fingers? an arm?) until the
          | argument works. Also, if you do define a utility function
          | like that (say, "there is more utility in this £10 being mine
          | rather than yours than the utility of your arm") then that's a
         | pretty questionable morality.
        
           | roenxi wrote:
           | The mugger, through no coercion of Bentham, chooses to go
           | down a finger. It is obvious that the mugger has an insane
           | utility function, but it isn't obvious that Bentham letting
           | him act it out is causing a drop in overall utility.
           | 
            | If the mugger doesn't want his own finger, Bentham can
           | choose to trust him that 9 fingers are better than 10. Maybe
           | the mugger is even behaving rationally, maybe the 10th finger
           | has cancer, who knows. As the story illustrates, giving him
           | $10 didn't stop him from losing his finger. There are many
           | factors here that make the situation unclear.
        
           | snapcaster wrote:
           | Not really, my utility function weighs some mugger being hurt
           | at 0
        
         | optimalsolver wrote:
         | Yup. According to which utility function? Certainly not mine.
        
         | Tao3300 wrote:
         | > Or is the utility function being presumed here lacking
         | 
          | They're all lacking in some way, so sure.
        
       | jl6 wrote:
       | Is there perhaps more than a finger's worth of utility in
       | deterring such muggings by refusing the initial deal?
        
       | AndrewDucker wrote:
       | The point here is largely that reality (at our level) is not
       | something which can be simply solved by the application of a
       | couple of rules, from which Right Action will thenceforth
       | necessarily flow.
       | 
        | Reality is a big, complex ball of Stuff, and any attempt to
        | impress morality upon it will be met with many corner cases which
        | produce unwanted results unless we spend our time dealing with
        | what initially look like tiny details.
        
         | bwanab wrote:
         | So we end up coming full circle from "here are the rules" to
         | "play each situation by ear". Ethics is just too dang hard!
        
           | AndrewDucker wrote:
           | I'm sure you can find a compromise in the middle of "Mostly
           | follow some vague rules, but when they lead you to what seem
           | to be negative outcomes think about whether it's because you
           | don't enjoy doing the moral thing, or if it's because
           | actually it's led somewhere unpleasant and you need a new
           | rule for this situation."
        
       | JR1427 wrote:
       | But the mugger could have avoided making the deal with the thug,
       | so I don't see how that deal changes much.
        
       | lcnPylGDnU4H9OF wrote:
       | Bentham brought up a good point:
       | 
       | > Fair enough. But, even so, I worry that giving you the money
       | would set a bad precedent, encouraging copycats to run similar
       | schemes.
       | 
       | I don't understand how it was logically defeated with escalation
       | as in the story. Would it be wrong for a Utilitarian to continue
       | arguing against this precedent, saying that the decision to be
       | mugged removes overall Utility because now anyone who can be
       | sufficiently convincing can also effectively steal money from
        | Utilitarians? (I guess money changing hands is presumed net
       | neutral in the story?)
        
         | ameliaquining wrote:
         | No, the mugger getting the money counts as negative. "Now, as
         | an Act Utilitarian, I would happily part with ten pounds _if_ I
         | were convinced that you would bring more utility to the world
         | with that money than I would. The trouble is I know I would put
         | the money to good use myself - whereas you, I surmise, would
         | not. "
        
           | uoaei wrote:
           | No, it doesn't. People having money is Good under
           | utilitarianism because they can utilize it no matter which
           | person it is.
           | 
           | Utilitarianism does not benefit from covert insertions of
            | specific moral carve-outs. Surmisal does not impact outcomes,
           | only predictions of outcomes. It is not appropriate to make
           | judgments based on surmisal because utilitarianism can only
           | ever look backward at effects to justify actions post-hoc.
           | This is the primary flaw with utilitarianism as a moral
           | philosophy.
        
         | jefftk wrote:
         | I'm also confused why they drop this point. I don't give in to
         | this kind of threat because I expect overall a policy of giving
         | in leads to worse outcomes.
        
           | HWR_14 wrote:
           | Act utilitarians specifically don't believe in evaluating the
           | overall consequences of a policy. Rule utilitarians do that.
           | That is, in fact, the major difference between the two.
        
             | jefftk wrote:
             | Good point, I phrased it poorly. Because of the effects of
             | the specific action, I think an act utilitarian should
             | still refuse to be mugged in this case.
             | 
             | The policy I was describing is just a mental shortcut, a
             | part of adapting morality to human beings. See
             | https://en.wikipedia.org/wiki/Two-level_utilitarianism for
             | more in this direction.
        
         | HWR_14 wrote:
         | As an act utilitarian, the utilitarian was trying to evaluate
         | the consequences of the act, not a rule that could be followed
         | in multiple instances. Therefore, credibly claiming that the
         | act will be a secret removes any consideration of motivating
         | other people or being judged by other people, etc. (Missing
         | from the story was a promise by the mugger not to repeat this
         | with the utilitarian every day).
        
           | jefftk wrote:
           | I don't see why the utilitarian should trust a mugger's
           | promises of secrecy or non-replicability though?
        
             | hamishrayner wrote:
             | The mugger is a Deontologist in this scenario and therefore
             | does not lie. If the utilitarian couldn't trust the
             | mugger's promises, the whole scenario would fall apart as
             | they couldn't trust the mugger's promise to cut off their
             | finger.
        
               | jefftk wrote:
               | How does the utilitarian know this?
               | 
               | Any morality needs to take into account our uncertainty
               | about claims other people make.
        
               | sigilis wrote:
               | The mugger has a lapel pin denoting himself as a
               | deontological agent. Lapel pins in these fantasies cannot
               | be forged, I guess.
        
               | jefftk wrote:
               | If we're assuming unforgeable moral-method pins I don't
               | think we should expect intuitions generated in this sort
               | of thought experiment to be a good guide to what we
               | should actually think or do.
        
             | thecyborganizer wrote:
             | The mugger is a deontologist, right? We're already assuming
             | that he'll keep his promises.
        
       | ameliaquining wrote:
       | The problem here isn't with the main character's moral
       | philosophy, but with his decision theory. He'd be dealing with
       | exactly the same predicament if the mugger were threatening to
       | harm _him_.
       | 
       | The solution is indeed "don't give in to muggers", but it's
       | possible to define this in a workable way. Suppose the mugger can
       | choose between A (don't try to mug Bentham) or forcing Bentham to
       | choose between B (give in) or C (don't give in). A is the best
       | outcome for Bentham, B the best outcome for the mugger, and C the
       | worst for both. The mugger, therefore, is only incentivized to
       | force the choice if he expects Bentham to go for B; if he expects
       | Bentham to go for C, then it's in his interest to choose A.
       | Bentham, therefore, should have a policy of always choosing C, if
       | it's worse for the mugger than A; if the mugger knows this and
       | responds to incentives (as we see him doing in the story), then
       | he'll choose A, and Bentham wins.
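        | 
        | A toy sketch of those incentives in Python (the payoff numbers
        | are made up; only their ordering matters):
        | 
        |     # (bentham_utility, mugger_utility) for each outcome
        |     A = (0, 0)      # no mugging attempt
        |     B = (-10, 10)   # mugging happens, Bentham gives in
        |     C = (-50, -50)  # mugging happens, Bentham refuses
        | 
        |     def mugger_move(bentham_policy):
        |         # the mugger mugs only if the resulting outcome beats
        |         # walking away
        |         return bentham_policy if bentham_policy[1] > A[1] else A
        | 
        |     mugger_move(B)  # -> B: a policy of giving in invites muggings
        |     mugger_move(C)  # -> A: a credible refusal deters them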
       | 
       | And none of this has anything to do with utilitarianism, except
       | in the respect that utilitarianism requires you to make decisions
       | about which outcomes you want to try to get, just like any other
       | human endeavor.
        
         | tylerhou wrote:
         | It does have to do with utilitarianism -- if you change the
         | mugger to harming Bentham, the situation is different. In that
         | situation, many other reasonable moral theories would agree
         | with utilitarianism.
         | 
         | In the original situation, where the mugger is harming
          | themselves, the critique is that utilitarians are required
         | to treat their own interests as exactly the same as other
         | people's interests. It doesn't matter if someone is harming
         | themselves in order to provoke some action from you; if your
         | action prevents that harm, you are obligated to do that action
         | (even if you suffer because of it).
        
           | Micaiah_Chang wrote:
           | Yes, the point of the GP comment is exactly this, if Bentham
           | becomes an agent that goes for C, he _also_ explicitly
           | discourages the mugger from being an agent that would cut off
           | their fingers for a couple of bucks.
           | 
           | Notice that what Bentham is altering is their strategy and
           | not their utility. If they could spend 10 dollars to treat
           | gangrene and save the fingers, they would do it. It's not
           | clear many other morality systems would be as insistent on
           | this as utilitarianism, because practitioners of other
           | moralities curiously form epicycles defending why the status
           | quo is fine anyway, how dare you imply I'm worse at morality.
           | 
           | Edit: Slight wording change for clarity
        
             | slibhb wrote:
             | > practitioners of other moralities curiously form
             | epicycles defending why the status quo is fine anyway
             | 
             | This is exactly what the Bentham in the story is doing!
        
             | tylerhou wrote:
             | > if Bentham becomes an agent that goes for C, he also
             | explicitly discourages the mugger
             | 
             | How is this different from saying that if Bentham decides
             | to not adhere to utilitarianism, he is no longer vulnerable
             | to such a mugging? If Bentham always responds C, even when
             | actually confronted with such a scenario (the mugger was
             | not deterred by Bentham's claim), then Bentham is not a
              | utilitarian.
             | 
             | In other words, the GP is saying: "if Bentham doesn't
             | always maximize the good, he is no longer subject to an
             | agent who can abuse people who always maximize the good."
             | But that is exactly the point -- that utilitarianism is
             | uniquely vulnerable in this manner.
        
               | Micaiah_Chang wrote:
               | My wording is wrong, because it sounds like I'm saying
               | that Bentham is adopting the policy ad hoc. A better way
               | to state this is that Bentham _starts out_ as an agent
               | that does not give into brinksmanship type games, because
               | a world where brinksmanship type games exist is a
                | substantially worse world than ones where they don't
               | (because net-negative situations will end up happening,
               | it takes effort to set up brinksmanship and good actions
               | do not benefit more from brinksmanship). It's different
               | because by adopting C, Bentham prevents the mugger from
               | mugging, which is a better world than one where the
               | mugger goes on mugging. I don't see any contradiction in
               | utilitarianism here.
               | 
                | In a world where the premise of the thought experiment
                | is not true and "mugging" is net positive, calling it
                | mugging is disingenuous; that's just allocating
                | resources more optimally, and is more equivalent to the
                | conversation "hi bentham i have a cool plan for 10
                | dollars let me tell you what it is" "okay i have heard
                | your plan and i think it's a good idea here's 10 bucks"
                | 
                | Except that you are using the word "mugging" and
                | implying violence so that people view the interaction as
                | more absurd than it actually is.
        
               | tylerhou wrote:
               | > It's different because by adopting C, Bentham prevents
               | the mugger from mugging, which is a better world than one
               | where the mugger goes on mugging.
               | 
               | This assumption is wrong. You are assuming that the
                | mugger is also a utilitarian, so will do cost-benefit
               | analysis, and thus decide not to mug. But that is not
               | necessarily true.
               | 
               | If the mugger mugs anyway, despite mugging being
               | "suboptimal," Bentham ends up in a situation where he has
               | exactly the same choice: either lose $10, or have the
               | mugger cut off their own finger. If Bentham is to follow
               | (act-)utilitarianism precisely, he _must_ pay the mugger
               | $10. (Act-)utilitarianism says that _the only thing that
               | matters is the utility of the outcome of your action._ It
               | does not matter that Bentham previously committed to not
               | paying the mugger; the fact is, after the mugger
               | "threatens" Bentham, if Bentham does not pay the mugger,
               | total utility is less than if he does pay. So Bentham
               | _must_ break his promise, despite  "committing" not to.
               | (Assuming this is some one-off instance and not some kind
               | of iterated game; iteration makes things more
               | complicated.)
               | 
               | (In fact, this specific objection -- that utilitarianism
               | requires people to "give up" their commitments -- is at
               | the foundation of another critique of utilitarianism by
               | Williams: https://123philosophy.files.wordpress.com/2018/
               | 12/bernard-wi...)
               | 
               | If everyone were a utilitarian, then there would be far
                | fewer objections to utilitarianism. (E.g. instead of
                | asking people in wealthy countries to donate 90% of
                | their income to charity, we could probably get away with
                | ~5-10%.)
               | Bentham's mugging is a specific objection to
               | utilitarianism that shows how utilitarians are vulnerable
               | to manipulation by people who do not subscribe to
               | utilitarianism.
               | 
               | Also, to be precise, Bentham's mugging does not show a
               | contradiction. It's showing an unintuitive consequence of
               | utilitarianism. That's not the same thing as a
               | contradiction. (If you want to see a contradiction,
               | Stocker has a different critique:
               | https://www.jstor.org/stable/2025782.)
        
         | slibhb wrote:
         | The mugger cuts off his own fingers when a different
         | utilitarian doesn't pay him. Given that, and given that he's
         | right back at it after surgery, I don't think it's so clear
         | that he'll "respond to incentives" and stop mugging people if
         | people stop giving in.
         | 
         | After all, one of the premises here is that the mugger is a
         | deontologist. He doesn't care about outcomes.
        
         | amalcon wrote:
         | The mugger in the story is essentially contriving a situation
         | that turns him into a utility monster. He is arranging that he
         | will derive more benefit from the money than any other
         | plausible application -- by imposing a massive harm on himself
         | if he doesn't get the money. It's relatively straightforward to
         | vary the threat to adjust incentives as necessary -- e.g. the
         | binding deal with the thug later in the story.
        
         | wzdd wrote:
         | > none of this has anything to do with utilitarianism
         | 
         | "Always go for C (or any strategy)" is not in general a
         | utilitarian strategy, so the mugger would not expect Bentham to
         | employ it.
         | 
         | Your argument assumes that the characters have perfect
         | knowledge, but the point of the parody is that utilitarian
         | choices can change as more information is revealed.
         | 
         | Yes, the mugger could have said something like "if I were to
          | promise to cut off my finger unless you gave me £10, would you
          | do it?", Bentham could have followed up with "if you knew
         | I would reply no to that question, would you make that
         | promise?", the mugger could have replied "no," Bentham could
         | have responded "In that case, no", and the mugger would have
         | walked away. But Bentham doesn't have all the information until
         | he is faced with the loss of a finger which he can prevent by
          | giving up £10. Bentham is obliged to do so, as it maximises
         | the overall good at that (unfortunate) point.
         | 
         | The idea that Bentham can be "trapped" in a situation where he
         | is obliged to cause some small harm to himself in order to
         | prevent a greater harm is the parody of utilitarianism which is
         | at the heart of the story.
        
       | ertgbnm wrote:
       | The underlying assumption is that Bentham is a true act
       | utilitarian yet simultaneously has 10 pounds in his pocket that
       | he can stand to lose without much harm. If he truly were an act
       | utilitarian, the utility of the 10 pounds remaining in Bentham's
       | possession must be so high that it outweighs the mugger losing
       | their finger, otherwise Bentham would have already spent it on
       | something similarly utility maximizing. Clearly that 10 pounds
       | was already destined to maximize utility such as staving off
       | Bentham's hunger and avoiding his own death or the death of
       | others.
       | 
       | Meanwhile the utility of the mugger's finger is questionable. The
       | pain of losing the finger is the only real cost. If they are just
       | a petty criminal, the loss of their finger will probably reduce
        | their ability to commit crimes and prevent them from inflicting
        | as much suffering on others as they otherwise would have. Maybe
        | losing their finger actually increases utility.
       | 
       | Bentham: "I'm sorry Mr. Mugger but I am on my way to spend this
       | 10 pounds on a supply of fever medication for the orphanage and I
       | am afraid that if I don't procure the medicine, several children
       | will die or suffer fever madness. So when faced with calculating
       | the utility of this situation I must weigh your finger against
       | the lives of these children. Good day. And if the experience of
       | cutting your finger off makes you question your own deontological
       | beliefs, feel free to call upon me for some tutoring on the
       | philosophy of Act Utilitarianism."
       | 
       | Any other scenario and Bentham clearly isn't a true Act
       | Utilitarian and would just tell the Mugger to shove his finger up
       | his ass for all Bentham cares. Either strictly apply the rules or
       | don't apply them at all.
        
       | throwaway101223 wrote:
       | > Here's the thing: there is, clearly, more utility in me keeping
       | my finger than in you keeping your measly ten pounds.
       | 
       | How is this clear? This is one of the things I find strange about
       | academic philosophy. For all the claims about trying to get at a
       | more rigorous understanding of knowledge, the foundation at the
       | end of the day seems to just be human intuition. You read about
       | something like the Chinese Room or Mary's Room thought
       | experiments, that seem to appeal to immediate human reactions.
       | "We clearly wouldn't say..." or "No one would think..."
       | 
       | It feels like an act of obfuscation. People realize the fragility
       | of relying on human intuition, and react by trying to dress human
       | intuition up with extreme complexities in order to trick
       | themselves into thinking they're not relying on human intuition
       | just as much as everyone else.
        
         | smif wrote:
         | I think the point here is that it's subverting and redirecting
         | Bentham's own utilitarianism against itself. How does the
         | utilitarian decide which one of those has more utility? That's
         | a rhetorical question and it's sort of immaterial how that
         | question gets answered, because regardless of how they decide,
         | the dialogue is structurally describing how utilitarianism is
         | vulnerable to exploitation of this type.
        
         | tylerhou wrote:
         | Professional philosophers understand that many arguments rely
         | on intuition. But they need intuition to create basic premises.
         | Otherwise, if you have no "axioms" in your system of logic, you
         | cannot derive any sentences.
         | 
         | Also, moral philosophy deals with what is right and what is
         | wrong. These are inherently fuzzy notions and they likely
         | require some level of intuitive reasoning. ("It is clearly
         | wrong to kill an innocent person.") I would be extremely
         | surprised if someone could formally define what is right and
         | wrong in a way that captures human intuition.
         | 
         | It's also not worth debating philosophy with people who will
         | argue that $10 is not clearly worth less than a finger. (And if
         | you don't believe that, then we can consider the case with two
         | fingers, or three, or a whole hand, etc.).
        
           | throwaway101223 wrote:
           | > It's also not worth debating philosophy with people who
           | will argue that $10 is not clearly worth less than a finger.
           | 
           | Some of these arguments feel like the equivalent of spending
           | billions to create a state of the art fighter plane and not
           | realizing they forgot to put an engine inside of it.
           | 
           | It's not $10 vs. "a finger," it's $10 vs. the finger of
           | someone who goes about using their fingers to threaten people
           | to give them money. If the difference isn't immediately
           | obvious, I think it's time to step back from complex
           | frameworks and take a look at failures with common intuition.
        
             | tylerhou wrote:
             | The point is, to a utilitarian, it's a finger, because part
             | of the setup is that the "mugger" won't use their finger
             | for bad things in the future.
             | 
             | Maybe not part of this specific dialogue, where the mugger
             | repeatedly asks for rhetorical reasons. But in a case where
             | there is only a single instance of a mugging, the
             | assumption is that the mugger will only mug once.
        
         | TremendousJudge wrote:
         | I used to feel just like that. Then I learned that academic
         | philosophy studies this phenomenon as "metaethics". There are
         | arguments such as yours that would be considered "moral
         | skepticism". Read up on those (or watch a course like
         | https://youtu.be/g3f-Lfm8KNg); I think you'll find these
         | arguments agreeable.
        
       | alphazard wrote:
       | The most pressing problem facing utilitarians has never been
       | choosing between principled vs. consequentialist utilitarianism.
       | It's how to take a vector of utilities, and turn it into a single
       | utility.
       | 
       | What function do I use? Do I sum them, is it the mean, how about
       | root-mean-squared? Why does your chosen function make more sense
       | than the other options? Can I perform arithmetic on utilities
       | from two different agents, isn't that like adding grams and
       | meters?
        
         | jefftk wrote:
         | _> What function do I use?_
         | 
         | Traditionally you use sum, which gets you total utilitarianism.
         | Some have advocated avg which gets you average utilitarianism.
         | https://en.wikipedia.org/wiki/Average_and_total_utilitariani...
         | 
         |  _> root-mean-squared_
         | 
         | Why?
         | 
         |  _> Can I perform arithmetic on utilities from two different
         | agents?_
         | 
         | This is called "interpersonal utility comparison", and there's
         | a ton of literature on it. Traditionally utilitarians have
         | accepted it, and without it ideas like "sum the utility across
         | everyone" don't make sense.
        
         | pdonis wrote:
          | _> It's how to take a vector of utilities, and turn it into a
         | single utility._
         | 
         | Not just "how", but _whether_ doing such a thing is even
          | possible at all. And even that doesn't push the problem back
         | far enough: first the utilitarian has to assume that utilities,
         | treated as real numbers, are even measurable or well-defined at
         | all.
        
           | tome wrote:
           | I don't think a utilitarian requires that utilities are real
           | numbers, just that they satisfy a total ordering.
        
             | dragonwriter wrote:
             | It requires a total ordering _and_ an aggregation function
             | (and to be useful in the real world rather than purely
             | abstract, a reliable and predictive measuring mechanism,
              | but that's a different issue.) I'm pretty sure
             | (intuitively, haven't considered a formal argument) if both
             | exist, then there is a representation where utilities can
             | be represented as (a subset of) the reals.
        
               | pdonis wrote:
               | _> It requires a total ordering and an aggregation
               | function_
               | 
               | Yes. And note that this is true even for just a single
               | person's utilities, i.e., without even getting into the
               | issues of interpersonal comparison. For example, a single
               | person, just to compute their own overall utility (never
               | mind taking into account other people's), has to be able
               | to aggregate their utilities for different things.
               | 
               |  _> if both exist, then there is a representation where
               | utilities can be represented as (a subset of) the reals._
               | 
                | Yes. In more technical language, total ordering plus an
                | aggregation function means utilities have to form an
                | ordered field, and for any reasonable treatment that
                | field has to have the least upper bound property (i.e.,
                | any nonempty subset that is bounded above has to have a
                | least upper bound that is also in the field), and the
                | reals are, up to isomorphism, the only ordered field
                | that satisfies those properties.
        
         | dragonwriter wrote:
         | > It's how to take a vector of utilities, and turn it into a
         | single utility.
         | 
          | I mean, that's a problem that lots of people skip to in
         | utilitarianism, but the bigger problem is that utility isn't
         | really measurable in a way that produces a meaningful "vector
         | of utilities" in the first place.
        
       | 1970-01-01 wrote:
       | Reads like a ChatGPT argument with an idiot savant, with emphasis
       | on the idiot.
        
       | earthboundkid wrote:
       | Utilitarianism is supposed to be a strawman theory that you teach
       | in the first week of class in order to show the flaws and build a
       | real theory of ethics the remaining 14 weeks of the semester.
        | _SMDH_ at all these people who didn't get that basic point.
        
       | jjk166 wrote:
       | The problem here stems from trying to have some universal utility
       | values for acts. You can't say cutting off a finger is
       | fundamentally worse than losing 10 pounds, even if it frequently
       | would be. I wouldn't give up one of my fingers for 10 pounds, and
       | I think most sane people wouldn't either, but here the mugger is
       | willing to do that. So in this particular instance, the mugger is
       | valuing the utility of keeping his finger at 10 pounds, and thus
       | the decision on whether or not to give it to him is a wash. The
       | moment you start dictating what the utility values are of
       | consequences for other people you get absurd outcomes (e.g. some
       | of you may die, but it's a sacrifice I'm willing to make).
        
       | superb-owl wrote:
       | Maybe morality can't be quantified.
       | 
       | https://blog.superb-owl.link/p/contra-ozy-brennan-on-ameliat...
        
       | Veedrac wrote:
       | > If I find an unmuggable version of utilitarianism with more
       | explanatory power, I'll let you know.
       | 
       | Functional Decision Theory
        
       | erostrate wrote:
       | I used to be a utilitarian, but it made me morally repulsive,
       | which pushed my friends away from utilitarianism. I had to stop
       | since this had negative utility.
       | 
       | More seriously, any moral theory that strives too much for
       | abstract purity will be vulnerable to adversarial inputs. A blunt
       | and basic theory (common sense) is sufficient to cover all
       | practical situations and will prevent you from looking very dumb
       | by endorsing a fancy theory that fails catastrophically in the
        | real world. [1]
       | 
       | [1] https://time.com/6262810/sam-bankman-fried-effective-
       | altruis...
        
         | tim333 wrote:
         | I'm not sure that SBF being a crook shows that effective
         | altruism failed.
        
           | erostrate wrote:
           | One main idea of EA is that you should make a lot of money in
           | order to give it away. The obvious problem is that this can
           | serve as a convenient moral justification for greed. SBF
           | explicitly endorsed EA, Will MacAskill vouched for him, and I
           | understand he was widely admired in EA circles. And he turned
           | out to be the perfect incarnation of this problem, admitting
           | himself he just used EA as a thin veil.
           | 
           | What would you count as evidence that effective altruism
           | fails?
        
         | Smaug123 wrote:
         | SBF did have a _very_ unusual discounting policy, namely  "no
         | discounting", in fairness. I'm not aware of anyone other than
         | SBF who bites the "keep double-or-nothing a 51% probability
         | gamble forever, for infinite expected utility and probability 1
         | of going bust" bullet in favour of keeping going forever. (SBF
         | espoused this policy in March 2022, if I recall correctly, on
         | Conversations with Tyler.)
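          | 
          | A quick back-of-the-envelope sketch in Python of what that
          | policy implies:
          | 
          |     p = 0.51  # win probability per double-or-nothing round
          |     for n in (1, 10, 100):
          |         ev_multiple = (2 * p) ** n  # expected wealth, 1.02^n
          |         p_survive = p ** n          # chance of never busting
          |         print(n, ev_multiple, p_survive)
          | 
          |     # At n=100 the expected multiple is ~7.2x, but the chance
          |     # of never having gone bust is ~6e-30.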
        
       ___________________________________________________________________
       (page generated 2023-10-12 21:00 UTC)