






Abstract

In “Utility Cascades” (Analysis, 2020, 80(3): 433-42), Max Khan Hayward argues that act-utilitarians should sometimes either ignore evidence about the effectiveness of their actions or fail to apportion their support to an action’s effectiveness. His conclusions are said to have particular significance for the effective altruism movement, which centers on seeking and being guided by evidence. Hayward’s argument is that act-utilitarians are vulnerable to succumbing to ‘utility cascades’, that these cascades function to frustrate the ultimate goals of act-utilitarians, and that one apposite way to avoid them is by ‘ostriching’: ignoring relevant evidence. If true, this conclusion would have remarkable consequences for act-utilitarianism and the effective altruism movement. However, Hayward is mistaken—albeit in an interesting way and with broader significance for moral philosophy. His argument trades on a subtle mischaracterization of act-utilitarianism. Act-utilitarians are not especially vulnerable to utility cascades (or at least not objectionably so), and they shouldn’t ostrich.
In “Utility Cascades”, Max Khan Hayward raises a purported problem for act-utilitarians (and their effective altruist brethren): namely, that they are vulnerable to utility cascades, which “occur when ongoing rational updating of judgements concerning the effectiveness of an intervention causes a utilitarian to push a situation further and further away from the antecedently optimific outcome” (433). Hayward argues that, when facing a utility cascade, utilitarian agents should either ignore evidence about the effectiveness of their interventions (“to ostrich”) or not apportion their support of these interventions to their effectiveness.¹
¹ The argument is something of a tu quoque response to (Doody, ms), which criticizes a class of non-utilitarian views on the grounds that they, in some cases, recommend avoiding relevant
If true, this would be a remarkable finding—and one with far-reaching import for those who wish to live by the light of act-utilitarianism, including, as Hayward notes, members of the effective altruism movement. But, in what follows, I will argue that Hayward’s paper does not establish either of these claims, because, in principle, the sorts of act-utilitarians with which he is concerned will either (i) avoid the utility cascade or (ii) fail to avoid it but in an unobjectionable fashion. The night may be dark and full of terrors for diligent act-utilitarian effective altruists, but threats of utility cascades need not crowd out nightmares of the coming robot apocalypse.
2 Hayward’s Target
Hayward’s targets are act-utilitarians of a particular stripe: let’s call them expected utility maximizing act-utilitarians. Traditionally, act-utilitarianism is the view that one ought to take the available action that maximizes utility. Because we are rarely, if ever, in a position to know which action actually maximizes utility, act-utilitarianism is rarely action-guiding—which is fine; the view is meant to provide a criterion of rightness, not a decision-procedure (Railton, 1984). That said, act-utilitarianism is typically supplemented with a subjective criterion of rightness as well: one that says what one ought to do in light of one’s beliefs about the consequences of one’s actions.² One such proposal involves evaluating actions in terms of their expected utility. An action’s expected utility is the weighted sum of the utilities of its potential outcomes, where the weights correspond to one’s rational credence that that outcome will result.³ On these views, what you objectively
information—even when doing so is guaranteed to make everyone worse off. This criticism loses much of its bite if Hayward is correct that, in analogous cases, utilitarians, too, should avoid receiving relevant information.
² An early version of the distinction between ‘objective rightness’ and ‘subjective rightness’ can be found in Sidgwick (1907: 206-8). The distinction is now widely (although not universally) accepted: e.g., Brandt (1959: 381-5), Gibbard (1990: 42-3), Mulgan (2001: 42), Oddie and Menzies (1992: 512), Smart (1973: 46-7), Timmons (2002: 124), Zimmerman (1996: 10-20). One central motivation for the distinction is to account for cases in which we are torn between two conflicting ways of evaluating an agent’s actions, where one of those two ways seems to track praise and blame. For some, though, the distinction is also thought to help address worries concerning a moral theory’s action-guidingness. (For more discussion of these issues see Feldman (2006) and Smith (2010).) Different accounts of ‘subjective rightness’ can be given: in terms of, e.g., whether an act is likely to be objectively right (Hospers, 1961; Russell, 1910; Smart, 1973), or whether an act would be objectively right if the world were as the agent believes it to be (Brandt, 1959; Broad, 1985), or whether an act has highest expected value (Parfit, 1984; Timmons, 2002). Furthermore, some define ‘subjective rightness’ in terms of what the agent should believe given their evidence (Brandt, 1959; Gibbard, 1990; Hospers, 1961) rather than in terms of what the agent actually believes.
³ Hayward (2020) calls an initiative’s expected utility its “effectiveness score”, but I prefer to stick with the usual terminology because I worry that “effectiveness score” invites confusion: e.g., the
fail to apportion support to effectiveness. How are we to understand this ‘should’? If it’s the objective ‘should’, then Hayward’s conclusion is undoubtedly true—but uninteresting. When evidence is misleading, it would be better objectively to ignore it; the initiative that is actually best is what should objectively be supported, not the initiative (if it is different) that is expectedly best. One doesn’t need a utility cascade to show this, though; all one needs is an unpurchased winning lottery ticket. So, instead, I presume that the ‘should’ in the conclusion is meant to be subjective: that is, in some cases, act-utilitarians subjectively ought to ignore evidence, or subjectively ought to fail to apportion support in accordance with an initiative’s expected utility. This is, I take it, what Hayward’s utility cascades are meant to show. I agree that, if they succeed in showing this, that’s an interesting and troubling problem for those, like members of the effective altruism movement, who are broadly sympathetic to act-utilitarianism. But, we have good reason, or so I’ll argue, to be skeptical that Hayward’s argument succeeds.
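For concreteness, the notion of expected utility at work throughout can be stated in the standard textbook way (this is the usual formulation, not notation drawn from Hayward’s paper):

$$\mathrm{EU}(a) \;=\; \sum_{o} Cr(o \mid a)\, u(o),$$

where the sum ranges over the potential outcomes $o$ of action $a$, $Cr(o \mid a)$ is the agent’s rational credence that outcome $o$ would result from performing $a$, and $u(o)$ is the utility of $o$. On the subjective reading just sketched, the act-utilitarian ought to perform, and apportion support toward, whatever maximizes this quantity.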
3 Unpacking Hayward’s Argument
The argument is developed narratively—Hayward engagingly sets forth two examples of a utility cascade: the first, an intrapersonal case; the second, an interpersonal case. In what follows, I focus on the former, developing lines of critique that have application in the interpersonal case as well.
Bill, Hayward’s protagonist, is a wealthy philanthropist who is considering backing the rollout of a vaccine, effectanol. In June, the evidence available to Bill suggests that effectanol is 80% effective, which, given the good it could yield, means that the option donate $10,000 per month to effectanol maximizes expected utility. In July, Bill learns that the vaccine is only 70% effective, which changes his assignment of expected utilities so that, now, the option donate $8,000 per month to effectanol and $2,000 to mosquito nets maximizes expected utility.⁷ But, because
⁷ As an aside, it’s worth re-emphasizing the difference between the effectiveness of the vaccine initiative (which, according to the story, has dropped) and the expected utility of supporting that initiative by allocating a particular sum of money to it. The fact that the former has dropped doesn’t entail anything in particular about the latter. In fact, a drop in the effectiveness of the vaccine might rationalize allocating more, not less, money to the initiative. Suppose, for example, that Bill initially regards the vaccine as 100% effective. Among other things, Bill hopes the vaccine initiative will help the community achieve herd immunity. Furthermore, suppose that herd immunity can only be achieved if 70% or more of the population have immunity to the disease. If Bill were to learn that the vaccine is actually only 70% effective, it might make sense for him to allocate more money to the program because, now, it appears that more dosages will be needed—we’d need to inoculate everyone rather than only 70%—in order for herd immunity to
of Bill’s reduced support, in August, Bill learns that the effectanol project is now even less likely to succeed, which makes it the case that, now, the option donate $4,000 per month to effectanol and $6,000 to mosquito nets is what maximizes expected utility. Bill’s reduced support makes it such that, in September, he is forced to conclude that the project is no longer worth investing in at all. So Bill reduces his support to $0 per month. Once all is said and done, Bill has wasted thousands of dollars on a vaccine program that is ultimately unsuccessful. He has, Hayward tells us, fallen victim to the eponymous ‘utility cascade’.

But what’s the real problem here? Here’s one way we could read the case. Bill, we might say, has behaved in a way that has brought about a sub-optimal outcome; as it turns out, it would’ve been better for him to have spent the entirety of his $10,000 per month on the mosquito nets all along. But, under conditions of uncertainty, bringing about an outcome that turns out to be sub-optimal isn’t particularly objectionable, even for the act-utilitarian. We do it every time we buy fire insurance and our house doesn’t burn.

What does this have to do with Hayward’s claim that Bill would have done better to have ignored evidence? Hayward suggests that it would’ve been better for Bill to have never learned that effectanol was only 70%, rather than 80%, effective. This, in turn, suggests that the option donate $10,000 per month to effectanol is the one that maximizes actual utility—it was the optimal choice. But if that’s right, then the information Bill received in July—the information that led him to conclude that effectanol was no longer worth donating the $10,000 per month to—was misleading.⁸ Of course, there is a sense in which it is better to disregard misleading information (it always is), but, given that Bill had no reason to suspect that his evidence was misleading, there’s nothing particularly objectionable about his having failed to do so.

On the other hand, it would be objectionable for Bill to predictably bring about a sub-optimal outcome (or to knowingly fail to disregard misleading information). But, given that Bill is a rational act-utilitarian who maximizes expected utility, he won’t do these things. If he knows that some action will result in a sub-optimal outcome relative to some other, the expected utility of the former will be lower than that of the latter; and, because Bill maximizes expected utility, he will avoid performing it.

Let’s apply this last thought to Hayward’s utility cascade. Suppose that, in July (after receiving the disappointing news about effectanol), Bill can predict that, if
be secured.
⁸ It might be that the information was correct about the effectiveness of the effectanol vaccine—it’s only 70%, and not 80%, effective—but misleading in the sense that it downgraded the expected utility of the option that, as a matter of fact, maximized utility. To borrow a distinction from (Buchak, 2010), the information might not be epistemically misleading, but nevertheless misleading in an instrumental sense.
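The arithmetic behind footnote 7’s herd-immunity point can be made explicit, on the simple assumption (mine, not the paper’s) that ‘effectiveness’ is the per-recipient probability of conferring immunity. If a fraction $c$ of the population is vaccinated with a vaccine of effectiveness $e$, the immune fraction is $e \cdot c$, and herd immunity requires

$$e \cdot c \;\ge\; 0.70.$$

With $e = 1$, coverage of $c = 70\%$ suffices; with $e = 0.70$, only full coverage ($c = 100\%$) clears the threshold, so more doses, and plausibly more money, would be needed.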
low expected utility to lowering his contribution to $8,000 isn’t because future-Bill will lower the contribution even further; instead, it’s because, if he lowers the donation, the vaccine initiative won’t be worth funding at that level—a fact that he’s both in a position to appreciate now as well as anticipate appreciating in a month from now. By assigning a low expected utility to reducing his donation, Bill isn’t capitulating to his future-self; he’s not making a concession to a future decision that he, now, doesn’t endorse; instead, he’s responding to the fact that funding the initiative at $8,000 will result in it no longer being worth funding at that level. It’s not the cascade itself that explains why Bill assigns low expected value to the option; rather, it’s the facts underlying the cascade—the facts about the effects his choices have on the initiative’s success, to which his future-selves would be responding—which justify his evaluations of the options. So, contra Hayward, “this lowering of efficiency scores” is based on features of the situation, and not on Bill’s vulnerability to cascades.
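To make this vivid, here is a minimal toy calculation in Python, with wholly invented figures (Hayward’s paper supplies no specific utilities or probabilities): a forward-looking Bill evaluates each funding level using the success probability the initiative would have at that level, and, under these assumptions, lowering the donation to $8,000 scores worst, so the cascade never gets started.

# A toy numerical sketch (all figures invented for illustration) of the point
# above: when Bill evaluates the option of lowering his donation to $8,000,
# the relevant success probability for the effectanol initiative is the one
# it would have if funded at that level, not the one it has at the current
# $10,000 level.

BENEFIT_IF_SUCCESS = 30_000      # hypothetical utility of a successful rollout
NETS_UTILITY_PER_DOLLAR = 1.0    # hypothetical utility per dollar spent on nets

# Hypothetical success probabilities for the initiative at each funding level:
# these are the "features of the situation" a forward-looking Bill responds to.
P_SUCCESS_AT = {10_000: 0.60, 8_000: 0.25, 0: 0.0}

def expected_utility(vaccine_dollars: int) -> float:
    """Forward-looking EU of splitting Bill's $10,000 between effectanol and nets."""
    nets_dollars = 10_000 - vaccine_dollars
    p = P_SUCCESS_AT[vaccine_dollars]   # probability conditioned on this funding level
    return p * BENEFIT_IF_SUCCESS + nets_dollars * NETS_UTILITY_PER_DOLLAR

for vaccine_dollars in (10_000, 8_000, 0):
    print(f"donate ${vaccine_dollars:>6,} to effectanol: EU = {expected_utility(vaccine_dollars):,.0f}")

# With these made-up numbers, lowering the donation to $8,000 scores worst
# (EU 9,500, versus 18,000 for staying at $10,000 and 10,000 for withdrawing),
# which mirrors the point in the text: the low score reflects what funding at
# that level would do to the initiative, not a brute vulnerability to cascades.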
The central lesson of the previous section—that expected utility maximizing act-utilitarians will either avoid the utility cascade or fail to avoid it but in a way that is not objectionable—carries over to Hayward’s interpersonal example, which deals with the problem of climate change. In this case, Bill is recognized by other philanthropists as an important contributor to charitable projects: his every move is scrutinized and others weigh the likely efficacy of their own contributions in relation to those that they assume Bill will tender.

Now, in this case, either Bill knows that, for example, shifting his support from preventative measures to mitigative ones will herald a similar shift on the part of other members of his community (thus rendering the preventative measures even less likely to succeed than before) or he doesn’t. Suppose he does know this. Then, given that he’s a rational act-utilitarian, he will take that information into account when deciding what to do. So after receiving the first round of bad news about the efficacy of the preventative measures, it’s not obvious that Bill is committed to shifting his contributions from prevention to mitigation. That bad news will lower the expected utility he assigns to the preventative strategy, but—given that, as we’re supposing, he can predict that other members of his community will follow his lead (plus, that at least some of them are privy to the same information he is)—it’s implausible to think that it will have lower expected utility than shifting his contributions from prevention to mitigation will. Why? So long as Bill isn’t shortsighted, he’ll know that the predictable result of shifting his contributions away from prevention will eventually result in his (and the rest of the community’s) going all in for mitigation. So, unless
Bill thinks that it’s not worth putting any resources toward prevention, he’ll regard keeping his contributions where they are as the better of the two options. As before, the cascade is avoided.

On the other hand, if Bill has no idea that his decision will influence the behavior of the others (and his future-self), then down the cascade he might go. But why is that a mark against act-utilitarianism? If it turns out that, unbeknownst to me, a murderous villain will destroy the planet if I snap my fingers, there’s a sense (the objective one) in which I shouldn’t snap my fingers; but, given that I have no inkling of the influence my decision will have, how am I criticizable? Furthermore, just as before, this is a case in which Bill receives misleading evidence about the effectiveness of the potential interventions—the evidence he receives suggests that prevention isn’t worth supporting at its current level when, in fact, it is.

Let me address a potential objection—analogous to the one from §3.1—which holds that, while Bill might be doing the best he can do given the situation in which he finds himself, that situation would nevertheless be even better if it didn’t contain so many act-utilitarians, but instead contained agents who acted in accordance with different principles.⁹ The worry is that, if Bill’s reason for assigning very low expected utility to shifting his support from prevention to mitigation is that he anticipates that this will herald a similar shift on the part of the other members of his community, this is an example of act-utilitarians collectively getting in their own way. As before and for analogous reasons, I think this is mistaken. The reason Bill should assign very low expected utility to shifting his resources from prevention to mitigation isn’t because other members of the community will lower their contributions per se; instead, it’s because, if he shifts his support away from prevention, prevention won’t be worth funding by his peers either. His contributions—or lack thereof—alter the effectiveness of the interventions, which in turn affects (in ways he wholly endorses) what other act-utilitarians ought to do.

That said, one potentially important difference between Hayward’s interpersonal case and his intrapersonal one is that Bill’s peers might act contemporaneously—and, thus, absent knowledge of what he and the others have likewise decided. This added degree of uncertainty might make it difficult—especially if the decisions are made independently and in ignorance of each other—for the community to coordinate on the optimal level of funding. But, as previously argued, it’s far from clear that bringing about a sub-optimal outcome in the face of uncertainty is particularly objectionable.¹⁰ However, even if this were objectionable in some sense,
⁹ Thanks to an anonymous referee for pressing this point.
¹⁰ Spelled out in this way, Hayward’s interpersonal case potentially raises some interesting issues about coordination. Suppose, for example (and for simplicity), that there are two effective altruists—
that they’d endorse making. But, in any case, what’s called for is more information, not less. The lesson I take from Hayward’s examples is not that act-utilitarians should avoid evidence, or selectively ignore it, but that, rather, they should seek out as much information as possible. Their view does not license ostriching.

All that said, I think Hayward’s utility cascades potentially do raise some very interesting issues—if not for impeccably rational expected utility maximizing act-utilitarians, then at least for the rest of us—about how best to navigate a complex, interconnected world. It might be that, in an environment replete with potential cascades, groups of expected utility maximizers will do worse—in some sense—than groups employing some other decision-rule. When maximizing is difficult, it might be better to not even try—to adopt some simple heuristic instead (e.g., Todd and Gigerenzer, 2012).¹¹ That might be right, but more would be required to show it.¹²
References
Berkey, Brian. 2018. The Institutional Critique of Effective Altruism. Utilitas, 30: 143–
——. 2021. The Philosophical Core of Effective Altruism. Journal of Social Philosophy, 52(1): 93–
Brandt, Richard. 1959. Ethical Theory: The Problems of Normative and Critical Ethics (Englewood Cliffs: Prentice-Hall)
Broad, C.D. 1985. Ethics, chap. 3 (Dordrecht: Martinus Nijhoff)
Broi, Antonin. 2019. Effective Altruism and Systemic Change. Utilitas, 31: 262–276
¹¹ Of course, care is called for in our choice of heuristics. Some members of the effective altruism movement (e.g., MacAskill, 2015; Wiblin, 2016) endorse a heuristic for “cause prioritization”—the Importance-Tractability-Neglectedness (ITN) Framework—that is particularly prone to misfire when applied to Hayward’s examples. In particular, if Hayward (2020: 436) is correct that these are cases of increasing marginal utility, then Neglectedness (or the lack thereof) would not provide a reliable guide to how one ought to prioritize various causes: e.g., the fact that many others have already invested in prevention—that it is anything but neglected—doesn’t mean that it would be less valuable for you to, too. (For similar criticisms of the ITN framework, see Broi, 2019; Halstead, 2019.) Because all heuristics are just that, the measure of a good one cannot be infallibility. But, if Hayward’s cases are prevalent enough, it might be wise for effective altruists to move away from the ITN framework nonetheless. Thanks to an anonymous referee for suggesting this point.
¹² For helpful feedback and discussion, I’d like to thank Max Hayward, Simone Gubler, Frances Howard-Snyder, an audience at the 2021 APA Eastern Division Meeting, and an audience at the 2020 Annual PPE Society Meeting. I would also like to thank Geoff Sayre-McCord for starting the cascade of events that occasioned the existence of this paper, and two anonymous referees for their helpful feedback.
Buchak, Lara. 2010. Instrumental Rationality, Epistemic Rationality, and Evidence-gathering. Philosophical Perspectives, 24: 85–
Collins, Stephanie. 2019. Beyond Individualism. In Effective Altruism: Philosophical Issues, edited by Hilary Greaves and Theron Pummer (Oxford: Oxford University Press), pp. 202–
Dietz, Alexander. 2019. Effective Altruism and Collective Obligations. Utilitas, 31(1): 106–
Doody, Ryan. ms. Consider the Ostrich: Non-Utilitarians, Ex Ante Interest, and Burying Your Head in the Sand. Unpublished manuscript
Feldman, Fred. 2006. Actual Utility, the Objection from Impracticality, and the Move to Expected Utility. Philosophical Studies, 129(1): 49–
Gibbard, Allan. 1965. Rule-Utilitarianism: Merely an Illusory Alternative? Australasian Journal of Philosophy, 43: 211–
——. 1990. Wise Choices, Apt Feelings (Oxford: Oxford University Press)
Good, I.J. 1967. On the Principle of Total Evidence. The British Journal for the Philosophy of Science, 17(4): 129–
Halstead, John G. 2019. The ITN framework, cost-effectiveness, and cause prioritisation. Effective Altruism Forum. URL https://forum.effectivealtruism.org/posts/Eav7tedvX96Gk2uKE/the-itn-framework-cost-effectiveness-and-cause
Hayward, Max Khan. 2020. Utility Cascades. Analysis, 80(3): 433–42
Hospers, John. 1961. Human Conduct: An Introduction to the Problems of Ethics (New York: Harcourt, Brace and World)
Jackson, Frank. 1991. Decision-theoretic Consequentialism and the Nearest and Dearest Objection. Ethics, 101: 461–
MacAskill, William. 2015. Doing Good Better: How Effective Altruism Can Help You Make a Difference (New York: Penguin Random House)
——. 2019. The Definition of Effective Altruism. In Effective Altruism: Philosophical Issues, edited by H Greaves and T Pummer (Oxford: Oxford University Press)
Mulgan, Tim. 2001. The Demands of Consequentialism (Oxford: Clarendon Press)