Stanford Encyclopedia of Philosophy (SEP) | Rule Consequentialism

Source: Philosophical Cooperative

Rule consequentialism

The following is a translation of the main content of the "Rule Consequentialism" entry in the Stanford Encyclopedia of Philosophy, as revised in November 2015 (for the current version of the entry and further information, such as related entries, see the link below).

The entry was translated by Lu Yifan (sections 1–5) and Li Ruohan (sections 6–9), students in the class of 2020 of the modern foreign philosophy international class at the School of Philosophy, Wuhan University, and then proofread by their supervisor Ge Siyou. This post is the first from the modern foreign philosophy international class of Wuhan University, and the "Translation" series of New Year dedications has been published in four installments in total.

First published Wednesday, December 31, 2003; substantive revision Wednesday, November 18, 2015

We can call the moral theory that picks rules solely on the basis of their consequences, and then claims that these rules determine which kinds of acts are morally wrong, full rule-consequentialism. George Berkeley was arguably the first rule-consequentialist. He wrote: "In framing the general laws of nature, we must be guided entirely by the public good of mankind, but not in the ordinary moral actions of our lives. ... The rule is framed with respect to the good of mankind, but our practice must always be shaped immediately by the rule" (Berkeley 1712: section 31). Scholars commonly classified as rule-consequentialists include Austin (1832), Harrod (1936), Toulmin (1950), Urmson (1953), Harrison (1953), Mabbott (1953), M. Singer (1955, 1961), and, most influentially, Brandt (1959, 1963, 1967, 1979, 1989, 1996) and Harsanyi (1977, 1982, 1993). Others include Rawls (1955), Ezorsky (1968), Ihara (1981), Haslett (1987; 1994: ch. 1; 2000), Attfield (1987: 103–12), Barrow (1991: ch. 6; 2015), Johnson (1991), Riley (2000), Shaw (1999), Hooker (2000, 2005), Mulgan (2006, 2009), Ridge (2006, 2009), B. Miller (2009), Parfit (2011), Cowen (2011), Kahn (2012, 2013), Levy (2013), Tobia (2013), and D.E. Miller (2013, 2014). Whether J.S. Mill's ethics is a form of rule-consequentialism is debated (Urmson 1953; Lyons 1994: 47–65; Crisp 1997: 102–33; D.E. Miller 2010: 79–110).

1. Utilitarianism

2. Welfare

3. Other Goods To Be Promoted

4. Full Rule-consequentialism

5. Global Consequentialism

6. Formulating Full Rule-consequentialism

○6.1 Actual versus Expected Good

○6.2 Compliance and Acceptance

○6.3 Complete Acceptance versus Incomplete Acceptance

7. Three Ways of Arguing for Rule-consequentialism

8. Must Rule-consequentialism Be Guilty of Collapse, Incoherence, or Rule-worship?

9. Other Objections to Rule-consequentialism

Bibliography

Academic Tools

Other Internet Resources

Related Entries

1. Utilitarianism

A moral theory is a form of consequentialism if and only if it assesses acts, character traits, practices, and institutions solely in terms of the goodness of their consequences. Utilitarianism is historically the most famous form of consequentialism. It evaluates acts, character traits, practices, and institutions solely in terms of overall net good, which is typically taken to be aggregate benefit or well-being (welfare). Aggregate well-being is calculated by counting each individual's benefits and harms impartially and then summing all of them to arrive at a total. Which theory of welfare is best, however, is controversial among consequentialists.

2. Welfare

Classical utilitarians (such as Jeremy Bentham, Mill, and Henry Sidgwick) took benefits and harms to consist entirely in pleasure and pain. The view that welfare is pleasure minus pain is generally referred to as hedonism. Although hedonism has grown increasingly sophisticated (Parfit 1984: Appendix I; Sumner 1996; Crisp 2006; de Lazari-Radek and Singer 2014: ch. 9), it remains committed to the claim that the quality of an individual's life depends entirely on that individual's pleasure minus pain, although pleasure and pain have come to be understood very broadly.

Even if we understand pleasure and pain very broadly, hedonism still faces various difficulties. The main difficulty is that many people (if not all) care deeply about things other than their own pleasures and pains. Of course, such things may matter as means to obtaining pleasure and avoiding pain. But many people care about these things more intensely than their hedonistic instrumental value would explain. For example, many people want to know the truth about various matters even if this would not increase their (or anyone else's) pleasure. Again, many people want to achieve things, and not merely because of the pleasure those achievements would bring. Similarly, many people's concern for the well-being of their family and friends is not merely instrumental. An explanation of these attitudes, especially the last, that competes with hedonism is that people care about many things other than their own welfare.

Any sensible conception of welfare will accept that the satisfaction people feel when their desires are fulfilled increases their welfare, and that the frustration they feel when their desires go unfulfilled decreases it. What is debatable is whether the fulfilment of someone's desires benefits that person in itself, apart from its effects on felt satisfaction or frustration. The hedonist's answer is no: only the effects on satisfaction or frustration matter.

A different theory of welfare gives an affirmative answer. On this theory, the satisfaction of an actor's desire constitutes a benefit to that actor even if the actor never learns that the desire has been satisfied and derives no pleasure from its satisfaction. This theory is often referred to as the desire-satisfaction theory of welfare.

Clearly, the desire-satisfaction theory of welfare is broader than hedonism, since it holds that things other than pleasure can constitute benefits. However, it is reasonable to think that this broad theory is too broad. On the one hand, people may have perfectly legitimate desires about matters that have nothing to do with their own lives and therefore nothing to do with their own welfare (Williams 1973: 262; Overvold 1980, 1982; Parfit 1984: 494). For example, if I desire that some stranger who is starving in a foreign country get food, the satisfaction of that desire is not itself a benefit to me.

On the other hand, people may also have ridiculous desires concerning their own lives. Suppose I desire to count the blades of grass in the lawns along this road. If I derive satisfaction from doing so, that satisfaction constitutes a benefit to me. But the mere satisfaction of my desire to count all the blades of grass does not itself constitute a benefit to me (Rawls 1971: 432; Parfit 1984: 500; Crisp 1997: 56).

Upon careful reflection, we may think that the satisfaction of desires adds to welfare only if those desires have certain contents. For example, we might think that satisfying someone's desires for pleasure, friendship, knowledge, achievement, or autonomy really does increase her well-being, but that satisfying her desires for anything else does not benefit her directly (although the pleasure she derives from such satisfactions does). If we think this way, we seem to hold that there is a list of things that constitute anyone's welfare (Parfit 1984: Appendix I; Brink 1989: 221–36; Griffin 1996: ch. 2; Crisp 1997: ch. 3; Gert 1998: 92–4; Arneson 1999a).

As long as welfare is the only good to be promoted, the theory remains utilitarian. Utilitarianism has much to be said for it. Obviously, how well lives go matters. And the idea that morality is fundamentally impartial, that is, that at the most fundamental level men and women, the strong and the weak, the rich and the poor, blacks, whites, Latin Americans, and Asians all matter equally, is deeply attractive, if not completely irresistible. Utilitarianism interprets this impartiality to mean that, in the calculation of overall welfare, a benefit or loss to one person counts exactly the same as an equally large benefit or loss to any other person, no more and no less.

3. Other Goods To Be Promoted

The non-utilitarian members of the consequentialist family are theories that evaluate acts, character traits, practices, and institutions solely in terms of the goodness of their consequences, but do not restrict the good to welfare. "Non-utilitarian" here means "not purely utilitarian", not "entirely non-utilitarian". When scholars describe themselves as consequentialists rather than utilitarians, they are usually indicating that their fundamental moral evaluations appeal not only to welfare but also to some other good. What are these other goods? The most common answers are justice, fairness, and equality.

According to Plato, justice is "giving each person his due" (Republic, Book 1). We might think that what people are due is a matter of what they deserve, or of what they have moral rights to. If we insert these views into consequentialism, we get a theory on which things are evaluated not only by how much welfare they produce, but also by the extent to which people get what they deserve and the extent to which moral rights are respected.

However, if consequentialism takes this line, its explanatory ambitions are limited. A theory does not explain what it merely presupposes. If a consequentialist theory assumes that justice is constituted by one element or another and that justice is among the things to be promoted, it does not explain why those elements of justice matter. It explains neither desert nor the importance of moral rights, much less what determines their content. But for consequentialists these matters are too important and too controversial to be left unexplained or open. If consequentialism is to appeal to justice, desert, and moral rights, it needs to analyze these concepts and defend the role it gives them.

The same question arises for fairness. If consequentialism presupposes a theory of fairness and simply prescribes that fairness be promoted, then the theory does not explain fairness. But fairness (like justice, desert, and moral rights) is too important a concept for consequentialism to leave unexplained.

One way consequentialists respond concerning justice and fairness is to hold that claims of justice and fairness consist in conformity to a set of social practices whose justification is that they generally promote overall welfare and equality. In effect, this proposal may mean that what people deserve, what moral rights they have, and what fairness and justice require must cohere with whatever practices promote overall welfare and equality.

However, whether equality needs to be added to this formulation is controversial. Many believe that a purely utilitarian formulation already has sufficiently egalitarian implications. They argue that, even if the aim is merely to promote welfare rather than welfare plus equality, there are contingent but very widespread facts about human beings that push material resources toward an equal distribution (Brandt 1979).

According to the law of diminishing marginal utility of material resources, the more units of a good an individual already has, the less benefit an additional unit brings. Suppose I have to get around on foot and go from having no bicycle to having one, or I live in a very cold place and go from having no warm coat to having one. I benefit more from acquiring the first bicycle than from going from nine bicycles to ten, and more from acquiring the first coat than from going from nine coats to ten.

There are exceptions to the law of diminishing marginal utility. Most exceptions involve an additional unit of a material resource pushing someone over an important threshold: for example, the extra food, pill, or air that saves someone's life, or the additional car that pushes a competitive collector into first place. In such cases, the unit that takes someone past the threshold may well be as beneficial as any earlier unit. In general, however, material resources have diminishing marginal utility.
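As a toy illustration of the idea just described (not anything from the entry itself), the following sketch models welfare from a good with a concave utility function, so that each extra unit adds less than the one before; the threshold case is modeled separately. The square-root utility function and the numbers are arbitrary assumptions chosen only for illustration.

import math

def marginal_utility(units_already_owned: int) -> float:
    """Welfare gained from one more unit, under a concave (square-root) utility function."""
    return math.sqrt(units_already_owned + 1) - math.sqrt(units_already_owned)

# Diminishing marginal utility: the first bicycle helps far more than the tenth.
print(marginal_utility(0))   # gain from going from 0 to 1 bicycle  (= 1.0)
print(marginal_utility(9))   # gain from going from 9 to 10 bicycles (~ 0.16)

# Threshold exception: a unit that pushes someone over a critical level
# (e.g., the food that saves a life) can be worth as much as any earlier unit.
def marginal_utility_with_threshold(units: int, threshold: int, bonus: float) -> float:
    gain = marginal_utility(units)
    if units + 1 == threshold:
        gain += bonus  # crossing the threshold adds a large extra benefit
    return gain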

To the assumption of diminishing marginal utility of material resources, add the assumption that different people usually receive roughly the same benefits from the same material resources. Again, there are exceptions. If you live where the climate is harsh and I live where it is hot, you will benefit far more than I would from the same warm coat.

However, suppose we live in the same place, where the winters are cold, there are good bicycle paths, and there is no public transport. Suppose you have ten bicycles and ten coats (though you are not competing for the title of bicycle or coat collector), while I am poor, with neither a bicycle nor a coat. Then, if one of your bicycles and one of your coats are given to me, you will almost certainly lose less than I gain. As long as resources are unevenly distributed in a society, this situation is widespread. And wherever it arises, any morality that is impartial at the fundamental level faces pressure to redistribute resources from the rich to the poor.
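Continuing the toy sketch above (again an illustrative assumption, not anything in the entry), a transfer of one unit from someone with many units to someone with none raises total welfare when both have the same concave utility function:

import math

def utility(units: int) -> float:
    """Total welfare from owning `units` of a good, concave as in the sketch above."""
    return math.sqrt(units)

# You own ten coats; I own none. Transfer one coat from you to me.
your_loss = utility(10) - utility(9)   # ~0.16 units of welfare
my_gain = utility(1) - utility(0)      # 1.0 unit of welfare

# Total welfare rises because my gain exceeds your loss.
print(my_gain > your_loss)  # True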

However, there are also contingent but widespread facts about human beings that support practices leading to an unequal distribution of resources. First, higher levels of overall welfare require higher levels of productivity (think of the gains in welfare that came with higher agricultural productivity). In many areas of the economy, offering high material rewards for efficient production appears to be the most acceptable and effective way of eliciting higher productivity. Some individuals and groups are more efficient producers than others (especially when incentives are on offer), so the practice of offering material incentives for productivity generates an unequal allocation of resources.

So, on the one hand, the diminishing marginal utility of material resources supports a more equal allocation of resources. On the other hand, the need to increase productivity supports incentive schemes, which foreseeably lead to inequalities in the allocation of resources. Utilitarians and most other consequentialists find that they need to balance these opposing pressures.

Note that these pressures concern the allocation of resources. There is a further question about whether welfare itself should be distributed equally. Many scholars have recently argued that utilitarianism is indifferent to the distribution of welfare. Imagine a choice between two outcomes, one with a larger total of welfare distributed unequally and the other with a smaller total distributed more equally. It is generally agreed that utilitarians must favor the outcome with greater overall welfare, even though its distribution is less equal.

To illustrate this point, consider a simple artificial example in which the population is divided into just two groups:

Option 1: 10,000 people with 1 unit of welfare each; 100,000 people with 10 units each (total: 1,010,000 units).
Option 2: 10,000 people with 8 units of welfare each; 100,000 people with 9 units each (total: 980,000 units).

Many would say that Option 2 above is better than Option 1, and the comparison between the two options might suggest that there is always some pressure in favor of greater equality of welfare.

However, as Derek Parfit (1997) has argued, we should not be too hasty. Consider the following options:

[Table not reproduced in the source: it adds an Option 3, in which welfare is distributed equally but at a level lower than in Option 1.]

Is equality of welfare so important that Option 3 is better than Option 1? Consider Parfit's example: if equality of eyesight could be achieved only by making everyone completely blind, would morality require such "levelling down"? Would it even be morally desirable?

If our answer is no, then we might hold that equality of welfare is not really an ideal after all (see Temkin 1993). Losses to the better-off are justified only when they benefit the worse-off. What we took to be pressure in favor of equality of welfare is really pressure in favor of higher levels of welfare for the worse-off. We might say that the worse off the people affected are, the more important it is to increase their welfare. This view has come to be known as prioritarianism (Parfit 1997; Arneson 1999b), and it has strong intuitive appeal.

To see how prioritarianism works, consider a simplified example in which the welfare of the worst-off group is given five times the weight of everyone else's. Option 1 in the table above then totals (1 × 5 × 10,000) + (10 × 100,000) = 1,050,000 weighted units of welfare. Weighting the worst-off group's welfare by five again, Option 2 totals (8 × 5 × 10,000) + (9 × 100,000) = 1,300,000 weighted units. Option 2 then comes out morally superior to Option 1, which matches the common reaction.
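The weighted sums above can be written out as a small sketch; the factor of five and the group sizes are the simplified assumptions of this example, and the name prioritarian_value is shorthand introduced here only for illustration.

def prioritarian_value(groups, worst_off_weight=5.0):
    """Sum of (weight x welfare x group size), where only the worst-off group gets extra weight.

    `groups` is a list of (welfare_per_person, number_of_people) pairs.
    """
    worst_welfare = min(welfare for welfare, _ in groups)
    total = 0.0
    for welfare, size in groups:
        weight = worst_off_weight if welfare == worst_welfare else 1.0
        total += weight * welfare * size
    return total

option_1 = [(1, 10_000), (10, 100_000)]
option_2 = [(8, 10_000), (9, 100_000)]

print(prioritarian_value(option_1))  # 1,050,000
print(prioritarian_value(option_2))  # 1,300,000 -> Option 2 comes out better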

Of course, in real cases a society is never divided into only two groups. Rather, there is a scale running from the worst off, to the somewhat less badly off, and so on up to the best off. Prioritarianism holds that the welfare of people at different positions on this scale matters to different degrees: the worse off someone is, the more her welfare matters.

But this raises two serious worries about prioritarianism. The first is that it seems arbitrary to fix exactly how much extra weight the welfare of the worse-off should get. For example, should one unit of benefit to the worst off count ten times as much as an equal benefit to the best off and five times as much as an equal benefit to those of average welfare? Or should the multipliers be twenty and ten, or four and two? The second worry is whether giving greater weight to increases in welfare for some rather than others conflicts with fundamental impartiality (Hooker 2000: 60–2).

This is not the place to delve into the debate between prioritarianism and its critics. The remainder of this article will therefore set these arguments aside.

4. Full Rule-consequentialism

Consequentialists can distinguish three components of their theory: (1) a criterion of what makes acts morally wrong, (2) a claim about what procedure actors should use to make moral decisions, and (3) a claim about the conditions under which moral sanctions (such as blame, guilt, and praise) are appropriate.

If rule consequentialism gives a rule-consequentialist answer to all three questions, we will call it full rule-consequentialism. Full rule-consequentialism thus claims that acts prohibited by the rules selected on the basis of their consequences are wrong; that decisions about how to act should be made by reference to those rules; and that those rules determine the conditions under which moral sanctions should be imposed.

Full rule-consequentialists may hold that there is really just one set of rules governing these three different subjects. Or they may hold that there are several different sets of rules that in some sense correspond to or complement one another.

More important than distinguishing kinds of full rule-consequentialism is the distinction between full and partial rule-consequentialism. Partial rule-consequentialism can take many forms, but the most common form holds that actors should make moral decisions by reference to rules justified by their consequences, while denying that these rules determine moral wrongness. Such partial rule-consequentialists typically accept instead the theory that moral wrongness depends directly on the consequences of the individual act. That theory of wrongness is act-consequentialism.

Distinguishing full from partial rule-consequentialism clarifies the difference between act-consequentialism and rule-consequentialism. Act-consequentialism is best understood as insisting only on the following claim. Act-consequentialist criterion of wrongness: an act is wrong if and only if its consequences are not at least as good as those of some alternative act available to the actor.

When presented with this criterion of moral wrongness, many people naturally assume that the way to decide what to do is to apply it directly. Act-consequentialist procedure for moral decision-making: in any situation, the actor should decide what to do by calculating which act would produce the greatest good.
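As a bare sketch of that decision procedure (illustrative only; the available acts and their expected values are hypothetical inputs), case-by-case calculation looks like this:

def act_consequentialist_choice(options):
    """Pick the act whose expected good is greatest.

    `options` maps each available act (a label) to its expected good.
    This is the case-by-case calculation that the next paragraphs argue
    against as a general decision procedure.
    """
    return max(options, key=options.get)

# Hypothetical situation: expected good of each available act.
print(act_consequentialist_choice({"keep promise": 10.0, "break promise": 10.5}))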

However, consequentialists have almost never endorsed this decision procedure as a general and typical way of making moral decisions (Mill 1861: ch. 2; Sidgwick 1907: 405–6, 413, 489–90; Moore 1903: 162–4; Smart 1956: 346; 1973: 43, 71; Bales 1971: 257–65; Hare 1981; Parfit 1984: 24–9, 31–43; Railton 1984: 140–6, 152–3; Brink 1989: 216–7, 256–62, 274–6; Pettit and Brennan 1986; Pettit 1991, 1994, 1997: 156–61; de Lazari-Radek and Singer 2014: ch. 10). There are compelling consequentialist reasons for thinking that using the act-consequentialist decision procedure would be counterproductive.

First, actors often lack detailed information about the consequences of alternative acts. Second, obtaining such information often costs more than the decision at hand is worth. Third, even with the needed information, actors may miscalculate (especially when their natural biases intrude, when the calculations are complex, or when decisions must be made quickly). Fourth, there are what we might call expectation effects. Imagine a society in which people know that others have a natural bias toward themselves and their loved ones, yet try to make every moral decision by calculating overall good. In such a society, everyone is likely to worry that, whenever others convince themselves that breaking a promise, stealing, lying, or even assault would produce the greatest overall good, they will do it. In such a society, people will not feel they can trust one another.

This fourth consideration is more controversial than the first three. Hodgson (1967), Hospers (1972), and Harsanyi (1982), for example, argued that trust would collapse, while Singer (1972) and Lewis (1972) argued that it would not.

Most philosophers, however, accept that, for these four reasons, using the act-consequentialist decision procedure would not maximize the good. Thus, even philosophers who endorse the act-consequentialist criterion of moral wrongness reject the act-consequentialist procedure for moral decision-making. Instead, they generally endorse a rule-consequentialist procedure for decision-making: at least normally, actors should decide how to act by reference to rules (e.g., "Don't harm innocent others", "Don't steal or destroy others' property", "Don't break promises", "Don't lie", "Pay special attention to the needs of family and friends", "Do good for others generally") whose acceptance has the best consequences.
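A minimal sketch of that rule-based procedure, by contrast with the case-by-case calculation above; the rules and act descriptions are hypothetical placeholders, not anything specified by the entry:

# Rules whose general acceptance is assumed, for illustration, to have the best consequences.
ACCEPTED_RULES = [
    lambda act: not act.get("harms_innocent", False),
    lambda act: not act.get("breaks_promise", False),
    lambda act: not act.get("lies", False),
]

def permitted_by_rules(act: dict) -> bool:
    """Decide by checking the act against the internalized rules,
    rather than by estimating its consequences case by case."""
    return all(rule(act) for rule in ACCEPTED_RULES)

# A hypothetical act that would yield slightly more good but breaks a promise:
print(permitted_by_rules({"breaks_promise": True}))  # False: the procedure rules it out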

Since act-consequentialists generally combine their criterion of wrongness with this decision procedure, act-consequentialists are in fact partial rule-consequentialists. The combination of an act-consequentialist criterion of moral wrongness with a rule-consequentialist procedure for decision-making is often called indirect consequentialism.

The decision procedure endorsed by full rule-consequentialism is, in general, the one whose acceptance by society would be best. The qualification "in general" is needed because there are versions of rule-consequentialism on which rules can be relativized to smaller groups or even to individuals (D.E. Miller 2010; Kahn 2012). Act-consequentialism, by contrast, endorses whatever decision procedure is best for the individual to accept. Thus, according to act-consequentialism, since Jack's and Jill's abilities and circumstances may be very different, the best decision procedures for the two of them may also differ. In practice, however, act-consequentialists generally ignore such differences and endorse the rule-consequentialist decision procedure described above (Hare 1981: chs. 2, 3, 8, 9, 11; Levy 2000).

When act-consequentialists endorse the rule-consequentialist decision procedure just described, they acknowledge that following it does not guarantee that our acts will have the best consequences. For example, following the rule against harming innocent people will sometimes prevent us from doing the act with the best consequences. Likewise, in some cases stealing, breaking promises, and so on would have the best consequences. Nevertheless, in the long run and on the whole, following a decision procedure that generally prohibits such acts is likely to have better consequences than making consequentialist calculations case by case.

Because act-consequentialists generally subscribe to a rule-consequentialist decision procedure, it can be unclear whether a given philosopher should be classified as an act-consequentialist or a rule-consequentialist. G.E. Moore (1903, 1912), for example, is sometimes classified as an act-consequentialist and sometimes as a rule-consequentialist. Like his teacher Henry Sidgwick and many others, Moore combined an act-consequentialist criterion of moral wrongness with a rule-consequentialist procedure for decision-making. He simply went further than most in stressing the dangers of departing from the rule-consequentialist decision procedure (see Shaw 2000).

5. Global Consequentialism

Some scholars have proposed that the purest and most consistent form of consequentialism is the view that absolutely everything, whether acts, rules, motives, sanctions, or anything else, should be evaluated in terms of its consequences. Following Pettit and Smith (2000), this view is known as global consequentialism. Kagan (2000) characterizes it as multi-dimensional direct consequentialism, since everything is evaluated directly by whether its consequences are as good as those of the alternatives.

How does global consequentialism differ from what we have called partial rule-consequentialism? As defined here, partial rule-consequentialism is simply the combination of an act-consequentialist criterion of moral wrongness with a rule-consequentialist procedure for decision-making. So defined, partial rule-consequentialism is uncommitted on the question of when moral sanctions are appropriate.

Some partial rule-consequentialists might hold that actors should be blamed, and should feel guilty, whenever they choose acts that fail to produce the best consequences. A more plausible position for them is that actors should be blamed, and should feel guilty, whenever they choose acts prohibited by the rule-consequentialist decision procedure, whether or not those acts produce the best consequences. Finally, partial rule-consequentialism as defined here is also compatible with the claim that whether actors should be blamed or feel guilty depends neither on whether their acts were wrong nor on whether their acts were required by the favored moral decision procedure, but entirely on whether such blame or guilt would itself produce good consequences. This last is precisely the global consequentialist view of sanctions.

One seemingly decisive objection to global consequentialism is that applying consequentialist criteria to acts, to decision procedures, and to the imposition of sanctions all at once leads to apparent paradoxes (Crisp 1992; Streumer 2003; Lang 2004).

Suppose that, overall and in the long run, the best decision procedure for you to accept would lead you to do x now. But suppose also that, in this case, the act that would actually produce the best consequences is not x but y. Global consequentialism then requires you to use the best decision procedure and at the same time requires you not to do the act that this procedure selects. That seems paradoxical.

Matters get worse when we consider blame and guilt. Suppose you follow the best decision procedure but your act does not have the best consequences. Should you be blamed? Should you feel guilty? Global consequentialism holds that you should be blamed if and only if blaming you would have the best consequences, and that you should feel guilty if and only if your feeling guilty would have the best consequences. Suppose that, for some reason, blaming you for complying with the prescribed decision procedure (and therefore doing x) would have the best consequences. Then global consequentialism requires that you be blamed even though you followed the very decision procedure the theory tells you to follow, which certainly seems paradoxical. Or suppose that, for some reason, blaming you for deliberately choosing the act with the best consequences (y) would itself have the best consequences. Again, it seems paradoxical that the theory demands that you be blamed for deliberately choosing the act it requires.

Thus one problem for global consequentialism is that it opens potential gaps between the acts it requires, the decision procedures actors are to use, and blameworthiness. (For a clear response to this line of attack, see Driver 2014: 175 and de Lazari-Radek and Singer 2014: 315–16.)

But this is not the most commonly discussed problem for global consequentialism. Its most commonly discussed problem concerns its maximizing act-consequentialist criterion of wrongness. According to this maximizing criterion, an act is wrong if and only if it fails to bring about the greatest good. By this criterion, some acts that seem clearly wrong come out permissible, and some acts that seem clearly permissible come out wrong.

For example, consider a murder whose consequences would be slightly better than those of any alternative act. According to the familiar maximizing act-consequentialist criterion of wrongness, this murder is not wrong. Likewise, there are many acts of assault, theft, promise-breaking, and lying that would produce slightly more good than refraining from them would, and yet these acts still seem wrong. The familiar maximizing act-consequentialism must deny this as well.

Or consider someone who keeps some resources for her children or herself rather than using them to help strangers, even though the strangers would benefit slightly more. It seems hard to believe that such behavior is wrong. Yet the maximizing act-consequentialist criterion deems it wrong. Imagine how much self-sacrifice someone of average welfare would have to make in order to satisfy the maximizing act-consequentialist criterion of wrongness. She would have to keep giving until further sacrifice would harm her more than it benefited others. The maximizing act-consequentialist criterion of wrongness is therefore often accused of being unreasonably demanding.

One reply to this objection to maximizing act-consequentialism is that there is a version of act-consequentialism that does not require maximizing the good and thereby avoids the objection. This form of act-consequentialism is known as satisficing consequentialism. For further discussion of it, see the entry on consequentialism.

6. Formulating Full Rule-consequentialism

Rule consequentialism can be formulated in many different ways. For example, rules might be selected on the basis of the good they actually produce or on the basis of the good that can reasonably be expected from them; on the basis of the consequences of compliance with the rules alone or of the broader consequences of their acceptance; and on the basis of the consequences of absolutely everyone's accepting the rules or of their acceptance by something less than "everyone". The next three subsections show that some ways of formulating rule consequentialism are more plausible than others. These questions of formulation also bear on the discussion of classic objections to rule consequentialism in later sections.

6.1 Actual versus Expected Good

As noted earlier, for rule consequentialism to be full, it must give rule-consequentialist answers to three questions. The first is: what makes an act morally wrong? The second is: what procedure should actors use to make moral decisions? The third is: under what conditions are moral sanctions such as blame, guilt, and praise appropriate?

As we have seen, full rule-consequentialists give the same answer to the question about decision procedures as other kinds of consequentialists do. So we focus on the other two questions, about what makes acts wrong and about when sanctions are appropriate. These two questions are more closely connected than is sometimes recognized.

Indeed Mill, one of the founding figures of consequentialism, long ago affirmed the close connection between them: "We do not call anything wrong, unless we mean to imply that a person ought to be punished in some way or other for doing it; if not by law, by the opinion of his fellow-creatures; if not by opinion, by the reproaches of his own conscience." (1861: ch. 5, para. 14)

Let us suppose that Mill's "ought to be punished, if not by others, then at least by the reproaches of one's own conscience" is roughly equivalent to "blameworthy". On this assumption, we can read Mill as holding that moral wrongness and blameworthiness are tightly connected. We can then consider what follows if this Millian connection is mistaken. But first consider what follows if Mill is right.

Consider an argument whose first premise comes from Mill: if an act is wrong, then it is blameworthy. Clearly, if an actor could not foresee that a rule would have suboptimal consequences, it is inappropriate to blame the actor for accepting and following that rule. This gives us a second premise: if an act is blameworthy, then it must have been foreseeable that the rules permitting it would have suboptimal consequences. From these two premises we may conclude that if an act is wrong, then it must have been foreseeable that the rules permitting it would have suboptimal consequences. Of course, the actual consequences of accepting a set of rules need not be the same as the foreseeable consequences. So if full rule-consequentialism holds that an act is wrong if and only if the rules permitting it have foreseeably suboptimal consequences, it cannot also hold that an act is wrong if and only if the rules permitting it have actually suboptimal consequences.
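The structure of the argument is a simple chain of conditionals. Writing W(a) for "act a is wrong", B(a) for "act a is blameworthy", and F(a) for "it was foreseeable that the rules permitting a have suboptimal consequences" (abbreviations introduced here only for illustration):

\[
W(a) \rightarrow B(a), \qquad B(a) \rightarrow F(a), \qquad \text{therefore } W(a) \rightarrow F(a).
\]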

Now suppose instead that the connection between wrongness and blameworthiness is much looser than Mill claims (see Sensen 1996). That is, suppose the criterion of wrongness may differ considerably from the criterion of blameworthiness. In that case, we could hold the following view. Actual-consequence rule-consequentialist criterion of wrongness: an act is morally wrong if and only if it is prohibited by rules whose acceptance would actually produce the greatest good.

Expected-consequence rule-consequentialist criterion of blameworthiness: an act is blameworthy if and only if it is prohibited by rules whose acceptance would produce the greatest expected good.

Here is how the expected good of a set of rules is calculated. The acceptance of a set of rules has various possible outcomes. Assuming we can assign a value (positive or negative) to each possible outcome, we multiply the value of each possible outcome by the probability of that outcome, and then sum all these products; the resulting figure is the expected good of that set of rules.

Note that this is not to say that just any estimate of the probabilities, however wild, can be used in the calculation. Rather, expected good should be calculated by multiplying reasonable, defensible probabilities by the values (positive or negative) of the possible outcomes.
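In symbols, if accepting a code of rules R could lead to outcomes with values v_1, ..., v_n, and the reasonable probability of the i-th outcome is p_i, then the expected good of R is the probability-weighted sum (the notation EG is introduced here only as shorthand):

\[
EG(R) = \sum_{i=1}^{n} p_i \, v_i .
\]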

One might well be skeptical that such calculations are often possible. Even when they are, they will usually be impressionistic and imprecise. Nevertheless, it is reasonable to think that we can make informed judgments about the probable consequences of alternative rules and then choose on the basis of those judgments. By contrast, we often have no way of knowing which rules would actually produce the best consequences. This makes the expected-consequence rule-consequentialist criterion of blameworthiness attractive.

Return now to the proposal that the criterion of blameworthiness is expected-consequence rule-consequentialist while the criterion of moral wrongness is actual-consequence rule-consequentialist. This proposal rejects Mill's move of tying wrongness to blameworthiness. There is a strong objection to it: if moral wrongness is detached from blameworthiness, what role and importance does moral wrongness retain?

Thus, to preserve the evident role and importance of moral wrongness, those attracted to the expected-consequence rule-consequentialist criterion of blameworthiness are likely to adopt a simple version of the expected-consequence rule-consequentialist criterion of moral wrongness: an act is morally wrong if and only if it is prohibited by rules whose acceptance would produce the greatest expected good.

Indeed, once the difference between the good actually produced and the good reasonably expected is appreciated, full rule-consequentialists are most likely to adopt expected-value formulations of the criterion of moral wrongness, the criterion of blameworthiness, and the decision procedure alike.

What if, as far as we can tell, no single set of rules has greater expected value than all its rivals? We need to modify the expected-value criterion to allow for this possibility. Elaborated version of the expected-consequence rule-consequentialist criterion of moral wrongness: an act is morally wrong if and only if it is prohibited by rules whose acceptance would produce the greatest expected good, or, where more than one set of rules is equally best in terms of expected good, by the set of such rules closest to conventional morality.

The rationale for using closeness to conventional morality to break ties between sets of rules with equal expected value is the observation that social change often has unintended negative consequences. Moreover, the greater the difference between new rules and the rules conventionally accepted, the greater the scope for unintended consequences. Therefore, when we judge that two sets of rules have equally high expected value, we should choose the one closer to existing morality. (For the case in which two sets of rules have equally high expected value and seem equally close to conventional morality, see Hooker 2000: 115; for a more nuanced view, see Hooker 2008: 83–4.)
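A minimal sketch of that selection procedure, with expected values and a "distance from conventional morality" measure supplied as hypothetical inputs:

def select_code(candidates):
    """Pick the code of rules with the greatest expected good;
    break ties by closeness to conventional morality.

    `candidates` is a list of (name, expected_good, distance_from_convention) tuples.
    """
    best_value = max(ev for _, ev, _ in candidates)
    tied = [c for c in candidates if c[1] == best_value]
    return min(tied, key=lambda c: c[2])[0]

codes = [
    ("code A", 100.0, 3.0),  # hypothetical numbers
    ("code B", 100.0, 1.0),  # same expected good, but closer to existing morality
    ("code C", 95.0, 0.0),
]
print(select_code(codes))  # "code B"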

By implication, we should change the status quo only if the change has greater expected value than keeping the status quo. Rule consequentialism clearly has the capacity to call for change, but it does not favor change for its own sake. The formulation of rule consequentialism undoubtedly needs some way of dealing with ties in expected good. The rest of this article, however, will set this complication aside.

6.2 Compliance and Acceptance

The formulation of rule consequentialism faces other important questions. One is whether it should be framed in terms of the consequences of compliance with rules or of acceptance of rules. Admittedly, one of the most important things about accepting rules is that acceptance leads to compliance. And in their early formulations, rule consequentialists did explicitly refer to compliance. They said, for example, that an act is wrong if and only if it is prohibited by rules compliance with which would maximize the good (or expected good) produced. (See Austin 1832; Brandt 1959; M. Singer 1955, 1961.)

However, accepting a rule can have consequences beyond compliance with it. As Kagan (2000: 139) puts it, rules, once internalized, can have effects on outcomes independently of their effects on behavior: for example, merely thinking about a set of rules can reassure people and thus contribute to happiness. (For more on what we might call the "beyond-compliance" consequences of rules, see Sidgwick 1907: 405–6, 413; Lyons 1965: 140; Williams 1973: 119–20, 122, 129–30; Adams 1976, esp. 470; Scanlon 1998: 203–4; Kagan 1998: 227–34.)

These consequences of accepting rules should certainly be included in the assessment of the good resulting from alternative rules. This is achieved by formulating rule consequentialism in terms of the consequences of the acceptance of rules. Indeed, considerations of reassurance and incentive have played a large role in the development of rule consequentialism (Harsanyi 1977; 1982: 56–61; 1993: 116–18; Brandt 1979: 271–77; 1988: 346ff. [1992: 142ff.]; 1996: 126, 144; Johnson 1991, especially chs. 3, 4, and 9).

Just as we needed to move from the consequences of compliance with rules to the broader consequences of acceptance of rules, we need to go a step further: it is not enough to focus on the consequences of rules' being accepted while ignoring the "transition" costs of getting them accepted in the first place, which of course also matter (Brandt 1963: section 4; 1967 [1992: 126]; 1983: 98; 1988: 346–47, 349–50 [1992: 140–43, 144–47]; 1996: 126–28, 145, 148, 152, 223).

For example, suppose there is a fairly simple and relatively undemanding code of rules A whose acceptance has an expected good of n, and suppose another, more complex and demanding code B would produce an expected good of n + 5. If we consider only the expected good of these two codes once they are accepted, code B wins.

Now, however, add the costs of getting these two codes accepted. Suppose that, because code A is fairly simple and relatively undemanding, the cost of getting it accepted is −1, while, because code B is more complex and demanding, the cost of getting it accepted is −7. If we take these acceptance costs into account when comparing the two codes, the expected value of code A is n − 1 and that of code B is n + 5 − 7 = n − 2. That is, once the costs of getting the rules accepted are counted, code A wins.
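The comparison can be written out directly; the values +5, −1, and −7 are the numbers stipulated in the example above, and the baseline n is arbitrary:

def net_expected_value(expected_good_once_accepted: float, internalization_cost: float) -> float:
    """Expected good of a code once accepted, minus the cost of getting it accepted."""
    return expected_good_once_accepted - internalization_cost

n = 100.0  # an arbitrary baseline, for illustration only
code_a = net_expected_value(n, 1.0)      # n - 1
code_b = net_expected_value(n + 5, 7.0)  # n - 2
print(code_a > code_b)  # True: code A wins once internalization costs are counted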

As noted, we call these costs of getting a code accepted "transition costs". Obviously, a transition must be from one arrangement to another. The arrangement transitioned to is, of course, acceptance of whichever proposed code is being evaluated. But where does the imagined transition begin?

One answer is that the transition should begin from whatever moral code a society happens to accept already. This seems a natural answer. But it is a bad one, because rule consequentialism should not, in assessing the costs of a proposed code, count the costs of getting people to give up rules they have already internalized. There are two reasons for this.

The most important reason is that the moral ideas that figure, directly or indirectly, in the evaluation of rules should come from rule consequentialism itself, not from some other moral theory. Suppose, for example, that in a certain society women have been taught to obey men. When rule consequentialists evaluate proposed non-sexist rules for such a society, should they count the costs of getting people to abandon the internalized sexist rules in order to accept the new non-sexist ones? Since the sexist rules are unjustified, the fact that they happen to have been accepted should not affect rule consequentialism's evaluation.

Another reason to reject that answer is that it would support an unattractive relativism. Existing moral beliefs differ widely from one society to another, so if evaluating proposed rules required counting the costs of transition for people who have already accepted other rules, the costs of transition to the very same rules could differ from society to society. For example, the cost of moving to non-racist rules is much higher where racist rules are entrenched than where people start out without them. The desirable way to formulate rule consequentialism is one that would assign, for example, Michigan in the 1960s and Mississippi in the 1960s the same set of rules.

The way to do this is to frame acceptance in terms of each "new generation". On this approach, we imagine teaching the rules to children who have not yet received any moral education, and we compare the "teaching costs" of the different alternative codes. We may imagine that children begin with natural (non-moral) inclinations to favor themselves and a few others. We should also assume that learning each rule carries a corresponding cognitive cost.

These realistic assumptions have important implications. One is that the cost-benefit analysis of rules will favor simpler rather than more complicated codes. There may of course be benefits to having more rules, or more complicated ones, but once teaching costs are counted there is a limit to how much complexity in the rules has greater expected value than simpler guidelines. Another implication concerns rules about making sacrifices to help others. Children start out focused on their own gratification, so getting them to internalize an impartial requirement to make repeated large sacrifices for others would be extremely costly. Of course, internalization of such a rule would also bring huge benefits, mostly to others. Would the benefits outweigh the costs?

At least since Sidgwick (1907: 434), many utilitarians have taken it for granted that human nature allows only one of two conditions: (1) intense concern for some people and much weaker concern for the rest, or (2) impartial but only mild concern for everyone. In other words, on this picture of human nature, human beings cannot have strong and impartial concern for everyone in the world. If this picture is right, then success in making people completely impartial would come at the huge cost of leaving them only mildly concerned about anyone.

Even if this picture of human nature is incorrect, that is, even if people could be made completely impartial without draining their warmth and passion, the cost of getting people to care about everyone else as much as they care about themselves would be prohibitive. Somewhere on the continuum from complete partiality to complete impartiality, there is a point beyond which pushing everyone toward greater impartiality costs more than it is worth.

6.3 Complete Acceptance versus Incomplete Acceptance

Rule consequentialists who count only the costs of internalization by a new generation do so in a realist spirit, and they are more realistic still if they do not assume that internalization of the rules must extend to every last person. After all, in the real world there will always be some people who end up with mistaken views about what is morally permissible, and some who will not accept any morality at all (psychopaths). Rule consequentialism needs rules for dealing with such people.

Such rules are mainly rules of punishment. From the rule-consequentialist point of view, the main purpose of punishment is to deter certain acts; there is also a need to lock up those who are not deterred but are very dangerous. Rule consequentialists may also admit that a further purpose of punishment is to satisfy the raw desire for revenge felt by victims and their family and friends. Finally, rules of punishment also have expressive and reinforcing effects.

Nevertheless, rule consequentialism can be formulated in ways that make rules of punishment hard to account for. One such formulation is: an act is morally wrong if and only if it is forbidden by rules whose acceptance by absolutely everyone would have the greatest expected good.

Suppose every adult fully accepted, for example, rules prohibiting physical attacks on innocent people, theft, promise-breaking, and lying. Rules of punishment would then presumably be unnecessary, or nearly so, and society would gain little or nothing from them. But since incorporating each additional rule carries its own costs, including any rule of punishment would have costs. Since this would be a cost without a benefit, the above form of rule consequentialism would not include any rules of punishment.

What we need is a rule consequentialism that includes rules for dealing with people who do not accept the right rules, including those who cannot be reformed. In other words, rule consequentialism should be formulated in terms of a society containing people who do not fully accept the right rules, and even some who will accept no moral rules at all. One formulation that achieves this is: an act is morally wrong if and only if it is forbidden by rules whose acceptance by the overwhelming majority of each new generation would produce the greatest expected good.

Note that rule consequentialism neither endorses nor condones the failure of those outside the overwhelming majority to accept the rules; on the contrary, it holds that such people are morally mistaken. Indeed, the point of formulating rule consequentialism in this way is precisely to make room for rules that punish them.

Of course, the above formulation has a problem: "overwhelming majority" is extremely imprecise. And if a precise percentage is chosen, say 90%, there will obviously be an element of arbitrariness (why not 89% or 91%?). However, we can argue that some figure in a certain range is a reasonable compromise between two pressures: on the one hand, the chosen percentage should be close to 100%, to secure the ideal of moral rules for acceptance by the whole of human society; on the other hand, it must be far enough from 100% to leave room for rules of punishment. Given the need to balance these considerations, 90% seems to fall within the reasonable range. (For a dissenting discussion, see Ridge 2006; for a reply to Ridge, see Hooker and Fletcher 2008. The issue is discussed further in H. Smith 2010; Tobia 2013; Portmore 2015.)
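As a purely illustrative toy model of the trade-off just described (nothing in the entry specifies these numbers or this functional form), one can compare a code with and without punishment rules at different assumed internalization rates: at 100% acceptance the punishment rules only add teaching costs, while below 100% their deterrence benefits can outweigh those costs.

def expected_good(acceptance_rate: float, include_punishment_rules: bool) -> float:
    """Toy model: benefit from those who internalize the code, harm from those who do not,
    and, if punishment rules are included, a teaching cost plus deterrence of non-accepters."""
    benefit_per_accepter = 10.0       # arbitrary illustrative numbers
    harm_per_non_accepter = 30.0
    teaching_cost_of_punishment_rules = 2.0
    deterrence_factor = 0.8           # fraction of that harm prevented by punishment rules

    non_accepters = 1.0 - acceptance_rate
    good = benefit_per_accepter * acceptance_rate - harm_per_non_accepter * non_accepters
    if include_punishment_rules:
        good += harm_per_non_accepter * non_accepters * deterrence_factor
        good -= teaching_cost_of_punishment_rules
    return good

# With universal acceptance, punishment rules are a pure cost;
# with 90% acceptance, their deterrence benefit outweighs the teaching cost.
print(expected_good(1.0, True) < expected_good(1.0, False))   # True
print(expected_good(0.9, True) > expected_good(0.9, False))   # True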

7. Three Ways of Arguing for Rule-consequentialism

We have seen that rule consequentialism evaluates rules by the expected value of their acceptance by the overwhelming majority. What rules would such an approach endorse? It would endorse rules prohibiting physical attacks on innocent people, the damaging or taking of others' property, promise-breaking, and lying; rules requiring people to pay special attention to the needs of their family and friends; and, more generally, rules requiring people to be willing to help others with their (morally permissible) projects. Why? The rough answer is that a society in which such rules are widely internalized and accepted would be better than one in which they are not.

That rule consequentialism endorses such rules makes it attractive, since these rules seem intuitively right. However, other moral theories endorse them too. The most obvious is a familiar moral pluralism which holds that these intuitively attractive rules constitute the most basic level of morality, that is, that there is no deeper moral principle underlying and unifying them. Call this view Rossian pluralism (in honor of its champion W.D. Ross (1930, 1939)).

Rule consequentialism may agree with Rossian pluralism in endorsing rules prohibiting attacks on the innocent, stealing, and promise-breaking, and requiring various loyalties and doing good for others in general. But rule consequentialism goes beyond Rossian pluralism in supplying an underlying unifying principle that provides an impartial justification for these rules. Other moral theories also attempt this, for example some forms of Kantianism (Audi 2001, 2004) and contractualism (Scanlon 1998; Parfit 2011; Levy 2013). In any case, the first way of arguing for rule consequentialism is to argue that it supplies a principle that provides an impartial justification of the intuitively plausible moral rules, and that no rival theory does this as well (Urmson 1953; Brandt 1967; Hospers 1972; Hooker 2000). (Attacks on this argument for rule consequentialism include Stratton-Lake 1997; Thomas 2000; D.E. Miller 2000; Montague 2000; Arneson 2005; Moore 2007; Hills 2010; Levy 2014.)

This first way of arguing for rule consequentialism can be seen as resting on the idea that a moral theory is justified to the extent that it increases the coherence of our beliefs (Rawls 1951, 1971: 19–21, 46–51; DePaul 1987; Ebertz 1993; Sayre-McCord 1986, 1996) [see the entry on coherentist theories of epistemic justification]. But the approach can also be seen as moderately foundationalist, since it builds moral theory on a set of beliefs (about various moral rules) that, though not taken to be infallible, are accorded independent credibility (Audi 1996, 2004; Crisp 2000) [see the entry on foundationalist theories of epistemic justification]. It is true that cohering with our moral convictions does not make a theory true, for our moral convictions may themselves be mistaken. But the more a moral theory conflicts with our moral convictions, the harder it is for us to find that theory justified.

The second way of arguing for rule consequentialism is very different. It begins by endorsing a consequentialist way of evaluating things and then argues that indirect evaluation of acts (for example, by focusing on the consequences of public acceptance of rules) produces better consequences than evaluating acts directly by their own consequences (Austin 1832; Brandt 1963, 1979; Harsanyi 1982: 58–60; 1993; Riley 2000). After all, the main point of morally evaluating acts is to guide decisions about how to act; so if a way of evaluating acts is likely to lead to bad decisions, or more generally to bad consequences, then, in consequentialist eyes, that way of evaluating acts is to that extent bad.

As we saw earlier, all consequentialists now agree that calculating the expected value of each individual act is, for the most part, a poor procedure for moral decision-making. It is agreed that agents should normally decide how to act by reference to rules such as "don't attack others", "don't steal", "don't break promises", "give special weight to the needs of family and friends", and "help others in general", rules that rule consequentialists endorse. However, many consequentialists hold that this is not enough to show that full rule consequentialism is the best form of consequentialism. As long as we distinguish between the best procedure for making moral decisions, on the one hand, and the criterion of moral rightness, on the other, all consequentialists can admit that our decision procedure needs rules. But those consequentialists who are not rule consequentialists deny that rules play any role in the criterion of the moral rightness of acts. Thus these consequentialists reject full rule consequentialism as defined in this article.

Whether this objection to the second argument for rule consequentialism succeeds depends on the legitimacy of distinguishing between the appropriate procedure for moral decision-making and the criterion of moral rightness. This issue remains controversial (Hooker 2010; de Lazari-Radek and Singer 2014: Chapter 10).

However, the second argument for rule consequentialism faces a quite different objection. This objection targets the argument's first step, the commitment to consequentialist evaluation, on the ground that this commitment itself needs defending. Why assume that the consequentialist way of evaluating things is the only reasonable one?

One might reply that the consequentialist commitment is justified by the obvious intuitive appeal of the idea of promoting the impartial good. But this will not do, since there are other ideas, besides the consequentialist one, that also have obvious intuitive appeal: for example, "act on the set of rules that no one could reasonably reject". Indeed, no moral idea at this level of abstraction is so superior to its rivals that it needs no further defense. And the way we defend moral theories must not beg the question, that is, it must not start by assuming an answer to the very question of which moral theory is most plausible.

A third way of arguing for rule consequentialism is contractualist (Harsanyi 1953, 1955, 1982, 1993; Brandt 1979, 1988, 1996; Scanlon 1982, 1998; Parfit 2011; Levy 2013). Suppose we can specify reasonable conditions under which everyone would (or at least has good reason to) choose the same set of rules. Intuitively, such hypothetical agreement would legitimate that set of rules. Now, if the set of rules that would be chosen is the set whose internalization would maximize expected good, then contractualism leads to rule consequentialism.

Views differ about what the reasonable conditions for the choice of moral rules are. One view is that the conditions impose an imagined "veil of ignorance", behind which no one knows any particular facts about himself or herself, thereby ensuring impartiality (Harsanyi 1953, 1955). Another view is that people should choose a moral code on the basis of (a) full information about everything affected by the rules, (b) normal concerns (both self-interested and altruistic), and (c) roughly equal bargaining power (Brandt 1979; cf. Gert 1998). Parfit (2011) proposes that we seek rules that everyone has (personal or impartial) reasons to choose, or to will that everyone accepts. If there are always sufficient impartial reasons, even when these conflict with personal reasons, then everyone has sufficient reason to will the universal acceptance of whatever rules would, impartially considered, have the best consequences if generally accepted. Similarly, Levy (2013) argues that no one could reasonably reject a set of rules that imposes a smaller overall burden on anyone than any alternative set would impose on someone. Such arguments aim to show that contractualism and rule consequentialism are extensionally equivalent. (For assessments of the success of Parfit's contractualist route to rule consequentialism, see J. Ross 2009; Nebel 2012; Hooker 2014.)


8. Must Rule-consequentialism Be Guilty of Collapse, Incoherence, or Rule-worship?

Rule consequentialism did not receive a clear articulation until Urmson (1953) and Brandt (1959). The theory then received considerable attention up through the 1970s. Since then, however, most moral philosophers have held that rule consequentialism is fatally undermined by a dilemma: either it collapses into practical equivalence with the simpler act consequentialism, or it is internally incoherent.

The argument that rule consequentialism and act consequentialism are equivalent in practice runs as follows. Consider a rule that rule consequentialists claim to endorse, such as "don't steal". Now suppose an agent is in a situation where stealing would produce more good than not stealing. If rule consequentialism selects rules by their expected good, it seems forced to admit that compliance with a rule such as "don't steal, unless ... or ... or ..." is better than compliance with the simpler rule "don't steal". And this generalizes: whenever compliance with a rule would fail to produce the greatest expected good, rule consequentialism seems forced to favor an amended rule, so as not to miss out on the greatest expected good in the situation at hand. But if rule consequentialism works in this way, then what it ultimately requires in practice is exactly the same behavior that act consequentialism requires.

If the acts that rule consequentialism ultimately requires were exactly those that act consequentialism requires, rule consequentialism would indeed be in serious trouble, since it is the more complicated of the two theories. Hence the objection: if the same practical conclusions can be reached more economically with the simpler act consequentialism, what is the point of adopting rule consequentialism and its endless amendments?

In fact, rule consequentialists have a good reply to this objection. The reply turns on the idea that, in the best version of rule consequentialism, codes of rules are ranked not by the expected good of compliance with them but by the expected good of their acceptance. For example, if the rule against stealing were hedged with exception after exception, those exceptions would provide too much temptation for agents to convince themselves that one of the exception clauses applies when in fact stealing would benefit only themselves. And the presence of such temptations would also weaken other people's confidence that their property will not be stolen. The same holds for most other moral rules: building in too many exceptions undermines people's confidence that others will behave in the expected ways, for example keep their promises and refrain from stealing.

In addition, when comparing alternative rules, we must also count the costs of getting them internalized by a new generation. Obviously, if the rules the new generation must learn are too numerous or too complicated, the costs of internalization will be prohibitive. So the code of rules endorsed by rule consequentialism will be neither too numerous nor too complicated.

Similarly, the costs of internalizing a rule will be high if the rule requires agents to make large sacrifices for people with whom they have no special connection. Of course, compliance with such demanding rules would also produce considerable benefits, mostly for others. But the costs of internalizing such rules must be weighed against the benefits of compliance with them. At some point of increasing demandingness, the internalization costs outweigh the benefits of compliance. So a careful cost/benefit analysis of internalizing very demanding rules will reject rules that are excessively demanding.

The rules rule consequentialism endorses (i.e., a code of rules that is not too numerous, not too complicated, and not too demanding) can sometimes lead people to perform acts that do not produce the greatest expected value. For example, compliance with the simpler rule "don't steal" will sometimes produce less good than compliance with a more complicated rule such as "don't steal, unless ... or ... or ...". Again, the rules allow people to give some degree of priority to their own projects, even when sacrificing themselves to help others would produce more good. The rule-consequentialist position is that, although the general acceptance of simpler and less demanding rules will sometimes lead to acts with suboptimal results, in the long run its expected value is greater than that of general acceptance of the most complicated and demanding rules. Since rule consequentialism can thus tell people to follow simpler and less demanding rules even when following them does not maximize expected good, rule consequentialism does not collapse into practical equivalence with act consequentialism.

But just insofar as rule consequentialism avoids this collapse, it is accused of incoherence, for holding that an act can be morally permissible, or even required, although it does not maximize expected good. Behind this accusation must be the assumption that rule consequentialism contains an overriding commitment to maximize the good. Holding that overriding commitment while at the same time opposing acts that the commitment requires would indeed be incoherent. (For recent developments of this line of thought, see Arneson 2005; Card 2007; Wall 2009.)

In order to assess this incoherence objection to rule consequentialism, we need to be clearer about where the overriding commitment to maximize the good is supposed to reside. Is it supposed to be part of the moral psychology of rule-consequentialist agents? Or is it supposed to be part of the theory of rule consequentialism itself?

Rule-consequentialist agents need not take maximizing the good as their ultimate and overriding moral aim. Instead, they can have the following moral psychology: their fundamental moral motivation is to do what is impartially defensible; they believe that acting on impartially justifiable rules is impartially defensible; and they believe that rule consequentialism gives, on balance, the best account of what makes rules impartially justifiable. Agents with this moral psychology, that is, with this combination of moral motivation and beliefs, will be morally motivated to act on rule-consequentialist rules. Such a moral psychology is certainly possible; and, for an agent who has it, there is nothing incoherent in following rules even when doing so does not maximize expected good.

We have just seen that rule-consequentialist agents need not have an overriding commitment to maximizing expected good; but must their theory contain one? The answer is no. The essence of rule consequentialism is the conjunction of two claims: (1) rules are to be selected solely on the basis of their consequences, and (2) these rules determine which acts are morally wrong. That is the whole of the theory; in particular, there is no third component containing or implying an overriding commitment to maximize expected good.

In the absence of an overriding commitment to maximize expected good, there is nothing incoherent about rule consequentialism's prohibiting certain acts even when they would maximize expected good. Likewise, there is nothing incoherent when the acts rule consequentialism requires conflict with maximizing the good. Once we recognize that neither rule-consequentialist agents nor the theory itself must contain an overriding commitment to maximize expected good, the best-known objection to rule consequentialism collapses.

Whether this reply to the incoherence objection succeeds depends in part on which argument for rule consequentialism is in play. If the argument begins with a commitment to consequentialist evaluation, the reply seems less effective, since starting from such a commitment comes very close to starting from an overriding commitment to maximize expected good. If, however, the argument is that rule consequentialism, better than any rival moral theory, provides an impartial justification of intuitively plausible moral rules, then the reply to the incoherence objection is on much firmer ground. (For more on this, see Hooker 2005, 2007.)

Another long-standing objection to rule consequentialism is that rule consequentialists must be "rule worshippers": people who insist on following the rules even when they know that doing so will have disastrous consequences.

One reply to this objection is that rule consequentialism endorses a rule requiring agents to prevent disasters, even when doing so requires breaking other rules (Brandt 1992: 87–8, 150–1, 156–7). Admittedly, what counts as a disaster is a complicated question. Consider, for instance, what counts as a disaster when the "prevent disaster" rule conflicts with the rule against lying, what counts when it conflicts with the rule against stealing, and what counts when it conflicts with the rule against harming the innocent. Rule consequentialism may need to be more explicit about these matters. At the very least, though, it is wrong to accuse rule consequentialism of leading to disaster.

A serious confusion must be avoided here: the idea that rule consequentialism's inclusion of a "prevent disaster" rule means that it collapses into practical equivalence with maximizing act consequentialism. Maximizing act consequentialism holds that we should lie, steal, or harm the innocent whenever doing so has greater expected good than not doing so. A rule requiring people to prevent disasters licenses no such inference; the "prevent disaster" rule kicks in only when the difference in expected value is very large.


9. Other Objections to Rule-consequentialism

From the mid-1960s to the mid-1990s, most philosophers believed that rule consequentialism was sunk by the objections discussed in the previous section, and so for more than three decades most philosophers saw little need for further objections to the theory. If, however, rule consequentialism has convincing replies to all three of the objections just discussed, a good question is whether there are other, fatal objections to it.

One such objection attempts to show that the theory's principle for selecting rules sometimes selects intuitively unacceptable rules. For example, Tom Carson (1991) argues that rule consequentialism turns out to be extremely demanding in the real world. Mulgan (2001, especially chapter 3) agrees with Carson and goes on to argue that, even if rule consequentialism's implications for the actual world are acceptable, it has counterintuitive implications in possible worlds. If Mulgan is right about this, the worry is how rule consequentialism can still explain why certain requirements are appropriate in the actual world. Debate about these issues continues (Hooker 2003; Lawlor 2004; Woollard 2015: 181–205). Mulgan himself has since become a developer of the theory rather than a critic (Mulgan 2006, 2009, and 2015).

A related objection to rule consequentialism is that its justification of familiar moral rules rests on contingent empirical facts, such as facts about actual human nature and about how many people need help and how many are able to give it. The objection is that the justification of familiar moral rules should be necessary, not contingent (McNaughton and Rawling 1998; Gaut 1999, 2002; Montague 2000; Suikkanen 2008). A close relative of this objection is the charge that rule consequentialism's justification of familiar rules relies on empirical claims that are in fact false (Arneson 2005; Portmore 2009). Here too, debate continues over whether the theory does rely on false empirical claims (see Woollard 2015, especially pp. 185–86, 203–205).

If rule consequentialism counts the costs of getting rules internalized by a new generation, difficult questions arise about the mechanism by which the new rules are taught. The point of invoking a "new generation" is to avoid counting the costs of getting an existing generation, which has already internalized other moral rules and prejudices, to un-internalize them. But can we give a coherent account of those who teach the new generation? If the teachers are imagined to have already internalized the ideal code, how did that internalization come about? If the teachers are imagined not to have internalized the ideal code, then there will be costs arising from conflicts between the ideal code and the code they have internalized. (This objection has been pressed by John Andrews, Robert Ehman, and Andrew Moore; see Levy 2000.) A related objection is that rule consequentialism has yet to be formulated in a way that can plausibly deal with conflicts between rules (Eggleston 2007).

Another line of attack on rule consequentialism targets the idea that whatever determines moral rightness and wrongness must be suitable for public acceptance. Arneson (2005) and de Lazari-Radek and Singer (2014) argue that, contrary to the rule-consequentialist view, there is a potential gap between what is suitable for public acceptance and what really determines moral rightness. Rule consequentialism's insistence that what determines moral rightness must be suitable for public acceptance is, however, held by some to be not only something the theory shares with Kantian ethics but also one of its attractions (Hooker 2000, 2010; Hill 2005; Parfit 2011; Cureton 2015).


