Years ago, when I used to study philosophy, I came across Joshua Greene's website. On the site I found his PhD thesis, which I read. It is probably the best meta-ethics writing I've come across. He seems to have removed it from the site ("available by request"), but I still have it: Greene, J. D. (2002). The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It. Anyway, this thesis is apparently what turned into the book. The book is clearly written for a mass market, so it has only a few notes and is very light on statistics. I think it is basically sound. The later chapters were somewhat annoying to read due to excessive repetition and unclear language. I suppose he added material to appeal more to laymen and confused people.
In the introduction, he is so nice as to lay out the book:
In part 1 (“Moral Problems”), we’ll distinguish between the two major kinds of moral problems. The first kind is more basic. It’s the problem of Me versus Us: selfishness versus concern for others. This is the problem that our moral brains were designed to solve. The second kind of moral problem is distinctively modern. It’s Us versus Them: our interests and values versus theirs. This is the Tragedy of Commonsense Morality, illustrated by this book’s first organizing metaphor, the Parable of the New Pastures. (Of course, Us versus Them is a very old problem. But historically it’s been a tactical problem rather than a moral one.) This is the larger problem behind the moral controversies that divide us. In part 1, we’ll see how the moral machinery in our brains solves the first problem (chapter 2) and creates the second problem (chapter 3).
In part 2 (“Morality Fast and Slow”), we’ll dig deeper into the moral brain and introduce this book’s second organizing metaphor: The moral brain is like a dual-mode camera with both automatic settings (such as “portrait” or “landscape”) and a manual mode. Automatic settings are efficient but inflexible. Manual mode is flexible but inefficient. The moral brain’s automatic settings are the moral emotions we’ll meet in part 1, the gut-level instincts that enable cooperation within personal relationships and small groups. Manual mode, in contrast, is a general capacity for practical reasoning that can be used to solve moral problems, as well as other practical problems. In part 2, we’ll see how moral thinking is shaped by both emotion and reason (chapter 4) and how this “dual-process” morality reflects the general structure of the human mind (chapter 5).
In part 3, we’ll introduce our third and final organizing metaphor: Common Currency. Here we’ll begin our search for a metamorality, a global moral philosophy that can adjudicate among competing tribal moralities, just as a tribe’s morality adjudicates among the competing interests of its members. A metamorality’s job is to make trade-offs among competing tribal values, and making trade-offs requires a common currency, a unified system for weighing values. In chapter 6, we’ll introduce a candidate metamorality, a solution to the Tragedy of Commonsense Morality. In chapter 7, we’ll consider other ways of establishing a common currency, and find them lacking. In chapter 8, we’ll take a closer look at the metamorality introduced in chapter 6, a philosophy known (rather unfortunately) as utilitarianism. We’ll see how utilitarianism is built out of values and reasoning processes that are universally accessible and, thus, how it gives us the common currency that we need.*
Over the years, philosophers have made some intuitively compelling arguments against utilitarianism. In part 4 (“Moral Convictions”), we’ll reconsider these arguments in light of our new understanding of moral cognition. We’ll see how utilitarianism becomes more attractive the better we understand our dual-process moral brains (chapters 9 and 10).
Finally, in part 5 (“Moral Solutions”), we return to the new pastures and the real-world moral problems that motivate this book. Having defended utilitarianism against its critics, it’s time to apply it, and to give it a better name. A more apt name for utilitarianism is deep pragmatism (chapter 11). Utilitarianism is pragmatic in the good and familiar sense: flexible, realistic, and open to compromise. But it’s also a deep philosophy, not just about expediency. Deep pragmatism is about making principled compromises. It’s about resolving our differences by appeal to shared values: common currency.
So, TL;DR: morality is an evolved mechanism to facilitate cooperation. It does this well, but not always. Typical moral disagreements are confused because they rely on rights-talk, which is fundamentally useless, even counter-productive, for resolving conflicts. Utilitarianism (i.e., cost-benefit analysis in moral language) is the only game in town, so even if it is not technically true, it is still the most useful approach to moralizing.