Saturday, March 12, 2011

Science And Morality: You Can’t Derive 'Ought' From 'Is'

A little while back, I picked up on the debate between Sam Harris and Sean Carroll on Science and Morality. This is a subject Ursula has also written about for us. Today Sean Carroll guest blogs for 13.7 and offers some more thoughts on the issue. Sean is a well-known theoretical physicist, blogger and author of the most excellent From Eternity to Here: The Quest for the Ultimate Theory of Time. We've also invited Sam Harris to reply to this post.
—Adam Frank
Thanks to Adam and the others for a chance to wander away from my normal stomping grounds and post here at 13.7.

Back in March, a TED talk by Sam Harris sparked a debate about whether you could derive morality from science. I posted about it, Harris responded, and I posted a brief followup.  But my contributions were more or less dashed off, and I never did give a careful explanation of why I didn't think it was possible.
So, what do you say, once more into the breach?
I'm going to give the basic argument first, then litter the bottom of the post with various disclaimers and elaborations.
I want to start with what I think is a non-controversial statement about what science is.  Namely, science deals with empirical reality — with what happens in the world, i.e. what "is."  Two scientific theories may disagree in some way — "the observable universe began in a hot, dense state about 14 billion years ago" vs. "the universe has always existed at more or less the present temperature and density."  Whenever that happens, we can always imagine some sort of experiment or observation that would let us decide which one is right.  The observation might be difficult or even impossible to carry out, but we can always imagine what it would entail.  (Statements about the contents of the Great Library of Alexandria are perfectly empirical, even if we can't actually go back in time to look at them.) If you have a dispute that cannot, in principle, be decided with observable facts about the world, your dispute is not one of science.
With that in mind, let's think about morality. What would it mean to have a science of morality?  I think it would have to look something like this:
Human beings seek to maximize something we choose to call "well-being" or "utility" or "happiness" or "flourishing" or something else.  The amount of well-being in a single person is a function of what is happening in that person's brain, or at least in their body as a whole.
That function can in principle be empirically measured. The total amount of well-being is a function of what happens in all of the human brains in the world, which again can in principle be measured.  The job of morality is to specify what that function is, measure it, and derive conditions in the world under which it is maximized.
All this talk of maximizing functions isn't meant to lampoon the project of grounding morality on science; it's simply a way of taking it seriously.  Casting morality as a maximization problem might seem overly restrictive at first glance, but the procedure can potentially account for a wide variety of approaches.  A libertarian might want to maximize a feeling of personal freedom, while a traditional utilitarian might want to maximize some version of happiness.
The point is simply that the goal of morality should be to create certain conditions that are, in principle, directly measurable by empirical means.  (If that's not the point, it's not science.)
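For concreteness, here is one way to write the program down; the notation is a sketch of my own, not anything Harris commits to in detail.

```latex
% A sketch of the "science of morality" program, in my own notation.
% x ranges over achievable states of the world; b_i(x) is the
% brain/body state of person i in world x.
\begin{align*}
  w_i(x) &= f\bigl(b_i(x)\bigr)
    && \text{individual well-being, empirically measurable}\\
  W(x)   &= \sum_{i=1}^{N} w_i(x)
    && \text{total well-being (one aggregation rule among many)}\\
  x^{*}  &= \operatorname*{arg\,max}_{x}\, W(x)
    && \text{the morally best achievable world}
\end{align*}
```

The objections below each target a different ingredient of this schema: the function f, the maximization framing itself, and the combination over individuals.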
Nevertheless, I want to argue that this program is simply not possible.  I'm not saying it would be difficult — I'm saying it's impossible in principle.  Morality is not part of science, however much we would like it to be.  There are a number of arguments one could advance in support of this claim, but I'll stick to three.
1.  There's no single definition of well-being.
People disagree about what really constitutes "well-being" (or whatever it is you think they should be maximizing).  This is so perfectly obvious that it's hard to know how to argue for it.  Anyone who wants to ground morality on a scientific basis has to jump through some hoops to explain this disagreement away.
First, there are people who aren't that interested in universal well-being at all. There are serial killers, and sociopaths, and racial supremacists. We don't need to go to extremes, but the extremes certainly exist.
The natural response is to simply separate out such people; "we need not worry about them," in Harris's formulation.  Surely all right-thinking people agree on the primacy of well-being.  But how do we draw the line between right-thinkers and the rest?  Where precisely, in terms of measurable quantities, does the line fall, and why there?  On which side of it do we place people who believe that it's right to torture prisoners for the greater good, or who cherish the rituals of fraternity hazing?  In particular, what experiment can we imagine doing that tells us where to draw the line?
More importantly, it's equally obvious that even right-thinking people don't really agree about well-being, or how to maximize it.  Here, the response is apparently that most people are simply confused (which is, on the face of it, perfectly plausible).  Deep down they all want the same thing, but they misunderstand how to get there.  Hippies who believe in giving peace a chance and stern parents who believe in corporal punishment for their kids all want to maximize human flourishing; they simply haven't been given the proper scientific resources for attaining that goal.
While I'm happy to admit that people are morally confused, I see no evidence whatsoever that they all ultimately want the same thing.  The position doesn't even seem coherent.  Is it a priori necessary that people ultimately have the same idea about human well-being, or is it a contingent truth about actual human beings?  Can we not even imagine people with fundamentally incompatible views of the good?  (I think I can.)  And if we can, what is the reason for the cosmic accident that we all happen to agree?  And if that happy cosmic accident exists, it's still merely an empirical fact; by itself, the existence of universal agreement on what is good doesn't necessarily imply that it is good.  We could all be mistaken, after all.
In the real world, right-thinking people have a lot of overlap in how they think of well-being.  But the overlap isn't exact, nor is the lack of agreement wholly a matter of misunderstanding.  When two people have different views about what constitutes real well-being, there is no experiment we can imagine doing that would prove one of them to be wrong.  It doesn't mean that moral conversation is impossible, just that it's not science.
2.  It's not self-evident that maximizing well-being, however defined, is the proper goal of morality. 
Maximizing a hypothetical well-being function is an effective way of thinking about many possible approaches to morality.  But not every possible approach.  In particular, it's a manifestly consequentialist idea — what matters is the outcome, in terms of particular mental states of conscious beings.  There are certainly non-consequentialist ways of approaching morality; in deontological theories, the moral good inheres in actions themselves, not in their ultimate consequences.  Now, you may think that you have good arguments in favor of consequentialism.  But are those truly empirical arguments?  You're going to get bored of me asking this, but:  what is the experiment I could do that would distinguish which was true, consequentialism or deontological ethics?
The emphasis on the mental states of conscious beings, while seemingly natural, opens up many cans of worms that moral philosophers have tussled with for centuries.  Imagine that we are able to quantify precisely some particular mental state that corresponds to a high level of well-being: the exact configuration of neuronal activity in which someone is healthy, in love, and enjoying a hot-fudge sundae.  Clearly achieving such a state is a moral good.  Now imagine that we achieve it by drugging a person so that they are unconscious, and then manipulating their central nervous system at a neuron-by-neuron level, until they share exactly the mental state of the conscious person in those conditions.  Is that an equal moral good to the conditions in which they actually are healthy and in love and so on?  If we make everyone happy by means of drugs or hypnosis or direct electronic stimulation of their pleasure centers, have we achieved moral perfection?  If not, then clearly our definition of "well-being" is not simply a function of conscious mental states.  And in that case, what is it?
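The underlying logic is worth making explicit; this is just a schematic restatement, with m standing for a complete conscious mental state.

```latex
% If well-being W depends only on the mental state m, then how that
% state was produced cannot matter:
W = f(m), \quad m_{\mathrm{lived}} = m_{\mathrm{manufactured}}
\;\Longrightarrow\;
f(m_{\mathrm{lived}}) = f(m_{\mathrm{manufactured}}).
% To resist the conclusion, you must give up the premise that
% well-being is a function of conscious mental states alone.
```

Accepting the premise while rejecting the conclusion is not an option; that is the force of the thought experiment.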
3.  There's no simple way to aggregate well-being over different individuals.
The big problems of morality, to state the obvious, come about because the interests of different individuals come into conflict. Even if we somehow agreed perfectly on what constituted the well-being of a single individual — or, more properly, even if we somehow "objectively measured" well-being, whatever that is supposed to mean — it would generically be the case that no achievable configuration of the world provided perfect happiness for everyone.  People will typically have to sacrifice for the good of others by paying taxes, if nothing else.
So how are we to decide how to balance one person's well-being against another's?  To do this scientifically, we need to be able to make sense of statements like "this person's well-being is precisely 0.762 times the well-being of that person."  What is that supposed to mean?  Do we measure well-being on a linear scale, or is it logarithmic?  Do we simply add up the well-beings of every individual person, or do we take the average?  And would that be the arithmetic mean, or the geometric mean?  Do more individuals with equal well-being each mean greater well-being overall?  Who counts as an individual? Do embryos?  What about dolphins?  Artificially intelligent robots?
These may sound like silly questions, but they're necessary ones if we're supposed to take morality-as-science seriously.  The easy questions of morality are easy, at least among groups of people who start from similar moral grounds, but it's the hard ones that matter.
This isn't a matter of principle vs. practice; these questions don't have single correct answers, even in principle.  If there is no way in principle to calculate precisely how much well-being one person should be expected to sacrifice for the greater well-being of the community, then what you're doing isn't science. And if you do come up with an algorithm, and I come up with a slightly different one, what's the experiment we're going to do to decide which of our aggregate well-being functions correctly describes the world?  That's the real question for attempts to found morality on science, but it's an utterly rhetorical one; there are no such experiments.
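To make the underdetermination concrete, here is a toy comparison.  Every number is invented purely for illustration, and the three rules are just the obvious candidates mentioned above; nothing here is an empirical measurement.

```python
from statistics import geometric_mean

# Invented well-being scores for two hypothetical worlds.  The point
# is only that perfectly reasonable aggregation rules disagree about
# which world is "better."
world_a = [5.0, 5.0, 5.0, 5.0, 5.0, 5.0]  # six people, equal moderate well-being
world_b = [0.5, 12.0, 14.5]               # three people, unequal well-being

rules = {
    "total":          sum,
    "average":        lambda ws: sum(ws) / len(ws),
    "geometric mean": geometric_mean,     # heavily penalizes the worst-off
}

for name, rule in rules.items():
    a, b = rule(world_a), rule(world_b)
    verdict = "A" if a > b else "B"
    print(f"{name:>14}: A = {a:5.2f}, B = {b:5.2f}  ->  world {verdict} wins")

# total:          A = 30.00, B = 27.00  ->  world A wins
# average:        A =  5.00, B =  9.00  ->  world B wins
# geometric mean: A =  5.00, B =  4.43  ->  world A wins
```

Each rule is internally consistent, and each encodes a genuinely different moral stance toward population size and inequality; no observation of the world tells you which one to adopt.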
Those are my personal reasons for thinking that you can't derive ought from is.  The perceptive reader will notice that it's really just one reason over and over again — there is no way to answer moral questions by doing experiments, even in principle.
Now to the disclaimers. They're especially necessary because I suspect there's no practical difference between the way that people on either side of this debate actually think about morality. The disagreement is all about deep philosophical foundations. Indeed, as I said in my first post, the whole debate is somewhat distressing, as we could be engaged in an interesting and fruitful discussion about how scientific methods could help us with our moral judgments, if we hadn't been distracted by the misguided attempt to found moral judgments on science.  It's a subtle distinction, but this is a subtle game.
First: It would be wonderful if it were true.  I'm not opposed to founding morality on science as a matter of personal preference. I mean, how awesome would that be?  Opening up an entirely new area of scientific endeavor in the cause of making the world a better place:  I'd be all for that.  Of course, that's one reason to be especially skeptical of the idea; we should always subject those claims that we want to be true to the highest standards of scrutiny. In this case, I think it falls far short.
Second: Science will play a crucial role in understanding morality.  The reality is that many of us do share some broad-brush ideas about what constitutes the good, and how to go about achieving it.  The idea that we need to think hard about what that means, and in particular how it relates to the extraordinarily promising field of neuroscience, is absolutely correct.  But it's a role, not a foundation.  Those of us who deny that you can derive "ought" from "is" aren't anti-science, we just want to take science seriously, and not bend its definition beyond all recognition.
Third: Morality is still possible. Some of the motivation for trying to ground morality on science seems to be the old canard about moral relativism: "If moral judgments aren't objective, you can't condemn Hitler or the Taliban!"
Ironically, this is something of a holdover from a pre-scientific worldview, when religion was typically used as a basis for morality. The idea is that a moral judgment simply doesn't exist unless it's somehow grounded in something out there, either in the natural world or a supernatural world.  But that's simply not right.  In the real world, we have moral feelings, and we try to make sense of them.  They might not be "true" or "false" in the sense that scientific theories are true or false, but we have them.
If there's someone who doesn't share them (and there is!), we can't convince them that they are wrong by doing an experiment. But we can talk to them and try to find points of agreement and consensus, and act accordingly.  Moral relativism doesn't imply moral quietism.  And even if it did, that wouldn't affect whether or not it was true.
And finally: Pointing out that people disagree about morality is not analogous to pointing out that some people are radical epistemic skeptics who reject ordinary science.  That's mixing levels of description.  It is true that the tools of science cannot be used to change the mind of a committed solipsist who believes they are a brain in a vat, manipulated by an evil demon; yet those of us who accept the presuppositions of empirical science are able to make progress.  But here we are concerned only with people who have already bought into all the epistemic assumptions of reality-based science — they still disagree about morality.  That's the problem.  If the project of deriving ought from is were realistic, disagreements about morality would be precisely analogous to disagreements about the state of the universe fourteen billion years ago. There would be things we could imagine observing about the universe that would enable us to decide which position was right.  But as far as morality is concerned, there aren't.
All this debate is going to seem enormously boring to many people, especially as the ultimate pragmatic difference seems to be found entirely in people's internal justifications for the moral stances they end up defending, rather than what those stances actually are.  Hopefully those people haven't read nearly this far.  To the rest of us, it's a crucially important issue; justifications matter!  But at least we can agree that the discussion is well worth having.  And it's sure to continue.
This piece first appeared on Cosmic Variance.
