Please participate in my highly scientifical, completely experimental scientific experiment:
You are in a situation where there is an opportunity to [horrible thing X]. You become aware, through a brief moment of unexplained omniscience, that you can be absolutely certain that if you do not do [horrible thing X], [even more horrible thing Y] will happen. Will you do [horrible thing X]?
What? You won't? What kind of a moral imbecile are you? Can't you see that according to my highly experimental scientifical and completely realistic situation that I have constructed that the only reasonable thing to do is [horrible thing X]? Boy, what a bunch of idiots you non-scientifical layschmucks are.
Of course, there is now an entire field devoted to exploring similar experiments to the one above with literature that, according to the New York Times book review (sorry, it's not online) "makes the Talmud look like Cliff's notes." The most famous is the trolley problem, and the related "fat man trolley problem." In the simple trolley problem, a trolley is going to kill five people, and you can save them by switching the track, which only has one person on it. Stuff like this happens to me all the time.
Most people are willing to flip the switch. By comparison, in the related "fat man" problem, the only way to save the five people is to throw a single fat man in front of the trolley. He's fat, see, because you know that your skinny self won't stop the trolley, so there's no "out" through self-sacrificial altruism. I assume people who are already fat aren't allowed to participate in the experiment.
A lot of really intelligent people are shocked that experimental subjects who are willing to flip the switch to the single person are not willing to throw the fat guy. Or if not shocked, then patronizingly dismissive, like Peter Watts. Not to specifically pick on Watts; he's a crank in the good sense and we can't have enough of those.
I'd argue that the reason people can't seem to make this seemingly completely logical choice is that our sense of morality has evolved not to deal with completely logical theoretical situations like this, but rather to deal with moral choices we actually might have to make in real life.
The fat-man trolley problem assumes that you are somehow able to calculate, in the split second you have to make the decision, that the mass of the fat man will be enough to stop the trolley but that your own mass won't. It also assumes that you'll be able to overcome the fat man's resistance, that the trolley going off the tracks won't cause even more death and destruction, and a thousand other things that no one could be expected to know. A thousand factors that you must somehow instantly weigh to determine that there is no alternative to violating what the Torah and Bible call the Sixth Commandment, a rule that's pretty universally noted (though not, unfortunately, so universally followed) in every religious and moral system anyone's ever come up with.
But Jim, you foolish literalist you, can't you see that this is a completely theoretical concept? We know that no one is actually going to have to make that choice. It's not like it's going to change what people do in the real world.
Recently enough, however, a completely different exercise in theoretical morality might well have had a similar effect. Alan Dershowitz, the famous legal mind who among other things defended OJ, argued not long after 9/11 that there might be circumstances under which there would be a justification for "torture warrants." His reasoning is based on the "ticking time-bomb terrorist case," in which we have a terrorist and are somehow absolutely certain that there is a ticking nuclear bomb that's going to blow up New York and that this terrorist knows where it is, but we somehow also don't know where it is ourselves. It doesn't take a lot of thought to see the similarities to the trolley problem. Tom Tomorrow had a brilliant cartoon, which I somehow can't find, in which he imagined a possibility that a small baby swallowed the instructions to disable a ticking time bomb. "Foolish shortsighted Congress! They never created a legal mechanism to cut open a baby!" wails a policeman.
The point is, if you think about it enough, you can come up with a theoretical situation under which any horrible action could theoretically be justified. Like imagine if, for some reason, um, if you didn't torture an innocent little girl to death, like, a hundred nuclear bombs would go off all over the world killing half of Earth's population! Would you torture the little girl? Would you? Would you? Come on, it's a completely logical theoretical situation!
A few years after Dershowitz's completely theoretical bit of reasoning, we discovered what was going on at Abu Ghraib and Guantanamo, which of course was nothing like what Dershowitz had in mind. But the moral reasoning here was like a game of telephone, in which all of the reasonable, logical parts were lost as it passed from person to person. Once the possibility of torture became open to discussion, the barriers against it washed away like sand in the tides. Yes, I know it's not that simple. Our intelligence agencies have been torturing people for a long time, and Alan Dershowitz didn't open the gate himself, but rather was responding to the gates being opened by other people. The point is, theoretical arguments can have real-world outcomes.
But what about the fat-man trolley argument? We don't have an epidemic of people shoving obese people in front of trains, right? So what's the problem?
But then, for a lot of people, the Iraq war itself was a fat-man trolley type problem. Yes, people argued, a lot of people will die in the war (though in most cases, nobody they personally knew). But as a result we'll get rid of the monster Saddam, thereby saving many more people. That argument doesn't hold a lot of water now, because according to most estimates the number of people killed since the invasion has surpassed even the worst estimates of Saddam's monstrosities. And if you're being a strict utilitarian, you can't argue that it's different because we ourselves didn't kill all those people. The trolley went off the rails and straight through a pedestrian mall, just as anyone who has read much about the history of wars should have anticipated it might. That's why a lot of people would have preferred that we stick with a special-case variant of the Sixth Commandment, one agreed to by all the members of the UN after WWII, including us, to the effect of "you don't just go and attack another country that hasn't attacked you."
I'm not strictly opposed to utilitarianism. The most popular alternative view, which is to simply see morality as a bunch of rules to be followed because they're written in a really old book somewhere (or the UN charter, for that matter), has an equal if not greater number of shortcomings. But let's watch out for being really stupid by being too smart.
Showing posts with label morality.
Sunday, February 3, 2008
Tuesday, August 14, 2007
Watts: Do unto others before they do unto you?
Peter Watts argued a couple of days ago that people are, at heart, selfish bastards, who are only good when they think it will benefit them. It's an old argument, going back to the Socratic dialogues, and no doubt much further back than that.
There is no doubt that peer pressure is an important influence on our morality. But I just can't believe it's the only influence, or else the world would be even a much more horrible place than it is. Watts seems to think that outside of kin selection and immediate reciprocal altruism, there is no direct benefit for moral actions.
This is an important issue for those of us who don't think Big Daddy God is always watching over our shoulders, ready to throw us into Hell for doing bad things. Why, after all, should we do good things if it doesn't directly benefit us? In spite of the fact that there's not a shred of evidence that atheists and agnostics are any less moral than believers, we are constantly accused of immorality because people just can't see any reason why we should be good.
I'd argue that the best evidence that there is a personal benefit for moral behavior even when we're not aware of immediate payback is the fact that we have the urge to do it at all. Imagine a time that you had the opportunity to do something that you knew was immoral, and you also were almost sure you could get away with it. Whether you did it or not, you probably still had a sense of guilt that urged you in the direction of "moral" action. This urge to be moral is undeniable. Of course it's not as strong as our urge to eat or have sex, which is why it loses out so often. But the fact that it is there at all implies there is some evolutionary benefit for it.
But, you might point out, you didn't know for sure that no one would know. If you could ever know absolutely for certain, you might feel nothing. But then, I'd point out, that can't happen. The theoretical example of the opportunity to do bad and be absolutely certain no one will ever know remains completely theoretical. You can never know for sure if down the road your immoral actions will reflect back on you negatively.
Imagine you were in a casino, playing roulette. The roulette wheel has purple and green numbers (I avoid red and black because those colors have implicit moral associations). If the colors are split fifty-fifty, you have no more reason to pick one than the other. But if there were fifty-one purple numbers and forty-nine green, your only sensible bet would be to go purple every time, except in extraordinary circumstances, like if someone will kill you if you bet purple. In fact, even if purple had only a 0.0001 percent advantage, it would be to your benefit to go purple every time.
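The roulette logic is just an expected-value calculation, and it can be sketched in a few lines of Python. The even-money payout and the particular probabilities here are hypothetical, chosen only to illustrate the point:

```python
def expected_value(p_win, payout=1.0, stake=1.0):
    """Expected profit per bet when your color comes up with probability p_win,
    assuming a hypothetical even-money payout."""
    return p_win * payout - (1 - p_win) * stake

# A 51/49 wheel: betting purple every time has positive expectation,
# betting green every time has negative expectation.
print(expected_value(0.51))   # positive (about +0.02 per unit staked)
print(expected_value(0.49))   # negative

# Even a vanishingly small edge still favors purple in the long run.
print(expected_value(0.500001) > 0)
```

The takeaway matches the analogy: with any nonzero edge, the optimal strategy is to bet the favored color every single time, not to mix it up in proportion to the odds.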
Consequently, since we can never be certain whether a given moral action will eventually reflect positively back on us through the rest of our species, we would have evolved an urge to act morally every time, subject of course to stronger urges that might overrule it. In fact it seems obvious that this biological urge must have come before any religious or societal rules, or those rules wouldn't all be so similar.
And if people who developed the adaptation of this moral urge have survived in spite of the obvious short-term benefits of immoral behavior, it seems clear that a moral lifestyle is statistically the most likely to result in a happy life. Of course this says nothing about what a moral lifestyle actually is, but let's face it, the important stuff is pretty obvious. The "final six" commandments, the part that doesn't involve man's relationship to God, sum up most of it.
Of course this hypothesis might be difficult to state in a falsifiable way. But then I'm not sure how falsifiable Watts' "we're all selfish bastards" hypothesis is either.
Interestingly enough, though, the connection between belief and morality is quite experimentally testable. As far as I know, most religions tend to believe that there is a direct link between belief in a (their) deity and moral behavior. In the Abrahamic religions, this would be the belief that there is a direct correlation between the "first four" and "final six" commandments, or the "man to God" ("Thou shalt have no other God before me", etc.) and "man to man" (shalt not kill, bear false witness, etc.) commandments.
Anyone who's thinking straight should be able to come up with an experiment that tests this correlation. For example, controlling for race, income, etc., you could take people who are in state penitentiaries for violations of the final six (killers, thieves, perpetrators of fraud) and a control group of people with no known offenses, then have them fill out a questionnaire about what religious beliefs they were raised with. (That's better than asking them what they believe now, since lots of people get born again in prison.) If there were a correlation between the first four and the final six, you should find a lot more believers among the non-offenders. Somehow I imagine that's unlikely.
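The comparison at the heart of that experiment is a standard two-proportion test, which can be sketched in Python. The counts below are entirely made up for illustration; only real survey data could say anything about the actual correlation:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z statistic for the difference between two sample proportions,
    using the pooled standard error."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical survey: 420 of 500 offenders vs. 430 of 500 non-offenders
# report being raised as believers. A |z| well under 1.96 would mean no
# statistically significant difference at the usual 5 percent level.
z = two_proportion_z(420, 500, 430, 500)
print(round(z, 2))
```

With these invented numbers the difference is nowhere near significant, which is the pattern the post predicts; a real study would of course also need the controls for race, income, and so on mentioned above.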
This experiment would no doubt piss a lot of people off, and be quite contentious. To really test this, you'd need to approach it a lot of different ways. But if what I suspect turned out to be correct, that nonbelievers are no more or less moral than believers, it would be quite handy to throw in the face of the next person who implicitly accused me of being inherently immoral.