2007-06-01

Altruism, empathy and morality

Recent research shows that when you do something totally unselfish, the same brain regions that respond to food or sex light up. The study, originally published in October 2006 by Jorge Moll and Jordan Grafman, has received a lot of publicity in the press lately. Good summaries can be found in the Washington Post and Science Daily, as well as in some blogs from people interested in this topic.

My personal uneducated take on altruism is that I know it feels good. My brain rewards me when I help someone, when I make someone else happy. The evolutionary benefit of this is a bit harder to pinpoint, but the articles mentioned above have some good theories. Scott Huettel draws the following conclusions from his study:
"Our findings are consistent with a theory that some aspects of altruism arose out of a system for perceiving the intentions and goals of others."
and
"To be altruistic, you need to see that the people you’re helping have goals, and that your actions will have consequences for them."
The studies also show that people with damage to a specific part of the frontal lobe lacked empathy and would solve tricky emotional problems in a cold, "the-end-justifies-the-means" way. This sounds a lot like Asperger's syndrome to me. Nowadays every other kid with a slight communication problem seems to get this diagnosis, but I wouldn't be surprised if the condition has something to do with a slight error in parts of their frontal lobe.

Marc Hauser has done psychological experiments showing that people all over the world process moral questions in the same way, suggesting that moral thinking is intrinsic to the human brain rather than a product of culture. In this sense, morality is comparable to language: both are intrinsic to humans. He suggests that people reach moral conclusions in the same way they construct a sentence without having been trained in linguistics. A quote from this excellent summary of a Nature article:
Thus, the findings confirm the notion that there are at least two neural systems involved in making moral decisions: one in which emotions are involved, and one which performs a cost-benefit analysis. [... ...] It is believed that the emotion-based system for making moral decisions evolved first, perhaps in a situation where small numbers of people lived in kin groups. [Antonio] Damasio says, “A nice way to think about it is that we have this emotional system built in, and over the years culture has worked on it to make it even better”.
A consequence of this kind of study is that we might have to rethink what is immoral and what is not. If morality is an automatic and unconscious process, why are we so quick to disagree about what's right or wrong in a moral dilemma? The Washington Post article says:
U.S. law distinguishes between a physician who removes a feeding tube from a terminally ill patient and a physician who administers a drug to kill the patient. Hauser said the only difference is that the second scenario is more emotionally charged -- and therefore feels like a different moral problem, when it really is not: "In the end, the doctor's intent is to reduce suffering, and that is as true in active as in passive euthanasia, and either way the patient is dead."
The difference sounds clear at first, since administering a lethal drug feels like an active choice. But removing a feeding tube is equally active, and the two situations should definitely be treated equally before the law.

Anyway, I guess we should try to think more about the end results instead of morals. A good comment on this comes from an interesting blog called Atheist's Wager:
"Morality is about doing the right thing when no one is watching."
Let's all do the right thing.
