Law in Contemporary Society
"He who makes a beast of himself gets rid of the pain of being a man." - Samuel Johnson


      • Mike, just thought I would go through and post some initial reactions, more for myself than anything. It's mostly comments that I am thinking about, and in some areas I have changed some wording around. I plan on working through it here and there, in pieces.

-- PeterWade - 25 Apr 2010

Why Should I Care About People I Will Never Meet?

Introduction

Sitting in class this past semester, I felt that most of Professor Moglen's reasoning rests on the assumption that people are obligated to do the "right" thing. This idea of the "good" often leads individuals to sacrifice their own best interests for the benefit of another. If my actions will negatively affect someone thousands of miles away, someone I will never meet or have any chance of coming into contact with, why should I care about that person?

For example, suppose I am given two choices: 1) make millions of dollars, with the result that one unknown foreign person is killed; or 2) make no money at all.

Why should I choose the option that is obviously worse for me? What innate or social influence compels me to act "good" instead of in my own best interest?

I am not sure I agree with this. Instead of reasoning from the assumption that people are obligated to do the right thing, I think Eben is often trying to show us that the choice to "do the right thing" (or a right thing) is, in fact, NOT a choice to sacrifice our own self-interest. Much of the class seems to focus on the idea that the decision is not a binary one. We do not have to choose between financial success and seeking some form of social justice; we can have both. Not only that, but the choice to seek justice and do the right thing is actually in our best interest; we just don't know it yet (in large part because of the con that is law school).

This paper explores some justifications for altruism, that is, for acting for the benefit of others without an immediate reward; it is an attempt to separate pure evolutionary selfishness from the morality that often defines human behavior.

Kin Selection

The most basic justification for altruism is kin selection. In an evolutionary sense, my most fundamental goal is to spread my genes to as many offspring as possible. By quantifying the amount of genetic material that will be passed on, I could conceivably work for the benefit of nephews and cousins, since they carry at least some of my genetic material. However, I am looking at altruism on a truly global scale. Kin selection does not reach past individuals with whom I share genetic material. I must look further.

This is a little confusing. Is it supposed to mean that to some extent I am concerned with my brother's well-being because if he lives longer/flourishes, the likelihood that he will spread some of the same genes I want to spread increases? Also, how far does this feeling extend? It isn't a conscious factor in my decision making, so if it operates on a subconscious/biological level, wouldn't it ultimately extend to everyone via the instinct towards the propagation of the species (in other words, ensuring the spread of human genetic material)?

Reciprocal Altruism

A slightly more far-reaching justification is "reciprocal altruism." Reciprocal altruism, an offshoot of game theory, suggests that we act altruistically in the hope of inspiring repayment in some form at some point in the future. For example, I give someone a loan in the hope that, if I need help in the future, my good deed will make them more likely to help me than if I had not given the loan. However, this justification of altruism reaches only as far as those I will reasonably come into contact with; it is not nearly as global or broad as the justification I am looking for.

Reciprocal altruism, as you point out when you say it is an offshoot of game theory, seems like more of a practical, strategic consideration. Behavior motivated by expected future gains isn't divorced from self-interest; it is simply an investment. I don't think this fits well into the discussion of why I would choose to help someone else at the expense of personal gain (i.e., it does not "separate pure evolutionary selfishness from the morality that often defines human behavior"). It would, however, be relevant to a discussion of why altruistic behavior is not mutually exclusive with personal gain.

The Guilt Ratio

Both of these theories justify altruism to some extent, but only towards individuals I will likely come into contact with. While it is only a theory, I think the answer to my question lies in the emotions of empathy and guilt, as well as in personal awareness and self-perception.

Assume for simplicity that every decision an individual makes is binary in character: I can choose to do something, or I can choose not to do something. When making this decision I take into account all known factors, whether conscious or subconscious, and choose the option with the "best" result as compared to the other. How do I measure which is "best?"

Formula

I think (and this is only opinion) that there is a "guilt ratio" that weighs on our decisions. When choosing between two options, one of the variables that affects my decision is the size of the following:

[(cost-to-others) / (benefit-to-myself)] x (percent responsibility)
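The arithmetic of the ratio can be sketched in a few lines of Python. This is only an illustration, not a claim that these quantities are actually measurable: the function name and the numbers are invented here as placeholders for things (a stranger's life, a fortune) that the paper itself treats as hard to compare, and the "rationalized" case assumes a responsibility discount like the one discussed below.

```python
def guilt_ratio(cost_to_others, benefit_to_self, pct_responsibility):
    """Illustrative guilt ratio: higher values weigh against a choice."""
    if benefit_to_self == 0:
        # Cost to others with no personal gain at all: guilt is unbounded.
        return float("inf")
    return (cost_to_others / benefit_to_self) * pct_responsibility

# Placeholder numbers for the earlier example: millions of dollars
# at the cost of one stranger's life, with full vs. rationalized
# ("someone else would have done it") responsibility.
full = guilt_ratio(cost_to_others=1000.0, benefit_to_self=100.0,
                   pct_responsibility=1.0)
rationalized = guilt_ratio(cost_to_others=1000.0, benefit_to_self=100.0,
                           pct_responsibility=0.1)
print(full, rationalized)  # rationalization shrinks the ratio tenfold
```

The point of the sketch is only structural: the same cost and benefit produce a much smaller ratio once the "percent responsibility" factor is discounted, which is exactly the move the ego-defense mechanisms described later make possible.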

Does this cost benefit analysis actually separate "pure evolutionary selfishness" from morality? It seems to say that if I am aware of the suffering of someone else, and thus my neglecting to help them equates to some amount of responsibility in that suffering, I will more likely ignore it/neglect it (perhaps through denial, cognitive dissonance, etc. as you mention below) if there is more benefit to me.

The more moral decisions are those with low guilt ratios. The cost-to-others component depends on the ability to feel empathy, to understand how another human feels, while the percent-responsibility component is a product of personal knowledge of circumstances.

Continuing on my comment above, does this mean that there is a correlation between a decision's moral value and its potential for personal gain? How would this benefit to myself be measured? On one level, I see how the financial compensation "promised" by less moral pursuits is one part, but if the goal is to justify altruism, should we consider to what extent choosing the "more moral" thing is actually in our best interest?

Defining the Variables

Because as a human I have an established sense of self, when I make a decision with a high guilt ratio I am aware of the "immorality" of that decision and of my responsibility for the result. Each individual is different, and some are able to disregard the guilt ratio more easily than others.

Note that if I am not aware of the externalities of my decisions, that ignorance decreases the "percent responsibility" factor of the ratio. Eben's class is geared to make us think more deeply about what it is a lawyer does. In most cases, we are helping the fortunate, not the victims. The reason ignorance is bliss is that we do not need to live with the guilt of knowing the consequences of our actions; to make the "good" decision, we have a responsibility to question what it is we are actually doing as lawyers. This will help lead us toward the more "moral" decision.

I should also note that individuals can resort to cognitive-dissonance-based ego defense mechanisms to minimize high guilt ratios: delusion, denial, repression, rationalization, etc. For example, given the two choices in the earlier example, I could rationalize killing a stranger by assuming someone else would take the money if I did not, decreasing the "percent responsibility" factor.

Deviation from "Morality"

This paper lacks a determination of what the "moral" decision actually is. Morality is not universal; it is specific to each individual and the result of many different factors. Our law is, to some extent, a reflection of our "morality," defining reasonable expectations of behavior. But if the law does not punish a crime such as assault, does that mean it is not wrong? The main relevance of defining morality for the guilt ratio is that the "cost to others" and "benefit to self" factors depend somewhat on deviations from the "moral" decision.

The Choice

When I began writing this paper, I assumed that an individual had to "choose" how much the guilt ratio affected their decision-making process. I thought that self-perception and moral judgment would force a person to choose between being a "good" or a "bad" person. However, such personality components may be completely biological and outside the realm of free choice. Or the good/bad distinction could be cultural, a reflection of our society's values.

Whatever the answer, I like to think that as humans we have a choice. I cannot lie to myself about what type of person I am. And while every person has his or her price (would you turn down $1,000,000,000 to kill a foreign stranger if you knew there were no consequences?), I hope I will always choose to be the "good" person.

Since everyone has their price, is this the hope that no one will ever offer you that much money?

I like the overall points here about learning to seek out knowledge regarding our "percent responsibility" factor. I agree that a major part of our work this year has been pulling back the curtain hung by various institutions in our lives, a curtain that keeps us from seeing the injustices that should motivate our career (and life) choices.

But I am not sure that our motivations for such an undertaking are really being addressed here. The premise of the paper begins with the desire to separate altruistic behavior from personal gain, to discover what should make us want to open our eyes to the effects of the choices that we make (and don't make), and why we should devote ourselves to helping others. But by setting this up as a choice between personal (presumably financial) gain and benefit to others, and then incorporating both into a pseudo cost-benefit analysis, the paper seems to define the moral decision as the one that brings a person closest to, or exceeds, their "price point." The decision to help others is not more moral because the denominator of personal gain has increased. Am I a better person because I got paid more to ignore some terrible atrocity that I could have prevented?

And this decision is an illusory one anyway, since we don't have to choose between personal happiness/gain and justice. There can be both.

r8 - 26 Apr 2010 - 19:26:17 - PeterWade