Wednesday, December 4, 2013

Gun Rights and the Non-Aggression Axiom: A Challenge to Gun-Toting Libertarians

Although libertarians as a whole tend to frown on governmentally enforced gun regulations, gun ownership does imply a set of moral constraints. The most puzzling involve the interface between the “non-aggression axiom” and self-defense: how that “axiom” might relate to the possession and use of various kinds of lethal weapons, and under what circumstances those weapons might be used for personal and/or collective self-defense. In this blog I will argue that the right of self-defense is contextual, and therefore can only be exercised in the context of a real threat (as opposed to an “imagined threat”). I will also question whether the right to use lethal force is limited to “defense of self” or whether it also includes “defense of property,” “defense of the weak,” or other “potential” or “actual” victims. Finally, I’ll argue that the “pre-emptive” use of lethal weapons by individuals and collectives is deeply problematic.

From the outset, it is important to state as clearly as possible exactly what I intend to establish in this blog. First of all, there are two distinct domains that involve normative reasoning about gun rights: legality (legal rights) and morality (moral rights). Here I will be exploring primarily the “moral right” to “own” and/or “use” a lethal weapon. That entails explaining exactly what constitutes a moral right in the classical liberal tradition, and whether there is an unlimited (or limited) right to “own” and/or “use” a lethal weapon. I’ll argue that “moral rights” apply to both individual persons and groups of persons. Moreover, there are many different kinds of “lethal weapons”: sharpened sticks, stones, poisons (and a host of chemical and biological agents), and a variety of “guns.” Thus, the right to own and/or use a lethal weapon also raises the question: “What kind of lethal weapon?” Secondly, moral rights are often classified in terms of their sanction. Many libertarian philosophers argue in terms of “natural rights,” which are sanctioned by various aspects of human nature. The right of self-defense, for example, is most often defended in terms of the natural right to defend not only one’s own life, but also the lives of one’s family, the weak, friends, pets…even one’s non-living property. Are all of these “defenses” equally defensible? Thirdly, I’ll limit my discussion to rights-based arguments within classical liberal social and political philosophy, or libertarianism. Within the classical liberal approach there are several competing formulations; the most significant opposition is between proponents of anarchy and minarchy. Although I’m sensitive to the claims of anarchy, I’ll focus exclusively on minarchy, or the question of whether there are compelling moral arguments that might support gun control within a minarchist regime. In the end, I’ll suggest that a libertarian’s moral right to own and/or use a lethal weapon is contextual, and complicated by our dutiful adherence to the non-aggression axiom. Therefore, at least some alleged instances of self-defense involving the use of lethal weapons are morally problematic.

There are many well-established arguments in support of the individual and collective right of self-defense. For libertarians the most compelling initial arguments were presented by John Locke and Immanuel Kant. Locke based his theory of self-defense on the “principle of self-ownership,” that is to say, on the continuity between personal and property rights. Kant argued that we have a duty to defend human lives because they possess infinite value. While we could delve more deeply into Locke and Kant, for the sake of argument I shall simply assume that the right of self-defense is at least sometimes morally justifiable, and see where the argument might go from there. So what would the right of self-defense look like within a libertarian framework?

All libertarians embrace the “non-aggression axiom,” or the idea that individuals and groups of persons may not employ “lethal force” for any reason other than self-defense. Unfortunately, the concept of “self-defense” is conveniently vague. Does that “right” necessarily include the individual right to defend one’s family, friends, the weak, pets, etc.? Does that right include the right to defend one’s property or the property of family, friends, or the weak? Does that right include the right to protect one’s honor? Do groups of likeminded individuals that congregate in voluntary association have a similar right to defend themselves against lethal aggression? If there is an individual or collective duty or right to defend other individuals and/or groups with lethal weapons, how might that right interface with the non-aggression axiom, the only undisputed pillar of classical liberalism?

First, let’s sketch in the architecture of libertarian rights-based arguments. Rights-based arguments are ONE way of sorting out the relationships between individuals, between individuals and groups, and between groups. Rights-based arguments are basically claims against others. That is to say, a “right” implies a “duty” on the part of others to either do something or not do something in support of that right. Positive rights assert a duty to “do something” on our behalf, while negative rights imply a duty to “not interfere” in someone else’s pursuit of something. Most rights-based libertarians agree that all rights are negative rights and that, therefore, our primary moral duties consist in noninterference. This is NOT to say that individuals may not perform beneficent acts on behalf of others; only that those acts must be non-coerced and purely voluntary. It’s up to us. Therefore, if I have a right to own or use a gun, it’s a negative right. Thus, no one has a duty to provide me with a gun, BUT no one, including government, can morally (or legally) prevent me from owning or using one. Hence, rights-based arguments carry with them an aura of absolutism that allegedly trumps contextual variation.

But context matters. Some lethal weapons are more lethal than others. Hence, under most circumstances a .357 Magnum is more lethal than a sharp stick. Weapons of mass destruction are the most lethal, and designed to kill more than one human at a time. Thus an atomic bomb or a MOAB (Mother of All Bombs) can potentially kill more aggressors at a time than a .357 Magnum; but bombs can also kill more innocent non-aggressors.

Lethal weapons (guns), by definition, are tools deliberately designed to achieve specific ends. In the case of guns, many persons own guns for non-lethal purposes such as selling, collecting, hunting, or target shooting. Let’s set those purposes aside for the time being. Non-lethal weapons (pepper spray) are tools designed to inflict lower levels of harm in order to repel, thwart, or discourage lethal aggression by others. Thus, the first obvious issue libertarians must address is whether the non-aggression axiom necessarily implies that we (at least in some circumstances) have a positive duty to employ non-lethal weapons rather than lethal weapons, and whether we sometimes have a duty to retreat rather than use a lethal weapon against a potential or active aggressor.

The first problem that any libertarian theory of gun rights must confront is what I call the “problem of imaginary risks.” In the final analysis, the right of self-defense implies “risk assessment”; that is, we must interpret whether a given “threat of harm” rises to a risk level high enough to justify a lethal response. For example, suppose three women in dresses ring your doorbell on a Saturday afternoon. Would that justify killing them by shooting through the door? Suppose you read in the morning paper that three women dressed up like Jehovah’s Witnesses had been killing and robbing homeowners in your neighborhood. Or suppose that three black male teenagers were knocking on your door at 3 AM. Or perhaps a person you recognize as an escaped convict (a serial killer) is knocking on that door?

Now, my point here is that if libertarians take the non-aggression axiom seriously, we must be prepared to take at least some risks. That might mean waiting for an overt act of aggression to take place, or at least an act that might be reasonably interpreted as an act of aggression. In short, you cannot be both a coward and a libertarian at the same time. Your right of self-defense with lethal weapons does not include defense against low-probability or imaginary risks.

Let’s also agree that in the case of defending oneself from those “Jehovah’s Witnesses,” our choice of “lethal weapons” may also be limited by the non-aggression axiom. It’s one thing to shoot a perceived aggressor with a .45. But it’s another thing to use an AK-47 in a crowded neighborhood; and yet another to drop an atomic bomb. Obviously, if we take the non-aggression axiom seriously, potential collateral damage must be taken into account; that is, we are not justified in killing unknown (and/or innocent) non-aggressors in self-defense based on imperfect information. (There is also a problem with defending oneself against innocent aggressors.) There is also a cluster of epistemological issues involved in preemptive strikes against non-active aggressors; that is, when a potential aggressor may appear to be threatening but has not yet exhibited or issued a credible threat or actively inflicted a potentially lethal act of aggression. Hence, shooting through your front door at apparently unarmed Jehovah’s Witnesses is clearly unjustified. But how much certainty is required before one may employ lethal weapons in self-defense against any possible aggressor?

If individuals have a limited, contextual right to defend themselves against lethal aggression or threats of lethal aggression, do groups of individuals bound by voluntary association have a similar right of self-defense? Well, anarchist libertarians argue that all governments rely on coercive force for their survival, and therefore they are all a priori illegitimate. Therefore, one might argue that the collective right of non-governmental sub-groups to defend themselves against lethal aggression initiated by government is an essential component of libertarian political philosophy. What is especially interesting here is the question of whether all voluntary associations have an unlimited right to defend themselves from all threatening macro-groups (voluntary associations and/or governments), and, if so, what kinds of lethal weapons micro-groups have a right to own and/or use in self-defense.

Obviously, there is a longstanding political debate over the right of various groups to revolt against dominant governments and/or voluntary associations; therefore, there will be a similar debate over what kinds of defensive weapons these sub-groups might be justified in owning or using. I intimated earlier that individuals probably do not have an unfettered right to own and/or use atomic bombs in self-defense against aggressive individuals because of the problem of collateral damage. But what about the self-defense of private groups such as groups of anarchists or revolutionaries? The problem here is that any legal or moral limit placed on the right to own or use lethal weapons advances the interests of political regimes that are already in power and own offensive weapons. Hence, laws against the private ownership of surface-to-air missiles empower regimes that possess aircraft that might be used against revolutionaries. Of course, the war in Syria has raised the question of whether the U.S. (and other regimes) has a duty to provide revolutionaries with surface-to-air missiles, tanks, and other military weapons of mass destruction. On the other hand, does the non-aggression axiom imply that Syrians (the U.S. and others) exhaust non-lethal alternatives before deploying lethal weapons? Under some circumstances, might the non-aggression axiom imply a duty on the part of Syrians to submit to the Assad regime or migrate to another country? Would that imply a duty on the part of the U.S. to take in those emigrants?
 
In summary, the legal and moral debate among libertarians over the “right to bear arms” obviously lacks both depth and rigor. The complicating factor is the non-aggression axiom, which (in my view) is the only undisputed necessary condition for libertarian thought. On the other hand, in the real world we are often confronted by the possibility that we libertarians may be called upon to deploy lethal weapons in defense of the non-aggression axiom. Therefore, I submit that we need to clarify exactly what separates libertarianism (if anything) from ideal pacifism. Does adherence to the non-aggression axiom undermine self-defense? Does it condemn us to be easily overcome by the very forces we eschew? I’ll start working on that right now.

Sunday, September 1, 2013

I. The Ethics of War

Broadly speaking, there are three prescriptive theories of human warfare: Ideal Pacifism (lethal aggression by nations or other political entities is never morally justified), Political Realism (lethal aggression by nations is justified based solely on utility), and Just War Theory (lethal aggression by nations is sometimes morally justified).

All three theories are deeply problematic. The basic problem with Ideal Pacifism is that, if the use of lethal force is never morally justified, then self-defense (or the defense of the weak) must be limited to non-lethal means, which rarely work against lethal aggressors. Critics argue: "If life is of infinite value, then why is it not worth defending?" In fact, lethal aggressors prefer to wage war against pacifists (especially wealthy ones!). After all, pacifist nations are easy targets for nations willing and able to employ lethal force. The problem with Political Realism is that if every nation on earth went to war whenever its leaders judged that the benefits outweigh the costs, the incidence and lethality of war would increase substantially. And (as we will see below) the basic problem with Just War Theory is that the criteria are so vague and malleable that almost any war or weapon can be justified.

As I write, President Obama is asking Congress to vote on whether to bomb Syria in response to its (apparent) use of chemical weapons against its own citizens. Any argument over U.S. intervention in Syria entails the application of the principles of Jus ad Bellum and Jus in Bello. Let's look at both sets of criteria.

                    Jus ad Bellum: Principles for Engaging in War

C1: There must be a just cause for entering war. The only just cause for entering war is defense against lethal aggression: "self-defense" or "defense of the weak." You cannot go to war in order to secure resources or to punish lethal aggressors.

C2: Only a proper authority can declare war. Generally, this refers to the "legitimate political leader(s) of a nation." One problem here is that during times of civil war, there may be no "proper authority." In the U.S., multiple "authorities" claim the authority to declare war: the Constitution assigns the power to Congress, but in practice presidents have initiated wars and Congress has decided whether or not to pay for them. Since Vietnam, Congress has not been a responsible steward of U.S. resources.

C3: The decision to declare war must be accompanied by the right intention. This means that legitimate leaders must really "intend" to end lethal aggression, and NOT intend to secure resources or punish lethal aggressors. Here it is not always easy to determine "intent," as many leaders lie about their true intentions.

C4: Going to war must be the last resort: all non-lethal means must be exhausted first. The usual non-lethal means include passive resistance such as protests and economic boycotts. The problems here are that "non-lethal means" rarely work, and it's difficult to decide how long to employ them.

C5: The benefits of engaging in war must outweigh the costs. Combatants must determine whether the amount of pillage, plunder, and death required to end a war is worth it. Here the problem is that wars are so complicated that it's impossible to generate accurate cost/benefit analyses. Wars are rife with unanticipated consequences. President Bush predicted that the war in Iraq would last a few months.  

C6: There must be a reasonable chance of success. This means that combatants may not engage in wars of futility. Combatants must decide beforehand what ultimate "success" means within any given context. If a war is hopeless, then it cannot be justified. Before the late 1960s, very few war critics argued that the war in Vietnam was futile. Some scholars still argue that it was winnable, but poorly executed.

                      Jus in Bello: "Principles for Conducting War"

C7: Combatants must exercise discrimination; that is, combatants may only deliberately target enemy combatants and not non-combatants. This usually means that women, children, and residential areas may not be targeted. Killing non-combatants is, however, morally acceptable if they are killed accidentally, as an unintended "side-effect" of targeting legitimate combatants.

C8: Combatants must exercise proportionality in the use of lethal force. They may not use any more lethal force than is necessary to end the war. It is never acceptable to kill all enemy combatants and/or destroy all property.

C9: Combatants may not employ any evil means or war strategies that are inherently immoral. This usually refers to the use of "weapons of mass destruction" or weapons that inflict an extraordinary amount of pain and suffering, such as chemical weapons or biological weapons. Rape, pillage, theft, and torture are generally considered to be a priori immoral; even if they "work."

C10: Combatants must exercise benevolent quarantine when they capture enemy combatants. This generally means that prisoners of war must be adequately fed and clothed and provided adequate health care. Torture of prisoners is never justified.

C11: When a war is over, any violators of the principles of just war must be held responsible. This usually entails war trials, in which the winners take the losers to court, execute war criminals, and impose financial costs on the losers. And of course, powerful nations, whether they win a war or not, are never tried for war crimes.

In summary, most of the world still refers to Just War Theory as the basis for arguing over whether to go "to war" and over what can be done "in war." How would you analyze the justice of the following U.S. wars: the war in Iraq, the war in Afghanistan, and the looming war in Syria? How would you analyze the justice of the following weapons and tactics: atomic bombs, torture, rape, chemical and biological weapons, and armed drones used to kill terrorist leaders?

Monday, June 10, 2013

The Ethics of Dying

Are there "better" and "worse" ways to die? If so, how would Aristotle go about approaching the subject? Let's start with a few distinctions. More often than not the "ethics of dying" is understood in terms of how "survivors treat the dying" rather than how the "dying treat survivors." Although, both are relevant subjects of moral analysis, I'd like to focus primarily on the latter. But first, let's distinguish between how we individually and collectively "face death" (as a matter of fact) and how we ought to "face death." For example, we know that as a matter of fact, we all go through well-known stages when we internally deal with the "process of dying." That's certainly important to know, and it may have some bearing on how we behave in our final days. But that's not what I want to write about right now. So how would Aristotle look at the "ethics of dying?"

First of all, it's a fact that all human beings, at all times and all places, have always asked the same basic existential questions: What am I? Where did I come from? And where am I going? The answer to that last question, "where am I going?", has been obvious for a long time. All living things die; therefore, so do humans! Aristotle and I would argue that ethics is (for the most part) about how our behavior affects others, and therefore there are both social virtues and social vices associated with how our behavior affects those who will remain alive after we're gone. Hence, there are "better" and "worse" ways to die. And of course, Aristotle argued that "good persons" serve as role models for virtuous behavior for survivors. Thus, when we die we are essentially teaching our survivors how to die.

For Aristotle, virtue is bound by context: person, time, place, and degree. The ethics of dying, therefore, would be contingent upon details. Person: Who is dying? (personal variables such as age, degrees of moral and/or intellectual virtue). Time: When am I dying? (If there is a choice of when to die, is it better to die "sooner" or "later," at one time rather than another?). Place: Where am I dying? (If there is a choice of where to die, are there better or worse places: in a hospital, at home, in the wilderness?). Degree: What is the probability that I will, in fact, die? (Is death more likely or less likely to occur?). So how would Aristotle go about unpacking some of this stuff?

Aristotle taught that the end (or purpose) of life is happiness predicated over an entire lifetime. Therefore, the question of whether or not we "have led a good life" can only be answered by those who remain alive after we're gone. Although Aristotle never really said so (to my knowledge), it follows that "how we die" is very important in terms of our personal moral legacy. Some virtues associated with dying are pretty straightforward. Probably the central virtue associated with dying is how we deal with the emotion of fear. Hence, a courageous death is virtuous, and a death marked by the vices of either cowardice (vice of deficiency) or foolhardiness (vice of excess) is not. In short, a "good person" aims midway between fearing death too much and not fearing it enough. Now moral virtue, according to Aristotle, is achieved by deliberately establishing habitually virtuous behavioral patterns. Given that we only die once, we are naturally deprived of practicing that habit; therefore, we must cultivate the virtue of courage in other ways; often vicariously...by seeing how others die.

If Aristotle were around today, he would probably conclude that in the United States our collective "culture of death" is dominated by cowardice and spendthriftiness. Death is usually regarded as the worst thing that any person can undergo, and therefore we do anything and everything we can to prolong life for ourselves and for others. That's why most Americans die in hospitals and nursing homes attended by an army of physicians, nurses, and others who "fight" to keep us alive as long as possible. In fact, most of our health care dollars are expended during the last six months of our lives. Thus the virtues and vices associated with spending are also relevant; that is, the virtue of moderation and the vices of spendthriftiness (vice of excess) and tightwadness (vice of deficiency).

If we look at how today's role models shape our culture of dying, what do we see? When we die, most of our role models will die under the influence of mind-numbing drugs designed to "artificially" ease their culturally induced fear of pain and dying. And after they die, their loved ones will expend extraordinary sums of money on funerals, embalming, and caskets. And the more money their survivors spend on that casket and funeral home, the better. Unless I'm wrong...I think Aristotle would say that death as experienced in the U.S. models primarily cowardice and spendthriftiness.

So what can "good role models" do to alter our fear-mongered, spendthrift "culture of death?" Well, for a starter...a virtuous death does not necessitate that we welcome death with open arms, and/or hasten our own death via suicide of euthanasia. But it does require that we be mindful of how our dying behavior is effecting survivors and what kind of behavior we are modelling for the future. On the other hand, speaking as a libertarian, I would also point out that if you are wealthy (or poor) and choose to spend the remainder of your life savings fighting off an immanent death, you certainly have that right. It's your money! In fact, in some contexts it might be the best thing you can do with it. If you have a lot of money saved up, but have no friends or family, why not use it to pay the salaries of physicians, nurses, and funeral directors? But under most circumstances, it's neither courageous nor is it a very productive way to spend your last few dollars.

In conclusion, at a bare minimum Aristotle and I would almost certainly argue that we all must be mindful of how we live out those final days on earth, and remember that we are teaching the next generation how to die through our actions.        

           

Friday, May 24, 2013

The Philosophy of Work

Lately I've been thinking a lot more about the concept of "work." It's one of those human activities that most of us just do, without thinking very deeply about it. Let's see if we can plumb the depths of work...just a bit.

There are several related concepts that conspire to shape our beliefs about work. Perhaps the most obvious is our distinction between "work" and "leisure." At its most basic level, "leisure" is understood to be something pleasurable that takes place independent of work; with perhaps the unspoken connotation that work is not necessarily pleasurable. For most of us in the U.S., the standard work week is about 40 hours, or 8 hours a day (9-5), five days a week. Leisure activity usually takes place after 5 PM M-F, on weekends, holidays, vacations, and after retirement. We also distinguish between different kinds of work; most notably based on how dirty we get at work. Hence the longstanding distinction between "blue collar" and "white collar" work. Blue collar work (or work where your clothes get dirty) is usually compensated on an hourly basis, while "white collar" work is often paid via a set "salary." As a general rule, "white collar" workers tend to earn more than "blue collar" workers, but not necessarily. There are two hidden assumptions here that most of us accept without much thought: 1.) the more hours you expend at work, the fewer hours you can expend on leisure; and 2.) the less you earn at work, the less money you can afford to spend on products, services, and leisure activity. Given the amount of time that we expend "at work," our "co-workers" tend to play an increasingly important role in our lives. Most of our closest "friends" work with us, and more often than not we "date" and/or "marry" people we meet at work.

Compensation in the U.S. is comprised of both wages and benefits. (Some employers also give out honorary prizes such as "certificates of merit.") The total amount of compensation that workers receive is invisibly shaped by government policies; especially tax policies (which tax different kinds of work at different rates), retirement savings (Social Security), health care (employment-based), and a mountain of regulations that control what can and cannot be done under the auspices of "work." Some kinds of work require a license or a "degree" from a governmentally "accredited" educational institution: high school, undergraduate, or graduate. We also distinguish between work that produces products and work that provides services. Some kinds of work require previous experience. As a general rule, the more education and experience that is required for any work, the higher the level of compensation...but not necessarily. There is also the unspoken assumption that the "harder you work," and the "better you work," the more your employer will "reward you" in terms of promotion, higher wages, better benefits, and/or personal or public praise.

Now here's where the philosophy of work enters in! In the U.S., our personal identity (who we are) is based largely on what we do at work. When we meet someone, one of the first questions in conversation is usually: "What do you do for a living?" What's interesting here is that although it's socially acceptable (even obligatory) to ask a stranger "what do you do," it is NOT acceptable to ask: "How much do you make?" Nor is it acceptable to volunteer that information. "Hi! My name is Ron White. I earn $100,000 a year!" (I wish...) Yet, most of us unconsciously base our "self-worth" on what we do, who we work for, and how much we earn. Some work carries with it a positive connotation: "I am a doctor." Some negative: "I am an auditor for the IRS." Or: "I am a used car salesman." Some work carries a mixed connotation: "I am a professional musician."

In the U.S., one's "self-worth" is seriously undermined by the admission: "I am currently 'out-of-work'" or "unemployed." But why? I think there are three unspoken variables here. First, if you are "out of work," others unconsciously conclude that you are either lazy, living on welfare, or uneducated. Second, if you are "out of work," you lack income and therefore can't afford to buy the kinds of things that impress other Americans, like a new car, a big house, or an expensive vacation. And third, other Americans equate being "out-of-work" with being "on vacation," and therefore, out of sheer jealousy, they assume that your days are spent in undeserved leisure. "Gee, I wish I could stay at home all day and watch television too! But I've gotta go to work!"

As a philosopher, I'm certainly well aware of these hidden dimensions of work; therefore, I try to look objectively at how I ought to spend the limited time I have left on this earth. (I'm 62 years old.) The basic question for me is how much time I should expend at work. As a tenured, salaried "professor" I have a lot more control than most over how much time and energy I spend at work. However, like other Americans I feel trapped by the cultural assumption that I ought to work more, not less. As an older (experienced) teacher-scholar, there is the added fact that I'm currently "at the top of my game." Therefore, since I'm finally a "good teacher" I ought to teach more; and/or, since I'm finally a "good scholar," I ought to write and publish more and participate in more scholarly meetings. On the other hand, I'd really like to spend more time, energy, and resources with my wife and kids, playing guitar, and sleeping out back by the pool. So how should I go about rationing the last few years (hopefully 20 years?) between work, family, and leisure? Although I still enjoy the "teaching" aspect of my work, the "culture of teaching" has changed to the point where I spend more time doing things unrelated to teaching, such as complying with administrative mandates. (My book orders for Spring 2013 are now past due.) I also spend more time complying with mandates associated with "assessment" of teaching and learning. Moreover, although I still enjoy the thrill of getting something published, I already have about 100 publications (of various kinds). Given that I am already at the top of my salary scale, there's no longer a "reward" for excellent teaching or scholarship. Thus, most of my motivation to excel at teaching and scholarship is internal rather than external.

So how have I been doing, recently, in reallocating my time, energy, and resources? Well, I'm working on (hopefully) my last journal article, but I also scheduled two scholarly presentations for next fall, and I agreed to be the program director for another national conference. I also have a book review due next week. Overall, I'd conclude that, for me at least, it's a lot easier to think about untangling my self-identity from "work" than it is to actually do it.                               

Monday, May 20, 2013

The Ethics of Whistle-Blowing

Whistle-blowing is surprisingly complex. It's basically about controlling the flow of information within and between individuals, organizations, and government. It's also about monitoring and enforcing laws and rules. The "ethics of whistle-blowing," therefore, involves the values that underlie the act of whistle-blowing and how it affects the various stakeholders. Thus, a would-be whistle-blower must make a moral decision as to whether to blow that whistle, who to blow the whistle on, and who to blow the whistle to. Some whistle-blowers are motivated by moral concerns, others by less altruistic motives such as retribution. Some allegations are true while others are false. Hence, some acts of whistle-blowing are "justified," others are not. Conversely, an organization must decide how to treat the whistle-blower, whether to act upon the information provided by whistle-blowers, and how much time, effort, and resources it is willing to expend encouraging or discouraging this form of internal surveillance. Finally, the government has erected a legal regulatory regime that affects the entire process.

By definition, whistle-blowing targets harm, and therefore it involves normative judgments involving legality, morality, or both. Thus one might "blow the whistle" on an individual, several individuals, or an entire organization (public or private) that violates laws (criminality) or violates moral rules (morality). In both cases there may be greater or lesser degrees of harm (the magnitude of the harm of "killing humans" is obviously greater than "jaywalking"). And of course, not all illegal acts are immoral acts and not all immoral acts are illegal acts. Violations of legality and/or morality imply the imposition of sanctions. We can argue over whether those sanctions ought to be imposed in order to enforce retributive justice or to deter future wrongdoing. Illegality sometimes sanctions acts of "harmless immorality," such as the violation of "blue laws" that make it illegal to open stores on Sunday. Legal philosophers call the legal (governmental) enforcement of harmless immorality "legal moralism."

Most illegality sanctions "harm to others"; however, some laws are "paternalistic" and therefore attempt to sanction "harm to self." Therefore, whistle-blowers can "blow the whistle" on greater and lesser degrees of harm that are subject to either legal and/or moral sanctions. If an act is illegal, then it is subject to legal sanctions such as fines, incarceration, death, etc. If an act is merely immoral, there would be only a moral sanction, which might involve being ostracized or being condemned to hell by religious authorities. Worldwide there are many religions that sanction different harmless immoralities such as shaving beards, eating specific foods, wearing revealing clothing, dancing, using condoms, or buying liquor on Sunday. Some political regimes extensively enforce morality via legality (Saudi Arabia); therefore, moral violations become crimes sanctioned by the state. Now, let's get back to those basic moral questions.

What kinds of acts ought to be subject to whistle-blowing?

...Given that organizations must expend time, energy, and resources monitoring and enforcing whistle-blowing, it makes good economic sense to limit whistle-blowing to major harms, which are usually sanctioned by legality. For example, it probably isn't worth it for a Catholic organization to act on whistle-blowing for a member's use of condoms, getting a vasectomy, or missing mass on Sunday. The cost of investigating most harmless immoralities would exceed the benefits. Thus, most reasonable whistle-blowing policies focus on major illegalities, or crimes.

...No organization wants to act on false claims, therefore, the credibility of the whistle-blower must be taken into account. Does the whistle-blower have access to the evidence presented? Is the whistle-blower merely disgruntled with the organization and seeking retribution? Does the whistle-blower have an "ax to grind?" Does the allegation make sense? Who is responsible for the alleged wrongdoing?

...If it seems unlikely that a crime has been committed, is it worth an organization's time, energy, and resources to investigate it?

Who should the whistle-blower blow the whistle on?

...Obviously, if possible, whistle-blowers ought to blow the whistle ONLY on guilty parties, not innocent parties. Some crimes involve one party; some are conspiracies that involve many cooperating parties.

...Sometimes conspiracies involve the top leaders of an organization, which makes internal investigation difficult.   

Who should the whistle-blower blow the whistle to?

...If it seems likely that a major crime has been committed, the whistle-blower can elect to "blow the whistle" either inside the organization or outside the organization. If inside, the whistle must be blown within the proper organizational channels, usually by starting at the bottom and working your way up. If the whistle-blower chooses to blow the whistle outside the organization, it's usually to the appropriate governmental agency or the media.

...Some crimes MUST (by legality) be reported to the government, and therefore, attempts to conceal criminal activity from the government are subject to legal sanctions.   

To what degree ought an organization either encourage or discourage whistle-blowing?

...Organizations can provide either disincentives or incentives for whistle-blowing. Some organizations make it too easy to whistle-blow, others make it too difficult. Some organizations retaliate against whistle-blowers by imposing sanctions, which might include expulsion from the organization. Others encourage internal whistle-blowing (within the organization) by offering financial rewards or by guaranteeing anonymity for whistle-blowers. In recent years, the government has encouraged whistle-blowing by offering rewards for external whistle-blowing.

...There are unanticipated consequences associated with whistle-blowing. One of the least appreciated is the fact that when an organization encourages whistle-blowing, it eventually fosters a culture of distrust: members tend to be distrustful of other members and of the organization as a whole, and they become more secretive. Conversely, if an organization discourages whistle-blowing and relies more on trust, then a "culture of trust" is more likely to develop, but the system tends to be more vulnerable to opportunism.

Here's my general take on whistle-blowing. The basic problem is the illusion (or "Fatal Conceit") that organizations can, in fact, control whistle-blowing. First of all, the government requires that most organizations (especially business organizations) have some kind of grievance process in place. If that process over-incentivizes whistle-blowing, then the organization will have to expend more time, energy, and resources investigating frivolous and false claims. If a whistle-blower is not satisfied with the internal organizational investigation, the whistle-blower can always choose to blow the whistle outside of the organization. If the process discourages internal whistle-blowing, then whistle-blowers will simply go outside of the organization. Therefore, I would argue that any organization that believes that it can control whistle-blowing is delusional. The best policy, therefore, is to comply with legality...and no more.

Thursday, April 18, 2013

Organizational Distrust

In my last post I suggested that trust and distrust are both subject to Aristotelian contextual analysis based on person, time, place, and degree. Let's take a look at how distrust works in the context of organizations.

First, let's recall that organizations are cooperative, teleological entities that pursue specific ends via specific means. Ethical organizations pursue "moral ends" via "moral means." Organizations are comprised of individual leaders and individual followers. Leaders may (or may not) trust followers, and followers may (or may not) trust leaders. In a fair world, trustworthiness is based on objective "track records." Followers, however, seem naturally predisposed to trust leaders; so in the absence of "track records," leadership trust is the default position, and followers tend to give leaders the benefit of the doubt. Sometimes too much benefit!

There's a difference in trust as manifested in small and large organizations. In small organizations, leaders and followers tend to trust each other because they can personally monitor each other's track records. However, monitoring the behavior of leaders and followers in large organizations is, by its very nature, impersonal and requires institutionalized systems. Organizations based on distrust involve the codification of formalized rules that are monitored and enforced by a system of surveillance. Most organizations today have evolved increasingly efficient institutions and surveillance technologies that target, primarily, the behavior of followers. Leaders are usually in charge of monitoring and enforcement, and therefore their own behavior is not easily monitored. (Enron!) In fact, leaders often use their positions of power to hide or distort their track records. Ironically, followers are often expected to trust leaders, despite the fact that those leaders do not trust their followers.

In recent years, both large and small organizations have adopted strategies based on distrust that also incorporate the use of "carrots," or rewards for cooperative behavior, and/or "sticks," or punishments for uncooperative behavior, usually in the form of threats such as seizure of property (fines) or expulsion from the organization (firing). It's also true that when leaders and followers are not trusted, and are subjected to increasingly intrusive monitoring and enforcement, they tend to be less cooperative and engage in less trustworthy behavior, which leads to ever more intrusive surveillance systems of monitoring and enforcement, larger sticks, and tastier carrots. But why do we respond negatively to distrust? I don't know about you all, but I generally do not respond positively to either personal or institutionalized distrust. Leaders get a lot more cooperation out of me by trusting me to do the right thing. On the other hand, there are contexts where we all have lousy track records. My wife rightfully does not trust me to take care of our finances. In fact, I don't trust myself in that context. Thus, sometimes the track records of leaders and/or followers rightly justify distrust, monitoring, and enforcement. However, deserved distrust might also warrant expulsion from the organization and replacement by more trustworthy leaders or followers. Nevertheless, in our society based on distrust, we have all been conditioned to trust organizations that have the most intrusive systems of monitoring and enforcement. Indeed, today the idea that leaders ought to trust followers and that followers ought to trust leaders seems quaint, as the global default position continues to gravitate toward distrust and surveillance. Today, most of us harbor unprecedented distrust of both political leaders and business leaders; and political and business leaders rarely trust followers. Is your e-mail or Facebook account being monitored? Is there a drone circling your house right now?

The most common form of institutionalized distrust is whistle-blowing, where leaders and followers are actively encouraged (with carrots and sticks) to "blow the whistle" on each other for legal and/or moral wrong-doing. I'll cover that in my next blog.          

Tuesday, April 16, 2013

Trust

I'm not much of a proponent of virtue-based ethics, but lately I've been warming up to it...One virtue I've been thinking about is "Trust."

First, let's agree that we use the word in a bewildering variety of contexts. We say that we "trust" or "distrust" both living and non-living entities. I "trust" my wife, my dogs, and the brakes on my Toyota. I would "distrust" my wife to perform brain surgery. I certainly wouldn't "trust" my dogs to drive my Toyota. We also "trust" or "distrust" organizations, leaders, and/or followers. For now, let's focus on individual and organizational trust.

In virtue ethics, I'm a big fan of Aristotle's emphasis on context. He certainly would NOT argue that it is virtuous to completely trust all persons, leaders, followers, or organizations, at all times, in all places. To be trusted, one must be...trustworthy. Let's break this down a bit. Aristotle thought that trust (and distrust) are virtues, and that virtuous behavior is habitual. Therefore, the decision of whether to trust a person or organization is contingent upon access to information about habitual behavior. If John Doe has a history of drinking all of your most expensive beer when you're not home, then you probably wouldn't want to leave an unguarded case of Heineken in your refrigerator. You might, however, trust that John will not steal your wife, Toyota, or your dogs. Hence, trust really is contextual. Moreover, if you habitually trust persons or organizations that, obviously, are untrustworthy, you are not virtuous. You're a "sucker." If you habitually "distrust" persons or organizations that are demonstrably trustworthy, you are not virtuous either. You are a "cynic." In short, Aristotelian virtue requires that we trust the right person, at the right time, in the right place, to the right degree. We must be discerning.

As a general rule, we trust family and friends, at least in part, because we know their "track record." On the other hand, we generally distrust "strangers"; that is, persons and organizations whose track record is unknown. However, if we're virtuous we naturally seek out "information" that might indicate trustworthiness or untrustworthiness. Of course, that leaves us open to the criticism that we're biased or discriminatory; especially when we distrust classes of strangers based on limited information. So here is the basic question! When we are deciding whether to trust or distrust strangers, what should be the default position? Should we trust strangers until we possess reliable information that warrants distrust; or should we distrust strangers until we possess reliable information that warrants trust? Unfortunately, some individuals and organizations are opportunistic, and are highly skilled at disguising their untrustworthiness. Hence, many positions of trust are invaded by untrustworthy opportunists. At least in recent years, politicians have probably been the most opportunistic. Here, the basic problem is that politicians have an almost unlimited ability to disguise their opportunism by manipulating the machinery of government to their advantage. Transparency, therefore, is a necessary condition for guarding against opportunism. But precisely because transparency undermines opportunism, persons and organizations often manipulate language to make it difficult for the rest of us to decode their track records. Hence the rise of "private languages" that serve to undermine our ability to know who to trust. Most professions that serve as "agents" for the rest of us employ private languages: lawyers, priests, ministers, physicians, politicians, used car salesmen, insurance brokers, etc. I call it "private language fraud."

I've been rightfully identified as a cynic; that is, I am not readily predisposed to blindly trust strangers, especially those that expect me to trust them to decode a private language. In fact, I always deploy my philosophical skills to "deconstruct" those private languages on my own, or at least find someone that I trust to do it for me. The most conspicuous example of private language fraud on earth is the U.S. tax code. Until that mess is translated into "public language" that the rest of us can understand, I'll continue to distrust politicians.                           

Saturday, March 2, 2013

Self Sacrifice for the Common Good

"Common good arguments" inevitably call for "self-sacrifice" by individuals, small groups, and large groups. Self-sacrifice entails the allocation of time, energy, and our resources in the interest of other individuals and/or groups. Hence, I might be urged to tune my guitar for the "common good" of my band. My band might be urged to rehearse at a lower volume for "common good" of my neighborhood. My neighborhood might be urged to clean up trash on the streets for the "common good" of the city of Cincinnati...etc. Democrats and Republicans are being urged to "sacrifice" their "partisan" interests for the good of the whole country. The U.S. is asked to sacrifice its "partisan" interest in cheap energy for the "common good" of cleaner air for the rest of the world. In each instance, self sacrifice calls for individuals or groups to either "do something" unpleasant, or "not do" something pleasant for the "common good." Moreover, there is always an added correlary: non-cooperative "bad persons" can be justifiably coerced to "do" or "not do" these goods.  While I would never argue that self-sacrifice is "bad," I do think that whether it is justified or not in any context requires a complex moral argument. Merely declaring that X serves the "common good" is not enough...at least for a philosopher.

"Self-sacrifice arguments" are part of a larger complex of moral beliefs bound together by the central notion that "good persons" and "good groups" willingly sacrifice their interests (time, energy, or resources) for the common good; and that "bad persons" do not. The more you are willing to sacrifice, the "better."  As I pointed out in my previous blog, the most idealistic "common good arguments" urge us to sacrifice our own short-term, individual (or small group) interests in order to advance long-term, larger group interests, or global interests. Hence, the moral ideal consists in self-impovershment for the sake of the universal, long-term common good.

Because "self-sacrifice arguments" are usually rhetorically persuasive, more and more "interest groups" invoke them, and demand increasing levels of "self-sacrifice" out of all us. Government is the most zealous proponent of "self-sacrifice." The TSA now expects all of us to "sacrifice" increasing levels of privacy in order advance the "common good" of airline security. If I object to aggressive testicular groping I am not a "good citizen" or a "good person." Here in Cincinnati we are being asked to pay higher taxes to pay for two sports stadiums, a zoo, several museums, an orchestra, a new bridge across the Ohio River, and new a street car...all in order to advance that proverbial "common good"over the "long-run." If I object to sacrificing my own financial interests in order to advance any of these "common goods," I'm immediately labelled as selfish, uncooperative, and a bad citizen or person. My question is this. If "self-sacrifice" for the "common good"  is our primary social value and the basis for "social justice," and if, over the long-run, there is an infinite number of possible "common goods," then how can we avoid self-sacrificing ourselves into abject poverty?

Right now I bet you're thinking something like this..."Obviously we can't advance all 'common goods'; therefore we are only 'obligated' to advance the most salient 'common goods' that fill the 'needs' of other individuals and/or groups." In other words, it is morally acceptable to ration our self-sacrifice and thereby avoid self-inflicted poverty. That works fine in the private sphere, where we have control over our own rationing process. It doesn't work very well in the public sphere, where others coercively ration our time, energy, and resources. In either case, here's the basic question: "How do we know that any self-sacrificial act will, in fact, advance the 'common good' and not merely advance the short-term 'partisan good' of those who stand to benefit from our sacrifices?" Obviously, it's harder to know universal, long-term consequences.

In sum, I am not opposed to the "common good" or "self-sacrifice"; however, I am wary of the dangers of unquestioned devotion to these abstract principles. Both terms are what I call "conveniently vague," which means they tend to evoke consensus even when there really isn't any. Moreover, failure to deconstruct these arguments leaves us vulnerable to opportunists who will "cash in" on our unbridled generosity; especially corporations. Hence the spread of corporatism in the Western world, as corporations skillfully deploy "common good" and "self-sacrifice" arguments to advance their own partisan interests. In recent years, the most prolific beneficiaries of blind self-sacrifice have been associated with the health care industry, especially health insurance companies, pharmaceutical companies, hospitals, and physicians. But that's another topic.

Thursday, February 28, 2013

The "Common Good"

Tomorrow my college is conducting a faculty retreat to teach us what it means to advance the "common good," how we can know specifically what it requires of us morally, how to do what's right in advancing the "common good," and how to teach students to advance the "common good." I've been working on this for more than 30 years and haven't been able to do much with it. After 2500 years of Western philosophical debate on the issue, I'm glad a committee at my college has, finally, figured it all out. Here's what I expect to learn.

First of all, when we hear the admonition "Do X in order to advance the 'common good,'" it's intended to serve as the atomic bomb of philosophical discourse. When non-philosophers hear it, they inevitably submit to its authority. After all, how can one reasonably present a counter-argument? "Do X in order to not serve the 'common good.'" Or, "Do X in order to advance the 'common bad.'" Secondly, when it is invoked within any given context it excites an emotional response within us; namely, a feeling of "solidarity" with other humans, a sense that "we're all in this together." Therefore, "common good arguments" tend to resonate with most of us both rationally and emotionally. Hence, over thousands of years of intellectual history it shouldn't be surprising to find that the "common good argument" has been invoked (and continues to be invoked) in a wide variety of contexts. Most wars (including the Crusades) have been deemed both "justified" and "unjustified" based on the "common good," along with the torture of prisoners of war, the internment of foreign nationals during wartime (the Japanese), and recent drone strikes against "terrorist leaders" in populated areas. Other "common good arguments" have been offered both for and against: slavery, capital punishment, gun ownership, and the confiscation of private property by government. Over the past 300 (or so) years, the most persuasive "common good argument" offered by political regimes has been that, inevitably, we must sacrifice our personal liberty in order for government to provide us with "security." The most obvious application here has been our willingness to submit to increasingly invasive "searches" under the guise of "airport security." Over the years, the concept of "security" has expanded exponentially to include not only physical security (from criminals and invaders), but also economic security, safety, health, even protection from offense or embarrassment.

OK, I know what you're thinking! "Ron, just because someone invokes the 'common good argument' (or its variants: greater good, public good, social good, etc.) doesn't mean that the argument is sound. If X does not, in fact, serve the 'common good,' then we ought to reject that argument." Fair enough! So what we really need to know is: the meaning of the "common good," whether X in fact advances it, and, if X advances the "common good," exactly how to do it.

What is the meaning of the "common good?" Better yet...what is the meaning of "the Good?" If we don't know exactly what "the Good" means, we cannot know the "common good." So what are we really saying when we declare that "X is good?" Well, we use the word "good" in many contexts. My new suit looks good. Red wine tastes good. Garlic smells good. The Maladroits sound good. Etc. We also use the words "good" and "functional" interchangeably. A BIC razor is good for shaving my head. Plato identified at least three different species of "the Good": extrinsic goods (good for what they bring about, e.g. money), intrinsic goods (good for their own sake, e.g. happiness), and goods that are both; the best things in life are both. Is "the Good" an idea? If so, where did that idea originate? Is it in all of our minds, or only a few of our minds? If only a few of us know "the Good," who knows it, and how do we know that they "know it?" Is it because knowledge of "the Good" runs in families and is passed on via genetics? Is knowledge of "the Good" acquired via teaching and learning? Is "the Good" a feeling? When I say that Bota Box wine is good, am I reporting how I feel when I drink it? If you drink it and report that it's bad, who is "right" and who is "wrong?" What if I'm the only one in the world who reports that Bota is good? Am I wrong? If I say that F.A. Hayek is a "good philosopher," is it just another report, or am I doing something other than stating my approval? Or is "the Good" a property of things that we can see, feel, touch, or smell...once we learn how to do it? Again, after 2500 years of philosophical debate, I'm glad that someone finally figured all this stuff out.

Tomorrow, just after I learn the meaning of "the Good," I'll finally learn the meaning of the "common good," how to know it when I see it, and how to advance it. I suppose the key to "knowing" the "common good" is identifying the "interests" that I have "in common" with others. Of course, that assumes that I "know" my interests and that "others" "know" their interests. But more than that, what does "others" mean? Everyone employed at my college has an "interest" in increasing student enrollment and retention. However, I don't think other competing colleges share those interests. In fact, they have an interest in my college having lower enrollment and lower retention. So what is "good" can be specific to different "commons": what's good for Cincinnati, Ohio, the United States, or the whole world. Now the atomic bomb of all "common good arguments" is to argue that "X is good for everyone on earth." Universality trumps particularity. Therefore, if X is "less good" (or even bad) for me or Cincinnati, then we ought to "cooperate" anyway in order to advance the more "common good." Those arguments are usually complicated even more by the passage of time. Hence, "common good arguments" often differentiate between what serves the common good over the long-run and the short-run. Here we have yet another atomic bomb: whatever serves the "common good" over the "long-run," necessarily, trumps what merely serves the common good over the short-run. Thus, "common good arguments" aim toward the ideal of advancing "what's good for everyone over the long run." As a philosopher who only occasionally "knows" what's good for himself over the short run, I'm really looking forward to tomorrow.

Finally, after tomorrow I'll not only know what's good for everyone over the long-run, I'll also know how to bring it about. As a lifelong pacifist, I have no doubt that "world peace" is in everyone's interest over the long-run. However, I also recall from history that warmongers have always argued that we need to fight just one more war "to end all wars." We hear that same argument in defense of the drug war: "All we need to do is incarcerate a few more drug lords and drug users, and, over the long-run, there eventually will be no more drugs or drug violence." As you might imagine, I'm really looking forward to learning more about how I can advance the "common good" over the long-run. I'll definitely pass it on to my students. My next blog will explore "self-sacrifice" in pursuit of that elusive "common good."