Friday, December 26, 2014

The Philosophy of Torture

In response to the recent publicized revelations about the CIA’s use of torture in the ongoing War on Terrorism, I thought I’d add my two cents on the philosophy and ethics of torture. Much of the work of philosophers involves untangling the meanings of complex words that shade the premises of arguments over Truth and Value. As stated in an earlier blog, the distinction between “descriptive” and “prescriptive” concepts is notoriously problematic. The word “torture” swings both ways; therefore we must explore both the facts (“what it is”) and the values (“whether it’s good”).

What is torture? Well, first of all, let’s agree that “torture” necessarily involves at least two persons, one of whom inflicts harm on the other for some purpose. Hence we must distinguish between the “torturer” and the person being tortured. Torture is a teleological or goal-directed act, intentionally employed in pursuit of one (or more) of three main purposes:

 1.) Torture for the sheer pleasure of inflicting pain on another person (sociopathic torture).

 2.) Torture in order to proportionately punish others for wrongdoing in the name of justice (retributive torture): “an eye for an eye.”

 3.) Torture in order to secure information (utilitarian torture).

Here, I’ll focus on utilitarian torture as it is employed within human warfare, especially in the context of the interrogation of prisoners of war.

 Torture entails the infliction of harm. Harm is the invasion of an “interest,” and interests come in variable degrees; that is, there are greater and lesser interests. Torture is an “other-regarding act” that most often involves the infliction of greater degrees of pain. So, logically, you can mistakenly (or deliberately) “harm yourself,” but you cannot “torture yourself.” Torture, by definition, implies the invasion of individual interests. It is most often employed in the context of warfare, invading the “individual interests” of POWs in order to advance the “collective interests” of others. Invariably, it is employed by interrogators in order to secure “strategically important” information.

There are two main ways for interrogators to secure information from any sentient being: by employing “carrots” and/or “sticks.” Humans and other sentient creatures respond to both. Carrots are pleasure-based “enticements” offered in reciprocal exchange for information; torture is the pain-based stick. All persons with functioning central nervous systems tend to prefer pleasure-based experiences and avoid pain-based experiences, though some of us are capable of enduring higher levels of various kinds of physical pain for longer periods of time than others. Non-sentient beings that lack a central nervous system cannot experience pain (or pleasure), and therefore they cannot be enticed or tortured. Similarly, you won’t get much useful information by enticing or torturing comatose or brain-dead prisoners. The same holds true for torturing dead persons: “Dead men tell no tales.” And interrogators waste their time torturing masochists.

As stated at the beginning of this blog, most philosophical debate is rooted in language. And, as Foucault noted, there is a complex relationship between “power and knowledge.” The CIA, a well-established power monger, employs its own “private language” to distinguish between “torture” and what it calls “enhanced interrogation.” Enhanced interrogation usually involves the use of lower-level physical harms such as the short-term deprivation of sleep, food, water, or social interaction. These deprivations rise to the level of torture when employed over longer periods of time. The CIA routinely admits that it employs enhanced interrogation, but insists that it never tortures anyone. Philosophically, we might argue over the point at which deprivation becomes torture. Withholding food for a day is different from withholding it for a week! Torture can also involve the direct infliction of physical pain via exposure to extreme heat, extreme cold, and/or sharp objects. The key philosophical questions here are: at what point does enhanced interrogation become torture, and how does the torturer know when to stop short of that line?

A few years back there was an extended public debate over whether the “waterboarding” of prisoners constitutes enhanced interrogation or torture. The technique creates the “illusion” of drowning, but the experience is non-lethal and non-permanent. At what point does the experience of that “illusion of drowning” become torture? How many times can an interrogator waterboard a prisoner before crossing that line? How long per session? How many sessions per day?

An interrogator might simply “threaten” to torture a POW. Sometimes threats “work” very well, sometimes not. The key variable here is credibility. The threatened POW must be convinced that the would-be torturer will (in fact) follow up on that threat. If the Pope threatens to skin you alive…don’t worry. Humans also have interests other than avoiding their own physical pain. Thus, highly-skilled torturers often inflict (or threaten to inflict) physical pain on third parties…especially the family and friends of the target. “If you don’t talk NOW we’ll skin you and your family alive...family first.” However, threatening to torture a POW’s enemies is not really a threat but an enticement.

There are at least four epistemological questions that muddle the contemporary debate over the use of torture in warfare.

First, does the POW (in fact) “know” something of strategic value? If so, how does the torturer know that the POW knows something of “strategic value?” Torture is obviously futile in cases involving low-level combatants who don’t know anything. Competent military leaders are very conscious of this fact, and therefore reveal information to low-level soldiers on a “need-to-know” basis. Those who really “know something” of “strategic value” are rarely found on the battlefield. In wars involving decentralized enemies, where low-level combatants do not follow orders from a central authority, torture is also of dubious strategic value.

Second, if there is (in fact) a high probability that a POW “knows something,” is the information being sought by the torturer (in fact) of sufficient “strategic value” to outweigh the costs of using torture? Philosophically, we must unpack what is meant by “strategically important information” and how to most effectively acquire it. The concept of “strategic value” is extraordinarily malleable, or “socially constructed.” It usually means that a piece of information will help “win a battle” or “win a war.” But even if a piece of information is (in fact) strategically important, one might question whether torture is the most efficient way to acquire it. Would the offering of enticements (such as money, immigration privileges, better food, etc.) be more likely to yield results? Moreover, torturing a POW in order to win a battle (short-term) might not, necessarily, win the war (long-term). In complex phenomena like warfare, cost-benefit analysis is inevitably dogged by unanticipated consequences. Sometimes losing a battle can be a short-term setback but a long-term advantage, especially if it inspires the recruitment of new, highly-motivated soldiers. Similarly, torturing POWs might inspire more enemy volunteers seeking retribution for the torture of “their brothers.”

Third, if a POW does (in fact) know something of strategic importance, does torture necessarily yield useful information? If a torturer is waterboarding a POW who doesn’t know anything, isn’t that POW highly likely to lie in order to end the torture? And most soldiers who do know something are trained to lie effectively under interrogation. What are the long-term costs of acting on false information?

Fourth, what would be the sociopolitical, legal, or moral consequences of accidentally killing a POW under interrogation? Would the opposing warring regime be more likely to torture and/or kill POWs under its control in retribution? Does the use of torture therefore necessitate killing POWs that have been tortured, in order to prevent the enemy from knowing that torture has been utilized? One might also reasonably argue that the use of torture necessitates a shroud of secrecy. If so, is that shroud consistent with a nation’s moral identity? If the United States hopes to maintain its reputation as the beacon of Western democracy, how would public knowledge of the use of torture affect that reputation? Is any “shroud of secrecy” consistent with democracy? Do you really “blindly trust” the institutions of government to operate behind that shroud? Do you trust Congress, the president, or the courts to supervise the CIA’s use of torture? If you are a torturer, would you readily admit to your supervisors that you tortured a prisoner but didn’t get any information? If you are that supervisor, do you REALLY want to know that your subordinate used torture? Wouldn’t you prefer to operate on the basis of “plausible deniability” and be able to say: “I didn’t know that X was torturing prisoners. He was a ‘rogue torturer!’”

 In light of the conceptual ambiguities associated with the use of torture in the context of war, the pursuit of strategically important information provides an open invitation for warring nations to torture prisoners. But torturers really do not “know,” exactly, what a prisoner knows. If they did, they wouldn’t have to torture him. Moreover, torturers really do not know (beforehand) whether that information is strategically important or not. War strategy, by necessity, is classified as “Top Secret,” so there is no incentive on the part of torturers to reveal whether the information procured via torture was (in fact) strategically important. Unfortunately, all of the incentives associated with torture promote institutionalized lying. Unless confronted with powerful evidence, no “civilized nation” will ever admit to using torture. If confronted by irrefutable evidence, no highly-skilled torturer will admit that he didn’t secure strategically important information. Hence the universal response to publicized, irrefutable evidence of torture: “We saved thousands of lives on both sides of the war by torturing this one person.”

I’ll discuss the “Ethics of Torture” in my next blog.  

Saturday, December 6, 2014

The Philosophy of Retirement

In the United States, most of our culturally-bound "philosophy of retirement" is highly dispersed and mostly invisible. From an early age, we are taught to base our “personal identity” on "how we make a living.” As young children we are not expected to work…but we are legally required to spend much of our first 17 years attending primary and secondary educational institutions, which (at least in theory) prepare us to ultimately decide “what to do when we grow up.” Many high school students both attend school and work at least part-time, allegedly to teach them the values associated with work: that is, show up on time, do what your boss tells you, dress appropriately, etc. After primary and secondary school, many of us attend college and graduate school, and amass enormous long-term debt, allegedly so we can “land a better job.” Sometimes those students even “graduate!” If you look closely, you’ll see that colleges and universities have gradually become servants to an ever-narrowing philosophy of work that emphasizes “work training” over “life training.” Not surprisingly, our schools (primary, middle, high, and college) devalue “life-training” subjects such as art, literature, music, and philosophy in favor of “work training.” This philosophy of work has become so deeply ingrained that many of us choose work over family, friends, hobbies, vacation, or retirement.

The foundation for our prevailing “philosophy of retirement” is the cultural belief that there is a "one-size-fits-all" point in time when we all “ought" to stop working. Three main questions arise: When should I retire? Who ought to decide when I retire? And on what basis should that decision be made?

So, when should I retire and who should decide? In the United States, you can begin collecting Social Security retirement benefits at age 62, but “full” benefits don’t kick in until several years later, and private retirement savings programs follow that precedent. Thus, almost all retirement programs punish you with fees and/or taxation for retiring “too early,” and that “one-size-fits-all” retirement age really shapes our decision of when to retire. As we approach that magic retirement age, we must decide, exactly, when to retire. Rationally, that decision ought to involve weighing both other-regarding and self-regarding reasons. Typical other-regarding reasons include retiring “for the sake of” our family, our employer, and/or the currently unemployed (who might fill our vacated jobs). Self-regarding reasons include retiring in order to have more time for family, friends, hobbies, and/or travel. Ultimately, however, our decision of whether and/or when to retire is shaped primarily by whether or not we have saved enough for that retirement.
 
I have been saving for retirement for over 30 years. I lost one-third of it in the "mortgage meltdown," but I've recently recovered it. Assuming that I have saved a sufficient amount, I might reasonably conclude that, if financially feasible, I ought to retire sooner rather than later…while I'm still healthy enough to enjoy that retirement. However, given that most middle-class Americans (like myself) are living longer, public policymakers are now contemplating raising the minimum age for collecting Social Security to 70; by implication, that would affect private retirement programs such as TIAA-CREF. In short, the government believes that it is for the "greater good" that I retire later rather than sooner.

Now here's my libertarian rant. To what degree should government be empowered to influence when I choose to retire? On what basis should it be able to manipulate the "choice architecture" (legal incentives and disincentives) that shapes my retirement decision? And what "public good" is served by keeping oldsters like myself at work until we're 70? Right now, the Social Security Administration will pay me substantially more on a monthly basis if I wait until I'm 70 to retire. If the stock market continues to rise, my TIAA-CREF account might be able to provide me and my wife with a decent retirement income at age 70. However, if I choose to retire before that, I may run out of money and/or get driven into bankruptcy and dependency by our predatory health care system. Thus, our philosophy of retirement is fraught with unanticipated consequences. That's because the decision is framed by a complex, tax-based legal architecture that assumes that we all "ought" to retire between the ages of 65 and 70, and that it's for the "greater good" that we all wait until we're 70. Unfortunately, both of my parents died in their early 60s, so if "the apple does not fall far from the tree," I’m probably not going to enjoy much (if any) “retirement.” But my wife will...which is obviously a good thing; but I'd rather help her spend that largess.
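For readers who like numbers, here's a minimal back-of-the-envelope sketch of the claim-early-versus-wait tradeoff (in Python). Every figure is a hypothetical assumption of mine, not an official SSA number: a $2,000 monthly benefit at a full retirement age of 66, roughly a 25% reduction for claiming at 62, and roughly 8% per year in delayed credits for waiting until 70. It also ignores taxes, inflation, and investment returns, so treat it as an illustration, not financial advice.

    # A minimal sketch of the Social Security claiming tradeoff.
    # ASSUMPTIONS (hypothetical, illustrative only): $2,000/month at a full
    # retirement age of 66; ~25% reduction for claiming at 62; ~8% per year
    # in delayed credits for waiting until 70. Ignores taxes, inflation,
    # spousal benefits, and investment returns.

    def total_collected(monthly_benefit, claim_age, death_age):
        """Undiscounted lifetime benefits from claim_age until death_age."""
        years = max(0, death_age - claim_age)
        return monthly_benefit * 12 * years

    BASE = 2000            # hypothetical monthly benefit claimed at age 66
    EARLY = 0.75 * BASE    # claim at 62: reduced benefit
    LATE = 1.32 * BASE     # claim at 70: delayed-credit benefit

    for death_age in (63, 70, 75, 80, 85, 90):
        early = total_collected(EARLY, 62, death_age)
        late = total_collected(LATE, 70, death_age)
        print(f"die at {death_age}: claim@62 = ${early:>9,.0f}   claim@70 = ${late:>9,.0f}")

Under these made-up numbers, the breakeven falls at roughly age 80: die in your early 60s (as my parents did) and claiming early wins hands down; live into your 90s and waiting wins. Notice how the "greater good" of waiting until 70 quietly assumes longevity.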
 
In sum, assuming that the "bad apple" does not fall before that, I'll probably work until I'm 70. As long as I can continue to do a decent job teaching students philosophy, that's OK. However, given the rapidly devolving culture of higher education, it's now consuming more and more of my time, energy, and resources to teach students about Truth and Goodness. Thus, over the next 6 1/2 years, not only will I miss out on retirement but I'll also have less time for my family, friends, and my guitar. Thank God...that our "philosophy of retirement" serves the "greater good," because it certainly does not serve mine!    

 

Sunday, March 23, 2014

Individual Autonomy and Evolutionary Psychology: Thin or Thick Theory?



ABSTRACT

           
The fundamental theoretical consideration among libertarian social and political theorists is whether libertarianism requires a “thin theory” or a “thick theory.” This, obviously, includes the corollary questions: “How thin is thin?” and “How thick is thick?” In this paper, I shall extend that same distinction to the Theory of Autonomy. Hence, the basic question becomes: “At a bare minimum, what kind of a Theory of Autonomy does libertarian political theory require?” I will argue that, although a rigorous and relatively complete “thick theory of autonomy” is both desirable and possible over the long run, libertarianism requires only a “thin theory” grounded in the Non-Aggression Axiom. Any thicker theory, I shall argue, must be able to generate both descriptions of human nature consistent with evolutionary psychology and justifiable normative prescriptions. However, any Theory of Autonomy must also be consistent with libertarianism’s central tenet, the so-called “Non-Aggression Axiom.” Indeed, the first order of business for any evolutionarily-centered political theory must be to generate both facts and values about the use of human aggression. But regardless of those facts, libertarians must embrace autonomy as a value and flesh out exactly which (if any) legal and/or moral duties are entailed by the Axiom.

 
INTRODUCTION

  Before I get too far along, let’s agree on some basic concepts. Human inquiry is about asking questions and posing answers. Theories are answers to questions. Broadly speaking, there are two kinds of theories. Descriptive theories answer questions involving “matters of fact” (what is True), while prescriptive theories (or normative theories) answer questions about “matters of value” (what is Good). Any theory of individual autonomy will involve BOTH descriptive and prescriptive inquiry. All political regimes employ two ways of monitoring and enforcing prescriptive values: morality and legality. Both involve the imposition of duties.  While positive duties prescribe that we ought to “do something,” negative duties prescribe that we ought to “not do something.”


Since the eighteenth century, there has been widespread acknowledgment within the Western liberal philosophical tradition that autonomous individuals (rational, competent adults) are (in fact) capable of exercising “self-rule,” while non-autonomous individuals (young children, the mentally ill, etc.) are (in fact) incapable of self-rule, are therefore heteronomous, and therefore ought to be “ruled by others.” In that tradition, the foundational question of political philosophy is: “Which normative values are to be enforced by morality and which by legality (legal coercion), and are those duties ‘positive’ or ‘negative’?”

The Classical Liberal (libertarian) theory of individual autonomy is rooted in its own distinctive theory of human nature, which includes the following elements: individualism (humans are individuals and ought to be treated legally and/or morally as such), self-ownership (human individuals “own” their bodies and ought to be treated legally and/or morally as such), rationality (humans are rational and ought to be treated legally and/or morally as such), self-interestedness (humans employ rationality to advance their own interests and ought to be treated legally and/or morally as such), freedom of the will (humans are free to choose to act or not act and ought to be treated legally and/or morally as such), and responsibility (humans are morally and/or legally responsible for their freely-chosen actions and ought to be treated legally and/or morally as such). If any human lacks one or more of these elements, he or she is deemed non-autonomous, incapable of exercising self-rule. In these cases, those individuals are labeled heteronomous, and therefore subject to “rule by others.” But who draws that autonomous/heteronomous line, and on what basis is it drawn and/or ought it to be drawn?

Historically, there have been two lines of empirical research that scientists have offered in opposition to any theory of individual autonomy: cultural evolution (nurture) and biological evolution (nature). Many social scientists argue that individual autonomy (the capacity for self-rule) is undermined by the fact that humans are “social animals” programmed (determined) by culture via teaching and learning. Many biologists argue that autonomy is undermined by what Rawls called the “Natural Lottery,” that is, the fact that the natural attributes that constitute natural “advantages” and “disadvantages” are distributed unfairly. Between these two lines of causal determinism, scientists argue that at least some (if not all) humans are non-autonomous, and therefore subject to rule by others.

In recent years, with advancements in evolutionary psychology and neuroscience, we now know a lot more about how that “natural lottery” works. Communitarian critics of individual autonomy conclude that (taken as a whole) the newfound “facts” undermine both the “facts” and the “values” associated with the Classical Liberal theory of autonomy. But as Scott James persuasively demonstrated, the reduction of ethics to biology “can mean different things to different people.” (James 2). In short, the relationship between “facts” and “values” raises daunting philosophical complexities, so daunting that I’ll let Scott sort out those issues. Nevertheless, I’ll simply point out that any theory of autonomy raises mind-boggling fact-value complexities and that knowledge of “facts” does not necessarily tell us everything we need to know about “values.”

Today, it is a fact that throughout the Western world individual autonomy is assessed in terms of greater and lesser degrees. Psychiatrists (and psychologists) are legally empowered to assess variable degrees of autonomy: individualism, rationality, self-interestedness, freedom, and responsibility. Their authority is often justified based on the findings of social science and/or biological science. But to what degree is the legal empowerment of experts morally (and/or legally) consistent with libertarianism’s central value, the Non-Aggression Axiom?
 

THE LANDSCAPE OF “SELF-RULE”
 
As Andrew Sneddon has clearly shown, any theory of autonomy builds upon a well-trodden body of Western liberal thought. However, many theories fail to distinguish between various contexts of “autonomy,” including autonomy of choice, autonomy of persons, and/or autonomy of actions. There is also a menagerie of other notoriously ambiguous, overlapping terms that haunt these theories, such as: non-autonomous (not autonomous), heteronomous (ruled by others, presumably for the good of those who are non-autonomous), coercion (the use of aggression to force individuals to do or not do certain things), and perhaps incompetence (the inability of an individual to rationally decide what to do or not do). And, of course, all of these distinctions are subject to both legal and moral monitoring and enforcement.

Again, the basic question of all post-eighteenth-century social and political philosophy revolves around the concept of “sovereignty,” or “who rules?” And under what conditions might “rule by self,” “rule by others,” and/or “rule of law” be justified? Prototypically, young children and mentally ill adults are classified as non-autonomous, and therefore are labeled heteronomous (subject to paternalistic intervention by others). In the case of “rule of law,” utility-based libertarians argue that we must weigh the costs and benefits of paternalistic intervention exercised by various classes of benefactors. Whose interests ought to be advanced by any given legal intervention, and whose interests are (in fact) advanced by that intervention? In the case of “rule by others,” rights-based libertarians question which specific class of “others” (if any) ought to rule on behalf of non-autonomous persons: family, friends, medical experts, corporations, or government (the executive, legislative, or judicial branch).

Here it is important to note that individual autonomy is only part of the post-eighteenth-century philosophical landscape. Political philosophers have also raised questions concerning group autonomy; that is, autonomous and/or non-autonomous interrelationships between macro-groups (large groups) and micro-groups (small groups). Thus, political philosophers argue endlessly over which groups are (in fact) autonomous and which groups ought to be autonomous; and which groups are (in fact) heteronomous and which groups ought to be heteronomous. While this issue will not be addressed in this paper, it does reveal the complexities associated with “self-rule.”

Within libertarian theory, the relationship between individual autonomy, non-aggression, and heteronomy is, obviously, complex. All libertarians (by definition) reject the use of aggression (physical force) in pursuit of individual and collective goals. Rights-based libertarians postulate non-aggression as a self-evident, unproven axiom, while utility-based libertarians justify non-aggression based on the negative utility ratios that result from the unbridled use of aggression. Some utility-based libertarians argue that there are exceptions to the Axiom, and that we occasionally have a positive duty to employ coercive force in order to provide unwanted benefits to, or remove harms from, non-autonomous individuals (known as individual paternalism). However, most utility-based libertarians reject state paternalism (legal intervention by any branch of government) and are at least suspicious of individual paternalism (intervention by other parties such as family, friends, etc.). This essay will argue that we libertarians need to flesh out what we mean by non-autonomy and heteronomy, and identify the duties (positive or negative, moral or legal) that are entailed by the Non-Aggression Axiom.
     

AUTONOMY AND EVOLUTIONARY PSYCHOLOGY

So how might recent research in evolutionary psychology and neuroscience elucidate and/or resolve some (or all) of the classic questions of sovereignty, most notably: “Who rules, and why?” And, “Who ought to rule, and why?” There is an overlapping consensus among evolutionary psychologists and neuroscientists that, if there are (in fact) any autonomous individuals (decisions, and/or acts), they must be consistent with what we know about the structure of the human brain. Evolutionary psychologists now agree that the human brain is composed of “modules,” which have evolved over the last 3.5 million years to solve specific problems associated with reproducing and surviving in groups. Neuroscientists agree that individual brains are not only computer-like organs of “reason,” but complex adaptive systems, or “neuronal networks,” that address these problems. The primary goal of neuroscience research, therefore, is to identify those specific modules, “locate” the brain structures that underlie them, and ultimately generate explanations and predictions, and perhaps facilitate internal and/or external control over brains. Most scientists agree with Bickle that the ultimate explanation, and most likely the ultimate target for generating useful predictions and facilitating useful control of these networks, lies at the microcosmic level. In short: brain science ultimately aims at reduction from living entities (neurons and genes) to non-living entities (molecules and atoms).

Scientists have long observed that individual modular brains also have the natural propensity to collectively “network” with other modular brains, but they argue over how networked brains contribute to human reproductive success (the transmission of genes between generations) and/or the survival of individual humans, groups of humans, and/or the human species. So how might this emerging body of scientific knowledge about “networked” brains enlighten “autonomy theory?”

Today, a growing number of communitarian scholars, like Michael Sandel, Peter Corning, and Frans de Waal, argue that (what I call) this emerging “network ontology” provides empirical disproof of one or more of the traditional components of classical liberalism (individualism, rationality, self-ownership, self-interestedness, freedom, and/or responsibility). They argue that human beings are (by nature) sympathetic, cooperative, heteronomous, social animals, and therefore “are” and/or “ought to be” ruled by external others or forces. But is there really a compelling argument in support of the claim that networked modular brains ought to be subject to “rule by others” (or “rule of law”) based on evolutionary biology and/or neuroscience?

My argument is that although this emerging “neuronal network ontology” appears to undermine the various components of the classical liberal theory of human nature, that ontology does not (and cannot) undermine the Non-Aggression Axiom. Therefore, libertarians may choose to develop a “thick theory of autonomy” that might someday “prove” that individual brains are (in fact) autonomous. But that line of research assumes that non-autonomy is purely a matter of fact. My argument is that, regardless of the future pronouncements of evolutionary biology, we must continue to embrace the Non-Aggression Axiom as a normative value. However, we must also more rigorously flesh out exactly which duties (positive or negative) are imposed by the Axiom.


THROUGH THICK AND THIN

For better or worse, many recent Western liberal social and political theorists embrace John Rawls’ original distinction between a “thin theory of the good” and a “thick theory.” (Rawls) In that original context, Rawls argued that Western liberalism (welfare liberalism and libertarianism) must defend “individual rights” (or justice) over “communal conceptions of the good.” (Forst p. 30) Thus, liberalism (both left and right) implies both a “thin theory of the good” and the “ethical neutrality of the law.” The relative “thickness” or “thinness” refers to the degree to which any liberal theory (liberal left or libertarian right) can justify laws that advance specific social commitments and/or values beyond what’s required by the Non-Aggression Axiom. Rawls went on to develop a theory of “primary goods,” which he argued justified a short list of “positive rights” that all civilized societies would voluntarily provide the “least advantaged,” most notably “equal liberty” and “the difference principle.” Thus, under Rawlsian theory, “beneficence” (the provision of benefits and the removal of harms) is regarded as a positive duty.

The Non-Aggression Axiom states that “No one has the right to initiate aggression against the person or the property of anyone else.” (Boaz p. 74) Again, thoughtful libertarians observe that the Axiom applies not only to relationships between individuals, but also to relationships between individuals and groups, relationships within and between micro-groups, and relationships between macro-groups and micro-groups. For libertarians, the Axiom implies at least a negative duty that forbids us (individually or collectively) to use (or threaten to use) physical aggression against anyone who has not initiated it. This negative duty serves as the basis for the corresponding negative right of individuals and groups not to be subjected to aggression or threats of aggression.

The Non-Aggression Axiom is also subject to “thinner” and “thicker” analysis. In fact, libertarians disagree over which “acts” are specifically forbidden by the Axiom and therefore might justify aggressive legal or moral intervention. Some actions violate negative duties and are obviously morally (and legally) wrong: murder, rape, assault, robbery, kidnapping, and fraud. (Boaz 75) In these cases, it is easier to justify the use of defensive aggression. Other “acts” that might also justify aggressive intervention are open to debate, such as: defense of the non-autonomous (legal paternalism), defense of friends or allies, or defense of the weak (the poor, the sick). Another core dispute (which I won’t discuss) is whether taxation violates the Axiom.

Defenders of a “thin theory” argue that libertarianism is a moral and/or political theory based on non-aggression, and nothing more. Thus, (what I call) Thin-Theory Libertarianism tends to resist the expansion of legality (government) to include specific, socially embedded concepts of the good, and thin-theory libertarians oppose Rawls’ attempt to justify any “positive rights.” In contrast, proponents of a “thick(er)” theory might argue that it is also possible (and/or necessary) for libertarianism to employ aggression in support of other legal and/or moral values, especially beneficence (the duty to provide benefits and remove harms on behalf of others). Left-leaning, minarchist libertarians (like F.A. Hayek) therefore support laws that provide for a (very basic) social safety net, a minimum wage, or even engagement in a “just war.”

We can also employ the thick-thin distinction in the context of a Theory of Autonomy. A “Thin Theory of Autonomy” will defend any individual’s “negative right” to pursue his/her concept of the good, as long as that pursuit does not violate the Non-Aggression Axiom. In contrast, a “thicker theory” of autonomy might seek to identify the necessary and sufficient conditions for autonomous human persons (or groups), decisions, and/or actions. Thus, a “thicker” theory of individual autonomy might distinguish between autonomous, non-autonomous, and heteronomous individuals and groups. However, thicker theories might (but need not necessarily) also legally empower experts (mostly psychologists and psychiatrists) to objectively distinguish between varying degrees of autonomy and thereby dole out variable degrees of moral and/or legal responsibility. The problem here is that the legal empowerment of experts to discern varying degrees of autonomy can violate the Non-Aggression Axiom.

Again, what I want to emphasize here is that libertarianism does not theoretically require a “thick theory of libertarianism,” a “thick theory of non-aggression,” or a “thick theory of autonomy.” We do not need to “know” the necessary and sufficient conditions for autonomous personhood, autonomous decisions, or autonomous actions. Why? Because “non-aggression” is a prescriptive concept (a value), and therefore cannot be “falsified” by the descriptive “facts” of evolutionary psychology or neuroscience. Moreover, as Hayek (Hayek) and Feyerabend (Feyerabend) have argued, we libertarians must be much more wary of legally empowering experts to violate the Non-Aggression Axiom, even if violating it “benefits” certain individuals and/or groups (ourselves, our friends, our allies, or the weak).

At a bare minimum, the Non-Aggression Axiom holds that humans are a priori morally obligated to treat humans “as if” they are autonomous, morally responsible agents, even if Science discovers that some (or even all) persons lack some (or all) of the neuronal capacities associated with effective self-rule. So, even if Science “proves” that all humans are in fact non-autonomous, that would not necessarily imply that all persons and/or decisions are heteronomous, and therefore “ought” to be ruled by “others” or ruled by “laws.” That’s because the prescriptive moral questions would remain: “Who ought to rule, and why?” In short, if there are (in fact) non-autonomous persons or groups (decisions and/or actions), those persons or groups (decisions and/or actions) would not necessarily be heteronomous. Treating some individuals (and/or groups) as heteronomous and subjecting them to beneficent coercive intervention by others violates the Non-Aggression Axiom.

On the other hand, let’s also admit that the Non-Aggression Axiom is inconveniently vague and that libertarians really need to spell out what it morally and legally entails. A “thin theory” might justify a moral right to employ aggression ONLY in defense of ourselves (individually or collectively). That would suggest a non-interventionist morality. A thicker theory might justify interventionism on behalf of (at least some) others suffering from aggressive acts inflicted by third parties: most notably, defense of the non-autonomous (children and the mentally ill) and/or defense of friends and/or the weak. The more we libertarians thicken the Non-Aggression Axiom and the Theory of Autonomy, the closer we move toward an interventionist, welfare-liberal social and political philosophy.
        

CONCLUSION
 
In conclusion, I have argued that the Non-Aggression Axiom serves as the necessary condition for a Thin Theory of Libertarianism. The development of a “Thicker Theory” that questions the human capacity to “know” and “do” what is in our individual and collective interests (based on the findings of biological evolution, genetics, and/or neuroscience) is perfectly acceptable, if not desirable. However, it is imperative that we acknowledge that the Non-Aggression Axiom is a normative (moral and legal) concept, and therefore it is not (by definition) subject to empirical verification or falsification. Therefore, libertarians like John Bickle can continue to scientifically study the neurological basis of individual (and collective) autonomy. But those findings can neither confirm nor disconfirm the Non-Aggression Axiom, nor can they confirm or disconfirm any one social and political theory. Although scientists can certainly offer us useful information, products, and/or services, the Axiom prevents those experts from forcing us to act on that information, or to buy the products and/or services that they offer, for “our own good.” The prescription “X is for our own good” implies that, ultimately, experts know more about what is good for us than we do. Moreover, these prescriptions invariably force us to conform to what others believe is good for us. Even if scientists could (in fact) differentiate between autonomous and non-autonomous brains, it would not necessarily imply that non-autonomous persons are heteronomous, or that any one person or group is more qualified to decide what we want. In sum, if some (or all) human brains are (in fact) wired to be “ruled by others,” questions of value would remain unanswered, most notably: “Who ought to rule those brains from the outside, and why?” These are ultimately moral and philosophical issues; therefore, evolutionary psychology and neuroscience will not (and cannot) shed much light on them.
REFERENCES
Bickle, John. Philosophy and Neuroscience: A Ruthlessly Reductive Account (Kluwer Academic Publishers: 2003)

Boaz, David. Libertarianism: A Primer (Free Press: 1997)

Christman, John, ed. The Inner Citadel: Essays on Individual Autonomy (Oxford University Press: 1989)

Fairfield, Paul. Moral Selfhood in the Liberal Tradition (University of Toronto Press: 2000)

Forst, Rainer. Contexts of Justice: Political Philosophy Beyond Liberalism and Communitarianism (University of California Press: 2002)

James, Scott M. An Introduction to Evolutionary Ethics (Wiley-Blackwell: 2011)

Mulhall, Stephen and Adam Swift. Liberals and Communitarians, second edition (Blackwell Publishers: 1996)

Oshana, Marina. Personal Autonomy in Society (Ashgate: 2006)

Rawls, John. Political Liberalism (Columbia University Press: 1993)

Sneddon, Andrew. Autonomy (Bloomsbury: 2013)

Waller, Bruce N. The Natural Selection of Autonomy (SUNY Press: 1998)

Thursday, January 2, 2014

The Fear Market

Libertarians tend to look at the world through the lens of reciprocity, or the act of "buying and selling;" hence, our advocacy of "free markets." Even human emotions are subject to market analysis, especially "FEAR." Let's take a look at that "Fear Market."

First of all, fear is an emotion manufactured by the inner-most regions of the human brain. Hence, if the human brain is evolutionarily organized like an archeological site, then other species can also "feel fear" and exhibit "fear behavior," especially mammals. There is a certain universality associated with mammalian fear, as evidenced by the familiar "fight or flight" mechanism. Among humans, fear is universally recognized as a profound source of motivation...perhaps even the most prolific source of motivation. It invariably motivates us to either "do something" or "not do something." Many human fears have been shaped by millions of years of mammalian evolution and play a major role in the preservation of life. Hence, we share many fears with other species, especially bonobos and chimpanzees...most notably a nearly universal fear of snakes. For most of us, that fear of snakes can be overcome by knowledge of which kinds of snakes bite, which are venomous or capable of constricting us to death, AND where those snakes are most likely to live. We can also learn how to safely handle even venomous snakes. So despite our natural fear of snakes, many humans purchase books and videos about snakes and even keep them as pets. Similarly, although we are all programmed with a natural "fear of death," we pay good money to watch horror films, especially those involving zombies and monsters. We also spend way too much money on health care and funerals. In short, imaginary fear plays a central role in the global market.

Think about it...how much of the global market is based on fear mongering? Here are a few examples: the health care industry (fear of disease and/or disability), the banking industry (fear of theft), the surveillance technology industry (fear of death and/or theft), the weapons industry (fear of death and/or theft), the insurance industry (fear of death, theft, and property damage), etc. Let's also add numerous public institutions, such as the military (fear of invasion), police departments (fear of crooks), and fire departments (fear of fire). All of these industries and institutions, therefore, have an interest in selling us fear. Thus they all employ marketing strategies to convince us that we ought to "fear X" and therefore purchase products and services that alleviate those fears. Good advertising manipulates fear among buyers to the benefit of sellers. Thus, the fear industry tends to focus on major harms: the greater the magnitude of a harm, the greater the fear, and the greater the motivation. However, today most of the major harms that we face are really low-probability, long-run harms. Therefore, the most effective marketing strategies tend to transform minor harms into major harms, improbable harms into highly probable harms, and/or long-term harms into short-term harms, as the toy model below illustrates.
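Here's that marketing transformation sketched as a toy model (in Python). All of the probabilities and dollar magnitudes are invented for illustration; the only real content is the standard formula that expected harm equals the probability of a harm times its magnitude.

    # A toy model of fear marketing. ASSUMPTIONS: all probabilities and
    # dollar magnitudes below are invented for illustration.

    def expected_harm(probability, magnitude):
        """Expected harm = chance the bad thing happens * how bad it would be."""
        return probability * magnitude

    MAGNITUDE = 1_000_000  # a catastrophic (hypothetical) $1M harm

    # The harm as a sober buyer should assess it: rare and long-run...
    actual = expected_harm(probability=1e-6, magnitude=MAGNITUDE)    # $1

    # ...and as the fear-monger markets it: "highly probable" and imminent.
    marketed = expected_harm(probability=1e-2, magnitude=MAGNITUDE)  # $10,000

    print(f"soberly assessed expected harm: ${actual:,.2f}")
    print(f"marketed expected harm:         ${marketed:,.2f}")
    print(f"the fear-monger's margin:       {marketed / actual:,.0f}x")

The gap between those two numbers is the fear-monger's margin: the room a seller has to price insurance policies, alarm systems, and body scanners well above what the risk is actually worth.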

Media technology has drastically expanded the fear market. Watch the local news in Cincinnati and most of it involves the reporting of crimes, fires, and a host of other local, national, and global disasters. When the local meteorologist predicts "bad weather," regular programming is usually interrupted...especially if tornados are "possible." More importantly, whenever an "act of terrorism" is committed anywhere in the world, it dominates the global news media. One car bomb explodes in Afghanistan and we all begin to fear car bombs in Cincinnati. Therefore, the police feel justified in searching everyone's cars for explosives, and all packages found by the side of the road are opened by "explosives experts" who are paid to take courses in detecting and disarming explosive devices. One school shooting at one out of thousands of U.S. schools, and we're all afraid there will be one at our school. Hence, the growing market for metal detectors, body scanners, ALICE training at schools, home schooling, concealed carry classes, and increased scrutiny of the mentally ill.

One of the consequences of the mass marketing of fear has been our growing propensity toward "risk intolerance." Over my 62 years I have seen the gradual rise of risk intolerance. When I was a child, no one wore bike helmets or seatbelts, and no one locked their doors at night. We all played "Lawn Jarts" in the back yard, broke thermometers and played with the mercury, played "rough-tackle" football without equipment and baseball without batters' helmets, played with BB guns, knives, and fireworks, and roamed the neighborhood (and beyond) without adult supervision. We did, however, have fire and civil defense drills at school...we were taught to hide under our desks in case a nuclear bomb was dropped on Syracuse.

Libertarianism requires a minimal degree of courage and a willingness to take at least some risks despite global fear mongering. Therefore, in a world dominated by a global media, we must resist the mass marketing of fear. Let's start by not "buying" imaginary fears concocted by sellers. Let's try to control our natural fight-or-flight mechanism and limit our fear responses to real, major, highly probable, short-term harms. For dinner tonight...how about a nice greasy cheeseburger topped with some of that nice cheddar cheese that Marion gave me?