Monday, December 26, 2011

Responsibility

The question of "responsibility" plays a central role in retribution and is central to our feelings and thought about justice. There are two different forms:  legal responsibility and moral responsibility.

Legal responsibility is not necessarily identical to moral responsibility. Not everything that is "legal" is "moral," and not everything that is moral is legal. The basic difference between legality and morality lies in the distinction between "laws" and "rules" and how each is monitored and enforced. Laws are monitored and enforced via the coercive power of the state. If you break a law and get caught, you can get punished by a government. In the Western world, governments "sanction" lawbreakers via fines, incarceration, and sometimes via death (United States). Some non-Western countries whip or beat lawbreakers. If you break a moral rule, members of the community will blame you; if you follow it, they may praise you. Most governments limit what groups can do to enforce morality. In some countries, like Saudi Arabia, almost everything that is immoral is also illegal.

Moral responsibility involves the basic question of what kinds of persons are “fair” targets for moral praise and moral blame. Simply put, we praise or reward persons that do good things, and we blame persons that do bad things. But what is it about the nature of persons that justifies holding them responsible for their behavior? And, why do we, in fact, hold each other morally responsible for our actions? Well, at least in the Western moral tradition we assess responsibility based on two main criteria: rationality and free will.

We praise and blame persons that are capable of understanding and applying moral rules and reasoning about consequences before they act. The assessment of degrees of rationality usually involves assessing mental processes such as logical reasoning, forethought, learning from experience, processing information, etc. Thus, mind or mentality is a necessary condition for the assessment of moral responsibility, but not a sufficient condition. Not all persons that possess mentality are morally responsible. We do not hold young children responsible for their behavior, but as they get older we tend to hold them more responsible. Nor do we hold persons that suffer from a cognitive disorder or defect responsible for their actions. And, obviously, we do not always hold other persons responsible for acts born out of ignorance of the rules or the consequences. We generally do not hold animals morally responsible for their good or bad behavior, although we may praise or blame them in order to encourage or discourage future behaviors.

We also praise and blame responsible persons for acts of free will; that is, acts that they are capable of controlling. Basically, this means that we do not praise or blame persons for acts that are coerced by other persons or forced by internal or external circumstances. Personal coercion generally involves the use of threats and enticements enforced by others. Both come in various degrees. Major threat: "Rob that bank or I'll kill your family!" Minor threat: "Rob that bank or I'll take your shoes!" Major enticement: "Rob that bank and I'll give you 10 million dollars!" Minor enticement: "Rob that bank and I'll give you one dollar." Generally speaking, we hold moral agents responsible for bad acts performed in exchange for enticements, and we do not usually praise people that do good things in exchange for major enticements. In other words, responsible persons ought to be able to resist at least some low-level threats and/or enticements. Philosophers argue over whether, and to what degree, threats and enticements undermine free will, and whether the concept of free will even makes sense.

As a general rule, we do not praise or blame others for good or bad consequences that are brought on by chance, or moral luck. If I accidentally run into a fleeing bank robber, I probably will not be praised as a hero, unless I could convince the media that I knew he was a fleeing bank robber and that I deliberately tackled him. If I accidentally killed that robber, I might even be held legally responsible and blamed for his death. More on that later.

Not only do we hold persons morally responsible for their actions, we also hold groups of individuals legally and morally responsible for theirs. But collective responsibility is much more difficult to assess. Here's why. First of all, our individual association with groups is not always framed by rationality or free will. Sometimes we are coerced into associating with others, and sometimes we associate ourselves with groups without really knowing everything that they do. Sometimes we associate ourselves with groups based on tradition alone. Most of us remain associated with the same religious group that we grew up with.

Voluntary associations are those groups that we rationally and freely choose to associate with. These associations are often organized hierarchies that involve leaders and followers. Generally speaking, we hold both leaders and followers morally responsible for their actions. But the responsibility of followers is contingent upon what they knew beforehand and the presence of coercive influences. When we really know exactly what an organization does and when we freely choose to follow its leaders, we are held individually responsible for what that organization does. Hence, responsibility diminishes as information and freedom diminish. Unfortunately, in the real world followers do not always possess perfect knowledge or perfect freedom.

Moreover, hierarchies often delegate responsibility, which means that leaders at the top of an organization may not always know what lower-level leaders are doing, and sometimes upper-level leaders employ coercive force on those below them. For example, many of the Nazi doctors claimed that they tortured their patients because they would have been killed if they had disobeyed orders.

Corporate responsibility is especially convoluted. Who is ultimately responsible for good and bad corporate behavior? Should we hold the CEO or the board of directors of a multi-national corporation responsible for everything that takes place within that corporation? Should the leaders get paid for what followers produce? Should leaders be held responsible for the immoral and/or illegal behavior of followers? In short, this notion of collective (or shared) responsibility turns out to be very complex.

Another source of complexity has to do with the dynamics of how human beings behave in groups. Historically, philosophers have identified two sources of determinism that limit moral responsibility: biology and social structure. Biological determinists argue that at least some human behavior is "natural," or caused by our brains and genes, and therefore lies outside of the realm of rationality and free will, which implies that praise and/or blame cannot alter those behaviors.

So, how does social structure affect human rationality and free will, and to what degree does "social causation" diminish individual and/or collective responsibility? This question raises a host of other questions concerning the nature and extent of circumstantial coercion, the malleability of human nature, and the "nature v. nurture controversy."  To what degree are human beings conditioned by their social environment and their genetic makeup? There are two wrong answers:

1. Human behavior is infinitely malleable via manipulation of the social environment (social determinism). Therefore, individual responsibility is impossible.

2. Human behavior is not malleable at all, but determined by our genes and biology (genetic determinism). Therefore, the assessment of individual responsibility is impossible.

If the truth lies somewhere between these extremes, then how do we (in fact) go about assessing personal and collective moral responsibility in our everyday lives? How should we?

Now the relationship between legality and morality is itself subject to a long line of philosophical inquiry. Historically, many philosophers have argued that morality is timelessly universal and “objective” and that legality is relative to time and place. Other philosophers have argued that universal morality always trumps legality. Some say that both morality and legality are temporally and culturally relative. Therefore, “When in Rome do as the Romans do.”

Sunday, December 18, 2011

Metaphysical Academic Freedom

Okay...so I copped out on my last blog entry on academic freedom. If you all insist, I'll take a crack at the ever-elusive concept of "metaphysical academic freedom." To me, metaphysical academic freedom refers either to the descriptive (factual) capacity of humans to "freely" produce a theory or to the prescriptive (normative) conditions that might be involved. Descriptive academic freedom would entail addressing the larger problems of biological and cultural determinism. Prescriptive academic freedom might involve the rights and duties that would undergird that "freedom of inquiry." Answers to academic questions are called theories (or conjectures), which either explain, predict, or control phenomena. So it's fair to ask two questions: "To what extent do I have a right to propose theories?" And, "To what extent do I have a right to propose theories to an academic community?"

First of all, what do we mean by a right to propose a theory? There are two kinds of rights: positive rights and negative rights. If I claim a positive right to propose a theory, then someone else has a duty to assist me. If I have a negative right, then my right implies the duty of non-interference by others; it does not necessarily mean that others have a duty to assist me or enable me to publish my theory. That is: "Ron, you have a right to publish that book, and I won't interfere with that! But I don't have a duty to read it, and I don't have a duty to publish it in my journal." How about a case study?

Ron W. is going up for promotion next year and believes that he needs one more publication. So he surfs the Internet and finds a journal titled the Journal of Arcane and Useless Philosophy, which has an acceptance rate of 98.2% and a circulation of 73 subscribers. It is published by the Society for Arcane and Useless Philosophy, which has a membership of 73. Ron thinks to himself: "Ah...the perfect home for my most recent scholarly essay: 'Does Academic Freedom Imply Positive or Negative Rights?'" He sends the essay to the editor, who then sends it to two "referees," who read it six months later and submit their respective reviews. Reader A loves it, and offers three minor revisions. Reader B hates it and recommends that it be rejected. The editor, however, notes that only three essays were submitted to the journal in the last three months (all were accepted) and that Volume 11, Number 16 needs one more article. So he decides to publish Ron's essay. As a result, Ron was promoted to full professor. Since then, three scholars have read that article (one of them finished it!), the Society for Arcane and Useless Philosophy has disbanded, and its journal has been discontinued. However, that article still appears prominently on Ron's curriculum vitae. Question: Does "academic freedom" include a scholar's right to publish research in journals that no one reads? Do academic institutions have a right or a duty to read what their faculty publish? And, if so, who should do that reading, and how much should that reader get paid? So much for metaphysical academic freedom.

Saturday, December 17, 2011

Academic Freedom

A friend asked me to articulate my views on "Academic Freedom" from a libertarian perspective. As a philosopher, my usual strategy is to take a close look at the key concepts. In this case, let's look at the meaning(s) of "academic" and "freedom."

The term "academic" describes a human activity, a profession, and an industry. It also implies a unique set of instutional  associations: colleges, universities, departments, publishing companies, professional organizations etc. However, the activity is nothing more than the process of human inquiry, or the distinctly human capacity to ask questions and propose answers between individuals, groups, and generations. I would also argue that academic inquiry (or inquiry conducted within an academic institutional structure) involves the epistemic pursuit of either Descriptive Truth (is) or Prescriptive Value (ought).

The concept of "freedom" within the context of academic inquiry turns out to be enormously complex and therefore wide-open to metaphysical interpretation. For us libertarians, freedom is a political concept that refers to the relationship between individuals and governments. Some of us take on the challenge of addressing "metaphysical freedom," or "freedom of the will," but most of us prefer to focus on freedom as the absence of coercion by government. Minarchist libertarians seek to limit the use of coercive power of governments to tax citizens and limit the use of tax money to the performance of specific functions, such as a: police force, judiciary, a military, and perhaps the provision of a very basic social safety net. Anarcho-capitalist libertarians seek to eliminate all involuntary forms of taxation and all coercive government.

So in light of the above, what can I say about "academic freedom?" As a minarchist, I would say that we need to distinguish between public and private colleges and universities. Most of us argue that publicly funded educational institutions violate the basic tenets of minarchy, and that all educational institutions ought to be private institutions. So I really can't say much about what academic freedom might mean in the context of a public college or university. But I can say something about what it might mean within a private institution.

Education is an industry. It involves the interaction of both buyers and sellers, and employers and employees.  Academic freedom in private institutions is nothing more than what's mutually agreed upon within a contract. When you agree to accept employment in a private college the limits of your academic freedom are contained within that contract. If you do not accept those limits, then you have the freedom to decline the job offer. Of course, if you willingly sign that contract then your academic freedom does not include the freedom to violate the conditions of that contract. Unfortunately, as an employer the institution does have the "academic freedom" to unilaterally alter that contract without your consent, but you also have the academic freedom to either abide by that contract or resign (freedom to exit). Or, perhaps you might find solace in the "black market" and conceal or disguise your activities.   

Over the years, academic institutions have adopted a variety of traditions intended to increase job security for faculty and expand the concept of academic freedom. The original idea behind tenure was to protect faculty from revolving administrations that constantly seek to revise the terms of employment of faculty, especially salary, working conditions, and academic freedom. Now we libertarians can argue over whether tenure ought to be included within any academic setting, especially over the cost/benefit ratios. But we are reluctant to argue about academic freedom outside of that contractual framework.

So what can I say about my own academic freedom within a Roman Catholic college? Well, obviously I cannot expect the college to grant me the freedom to publish articles with titles like "The Virtues of Abortion" or "Why God Does Not Exist." If I did, it would violate the terms of my contract and I would get fired, or at least reprimanded by my superiors. But since I'm not interested in "inquiring" into the virtues of abortion or defending atheism, I don't regard this as a limit on my metaphysical academic freedom. However, I do disagree with the church's legal strategy for dealing with abortion, and I am more of a pantheist than a theist. Now for the "million dollar question": How did a libertarian philosopher that publishes regularly in journals like the Independent Review survive for 25 years at a Roman Catholic college? The answer is simple. Roman Catholic colleges are, in fact, among the last bastions of metaphysical academic freedom in the United States. While it is true that some are more socially conservative than others, for the most part we enjoy much more freedom than faculty at other public or private institutions. I do have a few limits, but overall, my college values diverse points of view. In fact, we have not only a handful of outspoken libertarians like myself on the faculty and staff, but also a large number of non-Catholics and atheists. Let me add that all of the libertarians that I know on the MSJ faculty also have tenure. And at least one libertarian on the maintenance staff stops by my office every morning to talk politics. But admittedly, we're grossly outnumbered by welfare liberals.

Wednesday, November 23, 2011

The Super-Failure: Why the Super-Committee Failed

I don't usually blog on national political news, but this one is hard to resist! As we all know, the so-called "Super-Committee" failed to reach a consensus on budget cuts and revenue. Although the blogosphere is teeming with complex post-mortem explanations as to why it failed, I have a much simpler one. They were asked to "cut the budget" and/or "enhance revenue." What they really needed to do was reform wasteful programs and departments within that budget and reform the tax code.

Congress has a moral and political responsibility to make sure that tax dollars are spent as efficiently as possible. However, nearly every government program and department is riddled with well-documented wasteful spending. By far the worst budget-busters are the departments of Defense and Homeland Security. The "visible waste" in these bloated, unsupervised bureaucracies is legendary, while the "invisible waste" is hidden away in budgetary "black holes," where little is known and budgets are fictional. There's no telling how many investigations have been conducted by both governmental and non-governmental watchdog groups: costly research that invariably disappears into that other well-known "black hole," the congressional archives. Ironically, some of the best unread research is conducted by the Congressional Budget Office. In short, everyone agrees that these programs and departments need reform, and there's plenty of information out there prescribing what needs to be done. The same can be said for Medicare, Medicaid, and Social Security. But simply cutting the budgets of inefficient bureaucracies will only make those programs less efficient and more wasteful.

What about revenue enhancement? Again, everyone in Congress knows that our tax system has become a bastion of unfairness and incomprehensibility. It's so arcane and complex that most Americans employ third parties to file their taxes. It's even incomprehensible to the IRS! The astronomical error rate of the chronically understaffed, under-brained IRS has been documented many times over, in 6,000-page reports. Unfortunately, those reports are ultimately filed away in the congressional archives, or well hidden from Internet search engines, never to be seen again. Moreover, everyone knows that the tax code needs to be reformed from top to bottom, but that's not even on the political horizon. Herman Cain's feeble-minded 9-9-9 tax proposal clearly illustrates this tendency to ignore well-known facts.

So, if Congress already has access to the information necessary to reform and/or eliminate wasteful spending, why don't they do it? That's a good question. Here's my theory. Individual congressmen rarely if ever read anything substantive. Why not? First of all, there's so much information, misinformation, and disinformation manufactured by government agencies that it's impossible for any one human to read even a small portion of it. That's why individual lawmakers hire an army of congressional staffers to read it and summarize it. Now, there's no guarantee that staff members are themselves competent and/or actually read and comprehend those 6,000-page reports. Nevertheless, staffers pass those summaries on to illiterate congressmen, who may (or may not) read them.

So why is it that politicians don't read and act on this vast body of well-documented information? Well, it's because they are politicians, and therefore spend most of their time, energy, and resources running for re-election. Watch C-SPAN and marvel at the quasi-articulate speeches delivered before empty rooms! But then again, when they do show up for work, they rarely vote on anything. Check out what Congress has actually accomplished this year! Your jaw will drop!

So why did the super-committee fail? Well, for starters, they were pursuing the wrong goal. Why pay a group of lawyers to argue endlessly over whether to cut budget items and/or increase revenue under the guise of half-baked political and economic theories? They should have been actually fixing and/or cutting inefficient programs and departments and reforming the tax code. My modest solution? I think we need a whole new batch of politicians with a strong work ethic; statesmen that read more, attend more, and simply act more. I would also argue that we have WAY too many lawyers in Congress that stay in office WAY too long. Let's insist that the forthcoming 2012 Congress actually read, attend, and act upon what's already known. In sum, we Americans have been WAY too tolerant of incompetent lawmakers. Next November, I propose that we vote out every single incumbent in the House and Senate, and whenever possible vote for third-party candidates. In short, let's fix the system ourselves.

Wednesday, November 9, 2011

Why Read Machiavelli's THE PRINCE?

Over the years, non-philosophers have done a pretty effective job of disparaging the philosophical writings of Niccolò Machiavelli. Leadership scholars simply dismiss his works under the rubric of "bad leadership." Among psychologists, the term "Machiavellian intelligence" refers to the human (and chimpanzee) propensity for deception. In fact, the adjective itself, "Machiavellian," has become synonymous with deception and immorality. Philosophers, political scientists, and scholars that have actually read (and understood) his seminal works (The Prince, Discourses on Livy, and The Art of War) have a much more nuanced assessment.

As you read The Prince it's important to understand Machiavelli's philosophical moorings. First and foremost, Machiavelli is a descriptive empiricist. Therefore, in contrast to the a priori, deductive, rational epistemology employed by Plato in The Republic, Machiavelli believes that human knowledge can only be ascertained via inductive observation of the world, or, as Plato would say, "in the cave." In other words, if you want to know how to organize a group of human beings, Machiavelli suggests that the best way to begin is to uncover the descriptive facts: How are successful organizations organized?

Second, Machiavelli is a prescriptive utilitarian; that is, he (like Plato) believes that a good organization produces more happiness than unhappiness. Like Plato, Machiavelli believed that political leadership is especially important, that "bad leaders" destroy themselves and their organizations, and that "good leaders" preserve themselves and their organizations. Based on historical evidence, Machiavelli described how, throughout the course of human history, leaders and followers tend to behave within certain kinds of organizational contexts. At the beginning of The Prince he identifies two different organizational contexts: republics (shared governance) and principalities (monarchy). The Prince explores the "nature" of principalities. So as you read The Prince keep in mind that he is not "prescribing" monarchy as the best way to organize human beings; he's only "describing" the kind of behavior that you'll observe in principalities. Of course, some principalities survive for a long time, while others suffer extinction.

Now it's important to emphasize that Plato and Machiavelli are both utilitarians of sorts, and therefore agree that the primary value of collective life is that it brings about more happiness than unhappiness. Philosophers call this flourishing. Therefore, Plato and Machiavelli are both trying to identify the underlying political arrangements that increase happiness and decrease unhappiness. Recall that in The Republic Plato attempted to prove that tyrants are unhappy and that the organizations they head are also unhappy. Well, what Machiavelli is going to argue in The Prince is exactly what Thrasymachus would have argued in The Republic if he had chosen to remain engaged in the dialectic with Socrates. In other words, Machiavelli is going to do philosophy within "the cave," in a world where humans actually possess the equivalent of "Gyges Ring."

One of Machiavelli's central themes in The Prince is the idea that sometimes leaders have to make decisions where none of the anticipated outcomes are ideal. In short, sometimes leaders must get their hands dirty; that is, "choose between lesser evils." Philosophers call this the "ethics of dirty hands." Political leadership often takes place within this context; however, you might argue that other leadership contexts also entail getting your hands dirty. Machiavelli argued that virtue-based ethics in the Greek and Judeo-Christian traditions cannot provide moral guidance in these contexts and that "virtuous leaders" (in the Platonic sense) are deposed by followers and/or conquered by other nations. The question for you is as follows: "Are there times when leaders MUST 'enter into evil' in order to avoid catastrophic consequences? If so, does this apply to other leadership contexts?"

Sunday, November 6, 2011

African Development

A Facebook friend of mine from Ghana asked me what I thought about "African Development." That's a tough question...given the fact that I don't know very much about Africa or the various nation states that currently dominate political discussions. But like most philosophers, I won't let my ignorance of the facts stand in the way of participating in that discussion. That's because there are philosophical issues that I can help sort out. The following is my edited and expanded response. 

First, what do you mean by "African development?" It could mean many things. Philosophically, the concept of development signifies progress toward a desirable goal via some means. So to talk "development" we need to identify both a GOAL and the MEANS of achieving that goal. Also, we need to clarify what you mean by "Africa." Are you referring to the entire continent of Africa as a whole or to the nation-state of Ghana? There are also "long-term goals" and "short-term goals" and various means of achieving both. Some "goals" are realistic (the means are known and can be implemented) and others are idealistic (imaginable but impossible to realize). Of course, we don't always KNOW with certainty which goals are "realistic" and which are "idealistic." Right now, it's hard to set long-term or short-term development goals for Africa as a whole because there's very little social, cultural, or political unity. Even national identity in many African countries is currently in flux. Do you envision African unity, where all the various tribes, religions, and nation states "cooperate" toward a specific END? If so, what MEANS do you plan to implement in bringing about that end?

Having said all that, let's return to the original question: "What do you mean by African development?" I assume that you are referring to "economic development." Does your vision for African economic development refer to the ability of Africa to operate (survive or thrive?) independent of the rest of the world (autonomy and self-sufficiency); or do you want Africa to develop the capacity to participate in the global economy? Both goals have implications for one's vision of the future of African identity. If economic autonomy and self-sufficiency are your long-term goals, then what social, cultural, and political traditions do you embrace, and how do you instill those values? How do you reject the old values? In short, how does economic development relate to sociopolitical development?

Now if your ultimate goal is for Africa to participate in a global economy, you'll have to promote social, cultural, and political traditions that are not hostile to participation in global trade. First of all, Africa would have to embrace the rule of law; that is, Africans will have to obey rules (that apply to everyone), not just what leaders say. Africans would have to reject violence, theft, fraud, and breach of contract. If Africa is unable or unwilling to monitor and enforce these basic moral rules, then other nations will not willingly trade with Africa. If you want Africa to be "autonomous and self-sustaining," those same moral and legal rules would have to be enforced. In fact, I would argue that any nation on earth (at any time or any place) that wants to "develop" will have to follow these rules.

I wish I could say that the United States provides a useful role model, but our society has fallen far short of the "ideal." In fact, much of the economic recession can be attributed to our own failure to uphold these values. So...back again to your original question. What is your vision for Africa 20 years from now? Do you seek a unified Africa (one single sociopolitical entity), or many different competing and cooperating states (like the US and the European Union)? Do you want the everyday lives of future Africans to revolve around work, religion, and family? Do you want Africans to own a lot of stuff (homes, automobiles, airports, trains, Chinese food, McDonald's food, Western clothes, Western music, Western art, etc.)? Or, would you rather revive lost traditions, such as tribal association or subsistence agriculture? What do you think? What is your long-term vision for Africa, and how might that vision be realized? What must you do in the short term to realize this goal?

Friday, October 28, 2011

Plato's REPUBLIC: In Defense of the "A Priori Method"

Recall that Peirce identified four methods for the fixation of belief: tenacity, authority, a priori, and science. Plato is the undisputed king of the a priori method, so it's worth looking into the epistemological assumptions that underlie Plato's ideal state. As you dive into Book 1 of the Republic, it becomes immediately evident that Socrates is playing an active role in guiding the conversation toward a conclusion. Plato called this process of "questioning and answering" the dialectic, which is known today as the basis of all human inquiry. In the early going, the main question to be addressed is clearly identified: "What is justice?" Of course, the underlying assumption is that it is possible to "know" what justice "really is," which raises the questions of whether this knowledge has some kind of foundation and whether it is accessible to everyone. Throughout history, philosophers have defended one of two alternative "foundations" for human knowledge: the empirical foundation and the rational foundation. Empiricist philosophers anchor human knowledge in the observation of either an exterior material world that resides "outside" of the human mind or an interior world of ideas that lies "inside" the human mind. Today, most philosophers and scientists are empiricists in the materialist tradition. Since Plato, rationalism has been associated with the reality of this internal world of a priori ideas.

Plato grounded his rational foundation in a priori knowledge; that is, knowledge that exists "before human experience." In other words, Plato believed that the human mind comes front-loaded with a body of knowledge. Plato and subsequent rationalists tend to identify "real knowledge" with timeless universality. However, according to Plato, if you want to gain access to this vast body of timelessly universal Truth you must possess both innate intelligence (nature) and many years of training and education (nurture). Plato argued that stable political regimes must have leaders that possess timelessly universal knowledge. Thus political science is mostly about identifying naturally intelligent leaders and educating them. One of the necessary conditions for entering the a priori world is the natural ability to learn mathematics and geometry. Therefore, before any student could enroll in Plato's school, "The Academy," they had to know mathematics and geometry.

What's so special about mathematics and geometry? Well, they both provide access to timelessly universal truth; that is, 7+5=12 always was and always will be true...everywhere in the universe! Therefore, knowledge of mathematics and geometry provides students with an initial glimpse into this internal world of a priori truth. Now...back to justice! If you want to answer the question "What is justice and how do you come to know it?" there are two possibilities. You can "look out" at instances of justice in the material world, or you can "look in" at the idea of justice. Obviously, if you look at the material world, you'll probably find more "injustice" than "justice," so that's not a very promising strategy. Therefore, Socrates chooses to explore the ideas of "justice" and "injustice." Socrates argues that the concepts of "justice" and "injustice" possess a timelessly universal form that transcends all human languages. Unfortunately, not all of us are capable of knowing the difference between justice and injustice; therefore, we must acknowledge (and trust) the authority of those naturally gifted, well-trained experts. So in an ideal political regime, authority must be based on the possession of privileged, timelessly universal knowledge that only a few can possess. Followers must be taught to submit to the authority of this naturally gifted, carefully groomed "class" of "philosopher kings."

Now in Book 1, Thrasymachus argues that "justice is in the interest of the stronger." Most of his arguments, and Glaucon's and Adeimantus' subsequent arguments, are based mostly on empirical observation of how powerful humans, in fact, behave in the "material world." But does the apparent fact that the unjust tend to fare better than the just necessarily imply that injustice is preferable to justice? Interestingly, over the next nine books, Plato attempts to prove that the unjust do not, in fact, fare better than the rest of us. So...by now you're probably wondering, "What is justice?" Well, you'll have to read the rest of the book. As you read, pay close attention to "Gyges Ring" and the "Allegory of the Metals" (gold, silver, and bronze).

Wednesday, October 26, 2011

Plato's REPUBLIC: Introductory Remarks

For the next few weeks I'll be using the Freedom's Philosopher blog to post supplemental reading for my philosophy courses. The first few entries will be on Plato's Republic.

There is very little agreement among contemporary philosophers...about anything! Still, I suspect that most of us agree that Plato's Republic is the greatest and most important work in all of Western philosophy. It has, without a doubt, exerted a profound influence on the subsequent development of both Western philosophy and political science. It's certainly one of those timelessly universal classics. At times, you'll get the sense that Plato is writing directly to those of us living in the twenty-first century. Although it was written in 375 B.C., I want you to read the Republic as if it were written in 2011. What kinds of questions does Plato ask? What are his answers? Are these questions and answers relevant today? Can we learn anything from this ancient Greek philosopher?

Well...who was Plato? Actually, we know a lot about him. There are many references to Plato made by ancient philosophers, historians, and other writers. Plato also left behind an astonishing body of philosophical work: about 35 dialogues that address a wide variety of philosophical questions. In these dialogues, Socrates is the main character. We also know that Socrates was, in fact, the "teacher" of Plato and his brothers Adeimantus and Glaucon, and that Socrates, apparently, never wrote anything. Therefore, most of what we know about him and his philosophy is from other sources. One might question whether Plato's dialogues accurately document Socrates' thought, or whether Plato merely used Socrates as the main character in his own dialogues to articulate his own views. I'm not particularly interested in that question.

I assume that Plato was profoundly influenced not only by Socrates, but also by other historical forces. For example, we know that the dialogue depicts a fictional conversation that took place between 431 and 411. We know that Plato was deeply influenced by the Peloponnesian War (431-404) between the city-states (and empires) of Athens and Sparta. Therefore, we know that most of Plato's political thought responds to the political arrangements of those warring city-states: Athens was a democracy, Sparta was an oligarchy. We know that Socrates served honorably in that war, and that although he was a stone mason by trade, he spent most of his time teaching philosophy to the "youth of Athens" (young men). Finally, we also know that Socrates was put to death in 399 by the Athenian government for "corrupting the youth of Athens" and "worshipping false Gods," and that Plato wrote a series of dialogues that depict the trial and death of his teacher (Apology, Crito, and Phaedo).

The overriding theme of the Republic is established in Book 1, where Socrates is invited to a party and the conversation eventually converges on the question: "What is justice?" After Socrates effectively dismisses several incomplete theories, Thrasymachus (a local sophist) argues that justice is "nothing more than the interest of the stronger." Although Socrates initially presents Thrasymachus with several semi-plausible counter-arguments, Thrasymachus leaves the discussion rather abruptly, and Plato's brothers take over the argument. Throughout the next nine books, Socrates attempts to convince Glaucon and Adeimantus that Thrasymachus' view is wrong. As you read the Republic, be aware of the following elements: "justice is in the interest of the stronger," Gyges Ring, the Divided Line and the Allegory of the Sun (Plato's theory of knowledge), the Noble Lie (gold, silver, bronze), the three-part division of the soul and the state, tyranny, democracy, oligarchy, timocracy, and aristocracy, and the Myth of Er. Our class discussions will focus on these elements. My next blog will outline Plato's theory of knowledge and how it relates to the refutation of Thrasymachus.

Sunday, October 23, 2011

"Is Bigger Really Better?" Part 2

As I pointed out in my last blog entry, we now live in a world dominated by large, complex, impersonal, and ineffective social institutions: corporations, schools, hospitals, and a variety of governmental entities. At the moment we're just beginning to realize that governmental entities such as Social Security, Medicare, and Medicaid are in dire need of reform. The U.S. military has been on a functional losing streak recently, despite the fact that we spend more on defense than the rest of the world combined. Interestingly, we rarely conclude that these entities are just too large and complex to be functional. Instead, we vacuously argue that all they really need is a little reform; that is, we believe that we need new and more effective rules and/or more intelligent, dedicated monitors and enforcers. However, when we look for large-scale exemplars of functionality that might serve as models for other large institutions, we find ourselves at a loss. My point here is that we've all been blinded to large-scale dysfunctionality by an ideology that tells us, repeatedly, that "bigger is better," despite an enormous body of evidence to the contrary.
So how has most of Western civilization been infected by an ideological disease that has led to our prevailing epidemic of institutional dysfunctionality? Much of it can be explained by human nature. The fact is that human beings have an uncanny ability to "identify" with groups. Throughout most of human history we identified with small groups: families, clans, and tribes. The agricultural revolution led to the formation of much larger groups (cities, empires, and nations) that were held together not by face-to-face personal relationships but by the imposition of abstract rules and laws. The belief in the sanctity of laws led to a corresponding belief in the sanctity of lawmakers. In fact, many large-scale rulers still rule by "Divine Right." Today, we still teach our children to "respect" leaders, especially their parents, teachers, religious leaders, and government officials. This demand for respect for formal authority was "backed up" by systems of monitoring and enforcement. As the number of formal rules increased, so did the number of monitors and enforcers. And of course, those "bureaucrats" really do not produce anything of value, but nevertheless draw paychecks. In fact, not only do they draw paychecks, they now draw heftier paychecks than those who actually produce something of value. Thus, today we have a growing number of highly paid bureaucrats and a dwindling number of lowly paid producers. Ironically, under this cultural anomaly, when we get "promoted" it usually means that we take on more "responsibility" in terms of monitoring and enforcement, but produce less value.

Now let's be honest! This is not a mere cultural anomaly; it's a full-blown epidemic. Today we're faced with the realization that our lives are shaped by more rules and laws than we can possibly know, and more monitors and enforcers than we can afford to pay. And of course, as the number of monitors and enforcers increases, the number of innovators declines proportionately. Unquestioned rule compliance, therefore, invariably leads to a stagnant society of obedient followers led by non-substantive, ineffective, inefficient leaders. Although I blame the "bigger is better" ideology for our current epidemic of dysfunctionality, we must ultimately blame ourselves for believing it.

Thursday, September 22, 2011

Is "Bigger" Really "Better?" Libertarian Ruminations on the Economy of Scale

Unless you've been living under a rock, it should be obvious that human social organization in the Western world is increasingly dominated by large-scale organizational structures. Examples are obvious and bountiful. Why is the financial world today controlled by a handful of large Wall Street banks that have grown to the point where they are "too big to fail," and why are small local banks struggling to stay in business? Why are a few large-scale retailers like Amazon, Walmart, Home Depot, and Target thriving while small local "mom and pop" stores are going out of business? Why are individual schools getting larger and often bundled and "administrated" into large school districts or corporate entities? Why are small family-owned farms being usurped by large "factory farms" owned by huge corporations? And finally, why are Federal and State governments growing by leaps and bounds, while small local governments are struggling to remain solvent?

Orthodox economic theory says that the natural evolution of markets inevitably leads to dominance by a few large scale competitors and that the rise of oligopolies is a sign of economic maturity or progress. In other words, "Big is Good!" But what if orthodoxy is wrong? What if our current state of social organization is actually a malaise? What if the "bigger is better" thesis is actually a well-disguised ideology that has created large scale social structures that are really "too big to survive?" In short: What if "smaller is better?"

For the sake of argument, let's assume that most of what Adam Smith identified as the basics of free market competition is more-or-less accurate. Namely, that economic activity involves competition between buyers, between sellers, and between buyers and sellers, and that all competition takes place on two axes: quality and price. The quality of a good or service is determined by the free choices made by the buyers and sellers in the form of a contract. The price refers to what the buyer is willing to pay for it, and what the seller is willing to accept. The "Holy Grail" of economic competition is to provide high-quality goods or services at the lowest price. The defenders of the "bigger is better" thesis say that larger organizations naturally benefit from "economy of scale" because larger organizations are more innovative and efficient than small organizations, and therefore can produce higher-quality products and services at a lower price. Thus, it seems as though the "free market" leads inevitably to dominance by a few large-scale organizations, or oligopoly. The problem with all this is that "bigger is better" contradicts almost everyone's "real world" experience with large-scale, centralized, bureaucratic social structures.

I currently teach at a small liberal arts college, but I have also taught at a major state university. Most small private liberal arts colleges are now "out of business," while the larger ones are thriving. Orthodoxy says that large universities benefit from "economy of scale" and are therefore more innovative, more efficient, and less costly. Of course, anyone that has ever attended or taught at a major university will readily question all three of those claims. How can a major university provide a superior education if most of the courses are taught by graduate students and adjuncts? If large universities are indeed "better," then why are the retention rates of large universities so dismal in comparison to small colleges? Admittedly, the issue is much more complex than this, but the question remains: "Is bigger really better?"

Libertarians like myself argue that the rise of large-scale corporations is often the product of "crony capitalism" and not "free market capitalism." Thus, governments drive small organizations into extinction by subsidizing the costs of large organizations, by issuing expensive regulations that small organizations cannot afford, and via tax policy. In my next blog, I'll explore the coevolution of "big government" and "big corporations" and the difference between "crony capitalism" and "free market capitalism." I will argue that, contrary to what orthodoxy says, with few exceptions, in the absence of large-scale government intervention, "smaller is almost always better."

Thursday, September 15, 2011

The Fixation of Belief: The Method of Science

So far, Peirce has identified three inferior methods that we all use to fixate our beliefs: authority, tenacity, a priori. He then argues that there is one method that is more likely to settle our opinions: the method of science.

First of all, Peirce is an epistemological realist, which means that he believes that there is something out there called reality whose nature remains constant regardless of our beliefs, and that truth and falsity relate to that external reality. In other words, at least some of our individual and/or collective beliefs are true and others are false, whether we like it or not. Now, Peirce realizes that many philosophers in his time rejected realism and that it's impossible to "prove" that the real world exists. His argument is that the process of inquiry, however, can provide us with some guidance. The fact of the matter is that the overwhelming majority of individuals and collectives believe that there is a real world (of some kind), and therefore we do not doubt that it exists. A few philosophers doubt it, but so what? The pragmatic (common-sense) truth of the matter is that we all act based on our beliefs, unless a belief is in fact doubted. Doubt cannot be turned on and off. It's something that naturally arises, whether we like it or not! Therefore, Peirce argues that we are entitled to believe in a real world, at least until we actually doubt it.

So what is the "Method of Science?" Well, it's nothing more than the "process of elimination," or "trial and error." If something "works" we keep it! If it "doesn't work" we don't keep it. In short, Peirce is proposing an evolutionary epistemology, whereby Truth and Falsity are sorted out by the process of inquiry over time. Methodologically, Peirce argues that human knowledge advances based on evolution, especially variation and selection. Over time, our individual and collective bodies of belief evolve by weeding out the unfit. Later philosophers called this process "creative destruction." Hence, nature "creatively destroyed" dinosaurs, buggy whips, the geocentric map of the universe. Within the realm of belief, the process of inquiry requires that we willingly expose our beliefs to the falsification process, which implies avoiding the methods of tenacity, authority, and a priori. We can't know for certain what's true, but we can know what's false.

What's important here is that Peirce observes that we do (in fact) employ those inferior methods, however, we must deliberately fight that natural impulse. As Thomas Kuhn later observed, scientific theories (beliefs) are often willfully protected from the forces of creative destruction by self-interested scientists, scientific organizations, and governments that have invested their time, effort, and resources in maintaining the status quo. In short, there is a sociology of knowledge that often works against scientific human inquiry. Peirce also argued that scientific knowledge is highly fallible (his epistemological doctrine of fallibilism) and that we ought to guard against the rising tide of scientific positivism.

How many of our cultural beliefs are overdue for creative destruction but remain intact because they have been propped up by sociopolitical power structures?  Libertarians argue that socialism is long overdue for creative destruction.  I would add, that I seriously doubt that the Cincinnati Bengals will make it to the playoffs this year.                  

Monday, September 12, 2011

The Fixation of Belief: The A Priori Method

The "A Priori Method" of belief fixation is based on the idea that the human mind (or brain) has direct access the a body of knowledge prior to experience. Thus, if you want to know the Truth all you have to do is think real hard about it and you instantly ascertain "know" the Truth. As Peirce suggests, there are two problems here: 1.) There is very little agreement among philosophers in terms of a list of universally accepted a priori empirical truths. 2.) When we introspect our consciousness, we are actually looking in at relative culturally-based truths that are usually based on authority.

However, Peirce is not as hostile to the A Priori Method as one might think. In his other writings, he acknowledges that when faced with an enormously complex question (like the structure of DNA) humans have an uncanny ability to guess the right answer, and that hypotheses often originate as feelings. In fact, Crick and Watson literally guessed the double-helical structure of DNA out of thin air! Indeed, many scientific theories "emerge" out of dream states. What's important here is that Peirce differentiates between the process of generating theories (intuition or feeling) and the process of determining whether those intuitions are, in fact, True or False. Peirce insists that although we often guess right, we still cannot rely solely on authority or a priori intuition. Truth, Peirce argues, has experimental consequences. That is, if a theory is True, it should enable you to either predict or control the relevant phenomena. Although many a priori theories generate highly plausible, psychologically pleasing explanations, Truth is ultimately "Pragmatic." As William James later observed, Truth must ultimately exhibit "cash value." Although the double-helical structure of DNA originated a priori, it was not "True" until its "cash value" was established in the laboratory. And of course, today the "cash value" of that discovery continues to roll in, as illustrated by genetic testing and genetic therapy.

Another point worth mentioning here is that Peirce did not believe that scientific theories could ever be finally verified in the laboratory. Why? Because of David Hume's "problem of induction," which observes that, eventually, future experiments (observations) almost always falsify previous observations. Peirce therefore argued that all theories are fallible and subject to future revision. So when we say that "DNA has a double-helical structure," we're really saying that Crick and Watson's theory has not been falsified. In my next blog, I'll sketch in Peirce's Scientific Theory of the Fixation of Belief.

Wednesday, August 31, 2011

The Fixation of Belief: The Method of Tenacity

Peirce argued that the laws of nature are habitual behavior and that our beliefs shape our behavior. Beliefs are individual, collective, generational, and inter-generational. Peirce distinguished between scientific beliefs (scientific theories) and non-scientific beliefs (or non-scientific theories). Some of our beliefs are True and others are False; thus Peirce's theory of belief implies a theory of knowledge, or epistemology. At a pragmatic level, sometimes our beliefs contribute to our long-term (and/or short-term) survival as individuals or collectives, sometimes our beliefs are neutral, and sometimes our beliefs work against us. When our individual or collective beliefs about the laws of nature enable us to predict, explain, and control nature, we say that our beliefs are True. Ultimately, our survival is often contingent upon whether our beliefs are true or not: "Will that tiger eat me?" If you falsely believe that tigers are vegetarians, you are not likely to survive that first encounter. In my last blog I argued that fixating belief based on the Method of Authority is natural, but highly fallible.

The Method of Tenacity is also a "natural," but unreliable, method of belief fixation. This method involves the willful avoidance of circumstances that might stimulate doubt within your current repertoire of beliefs. Recall that human inquiry is an involuntary process triggered by feelings of doubt, and that our old beliefs naturally compete with aspiring new beliefs. But Mother Nature stacks the deck in favor of our old habits, as our oldest habits are often the hardest to break. There's a lot of truth in the old saying: "You can't teach an old dog new tricks!" (I have been teaching my Ethics course at 9:00 AM for the past 25 years. This semester it was changed to 10:00 AM. Guess what time I showed up for class!) So, we humans are naturally conservative and therefore tend to act based on our old habits, and to protect those old beliefs from the onslaught of doubt. Now remember, the Method of Tenacity (like the Method of Authority) is perfectly natural. We all do it, individually and collectively! It's just not very likely to lead to True beliefs.

Let's look at how the Method of Tenacity operates at the collective and individual levels. All organized social groups tenaciously protect their core ideological beliefs. That's why the core beliefs espoused by the oldest world religions have changed very little over thousands of years. For example, the Roman Catholic Church still tenaciously protects its belief that only men can become priests, and it therefore actively discourages nuns from discussing or teaching the ordination of women. Indeed, censorship is the primary instrument for exercising collective tenacity. Although we usually associate ideological censorship with religious organizations, all organizations do it. Scientists tend to tenaciously protect their most important theories, and resist the onset of doubt by marginalizing scientists that seek to undermine scientific orthodoxy.

Political regimes are especially adept at protecting their ideological moorings. The well-known positive techniques for implementing collective tenacity include the institutionalization of regime-reinforcing symbols, oaths ("I pledge allegiance to the flag of the..."), and patriotic songs and stories. Negative techniques include censorship of the media and criminalization of dissent. Now obviously, our own individual beliefs are often shaped by collective ideological tenacity. But as Peirce recognized, the "social impulse" tends to undermine the method of tenacity. Today, communication technologies such as television, the Internet, and cell phones have made it especially difficult for all of us to protect our core beliefs from the irritation of doubt. It has become increasingly difficult for religious political regimes to deprive women of the right to vote, drive automobiles, receive health care, receive an education, get a job, and/or control their own reproductive lives. In short, the Method of Tenacity just won't work as well as it used to!

Today the irritation of doubt is difficult, if not impossible, to control. For Peirce, that's a good thing. We'll never know if our old beliefs are True or False unless we allow them to compete with new ideas. The methods of Authority and Tenacity both undermine human inquiry and impede our quest for true belief. Now, what about the A Priori Method?  

Sunday, August 28, 2011

The Fixation of Belief: The Method of Authority

So far I've suggested that we act based on our beliefs, and that those beliefs address either matters of Fact (Truth or Falsity) or matters of Value (Good and Bad). Peirce argues that we "fixate" our beliefs in four different ways. All of these methods have advantages and disadvantages. Among humans, the most common method for fixating our beliefs is based on authority. Let's take a close look at it.

Given our natural propensity to live in groups and organize those groups hierarchically, based on leadership and followership, we all fixate at least some of our beliefs based on the "authority" of others. The hallmark of any authority is that we trust them! Given our longstanding attraction to religious authority, many neuroscientists argue that our brains have been programmed by biological evolution to believe in a God, trust God, and obey God. Hence, the question of whether the pronouncements of religious authorities are True or Good is settled based on whether we trust that the authority is truthfully interpreting the pronouncements of God. If we trust an authority we tend to believe them and willingly give them power over us. Worldwide, most human beliefs are fixated based on religious authority. God is the supreme religious authority. Sometimes religious authority is based on what leaders of the past have written in the form of "sacred texts." But usually, religious authority is based on how contemporary religious leaders interpret those sacred writings. Thus inquiry into the Truth or Goodness of the pronouncements of religious leaders is about "who they are," not "what they say." Today we know a lot more about the psychological basis for trust in authorities than Peirce knew. For example, we know that most of us will do things that we ordinarily wouldn't do when we are told to do so by trusted authorities. The Holocaust and the Jonestown Massacre are prime examples.

Today we all trust many different authorities: physicians, scientists, journalists, and celebrities. Although Peirce doesn't say much about it, there are more and less trustworthy authorities. I trust my family physician because I've known him for 20 years and because he has taken good care of me and my family. For libertarians the most pernicious form of authority is government. One of the most serious problems we have here in the United States is that the vast majority of Americans do not trust our government. Why? Because it hasn't taken very good care of us for a long time. Libertarians argue that government tends to take care of itself and its cronies, often at the expense of the rest of us. If we don't believe what the government says, it forces us to submit to its edicts, whether we believe those edicts or not.

Today, many political scientists question whether it's possible for us humans to escape from the influence of various authorities. Peirce was a realist, which means that he believed it is at least possible to base our beliefs on something other than authority. But he might be wrong. It may be the case that Truth is nothing more than what the prevailing authorities say is True, and that Truth is "socially constructed" based on the self-interest of leaders. Maybe Truth is "manufactured" by those who hold power over us and not really discovered?

Saturday, August 27, 2011

The Fixation of Belief

In 1877, Charles Sanders Peirce published a series of articles in Popular Science Monthly. The first essay, entitled: "The Fixation of Belief," has had a profound influence on my philosophical approach. So I thought it would be interesting to share some of those basic ideas and outline how I have expanded upon Peirce's original architecture.

First of all, Peirce was one of the first philosophers to acknowledge that the question of the nature of human belief is an important area of philosophical inquiry. Beliefs, according to Peirce, underlie many (if not most) of our actions. He argued that the formation, or "fixation," of a belief is the product of a natural process, which he called human inquiry. This process is initiated by an identifiable "feeling of doubt" that is generated by our brain and central nervous system. It is this "feeling" that initiates and sustains the involuntary process of inquiry until a new belief is established. Peirce suggested that the feeling that accompanies belief is more pleasurable than the state of doubt. So the psychological states of doubt and belief are marked by distinctive "feelings," and all humans naturally know the difference between the two. We, therefore, naturally seek the pleasure of belief and avoid the pain associated with the state of doubt. Over the course of our lifetimes many of our "old beliefs" are cast into doubt by inquiry and are replaced by "new beliefs." Just because we happen to believe or doubt something, either individually or collectively, does not mean that it is True or False. Hence, Peirce is an epistemological realist in the sense that he believes that Truth is a correspondence between what we believe and something external to that belief.

Now let's stop and think about all this so far. First, note that Peirce's theory of inquiry is rooted in biology, which implies an ultimate evolutionary explanation; and that this biological process generates mental states that we interpret as doubt or belief. Second, Peirce argues that since we "act" on the basis of our beliefs, there are social implications. Third, Peirce argues that there are better and worse ways for us to forge our beliefs. He therefore identifies four methods for the fixation of belief that all human beings have adopted over the long course of human history: the method of authority, the method of tenacity, the a priori method, and the scientific method. All four methods are "natural," but only one is likely to generate beliefs that are relatively stable over the long run and likely to be True. I'll explore these four methods in subsequent blogs.


Saturday, August 20, 2011

Stewardship Part 2

So the concept of stewardship is usually invoked in a context where something valuable is being shared over time. In ethical terms that suggests that sharing over time is good; that is to say, it's a virtue, a duty, or a preferable consequence.

If stewardship is a virtue it signifies excellence in character. For Aristotle, it would relate to distributive justice. Hence, a "good person" takes no more nor less than he/she deserves. Now Aristotle's Nicomachean Ethics really doesn't get into intergenerational justice, but we know from his other writings that potential beings would have moral standing. We also know that if stewardship is a virtue, it would involve teaching, learning, and the establishment of a habit. But beyond that I'm not sure Aristotle takes us very far. The Judeo-Christian concept of the virtue of stewardship is based on the idea that God gave the universe to humans to share. There is a built-in sense of value associated with any "gift" that comes from an omnipotent, omniscient, omnipresent, and "Good" being. God, by definition, does not give lousy gifts! No ugly ties or exploding cigars! Moreover, since God created Homo sapiens as an intergenerational community, he certainly would not favor early generations over subsequent ones. Nature is not a Ponzi scheme! Hence, previous generations took their fair share, we take our fair share, and future generations take their fair share. The problem here is: how do we know how much each generation can consume without shortchanging the next generation? The history of humans on earth suggests that virtue-based concepts of stewardship have led to intergenerational exploitation rather than intergenerational justice.

If stewardship is a duty, then we must think about it in a different way. First of all, we'd have to establish that all generations have an equal right to that good thing that is being shared between generations. So if we establish that each generation has an equal right to the fish in the ocean, then each generation has a duty to determine its fair share of fish and preserve the rest for the next generation. But there's a lot more here than meets the eye. Suppose subsequent generations will have more living persons than our generation. Do we have a duty to take into account the fact that the next generation will "need" more than we do? And, of course, how far into the future does this obligation extend? Moreover, the human ability to exploit the earth's bounty changes relative to technology. We can catch more fish, cut down more trees, and extract more oil and coal than previous generations. Hence, the potential for intergenerational injustice is magnified over time. All I want to say here is that even if we all agree that we have a duty to hold back a few fish for future generations, we really don't know what this duty entails. In short, it's a lot easier to assign rights and duties across generations than it is to fulfill them.
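
To make the problem concrete, here is a minimal sketch in Python of what an "equal share" of fish might mean across generations of different sizes. All of the numbers are invented for illustration:

    # Hypothetical figures: a fixed sustainable harvest per generation,
    # divided among generations whose populations keep growing.
    HARVEST = 1_000_000  # fish available to each generation (assumed)
    populations = [100_000, 130_000, 170_000, 220_000]  # persons per generation (assumed)

    for gen, pop in enumerate(populations, start=1):
        per_capita = HARVEST / pop  # what each person actually gets
        print(f"Generation {gen}: {HARVEST:,} fish, {per_capita:.1f} per person")

    # An equal per-generation split leaves each person in a later, larger
    # generation with fewer fish. Is the equal right held by generations
    # or by persons? The duty-talk alone doesn't tell us.

The sketch settles nothing; it just shows that "an equal right to the fish" is ambiguous between equal shares per generation and equal shares per person.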

Finally, if stewardship is conceived as the means to a preferable set of consequences or outcomes across generations, we must be able to assess cost/benefit ratios across generations. Thus stewardship might imply aiming at the "greatest happiness for the greatest number" across generations. If this is our goal, then it's not at all clear how much we ought to hold back, given that we really don't know how many generations will follow us. I would argue that if stewardship requires a utilitarian calculus exercised across generations, each generation's rate of consumption would be minuscule if not zero. Of course, that would solve the obesity problem, but that's another topic worth exploring.
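
That worry can be sketched just as minimally, again with invented numbers: divide a fixed stock equally among an unknown number of future generations, and watch each generation's ration shrink as the assumed horizon grows.

    # Hypothetical stock of a non-renewable resource, divided equally
    # among n generations. As n grows, each generation's share -> 0.
    STOCK = 1_000_000  # total units available (assumed)

    for n in (10, 100, 10_000, 1_000_000):
        print(f"{n:>9,} generations -> {STOCK / n:,.2f} units per generation")

    # With an unbounded horizon, the "fair" ration tends toward zero,
    # which is the minuscule-if-not-zero consumption rate noted above.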

So what does all of this say about stewardship as a moral concept? First of all, there's a lot of muddled thinking about "sustainability." If we have a moral obligation to consume at a "sustainable" rate, we have to decide how far into the future that obligation extends. If we limit our obligation to the next generation, we might be able to calculate a sustainable rate of consumption. On the other hand, if today's politicians seek to be responsible stewards of the earth's bounty and neglect the needs (and wants) of our present generation in order to save a few fish for future generations, those politicians won't be in office very long. Future generations aren't old enough to vote!

Sunday, August 14, 2011

Stewardship Part I.

The moral concept of "stewardship" is deeply rooted in the Judeo-Christian tradition. Today, it refers primarily to a set of moral obligations associated with the responsible use of something that belongs to someone else. Or as Merriam-Webster puts it: "...the careful and responsible management of something entrusted to one's care." In light of my recent rants on intergenerational justice, I thought I'd deconstruct that slippery concept a bit.

First of all, today "stewardship" is most often encountered in the context of the sustainable management of resources. A "steward" is someone that takes care of someone else's property, hence the notion of ownership plays a key role. A "responsible steward," therefore, preserves or "sustains" the value of that property while an "irresponsible steward" does not. Stewards are not entrusted to sustain something of no value. And there's also a temporal component to stewardship; that is to say, the steward's "sustenance" of something valuable extends over a period of time. Therefore, stewardship has two moral components: the goal (or motive) of sustaining something valuable and knowledge of the means of sustaining that value. Thus, a responsible steward must be motivated to sustain something and know how to do it. An irresponsible steward (or a non-steward) either lacks the motivation to sustain something, or doesn't know how to do it. Thus, stewardship involves both knowing the Good and being able to do the Good.

Suppose I agree to loan you one of my guitars for a week. Naturally, I expect you to be a responsible steward of it. At a bare minimum, I would expect it to be returned without loss of value. A responsible steward might return it to me in better condition: maybe polish the finish and change the strings. You might replace the pickups or refinish it, which might or might not meet my approval. WARNING: Do not repaint my orange guitar with black spray paint. So responsible stewardship implies knowledge of the interests of the owner.

Stewardship also shares common ground with the concept of "agency," where one person serves as an "agent" for another person (or "principal"). A responsible agent, who possesses specific knowledge or skills, is expected to serve the interests of the principal. Thus, the common ground is the expectation that another person will serve the interests of someone else. But there is also an underlying expectation that the steward or agent will benefit from serving as a steward or an agent. Today, we generally pay agents (insurance agents, financial advisors, and physicians) to serve our interests, while stewards benefit by using what they are entrusted to sustain.

So far, the idea of stewardship between individuals seems fairly clear cut, and hardly worthy of a philosophical diatribe. But when stewardship is applied in the context of collective ownership, and/or collective stewardship the waters get muddy fast! I'll dive into that morass next.               


Saturday, July 23, 2011

Our Intergenerational Credit Card

The current debate in Congress over raising the debt ceiling plays into one of the thorniest issues in all of moral philosophy: the problem of future generations. Although it lurks most often in the context of environmental issues (usually pollution and resource depletion), it is readily applicable to the debt crisis.

Imagine the following scenario: Congress is issued an intergenerational credit card, and thereby can access a never-ending line of credit. Now, if you are a Congressman who would like to get re-elected, how would you use that credit card? There are multiple possibilities. First of all, whose interest would you serve: A.) the present generation, which can either benefit or be harmed by your spending habits, or B.) future generations? Would you pay off the balance on that credit card every month, or would you just pay the minimum balance, accumulate debt, and pass that debt onto those vulnerable future generations? The beauty of the intergenerational credit card is that future generations do not yet exist, and therefore it's relatively easy to pass the credit card balance onto them. And of course, since future generations cannot vote, there is no reason to fear them as a voting bloc. There is no lobbying group to represent them. Thus, all of the political rhetoric tends to focus on what would happen to the present generation if Congress decides to either default on that credit card or begin to pay down the debt without buying more stuff to benefit the present generation.

Of course, Congress might decide to "invest in the future" by asking the present generation to pay for projects that might benefit future generations at the expense of the present generation. For example, we could embark on a long-term railroad building project that would take 20 years to complete. Unfortunately, our efforts to benefit the future might be thwarted by some new transportation technology (personal aircraft?) that would decrease the value of that investment. Now, as long as the present generation pays the cost of providing this benefit to future generations, we're at least acting responsibly. However, if we decide to benefit future generations and pass on the cost of providing that benefit to future generations, we're on shaky moral ground. Why? Who is really benefitting from these intergenerational projects? The workers in the present generation that land those jobs. If those costs can be paid for with that intergenerational credit card, then Congress can get re-elected by the present generation and future generations can pay their salaries. If that's not a Ponzi scheme, I don't know what to call it. But clearly, the idea of an intergenerational credit card is a prescription for taking unfair advantage of future generations.
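
For what it's worth, the arithmetic of minimum payments can be sketched in a few lines of Python. The starting balance, interest rate, and payment rate are all invented for illustration:

    # Hypothetical intergenerational credit card: interest accrues faster
    # than the token "minimum payment" retires it, so the balance grows.
    balance = 100.0      # starting debt, arbitrary units (assumed)
    INTEREST = 0.05      # 5% interest per year (assumed)
    MIN_PAYMENT = 0.02   # pay off only 2% of the balance each year (assumed)

    for year in range(1, 31):
        balance *= 1 + INTEREST     # interest accrues
        balance *= 1 - MIN_PAYMENT  # token minimum payment
        if year % 10 == 0:
            print(f"Year {year}: balance = {balance:.1f}")

    # Since 1.05 * 0.98 > 1, the balance compounds upward indefinitely;
    # whoever holds the card last inherits the whole pile.

As long as the payment rate never outpaces the interest rate, the debt is never actually serviced; it is simply relocated in time.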

Saturday, July 16, 2011

Two Challenges of Minarchy

In my last blog I suggested that minarchy lies midway between the ideals of anarchy and unbridled progressivism. This raises two obvious questions: "How small is small?" and "How do we prevent small from becoming large?" How any government (national, state, or local) answers these questions determines the degree of personal liberty within that jurisdiction. I'll try to focus on the United States, but I think these issues apply equally to all governments.

Almost all minarchists argue that there are three main functions that require a publicly-funded governmental monopoly: a defensive military, a domestic police force, and a criminal justice system. (Some of us are also willing to provide a basic safety net.) All three represent a realistic collective response to the darker side of human nature. Some individuals and groups of humans are willing to violate the non-aggression axiom and/or the anti-theft axiom in order to advance their interests. Anarchists argue that even these functions ought to be privatized, which would subject those functions to competition, increase quality, and reduce cost. The question here is whether minarchies can maintain competition without caving in to cronyism.

Obviously, all nations need a military force. The size of that military is contingent upon how governments use those militaries. Minarchists argue that the military must be defensive in nature. Thus, we must be able to thwart an invasion. If governments expand that mandate to include "potential invasions," then the size of the military is likely to expand. According to Ron Paul and Dennis Kucinich, the United States' military has expanded way out of control, and as a result we have troops stationed all over the world and several wars underway. Cronyism is a primary source of spiraling military budgets. The corporations that supply the "War Machine" with supplies are especially powerful, mostly because military contracts are notoriously opaque and non-competitive. Hence, the proverbial $400 toilet seats! One might argue that we've already "privatized" the U.S. military; however, what we've really done is disabled competition via cronyism. In short, privatization does not necessarily imply free markets.

Minarchists also support a publicly-funded police force. Now, first of all, the size of a police force is contingent upon the number of laws that it is required to monitor and enforce. The more laws there are "on the books," the larger the police force you'll need. Minarchists argue that the lawmaking powers of congress must be limited to laws that address the harm principle; that is, "harm to other persons" (assault, murder, etc.) and "harm to the property of others" (theft, fraud, breach of contract, etc.). The problem with the harm principle is that it tends to become irrationally pre-occupied with preventing low-magnitude harms and low-probability harms. Paternalistic laws that protect us from "self-inflicted harms" have led to an extraordinary expansion of police forces, especially drug laws and laws against gambling and alcohol abuse. The United States leads the world in incarceration, and most of those prisoners committed non-violent drug crimes. The actual cost of providing a police force is also contingent upon how those policemen are paid. Libertarians argue that policemen ought to be paid based on free market forces, where the best policemen get paid the most (within the bounds of the free market) and the worst policemen get fired. Of course, crony relationships between politicians and law enforcement are way too common, and they lead to bloated police forces in which some policemen are over-paid and others under-paid.

Minarchists also accept a publicly-funded criminal justice system. Like the police force, the size of the judiciary is contingent upon the number of laws it is expected to enforce. The more laws and the more policemen, the more lawyers and judges that are needed. The exponential growth of the judiciary (at all levels) is also related to the fact that our law schools crank out a lot of lawyers, those lawyers often become politicians, and they have a powerful lobby. Thus, cronyism also contributes to bloated judiciaries. The cost of maintaining a judiciary is also contingent upon how much lawyers and judges are paid, and whether they are appointed or elected. Another important cause of "judiciary creep" is the longstanding legal tradition of writing laws in a private language known only to lawyers and judges, who are then empowered to "interpret" those laws. This also artificially enforces their monopoly.

In sum, I have argued that minarchists seek to limit the size and scope of government to military, police, and judicial functions. However, even if our government limited itself to those three functions, there is no guarantee that small government would not morph into large government. Therefore, the most efficient way to limit the size of government is to limit the ability of politicians to endlessly expand the criminal code beyond crimes against persons and property. I think both libertarians and progressives agree that the U.S. government has expanded way beyond minarchy.

Wednesday, July 13, 2011

Two Ideological Perspectives on the Use of Coercive Political Power

The current political malaise in the United States can be best understood in terms of two conflicting ideological perspectives on the nature and use of political power. On the far left we have the progressives; on the far right we have the libertarians. There is a lot of variation within both ideologies. In the United States, progressives usually lean toward the far left of the Democratic Party, while libertarians usually lean toward the far right of the Republican Party. However, neither progressives nor libertarians dominate their respective parties. The progressive-libertarian debate is a very old and important debate, therefore it's still worth reviewing those two ideologies.

Progressives are idealists that believe that political entities (cities, states, and nations) must deploy the coercive power of government in order to serve the public good. Progressive government requires altruistic, impartial, objective leaders that will exercise political power in pursuit of the "public good." Good government, therefore, requires that political entities seek out these "good leaders." When government fails to serve the public good, it is because the political system has become infested with self-serving leaders that use their political power to enrich themselves, their families, and friends at the expense of the public good: call it cronyism. Therefore, the key to progressive government is to devise a political system that selects these "good leaders." Plato believed that in order to maintain a sufficient supply of "good leaders," the state must develop selective breeding programs and specialized education programs. Contemporary American progressives reject selective breeding, but invest heavily in law degrees earned at elite private schools, most notably Yale and Harvard. Coercive force is exercised in the form of a "progressive" tax code, which provides the funds to serve the public good. In the United States, progressives tend to equate serving the public good with implementing government programs that serve the unmet needs of the "least advantaged": the poor, workers, consumers, the elderly, the sick, racial minorities, and women. This agenda requires large numbers of workers employed by tax-funded government agencies. For most progressives, knowledge of the "public good," and of how to achieve it, is usually relegated to empowered social scientists. On foreign policy, many progressives support the use of U.S. military power to advance the "public good."

Libertarians are idealists that believe that coercive power is always wrong, either because it violates the property rights of others or because it leads to bad consequences. Taxation is regarded as problematic because it resembles theft; that is, the involuntary appropriation of another person's property. Libertarians also argue that knowledge of the public good and how to achieve it is elusive, if not impossible. Social scientists routinely identify the "public good" with the good of the social scientists themselves, or their cronies in government that empowered them. Libertarians are especially wary of the rise of cronyism, where government serves the good of specific interest groups, especially corporations, labor unions, churches, and the military. All libertarians are against the use of military power unless we're actually invaded by a foreign nation. Therefore, according to libertarians, the secret to good government is to either eliminate the coercive power of government (anarchism) or limit the coercive power of government (minarchy).

Anarchists reject government outright, and idealistically believe that if individuals (and groups of cooperating individuals) are left to make their own decisions and live with the consequences of those decisions, human society would thrive. Anarchists argue that because progressive governments "spend other people's money," they tend to be overly generous to the least advantaged and public employees, and less concerned with "bang for the buck" efficiency. Moreover, anarchists observe that over time, collectivized power tends to corrupt even the most altruistic and impartial leaders. Social science is similarly corrupted. Therefore, progressive governments tend to collapse under the weight of military adventurism coupled with the high levels of taxation needed to fund the bureaucracies that serve the "military-industrial complex" and the ever-growing ranks of the "least advantaged." Anarchists argue that eventually everyone becomes either a soldier or "least advantaged." Thus, under anarchy all collective functions are met by non-governmental entities, including the military, police force, criminal justice system, and social welfare.

Minarchists embrace limited government; that is, government that is limited to using tax money to provide a defensive military, police force, and judiciary. Some minarchists, like myself, are also willing to include a "basic safety net" to protect the "least advantaged." Anarchists, however, insist that minarchy is unsustainable and that, over time, minarchism becomes progressivism. Altruistic politicians and social scientists are eventually corrupted by power. Progressives argue that the limited power of minarchism inevitably leads to an under-funded military, police, judiciary, and safety net.

Today, the idealists on the far left (progressives) and the far right (anarchists) are unwilling to compromise, and therefore the U.S. government is now mired in gridlock. Since minarchists draw criticism from both the far left and the far right, they now occupy the centrist position in U.S. politics. I would argue that the future of the United States lies in the formation of a coalition of progressives and libertarians that are willing to limit the exercise of concentrated political power, but not necessarily eliminate it. Politically, this might result in the formation of a political alliance between Ron Paul and Dennis Kucinich.

Wednesday, May 25, 2011

Nuanced Cooperation

In my previous blog entry I suggested that social scientists often overlook some of the finer points in human cooperation. Let's continue that diatribe! Obviously, the word "cooperation" is notoriously vague. It is a relational term that can mean just about anything. So let's call my new approach "nuanced cooperation." Obviously, cooperation requires at least two persons, unless you suffer from multiple-personality disorder. Under normal, healthy psychological circumstances, you cannot cooperate with yourself or with a thing. Individuals can cooperate with other individuals and/or groups, and groups can cooperate with other groups and/or individuals. One of the nuances that is almost entirely overlooked in discussions of cooperation is the obvious fact that there are varying degrees of cooperation. So why do we cooperate in varying degrees? Well...because we live in a finite world, and therefore individuals and groups cannot afford to contribute time, energy, and resources to every cooperative enterprise. Hence, finite humans ration their cooperative time, energy, and resources. Many if not most individual enterprises involve both cooperation with and competition from other enterprises. Even Bill Gates rations his finite resources! Given the vast number of cooperative opportunities in our social environment, we are all probably more non-cooperative than cooperative. Therefore, it's probably more accurate to portray modern humans as non-cooperative.

All cooperative enterprises involve planning in pursuit of ends or goals. Collective enterprises orchestrated by groups (or organizations) require collective planning, which in turn requires leader-follower relationships. A good leader attracts followers; a bad leader repels followers. Individual enterprises require individual planning. And, of course, there are better and worse forms of individual and collective planning. Ineffective planning usually leads to failure to achieve one's goals. We are non-cooperative when we prefer not to contribute our time, energy, or resources to any given enterprise. Even when we choose to cooperate, we limit the amount of time, energy, and resources that we commit to various cooperative enterprises. Consequently, most enterprises eventually suffer extinction due to competition from other enterprises. The number of cooperators associated with the horse and buggy industry and the Flat Earth Society has been greatly diminished.

Of course, there's a lot of moral discourse associated with how individuals and groups ration time, energy, and resources, especially when it comes to beneficent enterprises. There are many enterprises conducted by charitable organizations that are worthy of our cooperation. But it's impossible to contribute time, effort, and resources to all of them! Our personal friends and relatives often request our beneficent cooperation in pursuit of their personal ends (or goals). If you have a lot of "needy" friends, these requests can really drain your time, energy, and resources. Parents with young children and adult children with old parents experience the finitude of time, energy, and resources. Everyone believes that their own particular enterprise is worthy of the time, effort, and resources of others, and therefore we are often offended when others choose to be non-cooperative, or contribute less to our enterprise than we'd like.

Some cooperative enterprises are immoral, and therefore not only do we refuse to cooperate, we actively seek to undermine those enterprises. Some immoral enterprises pursue immoral goals, some employ immoral means, and some pursue immoral ends via immoral means. Unfortunately, we ration our cooperation based on imperfect information, and unwittingly contribute time, effort, or resources to illegal and/or immoral enterprises. One way to dress up a repulsive enterprise is to lie about its means and/or ends. How many bogus charitable organizations have you contributed to?
Moreover, human cooperation has always been conditioned by both enticements (rewards) and coercive threats (punishments). Many cooperative enterprises involve reciprocity; that is, cooperation based on "you scratch my back and I'll scratch yours." However, we all willingly contribute variable quantities of our time, energy, and resources to various enterprises in the absence of either enticements or threats. And, depending on your supply of time, energy, and resources, if you commit yourself to too many cooperative enterprises, your contribution to any one enterprise might be negligible, or even border on non-cooperation.

Finally, given the inevitable competition between cooperative enterprises, all enterprises eventually suffer extinction. Where's that Roman Empire now? One way to avoid extinction is to force individuals and groups to contribute time, effort, and resources to your particular enterprise. However, forced cooperation is no panacea. Those who control enterprises that provide the coercive force that sustains involuntary cooperation also ration their own time, energy, and resources, and therefore usually demand more and more of your time, effort, and resources to monitor and control those who prefer non-cooperation. How many DEA agents, fences, courts, and prisons will it take to monitor and enforce the ongoing Drug War enterprise? There is a general understanding that libertarians are committed to voluntary cooperation; therefore they reject the forceful seizure of the time, energy, and resources of others, and the expansion of coercive force necessary to ensure all forms of involuntary cooperation.