Just to let you know ahead of time, the stuff in this chapter worries me. Why? Well, given the quick review in the rest stop, it appears that free-will is critical to us as persons or agents. If there is no God, if man is in the world and has to create himself and the values of the world, then free-will is a necessary condition for such things to happen. Robots cannot be morally responsible and can hardly be thought of as creative agents, for their actions are determined by their programs and hardwired systems. Only persons with free-will can be morally responsible. At least, it looks that way. The tough part is that most scientists and many philosophers are materialists and determinists. Strangely enough, they also hold people morally responsible. But consistency demands that we see whether or not they can get away with such conflicting views. My suspicion is that they cannot. However, knowing the persuasive power of the physical sciences, they are more likely to give up free-will than determinism. I am not saying that they are making a bad decision, only one which is supported by a great deal of evidence. But therein lies the rub. Stick with materialism and determinism, and out go persons and moral responsibility. Stick with persons and free-will, and in comes some unexplainable power which is totally different from all the operative powers with which science deals. The power of free-will borders on witchcraft. The $64 question is: how can free-will exist in a world which looks to be made up entirely of deterministic matter in motion? We could go along with Descartes and his metaphysical dualism, but then the problem of interactionism raises its ugly and irresolvable head. Let's get into the problem through a story.
INSERT: "Fondly Fahrenheit," by Alfred Bester. Story is found in Book #3 of The Road to Science Fiction, (New York: Mentor Books, 1979).
What's so scary about this story is that it is not clear at all whether the human or the android has free-will. In certain situations, it looks as though only the human does; for example, the android, at low temperatures toward the beginning of the story, will not break the rules programmed into it. It is the human who commits crimes by free-will. As the story progresses, the roles tend to be reversed and confused. The end makes us wonder who or what is making the decisions, and whether those decisions are free-will choices or the choices of an insane, determined mind which cannot do other than what its temperature-regulated bodily system dictates.
Let's cut to the quick. Are we persons with free-will, or androids (organic machines) whose activities are determined?
SECTION ONE. THE CASE FOR HARD-DETERMINISM.
Most materialists will argue the position of hard-determinism. The argument is so straightforward that I may as well put it in bullet form.
1. Determinism is incompatible with free-will and moral responsibility.
· The world, including man, is constituted by quanta in motion.
· These quanta are strictly determined to behave according to the materialistic principles that govern them. We refer to these principles as the physical and chemical laws of the universe.
· We can predict with reasonable accuracy the outcome of bodily interactions using the laws of the universe as templates.
· These predictions range from very good to precise at the microscopic and macroscopic levels. At the sub-atomic level, a principle of randomness enters through quantum mechanics and thermodynamics.
· Given the nature of physical entities (quanta) and the laws that govern their behavior, there is no principle of autonomy (free-choice). Quanta cannot make free-will decisions at any level of organization, especially the macroscopic (such as complex organizations of quanta called human beings).
· There are no such things as choices made by free-will. All actions are determined by antecedent causes. Given the antecedent causes, the actions cannot be other than what they are.
· Consequently, there is no such thing as moral responsibility. A machine (a thermostat on the simple scale, a human being on the complex scale) cannot be held morally responsible for what it is determined to do.
Hard-determinists are called "hard" because their position is very strict; all events in the universe (including those of man) are strictly determined. Consequently, there is no such thing as moral responsibility since it is considered by them to be incompatible with determinism.
Let's see where the incompatibility lies.
Suppose the thermostat in my house breaks and turns up the furnace. It keeps it running while I am away on vacation and sets the house on fire. The fire department catches the blaze before it does anything but minor damage. The firemen report to me, upon my return, that I had a faulty thermostat which caused the fire. How should I feel towards the thermostat and what should I do about it?
One thing I could do with the thermostat is put it outside and let it bake in the sun. Imagine that you see me do this and ask what I am doing. I respond, "The thermostat reacted badly to its stimuli and nearly burned down my house, so I am punishing it for its mistake. I am mad that it treated me so badly. Such punishment is what it deserves." Given my response, you're going to wonder a bit about me. For one thing, a thermostat isn't the kind of thing that could even be conscious of punishment; it's a simple servo mechanism that responds to temperature changes. To credit it with feelings of guilt for having mistreated someone is to make a major category mistake. Not only do thermostats not have feelings, but our feelings toward machines should be tempered with the knowledge that machines do not behave intentionally. Persons are bad in virtue of their intentions, and should be punished if they put those bad intentions into deed. Punishment is simply not a concept that applies to such a machine, because machines do not have intentions. You would think me nuts were I to say, "The thermostat intentionally planned to burn down my house, and now I'm going to make it pay by punishing it with a painful condition." Secondly, I can hold something blameworthy and subject to punishment only if it has done the action intentionally, through free-choice, after deliberation. The thermostat certainly doesn't deliberate and obviously doesn't have free-choice. To hold it morally blameworthy is a big mistake.
You would probably take me aside and say, "Look, the thermostat is just a simple feedback mechanism. It did what it did because there was a malfunction in its circuit. Don't put it out in the sun to 'punish' it --that's ridiculous. Just go down to the hardware store, buy a new part for the circuit, and fix it. Then it will operate properly." Sounds like good advice, and it is --for dealing with deterministic machines.
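The thermostat story can be made concrete with a small sketch. Everything here is hypothetical (the class, the 68-degree threshold, the "faulty sensor" failure mode are all made up for illustration), but it shows why repair, not punishment, is the fitting response: the device's "behavior" is nothing over and above a feedback rule applied to a replaceable part.

```python
# A thermostat as a simple feedback mechanism (toy sketch; the names
# and thresholds are hypothetical, chosen only for illustration).

class Thermostat:
    def __init__(self, sensor_ok=True):
        # The circuit that reacts to heat -- the replaceable part.
        self.sensor_ok = sensor_ok

    def furnace_on(self, temp_f):
        """Feedback rule: run the furnace when the house reads cold.
        A faulty sensor misreads every temperature as freezing."""
        reading = temp_f if self.sensor_ok else -999
        return reading < 68

# The faulty unit keeps the furnace running at any temperature...
broken = Thermostat(sensor_ok=False)
assert broken.furnace_on(90)   # furnace stays on; the house overheats

# ...and the sensible response is repair, not punishment:
broken.sensor_ok = True        # "buy a new part and fix it"
assert not broken.furnace_on(90)
```

There is no deliberation anywhere in this loop; asking whether the object "deserves" anything never gets a grip, which is precisely the hard-determinist's point about any deterministic machine.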
2. Determinism is compatible with causal responsibility.
But what if we are android machines whose actions depend a great deal upon all kinds of things, including temperature? Riots and murders in the big cities go way up in the summer. Suppose that humans really are sophisticated biological computers, androids which, in many cases, react to temperature just like the android in the story. If that is the case, can we blame a person for what he/she did any more than we can blame the robot or the thermostat? It doesn't look like we can.
But, as with the thermostat, we can determine agency. That is, we can determine that it was this thermostat which caused the fire, that it was this android which did the murders. And what do we do with faulty thermostats or androids? We fix them. We either put in a new part, reprogram them, or, in the worst case, junk them. In other words, under determinism there is no concept of moral responsibility that applies to the agents of an action, but there is a concept of causal responsibility which does apply. The machine (be it human or thermostat) which did the "faulty" action is to be fixed so that it will no longer malfunction. The thermostat is fixed by replacing the circuit that reacts improperly to heat. The same would be true for humans. Those who become violent in hot weather are to be given drugs or cooling mechanisms to prevent their becoming violent. If those "fix-it" procedures do not work, then other measures may have to be taken --for example, incarceration or "brain-washing"-- either to prevent the behavior from having an effect externally on the world or to prevent the behavior from occurring internally at all. The seriousness of the "fix" would depend upon the undesirable action and its effects on the world. Playing a stereo in the dorm would probably require only a few mild to medium electric shocks to end the behavior, whereas car-theft might require brain-washing and neuro-surgery to eradicate the behavior completely. If you have not read the book or seen the movie A Clockwork Orange, give it a read or a view; it's a masterpiece which deals with this subject.
To sum up hard-determinism: the events in the universe are strictly determined. What happens now is determined by the events prior to now. Since there is no free-will --no undetermined causal power which could oppose and change the strict causal influence of prior events upon present events-- there is no moral responsibility. Even if there were a usable concept of moral responsibility, it would be incompatible with determinism. Determinism and moral responsibility are incompatible concepts. A morally responsible android is a round square.