Free Will

I’ve been in many debates about free will. Normally, a philosophical or logical debate starts out with clear definitions and conditions that lead from premises to conclusions. So it sets off alarms in my head that the subject of free will seems immune to such constraints. People arguing for free will find it difficult to define it, or to state confidently under what conditions something might have it.

They might insist that only humans have free will. I try to ask follow-up questions: what physical properties of humans give them this ability? How could someone design a controlled experiment that would objectively test whether something or someone had free will? As a programmer, I feel that if I had a clear definition of how it works, I could design a program that met those requirements and had free will as well. But I have never heard anyone state definitions and requirements that clearly.

Well, there was one exception. William James and friends came up with something called the “two-stage model” of free will. Basically it involves two stages: one generates random possibilities, and a second chooses among the alternatives. The idea dates from 1884, and it was eventually discarded in favor of “compatibilism”, the idea that choices can be both determined and free, and that there is some way of reconciling the two. The number of philosophers who insist on “libertarian free will” or “hard determinism” is very small these days.
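Just to make the model concrete, here is a toy sketch of what a two-stage chooser could look like in Python. The generation and evaluation functions are my own invention, not anything James or his successors specified: an indeterministic stage proposes alternatives, and a deterministic stage picks one.

```python
import random

def generate_alternatives(n=5):
    # Stage 1: an indeterministic process proposes candidate actions.
    # (Here, random numbers stand in for "possible decisions".)
    return [random.random() for _ in range(n)]

def evaluate(option):
    # Stage 2's deterministic value function; a stand-in for whatever
    # weighing of consequences the agent actually does.
    return -abs(option - 0.5)

def two_stage_choice():
    # Stage 1: free generation of possibilities.
    alternatives = generate_alternatives()
    # Stage 2: deterministic selection among them.
    return max(alternatives, key=evaluate)

if __name__ == "__main__":
    print(two_stage_choice())
```

Running `two_stage_choice()` simply returns whichever randomly generated candidate scores best under the (arbitrary) evaluation function.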

Instead, the debate seems to center around the idea that free will must exist because not having free will would be an unacceptable situation. We would be mere robots, following some predetermined program, and life would be meaningless. Eventually the assertion is made that, without free will, we couldn’t blame or punish people for bad choices, or praise and reward people for good ones, since those choices were caused by external processes.

It seems to me that this sounds too much like a “magical” ability. When pressed on how something explicitly non-deterministic could exist in a universe of deterministic laws, people offer a lot of hand-waving: most people feel like they are making real choices, so those feelings must be justified by something that exists.

I finally realized for myself what the missing piece was. Free will is not an actual ability; it is an ability that a society’s justice system assumes its members have. Like rights, it is a concept that agents in a society are granted. A justice system must assume, absent evidence to the contrary (such as actual coercion), that an individual is free to choose from a set of alternatives, so that it can assign blame or praise to that individual. Animals are assumed not to have free will (usually argued on the grounds that they lack souls, or the ability to weigh the consequences of actions), but really just because they are not treated as members of human society. If, in the distant future, a computer or an animal is sentient enough to become a member of human society, I think it could eventually be assigned some form of the same “free will” we assume for ourselves.

I think it is an imperfect system, but one that evolved and wormed its way into human psychology so successfully that the idea of one’s actions not being completely under one’s own control feels like the violation of a taboo. It may have been useful: it liberates society from responsibility for the actions of individuals, so that society can function as the unquestioned arbiter of acceptable human behavior. But it also presents a host of problems of its own.

Societies already make exceptions for coercion. If someone made a choice because they were being threatened, they “had no other choice”. If someone steals food because they face starvation, most people wouldn’t feel they should be punished as severely as someone who stole without need. I think there is a gap in the way societies are structured that comes from the deep implicit assumption of “free will”, since it often prevents society from considering the conditions that lead to certain behaviors. Safety isn’t just about punishing evil-doers and rewarding those who behave well; society also bears a responsibility to constantly reflect on what led to those actions.

I sometimes notice that other parts of society already assume that behavior is somewhat deterministic. If you are hiring a babysitter, you wouldn’t consider someone with a history of abusing children. If “free will” were real, then everyone would be equally capable of harming children and you shouldn’t trust anyone. And I shouldn’t forget capitalism, one of whose chief assumptions is that choices will ultimately be made to maximize financial value. It is then further assumed that people ought to be punished if they don’t follow that value.

So, if “free will” is just a social convention, are we merely robots? Maybe we are still stuck in an era where we consider robots to be imperfect mechanical devices without any emotion, empathy or direction. As a programmer and a philosopher, I have tried to look into the “mind” of computers and robots. I think there is a continuum from the simplest machine to living creatures, including human minds. It may be that what we think of as robots are not unfeeling, but rather in a constant state of emotion. I think that emotion is similar to what a human would consider “enlightenment”. Not a slave to biological and social pressures, the robot is free to surrender completely to its programming. Also, robots are not in complete isolation. They receive input from humans, living systems and random interference, and that input must form a part of their decision making. I think that eventually humans will have to evolve from the other end of the spectrum, realizing how much of our psychology is programmed by the environment and predestined by circumstances.
