As noted previously, the philosophical spectrum on freedom of the will
is a wide one, ranging from outright denial to the claim that all physical
events are goal-driven and caused by a divine agent, that nothing happens
by chance, that everything is, ultimately, willed. The most beautiful
idea, perhaps, is that freedom and determinism can peacefully coexist:
If our brains are causally determined in the right way, if they make us
causally sensitive to moral considerations and rational arguments, then
this very fact makes us free. Determinism and free will are compatible.
However, I take no position on free will here, because I am interested in
two other points. I address the first by asking one simple question: What
does ongoing scientific research on the physical underpinnings of actions
and of conscious will tell us about this age-old controversy?
Probably most professional philosophers in the field would hold that
given your body, the state of your brain, and your specific environment,
you could not act differently from the way you’re acting now—that your
actions are preordained, as it were. Imagine that we could produce a
perfect duplicate of you, a functionally identical twin who is an exact
copy of your molecular structure. If we were to put your twin in exactly
the same situation you’re in right now, with exactly the same sensory
stimuli impinging on him or her, then initially the twin could not act differently
from the way you’re acting. This is a widely shared view: It is,
simply, the scientific worldview. The current state of the physical universe
always determines the next state of the universe, and your brain is
a part of this universe.
The phenomenal Ego, the experiential content of the human self-model,
clearly disagrees with the scientific worldview—and with the
widely shared opinion that your functionally identical doppelgänger
could not have acted otherwise. If we take our own phenomenology seriously,
we clearly experience ourselves as beings that can initiate new
causal chains out of the blue—as beings that could have acted otherwise given exactly the same situation. The unsettling point about modern
philosophy of mind and the cognitive neuroscience of will, already apparent
even at this early stage, is that a final theory may contradict the
way we have been subjectively experiencing ourselves for millennia.
There will likely be a conflict between the scientific view of the acting
self and the phenomenal narrative, the subjective story our brains tell us
about what happens when we decide to act.
We now have a theory in hand that explains how subpersonal brain
events (for instance, those that specify action goals and assemble suitable
motor commands) can become the contents of the conscious self.
When certain processing stages are elevated to the level of conscious experience
and bound into the self-model active in your brain, they become
available for all your mental capacities. Now you experience them
as your own thoughts, decisions, or urges to act—as properties that belong
to you, the person as a whole. It is also clear why these events popping
up in the conscious self necessarily appear spontaneous and
uncaused. They are the first link in the chain to cross the border from
unconscious to conscious brain processes; you have the impression that
they appeared in your mind “out of the blue,” so to speak. The unconscious
precursor is invisible, but the link exists. (Recently, this has been
shown for the conscious veto, as when you interrupt an intentional action
at the last instant.)13 But in fact the conscious experience of intention
is just a sliver of a complicated process in the brain. And since this
fact itself never appears in conscious experience, we have the robust experience of being able
to spontaneously initiate causal chains from the mental into the physical
realm. This is the appearance of an agent. (Here we also gain a deeper
understanding of what it means to say that the self-model is transparent.
Often the brain is blind to its own workings, as it were.)
The science of the mind is now beginning to reintroduce those hidden
facts forcefully into the Ego Tunnel. There will be a conflict between
the biological reality tunnel in our heads and the neuroscientific image
of humankind, and many people sense that this image might present a
danger to our mental health. I think the irritation and deep sense of resentment
surrounding public debates on the freedom of the will have little
to do with the actual options on the table. These reactions have to do with the (perfectly sensible) intuition that certain types of answers will
not only be emotionally disturbing but ultimately impossible to integrate
into our conscious self-models. This is the first point.
A note on the phenomenology of will: It is not as well defined as you
might think; color experience, for example, is much crisper. Have you
ever tried to observe introspectively what happens when you decide to
lift your arm and then the arm lifts? What exactly is the deep, fine-grained
structure of cause and effect? Can you really observe how the
mental event causes the physical event? Look closely! My prediction is
that the closer you look and the more thoroughly you introspect your
decision processes, the more you’ll realize that conscious intentions are
evasive: The harder you look at them, the more they recede into the
background. Moreover, we tend to talk about free will as if we all shared
a common subjective experience. This is not entirely true: Culture and
tradition exert a strong influence on the way we report such experiences.
The phenomenology itself may well be shaped by this, because a
self-model also is the window connecting our inner lives with the social
practice around us. Free will does not exist in our minds alone—it is also
a social institution. The assumption that something like free agency exists,
and the fact that we treat one another as autonomous agents, are
fundamental to our legal system and the rules governing our
societies—rules built on the notions of responsibility, accountability,
and guilt. These rules are mirrored in the deep structure of our PSM,
and this incessant mirroring of rules, this projection of higher-order assumptions
about ourselves, created complex social networks. If one day
we must tell an entirely different story about what human will is or is
not, this will affect our societies in an unprecedented way. For instance,
if accountability and responsibility do not really exist, it is meaningless
to punish people (as opposed to rehabilitating them) for something they
ultimately could not have avoided doing. Retribution would then appear
to be a Stone Age concept, something we inherited from animals. When
modern neuroscience discovers the sufficient neural correlates for willing,
desiring, deliberating, and executing an action, we will be able to
cause, amplify, extinguish, and modulate the conscious experience of
will by operating on these neural correlates. It will become clear that the actual causes of our actions, desires, and intentions often have very little
to do with what the conscious self tells us. From a scientific, third-person
perspective, our inner experience of strong autonomy may look
increasingly like what it has been all along: an appearance only. At the
same time, we will learn to admire the elegance and the robustness with
which nature built only those things into the reality tunnel that organisms
needed to know, rather than burdening them with a flood of information
about the workings of their brains. We will come to see the
subjective experience of free will as an ingenious neurocomputational
tool. Not only does it create an internal user-interface that allows the organism
to control and adapt its behavior, but it is also a necessary condition
for social interaction and cultural evolution.
Imagine that we have created a society of robots. They would lack
freedom of the will in the traditional sense, because they are causally
determined automata. But they would have conscious models of themselves
and of other automata in their environment, and these models
would let them interact with others and control their own behavior.
Imagine that we now add two features to their internal self- and other-person
models: first, the erroneous belief that they (and everybody
else) are responsible for their own actions; second, an “ideal observer”
representing group interests, such as rules of fairness for reciprocal, altruistic
interactions. What would this change? Would our robots develop
new causal properties just by falsely believing in their own
freedom of the will? The answer is yes; moral aggression would become
possible, because an entirely new level of competition would emerge—
competition about who fulfills the interests of the group best, who gains
moral merit, and so on. You could now raise your own social status by
accusing others of being immoral or by being an efficient hypocrite. A
whole new level of optimizing behavior would emerge. Given the right
boundary conditions, the complexity of our experimental robot society
would suddenly explode, though its internal coherence would remain. It
could now begin to evolve on a new level. The practice of ascribing
moral responsibility—even if based on delusional PSMs—would create
a decisive, and very real, functional property: Group interests would become
more effective in each robot’s behavior. The price for egotism would rise. What would happen to our experimental robot society if we
then downgraded its members’ self-models to the previous version—
perhaps by bestowing insight?
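To make the logic of this thought experiment a little more concrete, here is a minimal agent-based sketch, purely my own illustration and not anything proposed in the text. It assumes a toy pairwise helping game with invented payoffs; when responsibility ascription is switched on, a robot caught defecting is "accused" and pays a reputation penalty. Every class name, payoff value, and parameter below is a hypothetical assumption.

```python
# Illustrative sketch only: a toy agent-based version of the robot-society
# thought experiment. All names, payoffs, and parameters are invented.
import random

COOPERATE_COST = 1.0      # what a cooperating robot pays
COOPERATE_BENEFIT = 3.0   # what its partner receives
ACCUSATION_PENALTY = 2.0  # payoff lost by a robot publicly accused of defecting

class Robot:
    def __init__(self, coop_prob):
        self.coop_prob = coop_prob   # fixed disposition to cooperate
        self.payoff = 0.0

def play_round(society, ascribe_responsibility):
    """One round of random pairwise interactions.

    If ascribe_responsibility is True, robots treat one another as
    accountable agents: a defecting partner is 'accused' and pays a
    reputation penalty, so the price of egotism rises.
    """
    random.shuffle(society)
    for a, b in zip(society[::2], society[1::2]):
        for actor, partner in ((a, b), (b, a)):
            if random.random() < actor.coop_prob:
                actor.payoff -= COOPERATE_COST
                partner.payoff += COOPERATE_BENEFIT
            elif ascribe_responsibility:
                actor.payoff -= ACCUSATION_PENALTY  # moral aggression: accusation

def mean_payoff_of_defectors(society):
    defectors = [r for r in society if r.coop_prob < 0.5]
    return sum(r.payoff for r in defectors) / len(defectors)

def run(ascribe_responsibility, rounds=200, seed=0):
    random.seed(seed)
    # Half mostly cooperative robots, half mostly egotistic ones.
    society = [Robot(0.9) for _ in range(50)] + [Robot(0.1) for _ in range(50)]
    for _ in range(rounds):
        play_round(society, ascribe_responsibility)
    return mean_payoff_of_defectors(society)

if __name__ == "__main__":
    print("defectors' mean payoff without responsibility ascription:",
          round(run(False), 1))
    print("defectors' mean payoff with responsibility ascription:   ",
          round(run(True), 1))
```

Under these made-up parameters, switching on responsibility ascription visibly lowers the average payoff of the egotistic robots, which is one simple way of reading the claim that the price for egotism would rise.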
A passionate public debate recently took place in Germany on freedom
of the will—a failed debate, in my view, because it created more
confusion than clarity. Here is the first of the two silliest arguments for
the freedom of the will: “But I know that I am free, because I experience myself
as free!” Well, you also experience the world as inhabited by colored
objects, and we know that out there in front of your eyes are only wavelength
mixtures of various sorts. That something appears to you in conscious
experience, and appears in a certain way, is not an argument for anything.
The second argument goes like this: “But this would have terrible consequences!
Therefore, it cannot be true.” I certainly share that worry (think
of the robot society thought experiment), but the truth of a claim must
be assessed independently of its psychological or political consequences.
This is a point of simple logic and intellectual honesty. But
neuroscientists have also added to the confusion—and, interestingly, because
they often underestimate the radical nature of their positions.
This will be my second point in this section.
Neuroscientists like to speak of “action goals,” processes of “motor
selection,” and the “specification of movements” in the brain. As a
philosopher (and with all due respect), I must say that this, too, is conceptual
nonsense. If one takes the scientific worldview seriously, no such
things as goals exist, and there is nobody who selects or specifies an action.
There is no process of “selection” at all; all we really have is dynamical
self-organization. Moreover, the information-processing taking
place in the human brain is not even a rule-based kind of processing. Ultimately,
it follows the laws of physics. The brain is best described as a
complex system continuously trying to settle into a stable state, generating
order out of chaos.
According to the purely physical background assumptions of science,
nothing in the universe possesses an inherent value or is a goal in itself;
physical objects and processes are all there is. That seems to be the
point of the rigorous reductionist approach—and exactly what beings
with self-models like ours cannot bring themselves to believe. Of course, there can be goal representations in the brains of biological organisms,
with self-models like ours cannot bring themselves to believe. Of course,there can be goal representations in the brains of biological organisms,
but ultimately—if neuroscience is to take its own background assumptions
seriously—they refer to nothing. Survival, fitness, well-being, and
security as such are not values or goals in the true sense of either word;
obviously, only those organisms that internally represented them as
goals survived. But the tendency to speak about the “goals” of an organism
or a brain makes neuroscientists overlook how strong their very
own background assumptions are. We can now begin to see that even
hardheaded scientists sometimes underestimate how radical a naturalistic
combination of neuroscience and evolutionary theory could be: It
could turn us into beings that maximized their overall fitness by beginning
to hallucinate goals.
I am not claiming that this is the true story, the whole story, or the final
story. I am only pointing out what seems to follow from the discoveries
of neuroscience and how these discoveries conflict with our
conscious self-model. Subpersonal self-organization in the brain simply
has nothing to do with what we mean by “selection.” Of course, complex
and flexible behaviors caused by inner images of “goals” still exist, and
we may also continue to call these behaviors “actions.” But even if actions,
in this sense, continue to be part of the picture, we may learn that
agents do not—that is, there is no entity doing the acting.
The study of phantom limbs helped us understand how parts of our
bodies can be portrayed in the phenomenal self-model even if they do
not exist or have never existed. Out-of-body experiences and full-body
illusions demonstrated how a minimal sense of self and the experience
of “global ownership” can emerge. A brief look at the Alien Hand and
the neural underpinnings of the willing self gave us an idea of how the
feeling of agency would, by necessity, appear in our conscious brains
and how this fact could have contributed to the formation of complex
societies. Next, investigating the Ego Tunnel during the dream state will
give us even deeper insight into the conditions under which a true subject
of experience emerges. How does the Dream Tunnel become an Ego
Tunnel?