
THE MYSTERY OF CONSCIOUSNESS. By John Searle

Robert C. Berwick is co-director of MIT's Center for Biological and Computational Learning and MIT professor of computer science. His most recent book is "Cartesian Computation."

Questions about the mind, brain and consciousness engage us like no others--as simple as any but not simple-minded, the preoccupation of philosophers and scientists since Plato. Does a pinprick feel the same to you and me? Do we mean the same thing when we talk about the color gray-green? Or about Picasso’s gray-green portrait of Dora Maar? If we could wire up your brain in a vat, would you retain your personality, your experiences, your consciousness?

At first blush, all appears to be mere physiology: “If you prick me, do I not bleed?” The mind is so clearly in the brain that William James, in the introduction to his “Principles of Psychology,” would take it “for granted in the present work” and then in a famous footnote add, “Nothing is easier than to familiarize one’s self with the mammalian brain. Get a sheep’s head, a small saw, chisel, scalpel and forceps . . . and unravel its parts. . . .”

Nothing is easier? Two of the most startling advances since James’ day--computers and brain imaging--have, as it were, handed us our heads on a platter. The machinery to describe complex information processing tasks and to reveal live brains at work has breathed new life into the 17th century inquiry into minds as machines. It has also seeded a new crop of books on the mind, brain and consciousness. But what do these new reports tell us? Like Tolstoy’s memorable opening description of families in “Anna Karenina,” each of these books is happily alike in its embrace of the mind as an outright biological organ like the heart, each unhappy in its own way.


To understand how the mind works, we need to go back in time, perhaps 100,000 years, to the point when some scientists believe the mind evolved. The credibility of the books under review lies in how convincingly they apply the model of evolution--as Darwin helped us to understand it--to consciousness.

UC Berkeley philosopher John Searle’s “The Mystery of Consciousness” and emeritus Oxford experimental psychologist Lawrence Weiskrantz’s “Consciousness Lost and Found” fall squarely into the biological camp, equipped with a healthy dose of skepticism about our ability ever to get computers to think. They are dispatches from the biological mind field filled with details about how nervous systems might link minds to brains. Searle’s concise volume, which reprints his New York Review of Books essays along with a new introduction and conclusion, serves as a Baedeker to the work done by many high-ranking consciousness generals: Francis Crick’s and Gerald Edelman’s neural circuit studies, Roger Penrose’s qualms about quantum theory, neurons and computing, all rounded off with serve-and-volley chapters contra the philosophers Daniel Dennett and David Chalmers (for Dennett, consciousness is nowhere, just folk talk for certain neural interactions, while for Chalmers, consciousness lurks everywhere, even in thermostats, because it’s all about “information processing”).

Weiskrantz’s book occupies the front lines, digging deeper into case studies of people with brain traumas such as so-called “blindsight,” an affliction in which people literally see the coffee cup sitting in front of them yet all the while report, quite truthfully, that they do not consciously see anything at all. Taken together, the two books dovetail neatly, with Searle closing his volume with an endorsement of Weiskrantz’s more detailed reconnaissance of “how consciousness gets into vision.”

But why their skepticism about the view that computation is all you need for consciousness? For Searle, consciousness is not computation because consciousness demands some causal connection to the world that, like the phone company, can “reach out and touch someone.” Consciousness means being in touch with the world. Computation lacks that “touch” by definition: Computers can only juggle bits like 0s and 1s that, at some point, simply happen to wind up next to each other without the slightest regard for what being strange bedfellows means (the zeros and ones could just as well be bank debts or chess moves). The academic way to say the same thing is: Computation depends only on “syntax,” from the Greek syntaxis, “to arrange together.”

And that’s just the way IBM, Compaq and Microsoft want it: If only syntax matters, then you can write computer programs (and it doesn’t matter if the computer is built of silicon or tinkertoys) as long as you take care to arrange the juxtapositions properly. As Searle rightly says, this computational view is “profoundly antibiological.” After all, somehow our brains suffice to cause consciousness; we just don’t know much about brains. (We do know that big brains do it and smaller ones don’t--but that’s about the size of it.) For Weiskrantz, computation’s not consciousness, precisely because people with blindsight can “look and see” a coffee cup but can’t report any consciousness of seeing it. So if vision involves constructing a representation of a coffee cup from the hints provided by its two-dimensional projection onto the back of our eyeballs, then something else must be going on.

In contrast, MIT psychologist Steven Pinker’s “How the Mind Works” embraces the computational view with gusto, specifically the “computational theory of mind.” He adds to this mix probably the single most powerful biological achievement of the past 150 years: evolution by natural selection. In Pinker’s words, “The mind is a naturally selected neural computer.” That’s two principles answering two questions. First, the engineering “what”: Your mind’s built of computational modules--bite-sized recipes to recognize faces or figure out whether your poker buddy’s cheating. Second, the teleological “why?”: Your nervous system’s been wired to meet a single goal--hustle the genes it houses into the next generation. So, you behave the way humans were adapted to behave about 100,000 years ago on the African savanna. (Pinker’s view encompasses the modern-day offspring of 1970s sociobiology, “evolutionary psychology.”) You retain these behavioral residues, like evocative evolutionary madeleines, and they get triggered today perhaps in inappropriate modern contexts like supermarkets, where you find yourself reaching for the potato chips (it’s all because high-caloric fatty foods were once scarce but valuable in the Pleistocene, so our hominid ancestors who developed a genetically grounded taste for fats did better, on average, at pumping their potato-chip genes, including those from their kith and kin, into the next generations). For Pinker, minds are computers, and people are robot vehicles for their selfish genes.


“Figments of Reality” by mathematician Ian Stewart and biologist Jack Cohen lands somewhere between these two camps. For Stewart and Cohen, mind and consciousness derive from two now-fashionable incantations--“emergent property” and “complex systems.” It’s mind as process again but sans reduction into mere molecules (or genes) bumping into one another. Stewart and Cohen illustrate their anti-reductionism via a raft of successful examples--everything from ant colonies to chess machines--whose sums, at least for people, are ever so much more than their parts. There are less-than-successful examples, too, chiefly in the form of chatty dialogues in which Zarathustrian-type characters like “Liar-to-Children” and “Destroyer-of-Facts” argue with one another: “Originally we were subsapient protozarathustrians barely able to grafit a message or two on our rudimentary The Regulations. . .”

These dialogues are partly meant to underscore how mind might come about in creatures quite unlike us, but here Stewart and Cohen get it dead wrong, while Pinker gets it right: The great evolutionary puzzle of consciousness and intelligence was expressed by Lily Tomlin who, to paraphrase her, asked: Why should there be any at all? Ernst Mayr, the doyen of evolutionary biologists, sparring with Search for Extraterrestrial Intelligence enthusiasts like Carl Sagan, got in the knock-out blow: “Adaptations that are favored by selection, such as eyes or bioluminescence, originate in evolution scores of times independently. High intelligence has originated only once, in human beings.” Mayr went on to suggest why humans are so unique: “[H]igh intelligence is not at all favored by natural selection, contrary to what we would expect. In fact, all the other kinds of living organisms, millions of species, get along fine without high intelligence.” That goes double for consciousness. After all, everyone will tell you that it just gets in the way of your best skiing or piano playing--or escaping from tigers.

Despite Stewart and Cohen’s appeal to “emergent properties” for producing consciousness, there’s nothing magical about them. “Emergence” is part and parcel of the world--it’s there whenever we can’t predict the whole from the sum of its parts. So hydrogen burns, oxygen promotes burning, and hydrogen and oxygen combined stop burning--but we no longer go all agog and hold press conferences to announce this. Like complexity, there’s surely something here that speaks to the nature of the mind and brain: size and the number of parts do matter, as far as we can tell from biology. But so far, at least, the explanatory road stops there, and Stewart and Cohen’s many-angled metaphors like “extelligence” don’t carry us any further--yet.

What then of the computational theory of mind? Now, Searle doesn’t buy the “minds are computers” line for an instant, and I, for one, pass by the “robot vehicles for selfish genes” aisle more quickly still. Because computation, evolution and genes seem, faute de mieux, at the very heart of much popular science these days, echoing the French philosopher La Mettrie’s 1747 pitched Cartesian battle cry L’homme Machine (“Man the Machine”), and because the social consequences loom large (with the New Age sociobiologists gunning to explain everything from morning sickness to our yen for suburban lawns to why we go to war), you’d think these evolutionary psychologists would demand the most exacting evidence for evolutionary reasoning. But they don’t. Despite Pinker’s breezy, often amusing, writing, the New Age “mental module” sociobiologists suffer from the same slings and arrows of outrageous mis-reasoning as the “good old sociobiologists” like E.O. Wilson and others.

As it turns out, both our current computational conceptions and the evolutionary psychologists’ “gene’s eye view” suffer from the same abiological malady: a feverish attempt to sever mind from matter. Take our modern notion of computation (in large measure wrought by the mathematical and engineering World War II efforts of British mathematician Alan Turing). Turing’s achievement boiled down the most complex recipe instructions or “algorithms” into the individual operations of a humble typewriter: Imagine rolling a sheet of paper in behind the platen and up (if, like me, you can still remember such days). Then you can either strike a key, moving the paper over one notch and writing a single letter, or back up and type over what you’ve written, or clunk ahead one space at a time.

Astonishingly, as far as we know, that’s all that’s needed to crank out, step by step, a recipe for any computation--a claim dubbed the “Church-Turing thesis.” Even the fanciest computer needs to do no more than a typewriter. Even the hobbled computer aboard the space station Mir will do. For present purposes, all that matters is that the action’s entirely local in time and space, depending completely, for instance, on whether you’ve pressed the shift key or whether an A has been typed before or next to a B. All form and no content: nothing whatsoever to do with what the letters mean. (After all, how could a Smith-Corona know that?) It’s a lot like pre-Newtonian mechanics--all action takes place via contact, keys clacking together like billiard balls, without any “action at a distance”--or what Newton came to understand as that “occult force,” gravity.
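To make Turing’s typewriter concrete, here is a toy sketch of my own--not from any of the books under review, with a rule table invented purely for illustration--of a machine that can only read the square under its head, write over it, clunk one square left or right and change its internal state. As far as we know, that really is all there is to it.

```python
# A minimal Turing-machine sketch: everything is local, "all form and no content."
# The rule table below is an arbitrary example (it merely flips 0s and 1s),
# chosen only to illustrate the machinery; it comes from no book under review.

def run_turing_machine(tape, rules, state="start", head=0, max_steps=100):
    """Run a tiny Turing machine. `rules` maps (state, symbol) to
    (new_symbol, move, new_state); move is -1 (left) or +1 (right)."""
    tape = dict(enumerate(tape))         # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, " ")     # blank squares read as a space
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol          # strike a key: write over the square
        head += move                     # clunk one space left or right
    return "".join(tape[i] for i in sorted(tape))

# Example rules: walk right, flipping 0 <-> 1, halt at the first blank square.
rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", " "): (" ", 0, "halt"),
}

print(run_turing_machine("0110", rules))   # -> "1001 "
```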


But there’s a catch. The typewriter--and so the computer--are mindless. On purpose: So the “typist” can be equally mindless. Now, you might worry whether it’s such a winning strategy to construct a theory of mind by starting with a mindless foundation. You’d be right. That’s exactly what worries Searle. Somebody’s eventually got to pound the keys--echoing Truman Capote’s shriek about Jack Kerouac’s novels: “That’s not writing, that’s t-t-typing.” No, a person who wrote the recipe must be somewhere in the loop, along with somebody (with a mind) to figure out what the stuff that comes out means. So, for at least two reasons, mere syntax can’t reproduce understanding or consciousness.

To find a connection between external behavioral dispositions like going to war or being a Good Samaritan and a person’s genes, Pinker and company must advance a local “atomic” theory. Why? We need a connect-the-dots chain linking genes to behavior in order to cash in the evolutionary chips at the end. After all, genes do live in bodies and cannot directly finger the outside world--much like Turing’s typewriter. Evolution by natural selection works its will only by boosting some gene frequencies and lowering others. That’s what “fitness” means: Genes leading to behaviors increasing the representation of those genes in succeeding generations are “more fit” than other genes. So, in order to finger behaviors, genes need those dot-to-dot links, carrying all the way from atoms of behavior--like going to war--to atomic traits like the “tendency to form coalitions or not,” to “brain atoms,” mental modules like lines of computer code that calculate whether somebody will renege on his promises, to atoms of inheritance (the genes themselves).

Small wonder then that both Pinker’s and Stewart and Cohen’s books follow the same course, opening with an account of computation and Turing’s machines, then brief sketches of evolution by natural selection, then applications to how people behave. Small wonder that Pinker immediately follows his accounts with a centerpiece argument for “brain atoms”--mental modules. Specifically, Pinker lucidly retells David Marr’s two-decade-old computational theory of the modularity of vision (known as the “2 1/2-D sketch”). For Marr, when we “see” a cup, the eyes and brain figure out--using different recipes, each working on its own bits of information--the cup’s orientation, surface texture, slant and depth: the very soul of modularity. Stewart and Cohen take the same tack in the chapter of their book called “Features Great and Small.” For them, these bits--features--are the “figments of reality” animating their book’s title and the mind. “Figments” are, to use the cognitive psychologists’ buzzword, “representations”--but also, to use Jerry Fodor’s apt pun, literally “re-presentations”: internally constructed models of the external world that can be “re-presented” and so serve as proxies for reality.
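To see what a “module” amounts to in this idiom, here is a deliberately crude sketch of my own--not Marr’s actual algorithm, which finds edges by a subtler recipe--of a self-contained visual routine that takes a grid of brightness values and hands back candidate edges, knowing nothing whatever about cups, faces or cheaters.

```python
# A toy of one visual "module" in the computational idiom: intensities in,
# feature map out. This is a bare finite-difference sketch, invented for
# illustration; it is not Marr's zero-crossing edge detector.

def edge_map(image, threshold=0.5):
    """Mark pixels where brightness changes sharply relative to the neighbor
    to the right or below -- the crudest possible 'edge detection' module."""
    rows, cols = len(image), len(image[0])
    edges = [[0] * cols for _ in range(rows)]
    for r in range(rows - 1):
        for c in range(cols - 1):
            dx = abs(image[r][c + 1] - image[r][c])   # horizontal change
            dy = abs(image[r + 1][c] - image[r][c])   # vertical change
            if max(dx, dy) > threshold:
                edges[r][c] = 1
    return edges

# A dark square on a light background: the module roughly fingers its outline.
image = [[1.0] * 6 for _ in range(6)]
for r in range(2, 4):
    for c in range(2, 4):
        image[r][c] = 0.0

for row in edge_map(image):
    print(row)
```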

Now, there are serious questions about whether modularity helps us decipher the world, even for vision--a matter that cuts to the heart of Searle’s qualms about computation and consciousness. We’ll return to this later. For the moment though, let’s grant the evolutionary psychologists their modularity. Even so, if genes are to serve as accurate chits for “maximizing fitness,” then the path from genes to behaviors must run absolutely true or else our explanatory stroke miscues. Why? Because if this relation is not absolute, many possible traits might accomplish the same behavior; for every trait, there are many possible recipes to “bake it”; for every modular recipe, many interacting genes and so, in the end, no simple way to tie “maximizing gene fitness” at one end to martial music at the other.

So what’s the truth of the matter? Alas, for our simple billiards game: The table’s not level, and there’s more than one ball knocking against another. Take any of the evolutionary psychologists’ stories such as, say, “going to war.” Start with genes. Whether you read newspapers with “How the Mind Works”-style tabloid headlines or hear scientists talking about a “gene for lung cancer,” that’s not what’s really meant. There’s no “gene for X.” Rather, as medical researcher D.J. Weatherall wrote recently in the [London] Times Literary Supplement, “they have identified a number of genes that may, under certain circumstances, make an individual more or less susceptible to the action of a variety of environmental agents.” And there’s not the slightest evidence that there is, or ever was, genetic variation in our disposition for going to war or, stronger yet, a gene (or genes) for this behavior.

As soon as we move past the simplest genetic example--a single trait with two gene types like Mendel’s “wrinkled peas / non-wrinkled peas”--to three possible traits, natural selection’s no longer guaranteed to maximize fitness: Indeed, it might minimize fitness. The real-world genetics example of sickle-cell anemia in West African populations reflects exactly this situation: The apparently least fit combination of traits survives against the “healthier” trait combinations. The same’s true for what’s called “frequency-dependent selection”: With too many rabbits eating tall grass, eventually “eating tall grass” becomes a liability. So much for “survival of the fittest.”
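The sickle-cell case takes only a few lines of textbook population genetics to sketch (the fitness values below are illustrative stand-ins, not measured ones): because the mixed genotype does best, the “unhealthy” allele never disappears, and mean fitness is not the tidy maximum the slogan promises.

```python
# A toy one-locus illustration of why "survival of the fittest" is not so simple:
# with heterozygote advantage (the sickle-cell pattern), the apparently harmful
# allele persists forever. Fitness values here are illustrative, not measured.

def next_gen(p, w_AA, w_AS, w_SS):
    """One generation of selection at a single diploid locus.
    p is the frequency of the normal allele A; S is the sickle allele."""
    q = 1.0 - p
    w_bar = p * p * w_AA + 2 * p * q * w_AS + q * q * w_SS   # mean fitness
    return p * (p * w_AA + q * w_AS) / w_bar

p = 0.99                       # start with the sickle allele S very rare
for _ in range(200):
    p = next_gen(p, w_AA=0.9,  # AA: susceptible to malaria
                    w_AS=1.0,  # AS: protected -- the fittest genotype
                    w_SS=0.2)  # SS: sickle-cell disease

print(f"frequency of S after 200 generations: {1 - p:.3f}")   # ~0.11, not 0
```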


It’s a tribute to today’s vulgarization of Darwin that “How the Mind Works” touches on this textbook example yet doesn’t seem to realize that the same problem also holds for a phenomenon that burdens scientists as well as moralists, the genetics of making war. A key behavioral puzzle about war--like other self-sacrificing or “altruistic” behaviors--is that if we die, our genes die with us. Zero representation for those genes in the next generation equals zero fitness. So why enlist? Any behavior urging us to battle ought to be quickly extinguished.

But wait. Imagine if we had an army of twins. Then we--oops, our genes--could afford to lose a few (wars, bodies, genes) in order to win a few more--if the genetic benefits exceeded the genetic costs. Of course, the “winning” behavioral repertoire might encompass many strategies: when or how to recruit, when to form coalitions, even when to take women captives. Pinker runs through these and many other situations. Now picture not an army of twins but of kith and kin who share some large fraction of your own genes. (Or even if people share no genes but simply some expected reciprocity and benefit, as the anthropologist Robert Trivers tried to demonstrate in 1971.) If the genes do their work well, then they build a (computational) module for calculating the Ecclesiastes recipe--a time for war, a time for peace--better than their genetic competitors. Hence “Family Values,” the longest chapter (about 100 pages) in Pinker’s already long-winded book, presents the reader with extended exercises in Trivers-like genetic bookkeeping and natural selection as applied to a wide range of human behaviors, from mate selection to how to read novels. Once again, Stewart and Cohen parallel Pinker, but with an edge on brevity.
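The bookkeeping behind the army-of-kin story is, in its simplest textbook form, Hamilton’s rule: sacrifice pays, genetically, when relatedness times benefit exceeds cost. The numbers below are invented, and this is precisely the simple arithmetic that, as we’ll see, doesn’t survive careful scrutiny.

```python
# The back-of-the-envelope kin bookkeeping: Hamilton's rule says an altruistic
# act pays, genetically, when r * b > c, where r is relatedness, b the benefit
# to the recipient and c the cost to the altruist (both in expected offspring).
# The figures below are invented for illustration; this is also the simple
# "one set of chromosomes" arithmetic, not the full diploid accounting.

def altruism_pays(r, benefit, cost):
    """Does sacrificing `cost` offspring to give kin `benefit` offspring
    increase the altruist's genes in the next generation?"""
    return r * benefit > cost

# Dying in battle costs you, say, 2 future offspring; your full sib has r = 0.5.
print(altruism_pays(r=0.5, benefit=3.9, cost=2.0))   # False: 1.95 < 2, gene loses
print(altruism_pays(r=0.5, benefit=4.1, cost=2.0))   # True:  2.05 > 2, gene spreads
```

Notice how a shift in the benefit from 3.9 to 4.1 flips the verdict: decimal points count.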

But there’s a big catch--or rather, two, right off the bat. Catch One: If you want to use gene frequencies, you’d better do your genetic bookkeeping right. Sad to say, the Pinker, Stewart and Cohen books don’t. (Not that it’s entirely their fault; neither do Leda Cosmides and John Tooby or Donald Symons, whose canon Pinker and company adopt.) In 1978, the Stanford geneticists and population biologists Luca Cavalli-Sforza and Marcus Feldman showed that Trivers’ cost-benefit arithmetic doesn’t balance the genetic books properly, unless you assume people are either ants or bees (that is, with only one set of chromosomes instead of two, one from each parent, as in most organisms). If you crank out the Darwinian equation and directly tally surviving gene types, you get a very different result from what conventional evolutionary psychologists get: Whether the “genes” for altruistic behavior stick around depends in a very sensitive way on the exact benefit that the altruist receives. You can’t just tell a rough story that, say, if you go to war, and if the winning side does twice as well, or half as well, then that behavior’s “more fit” and favored by natural selection. Decimal points count, lots of them, and there’s no easy answer: It’s frequency-dependent selection again. The simple story collapses. But all this rudimentary evolutionary auto mechanics gets lost in both Pinker’s and Stewart and Cohen’s books. Neither bothers to even pop the hood.

Catch Two: Talk about cooperating or not, and the accruing benefits, depends on a marriage of evolutionary psychology with game theory--what’s the payoff if both you and I cooperate, if neither of us cooperates, if one of us does and the other doesn’t? For instance, if we work a garden together, then we both reap greater rewards (say, 100 bushels) than if we raid each other’s plots (then we both lose). On the other hand, if only one of us interferes with the other’s garden, the raider reduces his own yield (because raiding takes time and cooperating works better) to, say, 10 bushels, while also reducing the victim’s yield. The behavior that “wins” the natural selection game gets linked back to genes via the (flawed!) Trivers payoff formula.

But wait: Game theory payoffs, and the whole superstructure technically dubbed “evolutionarily stable strategies,” hinge on average (expected) jackpots: What would happen over the long run if you stood in front of the Vegas slot machine zillions of times? Expectations are irrelevant to the benefits an individual receives because one person won’t get the average payoff--he’ll have only a few shots in a lifetime to fight or garden together, not millions. If one plugs in the real payoffs that an individual gets, rather than the fictitious averages he’ll never see, the results change. A lot. Strangely enough, average payoffs make mathematical sense only if you revert back to an odd kind of essentialism--like talking about the hypothetical “average person in the street”--instead of particular, individual genes and people (which is anathema for strict gene-centered Darwinians and the evolutionary psychologists). The bottom line: For now you might as well put to one side all the game theory and “cheater detection” arithmetic that floods these books. Better put aside all the tabloid headlines that flow from the lousy arithmetic as well.
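A toy simulation makes the point. The payoffs below loosely follow the garden story above--100 bushels for mutual cooperation, 10 for a raider--while the remaining payoffs and the neighbor’s 50-50 behavior are assumptions of mine: the long-run average looks tidy, but any single five-season gardening lifetime scatters all over the map.

```python
# A sketch of why averages mislead: the long-run expected payoff of a strategy
# can look fine even though any one gardener, playing only a handful of seasons,
# does much better or much worse. Payoff numbers are partly from the garden story,
# partly assumed; the neighbor's 50-50 behavior is likewise an assumption.

import random
import statistics

PAYOFF = {                              # (my move, neighbor's move) -> my bushels
    ("cooperate", "cooperate"): 100,
    ("cooperate", "raid"):       30,    # my yield after being raided (assumed)
    ("raid",      "cooperate"):  10,    # raiding takes time away from my garden
    ("raid",      "raid"):        5,    # we both lose (assumed)
}

def lifetime_yield(my_move, seasons, p_neighbor_cooperates=0.5):
    """Total bushels over a short gardening 'lifetime' against a 50-50 neighbor."""
    total = 0
    for _ in range(seasons):
        neighbor = "cooperate" if random.random() < p_neighbor_cooperates else "raid"
        total += PAYOFF[(my_move, neighbor)]
    return total

random.seed(0)
lifetimes = [lifetime_yield("cooperate", seasons=5) for _ in range(10_000)]
print("long-run average over many lifetimes:", statistics.mean(lifetimes))
print("ten individual 5-season lifetimes:   ", lifetimes[:10])
# The average hovers near 325 bushels, but single lifetimes range from 150 to 500.
```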

And it gets still worse. Even if there were just one gene that made you lust after potato chips, and even if you did happen to do the population genetic arithmetic right, you’d still have to figure out how much fitter one gene is than its alternatives. But to know fitness advantages, you must know the alternatives, not to mention many details of the hunter-gatherer / foraging regimes from way back when. You’ve got to claim that the going-to-war gene(s) is (are) optimal: the best alternative 100,000 years ago.


But Pinker, Stewart and Cohen present no such alternatives. If you agree with them, you’ve got to assume the traits have high “heritability,” the tendency for offspring to resemble their parents. Otherwise, the information from genes to traits won’t get cleanly passed along from generation to generation. No surprise then that Pinker simply assumes that the nature versus nurture controversy’s been resolved, buys still controversial identical twin studies and makes a leap from correlation to causation. Pinker tells us that “much of the variation in personality--about 50%--has genetic causes” and “the biggest influence that parents have on their children is at the moment of conception.” Not only are the numbers off (if we are to believe such numbers at all: one-third would be a better, still dubious, bet, with an equally important contribution from the maternal environment and from the external cultural world*), but also technical notions of “heritability” simply aren’t “genetic causes.” Contrast Feldman’s chapter in “Keywords in Evolutionary Biology”: “Heritability is statistical in nature.” That is, blue eyes might coincidentally be associated with Frank Sinatra, and this association “does not involve a detailed specification of genetic or environmental transmission.” Suggesting that the nature versus nurture controversy has been resolved amounts to yet more irresponsible gene-centric reporting.


****

I assume here that Pinker means “narrow-sense” heritability, because that’s the technical term (partly) reflecting additive gene effects and so the relevant number for evolutionary arguments (it’s the number used in artificial selection, as in plant and animal breeding--if environmental conditions can be carefully controlled). If so, then in the July 31, 1997, issue of Nature, Carnegie Mellon University’s Bruce Devlin, Michael Daniels and Kathryn Roeder once again found a roughly one-third (34%) narrow-sense IQ heritability--with a strong effect attributed to “maternal environment” (the placental environment) accounting for about 20% of the variation among twins. Throughout “How the Mind Works,” Pinker evidently relies on T. Bouchard’s Minnesota twin studies--on 25 monozygotic (identical) twins--but as the Stanford geneticist Marcus W. Feldman notes, Bouchard has not publicly supplied all the data for dizygotic (fraternal) twins raised together and apart so as to properly evaluate this evidence statistically. It’s an open question.

****

Yet even if you managed to resolve the nature versus nurture controversy, you’re not done. The heritability numbers make sense today only when we can (maybe) calculate them. We simply don’t know what the heritability for “face recognition” or “cheater detection” was 100,000 years ago, and this is what matters for our story because that is the time when evolutionary psychologists assume that the important adaptations got cast in genetic concrete. It seems unlikely we’ll ever learn more about those ancient paired identical and nonidentical twins to find out.
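For the record, the textbook twin arithmetic behind such numbers--Falconer’s formula--is simple enough to write down. The correlations below are invented round numbers, not Bouchard’s or Devlin’s data; the point is only that the result is a statistical decomposition of variance, not a “genetic cause.”

```python
# The textbook twin arithmetic (Falconer's formula) behind headlines like
# "50% of personality is genetic": heritability is estimated from the gap
# between identical- and fraternal-twin correlations. The correlations below
# are invented illustrations, not data from any study cited in the review.

def falconer_heritability(r_mz, r_dz):
    """Classic estimate: identical twins share ~all their genes, fraternal twins
    about half, so twice the correlation gap is attributed to additive genes."""
    return 2 * (r_mz - r_dz)

def shared_environment(r_mz, r_dz):
    """Whatever identical-twin similarity is left over gets credited to the
    environment the twins share (home, womb, culture)."""
    return 2 * r_dz - r_mz

r_mz, r_dz = 0.80, 0.55   # hypothetical trait correlations for MZ and DZ twins
print(f"h^2 estimate: {falconer_heritability(r_mz, r_dz):.2f}")     # 0.50
print(f"shared environment: {shared_environment(r_mz, r_dz):.2f}")  # 0.30
# A model letting maternal/placental environment inflate both correlations
# (as Devlin, Daniels and Roeder argue) would assign part of that 0.50 elsewhere.
```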

But we’re still not out of the woods. To jump from a behavior like “warlike aggression” to a “trait” with a coherent, “atomic” evolutionary history like “blue or brown eyes,” behaviors have to atomically “segregate” just like the wrinkled and unwrinkled peas in Mendel’s garden--that is, locally and independently. Otherwise, there’s no way for natural selection to pluck out each atom-like behavior and “tune” it. (In fact, we don’t really even know how “eye color” pans out; we know it’s not like Mendel’s peas, but involves perhaps a dozen interacting genes.)

Pinker proposes a corresponding independence for abstract mental modules like “cheater detection.” For, if the brain’s not modular, changes in one part could affect others, and the evolutionary optimization game runs into trouble. We’re all familiar with how difficult it can be to try to optimize thousands of possibly interacting variables all at once. It’s worse than trying to get your VCR to work; it’s more like trying to tune a television picture by simultaneously adjusting a million knobs. These “laws of correlation and balance”--as Darwin dubbed them in his notebooks--interfere with any simple-minded way to step from genes to behaviors and behaviors to genes. Perhaps non-interaction’s the rule for visual “modules” like “detecting edges” or “finding faces.” But more likely not: Nobody’s even come close to showing that behaviors like “incest avoidance” or the hypothetical mental modules like “cheater detection” are independent or that they segregate as Mendel’s peas do. Pinker and Stewart and Cohen don’t even bother to try.

In the end, the psychologists have already tossed away most of the evolutionary biologists’ trump cards anyway: how to judge whether natural selection’s been at play. Evolutionary psychologists assume that all these behavioral traits--from language to incest avoidance--reached equilibrium 100,000 years ago. But genetic equilibrium means low variation: Everybody’s got the “genes for” the behavior in question (whatever “genes for” means) and, with small differences among everybody, it becomes nearly impossible to figure out fitness differences. Worse, with all the adjudicating evidence tossed back to the distant past--and they’re talking behavioral repertoires leaving few traces, not fossil teeth--there’s going to be little to chew on.


So who knows? With no explicit model advanced, and little evidence, the jury will be out forever. There may well be behaviors that are adaptive or selected or genetically based. Surely some are, but some might not be. There may well be a sound science of evolutionary psychology. But it’s not found in any of the books here. At this point we simply don’t know. The population biologist Robert May, writing about 20 years ago in Nature on “good old sociobiology’s” explanation of the incest taboo, perhaps said it best: “[P]iece after piece can too easily be added to build an enchanting castle that rises free, constrained to earth with too few anchorlines of fact.” It all becomes a court exercise in one person’s clever storytelling set against another’s belief in biology as ideology.

So we come round to the point we started with: For deep reasons, something’s askew with the “modularity of mind” that troubled Turing and troubles Penrose and Searle still. Somehow, consciousness and mind “emerge” as more than the sum of the parts of Turing’s clattering typewriter.
