Gladwell’s fourth book comprises various contributions to The New Yorker and makes for an intriguing and often hilarious look at the hidden extraordinary. He wonders what… hair dye tell[s] us about twentieth-century history, and observes firsthand dog whisperer Cesar Millan’s uncanny ability to understand and be understood by his pack. Gladwell pulls double duty as author and narrator; while his delivery isn’t the most dramatic or commanding, the material is frequently astonishing, and his reading is clear, heartfelt, and makes for genuinely pleasurable listening.

Malcolm Gladwell

WHAT THE DOG SAW

and other adventures

For Henry and David

Preface

1.

When I was a small child, I used to sneak into my father’s study and leaf through the papers on his desk. He is a mathematician. He wrote on graph paper, in pencil—long rows of neatly written numbers and figures. I would sit on the edge of his chair and look at each page with puzzlement and wonder. It seemed miraculous, first of all, that he got paid for what seemed, at the time, like gibberish. But more important, I couldn’t get over the fact that someone whom I loved so dearly did something every day, inside his own head, that I could not begin to understand.

This was actually a version of what I would later learn psychologists call the other minds problem. One-year-olds think that if they like Goldfish Crackers, then Mommy and Daddy must like Goldfish Crackers, too: they have not grasped the idea that what is inside their head is different from what is inside everyone else’s head. Sooner or later, though, children come to understand that Mommy and Daddy don’t necessarily like Goldfish, too, and that moment is one of the great cognitive milestones of human development. Why is a two-year-old so terrible? Because she is systematically testing the fascinating and, to her, utterly novel notion that something that gives her pleasure might not actually give someone else pleasure—and the truth is that as adults we never lose that fascination. What is the first thing that we want to know when we meet someone who is a doctor at a social occasion? It isn’t “What do you do?” We know, sort of, what a doctor does. Instead, we want to know what it means to be with sick people all day long. We want to know what it feels like to be a doctor, because we’re quite sure that it doesn’t feel at all like what it means to sit at a computer all day long, or teach school, or sell cars. Such questions are not dumb or obvious. Curiosity about the interior life of other people’s day-to-day work is one of the most fundamental of human impulses, and that same impulse is what led to the writing you now hold in your hands.

2.

All the pieces in What the Dog Saw come from the pages of The New Yorker, where I have been a staff writer since 1996. Out of the countless articles I’ve written over that period, these are my favorites. I’ve grouped them into three categories. The first section is about obsessives and what I like to call minor geniuses—not Einstein and Winston Churchill and Nelson Mandela and the other towering architects of the world in which we live, but people like Ron Popeil, who sold the Chop-O-Matic, and Shirley Polykoff, who famously asked, “Does she or doesn’t she? Only her hairdresser knows for sure.” The second section is devoted to theories, to ways of organizing experience. How should we think about homelessness, or financial scandals, or a disaster like the crash of the Challenger? The third section wonders about the predictions we make about people. How do we know whether someone is bad, or smart, or capable of doing something really well? As you will see, I’m skeptical about how accurately we can make any of those judgments.

In the best of these pieces, what we think isn’t the issue. Instead, I’m more interested in describing what people who think about homelessness or ketchup or financial scandals think about homelessness or ketchup or financial scandals. I don’t know what to conclude about the Challenger crash. It’s gibberish to me—neatly printed indecipherable lines of numbers and figures on graph paper. But what if we look at that problem through someone else’s eyes, from inside someone else’s head?

You will, for example, come across an article in which I try to understand the difference between choking and panicking. The piece was inspired by John F. Kennedy Jr.’s fatal plane crash in July of 1999. He was a novice pilot in bad weather who “lost the horizon” (as pilots like to say) and went into a spiral dive. To understand what he experienced, I had a pilot take me up in the same kind of plane that Kennedy flew, in the same kind of weather, and I had him take us into a spiral dive. It wasn’t a gimmick. It was a necessity. I wanted to understand what crashing a plane that way felt like, because if you want to make sense of that crash, it’s simply not enough to just know what Kennedy did. “The Picture Problem” is about how to make sense of satellite images, like the pictures the Bush administration thought it had of Saddam Hussein’s weapons of mass destruction. I got started on that topic because I spent an afternoon with a radiologist looking at mammograms, and halfway through—completely unprompted—he mentioned that he imagined that the problems people like him had in reading breast X-rays were a lot like the problems people in the CIA had in reading satellite photos. I wanted to know what went on inside his head, and he wanted to know what went on inside the heads of CIA officers. I remember, at that moment, feeling absolutely giddy. Then there’s the article after which this book is named. It’s a profile of Cesar Millan, the so-called dog whisperer. Millan can calm the angriest and most troubled of animals with the touch of his hand. What goes on inside Millan’s head as he does that? That was what inspired me to write the piece. But after I got halfway through my reporting, I realized there was an even better question: When Millan performs his magic, what goes on inside the dog’s head? That’s what we really want to know—what the dog saw.

3.

The question I get asked most often is, Where do you get your ideas? I never do a good job of answering that. I usually say something vague about how people tell me things, or my editor, Henry, gives me a book that gets me thinking, or I say that I just plain don’t remember. When I was putting together this collection, I thought I’d try to figure that out once and for all. There is, for example, a long and somewhat eccentric piece in this book on why no one has ever come up with a ketchup to rival Heinz. (How do we feel when we eat ketchup?) That idea came from my friend Dave, who is in the grocery business. We have lunch every now and again, and he is the kind of person who thinks about things like that. (Dave also has some fascinating theories about melons, but that’s an idea I’m saving for later.) Another article, called “True Colors,” is about the women who pioneered the hair color market. I got started on that because I somehow got it in my head that it would be fun to write about shampoo. (I think I was desperate for a story.) Many interviews later, an exasperated Madison Avenue type said to me, “Why on earth are you writing about shampoo? Hair color is much more interesting.” And so it is.

The trick to finding ideas is to convince yourself that everyone and everything has a story to tell. I say trick, but what I really mean is challenge, because it’s a very hard thing to do. Our instinct as humans, after all, is to assume that most things are not interesting. We flip through the channels on the television and reject ten before we settle on one. We go to a bookstore and look at twenty novels before we pick the one we want. We filter and rank and judge. We have to. There’s just so much out there. But if you want to be a writer, you have to fight that instinct every day. Shampoo doesn’t seem interesting? Well, dammit, it must be, and if it isn’t, I have to believe that it will ultimately lead me to something that is. (I’ll let you judge whether I’m right in that instance.)

The other trick to finding ideas is figuring out the difference between power and knowledge. Of all the people whom you’ll meet in this volume, very few of them are powerful, or even famous. When I said that I’m most interested in minor geniuses, that’s what I meant. You don’t start at the top if you want to find the story. You start in the middle, because it’s the people in the middle who do the actual work in the world. My friend Dave, who taught me about ketchup, is a middle guy. He’s worked on ketchup. That’s how he knows about it. People at the top are self-conscious about what they say (and rightfully so) because they have position and privilege to protect—and self-consciousness is the enemy of “interestingness.” In “The Pitchman” you’ll meet Arnold Morris, who gave me the pitch for the “Dial-O-Matic” vegetable slicer one summer day in his kitchen on the Jersey Shore: “Come on over, folks. I’m going to show you the most amazing slicing machine you have ever seen in your life,” he began. He picked up a package of barbecue spices and used it as a prop. “Take a look at this!” He held it in the air as if he were holding up a Tiffany vase.

That’s where you find stories, in someone’s kitchen on the Jersey Shore.

4.

Growing up, I never wanted to be a writer. I wanted to be a lawyer, and then in my last year of college, I decided I wanted to be in advertising. I applied to eighteen advertising agencies in the city of Toronto and received eighteen rejection letters, which I taped in a row on my wall. (I still have them somewhere.) I thought about graduate school, but my grades weren’t quite good enough. I applied for a fellowship to go somewhere exotic for a year and was rejected. Writing was the thing I ended up doing by default, for the simple reason that it took me forever to realize that writing could be a job. Jobs were things that were serious and daunting. Writing was fun.

After college, I worked for six months at a little magazine in Indiana called the American Spectator. I moved to Washington, DC, and freelanced for a few years, and eventually caught on with the Washington Post—and from there came to The New Yorker. Along the way, writing has never ceased to be fun, and I hope that buoyant spirit is evident in these pieces. Nothing frustrates me more than someone who reads something of mine or anyone else’s and says, angrily, “I don’t buy it.” Why are they angry? Good writing does not succeed or fail on the strength of its ability to persuade. Not the kind of writing that you’ll find in this book, anyway. It succeeds or fails on the strength of its ability to engage you, to make you think, to give you a glimpse into someone else’s head—even if in the end you conclude that someone else’s head is not a place you’d really like to be. I’ve called these pieces adventures, because that’s what they are intended to be. Enjoy yourself.

PART ONE

Obsessives, Pioneers, and Other Varieties of Minor Genius

“To a worm in horseradish, the world is horseradish.”

The Pitchman

RON POPEIL AND THE CONQUEST OF THE AMERICAN KITCHEN

1.

The extraordinary story of the Ronco Showtime Rotisserie & BBQ begins with Nathan Morris, the son of the shoemaker and cantor Kidders Morris, who came over from the Old Country in the 1880s, and settled in Asbury Park, New Jersey. Nathan Morris was a pitchman. He worked the boardwalk and the five-and-dimes and county fairs up and down the Atlantic coast, selling kitchen gadgets made by Acme Metal, out of Newark. In the early forties, Nathan set up N. K. Morris Manufacturing—turning out the KwiKi-Pi and the Morris Metric Slicer—and perhaps because it was the Depression and job prospects were dim, or perhaps because Nathan Morris made such a compelling case for his new profession, one by one the members of his family followed him into the business. His sons Lester Morris and Arnold (the Knife) Morris became his pitchmen. He set up his brother-in-law Irving Rosenbloom, who was to make a fortune on Long Island in plastic goods, including a hand grater of such excellence that Nathan paid homage to it with his own Dutch Kitchen Shredder Grater. He partnered with his brother Al, whose own sons worked the boardwalk, alongside a gangly Irishman by the name of Ed McMahon. Then, one summer just before the war, Nathan took on as an apprentice his nephew Samuel Jacob Popeil. S.J., as he was known, was so inspired by his uncle Nathan that he went on to found Popeil Brothers, based in Chicago, and brought the world the Dial-O-Matic, the Chop-O-Matic, and the Veg-O-Matic. S. J. Popeil had two sons. The elder was Jerry, who died young. The younger is familiar to anyone who has ever watched an infomercial on late-night television. His name is Ron Popeil.

In the postwar years, many people made the kitchen their life’s work. There were the Klinghoffers of New York, one of whom, Leon, died tragically in 1985, during the Achille Lauro incident, when he was pushed overboard in his wheelchair by Palestinian terrorists. They made the Roto-Broil 400, back in the fifties, an early rotisserie for the home, which was pitched by Lester Morris. There was Lewis Salton, who escaped the Nazis with an English stamp from his father’s collection and parlayed it into an appliance factory in the Bronx. He brought the world the Salton Hotray—a sort of precursor to the microwave—and today Salton, Inc., sells the George Foreman Grill.

But no rival quite matched the Morris-Popeil clan. They were the first family of the American kitchen. They married beautiful women and made fortunes and stole ideas from one another and lay awake at night thinking of a way to chop an onion so that the only tears you shed were tears of joy. They believed that it was a mistake to separate product development from marketing, as most of their contemporaries did, because to them the two were indistinguishable: the object that sold best was the one that sold itself. They were spirited, brilliant men. And Ron Popeil was the most brilliant and spirited of them all. He was the family’s Joseph, exiled to the wilderness by his father only to come back and make more money than the rest of the family combined. He was a pioneer in taking the secrets of the boardwalk pitchmen to the television screen. And, of all the kitchen gadgets in the Morris-Popeil pantheon, nothing has ever been quite so ingenious in its design, or so broad in its appeal, or so perfectly representative of the Morris-Popeil belief in the interrelation of the pitch and the object being pitched, as the Ronco Showtime Rotisserie & BBQ, the countertop oven that can be bought for four payments of $39.95 and may be, dollar for dollar, the finest kitchen appliance ever made.

2.

Ron Popeil is a handsome man, thick through the chest and shoulders, with a leonine head and striking, oversize features. He is in his midsixties and lives in Beverly Hills, halfway up Coldwater Canyon, in a sprawling bungalow with a stand of avocado trees and a vegetable garden out back. In his habits Popeil is, by Beverly Hills standards, old school. He carries his own bags. He has been known to eat at Denny’s. He wears T-shirts and sweatpants. As often as twice a day, he can be found buying poultry or fish or meat at one of the local grocery stores—in particular Costco, which he favors because the chickens there are $0.99 a pound, as opposed to $1.49 at standard supermarkets. Whatever he buys, he brings back to his kitchen, a vast room overlooking the canyon, with an array of industrial appliances, a collection of fifteen hundred bottles of olive oil, and, in the corner, an oil painting of him, his fourth wife, Robin (a former Frederick’s of Hollywood model), and their baby daughter, Contessa. On paper, Popeil owns a company called Ronco Inventions, which has two hundred employees and a couple of warehouses in Chatsworth, California, but the heart of Ronco is really Ron working out of his house, and many of the key players are really just friends of Ron’s who work out of their houses, too, and who gather in Ron’s kitchen when, every now and again, Ron cooks a soup and wants to talk things over.

In the last thirty years, Ron has invented a succession of kitchen gadgets, among them the Ronco Electric Food Dehydrator and the Popeil Automatic Pasta and Sausage Maker, which featured a thrust bearing made of the same material used in bulletproof glass. He works steadily, guided by flashes of inspiration. In August of 2000, for instance, he suddenly realized what product should follow the Showtime Rotisserie. He and his right-hand man, Alan Backus, had been working on a bread-and-batter machine, which would take up to ten pounds of chicken wings or scallops or shrimp or fish fillets and do all the work—combining the eggs, the flour, the breadcrumbs—in a few minutes, without dirtying either the cook’s hands or the machine. “Alan goes to Korea, where we have some big orders coming through,” Ron explained recently over lunch—a hamburger, medium-well, with fries—in the VIP booth by the door in the Polo Lounge, at the Beverly Hills Hotel. “I call Alan on the phone. I wake him up. It was two in the morning there. And these are my exact words: ‘Stop. Do not pursue the bread-and-batter machine. I will pick it up later. This other project needs to come first.’” The other project, his inspiration, was a device capable of smoking meats indoors without creating odors that can suffuse the air and permeate furniture. Ron had a version of the indoor smoker on his porch—“a Rube Goldberg kind of thing” that he’d worked on a year earlier—and, on a whim, he cooked a chicken in it. “That chicken was so good that I said to myself”—and with his left hand Ron began to pound on the table—“This is the best chicken sandwich I have ever had in my life.” He turned to me: “How many times have you had a smoked-turkey sandwich? Maybe you have a smoked-turkey or a smoked-chicken sandwich once every six months. Once! How many times have you had smoked salmon? Aah. More. I’m going to say you come across smoked salmon as an hors d’oeuvre or an entrée once every three months. Baby-back ribs? Depends on which restaurant you order ribs at. Smoked sausage, same thing. You touch on smoked food”—he leaned in and poked my arm for emphasis—“but I know one thing, Malcolm. You don’t have a smoker.”

The idea for the Showtime came about in the same way. Ron was at Costco when he suddenly realized that there was a long line of customers waiting to buy chickens from the in-store rotisserie ovens. They touched on rotisserie chicken, but Ron knew one thing: they did not have a rotisserie oven. Ron went home and called Backus. Together, they bought a glass aquarium, a motor, a heating element, a spit rod, and a handful of other spare parts, and began tinkering. Ron wanted something big enough for a fifteen-pound turkey but small enough to fit into the space between the base of an average kitchen cupboard and the countertop. He didn’t want a thermostat, because thermostats break, and the constant clicking on and off of the heat prevents the even, crispy browning that he felt was essential. And the spit rod had to rotate on the horizontal axis, not the vertical axis, because if you cooked a chicken or a side of beef on the vertical axis the top would dry out and the juices would drain to the bottom. Roderick Dorman, Ron’s patent attorney, says that when he went over to Coldwater Canyon he often saw five or six prototypes on the kitchen counter, lined up in a row. Ron would have a chicken in each of them, so that he could compare the consistency of the flesh and the browning of the skin, and wonder if, say, there was a way to rotate a shish kebab as it approached the heating element so that the inner side of the kebab would get as brown as the outer part. By the time Ron finished, the Showtime prompted no fewer than two dozen patent applications. It was equipped with the most powerful motor in its class. It had a drip tray coated with a nonstick ceramic, which was easily cleaned, and the oven would still work even after it had been dropped on a concrete or stone surface ten times in succession, from a distance of three feet. To Ron, there was no question that it made the best chicken he had ever had in his life.

It was then that Ron filmed a television infomercial for the Showtime, twenty-eight minutes and thirty seconds in length. It was shot live before a studio audience, and aired for the first time on August 8, 1998. It has run ever since, often in the wee hours of the morning, or on obscure cable stations, alongside the get-rich schemes and the Three’s Company reruns. The response to it has been such that within the next three years total sales of the Showtime should exceed a billion dollars. Ron Popeil didn’t use a single focus group. He had no market researchers, R&D teams, public-relations advisers, Madison Avenue advertising companies, or business consultants. He did what the Morrises and the Popeils had been doing for most of the century, and what all the experts said couldn’t be done in the modern economy. He dreamed up something new in his kitchen and went out and pitched it himself.

3.

Nathan Morris, Ron Popeil’s great-uncle, looked a lot like Cary Grant. He wore a straw boater. He played the ukulele, drove a convertible, and composed melodies for the piano. He ran his business out of a low-slung, whitewashed building on Ridge Avenue, near Asbury Park, with a little annex in the back where he did pioneering work with Teflon. He had certain eccentricities, such as a phobia he developed about traveling beyond Asbury Park without the presence of a doctor. He feuded with his brother Al, who subsequently left in a huff for Atlantic City, and then with his nephew S. J. Popeil, whom Nathan considered insufficiently grateful for the start he had given him in the kitchen-gadget business. That second feud led to a climactic legal showdown over S. J. Popeil’s Chop-O-Matic, a food preparer with a pleated, W-shaped blade rotated by a special clutch mechanism. The Chop-O-Matic was ideal for making coleslaw and chopped liver, and when Morris introduced a strikingly similar product, called the Roto-Chop, S. J. Popeil sued his uncle for patent infringement. (As it happened, the Chop-O-Matic itself seemed to have been inspired by the Blitzhacker, from Switzerland, and S.J. later lost a patent judgment to the Swiss.)

The two squared off in Trenton, in May of 1958, in a courtroom jammed with Morrises and Popeils. When the trial opened, Nathan Morris was on the stand, being cross-examined by his nephew’s attorneys, who were out to show him that he was no more than a huckster and a copycat. At a key point in the questioning, the judge suddenly burst in. “He took the index finger of his right hand and he pointed it at Morris,” Jack Dominik, Popeil’s longtime patent lawyer, recalls, “and as long as I live I will never forget what he said. ‘I know you! You’re a pitchman! I’ve seen you on the boardwalk!’ And Morris pointed his index finger back at the judge and shouted, ‘No! I’m a manufacturer. I’m a dignified manufacturer, and I work with the most eminent of counsel!’” (Nathan Morris, according to Dominik, was the kind of man who referred to everyone he worked with as eminent.) “At that moment,” Dominik goes on, “Uncle Nat’s face was getting red and the judge’s was getting redder, so a recess was called.” What happened later that day is best described in Dominik’s unpublished manuscript, “The Inventions of Samuel Joseph Popeil by Jack E. Dominik—His Patent Lawyer.” Nathan Morris had a sudden heart attack, and S.J. was guilt-stricken. “Sobbing ensued,” Dominik writes. “Remorse set in. The next day, the case was settled. Thereafter, Uncle Nat’s recovery from his previous day’s heart attack was nothing short of a miracle.”

Nathan Morris was a performer, like so many of his relatives, and pitching was, first and foremost, a performance. It’s said that Nathan’s nephew Archie (the Pitchman’s Pitchman) Morris once sold, over a long afternoon, gadget after gadget to a well-dressed man. At the end of the day, Archie watched the man walk away, stop and peer into his bag, and then dump the whole lot into a nearby garbage can. The Morrises were that good. “My cousins could sell you an empty box,” Ron says.

The last of the Morrises to be active in the pitching business is Arnold (the Knife) Morris, so named because of his extraordinary skill with the Sharpcut, the forerunner of the Ginsu. He is in his early seventies, a cheerful, impish man with a round face and a few wisps of white hair, and a trademark move whereby, after cutting a tomato into neat, regular slices, he deftly lines the pieces up in an even row against the flat edge of the blade. Today, he lives in Ocean Township, a few miles from Asbury Park, with Phyllis, his wife of twenty-nine years, whom he refers to (with the same irresistible conviction that he might use to describe, say, the Feather Touch Knife) as “the prettiest girl in Asbury Park.” One morning recently, he sat in his study and launched into a pitch for the Dial-O-Matic, a slicer produced by S. J. Popeil some forty years ago.

“Come on over, folks. I’m going to show you the most amazing slicing machine you have ever seen in your life,” he began. Phyllis, sitting nearby, beamed with pride. He picked up a package of barbecue spices, which Ron Popeil sells alongside his Showtime Rotisserie, and used it as a prop. “Take a look at this!” He held it in the air as if he were holding up a Tiffany vase. He talked about the machine’s prowess at cutting potatoes, then onions, then tomatoes. His voice, a marvelous instrument inflected with the rhythms of the Jersey Shore, took on a singsong quality: “How many cut tomatoes like this? You stab it. You jab it. The juices run down your elbow. With the Dial-O-Matic, you do it a little differently. You put it in the machine and you wiggle”—he mimed fixing the tomato to the bed of the machine. “The tomato! Lady! The tomato! The more you wiggle, the more you get. The tomato! Lady! Every slice comes out perfectly, not a seed out of place. But the thing I love my Dial-O-Matic for is coleslaw. My mother-in-law used to take her cabbage and do this.” He made a series of wild stabs at an imaginary cabbage. “I thought she was going to commit suicide. Oh, boy, did I pray—that she wouldn’t slip! Don’t get me wrong. I love my mother-in-law. It’s her daughter I can’t figure out. You take the cabbage. Cut it in half. Coleslaw, hot slaw. Pot slaw. Liberty slaw. It comes out like shredded wheat…”

It was a vaudeville monologue, except that Arnold wasn’t merely entertaining; he was selling. “You can take a pitchman and make a great actor out of him, but you cannot take an actor and always make a great pitchman out of him,” he says. The pitchman must make you applaud and take out your money. He must be able to execute what in pitchman’s parlance is called “the turn”—the perilous, crucial moment where he goes from entertainer to businessman. If, out of a crowd of fifty, twenty-five people come forward to buy, the true pitchman sells to only twenty of them. To the remaining five, he says, “Wait! There’s something else I want to show you!” Then he starts his pitch again, with slight variations, and the remaining four or five become the inner core of the next crowd, hemmed in by the people around them, and so eager to pay their money and be on their way that they start the selling frenzy all over again. The turn requires the management of expectation. That’s why Arnold always kept a pineapple tantalizingly perched on his stand. “For forty years, I’ve been promising to show people how to cut the pineapple, and I’ve never cut it once,” he says. “It got to the point where a pitchman friend of mine went out and bought himself a plastic pineapple. Why would you cut the pineapple? It cost a couple bucks. And if you cut it they’d leave.” Arnold says that he once hired some guys to pitch a vegetable slicer for him at a fair in Danbury, Connecticut, and became so annoyed at their lackadaisical attitude that he took over the demonstration himself. They were, he says, waiting for him to fail: he had never worked that particular slicer before and, sure enough, he was massacring the vegetables. Still, in a single pitch he took in $200. “Their eyes popped out of their heads,” Arnold recalls. “They said, ‘We don’t understand it. You don’t even know how to work the damn machine.’ I said, ‘But I know how to do one thing better than you.’ They said, ‘What’s that?’ I said, ‘I know how to ask for the money.’ And that’s the secret to the whole damn business.”

4.

Ron Popeil started pitching his father’s kitchen gadgets at the Maxwell Street flea market in Chicago, in the midfifties. He was thirteen. Every morning, he would arrive at the market at five and prepare fifty pounds each of onions, cabbages, and carrots, and a hundred pounds of potatoes. He sold from six in the morning until four in the afternoon, bringing in as much as $500 a day. In his late teens, he started doing the state- and county-fair circuit, and then he scored a prime spot in the Woolworth’s at State and Washington, in the Loop, which at the time was the top-grossing Woolworth’s store in the country. He was making more than the manager of the store, selling the Chop-O-Matic and the Dial-O-Matic. He dined at the Pump Room and wore a Rolex and rented $150-a-night hotel suites. In pictures from the period, he is beautiful, with thick dark hair and blue-green eyes and sensuous lips, and, several years later, when he moved his office to 919 Michigan Avenue, he was called the Paul Newman of the Playboy Building. Mel Korey, a friend of Ron’s from college and his first business partner, remembers the time he went to see Ron pitch the Chop-O-Matic at the State Street Woolworth’s. “He was mesmerizing,” Korey says. “There were secretaries who would take their lunch break at Woolworth’s to watch him because he was so good-looking. He would go into the turn, and people would just come running.” Several years ago, Ron’s friend Steve Wynn, the founder of the Mirage resorts, went to visit Michael Milken in prison. They were near a television, and happened to catch one of Ron’s infomercials just as he was doing the countdown, a routine taken straight from the boardwalk, where he says, “You’re not going to spend two hundred dollars, not a hundred and eighty dollars, not one-seventy, not one-sixty…” It’s a standard pitchman’s gimmick: it sounds dramatic only because the starting price is set way up high. But something about the way Ron did it was irresistible. As he got lower and lower, Wynn and Milken—who probably know as much about profit margins as anyone in America—cried out in unison, “Stop, Ron! Stop!”

Was Ron the best? The only attempt to settle the question definitively was made some forty years ago when Ron and Arnold were working a knife set at the Eastern States Exposition, in West Springfield, Massachusetts. A third man, Frosty Wishon, who was a legend in his own right, was there, too. “Frosty was a well-dressed, articulate individual and a good salesman,” Ron says. “But he thought he was the best. So I said, ‘Well, guys, we’ve got a ten-day show, eleven, maybe twelve hours a day. We’ll each do a rotation, and we’ll compare how much we sell.’” In Morris-Popeil lore, this is known as “the shoot-out,” and no one has ever forgotten the outcome. Ron beat Arnold, but only by a whisker—no more than a few hundred dollars. Frosty Wishon, meanwhile, sold only half as much as either of his rivals. “You have no idea the pressure Frosty was under,” Ron continues. “He came up to me at the end of the show and said, ‘Ron, I will never work with you again as long as I live.’”

No doubt Frosty Wishon was a charming and persuasive person, but he assumed that this was enough—that the rules of pitching were the same as the rules of celebrity endorsement. When Michael Jordan pitches McDonald’s hamburgers, Michael Jordan is the star. But when Ron Popeil or Arnold Morris pitched, say, the Chop-O-Matic, his gift was to make the Chop-O-Matic the star. It was, after all, an innovation. It represented a different way of dicing onions and chopping liver: it required consumers to rethink the way they went about their business in the kitchen. Like most great innovations, it was disruptive. And how do you persuade people to disrupt their lives? Not merely by ingratiation or sincerity, and not by being famous or beautiful. You have to explain the invention to customers—not once or twice but three or four times, with a different twist each time. You have to show them exactly how it works and why it works, and make them follow your hands as you chop liver with it, and then tell them precisely how it fits into their routine, and, finally, sell them on the paradoxical fact that, revolutionary as the gadget is, it’s not at all hard to use.

Thirty years ago, the videocassette recorder came on the market, and it was a disruptive product, too: it was supposed to make it possible to tape a television show so that no one would ever again be chained to the prime-time schedule. Yet, as ubiquitous as the VCR became, it was seldom put to that purpose. That’s because the VCR was never pitched: no one ever explained the gadget to American consumers—not once or twice but three or four times—and no one showed them exactly how it worked or how it would fit into their routine, and no pair of hands guided them through every step of the process. All the VCR-makers did was hand over the box with a smile and a pat on the back, tossing in an instruction manual for good measure. Any pitchman could have told you that wasn’t going to do it.

Once, when I was over at Ron’s house in Coldwater Canyon, sitting on one of the high stools in his kitchen, he showed me what real pitching is all about. He was talking about how he had just had dinner with the actor Ron Silver, who was playing Ron’s friend Robert Shapiro in a new movie about the O. J. Simpson trial. “They shave the back of Ron Silver’s head so that he’s got a bald spot, because, you know, Bob Shapiro’s got a bald spot back there, too,” Ron said. “So I say to him, ‘You’ve gotta get GLH.’” GLH, one of Ron’s earlier products, is an aerosol spray designed to thicken the hair and cover up bald spots. “I told him, ‘It will make you look good. When you’ve got to do the scene, you shampoo it out.’”

At this point, the average salesman would have stopped. The story was an aside, no more. We had been discussing the Showtime Rotisserie, and on the counter behind us was a Showtime cooking a chicken and next to it a Showtime cooking baby-back ribs, and on the table in front of him Ron’s pasta maker was working, and he was frying some garlic so that we could have a little lunch. But now that he had told me about GLH, it was unthinkable that he would not also show me its wonders. He walked quickly over to a table at the other side of the room, talking as he went. “People always ask me, ‘Ron, where did you get that name GLH?’ I made it up. Great-Looking Hair.” He picked up a can. “We make it in nine different colors. This is silver-black.” He picked up a hand mirror and angled it above his head so that he could see his bald spot. “Now, the first thing I’ll do is spray it where I don’t need it.” He shook the can and began spraying the crown of his head, talking all the while. “Then I’ll go to the area itself.” He pointed to his bald spot. “Right here. OK. Now I’ll let that dry. Brushing is fifty percent of the way it’s going to look.” He began brushing vigorously, and suddenly Ron Popeil had what looked like a complete head of hair. “Wow,” I said. Ron glowed. “And you tell me ‘Wow.’ That’s what everyone says. ‘Wow.’ That’s what people say who use it. ‘Wow.’ If you go outside”—he grabbed me by the arm and pulled me out onto the deck—“if you are in bright sunlight or daylight, you cannot tell that I have a big bald spot in the back of my head. It really looks like hair, but it’s not hair. It’s quite a product. It’s incredible. Any shampoo will take it out. You know who would be a great candidate for this? Al Gore. You want to see how it feels?” Ron inclined the back of his head toward me. I had said, “Wow,” and had looked at his hair inside and outside, but the pitchman in Ron Popeil wasn’t satisfied. I had to feel the back of his head. I did. It felt just like real hair.

5.

Ron Popeil inherited more than the pitching tradition of Nathan Morris. He was very much the son of S. J. Popeil, and that fact, too, goes a long way toward explaining the success of the Showtime Rotisserie. S.J. had a ten-room apartment high in the Drake Towers, near the top of Chicago’s Magnificent Mile. He had a chauffeured Cadillac limousine with a car phone, a rarity in those days, which he delighted in showing off (as in “I’m calling you from the car”). He wore three-piece suits and loved to play the piano. He smoked cigars and scowled a lot and made funny little grunting noises as he talked. He kept his money in T-bills. His philosophy was expressed in a series of epigrams: To his attorney, “If they push you far enough, sue”; to his son, “It’s not how much you spend, it’s how much you make.” And, to a designer who expressed doubts about the utility of one of his greatest hits, the Pocket Fisherman, “It’s not for using; it’s for giving.” In 1974, S.J.’s second wife, Eloise, decided to have him killed, so she hired two hit men—one of whom, aptly, went by the name of Mr. Peeler. At the time, she was living at the Popeil estate in Newport Beach with her two daughters and her boyfriend, a thirty-seven-year-old machinist. When, at Eloise’s trial, S.J. was questioned about the machinist, he replied, “I was kind of happy to have him take her off my hands.” That was vintage S.J. But eleven months later, after Eloise got out of prison, S.J. married her again. That was vintage S.J., too. As a former colleague of his puts it, “He was a strange bird.”

S. J. Popeil was a tinkerer. In the middle of the night, he would wake up and make frantic sketches on a pad he kept on his bedside table. He would disappear into his kitchen for hours and make a huge mess, and come out with a faraway look on his face. He loved standing behind his machinists, peering over their shoulders while they were assembling one of his prototypes. In the late forties and early fifties, he worked almost exclusively in plastic, reinterpreting kitchen basics with a subtle, modernist flair. “Popeil Brothers made these beautiful plastic flour sifters,” Tim Samuelson, a curator at the Chicago Historical Society and a leading authority on the Popeil legacy, says. “They would use contrasting colors, or a combination of opaque plastic with a translucent swirl plastic.” Samuelson became fascinated with all things Popeil after he acquired an original Popeil Brothers doughnut maker, in red-and-white plastic, which he felt “had beautiful lines”; to this day, in the kitchen of his Hyde Park high-rise, he uses the Chop-O-Matic in the preparation of salad ingredients. “There was always a little twist to what he did,” Samuelson goes on. “Take the Popeil automatic egg turner. It looks like a regular spatula, but if you squeeze the handle the blade turns just enough to flip a fried egg.”

Walter Herbst, a designer whose firm worked with Popeil Brothers for many years, says that S.J.’s modus operandi was to “come up with a holistic theme. He’d arrive in the morning with it. It would be something like”—Herbst assumes S.J.’s gruff voice—“ ‘We need a better way to shred cabbage.’ It was a passion, an absolute goddam passion. One morning, he must have been eating grapefruit, because he comes to work and calls me and says, ‘We need a better way to cut grapefruit!’” The idea they came up with was a double-bladed paring knife, with the blades separated by a fraction of an inch so that both sides of the grapefruit membrane could be cut simultaneously. “There was a little grocery store a few blocks away,” Herbst says. “So S.J. sends the chauffeur out for grapefruit. How many? Six. Well, over the period of a couple of weeks, six turns to twelve and twelve turns to twenty, until we were cutting thirty to forty grapefruits a day. I don’t know if that little grocery store ever knew what happened.”

S. J. Popeil’s finest invention was undoubtedly the Veg-O-Matic, which came on the market in 1960 and was essentially a food processor, a Cuisinart without the motor. The heart of the gadget was a series of slender, sharp blades strung like guitar strings across two Teflon-coated metal rings, which were made in Woodstock, Illinois, from 364 Alcoa, a special grade of aluminum. When the rings were aligned one on top of the other so that the blades ran parallel, a potato or an onion pushed through would come out in perfect slices. If the top ring was rotated, the blades formed a crosshatch, and a potato or an onion pushed through would come out diced. The rings were housed in a handsome plastic assembly, with a plunger to push the vegetables through the blades. Technically, the Veg-O-Matic was a triumph: the method of creating blades strong enough to withstand the assault of vegetables received a US patent. But from a marketing perspective it posed a problem. S.J.’s products had hitherto been sold by pitchmen armed with a mound of vegetables meant to carry them through a day’s worth of demonstrations. But the Veg-O-Matic was too good. In a single minute, according to the calculations of Popeil Brothers, it could produce 120 egg wedges, 300 cucumber slices, 1,150 potato shoestrings, or 3,000 onion dices. It could go through what used to be a day’s worth of vegetables in a matter of minutes. The pitchman could no longer afford to pitch to just a hundred people at a time; he had to pitch to a hundred thousand. The Veg-O-Matic needed to be sold on television, and one of the very first pitchmen to grasp this fact was Ron Popeil.

In the summer of 1964, just after the Veg-O-Matic was introduced, Mel Korey joined forces with Ron Popeil in a company called Ronco. They shot a commercial for the Veg-O-Matic for $500, a straightforward pitch shrunk to two minutes, and set out from Chicago for the surrounding towns of the Midwest. They cold-called local department stores and persuaded them to carry the Veg-O-Matic on guaranteed sale, which meant that whatever the stores didn’t sell could be returned. Then they visited the local television station and bought a two- or three-week run of the cheapest airtime they could find, praying that it would be enough to drive traffic to the store. “We got Veg-O-Matics wholesale for $3.42,” Korey says. “They retailed for $9.95, and we sold them to the stores for $7.46, which meant that we had four dollars to play with. If I spent a hundred dollars on television, I had to sell twenty-five Veg-O-Matics to break even.” It was clear, in those days, that you could use television to sell kitchen products if you were Procter & Gamble. It wasn’t so clear that this would work if you were Mel Korey and Ron Popeil, two pitchmen barely out of their teens selling a combination slicer-dicer that no one had ever heard of. They were taking a wild gamble, and, to their amazement, it paid off. “They had a store in Butte, Montana—Hennessy’s,” Korey goes on, thinking back to those first improbable years. “Back then, people there were still wearing peacoats. The city was mostly bars. It had just a few three-story buildings. There were twenty-seven thousand people, and one TV station. I had the Veg-O-Matic, and I go to the store, and they said, ‘We’ll take a case. We don’t have a lot of traffic here.’ I go to the TV station and the place is a dump. The only salesperson was going blind and deaf. So I do a schedule. For five weeks, I spend three hundred and fifty dollars. I figure if I sell a hundred and seventy-four machines—six cases—I’m happy. I go back to Chicago, and I walk into the office one morning and the phone is ringing. They said, ‘We sold out. You’ve got to fly us another six cases of Veg-O-Matics.’ The next week, on Monday, the phone rings. It’s Butte again: ‘We’ve got a hundred and fifty oversold.’ I fly him another six cases. Every few days after that, whenever the phone rang we’d look at each other and say, ‘Butte, Montana.’” Even today, decades later, Korey can scarcely believe it. “How many homes in total in that town? Maybe several thousand? We ended up selling two thousand five hundred Veg-O-Matics in five weeks!”

Why did the Veg-O-Matic sell so well? Doubtless, Americans were eager for a better way of slicing vegetables. But it was more than that: the Veg-O-Matic represented a perfect marriage between the medium (television) and the message (the gadget). The Veg-O-Matic was, in the relevant sense, utterly transparent. You took the potato and you pushed it through the Teflon-coated rings and—voilà!—you had French fries. There were no buttons being pressed, no hidden and intimidating gears: you could show-and-tell the Veg-O-Matic in a two-minute spot and allay everyone’s fears about a daunting new technology. More specifically, you could train the camera on the machine and compel viewers to pay total attention to the product you were selling. TV allowed you to do even more effectively what the best pitchmen strove to do in live demonstrations—make the product the star.

6.

This was a lesson Ron Popeil never forgot. In his infomercial for the Showtime Rotisserie, he opens not with himself but with a series of shots of meat and poultry, glistening almost obscenely as they rotate in the Showtime. A voice-over describes each shot: a “delicious six-pound chicken,” a “succulent whole duckling,” a “mouthwatering pork-loin roast…” Only then do we meet Ron, in a sports coat and jeans. He explains the problems of conventional barbecues, how messy and unpleasant they are. He bangs a hammer against the door of the Showtime, to demonstrate its strength. He deftly trusses a chicken, impales it on the patented two-pronged Showtime spit rod, and puts it into the oven. Then he repeats the process with a pair of chickens, salmon steaks garnished with lemon and dill, and a rib roast. All the time, the camera is on his hands, which are in constant motion, manipulating the Showtime apparatus gracefully, with his calming voice leading viewers through every step: “All I’m going to do here is slide it through like this. It goes in very easily. I’ll match it up over here. What I’d like to do is take some herbs and spices here. All I’ll do is slide it back. Raise up my glass door here. I’ll turn it to a little over an hour…Just set it and forget it.”

Why does this work so well? Because the Showtime—like the Veg-O-Matic before it—was designed to be the star. From the very beginning, Ron insisted that the entire door be a clear pane of glass, and that it slant back to let in the maximum amount of light, so that the chicken or the turkey or the baby-back ribs turning inside would be visible at all times. Alan Backus says that after the first version of the Showtime came out Ron began obsessing over the quality and evenness of the browning and became convinced that the rotation speed of the spit wasn’t quite right. The original machine moved at four revolutions per minute. Ron set up a comparison test in his kitchen, cooking chicken after chicken at varying speeds until he determined that the optimal speed of rotation was actually six r.p.m. One can imagine a bright-eyed MBA clutching a sheaf of focus-group reports and arguing that Ronco was really selling convenience and healthful living, and that it was foolish to spend hundreds of thousands of dollars retooling production in search of a more even golden brown. But Ron understood that the perfect brown is important for the same reason that the slanted glass door is important: because in every respect the design of the product must support the transparency and effectiveness of its performance during a demonstration—the better it looks onstage, the easier it is for the pitchman to go into the turn and ask for the money.

If Ron had been the one to introduce the VCR, in other words, he would not simply have sold it in an infomercial. He would also have changed the VCR itself, so that it made sense in an infomercial. The clock, for example, wouldn’t be digital. (The haplessly blinking unset clock has, of course, become a symbol of frustration.) The tape wouldn’t be inserted behind a hidden door—it would be out in plain view, just like the chicken in the rotisserie, so that if it was recording you could see the spools turn. The controls wouldn’t be discreet buttons; they would be large, and they would make a reassuring click as they were pushed up and down, and each step of the taping process would be identified with a big, obvious numeral so that you could set it and forget it. And would it be a slender black, low-profile box? Of course not. Ours is a culture in which the term “black box” is synonymous with incomprehensibility. Ron’s VCR would be in red-and-white plastic, both opaque and translucent swirl, or maybe 364 Alcoa aluminum, painted in some bold primary color, and it would sit on top of the television, not below it, so that when your neighbor or your friend came over he would spot it immediately and say, “Wow, you have one of those Ronco Tape-O-Matics!”

7.

Ron Popeil did not have a happy childhood. “I remember baking a potato. It must have been when I was four or five years old,” he told me. We were in his kitchen, and had just sampled some baby-back ribs from the Showtime. It had taken some time to draw the memories out of him, because he is not one to dwell on the past. “I couldn’t get that baked potato into my stomach fast enough, because I was so hungry.” Ron is normally in constant motion, moving his hands, chopping food, bustling back and forth. But now he was still. His parents split up when he was very young. S.J. went off to Chicago. His mother disappeared. He and his older brother, Jerry, were shipped off to a boarding school in upstate New York. “I remember seeing my mother on one occasion. I don’t remember seeing my father, ever, until I moved to Chicago, at thirteen. When I was in the boarding school, the thing I remember was a Sunday when the parents visited the children, and my parents never came. Even knowing that they weren’t going to show up, I walked out to the perimeter and looked out over the farmland, and there was this road.” He made an undulating motion with his hand to suggest a road stretching off into the distance. “I remember standing on the road crying, looking for the movement of a car miles away, hoping that it was my mother and father. And they never came. That’s all I remember about boarding school.” Ron remained perfectly still. “I don’t remember ever having a birthday party in my life. I remember that my grandparents took us out and we moved to Florida. My grandfather used to tie me down in bed—my hands, my wrists, and my feet. Why? Because I had a habit of turning over on my stomach and bumping my head either up and down or side to side. Why? How? I don’t know the answers. But I was spread-eagle, on my back, and if I was able to twist over and do it my grandfather would wake up at night and come in and beat the hell out of me.” Ron stopped, and then added, “I never liked him. I never knew my mother or her parents or any of that family. That’s it. Not an awful lot to remember. Obviously, other things took place. But they have been erased.”

When Ron came to Chicago, at thirteen, with his grandparents, he was put to work in the Popeil Brothers factory—but only on the weekends, when his father wasn’t there. “Canned salmon and white bread for lunch, that was the diet,” he recalls. “Did I live with my father? Never. I lived with my grandparents.” When he became a pitchman, his father gave him just one advantage: he extended his son credit. Mel Korey says that he once drove Ron home from college and dropped him off at his father’s apartment. “He had a key to the apartment, and when he walked in his dad was in bed already. His dad said, ‘Is that you, Ron?’ And Ron said, ‘Yeah.’ And his dad never came out. And by the next morning Ron still hadn’t seen him.” Later, when Ron went into business for himself, he was persona non grata around Popeil Brothers. “Ronnie was never allowed in the place after that,” one of S.J.’s former associates recalls. “He was never let in the front door. He was never allowed to be part of anything.” “My father,” Ron says simply, “was all business. I didn’t know him personally.”

Here is a man who constructed his life in the image of his father—who went into the same business, who applied the same relentless attention to the workings of the kitchen, who got his start by selling his father’s own products—and where was his father? “You know, they could have done wonders together,” Korey says, shaking his head. “I remember one time we talked with K-tel about joining forces, and they said that we would be a war machine—that was their word. Well, Ron and his dad, they could have been a war machine.” For all that, it is hard to find in Ron even a trace of bitterness. Once, I asked him, “Who are your inspirations?” The first name came easily: his good friend Steve Wynn. He was silent for a moment, and then he added, “My father.” Despite everything, Ron clearly found in his father’s example a tradition of irresistible value. And what did Ron do with that tradition? He transcended it. He created the Showtime, which is indisputably a better gadget, dollar for dollar, than the Morris Metric Slicer, the Dutch Kitchen Shredder Grater, the Chop-O-Matic, and the Veg-O-Matic combined.

When I was in Ocean Township, visiting Arnold Morris, he took me to the local Jewish cemetery, Chesed Shel Ames, on a small hilltop just outside town. We drove slowly through the town’s poorer sections in Arnold’s white Mercedes. It was a rainy day. At the cemetery, a man stood out front in an undershirt, drinking a beer. We entered through a little rusty gate. “This is where it all starts,” Arnold said, by which he meant that everyone—the whole spirited, squabbling clan—was buried here. We walked up and down the rows until we found, off in a corner, the Morris headstones. There was Nathan Morris, of the straw boater and the opportune heart attack, and next to him his wife, Betty. A few rows over was the family patriarch, Kidders Morris, and his wife, and a few rows from there Irving Rosenbloom, who made a fortune in plastic goods out on Long Island. Then all the Popeils, in tidy rows: Ron’s grandfather Isadore, who was as mean as a snake, and his wife, Mary; S.J., who turned a cold shoulder to his own son; Ron’s brother, Jerry, who died young. Ron was from them, but he was not of them. Arnold walked slowly among the tombstones, the rain dancing off his baseball cap, and then he said something that seemed perfectly right. “You know, I’ll bet you you’ll never find Ronnie here.”

8.

One Saturday night, Ron Popeil arrived at the headquarters of the television shopping network QVC, a vast gleaming complex nestled in the woods of suburban Philadelphia. Ron is a regular on QVC. He supplements his infomercials with occasional appearances on the network, and, for twenty-four hours beginning that midnight, QVC had granted him eight live slots, starting with a special “Ronco” hour between midnight and 1 a.m. Ron was traveling with his daughter Shannon, who had got her start in the business selling the Ronco Electric Food Dehydrator on the fair circuit, and the plan was that the two of them would alternate throughout the day. They were pitching a Digital Jog Dial version of the Showtime, in black, available for one day only, at a “special value” of $129.72.

In the studio, Ron had set up eighteen Digital Jog Dial Showtimes on five wood-paneled gurneys. From Los Angeles, he had sent, via Federal Express, dozens of Styrofoam containers with enough meat for each of the day’s airings: eight fifteen-pound turkeys, seventy-two hamburgers, eight legs of lamb, eight ducks, thirty-odd chickens, two dozen or so Rock Cornish game hens, and on and on, supplementing them with garnishes, trout, and some sausage bought that morning at three Philadelphia-area supermarkets. QVC’s target was thirty-seven thousand machines, meaning that it hoped to gross about $4.5 million during the twenty-four hours—a huge day, even by the network’s standards. Ron seemed tense. He barked at the team of QVC producers and cameramen bustling around the room. He fussed over the hero plates—the ready-made dinners that he would use to showcase meat taken straight from the oven. “Guys, this is impossible,” he said, peering at a tray of mashed potatoes and gravy. “The level of gravy must be higher.” He was limping a little. “You know, there’s a lot of pressure on you,” he said wearily. “ ‘How did Ron do? Is he still the best?’”

With just a few minutes to go, Ron ducked into the greenroom next to the studio to put GLH in his hair: a few aerosol bursts, followed by vigorous brushing. “Where is God right now?” his co-host, Rick Domeier, yelled out, looking around theatrically for his guest star. “Is God backstage?” Ron then appeared, resplendent in a chef’s coat, and the cameras began to roll. He sliced open a leg of lamb. He played with the dial of the new digital Showtime. He admired the crispy, succulent skin of the duck. He discussed the virtues of the new food-warming feature—where the machine would rotate at low heat for up to four hours after the meat was cooked in order to keep the juices moving—and, all the while, bantered so convincingly with viewers calling in on the testimonial line that it was as if he were back mesmerizing the secretaries in the Woolworth’s at State and Washington.

In the greenroom, there were two computer monitors. The first displayed a line graph charting the number of calls that came in at any given second. The second was an electronic ledger showing the total sales up to that point. As Ron took flight, one by one, people left the studio to gather around the computers. Shannon Popeil came first. It was 12:40 a.m. In the studio, Ron was slicing onions with one of his father’s Dial-O-Matics. She looked at the second monitor and gave a little gasp. Forty minutes in, and Ron had already passed $700,000. A QVC manager walked in. It was 12:48 a.m., and Ron was roaring on: $837,650. “It can’t be!” he cried out. “That’s unbelievable!” Two QVC producers came over. One of them pointed at the first monitor, which was graphing the call volume. “Jump,” he called out. “Jump!” There were only a few minutes left. Ron was extolling the virtues of the oven one final time, and, sure enough, the line began to take a sharp turn upward, as all over America viewers took out their wallets. The numbers on the second screen began to change in a blur of recalculation—rising in increments of $129.72 plus shipping and taxes. “You know, we’re going to hit a million dollars, just on the first hour,” one of the QVC guys said, and there was awe in his voice. It was one thing to talk about how Ron was the best there ever was, after all, but quite another to see proof of it, before your very eyes. At that moment, on the other side of the room, the door opened, and a man appeared, stooped and drawn but with a smile on his face. It was Ron Popeil, who invented a better rotisserie in his kitchen and went out and pitched it himself. There was a hush, and then the whole room stood up and cheered.[1]

October 30, 2000

The Ketchup Conundrum

MUSTARD NOW COMES IN DOZENS OF VARIETIES. WHY HAS KETCHUP STAYED THE SAME?

1.

Many years ago, one mustard dominated the supermarket shelves: French’s. It came in a plastic bottle. People used it on hot dogs and bologna. It was a yellow mustard, made from ground white mustard seed with turmeric and vinegar, which gave it a mild, slightly metallic taste. If you looked hard in the grocery store, you might find something in the specialty-foods section called Grey Poupon, which was Dijon mustard, made from the more pungent brown mustard seed. In the early seventies, Grey Poupon was no more than a hundred-thousand-dollar-a-year business. Few people knew what it was or how it tasted, or had any particular desire for an alternative to French’s or the runner-up, Gulden’s. Then one day the Heublein Company, which owned Grey Poupon, discovered something remarkable: if you gave people a mustard taste test, a significant number had only to try Grey Poupon once to switch from yellow mustard. In the food world that almost never happens; even among the most successful food brands, only about one in a hundred has that kind of conversion rate. Grey Poupon was magic.

So Heublein put Grey Poupon in a bigger glass jar, with an enameled label and enough of a whiff of Frenchness to make it seem as if it were still being made in Europe (it was made in Hartford, Connecticut, from Canadian mustard seed and white wine). The company ran tasteful print ads in upscale food magazines. They put the mustard in little foil packets and distributed them with airplane meals—which was a brand-new idea at the time. Then they hired the Manhattan ad agency Lowe Marschalk to do something, on a modest budget, for television. The agency came back with an idea: A Rolls-Royce is driving down a country road. There’s a man in the backseat in a suit with a plate of beef on a silver tray. He nods to the chauffeur, who opens the glove compartment. Then comes what is known in the business as the reveal. The chauffeur hands back a jar of Grey Poupon. Another Rolls-Royce pulls up alongside. A man leans his head out the window. “Pardon me. Would you have any Grey Poupon?”

In the cities where the ads ran, sales of Grey Poupon leaped 40 to 50 percent, and whenever Heublein bought airtime in new cities sales jumped by 40 to 50 percent again. Grocery stores put Grey Poupon next to French’s and Gulden’s. By the end of the 1980s Grey Poupon was the most powerful brand in mustard. “The tagline in the commercial was that this was one of life’s finer pleasures,” Larry Elegant, who wrote the original Grey Poupon spot, says, “and that, along with the Rolls-Royce, seemed to impart to people’s minds that this was something truly different and superior.”

The rise of Grey Poupon proved that the American supermarket shopper was willing to pay more—in this case $3.99 instead of $1.49 for eight ounces—as long as what they were buying carried with it an air of sophistication and complex aromatics. Its success showed, furthermore, that the boundaries of taste and custom were not fixed: that just because mustard had always been yellow didn’t mean that consumers would use only yellow mustard. It is because of Grey Poupon that the standard American supermarket today has an entire mustard section. And it is because of Grey Poupon that a man named Jim Wigon decided, four years ago, to enter the ketchup business. Isn’t the ketchup business today exactly where mustard was thirty years ago? There is Heinz and, far behind, Hunt’s and Del Monte and a handful of private-label brands. Jim Wigon wanted to create the Grey Poupon of ketchup.

Wigon is from Boston. He’s a thickset man in his fifties, with a full salt-and-pepper beard. He runs his ketchup business—under the brand World’s Best Ketchup—out of the catering business of his partner, Nick Schiarizzi, in Norwood, Massachusetts, just off Route 1, in a low-slung building behind an industrial-equipment-rental shop. He starts with red peppers, Spanish onions, garlic, and a high-end tomato paste. Basil is chopped by hand, because the buffalo chopper bruises the leaves. He uses maple syrup, not corn syrup, which gives him a quarter of the sugar of Heinz. He pours his ketchup into a clear glass ten-ounce jar, and sells it for three times the price of Heinz, and for the past few years he has crisscrossed the country, peddling World’s Best in six flavors—regular, sweet, dill, garlic, caramelized onion, and basil—to specialty grocery stores and supermarkets. If you were in Zabar’s on Manhattan’s Upper West Side a few months ago, you would have seen him at the front of the store, in a spot between the sushi and the gefilte fish. He was wearing a World’s Best baseball cap, a white shirt, and a red-stained apron. In front of him, on a small table, was a silver tureen filled with miniature chicken and beef meatballs, a box of toothpicks, and a dozen or so open jars of his ketchup. “Try my ketchup!” Wigon said, over and over, to anyone who passed. “If you don’t try it, you’re doomed to eat Heinz the rest of your life.”

In the same aisle at Zabar’s that day two other demonstrations were going on, so that people were starting at one end with free chicken sausage, sampling a slice of prosciutto, and then pausing at the World’s Best stand before heading for the cash register. They would look down at the array of open jars, and Wigon would impale a meatball on a toothpick, dip it in one of his ketchups, and hand it to them with a flourish. The ratio of tomato solids to liquid in World’s Best is much higher than in Heinz, and the maple syrup gives it an unmistakable sweet kick. Invariably, people would close their eyes, just for a moment, and do a subtle double take. Some of them would look slightly perplexed and walk away, and others would nod and pick up a jar. “You know why you like it so much?” he would say, in his broad Boston accent, to the customers who seemed most impressed. “Because you’ve been eating bad ketchup all your life!” Jim Wigon had a simple vision: build a better ketchup—the way Grey Poupon built a better mustard—and the world will beat a path to your door. If only it were that easy.

2.

The story of World’s Best Ketchup cannot properly be told without a man from White Plains, New York, named Howard Moskowitz. Moskowitz is sixty, short and round, with graying hair and huge gold-rimmed glasses. When he talks, he favors the Socratic monologue—a series of questions that he poses to himself, then answers, punctuated by “ahhh” and much vigorous nodding. He is a lineal descendant of the legendary eighteenth-century Hasidic rabbi known as the Seer of Lublin. He keeps a parrot. At Harvard, he wrote his doctoral dissertation on psychophysics, and all the rooms on the ground floor of his food-testing and market-research business are named after famous psychophysicists. (“Have you ever heard of the name Rose Marie Pangborn? Ahhh. She was a professor at Davis. Very famous. This is the Pangborn kitchen.”) Moskowitz is a man of uncommon exuberance and persuasiveness: if he had been your freshman statistics professor, you would today be a statistician. “My favorite writer? Gibbon,” he burst out, when we met not long ago. He had just been holding forth on the subject of sodium solutions. “Right now I’m working my way through the Hales history of the Byzantine Empire. Holy shit! Everything is easy until you get to the Byzantine Empire. It’s impossible. One emperor is always killing the others, and everyone has five wives or three husbands. It’s very Byzantine.”

Moskowitz set up shop in the seventies, and one of his first clients was Pepsi. The artificial sweetener aspartame had just become available, and Pepsi wanted Moskowitz to figure out the perfect amount of sweetener for a can of Diet Pepsi. Pepsi knew that anything below 8 percent sweetness was not sweet enough and anything over 12 percent was too sweet. So Moskowitz did the logical thing. He made up experimental batches of Diet Pepsi with every conceivable degree of sweetness—8 percent, 8.25 percent, 8.5, and on and on up to 12—gave them to hundreds of people, and looked for the concentration that people liked the most. But the data were a mess—there wasn’t a pattern—and one day, sitting in a diner, Moskowitz realized why. They had been asking the wrong question. There was no such thing as the perfect Diet Pepsi. They should have been looking for the perfect Diet Pepsis.
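
A back-of-the-envelope simulation makes the diner epiphany concrete. This sketch is my own, not Moskowitz's data: suppose tasters secretly belong to three groups, each with a different ideal sweetness, and suppose we average their ratings the way the original study did.

```python
import random
import statistics

# Hypothetical setup (not Moskowitz's data): tasters fall into three hidden
# groups, each with its own ideal sweetness, and each rates a sample higher
# the closer it comes to that ideal.
random.seed(0)
ideals = [8.5, 10.0, 11.5]                       # invented group ideals
tasters = [random.choice(ideals) for _ in range(300)]

levels = [8 + 0.25 * i for i in range(17)]       # 8.00, 8.25, ... 12.00
for level in levels:
    ratings = [100 - 25 * abs(level - ideal) + random.gauss(0, 5)
               for ideal in tasters]
    print(f"{level:5.2f}% sweetness -> mean rating {statistics.mean(ratings):5.1f}")

# Each group would score its own ideal near 100, but the pooled average
# tops out around 75 and slopes gently: three hidden optima cancel one
# another out, and no single "perfect" concentration emerges.
```

Under those assumptions, the pooled curve has no sharp peak anywhere, which is exactly what a search for the one perfect Diet Pepsi would report back as "a mess."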

It took a long time for the food world to catch up with Howard Moskowitz. He knocked on doors and tried to explain his idea about the plural nature of perfection, and no one answered. He spoke at food-industry conferences, and audiences shrugged. But he could think of nothing else. “It’s like that Yiddish expression,” he says. “Do you know it? To a worm in horseradish, the world is horseradish!” Then, in 1986, he got a call from the Campbell’s Soup Company. They were in the spaghetti-sauce business, going up against Ragú with their Prego brand. Prego was a little thicker than Ragú, with diced tomatoes as opposed to Ragú’s purée, and, Campbell’s thought, had better pasta adherence. But, for all that, Prego was in a slump, and Campbell’s was desperate for new ideas.

Standard practice in the food industry would have been to convene a focus group and ask spaghetti eaters what they wanted. But Moskowitz does not believe that consumers—even spaghetti lovers—know what they desire if what they desire does not yet exist. “The mind,” as Moskowitz is fond of saying, “knows not what the tongue wants.” Instead, working with the Campbell’s kitchens, he came up with forty-five varieties of spaghetti sauce. These were designed to differ in every conceivable way: spiciness, sweetness, tartness, saltiness, thickness, aroma, mouth feel, cost of ingredients, and so forth. He had a trained panel of food tasters analyze each of those varieties in depth. Then he took the prototypes on the road—to New York, Chicago, Los Angeles, and Jacksonville—and asked people in groups of twenty-five to eat between eight and ten small bowls of different spaghetti sauces over two hours and rate them on a scale of one to a hundred. When Moskowitz charted the results, he saw that everyone had a slightly different definition of what a perfect spaghetti sauce tasted like. If you sifted carefully through the data, though, you could find patterns, and Moskowitz learned that most people’s preferences fell into one of three broad groups: plain, spicy, and extra-chunky, and of those three the last was the most important. Why? Because at the time there was no extra-chunky spaghetti sauce in the supermarket. Over the next decade, that new category proved to be worth hundreds of millions of dollars to Prego. “We all said, ‘Wow!’” Monica Wood, who was then the head of market research for Campbell’s, recalls. “Here there was this third segment—people who liked their spaghetti sauce with lots of stuff in it—and it was completely untapped. So in about 1989 or 1990 we launched Prego extra-chunky. It was extraordinarily successful.”
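
The piece doesn't name the exact method Moskowitz used to sift the data, but the standard tool for the job is cluster analysis: group tasters whose rating profiles look alike and see how many groups fall out. A minimal k-means sketch on invented ratings shows the idea:

```python
import math
import random

# A stand-in for Moskowitz's segmentation (his exact method isn't described
# here): cluster tasters by their rating profiles with a tiny k-means.
# Ratings are invented; each taster scores three prototypes from 0 to 100.
random.seed(1)

def kmeans(points, k, iters=20):
    # Demo initialization: evenly spaced points. Real implementations use
    # random restarts or k-means++ seeding.
    centers = points[:: len(points) // k][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

def group(plain, spicy, chunky, n=30):
    return [(plain + random.gauss(0, 5), spicy + random.gauss(0, 5),
             chunky + random.gauss(0, 5)) for _ in range(n)]

# Three planted preference groups: plain fans, spicy fans, extra-chunky fans.
tasters = group(80, 40, 35) + group(40, 85, 30) + group(35, 30, 90)

for center, members in zip(*kmeans(tasters, k=3)):
    print(len(members), "tasters cluster around ratings",
          [round(x) for x in center])
# The three recovered centers echo the plain / spicy / extra-chunky split.
```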

It may be hard today, twenty years later—when every brand seems to come in multiple varieties—to appreciate how much of a breakthrough this was. In those years, people in the food industry carried around in their heads the notion of a platonic dish—the version of a dish that looked and tasted absolutely right. At Ragú and Prego, they had been striving for the platonic spaghetti sauce, and the platonic spaghetti sauce was thin and blended because that’s the way they thought it was done in Italy. Cooking, on the industrial level, was consumed with the search for human universals. Once you start looking for the sources of human variability, though, the old orthodoxy goes out the window. Howard Moskowitz stood up to the Platonists and said there are no universals.

Moskowitz still has a version of the computer model he used for Prego. It has all the coded results from the consumer taste tests and the expert tastings, split into the three categories (plain, spicy, and extra-chunky) and linked up with the actual ingredients list on a spreadsheet. “You know how they have a computer model for building an aircraft,” Moskowitz said as he pulled up the program on his computer. “This is a model for building spaghetti sauce. Look, every variable is here.” He pointed at column after column of ratings. “So here are the ingredients. I’m a brand manager for Prego. I want to optimize one of the segments. Let’s start with Segment 1.” In Moskowitz’s program, the three spaghetti-sauce groups were labeled Segment 1, Segment 2, and Segment 3. He typed in a few commands, instructing the computer to give him the formulation that would score the highest with those people in Segment 1. The answer appeared almost immediately: a specific recipe that, according to Moskowitz’s data, produced a score of 78 from the people in Segment 1. But that same formulation didn’t do nearly as well with those in Segment 2 and Segment 3. They scored it 67 and 57, respectively. Moskowitz started again, this time asking the computer to optimize for Segment 2. This time the ratings came in at 82, but now Segment 1 had fallen 10 points, to 68. “See what happens?” he said. “If I make one group happier, I piss off another group. We did this for coffee with General Foods, and we found that if you create only one product, the best you can get across all the segments is a 60—if you’re lucky. That’s if you were to treat everybody as one big happy family. But if I do the sensory segmentation, I can get 70, 71, 72. Is that big? Ahhh. It’s a very big difference. In coffee, a 71 is something you’ll die for.”
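
A toy version of that demonstration captures the trade-off. The score table below is invented, but it mimics the shape of Moskowitz's example: the recipe that thrills one segment leaves the others cold, and the best single compromise scores lower than any segment-tailored winner.

```python
# Invented scores: four candidate recipes rated by three taste segments.
recipes = {
    "recipe_A": {"seg1": 78, "seg2": 67, "seg3": 57},
    "recipe_B": {"seg1": 68, "seg2": 82, "seg3": 58},
    "recipe_C": {"seg1": 62, "seg2": 60, "seg3": 80},
    "recipe_D": {"seg1": 64, "seg2": 63, "seg3": 62},  # the bland compromise
}

def average(recipe):
    return sum(recipes[recipe].values()) / 3

# One-size-fits-all: pick the single recipe with the best average score.
best_single = max(recipes, key=average)
print("single product:", best_single, "average =", round(average(best_single), 1))

# Sensory segmentation: pick the best recipe for each segment separately.
for seg in ("seg1", "seg2", "seg3"):
    best = max(recipes, key=lambda r: recipes[r][seg])
    print(seg, "->", best, "scoring", recipes[best][seg])
```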

When Jim Wigon set up shop that day in Zabar’s, then, his operating assumption was that there ought to be some segment of the population that preferred a ketchup made with Stanislaus tomato paste and hand-chopped basil and maple syrup. That’s the Moskowitz theory. But there is theory and there is practice. By the end of that long day, Wigon had sold ninety jars. But he’d also got two parking tickets and had to pay for a hotel room, so he wasn’t going home with money in his pocket. For the year, Wigon estimates, he’ll sell fifty thousand jars—which, in the universe of condiments, is no more than a blip. “I haven’t drawn a paycheck in five years,” Wigon said as he impaled another meatball on a toothpick. “My wife is killing me.” And it isn’t just World’s Best that is struggling. In the gourmet-ketchup world, there is River Run and Uncle Dave’s, from Vermont, and Muir Glen Organic and Mrs. Tomato Head Roasted Garlic Peppercorn Catsup, in California, and dozens of others—and every year Heinz’s overwhelming share of the ketchup market just grows.

It is possible, of course, that ketchup is waiting for its own version of that Rolls-Royce commercial, or the discovery of the ketchup equivalent of extra-chunky—the magic formula that will satisfy an unmet need. It is also possible, however, that the rules of Howard Moskowitz, which apply to Grey Poupon and Prego spaghetti sauce and to olive oil and salad dressing and virtually everything else in the supermarket, don’t apply to ketchup.

3.

Tomato ketchup is a nineteenth-century creation—the union of the English tradition of fruit and vegetable sauces and the growing American infatuation with the tomato. But what we know today as ketchup emerged out of a debate that raged in the first years of the last century over benzoate, a preservative widely used in late-nineteenth-century condiments. Harvey Washington Wiley, the chief of the Bureau of Chemistry in the Department of Agriculture from 1883 to 1912, came to believe that benzoates were not safe, and the result was an argument that split the ketchup world in half. On one side was the ketchup establishment, which believed that it was impossible to make ketchup without benzoate and that benzoate was not harmful in the amounts used. On the other side was a renegade band of ketchup manufacturers, who believed that the preservative puzzle could be solved with the application of culinary science. The dominant nineteenth-century ketchups were thin and watery, in part because they were made from unripe tomatoes, which are low in the complex carbohydrates known as pectin, which add body to a sauce. But what if you made ketchup from ripe tomatoes, giving it the density it needed to resist degradation? Nineteenth-century ketchups had a strong tomato taste, with just a light vinegar touch. The renegades argued that by greatly increasing the amount of vinegar, in effect protecting the tomatoes by pickling them, they were making a superior ketchup: safer, purer, and better tasting. They offered a money-back guarantee in the event of spoilage. They charged more for their product, convinced that the public would pay more for a better ketchup, and they were right. The benzoate ketchups disappeared. The leader of the renegade band was an entrepreneur out of Pittsburgh named Henry J. Heinz.

The world’s leading expert on ketchup’s early years is Andrew F. Smith, a substantial man, well over six feet, with a graying mustache and short wavy black hair. Smith is a scholar, trained as a political scientist, intent on bringing rigor to the world of food. When we met for lunch not long ago at the restaurant Savoy in SoHo (chosen because of the excellence of its hamburger and French fries, and because Savoy makes its own ketchup—a dark, peppery, and viscous variety served in a white porcelain saucer), Smith was in the throes of examining the origins of the croissant for the upcoming Oxford Encyclopedia of Food and Drink in America, of which he is the editor-in-chief. Was the croissant invented in 1683, by the Viennese, in celebration of their defeat of the invading Turks? Or in 1686, by the residents of Budapest, to celebrate their defeat of the Turks? Either story would account for its distinctive crescent shape—since it would make a certain cultural sense (particularly for the Viennese) to consecrate their battlefield triumphs in the form of pastry. But the only reference Smith could find to either story was in the Larousse Gastronomique of 1938. “It just doesn’t check out,” he said, shaking his head wearily.

Smith’s specialty is the tomato, however, and over the course of many scholarly articles and books—“The History of Home-Made Anglo-American Tomato Ketchup,” for Petits Propos Culinaires, for example, and “The Great Tomato Pill War of the 1830s,” for The Connecticut Historical Society Bulletin—Smith has argued that some critical portion of the history of culinary civilization could be told through this fruit. Cortez brought tomatoes to Europe from the New World, and they inexorably insinuated themselves into the world’s cuisines. The Italians substituted the tomato for eggplant. In northern India, it went into curries and chutneys. “The biggest tomato producer in the world today?” Smith paused, for dramatic effect. “China. You don’t think of tomato being a part of Chinese cuisine, and it wasn’t ten years ago. But it is now.” Smith dipped one of my French fries into the homemade sauce. “It has that raw taste,” he said, with a look of intense concentration. “It’s fresh ketchup. You can taste the tomato.” Ketchup was, to his mind, the most nearly perfect of all the tomato’s manifestations. It was inexpensive, which meant that it had a firm lock on the mass market, and it was a condiment, not an ingredient, which meant that it could be applied at the discretion of the food eater, not the food preparer. “There’s a quote from Elizabeth Rozin I’ve always loved,” he said. Rozin is the food theorist who wrote the essay “Ketchup and the Collective Unconscious,” and Smith used her conclusion as the epigraph of his ketchup book: ketchup may well be “the only true culinary expression of the melting pot, and… its special and unprecedented ability to provide something for everyone makes it the Esperanto of cuisine.” Here is where Henry Heinz and the benzoate battle were so important: in defeating the condiment Old Guard, he was the one who changed the flavor of ketchup in a way that made it universal.

4.

There are five known fundamental tastes in the human palate: salty, sweet, sour, bitter, and umami. Umami is the proteiny, full-bodied taste of chicken soup, or cured meat, or fish stock, or aged cheese, or mother’s milk, or soy sauce, or mushrooms, or seaweed, or cooked tomato. “Umami adds body,” Gary Beauchamp, who heads the Monell Chemical Senses Center, in Philadelphia, says. “If you add it to a soup, it makes the soup seem like it’s thicker—it gives it sensory heft. It turns a soup from salt water into a food.” When Heinz moved to ripe tomatoes and increased the percentage of tomato solids, he made ketchup, first and foremost, a potent source of umami. Then he dramatically increased the concentration of vinegar, so that his ketchup had twice the acidity of most other ketchups; now ketchup was sour, another of the fundamental tastes. The post-benzoate ketchups also doubled the concentration of sugar—so now ketchup was also sweet—and all along ketchup had been salty and bitter. These are not trivial issues. Give a baby soup, and then soup with MSG (an amino-acid salt that is pure umami), and the baby will go back for the MSG soup every time, the same way a baby will always prefer water with sugar to water alone. Salt and sugar and umami are primal signals about the food we are eating—about how dense it is in calories, for example, or, in the case of umami, about the presence of proteins and amino acids. What Heinz had done was come up with a condiment that pushed all five of these primal buttons. The taste of Heinz’s ketchup began at the tip of the tongue, where our receptors for sweet and salty first appear, moved along the sides, where sour notes seem the strongest, then hit the back of the tongue, for umami and bitter, in one long crescendo. How many things in the supermarket run the sensory spectrum like this?

A number of years ago, the H. J. Heinz Company did an extensive market-research project in which researchers went into people’s homes and watched the way they used ketchup. “I remember sitting in one of those households,” Casey Keller, who was until recently the chief growth officer for Heinz, says. “There was a three-year-old and a six-year-old, and what happened was that the kids asked for ketchup and Mom brought it out. It was a forty-ounce bottle. And the three-year-old went to grab it himself, and Mom intercepted the bottle and said, ‘No, you’re not going to do that.’ She physically took the bottle away and doled out a little dollop. You could see that the whole thing was a bummer.” For Heinz, Keller says, that moment was an epiphany. A typical five-year-old consumes about 60 percent more ketchup than a typical forty-year-old, and the company realized that it needed to put ketchup in a bottle that a toddler could control. “If you are four—and I have a four-year-old—he doesn’t get to choose what he eats for dinner, in most cases,” Keller says. “But the one thing he can control is ketchup. It’s the one part of the food experience that he can customize and personalize.” As a result, Heinz came out with the so-called EZ Squirt bottle, made out of soft plastic with a conical nozzle. In homes where the EZ Squirt is used, ketchup consumption has grown by as much as 12 percent.

There is another lesson in that household scene, though. Small children tend to be neophobic: once they hit two or three, they shrink from new tastes. That makes sense, evolutionarily, because through much of human history that is the age at which children would have first begun to gather and forage for themselves, and those who strayed from what was known and trusted would never have survived. There the three-year-old was, confronted with something strange on his plate—tuna fish, perhaps, or Brussels sprouts—and he wanted to alter his food in some way that made the unfamiliar familiar. He wanted to subdue the contents of his plate. And so he turned to ketchup, because, alone among the condiments on the table, ketchup could deliver sweet and sour and salty and bitter and umami, all at once.

5.

A few months after Jim Wigon’s visit to Zabar’s, Edgar Chambers IV, who runs the sensory-analysis center at Kansas State University, conducted a joint assessment of World’s Best and Heinz. He has seventeen trained tasters on his staff, and they work for academia and industry, answering the often difficult question of what a given substance tastes like. It is demanding work. Immediately after conducting the ketchup study, Chambers dispatched a team to Bangkok to do an analysis of fruit—bananas, mangoes, rose apples, and sweet tamarind. Others were detailed to soy and kimchi in South Korea, and Chambers’s wife led a delegation to Italy to analyze ice cream.

The ketchup tasting took place over four hours, on two consecutive mornings. Six tasters sat around a large, round table with a lazy Susan in the middle. In front of each panelist were two one-ounce cups, one filled with Heinz ketchup and one filled with World’s Best. They would work along fourteen dimensions of flavor and texture, in accordance with the standard fifteen-point scale used by the food world. The flavor components would be divided two ways: elements picked up by the tongue and elements picked up by the nose. A very ripe peach, for example, tastes sweet but it also smells sweet—which is a very different aspect of sweetness. Vinegar has a sour taste but also a pungency, a vapor that rises up the back of the nose and fills the mouth when you breathe out. To aid in the rating process, the tasters surrounded themselves with little bowls of sweet and sour and salty solutions, and portions of Contadina tomato paste, Hunt’s tomato sauce, and Campbell’s tomato juice, all of which represent different concentrations of tomato-ness.

After breaking the ketchup down into its component parts, the testers assessed the critical dimension of “amplitude,” the word sensory experts use to describe flavors that are well blended and balanced, that “bloom” in the mouth. “The difference between high and low amplitude is the difference between my son and a great pianist playing ‘Ode to Joy’ on the piano,” Chambers says. “They are playing the same notes, but they blend better with the great pianist.” Pepperidge Farm shortbread cookies are considered to have high amplitude. So are Hellmann’s mayonnaise and Sara Lee poundcake. When something is high in amplitude, all its constituent elements converge into a single gestalt. You can’t isolate the elements of an iconic, high-amplitude flavor like Coca-Cola or Pepsi. But you can with one of those private-label colas that you get in the supermarket. “The thing about Coke and Pepsi is that they are absolutely gorgeous,” Judy Heylmun, a vice president of Sensory Spectrum, Inc., in Chatham, New Jersey, says. “They have beautiful notes—all flavors are in balance. It’s very hard to do that well. Usually, when you taste a store cola it’s”—and here she made a series of pik! pik! pik! sounds—“all the notes are kind of spiky, and usually the citrus is the first thing to spike out. And then the cinnamon. Citrus and brown spice notes are top notes and very volatile, as opposed to vanilla, which is very dark and deep. A really cheap store brand will have a big, fat cinnamon note sitting on top of everything.”

Some of the cheaper ketchups are the same way. Ketchup aficionados say that there’s a disquieting unevenness to the tomato notes in Del Monte ketchup: tomatoes vary, in acidity and sweetness and the ratio of solids to liquid, according to the seed variety used, the time of year they are harvested, the soil in which they are grown, and the weather during the growing season. Unless all those variables are tightly controlled, one batch of ketchup can end up too watery and another can be too strong. Or try one of the numerous private-label brands that make up the bottom of the ketchup market and pay attention to the spice mix; you may well find yourself conscious of the clove note or overwhelmed by a hit of garlic. Generic colas and ketchups have what Moskowitz calls a hook—a sensory attribute that you can single out, and ultimately tire of.

The tasting began with a plastic spoon. Upon consideration, it was decided that the analysis would be helped if the ketchups were tasted on French fries, so a batch of fries was cooked up and distributed around the table. Each tester, according to protocol, took the fries one by one, dipped them into the cup—all the way, right to the bottom—bit off the portion covered in ketchup, and then contemplated the evidence of their senses. For Heinz, the critical flavor components—vinegar, salt, tomato ID (overall tomato-ness), sweet, and bitter—were judged to be present in roughly equal concentrations, and those elements, in turn, were judged to be well blended. The World’s Best, though, “had a completely different view, a different profile, from the Heinz,” Chambers said. It had a much stronger hit of sweet aromatics—4.0 to 2.5—and outstripped Heinz on tomato ID by a resounding 9 to 5.5. But there was less salt, and no discernible vinegar. “The other comment from the panel was that these elements were really not blended at all,” Chambers went on. “The World’s Best product had really low amplitude.” According to Joyce Buchholz, one of the panelists, when the group judged aftertaste, “it seemed like a certain flavor would hang over longer in the case of World’s Best—that cooked-tomatoey flavor.”

But what was Jim Wigon to do? To compete against Heinz, he had to try something dramatic, like substituting maple syrup for corn syrup, ramping up the tomato solids. That made for an unusual and daring flavor. World’s Best Dill ketchup on fried catfish, for instance, is a marvelous thing. But it also meant that his ketchup wasn’t as sensorily complete as Heinz, and he was paying a heavy price in amplitude. “Our conclusion was mainly this,” Buchholz said. “We felt that World’s Best seemed to be more like a sauce.” She was trying to be helpful.

There is an exception, then, to the Moskowitz rule. Today there are thirty-six varieties of Ragú spaghetti sauce, under six rubrics—Old World Style, Chunky Garden Style, Robusto, Light, Cheese Creations, and Rich & Meaty—which means that there is very nearly an optimal spaghetti sauce for every man, woman, and child in America. Measured against the monotony that confronted Howard Moskowitz twenty years ago, this is progress. Happiness, in one sense, is a function of how closely our world conforms to the infinite variety of human preference. But that makes it easy to forget that sometimes happiness can be found in having what we’ve always had and everyone else is having. “Back in the seventies, someone else—I think it was Ragú—tried to do an ‘Italian’-style ketchup,” Moskowitz said. “They failed miserably.” It was a conundrum: what was true about a yellow condiment that went on hot dogs was not true about a tomato condiment that went on hamburgers, and what was true about tomato sauce when you added visible solids and put it in a jar was somehow not true about tomato sauce when you added vinegar and sugar and put it in a bottle. Moskowitz shrugged. “I guess ketchup is ketchup.”

September 6, 2004

Blowing Up

HOW NASSIM TALEB TURNED THE INEVITABILITY OF DISASTER INTO AN INVESTMENT STRATEGY

1.

One day in 1996, a Wall Street trader named Nassim Nicholas Taleb went to see Victor Niederhoffer. Victor Niederhoffer was one of the most successful money managers in the country. He lived and worked out of a thirteen-acre compound in Fairfield County, Connecticut, and when Taleb drove up that day from his home in Larchmont he had to give his name at the gate, and then make his way down a long, curving driveway. Niederhoffer had a squash court and a tennis court and a swimming pool and a colossal, faux-alpine mansion in which virtually every square inch of space was covered with eighteenth- and nineteenth-century American folk art. In those days, he played tennis regularly with the billionaire financier George Soros. He had just written a best-selling book, The Education of a Speculator, dedicated to his father, Artie Niederhoffer, a police officer from Coney Island. He had a huge and eclectic library and a seemingly insatiable desire for knowledge. When Niederhoffer went to Harvard as an undergraduate, he showed up for the very first squash practice and announced that he would someday be the best in that sport; and, sure enough, he soon beat the legendary Sharif Khan to win the US Open squash championship. That was the kind of man Niederhoffer was. He had heard of Taleb’s growing reputation in the esoteric field of options trading and summoned him to Connecticut. Taleb was in awe.

“He didn’t talk much, so I observed him,” Taleb recalls. “I spent seven hours watching him trade. Everyone else in his office was in his twenties, and he was in his fifties, and he had the most energy of them all. Then, after the markets closed, he went out to hit a thousand backhands on the tennis court.” Taleb is Greek-Orthodox Lebanese and his first language was French, and in his pronunciation the name Niederhoffer comes out as the slightly more exotic Nee-derhoffer. “Here was a guy living in a mansion with thousands of books, and that was my dream as a child,” Taleb went on. “He was part chevalier, part scholar. My respect for him was intense.” There was just one problem, however, and it is the key to understanding the strange path that Nassim Taleb has chosen, and the position he now holds as Wall Street’s principal dissident. Despite his envy and admiration, he did not want to be Victor Niederhoffer—not then, not now, and not even for a moment in between. For when he looked around him, at the books and the tennis court and the folk art on the walls—when he contemplated the countless millions that Niederhoffer had made over the years—he could not escape the thought that it might all have been the result of sheer dumb luck.

Taleb knew how heretical that thought was. Wall Street was dedicated to the principle that when it came to playing the markets there was such a thing as expertise, that skill and insight mattered in investing just as skill and insight mattered in surgery and golf and flying fighter jets. Those who had the foresight to grasp the role that software would play in the modern world bought Microsoft in 1985 and made a fortune. Those who understood the psychology of investment bubbles sold their tech stocks at the end of 1999 and escaped the Nasdaq crash. Warren Buffett was known as the “sage of Omaha” because it seemed incontrovertible that if you started with nothing and ended up with billions, then you had to be smarter than everyone else: Buffett was successful for a reason. Yet how could you know, Taleb wondered, whether that reason was responsible for someone’s success, or simply a rationalization invented after the fact? George Soros seemed to be successful for a reason, too. He used to say that he followed something called the theory of reflexivity. But then, later, Soros wrote that in most situations his theory “is so feeble that it can be safely ignored.” An old trading partner of Taleb’s, a man named Jean-Manuel Rozan, once spent an entire afternoon arguing about the stock market with Soros. Soros was vehemently bearish, and he had an elaborate theory to explain why, which turned out to be entirely wrong. The stock market boomed. Two years later, Rozan ran into Soros at a tennis tournament. “Do you remember our conversation?” Rozan asked. “I recall it very well,” Soros replied. “I changed my mind, and made an absolute fortune.” He changed his mind! The truest thing about Soros seemed to be what his son Robert had once said:

My father will sit down and give you theories to explain why he does this or that. But I remember seeing it as a kid and thinking, Jesus Christ, at least half of this is bullshit. I mean, you know the reason he changes his position on the market or whatever is because his back starts killing him. It has nothing to do with reason. He literally goes into a spasm, and it’s this early warning sign.

For Taleb, then, the question why someone was a success in the financial marketplace was vexing. Taleb could do the arithmetic in his head. Suppose that there were ten thousand investment managers out there, which is not an outlandish number, and that every year half of them, entirely by chance, made money and half of them, entirely by chance, lost money. And suppose that every year, the losers were tossed out and the game was replayed with those who remained. At the end of five years, there would be three hundred and thirteen people who had made money in every one of those years, and after ten years there would be nine people who had made money every single year in a row, all out of pure luck. Niederhoffer, like Buffett and Soros, was a brilliant man. He had a PhD in economics from the University of Chicago. He had pioneered the idea that through close mathematical analysis of patterns in the market an investor could identify profitable anomalies. But who was to say that he wasn’t one of those lucky nine? And who was to say that in the eleventh year Niederhoffer would be one of the unlucky ones, who suddenly lost it all, who suddenly, as they say on Wall Street, “blew up”?
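
The arithmetic is just repeated halving, and it is worth writing out, since the punch line depends on it:

```python
# Taleb's head arithmetic, written out: ten thousand managers, and each
# year pure chance cuts the number of consistent winners in half.
managers = 10_000.0
for year in range(1, 11):
    managers /= 2
    print(f"year {year:2d}: {managers:7.1f} have made money every year so far")
# year 5  -> 312.5 (the "three hundred and thirteen")
# year 10 ->   9.8 (the nine who look like geniuses out of pure luck)
```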

Taleb remembered his childhood in Lebanon and watching his country turn, as he puts it, from “paradise to hell” in six months. His family once owned vast tracts of land in northern Lebanon. All of that was gone. He remembered his grandfather, the former deputy prime minister of Lebanon and the son of a deputy prime minister of Lebanon and a man of great personal dignity, living out his days in a dowdy apartment in Athens. That was the problem with a world in which there was so much uncertainty about why things ended up the way they did: you never knew whether one day your luck would turn and it would all be washed away.

So here is what Taleb took from Niederhoffer. He saw that Niederhoffer was a serious athlete, and he decided that he would be, too. He would bicycle to work and exercise in the gym. Niederhoffer was a staunch empiricist who turned to Taleb that day in Connecticut and said to him sternly, “Everything that can be tested must be tested,” and so when Taleb started his own hedge fund, a few years later, he called it Empirica. But that is where it stopped. Nassim Taleb decided that he could not pursue an investment strategy that had any chance of blowing up.

2.

Nassim Taleb is a tall, muscular man in his early forties, with a salt-and-pepper beard and a balding head. His eyebrows are heavy and his nose is long. His skin has the olive hue of the Levant. He is a man of moods, and when his world turns dark the eyebrows come together and the eyes narrow and it is as if he were giving off an electrical charge. It is said, by some of his friends, that he looks like Salman Rushdie, although at his office his staff have pinned to the bulletin board a photograph of a mullah they swear is Taleb’s long-lost twin, while Taleb himself maintains, wholly implausibly, that he resembles Sean Connery. He lives in a four-bedroom Tudor with twenty-six Russian Orthodox icons, nineteen Roman heads, and four thousand books, and he rises at dawn to spend an hour writing. He is the author of two books, the first a technical and highly regarded work on derivatives, and the second a treatise entitled Fooled by Randomness, which is to conventional Wall Street wisdom approximately what Martin Luther’s Ninety-five Theses were to the Catholic Church. Some afternoons, he drives into the city and attends a philosophy lecture at City University. During the school year, in the evenings, he teaches a graduate course in finance at New York University, after which he can often be found at the bar at Odeon Café in Tribeca, holding forth, say, on the finer points of stochastic volatility or his veneration of the Greek poet C. P. Cavafy.

Taleb runs Empirica Capital out of an anonymous concrete office park somewhere in the woods outside Greenwich, Connecticut. His offices consist, principally, of a trading floor about the size of a Manhattan studio apartment. Taleb sits in one corner, in front of a laptop, surrounded by the rest of his team—Mark Spitznagel, the chief trader; another trader, named Danny Tosto; a programmer named Winn Martin; and a graduate student named Pallop Angsupun. Mark Spitznagel is perhaps thirty. Winn, Danny, and Pallop look as if they belong in high school. The room has an overstuffed bookshelf in one corner, and a television muted and tuned to CNBC. There are two ancient Greek heads, one next to Taleb’s computer and the other, somewhat bafflingly, on the floor, next to the door, as if it were being set out for the trash. There is almost nothing on the walls, except for a slightly battered poster for an exhibition of Greek artifacts, the snapshot of the mullah, and a small pen-and-ink drawing of the patron saint of Empirica Capital, the philosopher Karl Popper.

On a recent spring morning, the staff of Empirica were concerned with solving a thorny problem having to do with the square root of n, where n is the number of observations in a random sample, and what relation n might have to a speculator’s confidence in his estimations. Taleb was up at a whiteboard by the door, his marker squeaking furiously as he scribbled possible solutions. Spitznagel and Pallop looked on intently. Spitznagel is blond and from the Midwest and does yoga: in contrast to Taleb, he exudes a certain laconic levelheadedness. In a bar, Taleb would pick a fight. Spitznagel would break it up. Pallop is of Thai extraction and is doing a PhD in financial mathematics at Princeton. He has longish black hair and a slightly quizzical air. “Pallop is very lazy,” Taleb will remark, to no one in particular, several times over the course of the day, although this is said with such affection that it suggests that laziness, in the Talebian nomenclature, is a synonym for genius. Pallop’s computer was untouched and he often turned his chair around so that he faced completely away from his desk. He was reading a book by the cognitive psychologists Amos Tversky and Daniel Kahneman, whose arguments, he said a bit disappointedly, were “not really quantifiable.” The three argued back and forth about the solution. It appeared that Taleb might be wrong, but before the matter could be resolved the markets opened. Taleb returned to his desk and began to bicker with Spitznagel about what exactly would be put on the company boom box. Spitznagel plays the piano and the French horn and has appointed himself the Empirica DJ. He wanted to play Mahler, and Taleb does not like Mahler. “Mahler is not good for volatility,” Taleb complained. “Bach is good. St. Matthew’s Passion!” Taleb gestured toward Spitznagel, who was wearing a gray woolen turtleneck. “Look at him. He wants to be like von Karajan, like someone who wants to live in a castle. Technically superior to the rest of us. No chitchatting. Top skier. That’s Mark!” As Spitznagel rolled his eyes, a man whom Taleb refers to, somewhat mysteriously, as Dr. Wu wandered in. Dr. Wu works for another hedge fund, down the hall, and is said to be brilliant. He is thin and squints through black-rimmed glasses. He was asked his opinion on the square root of n but declined to answer. “Dr. Wu comes here for intellectual kicks and to borrow books and to talk music with Mark,” Taleb explained after their visitor had drifted away. He added darkly, “Dr. Wu is a Mahlerian.”
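
We are not told what was on the whiteboard, but the classical result lurking behind any such problem is that the precision of an estimate grows only with the square root of the number of observations: quadruple your data, and your confidence merely doubles. A quick empirical check, under that assumption:

```python
import math
import random
import statistics

# Classical statistics, not Empirica's actual problem: the standard error
# of an estimated mean shrinks like 1/sqrt(n).
random.seed(2)

def std_error_of_mean(n, trials=2000):
    # Estimate the mean of a noisy signal from n observations, many times
    # over, and measure how much the estimate itself wobbles.
    estimates = [statistics.mean(random.gauss(0, 1) for _ in range(n))
                 for _ in range(trials)]
    return statistics.stdev(estimates)

for n in (25, 100, 400):
    print(f"n = {n:3d}: observed spread {std_error_of_mean(n):.3f}   "
          f"theory 1/sqrt(n) = {1 / math.sqrt(n):.3f}")
```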

Empirica follows a very particular investment strategy. It trades options, which is to say that it deals not in stocks and bonds but with bets on stocks and bonds. Imagine, for example, that General Motors stock is trading at $50, and imagine that you are a major investor on Wall Street. An options trader comes up to you with a proposition. What if, within the next three months, he decides to sell you a share of GM at $45? How much would you charge for agreeing to buy it at that price? You would look at the history of GM and see that in a three-month period it has rarely dropped 10 percent, and obviously the trader is only going to make you buy his GM at $45 if the stock drops below that point. So you say you’ll make that promise, or sell that option, for a relatively small fee, say, a dime. You are betting on the high probability that GM stock will stay relatively calm over the next three months, and if you are right, you’ll pocket the dime as pure profit. The trader, on the other hand, is betting on the unlikely event that GM stock will drop a lot, and if that happens, his profits are potentially huge. If the trader bought a million options from you at a dime each and GM drops to $35, he’ll buy a million shares at $35 and turn around and force you to buy them at $45, making himself suddenly very rich and you substantially poorer.
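
The payoff arithmetic in that example is easy to make explicit. Here is a minimal sketch of the option writer's profit and loss, with commissions and interest ignored:

```python
def writer_pnl(price_at_expiry, strike=45.0, premium=0.10, n_options=1_000_000):
    """P&L for the investor who sold the option in the GM example
    (commissions and interest ignored)."""
    # The trader exercises only if GM ends up below $45: he buys at the
    # market price and forces the writer to pay the $45 strike.
    loss_if_exercised = max(strike - price_at_expiry, 0.0) * n_options
    return premium * n_options - loss_if_exercised

print(writer_pnl(52.0))   #  100000.0  -> GM stays calm; you keep the dimes
print(writer_pnl(45.0))   #  100000.0  -> at the strike, still just the premium
print(writer_pnl(35.0))   # -9900000.0 -> GM at $35: the trader's $9.9 million gain
```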

That particular transaction is called, in the argot of Wall Street, an out-of-the-money option. But an option can be configured in a vast number of ways. You could sell the trader a GM option at $30, or, if you wanted to bet against GM stock going up, you could sell a GM option at $60. You could sell or buy options on bonds, on the S&P index, on foreign currencies, or mortgages, or on the relationship among any number of financial instruments of your choice; you can bet on the market booming, or the market crashing, or the market staying the same. Options allow investors to gamble heavily and turn one dollar into ten. They also allow investors to hedge their risk. The reason your pension fund may not be wiped out in the next crash is that it has protected itself by buying options. What drives the options game is the notion that the risks represented by all of these bets can be quantified; that by looking at the past behavior of GM, you can figure out the exact chance of GM hitting $45 in the next three months, and whether at $1 that option is a good or a bad investment. The process is a lot like the way insurance companies analyze actuarial statistics in order to figure out how much to charge for a life-insurance premium, and to make those calculations every investment bank has, on staff, a team of PhDs, physicists from Russia, applied mathematicians from China, and computer scientists from India. On Wall Street, those PhDs are called quants.
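
As a sketch of that actuarial style of reasoning (with simulated rather than real GM returns), one could estimate the probability of the stock finishing below the strike straight from a history of three-month moves, and read off a "fair" premium:

```python
import random

# Simulated return history; a real desk would use GM's actual price data.
random.seed(3)
three_month_returns = [random.gauss(0.02, 0.08) for _ in range(400)]

spot, strike = 50.0, 45.0
endings = [spot * (1 + r) for r in three_month_returns]

# What fraction of historical three-month moves would leave GM below $45,
# and what would the option have paid out on average?
prob_below = sum(1 for p in endings if p < strike) / len(endings)
avg_payout = sum(max(strike - p, 0.0) for p in endings) / len(endings)

print(f"estimated P(GM below $45 in 3 months): {prob_below:.1%}")
print(f"historical 'fair' premium per option:  ${avg_payout:.3f}")
# Taleb's objection, in one line: all of this assumes the future is drawn
# from the same distribution as the past.
```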

Nassim Taleb and his team at Empirica are quants. But they reject the quant orthodoxy, because they don’t believe that things like the stock market behave in the way that physical phenomena like mortality statistics do. Physical events, whether death rates or poker games, are the predictable function of a limited and stable set of factors, and tend to follow what statisticians call a normal distribution, a bell curve. But do the ups and downs of the market follow a bell curve? The economist Eugene Fama once studied stock prices and pointed out that if they followed a normal distribution, you’d expect a really big jump, what he specified as a movement five standard deviations from the mean, once every seven thousand years. In fact, jumps of that magnitude happen in the stock market every three or four years, because investors don’t behave with any kind of statistical orderliness. They change their mind. They do stupid things. They copy one another. They panic. Fama concluded that if you charted the ups and downs of the stock market, the graph would have a “fat tail,” meaning that at the upper and lower ends of the distribution there would be many more outlying events than statisticians used to modeling the physical world would have imagined.
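
Fama's figure can be reproduced in a few lines. Under a normal distribution, the chance of a daily move of five or more standard deviations, in either direction, works out to roughly one day in seven thousand years of trading, assuming about 252 trading days a year:

```python
from math import erf, sqrt

# Under a bell curve, how often should the market move five or more
# standard deviations in a single day?
def normal_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

p = 2 * (1 - normal_cdf(5))          # two-sided tail probability
trading_days = 252                   # roughly one year of trading

print(f"P(|daily move| >= 5 sigma) = {p:.2e}")              # ~5.7e-07
print(f"expected about once every {1 / (p * trading_days):,.0f} years")
# ~6,900 years -- Fama's "once every seven thousand years." Real markets
# deliver such jumps every three or four years: the fat tail.
```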

In the summer of 1997, Taleb predicted that hedge funds like Long Term Capital Management were headed for trouble because they did not understand this notion of fat tails. Just a year later, LTCM sold an extraordinary number of options, because its computer models told it that the markets ought to be calming down. And what happened? The Russian government defaulted on its bonds; the markets went crazy; and in a matter of weeks LTCM was finished. Spitznagel, Taleb’s head trader, says that he recently heard one of the former top executives of LTCM give a lecture in which he defended the gamble that the fund had made. “What he said was, ‘Look, when I drive home every night in the fall I see all these leaves scattered around the base of the trees,’” Spitznagel recounts. “There is a statistical distribution that governs the way they fall, and I can be pretty accurate in figuring out what that distribution is going to be. But one day I came home and the leaves were in little piles. Does that falsify my theory that there are statistical rules governing how leaves fall? No. It was a man-made event.” In other words, the Russians, by defaulting on their bonds, did something that they were not supposed to do, a once-in-a-lifetime, rule-breaking event. But this, to Taleb, is just the point: in the markets, unlike in the physical universe, the rules of the game can be changed. Central banks can decide to default on government-backed securities.

One of Taleb’s earliest Wall Street mentors was a short-tempered Frenchman named Jean-Patrice, who dressed like a peacock and had an almost neurotic obsession with risk. Jean-Patrice would call Taleb from Regine’s at three in the morning, or take a meeting in a Paris nightclub, sipping champagne and surrounded by scantily clad women, and once Jean-Patrice asked Taleb what would happen to his positions if a plane crashed into his building. Taleb was young then and brushed him aside. It seemed absurd. But nothing, Taleb soon realized, is absurd. Taleb likes to quote David Hume: “No amount of observations of white swans can allow the inference that all swans are white, but the observation of a single black swan is sufficient to refute that conclusion.” Because LTCM had never seen a black swan in Russia, it thought no Russian black swans existed. Taleb, by contrast, has constructed a trading philosophy predicated entirely on the existence of black swans, on the possibility of some random, unexpected event sweeping the markets. He never sells options, then. He only buys them. He’s never the one who can lose a great deal of money if GM stock suddenly plunges. Nor does he ever bet on the market moving in one direction or another. That would require Taleb to assume that he understands the market, and he doesn’t. He hasn’t Warren Buffett’s confidence. So he buys options on both sides, on the possibility of the market moving both up and down. And he doesn’t bet on minor fluctuations in the market. Why bother? If everyone else is vastly underestimating the possibility of rare events, then an option on GM at, say, $40 is going to be undervalued. So Taleb buys out-of-the-money options by the truckload. He buys them for hundreds of different stocks, and if they expire before he gets to use them, he simply buys more. Taleb doesn’t even invest in stocks, not for Empirica and not for his own personal account. Buying a stock, unlike buying an option, is a gamble that the future will represent an improved version of the past. And who knows whether that will be true? So all of Taleb’s personal wealth, and the hundreds of millions that Empirica has in reserve, is in Treasury bills. Few on Wall Street have taken the practice of buying options to such extremes. But if anything completely out of the ordinary happens to the stock market, if some random event sends a jolt through all of Wall Street and pushes GM to, say, $20, Nassim Taleb will not end up in a dowdy apartment in Athens. He will be rich.

Not long ago, Taleb went to a dinner in a French restaurant just north of Wall Street. The people at the dinner were all quants: men with bulging pockets and open-collared shirts and the serene and slightly detached air of those who daydream in numbers. Taleb sat at the end of the table, drinking pastis and discussing French literature. There was a chess grand master at the table, with a shock of white hair, who had once been one of Anatoly Karpov’s teachers, and another man who over the course of his career had worked, in order, at Stanford University, Exxon, Los Alamos National Laboratory, Morgan Stanley, and a boutique French investment bank. They talked about mathematics and chess and fretted about one of their party who had not yet arrived and who had the reputation, as one of the quants worriedly said, of “not being able to find the bathroom.” When the check came, it was given to a man who worked in risk management at a big Wall Street bank, and he stared at it for a long time, with a slight mixture of perplexity and amusement, as if he could not remember what it was like to deal with a mathematical problem of such banality. The men at the table were in a business that was formally about mathematics but was really about epistemology, because to sell or to buy an option requires each party to confront the question of what it is he truly knows. Taleb buys options because he is certain that, at root, he knows nothing, or, more precisely, that other people believe they know more than they do. But there were plenty of people around that table who sold options, who thought that if you were smart enough to set the price of the option properly, you could win so many of those $1 bets on General Motors that, even if the stock ever did dip below $45, you’d still come out far ahead. They believe that the world is a place where, at the end of the day, leaves fall more or less in a predictable pattern.

The distinction between these two sides is the divide that emerged between Taleb and Niederhoffer all those years ago in Connecticut. Niederhoffer’s hero is the nineteenth-century scientist Francis Galton. Niederhoffer called his eldest daughter Galt, and there is a full-length portrait of Galton in his library. Galton was a statistician and a social scientist (and a geneticist and a meteorologist), and if he was your hero, you believed that by marshaling empirical evidence, by aggregating data points, you could learn whatever it was you needed to know. Taleb’s hero, on the other hand, is Karl Popper, who said that you could not know with any certainty that a proposition was true; you could only know that it was not true. Taleb makes much of what he learned from Niederhoffer, but Niederhoffer insists that his example was wasted on Taleb. “In one of his cases, Rumpole of the Bailey talked about being tried by the bishop who doesn’t believe in God,” Niederhoffer says. “Nassim is the empiricist who doesn’t believe in empiricism.” What is it that you claim to learn from experience, if you believe that experience cannot be trusted? Today, Niederhoffer makes a lot of his money selling options, and more often than not the person to whom he sells those options is Nassim Taleb. If one of them is up a dollar one day, in other words, that dollar is likely to have come from the other. The teacher and pupil have become predator and prey.

3.

Years ago, Nassim Taleb worked at the investment bank First Boston, and one of the things that puzzled him was what he saw as the mindless industry of the trading floor. A trader was supposed to come in every morning and buy and sell things, and on the basis of how much money he made buying and selling he was given a bonus. If he went too many weeks without showing a profit, his peers would start to look at him funny, and if he went too many months without showing a profit, he would be gone. The traders were for the most part well educated and wore Savile Row suits and Ferragamo ties. They dove into the markets with a frantic urgency. They read the Wall Street Journal closely and gathered around the television to catch breaking news. “The Fed did this, the Prime Minister of Spain did that,” Taleb recalls. “The Italian Finance Minister says there will be no competitive devaluation, this number is higher than expected, Abby Cohen just said this.” It was a scene that Taleb did not understand.

“He was always so conceptual about what he was doing,” says Howard Savery, who was Taleb’s assistant at the French bank Indosuez in the 1980s. “He used to drive our floor trader (his name was Tim) crazy. Floor traders are used to precision: ‘Sell a hundred futures at eighty-seven.’ Nassim would pick up the phone and say, ‘Tim, sell some.’ And Tim would say, ‘How many?’ And he would say, ‘Oh, a social amount.’ It was like saying, ‘I don’t have a number in mind, I just know I want to sell.’ There would be these heated arguments in French, screaming arguments. Then everyone would go out to dinner and have fun. Nassim and his group had this attitude that we’re not interested in knowing what the new trade number is. When everyone else was leaning over their desks, listening closely to the latest figures, Nassim would make a big scene of walking out of the room.”

At Empirica, then, there are no Wall Street Journals to be found. There is very little active trading, because the options that the fund owns are selected by computer. Most of those options will be useful only if the market does something dramatic, and, of course, on most days the market doesn’t. So the job of Taleb and his team is to wait and to think. They analyze the company’s trading policies, back-test various strategies, and construct ever more sophisticated computer models of options pricing. Danny, in the corner, occasionally types things into the computer. Pallop looks dreamily off into the distance. Spitznagel takes calls from traders, and toggles back and forth between screens on his computer. Taleb answers e-mails and calls one of the firm’s brokers in Chicago, affecting, as he does, the kind of Brooklyn accent that people from Brooklyn would have if they were actually from northern Lebanon: “Howyoudoin?” It is closer to a classroom than to a trading floor.

“Pallop, did you introspect?” Taleb calls out as he wanders back in from lunch. Pallop is asked what his PhD is about. “Pretty much this,” he says, waving a languid hand around the room.

“It looks like we will have to write it for him,” Taleb chimes in, “because Pallop is very lazy.”

What Empirica has done is to invert the traditional psychology of investing. You and I, if we invest conventionally in the market, have a fairly large chance of making a small amount of money in a given day from dividends or interest or the general upward trend of the market. We have almost no chance of making a large amount of money in one day, and there is a very small, but real, possibility that if the market collapses we could blow up. We accept that distribution of risks because, for fundamental reasons, it feels right. In the book that Pallop was reading by Kahneman and Tversky, for example, there is a description of a simple experiment in which a group of people were told to imagine that they had $300. They were then given a choice between (a) receiving another $100 or (b) tossing a coin, where if they won they got $200 and if they lost they got nothing. Most of us, it turns out, prefer (a) to (b). But then Kahneman and Tversky did a second experiment. They told people to imagine that they had $500 and then asked them if they would rather (c) give up $100 or (d) toss a coin and pay $200 if they lost and nothing at all if they won. Most of us now prefer (d) to (c). What is interesting about those four choices is that, from a probabilistic standpoint, the two pairs are identical: (a) and (c) both leave you with a sure $400, and (b) and (d) are both an even-odds gamble between $300 and $500. Nonetheless, we have strong preferences among them. Why? Because we’re more willing to gamble when it comes to losses, but risk averse when it comes to gains. That’s why we like small daily winnings in the stock market, even if that requires that we risk losing everything in a crash.
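A few lines of arithmetic make that equivalence explicit. The sketch below simply restates the four choices as distributions over final wealth; it is an illustration of the reasoning above, not anything from Kahneman and Tversky’s own materials:

```python
# Restate the four choices as distributions over final wealth, to check
# the claim that the pairs are identical. The dollar figures come from
# the experiment described above; the code is just arithmetic.

choices = {
    "(a) have $300, take another $100":     [(1.0, 300 + 100)],
    "(b) have $300, coin flip for $200":    [(0.5, 300 + 200), (0.5, 300)],
    "(c) have $500, give up $100":          [(1.0, 500 - 100)],
    "(d) have $500, coin flip to pay $200": [(0.5, 500 - 200), (0.5, 500)],
}

for name, outcomes in choices.items():
    expected = sum(p * wealth for p, wealth in outcomes)
    spread = sorted(wealth for _, wealth in outcomes)
    print(f"{name}: final wealth {spread}, expected ${expected:.0f}")

# (a) and (c) are both a sure $400; (b) and (d) are both a 50-50 chance
# of $300 or $500. Preferring (a) over (b) but (d) over (c) means taking
# the sure thing for gains and the gamble for losses.
```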

At Empirica, by contrast, every day brings a small but real possibility that they’ll make a huge amount of money in a day; no chance that they’ll blow up; and a very large possibility that they’ll lose a small amount of money. All those dollar, and fifty-cent, and nickel options that Empirica has accumulated, few of which will ever be used, soon begin to add up. By looking at a particular column on the computer screens showing Empirica’s positions, anyone at the firm can tell you precisely how much money Empirica has lost or made so far that day. At 11:30 a.m., for instance, they had recovered just 28 percent of the money they had spent that day on options. By 12:30, they had recovered 40 percent, meaning that the day was not yet half over and Empirica was already in the red to the tune of several hundred thousand dollars. The day before that, it had made back 85 percent of its money; the day before that, 48 percent; the day before that, 65 percent; and the day before that also 65 percent; and, in fact, with a few notable exceptions—like the few days when the market reopened after September 11—Empirica has done nothing but lose money since last April. “We cannot blow up, we can only bleed to death,” Taleb says, and bleeding to death, absorbing the pain of steady losses, is precisely what human beings are hardwired to avoid. “Say you’ve got a guy who is long on Russian bonds,” Savery says. “He’s making money every day. One day, lightning strikes and he loses five times what he made. Still, on three hundred and sixty-four out of three hundred and sixty-five days he was very happily making money. It’s much harder to be the other guy, the guy losing money three hundred and sixty-four days out of three hundred and sixty-five, because you start questioning yourself. Am I ever going to make it back? Am I really right? What if it takes ten years? Will I even be sane ten years from now?” What the normal trader gets from his daily winnings is feedback, the pleasing illusion of progress. At Empirica, there is no feedback. “It’s like you’re playing the piano for ten years and you still can’t play ‘Chopsticks,’” Spitznagel says, “and the only thing you have to keep you going is the belief that one day you’ll wake up and play like Rachmaninoff.” Was it easy knowing that Niederhoffer—who represented everything they thought was wrong—was out there getting rich while they were bleeding away? Of course it wasn’t. If you watched Taleb closely that day, you could see the little ways in which the steady drip of losses was taking its toll. He glanced a bit too much at the Bloomberg. He leaned forward a bit too often to see the daily loss count. He succumbed to an array of superstitious tics. If the going was good, he parked in the same space every day; he turned against Mahler because he associated Mahler with the last year’s long dry spell. “Nassim says all the time that he needs me there, and I believe him,” Spitznagel says. He is there to remind Taleb that there is a point to waiting, to help Taleb resist the very human impulse to abandon everything and stanch the pain of losing. “Mark is my cop,” Taleb says. So is Pallop: he is there to remind Taleb that Empirica has the intellectual edge.
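The asymmetry Savery describes can be mocked up in a few lines. In this toy simulation (all parameters invented, and the jackpot deliberately sized so both strategies have an expected value of zero), the seller and the buyer face the identical stream of events; only the day-to-day experience differs:

```python
import random

# A toy of the two temperaments: the seller collects a small premium
# almost every day and rarely takes a large hit; the buyer bleeds a
# small premium almost every day and rarely collects a large payoff.

random.seed(1)
DAYS, P_JOLT, PREMIUM = 10_000, 0.005, 1.0
JACKPOT = PREMIUM / P_JOLT   # sized so both sides break even in expectation

jolts = [random.random() < P_JOLT for _ in range(DAYS)]

def daily_pnl(jolt: bool, selling: bool) -> float:
    if selling:   # keep the premium; pay the jackpot on a jolt day
        return PREMIUM - (JACKPOT if jolt else 0.0)
    return (JACKPOT if jolt else 0.0) - PREMIUM   # the buyer's mirror image

for label, selling in (("option seller", True), ("option buyer", False)):
    series = [daily_pnl(j, selling) for j in jolts]
    up_days = sum(x > 0 for x in series)
    print(f"{label}: up on {up_days} of {DAYS} days, "
          f"sample total {sum(series):+.0f}")
```

The seller is up on well over ninety-nine days in a hundred; the buyer, on a handful. Identical expectation, opposite psychology.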

“The key is not having the ideas but having the recipe to deal with your ideas,” Taleb says. “We don’t need moralizing. We need a set of tricks.” His trick is a protocol that stipulates precisely what has to be done in every situation. “We built the protocol, and the reason we did was to tell the guys, Don’t listen to me, listen to the protocol. Now, I have the right to change the protocol, but there is a protocol to changing the protocol. We have to be hard on ourselves to do what we do. The bias we see in Niederhoffer we see in ourselves.” At the quant dinner, Taleb devoured his roll, and as the busboy came around with more rolls Taleb shouted out, “No, no!” and blocked his plate. It was a never-ending struggle, this battle between head and heart. When the waiter came around with wine, he hastily covered the glass with his hand. When the time came to order, he asked for steak frites—“without the frites, please!”—and then immediately tried to hedge his choice by negotiating with the person next to him for a fraction of his frites.

The psychologist Walter Mischel has done a series of experiments where he puts a young child in a room and places two cookies in front of him, one small and one large. The child is told that if he wants the small cookie he need only ring a bell and the experimenter will come back into the room and give it to him. If he wants the better treat, though, he has to wait until the experimenter returns on his own, which might be anytime in the next twenty minutes. Mischel has videotapes of six-year-olds sitting in the room by themselves, staring at the cookies, trying to persuade themselves to wait. One girl starts to sing to herself. She whispers what seems to be the instructions—that she can have the big cookie if she can only wait. She closes her eyes. Then she turns her back on the cookies. Another little boy swings his legs violently back and forth, and then picks up the bell and examines it, trying to do anything but think about the cookie he could get by ringing it. The tapes document the beginnings of discipline and self-control—the techniques we learn to keep our impulses in check—and to watch all the children desperately distracting themselves is to experience the shock of recognition: that’s Nassim Taleb!

There is something else as well that helps to explain Taleb’s resolve—more than the tics and the systems and the self-denying ordinances. It happened a year or so before he went to see Niederhoffer. Taleb had been working as a trader at the Chicago Mercantile Exchange, and he’d developed a persistently hoarse throat. At first, he thought nothing of it: a hoarse throat was an occupational hazard of spending every day in the pit. Finally, when he moved back to New York, he went to see a doctor, in one of those Upper East Side prewar buildings with a glamorous facade. Taleb sat in the office, staring out at the plain brick of the courtyard, reading the medical diplomas on the wall over and over, waiting and waiting for the verdict. The doctor returned and spoke in a low, grave voice: “I got the pathology report. It’s not as bad as it sounds.” But, of course, it was: he had throat cancer. Taleb’s mind shut down. He left the office. It was raining outside. He walked and walked and ended up at a medical library. There he read frantically about his disease, the rainwater forming a puddle under his feet. It made no sense. Throat cancer was the disease of someone who has spent a lifetime smoking heavily. But Taleb was young, and he barely smoked at all. His risk of getting throat cancer was something like one in a hundred thousand, almost unimaginably small. He was a black swan! The cancer is now beaten, but the memory of it is also Taleb’s secret, because once you have been a black swan—not just seen one but lived and faced death as one—it becomes easier to imagine another on the horizon.

As the day came to an end, Taleb and his team turned their attention once again to the problem of the square root of n. Taleb was back at the whiteboard. Spitznagel was looking on. Pallop was idly peeling a banana. Outside, the sun was beginning to settle behind the trees. “You do a conversion to p1 and p2,” Taleb said. His marker was once again squeaking across the whiteboard. “We say we have a Gaussian distribution, and you have the market switching from a low-volume regime to a high-volume one. P21. P22. You have your eigenvalue.” He frowned and stared at his handiwork. The markets were now closed. Empirica had lost money, which meant that somewhere off in the woods of Connecticut Niederhoffer had no doubt made money. That hurt, but if you steeled yourself and thought about the problem at hand, and kept in mind that someday the market would do something utterly unexpected because in the world we live in something utterly unexpected always happens, then the hurt was not so bad. Taleb eyed his equations on the whiteboard and arched an eyebrow. It was a very difficult problem. “Where is Dr. Wu? Should we call in Dr. Wu?”
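The fragment on the whiteboard reads like a two-state regime-switching model, in which p21 and p22 are transition probabilities and the eigenvalues of the transition matrix govern how persistent each regime is. The sketch below is only a guess at that object, with made-up numbers, offered to unpack the vocabulary rather than to reconstruct Taleb’s actual calculation:

```python
import numpy as np

# A hypothetical two-state Markov switch between a low-volume and a
# high-volume regime. Row i gives the probabilities of moving from
# regime i to each regime, so p21 and p22 are the second row.

P = np.array([[0.95, 0.05],    # low-volume  -> (low, high)
              [0.20, 0.80]])   # high-volume -> (low, high): p21, p22

eigenvalues = sorted(np.linalg.eigvals(P).real, reverse=True)
print("eigenvalues:", [round(v, 3) for v in eigenvalues])
# -> [1.0, 0.75]; the largest is always 1, and the second measures how
#    slowly the market forgets which regime it is currently in.
```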

4.

A year after Nassim Taleb came to visit him, Victor Niederhoffer blew up. He sold a very large number of options on the S&P index, taking millions of dollars from other traders in exchange for promising to buy a basket of stocks from them at current prices, if the market ever fell. It was an unhedged bet, or what was called on Wall Street a naked put, meaning that he bet everything on one outcome: he bet in favor of the large probability of making a small amount of money, and against the small probability of losing a large amount of money—and he lost. On October 27, 1997, the market plummeted 8 percent, and all of the many, many people who had bought those options from Niederhoffer came calling all at once, demanding that he buy back their stocks at pre-crash prices. He ran through $130,000,000—his cash reserves, his savings, his other stocks—and when his broker came and asked for still more, he didn’t have it. In a day, one of the most successful hedge funds in America was wiped out. Niederhoffer had to shut down his firm. He had to mortgage his house. He had to borrow money from his children. He had to call Sotheby’s and sell his prized silver collection—the massive nineteenth-century Brazilian “sculptural group of victory” made for the Visconde de Figueiredo, the massive silver bowl designed in 1887 by Tiffany & Co. for the James Gordon Bennett Cup yacht race, and on and on. He stayed away from the auction. He couldn’t bear to watch.
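In payoff terms, the trade is easy to state. The function below is a generic short-put profile with hypothetical numbers, not Niederhoffer’s actual position: the seller pockets a small premium unless the market falls through the strike, and the loss then grows with the size of the drop.

```python
# A generic naked-put payoff, per unit sold, with invented numbers.

def naked_put_pnl(premium: float, strike: float, market: float) -> float:
    """Seller's profit: premium kept, minus any shortfall below the strike."""
    return premium - max(strike - market, 0.0)

STRIKE, PREMIUM = 100.0, 2.0           # hypothetical index level and premium
for drop in (0.00, 0.02, 0.08, 0.20):  # 8% is the size of the 1997 break
    market = STRIKE * (1 - drop)
    print(f"market down {drop:4.0%}: seller P&L = "
          f"{naked_put_pnl(PREMIUM, STRIKE, market):+7.2f}")
```

On the quiet days the seller keeps his $2; on a day like October 27, 1997, he gives back several times everything he ever collected.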

“It was one of the worst things that has ever happened to me in my life, right up there with the death of those closest to me,” Niederhoffer said recently. It was a Saturday in March, and he was in the library of his enormous house. Two weary-looking dogs wandered in and out. He is a tall man, an athlete, thick through the upper body and trunk, with a long, imposing face and baleful, hooded eyes. He was shoeless. One collar on his shirt was twisted inward, and he looked away as he talked. “I let down my friends. I lost my business. I was a major money manager. Now I pretty much have had to start from ground zero.” He paused. “Five years have passed. The beaver builds a dam. The river washes it away, so he tries to build a better foundation, and I think I have. But I’m always mindful of the possibility of more failures.” In the distance, there was a knock on the door. It was a man named Milton Bond, an artist who had come to present Niederhoffer with a painting he had done of Moby Dick ramming the Pequod. It was in the folk-art style that Niederhoffer likes so much, and he went to meet Bond in the foyer, kneeling down in front of the painting as Bond unwrapped it. Niederhoffer has other paintings of the Pequod in his house, and paintings of the Essex, the ship on which Melville’s story was based. In his office, on a prominent wall, is a painting of the Titanic. They were, he said, his way of staying humble. “One of the reasons I’ve paid lots of attention to the Essex is that it turns out that the captain of the Essex, as soon as he got back to Nantucket, was given another job,” Niederhoffer said. “They thought he did a good job in getting back after the ship was rammed. The captain was asked, ‘How could people give you another ship?’ And he said, ‘I guess on the theory that lightning doesn’t strike twice.’ It was a fairly random thing. But then he was given the other ship, and that one foundered, too. Got stuck in the ice. At that time, he was a lost man. He wouldn’t even let them save him. They had to forcibly remove him from the ship. He spent the rest of his life as a janitor in Nantucket. He became what on Wall Street they call a ghost.” Niederhoffer was back in his study now, his lanky body stretched out, his feet up on the table, his eyes a little rheumy. “You see? I can’t afford to fail a second time. Then I’ll be a total washout. That’s the significance of the Pequod.”

A month or so before Niederhoffer blew up, Taleb had dinner with him at a restaurant in Westport, and Niederhoffer told him that he had been selling naked puts. You can imagine the two of them across the table from each other, Niederhoffer explaining that his bet was an acceptable risk, that the odds of the market going down so heavily that he would be wiped out were minuscule, and Taleb listening and shaking his head, and thinking about black swans. “I was depressed when I left him,” Taleb said. “Here is a guy who goes out and hits a thousand backhands. He plays chess like his life depends on it. Here is a guy who, whatever he wants to do when he wakes up in the morning, he ends up doing better than anyone else. I was talking to my hero…” This was the reason Taleb didn’t want to be Niederhoffer when Niederhoffer was at his height—the reason he didn’t want the silver and the house and the tennis matches with George Soros. He could see all too clearly where it all might end up. In his mind’s eye, he could envision Niederhoffer borrowing money from his children, and selling off his silver, and talking in a hollow voice about letting down his friends, and Taleb did not know if he had the strength to live with that possibility. Unlike Niederhoffer, Taleb never thought he was invincible. You couldn’t if you had watched your homeland blow up, and had been the one person in a hundred thousand who gets throat cancer, and so for Taleb there was never any alternative to the painful process of insuring himself against catastrophe.

This kind of caution does not seem heroic, of course. It seems like the joyless prudence of the accountant and the Sunday school teacher. The truth is that we are drawn to the Niederhoffers of this world because we are all, at heart, like Niederhoffer: we associate the willingness to risk great failure—and the ability to climb back from catastrophe—with courage. But in this we are wrong. That is the lesson of Taleb and Niederhoffer, and also the lesson of our volatile times. There is more courage and heroism in defying the human impulse, in taking the purposeful and painful steps to prepare for the unimaginable.

In the fall of 2001, Niederhoffer sold a large number of options, betting that the markets would be quiet, and they were, until out of nowhere two planes crashed into the World Trade Center. “I was exposed. It was nip and tuck.” Niederhoffer shook his head, because there was no way to have anticipated September 11. “That was a totally unexpected event.”[2]

April 22 and 29, 2002

True Colors

HAIR DYE AND THE HIDDEN HISTORY OF POSTWAR AMERICA

1.

During the Depression—long before she became one of the most famous copywriters of her day—Shirley Polykoff met a man named George Halperin. He was the son of an Orthodox rabbi from Reading, Pennsylvania, and soon after they began courting he took her home for Passover to meet his family. They ate roast chicken, tzimmes, and sponge cake, and Polykoff hit it off with Rabbi Halperin, who was warm and funny. George’s mother was another story. She was Old World Orthodox, with severe, tightly pulled back hair; no one was good enough for her son.

“How’d I do, George?” Shirley asked as soon as they got in the car for the drive home. “Did your mother like me?”

He was evasive. “My sister Mildred thought you were great.”

“That’s nice, George,” she said. “But what did your mother say?”

There was a pause. “She says you paint your hair.” Another pause. “Well, do you?”

Shirley Polykoff was humiliated. In her mind she could hear her future mother-in-law: Fahrbt zi der huer? Oder fahrbt zi nisht? Does she color her hair? Or doesn’t she?

The answer, of course, was that she did. Shirley Polykoff always dyed her hair, even in the days when the only women who went blond were chorus girls and hookers. At home in Brooklyn, starting when she was fifteen, she would go to Mr. Nicholas’s beauty salon, one flight up, and he would “lighten the back” until all traces of her natural brown were gone. She thought she ought to be a blonde—or, to be more precise, she thought that the decision about whether she could be a blonde was rightfully hers, and not God’s. Shirley dressed in deep oranges and deep reds and creamy beiges and royal hues. She wore purple suede and aqua silk, and was the kind of person who might take a couture jacket home and embroider some new detail on it. Once, in the days when she had her own advertising agency, she was on her way to Memphis to make a presentation to Maybelline and her taxi broke down in the middle of the expressway. She jumped out and flagged down a Pepsi-Cola truck, and the truck driver told her he had picked her up because he’d never seen anyone quite like her before. “Shirley would wear three outfits, all at once, and each one of them would look great,” Dick Huebner, who was her creative director, says. She was flamboyant and brilliant and vain in an irresistible way, and it was her conviction that none of those qualities went with brown hair. The kind of person she spent her life turning herself into did not go with brown hair. Shirley’s parents were Hyman Polykoff, small-time necktie merchant, and Rose Polykoff, housewife and mother, of East New York and Flatbush, by way of the Ukraine. Shirley ended up on Park Avenue at Eighty-second. “If you asked my mother ‘Are you proud to be Jewish?’ she would have said yes,” her daughter, Alix Nelson Frick, says. “She wasn’t trying to pass. But she believed in the dream, and the dream was that you could acquire all the accouterments of the established affluent class, which included a certain breeding and a certain kind of look. Her idea was that you should be whatever you want to be, including being a blonde.”

In 1956, when Shirley Polykoff was a junior copywriter at Foote, Cone & Belding, she was given the Clairol account. The product the company was launching was Miss Clairol, the first hair-color bath that made it possible to lighten, tint, condition, and shampoo at home, in a single step—to take, say, Topaz (for a champagne blond) or Moon Gold (for a medium ash), apply it in a peroxide solution directly to the hair, and get results in twenty minutes. When the Clairol sales team demonstrated their new product at the International Beauty Show, in the old Statler Hotel, across from Madison Square Garden, thousands of assembled beauticians jammed the hall and watched, openmouthed, demonstration after demonstration. “They were astonished,” recalls Bruce Gelb, who ran Clairol for years, along with his father, Lawrence, and his brother Richard. “This was to the world of hair color what computers were to the world of adding machines. The sales guys had to bring buckets of water and do the rinsing off in front of everyone, because the hairdressers in the crowd were convinced we were doing something to the models behind the scenes.”

Miss Clairol gave American women the ability, for the first time, to color their hair quickly and easily at home. But there was still the stigma—the prospect of the disapproving mother-in-law. Shirley Polykoff knew immediately what she wanted to say, because if she believed that a woman had a right to be a blonde, she also believed that a woman ought to be able to exercise that right with discretion. “Does she or doesn’t she?” she wrote, translating from the Yiddish to the English. “Only her hairdresser knows for sure.” Clairol bought thirteen ad pages in Life in the fall of 1956, and Miss Clairol took off like a bird. That was the beginning. For Nice ’n Easy, Clairol’s breakthrough shampoo-in hair color, she wrote, “The closer he gets, the better you look.” For Lady Clairol, the cream-and-bleach combination that brought silver and platinum shades to Middle America, she wrote, “Is it true blondes have more fun?” and then, even more memorably, “If I’ve only one life, let me live it as a blonde!” (In the summer of 1962, just before The Feminine Mystique was published, Betty Friedan was, in the words of her biographer, so “bewitched” by that phrase that she bleached her hair.) Shirley Polykoff wrote the lines; Clairol perfected the product. And from the fifties to the seventies, when Polykoff gave up the account, the number of American women coloring their hair rose from 7 percent to more than 40 percent.

Today, when women go from brown to blond to red to black and back again without blinking, we think of hair-color products the way we think of lipstick. On drugstore shelves there are bottles and bottles of hair-color products with names like Hydrience and Excellence and Preference and Natural Instincts and Loving Care and Nice ’n Easy, and so on, each in dozens of different shades. Feria, the new, youth-oriented brand from L’Oréal, comes in Chocolate Cherry and Champagne Cocktail—colors that don’t ask “Does she or doesn’t she?” but blithely assume “Yes, she does.” Hair dye is now a billion-dollar-a-year commodity.

Yet there was a time, not so long ago—between, roughly speaking, the start of Eisenhower’s administration and the end of Carter’s—when hair color meant something. Lines like “Does she or doesn’t she?” or the famous 1973 slogan for L’Oréal’s Preference—“Because I’m worth it”—were as instantly memorable as “Winston tastes good like a cigarette should” or “Things go better with Coke.” They lingered long after advertising usually does and entered the language; they somehow managed to take on meanings well outside their stated intention. Between the fifties and the seventies, women entered the workplace, fought for social emancipation, got the Pill, and changed what they did with their hair. To examine the hair-color campaigns of the period is to see, quite unexpectedly, all these things as bound up together, the profound with the seemingly trivial. In writing the history of women in the postwar era, did we forget something important? Did we leave out hair?

2.

When the “Does she or doesn’t she?” campaign first ran, in 1956, most advertisements that were aimed at women tended to be high glamour—“cherries in the snow, fire and ice,” as Bruce Gelb puts it. But Shirley Polykoff insisted that the models for the Miss Clairol campaign be more like the girl next door—“Shirtwaist types instead of glamour gowns,” she wrote in her original memo to Clairol. “Cashmere-sweater-over-the-shoulder types. Like larger-than-life portraits of the proverbial girl on the block who’s a little prettier than your wife and lives in a house slightly nicer than yours.” The model had to be a Doris Day type—not a Jayne Mansfield—because the idea was to make hair color as respectable and mainstream as possible. One of the earliest “Does she or doesn’t she?” television commercials featured a housewife in the kitchen preparing hors d’oeuvres for a party. She is slender and pretty and wearing a black cocktail dress and an apron. Her husband comes in, kisses her on the lips, approvingly pats her very blond hair, then holds the kitchen door for her as she takes the tray of hors d’oeuvres out for her guests. It is an exquisitely choreographed domestic tableau, down to the little dip the housewife performs as she hits the kitchen light switch with her elbow on her way out the door. In one of the early print ads—which were shot by Richard Avedon and then by Irving Penn—a woman with strawberry-blond hair is lying on the grass, holding a dandelion between her fingers, and lying next to her is a girl of about eight or nine. What’s striking is that the little girl’s hair is the same shade of blond as her mother’s. The “Does she or doesn’t she?” print ads always included a child with the mother to undercut the sexual undertones of the slogan—to make it clear that mothers were using Miss Clairol, and not just “fast” women—and, most of all, to provide a precise color match. Who could ever guess, given the comparison, that Mom’s shade came out of a bottle?

The Polykoff campaigns were a sensation. Letters poured in to Clairol. “Thank you for changing my life,” read one, which was circulated around the company and used as the theme for a national sales meeting. “My boyfriend, Harold, and I were keeping company for five years but he never wanted to set a date. This made me very nervous. I am twenty-eight and my mother kept saying soon it would be too late for me.” Then, the letter writer said, she saw a Clairol ad in the subway. She dyed her hair blond, and “that is how I am in Bermuda now on my honeymoon with Harold.” Polykoff was sent a copy with a memo: “It’s almost too good to be true!” With her sentimental idyll of blond mother and child, Shirley Polykoff had created something iconic.

“My mother wanted to be that woman in the picture,” Polykoff’s daughter, Frick, says. “She was wedded to the notion of that suburban, tastefully dressed, well-coddled matron who was an adornment to her husband, a loving mother, a long-suffering wife, a person who never overshadowed him. She wanted the blond child. In fact, I was blond as a kid, but when I was about thirteen my hair got darker and my mother started bleaching it.” Of course—and this is the contradiction central to those early Clairol campaigns—Shirley Polykoff wasn’t really that kind of woman at all. She always had a career. She never moved to the suburbs. “She maintained that women were supposed to be feminine, and not too dogmatic and not overshadow their husband, but she greatly overshadowed my father, who was a very pure, unaggressive, intellectual type,” Frick says. “She was very flamboyant, very emotional, very dominating.”

One of the stories Polykoff told about herself repeatedly—and that even appeared after her death in her New York Times obituary—was that she felt that a woman never ought to make more than her husband, and that only after George’s death, in the early sixties, would she let Foote, Cone & Belding raise her salary to its deserved level. “That’s part of the legend, but it isn’t the truth,” Frick says. “The ideal was always as vividly real to her as whatever actual parallel reality she might be living. She never wavered in her belief in that dream, even if you would point out to her some of the fallacies of that dream, or the weaknesses, or the internal contradictions, or the fact that she herself didn’t really live her life that way.” For Shirley Polykoff, the color of her hair was a kind of useful fiction, a way of bridging the contradiction between the kind of woman she was and the kind of woman she felt she ought to be. It was a way of having it all. She wanted to look and feel like Doris Day without having to be Doris Day. In twenty-seven years of marriage, during which she bore two children, she spent exactly two weeks as a housewife, every day of which was a domestic and culinary disaster. “Listen, sweetie,” an exasperated George finally told her. “You make a lousy little woman in the kitchen.” She went back to work the following Monday.

This notion of the useful fiction—of looking the part without being the part—had a particular resonance for the America of Shirley Polykoff’s generation. As a teenager, Shirley Polykoff tried to get a position as a clerk at an insurance agency and failed. Then she tried again, at another firm, applying as Shirley Miller. This time, she got the job. Her husband, George, also knew the value of appearances. The week Polykoff first met him, she was dazzled by his worldly sophistication, his knowledge of out-of-the-way places in Europe, his exquisite taste in fine food and wine. The second week, she learned that his expertise was all show, derived from reading the Times. The truth was that George had started his career loading boxes in the basement of Macy’s by day and studying law at night. He was a faker, just as, in a certain sense, she was, because to be Jewish—or Irish or Italian or African-American or, for that matter, a woman of the fifties caught up in the first faint stirrings of feminism—was to be compelled to fake it in a thousand small ways, to pass as one thing when, deep inside, you were something else. “That’s the kind of pressure that comes from the immigrants’ arriving and thinking that they don’t look right, that they are kind of funny-looking and maybe shorter than everyone else, and their clothes aren’t expensive,” Frick says. “That’s why many of them began to sew, so they could imitate the patterns of the day. You were making yourself over. You were turning yourself into an American.” Frick, who is also in advertising (she’s the chairman of Spier NY), is a forcefully intelligent woman, who speaks of her mother with honesty and affection. “There were all those phrases that came to fruition at that time—you know, ‘clothes make the man’ and ‘first impressions count.’” So the question “Does she or doesn’t she?” wasn’t just about how no one could ever really know what you were doing. It was about how no one could ever really know who you were. It really meant not “Does she?” but “Is she?” It really meant “Is she a contented homemaker or a feminist, a Jew or a Gentile—or isn’t she?”

3.

In 1973, Ilon Specht was working as a copywriter at the McCann-Erickson advertising agency, in New York. She was a twenty-three-year-old college dropout from California. She was rebellious, unconventional, and independent, and she had come East to work on Madison Avenue, because that’s where people like that went to work back then. “It was a different business in those days,” Susan Schermer, a longtime friend of Specht’s, says. “It was the seventies. People were wearing feathers to work.” At her previous agency, while she was still in her teens, Specht had written a famous television commercial for the Peace Corps. (Single shot. No cuts. A young couple lying on the beach. “It’s a big, wide wonderful world” is playing on a radio. Voice-over recites a series of horrible facts about less fortunate parts of the world: in the Middle East half the children die before their sixth birthday, and so forth. A news broadcast is announced as the song ends, and the woman on the beach changes the station.)

“Ilon? Omigod! She was one of the craziest people I ever worked with,” Ira Madris, another colleague from those years, recalls, using the word crazy as the highest of compliments. “And brilliant. And dogmatic. And highly creative. We all believed back then that having a certain degree of neurosis made you interesting. Ilon had a degree of neurosis that made her very interesting.”

At McCann, Ilon Specht was working with L’Oréal, a French company that was trying to challenge Clairol’s dominance in the American hair-color market. L’Oréal had originally wanted to do a series of comparison spots, presenting research proving that their new product—Preference—was technologically superior to Nice ’n Easy because it delivered a more natural, translucent color. But at the last minute the campaign was killed because the research hadn’t been done in the United States. At McCann, there was panic. “We were four weeks before air date and we had nothing—nada,” Michael Sennott, a staffer who was also working on the account, says. The creative team locked itself away: Specht, Madris—who was the art director on the account—and a handful of others. “We were sitting in this big office,” Specht recalls. “And everyone was discussing what the ad should be. They wanted to do something with a woman sitting by a window, and the wind blowing through the curtains. You know, one of those fake places with big, glamorous curtains. The woman was a complete object. I don’t think she even spoke. They just didn’t get it. We were in there for hours.”

Ilon Specht has long, thick black hair, held in a loose knot at the top of her head, and lipstick the color of maraschino cherries. She talks fast and loud, and swivels in her chair as she speaks, and when people walk by her office they sometimes bang on her door, as if the best way to get her attention is to be as loud and emphatic as she is. Reminiscing not long ago about the seventies, she spoke about the strangeness of corporate clients in shiny suits who would say that all the women in the office looked like models. She spoke about what it meant to be young in a business dominated by older men, and about what it felt like to write a line of copy that used the word woman and have someone cross it out and write girl.

“I was a twenty-three-year-old girl—a woman,” she said. “What would my state of mind have been? I could just see that they had this traditional view of women, and my feeling was that I’m not writing an ad about looking good for men, which is what it seems to me that they were doing. I just thought, Fuck you. I sat down and did it, in five minutes. It was very personal. I can recite to you the whole commercial, because I was so angry when I wrote it.”

Specht sat stock still and lowered her voice: “I use the most expensive hair color in the world. Preference, by L’Oréal. It’s not that I care about money. It’s that I care about my hair. It’s not just the color. I expect great color. What’s worth more to me is the way my hair feels. Smooth and silky but with body. It feels good against my neck. Actually, I don’t mind spending more for L’Oréal. Because I’m”—and here Specht took her fist and struck her chest—“worth it.”

The power of the commercial was originally thought to lie in its subtle justification of the fact that Preference cost ten cents more than Nice ’n Easy. But it quickly became obvious that the last line was the one that counted. On the strength of “Because I’m worth it,” Preference began stealing market share from Clairol. In the 1980s, Preference surpassed Nice ’n Easy as the leading hair-color brand in the country, and in 1997 L’Oréal took the phrase and made it the slogan for the whole company. An astonishing 71 percent of American women can now identify that phrase as the L’Oréal signature, which, for a slogan—as opposed to a brand name—is almost without precedent.

4.

From the very beginning, the Preference campaign was unusual. Polykoff’s Clairol spots had male voice-overs. In the L’Oréal ads, the model herself spoke, directly and personally. Polykoff’s commercials were “other-directed”—they were about what the group was saying (“Does she or doesn’t she?”) or what a husband might think (“The closer he gets, the better you look”). Specht’s line was what a woman says to herself. Even in the choice of models, the two campaigns diverged. Polykoff wanted fresh, girl-next-door types. McCann and L’Oréal wanted models who somehow embodied the complicated mixture of strength and vulnerability implied by “Because I’m worth it.” In the late seventies, Meredith Baxter Birney was the brand spokeswoman. At that time, she was playing a recently divorced mom going to law school on the TV drama Family. McCann scheduled her spots during Dallas and other shows featuring so-called silk blouse women—women of strength and independence. Then came Cybill Shepherd, at the height of her run as the brash, independent Maddie on Moonlighting, in the eighties. She, in turn, was followed by Heather Locklear, the tough and sexy star of the 1990s hit Melrose Place. All the L’Oréal spokeswomen are blondes, but blondes of a particular type. In his brilliant 1995 book, Big Hair: A Journey into the Transformation of Self, the Canadian anthropologist Grant McCracken argued for something he calls the “blondness periodic table,” in which blondes are divided into six categories: the bombshell blonde (Mae West, Marilyn Monroe), the sunny blonde (Doris Day, Goldie Hawn), the brassy blonde (Candice Bergen), the dangerous blonde (Sharon Stone), the society blonde (C. Z. Guest), and the cool blonde (Marlene Dietrich, Grace Kelly). L’Oréal’s innovation was to carve out a niche for itself in between the sunny blondes—the “simple, mild, and innocent” blondes—and the smart, bold, brassy blondes, who, in McCracken’s words, “do not mediate their feelings or modulate their voices.”

This is not an easy sensibility to capture. Countless actresses have auditioned for L’Oréal over the years and been turned down. “There was one casting we did with Brigitte Bardot,” Ira Madris recalls (this was for another L’Oréal product), “and Brigitte, being who she is, had the damnedest time saying that line. There was something inside of her that didn’t believe it. It didn’t have any conviction.” Of course it didn’t: Bardot is bombshell, not sassy. Clairol made a run at the Preference sensibility for itself, hiring Linda Evans in the eighties as the pitchwoman for Ultress, the brand aimed at Preference’s upscale positioning. This didn’t work, either. Evans, who played the adoring wife of Blake Carrington on Dynasty, was too sunny. (“The hardest thing she did on that show,” Michael Sennott says, perhaps a bit unfairly, “was rearrange the flowers.”)

Even if you got the blonde right, though, there was still the matter of the slogan. For a Miss Clairol campaign in the seventies, Polykoff wrote a series of spots with the tag line “This I do for me.” But “This I do for me” was at best a halfhearted approximation of “Because I’m worth it”—particularly for a brand that had spent its first twenty years saying something entirely different. “My mother thought there was something too brazen about ‘I’m worth it,’” Frick told me. “She was always concerned with what people around her might think. She could never have come out with that bald-faced an equation between hair color and self-esteem.”

The truth is that Polykoff’s sensibility—which found freedom in assimilation—had been overtaken by events. In one of Polykoff’s “Is it true blondes have more fun?” commercials for Lady Clairol in the sixties, for example, there is a moment that by 1973 must have been painful to watch. A young woman, radiantly blond, is by a lake, being swung around in the air by a darkly handsome young man. His arms are around her waist. Her arms are around his neck, her shoes off, her face aglow. The voice-over is male, deep and sonorous. “Chances are,” the voice says, “she’d have gotten the young man anyhow, but you’ll never convince her of that.” Here was the downside to Shirley Polykoff’s world. You could get what you wanted by faking it, but then you would never know whether it was you or the bit of fakery that made the difference. You ran the risk of losing sight of who you really were. Shirley Polykoff knew that the all-American life was worth it, and that “he”—the handsome man by the lake, or the reluctant boyfriend who finally whisks you off to Bermuda—was worth it. But, by the end of the sixties, women wanted to know that they were worth it, too.

5.

Why are Shirley Polykoff and Ilon Specht important? That seems like a question that can easily be answered in the details of their campaigns. They were brilliant copywriters, who managed in the space of a phrase to capture the particular feminist sensibilities of the day. They are an example of a strange moment in American social history when hair dye somehow got tangled up in the politics of assimilation and feminism and self-esteem. But in a certain way their stories are about much more: they are about the relationship we have to the products we buy, and about the slow realization among advertisers that unless they understood the psychological particulars of that relationship—unless they could dignify the transactions of everyday life by granting them meaning—they could not hope to reach the modern consumer. Shirley Polykoff and Ilon Specht perfected a certain genre of advertising that did just this, and one way to understand the Madison Avenue revolution of the postwar era is as a collective attempt to define and extend that genre. The revolution was led by a handful of social scientists, chief among whom was an elegant, Viennese-trained psychologist by the name of Herta Herzog. What did Herta Herzog know? She knew—or, at least, she thought she knew—the theory behind the success of slogans like “Does she or doesn’t she?” and “Because I’m worth it,” and that makes Herta Herzog, in the end, every bit as important as Shirley Polykoff and Ilon Specht.

Herzog worked at a small advertising agency called Jack Tinker & Partners, and people who were in the business in those days speak of Tinker the way baseball fans talk about the 1927 Yankees. Tinker was the brainchild of the legendary adman Marion Harper, who came to believe that the agency he was running, McCann-Erickson, was too big and unwieldy to be able to consider things properly. His solution was to pluck a handful of the very best and brightest from McCann and set them up, first in the Waldorf Towers (in the suite directly below the Duke and Duchess of Windsor’s and directly above General Douglas MacArthur’s) and then, more permanently, in the Dorset Hotel, on West Fifty-fourth Street, overlooking the Museum of Modern Art. The Tinker Group rented the penthouse, complete with a huge terrace, Venetian-tiled floors, a double-height living room, an antique French polished-pewter bar, a marble fireplace, spectacular skyline views, and a rotating exhibit of modern art (hung by the partners for motivational purposes), with everything—walls, carpets, ceilings, furnishings—a bright, dazzling white. It was supposed to be a think tank, but Tinker was so successful so fast that clients were soon lined up outside the door. When Buick wanted a name for its new luxury coupe, the Tinker Group came up with Riviera. When Bulova wanted a name for its new quartz watch, Tinker suggested Accutron. Tinker also worked with Coca-Cola and Exxon and Westinghouse and countless others, whose names—according to the strict standards of secrecy observed by the group—they would not divulge. Tinker started with four partners and a single phone. But by the end of the sixties it had taken over eight floors of the Dorset.

What distinguished Tinker was its particular reliance on the methodology known as motivational research, which was brought to Madison Avenue in the 1940s by a cadre of European intellectuals trained at the University of Vienna. Advertising research up until that point had been concerned with counting heads—with recording who was buying what. But the motivational researchers were concerned with why: Why do people buy what they do? What motivates them when they shop? The researchers devised surveys, with hundreds of questions, based on Freudian dynamic psychology. They used hypnosis, the Rosenzweig Picture-Frustration Study, role-playing, and Rorschach blots, and they invented what we now call the focus group. There was Paul Lazarsfeld, one of the giants of twentieth-century sociology, who devised something called the Lazarsfeld-Stanton Program Analyzer, a little device with buttons to record precisely the emotional responses of research subjects. There was Hans Zeisel, who had been a patient of Alfred Adler’s in Vienna and went to work at McCann-Erickson. There was Ernest Dichter, who had studied under Lazarsfeld at the Psychological Institute in Vienna and who did consulting for hundreds of the major corporations of the day. And there was Tinker’s Herta Herzog, perhaps the most accomplished motivational researcher of all, who trained dozens of interviewers in the Viennese method and sent them out to analyze the psyche of the American consumer.

“For Puerto Rican rum once, Herta wanted to do a study of why people drink, to tap into that below-the-surface kind of thing,” Rena Bartos, a former advertising executive who worked with Herta in the early days, recalls. “We would invite someone out to drink and they would order whatever they normally order, and we would administer a psychological test. Then we’d do it again at the very end of the discussion, after the drinks. The point was to see how people’s personality was altered under the influence of alcohol.” Herzog helped choose the name of Oasis cigarettes, because her psychological research suggested that the name—with its connotations of cool, bubbling springs—would have the greatest appeal to the orally fixated smoker.

“Herta was graceful and gentle and articulate,” Herbert Krugman, who worked closely with Herzog in those years, says. “She had enormous insights. Alka-Seltzer was a client of ours, and they were discussing new approaches for the next commercial. She said, ‘You show a hand dropping an Alka-Seltzer tablet into a glass of water. Why not show the hand dropping two? You’ll double sales.’ And that’s just what happened. Herta was the gray eminence. Everybody worshipped her.”

After retiring from Tinker, Herzog moved back to Europe, first to Germany and then to Austria, her homeland. She wrote an analysis of the TV show Dallas for the academic journal Society. She taught college courses on communications theory. She conducted a study on the Holocaust for the Vidal Sassoon Center for the Study of Anti-Semitism, in Jerusalem. Today, she lives in the mountain village of Leutasch, half an hour’s hard drive up into the Alps from Innsbruck, in a white picture-book cottage with a sharply pitched roof. She is a small woman, slender and composed, her once dark hair now streaked with gray. She speaks in short, clipped, precise sentences, in flawless, though heavily accented, English. If you put her in a room with Shirley Polykoff and Ilon Specht, the two of them would talk and talk and wave their long, bejeweled fingers in the air, and she would sit unobtrusively in the corner and listen. “Marion Harper hired me to do qualitative research—the qualitative interview, which was the specialty that had been developed in Vienna at the Österreichische Wirtschaftspsychologische Forschungsstelle,” Herzog told me. “It was interviewing not with direct questions and answers but where you open some subject of the discussion relevant to the topic and then let it go. You have the interviewer not talk but simply help the person with little questions like ‘And anything else?’ As an interviewer, you are not supposed to influence me. You are merely trying to help me. It was a lot like the psychoanalytic method.” Herzog was sitting, ramrod straight, in a chair in her living room. She was wearing a pair of black slacks and a heavy brown sweater to protect her against the Alpine chill. Behind her was row upon row of bookshelves, filled with the books of a postwar literary and intellectual life: Mailer in German, Riesman in English. Open and facedown on a long couch perpendicular to her chair was the latest issue of the psychoanalytic journal Psyche. “Later on, I added all kinds of psychological things to the process, such as word-association tests, or figure drawings with a story. Suppose you are my respondent and the subject is soap. I’ve already talked to you about soap. What you see in it. Why you buy it. What you like about it. Dislike about it. Then at the end of the interview I say, ‘Please draw me a figure—anything you want—and after the figure is drawn tell me a story about the figure.’”

When Herzog asked her subjects to draw a figure at the end of an interview, she was trying to extract some kind of narrative from them, something that would shed light on their unstated desires. She was conducting, as she says, a psychoanalytic session. But she wouldn’t ask about hair-color products in order to find out about you, the way a psychoanalyst might; she would ask about you in order to learn about hair-color products. She saw that the psychoanalytic interview could go both ways. You could use the techniques of healing to figure out the secrets of selling. “Does she or doesn’t she?” and “Because I’m worth it” did the same thing: they not only carried a powerful and redemptive message, but—and this was their real triumph—they succeeded in attaching that message to a five-dollar bottle of hair dye. The lasting contribution of motivational research to Madison Avenue was to prove that you could do this for just about anything—that the products and the commercial messages with which we surround ourselves are as much a part of the psychological furniture of our lives as the relationships and emotions and experiences that are normally the subject of psychoanalytic inquiry.

“There is one thing we did at Tinker that I remember well,” Herzog told me, returning to the theme of one of her, and Tinker’s, coups. “I found out that people were using Alka-Seltzer for stomach upset, but also for headaches,” Herzog said. “We learned that the stomach ache was the kind of ache where many people tended to say ‘It was my fault.’ Alka-Seltzer had been mostly advertised in those days as a cure for overeating, and overeating is something you have done. But the headache is quite different. It is something imposed on you.” This was, to Herzog, the classic psychological insight. It revealed Alka-Seltzer users to be divided into two apparently incompatible camps—the culprit and the victim—and it suggested that the company had been wooing one at the expense of the other. More important, it suggested that advertisers, with the right choice of words, could resolve that psychological dilemma with one or, better yet, two little white tablets. Herzog allowed herself a small smile. “So I said the nice thing would be if you could find something that combines these two elements. The copywriter came up with ‘the blahs.’” Herzog repeated the phrase, the blahs, because it was so beautiful. “The blahs was not one thing or the other—it was not the stomach or the head. It was both.”

6.

This notion of household products as psychological furniture is, when you think about it, a radical idea. When we give an account of how we got to where we are, we’re inclined to credit the philosophical over the physical, and the products of art over the products of commerce. In the list of sixties social heroes, there are musicians and poets and civil-rights activists and sports figures. Herzog’s implication is that such a high-minded list is incomplete. What, say, of Vidal Sassoon? In the same period, he gave the world the Shape, the Acute Angle, and the One-Eyed Ungaro. In the old “cosmology of cosmetology,” McCracken writes, “the client counted only as a plinth… the conveyor of the cut.” But Sassoon made individualization the hallmark of the haircut, liberating women’s hair from the hair styles of the times—from, as McCracken puts it, those “preposterous bits of rococo shrubbery that took their substance from permanents, their form from rollers, and their rigidity from hair spray.” In the Herzogian world view, the reasons we might give to dismiss Sassoon’s revolution—that all he was dispensing was a haircut, that it took just half an hour, that it affects only the way you look, that you will need another like it in a month—are the very reasons that Sassoon is important. If a revolution is not accessible, tangible, and replicable, how on earth can it be a revolution?

“Because I’m worth it” and “Does she or doesn’t she?” were powerful, then, precisely because they were commercials, for commercials come with products attached, and products offer something that songs and poems and political movements and radical ideologies do not, which is an immediate and affordable means of transformation. “We discovered in the first few years of the ‘Because I’m worth it’ campaign that we were getting more than our fair share of new users to the category—women who were just beginning to color their hair,” Sennott told me. “And within that group we were getting those undergoing life changes, which usually meant divorce. We had far more women who were getting divorced than Clairol had. Their children had grown, and something had happened, and they were reinventing themselves.” They felt different, and Ilon Specht gave them the means to look different—and do we really know which came first, or even how to separate the two? They changed their lives and their hair. But it wasn’t one thing or the other. It was both.

7.

In the mid-nineties, the spokesperson for Clairol’s Nice ’n Easy was Julia Louis-Dreyfus, better known as Elaine from Seinfeld. In the Clairol tradition, she is the girl next door—a postmodern Doris Day. But the spots themselves could not be less like the original Polykoff campaigns for Miss Clairol. In the best of them, Louis-Dreyfus says to the dark-haired woman in front of her on a city bus, “You know, you’d look great as a blonde.” Louis-Dreyfus then shampoos in Nice ’n Easy Shade 104 right then and there, to the gasps and cheers of the other passengers. It is Shirley Polykoff turned upside down: funny, not serious; public, not covert.

L’Oréal, too, has changed. Meredith Baxter Birney said “Because I’m worth it” with an earnestness appropriate to the line. By the time Cybill Shepherd became the brand spokeswoman, in the eighties, it was almost flip—a nod to the materialism of the times—and today, with Heather Locklear, the spots have a lush, indulgent feel. “New Preference by L’Oréal,” she says in one of the current commercials. “Pass it on. You’re worth it.” The “because”—which gave Ilon Specht’s original punch line such emphasis—is gone. The forceful I’m has been replaced by you’re. The Clairol and L’Oréal campaigns have converged. According to the Spectra marketing firm, there are almost exactly as many Preference users as Nice ’n Easy users who earn between fifty thousand and seventy-five thousand dollars a year, listen to religious radio, rent their apartments, watch the Weather Channel, bought more than six books last year, are fans of professional football, and belong to a union.

But it is a tribute to Ilon Specht and Shirley Polykoff’s legacy that there is still a real difference between the two brands. It’s not that there are Clairol women or L’Oréal women. It’s something a little subtler. As Herzog knew, all of us, when it comes to constructing our sense of self, borrow bits and pieces, ideas and phrases, rituals and products from the world around us—over-the-counter ethnicities that shape, in some small but meaningful way, our identities. Our religion matters, the music we listen to matters, the clothes we wear matter, the food we eat matters—and our brand of hair dye matters, too. Carol Hamilton, L’Oréal’s vice president of marketing, says she can walk into a hair-color focus group and instantly distinguish the Clairol users from the L’Oréal users. “The L’Oréal user always exhibits a greater air of confidence, and she usually looks better—not just her hair color, but she always has spent a little more time putting on her makeup, styling her hair,” Hamilton told me. “Her clothing is a little bit more fashion-forward. Absolutely, I can tell the difference.” Jeanne Matson, Hamilton’s counterpart at Clairol, says she can do the same thing. “Oh, yes,” Matson told me. “There’s no doubt. The Clairol woman would represent more the American-beauty icon, more naturalness. But it’s more of a beauty for me, as opposed to a beauty for the external world. L’Oréal users tend to be a bit more aloof. There is a certain warmth you see in the Clairol people. They interact with each other more. They’ll say, ‘I use Shade 101.’ And someone else will say, ‘Ah, I do, too!’ There is this big exchange.”

These are not exactly the brand personalities laid down by Polykoff and Specht, because this is 1999, and not 1956 or 1973. The complexities of Polykoff’s artifice have been muted. Specht’s anger has turned to glamour. We have been left with just a few bars of the original melody. But even that is enough to ensure that “Because I’m worth it” will never be confused with “Does she or doesn’t she?” Specht says, “It meant I know you don’t think I’m worth it, because that’s what it was with the guys in the room. They were going to take a woman and make her the object. I was defensive and defiant. I thought, I’ll fight you. Don’t you tell me what I am. You’ve been telling me what I am for generations.” As she said fight, she extended the middle finger of her right hand. Shirley Polykoff would never have given anyone the finger. She was too busy exulting in the possibilities for self-invention in her America—a land where a single woman could dye her hair and end up lying on a beach with a ring on her finger. At her retirement party, in 1973, Polykoff reminded the assembled executives of Clairol and of Foote, Cone & Belding about the avalanche of mail that arrived after their early campaigns: “Remember that letter from the girl who got to a Bermuda honeymoon by becoming a blonde?”

Everybody did.

“Well,” she said, with what we can only imagine was a certain sweet vindication, “I wrote it.”

March 22, 1999

John Rock’s Error

WHAT THE INVENTOR OF THE BIRTH CONTROL PILL DIDN’T KNOW ABOUT WOMEN’S HEALTH

1.

John Rock was christened in 1890 at the Church of the Immaculate Conception in Marlborough, Massachusetts, and married by Cardinal William O’Connell, of Boston. He had five children and nineteen grandchildren. A crucifix hung above his desk, and nearly every day of his adult life he attended the 7 a.m. Mass at St. Mary’s in Brookline. Rock, his friends would say, was in love with his church. He was also one of the inventors of the birth-control pill, and it was his conviction that his faith and his vocation were perfectly compatible. To anyone who disagreed he would simply repeat the words spoken to him as a child by his hometown priest: “John, always stick to your conscience. Never let anyone else keep it for you. And I mean anyone else.” Even when Monsignor Francis W. Carney, of Cleveland, called him a “moral rapist,” and when Frederick Good, the longtime head of obstetrics at Boston City Hospital, went to Boston’s Cardinal Richard Cushing to have Rock excommunicated, Rock was unmoved. “You should be afraid to meet your Maker,” one angry woman wrote to him, soon after the Pill was approved. “My dear madam,” Rock wrote back, “in my faith, we are taught that the Lord is with us always. When my time comes, there will be no need for introductions.”

In the years immediately after the Pill was approved by the FDA, in 1960, Rock was everywhere. He appeared in interviews and documentaries on CBS and NBC, in Time, Newsweek, Life, The Saturday Evening Post. He toured the country tirelessly. He wrote a widely discussed book, The Time Has Come: A Catholic Doctor’s Proposals to End the Battle over Birth Control, which was translated into French, German, and Dutch. Rock was six feet three and rail-thin, with impeccable manners; he held doors open for his patients and addressed them as “Mrs.” or “Miss.” His mere association with the Pill helped make it seem respectable. “He was a man of great dignity,” Dr. Sheldon J. Segal, of the Population Council, recalls. “Even if the occasion called for an open collar, you’d never find him without an ascot. He had the shock of white hair to go along with that. And posture, straight as an arrow, even to his last year.” At Harvard Medical School, he was a giant, teaching obstetrics for more than three decades. He was a pioneer in in-vitro fertilization and the freezing of sperm cells, and was the first to extract an intact fertilized egg. The Pill was his crowning achievement. His two collaborators, Gregory Pincus and Min-Chueh Chang, worked out the mechanism. He shepherded the drug through its clinical trials. “It was his name and his reputation that gave ultimate validity to the claims that the pill would protect women against unwanted pregnancy,” Loretta McLaughlin writes in her marvelous 1982 biography of Rock. Not long before the Pill’s approval, Rock traveled to Washington to testify before the FDA about the drug’s safety. The agency examiner, Pasquale DeFelice, was a Catholic obstetrician from Georgetown University, and at one point, the story goes, DeFelice suggested the unthinkable—that the Catholic Church would never approve of the birth-control pill. “I can still see Rock standing there, his face composed, his eyes riveted on DeFelice,” a colleague recalled years later, “and then, in a voice that would congeal your soul, he said, ‘Young man, don’t you sell my church short.’”

In the end, of course, John Rock’s church disappointed him. In 1968, in the encyclical “Humanae Vitae,” Pope Paul VI outlawed oral contraceptives and all other “artificial” methods of birth control. The passion and urgency that animated the birth-control debates of the sixties are now a memory. John Rock still matters, though, for the simple reason that in the course of reconciling his church and his work he made an error. It was not a deliberate error. It became manifest only after his death, and through scientific advances he could not have anticipated. But because that mistake shaped the way he thought about the Pill—about what it was, and how it worked, and most of all what it meant—and because John Rock was one of those responsible for the way the Pill came into the world, his error has colored the way people have thought about contraception ever since.

John Rock believed that the Pill was a “natural” method of birth control. By that, he didn’t mean that it felt natural, because it obviously didn’t for many women, particularly not in its earliest days, when the doses of hormone were many times as high as they are today. He meant that it worked by natural means. Women can get pregnant only during a certain interval each month, because after ovulation their bodies produce a surge of the hormone progesterone. Progesterone—one of a class of hormones known as progestins—prepares the uterus for implantation and stops the ovaries from releasing new eggs; it favors gestation. “It is progesterone, in the healthy woman, that prevents ovulation and establishes the pre- and postmenstrual ‘safe’ period,” Rock wrote. When a woman is pregnant, her body produces a stream of progestin in part for the same reason, so that another egg can’t be released and threaten the pregnancy already under way. Progestin, in other words, is nature’s contraceptive. And what was the Pill? Progestin in tablet form. When a woman was on the Pill, of course, these hormones weren’t coming in a sudden surge after ovulation and weren’t limited to certain times in her cycle. They were being given in a steady dose, so that ovulation was permanently shut down. They were also being given with an additional dose of estrogen, which holds the endometrium together and—as we’ve come to learn—helps maintain other tissues as well. But to Rock, the timing and combination of hormones wasn’t the issue. The key fact was that the Pill’s ingredients duplicated what could be found in the body naturally. And in that naturalness he saw enormous theological significance.

In 1951, for example, Pope Pius XII had sanctioned the rhythm method for Catholics because he deemed it a “natural” method of regulating procreation: it didn’t kill the sperm, like a spermicide, or frustrate the normal process of procreation, like a diaphragm, or mutilate the organs, like sterilization. Rock knew all about the rhythm method. In the 1930s, at the Free Hospital for Women, in Brookline, Massachusetts, he had started the country’s first rhythm clinic for educating Catholic couples in natural contraception. But how did the rhythm method work? It worked by limiting sex to the safe period that progestin created. And how did the Pill work? It worked by using progestin to extend the safe period to the entire month. It didn’t mutilate the reproductive organs, or damage any natural process. “Indeed,” Rock wrote, oral contraceptives “may be characterized as a ‘pill-established safe period,’ and would seem to carry the same moral implications” as the rhythm method. The Pill was, to Rock, no more than “an adjunct to nature.”

In 1958, Pope Pius XII approved the Pill for Catholics, so long as its contraceptive effects were “indirect”—that is, so long as it was intended only as a remedy for conditions like painful menses or “a disease of the uterus.” That ruling emboldened Rock still further. Short-term use of the Pill, he knew, could regulate the cycle of women whose periods had previously been unpredictable. Since a regular menstrual cycle was necessary for the successful use of the rhythm method—and since the rhythm method was sanctioned by the Church—shouldn’t it be permissible for women with an irregular menstrual cycle to use the Pill in order to facilitate the use of rhythm? And if that was true, why not take the logic one step further? As the federal judge John T. Noonan writes in Contraception, his history of the Catholic position on birth control:

If it was lawful to suppress ovulation to achieve a regularity necessary for successfully sterile intercourse, why was it not lawful to suppress ovulation without appeal to rhythm? If pregnancy could be prevented by pill plus rhythm, why not by pill alone? In each case suppression of ovulation was used as a means. How was a moral difference made by the addition of rhythm?

These arguments, as arcane as they may seem, were central to the development of oral contraception. It was John Rock and Gregory Pincus who decided that the Pill ought to be taken over a four-week cycle—a woman would spend three weeks on the Pill and the fourth week off the drug (or on a placebo), to allow for menstruation. There was and is no medical reason for this. A typical woman of childbearing age has a menstrual cycle of around twenty-eight days, determined by the cascades of hormones released by her ovaries. As first estrogen and then a combination of estrogen and progestin flood the uterus, its lining becomes thick and swollen, preparing for the implantation of a fertilized egg. If the egg is not fertilized, hormone levels plunge and cause the lining—the endometrium—to be sloughed off in a menstrual bleed. When a woman is on the Pill, however, no egg is released, because the Pill suppresses ovulation. The fluxes of estrogen and progestin that cause the lining of the uterus to grow are dramatically reduced, because the Pill slows down the ovaries. Pincus and Rock knew that the effect of the Pill’s hormones on the endometrium was so modest that women could conceivably go for months without having to menstruate. “In view of the ability of this compound to prevent menstrual bleeding as long as it is taken,” Pincus acknowledged in 1958, “a cycle of any desired length could presumably be produced.” But he and Rock decided to cut the hormones off after three weeks and trigger a menstrual period because they believed that women would find the continuation of their monthly bleeding reassuring. More to the point, if Rock wanted to demonstrate that the Pill was no more than a natural variant of the rhythm method, he couldn’t very well do away with the monthly menses. Rhythm required “regularity,” and so the Pill had to produce regularity as well.

It has often been said of the Pill that no other drug has ever been so instantly recognizable by its packaging: that small, round plastic dial pack. But what was the dial pack if not the physical embodiment of the twenty-eight-day cycle? It was, in the words of its inventor, meant to fit into a case “indistinguishable” from a woman’s cosmetics compact, so that it might be carried “without giving a visual clue as to matters which are of no concern to others.” Today, the Pill is still often sold in dial packs and taken in twenty-eight-day cycles. It remains, in other words, a drug shaped by the dictates of the Catholic Church—by John Rock’s desire to make this new method of birth control seem as natural as possible. This was John Rock’s error. He was consumed by the idea of the natural. But what he thought was natural wasn’t so natural after all, and the Pill he ushered into the world turned out to be something other than what he thought it was. In John Rock’s mind the dictates of religion and the principles of science got mixed up, and only now are we beginning to untangle them.

2.

In 1986, a young scientist named Beverly Strassmann traveled to Africa to live with the Dogon tribe of Mali. Her research site was the village of Sangui in the Sahel, about 120 miles south of Timbuktu. The Sahel is thorn savannah, green in the rainy season and semi-arid the rest of the year. The Dogon grow millet, sorghum, and onions, raise livestock, and live in adobe houses on the Bandiagara escarpment. They use no contraception. Many of them have held on to their ancestral customs and religious beliefs. Dogon farmers, in many respects, live much as people of that region have lived since antiquity. Strassmann wanted to construct a precise reproductive profile of the women in the tribe, in order to understand what female biology might have been like in the millennia that preceded the modern age. In a way, Strassmann was trying to answer the same question about female biology that John Rock and the Catholic Church had struggled with in the early sixties: what is natural? Only, her sense of natural was not theological but evolutionary. In the era during which natural selection established the basic patterns of human biology—the natural history of our species—how often did women have children? How often did they menstruate? When did they reach puberty and menopause? What impact did breast-feeding have on ovulation? These questions had been studied before, but never so thoroughly that anthropologists felt they knew the answers with any certainty.

Strassmann, who teaches at the University of Michigan at Ann Arbor, is a slender, soft-spoken woman with red hair, and she recalls her time in Mali with a certain wry humor. The house she stayed in while in Sangui had been used as a shelter for sheep before she came and was turned into a pigsty after she left. A small brown snake lived in her latrine, and would curl up in a camouflaged coil on the seat she sat on while bathing. The villagers, she says, were of two minds: was it a deadly snake—Kere me jongolo, literally, “My bite cannot be healed”—or a harmless mouse snake? (It turned out to be the latter.) Once, one of her neighbors and best friends in the tribe roasted her a rat as a special treat. “I told him that white people aren’t allowed to eat rat because rat is our totem,” Strassmann says. “I can still see it. Bloated and charred. Stretched by its paws. Whiskers singed. To say nothing of the tail.” Strassmann meant to live in Sangui for eighteen months, but her experiences there were so profound and exhilarating that she stayed for two and a half years. “I felt incredibly privileged,” she says. “I just couldn’t tear myself away.”

Part of Strassmann’s work focused on the Dogon’s practice of segregating menstruating women in special huts on the fringes of the village. In Sangui, there were two menstrual huts—dark, cramped, one-room adobe structures, with boards for beds. Each accommodated three women, and when the rooms were full, latecomers were forced to stay outside on the rocks. “It’s not a place where people kick back and enjoy themselves,” Strassmann says. “It’s simply a nighttime hangout. They get there at dusk, and get up early in the morning and draw their water.” Strassmann took urine samples from the women using the hut, to confirm that they were menstruating. Then she made a list of all the women in the village, and for her entire time in Mali—736 consecutive nights—she kept track of everyone who visited the hut. Among the Dogon, she found, a woman on average has her first period at the age of sixteen and gives birth eight or nine times. From menarche, the onset of menstruation, to the age of twenty, she averages seven periods a year. Over the next decade and a half, from the age of twenty to the age of thirty-four, she spends so much time either pregnant or breast-feeding (which, among the Dogon, suppresses ovulation for an average of twenty months) that she averages only slightly more than one period per year. Then, from the age of thirty-five until menopause, at around fifty, as her fertility rapidly declines, she averages four menses a year. All told, Dogon women menstruate about a hundred times in their lives. (Those who survive early childhood typically live into their seventh or eighth decade.) By contrast, the average for contemporary Western women is somewhere between three hundred and fifty and four hundred times.
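
A quick check of that tally: the figure of about a hundred lifetime menses follows from the round numbers in the paragraph above. The short Python sketch below is ours, not Strassmann’s published calculation; the ages and rates are the article’s, and the exact phase boundaries are our reading of them.

```python
# Back-of-the-envelope tally of lifetime menses for a Dogon woman, using
# the figures quoted above: menarche at 16, about 7 menses a year until 20,
# just over 1 a year during the childbearing years (ages 20-34), and about
# 4 a year from 35 until menopause at around 50.
phases = [
    (16, 20, 7.0),   # menarche to age twenty
    (20, 35, 1.1),   # prime childbearing years: mostly pregnant or nursing
    (35, 50, 4.0),   # declining fertility up to menopause
]

total = sum((end - start) * per_year for start, end, per_year in phases)
print(total)  # 104.5 -- "about a hundred," versus 350-400 for Western women
```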

Strassmann’s office is in the basement of a converted stable next to the Natural History Museum on the University of Michigan campus. Behind her desk is a row of battered filing cabinets, and as she was talking, she turned and pulled out a series of yellowed charts. Each page listed, on the left, the first names and identification numbers of the Sangui women. Across the top was a time line, broken into thirty-day blocks. Every menses of every woman was marked with an X. In the village, Strassmann explained, there were two women who were sterile, and, because they couldn’t get pregnant, they were regulars at the menstrual hut. She flipped through the pages until she found them. “Look, she had twenty-nine menses over two years, and the other had twenty-three.” Next to each of their names was a solid line of x’s. “Here’s a woman approaching menopause,” Strassmann went on, running her finger down the page. “She’s cycling but is a little bit erratic. Here’s another woman of prime childbearing age. Two periods. Then pregnant. I never saw her again at the menstrual hut. This woman here didn’t go to the menstrual hut for twenty months after giving birth, because she was breast-feeding. Two periods. Got pregnant. Then she miscarried, had a few periods, then got pregnant again. This woman had three menses in the study period.” There weren’t a lot of x’s on Strassmann’s sheets. Most of the boxes were blank. She flipped back through her sheets to the two anomalous women who were menstruating every month. “If this were a menstrual chart of undergraduates here at the University of Michigan, all the rows would be like this.”

Strassmann does not claim that her statistics apply to every preindustrial society. But she believes—and other anthropological work backs her up—that the number of lifetime menses isn’t greatly affected by differences in diet or climate or method of subsistence (foraging versus agriculture, say). The more significant factors, Strassmann says, are things like the prevalence of wet-nursing or sterility. But overall she believes that the basic pattern of late menarche, many pregnancies, and long menstruation-free stretches caused by intensive breast-feeding was virtually universal up until the “demographic transition” of a hundred years ago from high to low fertility. In other words, what we think of as normal—frequent menses—is in evolutionary terms abnormal. “It’s a pity that gynecologists think that women have to menstruate every month,” Strassmann went on. “They just don’t understand the real biology of menstruation.”

To Strassmann and others in the field of evolutionary medicine, this shift from a hundred to four hundred lifetime menses is enormously significant. It means that women’s bodies are being subjected to changes and stresses that they were not necessarily designed by evolution to handle. In a brilliant and provocative book, Is Menstruation Obsolete?, Drs. Elsimar Coutinho and Sheldon J. Segal, two of the world’s most prominent contraceptive researchers, argue that this recent move to what they call “incessant ovulation” has become a serious problem for women’s health. It doesn’t mean that women are always better off the less they menstruate. There are times—particularly in the context of certain medical conditions—when women ought to be concerned if they aren’t menstruating: In obese women, a failure to menstruate can signal an increased risk of uterine cancer. In female athletes, a failure to menstruate can signal an increased risk of osteoporosis. But for most women, Coutinho and Segal say, incessant ovulation serves no purpose except to increase the occurrence of abdominal pain, mood shifts, migraines, endometriosis, fibroids, and anemia—the last of which, they point out, is “one of the most serious health problems in the world.”

Most serious of all is the greatly increased risk of some cancers. Cancer, after all, occurs because as cells divide and reproduce they sometimes make mistakes that cripple the cells’ defenses against runaway growth. That’s one of the reasons that our risk of cancer generally increases as we age: our cells have more time to make mistakes. But this also means that any change promoting cell division has the potential to increase cancer risk, and ovulation appears to be one of those changes. Whenever a woman ovulates, an egg literally bursts through the walls of her ovaries. To heal that puncture, the cells of the ovary wall have to divide and reproduce. Every time a woman gets pregnant and bears a child, her lifetime risk of ovarian cancer drops 10 percent. Why? Possibly because, between nine months of pregnancy and the suppression of ovulation associated with breast-feeding, she stops ovulating for twelve months—and saves her ovarian walls from twelve bouts of cell division. The argument is similar for endometrial cancer. When a woman is menstruating, the estrogen that flows through her uterus stimulates the growth of the uterine lining, causing a flurry of potentially dangerous cell division. Women who do not menstruate frequently spare the endometrium that risk. Ovarian and endometrial cancer are characteristically modern diseases, consequences, in part, of a century in which women have come to menstruate four hundred times in a lifetime.
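
The per-birth figure invites a rough extrapolation. If the 10 percent reduction compounds with each child (the compounding is our assumption; the article states only the per-birth number), then the eight or nine births of a typical Dogon woman would cut her lifetime ovarian-cancer risk by more than half. A minimal sketch, under that assumption:

```python
# Illustrative only: compounding the "drops 10 percent" per-birth reduction
# in lifetime ovarian-cancer risk. Multiplicative compounding is assumed here.
def relative_ovarian_risk(births, drop_per_birth=0.10):
    """Lifetime risk relative to a woman who never gives birth."""
    return (1 - drop_per_birth) ** births

print(round(relative_ovarian_risk(2), 2))  # 0.81 -- two births
print(round(relative_ovarian_risk(8), 2))  # 0.43 -- eight births, as among the Dogon
```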

In this sense, the Pill really does have a natural effect. By blocking the release of new eggs, the progestin in oral contraceptives reduces the rounds of ovarian cell division. Progestin also counters the surges of estrogen in the endometrium, restraining cell division there. A woman who takes the Pill for ten years cuts her ovarian-cancer risk by around 70 percent and her endometrial-cancer risk by around 60 percent. But here natural means something different from what Rock meant. He assumed that the Pill was natural because it was an unobtrusive variant of the body’s own processes. In fact, as more recent research suggests, the Pill is really only natural in so far as it’s radical—rescuing the ovaries and endometrium from modernity. That Rock insisted on a twenty-eight-day cycle for his pill is evidence of just how deep his misunderstanding was: the real promise of the Pill was not that it could preserve the menstrual rhythms of the twentieth century but that it could disrupt them.

Today, a growing movement of reproductive specialists has begun to campaign loudly against the standard twenty-eight-day pill regimen. The drug company Organon has come out with a new oral contraceptive, called Mircette, that cuts the seven-day placebo interval to two days. Patricia Sulak, a medical researcher at Texas A&M University, has shown that most women can probably stay on the Pill, straight through, for six to twelve weeks before they experience breakthrough bleeding or spotting. More recently, Sulak has documented precisely what the cost of the Pill’s monthly “off” week is. In a paper in the February issue of the journal Obstetrics and Gynecology, she and her colleagues reported something that will come as no surprise to most women on the Pill: during the placebo week, the number of users experiencing pelvic pain, bloating, and swelling more than triples, breast tenderness more than doubles, and headaches increase by almost 50 percent. In other words, some women on the Pill continue to experience the kinds of side effects associated with normal menstruation. Sulak’s paper is a short, dry, academic work, of the sort intended for a narrow professional audience. But it is impossible to read it without being struck by the consequences of John Rock’s desire to please his Church. In the past forty years, millions of women around the world have been given the Pill in such a way as to maximize their pain and suffering. And to what end? To pretend that the Pill was no more than a pharmaceutical version of the rhythm method?

3.

In 1980 and 1981, Malcolm Pike, a medical statistician at the University of Southern California, traveled to Japan for six months to study at the Atomic Bomb Casualty Commission. Pike wasn’t interested in the effects of the bomb. He wanted to examine the medical records that the commission had been painstakingly assembling on the survivors of Hiroshima and Nagasaki. He was investigating a question that would ultimately do as much to complicate our understanding of the Pill as Strassmann’s research would a decade later: why did Japanese women have breast-cancer rates one-sixth those of American women?

In the late forties, the World Health Organization began to collect and publish comparative health statistics from around the world, and the breast-cancer disparity between Japan and America had come to obsess cancer specialists. The obvious answer—that Japanese women were somehow genetically protected against breast cancer—didn’t make sense, because once Japanese women moved to the United States they began to get breast cancer almost as often as American women did. As a result, many experts at the time assumed that the culprit had to be some unknown toxic chemical or virus unique to the West. Brian Henderson, a colleague of Pike’s at USC and his regular collaborator, says that when he entered the field in 1970, “the whole viral- and chemical-carcinogenesis idea was huge—it dominated the literature.” As he recalls, “Breast cancer fell into this large, unknown box that said it was something to do with the environment—and that word environment meant a lot of different things to a lot of different people. They might be talking about diet or smoking or pesticides.”

Henderson and Pike, however, became fascinated by a number of statistical peculiarities. For one thing, the rate of increase in breast-cancer risk rises sharply throughout women’s thirties and forties and then, at menopause, it starts to slow down. If a cancer is caused by some toxic outside agent, you’d expect that rate to rise steadily with each advancing year, as the number of mutations and genetic mistakes steadily accumulates. Breast cancer, by contrast, looked as if it were being driven by something specific to a woman’s reproductive years. What was more, younger women who had had their ovaries removed had a markedly lower risk of breast cancer; when their bodies weren’t producing estrogen and progestin every month, they got far fewer tumors. Pike and Henderson became convinced that breast cancer was linked to a process of cell division similar to that of ovarian and endometrial cancer. The female breast, after all, is just as sensitive to the level of hormones in a woman’s body as the reproductive system. When the breast is exposed to estrogen, the cells of the terminal-duct lobular unit—where most breast cancer arises—undergo a flurry of division. And during the mid-to-late stage of the menstrual cycle, when the ovaries start producing large amounts of progestin, the pace of cell division in that region doubles.

It made intuitive sense, then, that a woman’s risk of breast cancer would be linked to the amount of estrogen and progestin her breasts have been exposed to during her lifetime. How old a woman is at menarche should make a big difference, because the beginning of puberty results in a hormonal surge through a woman’s body, and the breast cells of an adolescent appear to be highly susceptible to the errors that result in cancer. (For more complicated reasons, bearing children turns out to be protective against breast cancer, perhaps because in the last two trimesters of pregnancy the cells of the breast mature and become much more resistant to mutations.) How old a woman is at menopause should matter, and so should how much estrogen and progestin her ovaries actually produce, and even how much she weighs after menopause, because fat cells turn other hormones into estrogen.

Pike went to Hiroshima to test the cell-division theory. With other researchers at the medical archive, he looked first at the age when Japanese women got their period. A Japanese woman born at the turn of the century had her first period at sixteen and a half. American women born at the same time had their first period at fourteen. That difference alone, by their calculation, was sufficient to explain 40 percent of the gap between American and Japanese breast-cancer rates. “They had collected amazing records from the women of that area,” Pike said. “You could follow precisely the change in age of menarche over the century. You could even see the effects of the Second World War. The age of menarche of Japanese girls went up right at that point because of poor nutrition and other hardships. And then it started to go back down after the war. That’s what convinced me that the data were wonderful.”

Pike, Henderson, and their colleagues then folded in the other risk factors. Age at menopause, age at first pregnancy, and number of children weren’t sufficiently different between the two countries to matter. But weight was. The average post-menopausal Japanese woman weighed a hundred pounds; the average American woman weighed a hundred and forty-five pounds. That fact explained another 25 percent of the difference. Finally, the researchers analyzed blood samples from women in rural Japan and China, and found that their ovaries—possibly because of their extremely low-fat diet—were producing about 75 percent of the amount of estrogen that American women were producing. Those three factors, added together, seemed to explain the breast-cancer gap. They also appeared to explain why the rates of breast cancer among Asian women began to increase when they came to America: on an American diet, they started to menstruate earlier, gained more weight, and produced more estrogen. The talk of chemicals and toxins and power lines and smog was set aside. “When people say that what we understand about breast cancer explains only a small amount of the problem, that it is somehow a mystery, it’s absolute nonsense,” Pike says flatly. He is a South African in his sixties, with graying hair and a salt-and-pepper beard. Along with Henderson, he is an eminent figure in cancer research, but no one would ever accuse him of being tentative in his pronouncements. “We understand breast cancer extraordinarily well. We understand it as well as we understand cigarettes and lung cancer.”

What Pike discovered in Japan led him to think about the Pill, because a tablet that suppressed ovulation—and the monthly tides of estrogen and progestin that come with it—obviously had the potential to be a powerful anti-breast-cancer drug. But the breast was a little different from the reproductive organs. Progestin prevented ovarian cancer because it suppressed ovulation. It was good for preventing endometrial cancer because it countered the stimulating effects of estrogen. But in breast cells, Pike believed, progestin wasn’t the solution; it was one of the hormones that caused cell division. This is one explanation for why, after years of studying the Pill, researchers have concluded that it has no effect one way or the other on breast cancer: whatever beneficial effect results from what the Pill does is canceled out by how it does it. John Rock touted the fact that the Pill used progestin, because progestin was the body’s own contraceptive. But Pike saw nothing “natural” about subjecting the breast to that heavy a dose of progestin. In his view, the amount of progestin and estrogen needed to make an effective contraceptive was much greater than the amount needed to keep the reproductive system healthy—and that excess was unnecessarily raising the risk of breast cancer. A truly natural Pill might be one that found a way to suppress ovulation without using progestin. Throughout the 1980s, Pike recalls, this was his obsession. “We were all trying to work out how the hell we could fix the Pill. We thought about it day and night.”

4.

Pike’s proposed solution is a class of drugs known as GnRHAs, which has been around for many years. GnRHAs disrupt the signals that the pituitary gland sends when it is attempting to order the manufacture of sex hormones. It’s a circuit breaker. “We’ve got substantial experience with this drug,” Pike says. Men suffering from prostate cancer are sometimes given a GnRHA to temporarily halt the production of testosterone, which can exacerbate their tumors. Girls suffering from what’s called precocious puberty—puberty at seven or eight, or even younger—are sometimes given the drug to forestall sexual maturity. If you give GnRHA to women of childbearing age, it stops their ovaries from producing estrogen and progestin. If the conventional Pill works by convincing the body that it is, well, a little bit pregnant, Pike’s pill would work by convincing the body that it was menopausal.

In the form Pike wants to use it, GnRHA will come in a clear glass bottle the size of a saltshaker, with a white plastic mister on top. It will be inhaled nasally. It breaks down in the body very quickly. A morning dose simply makes a woman menopausal for a while. Menopause, of course, has its risks. Women need estrogen to keep their hearts and bones strong. They also need progestin to keep the uterus healthy. So Pike intends to add back just enough of each hormone to solve these problems, but much less than women now receive on the Pill. Ideally, Pike says, the estrogen dose would be adjustable: women would try various levels until they found one that suited them. The progestin would come in four twelve-day stretches a year. When someone on Pike’s regimen stopped the progestin, she would have one of four annual menses.

Pike and an oncologist named Darcy Spicer have joined forces with another oncologist, John Daniels, in a startup called Balance Pharmaceuticals. The firm operates out of a small white industrial strip mall next to the freeway in Santa Monica. One of the tenants is a paint store, another looks like some sort of export company. Balance’s offices are housed in an oversized garage with a big overhead door and concrete floors. There is a tiny reception area, a little coffee table and a couch, and a warren of desks, bookshelves, filing cabinets, and computers. Balance is testing its formulation on a small group of women at high risk for breast cancer, and if the results continue to be encouraging, it will one day file for FDA approval.

“When I met Darcy Spicer a couple of years ago,” Pike said recently, as he sat at a conference table deep in the Balance garage, “he said, ‘Why don’t we just try it out? By taking mammograms, we should be able to see changes in the breasts of women on this drug, even if we add back a little estrogen to avoid side effects.’ So we did a study, and we found that there were huge changes.” Pike pulled out a paper he and Spicer had published in the Journal of the National Cancer Institute, showing breast X-rays of three young women. “These are the mammograms of the women before they start,” he said. Amid the grainy black outlines of the breast were large white fibrous clumps—clumps that Pike and Spicer believe are indicators of the kind of relentless cell division that increases breast-cancer risk. Next to those X-rays were three mammograms of the same women taken after a year on the GnRHA regimen. The clumps were almost entirely gone. “This to us represents that we have actually stopped the activity inside the breasts,” Pike went on. “White is a proxy for cell proliferation. We’re slowing down the breast.”

Pike stood up from the table and turned to a sketch pad on an easel behind him. He quickly wrote a series of numbers on the paper. “Suppose a woman reaches menarche at fifteen and menopause at fifty. That’s thirty-five years of stimulating the breast. If you cut that time in half, you will change her risk not by half but by half raised to the power of 4.5.” He was working with a statistical model he had developed to calculate breast-cancer risk. “That’s one-twenty-third. Your risk of breast cancer will be one-twenty-third of what it would be otherwise. It won’t be zero. You can’t get to zero. If you use this for ten years, your risk will be cut by at least half. If you use it for five years, your risk will be cut by at least a third. It’s as if your breast were to be five years younger, or ten years younger—forever.” The regimen, he says, should also provide protection against ovarian cancer.
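
Pike’s arithmetic holds up under the power law he describes. The sketch below assumes only what the passage states, namely that risk scales with years of breast stimulation raised to the power of 4.5 against a thirty-five-year baseline; the function name and form are ours.

```python
# Pike's power-law model as described above: breast-cancer risk proportional
# to (years of breast stimulation) ** 4.5, with a 35-year baseline
# (menarche at fifteen, menopause at fifty).
def relative_breast_risk(years, baseline=35.0, exponent=4.5):
    """Risk relative to a woman whose breasts are stimulated for `baseline` years."""
    return (years / baseline) ** exponent

print(round(relative_breast_risk(17.5), 3))  # 0.044 -- "one-twenty-third"
print(round(relative_breast_risk(25.0), 2))  # 0.22 -- ten years on the regimen
print(round(relative_breast_risk(30.0), 2))  # ~0.50 -- five years on the regimen
```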

Pike gave the sense that he had made this little speech many times before, to colleagues, to his family and friends—and to investors. He knew by now how strange and unbelievable what he was saying sounded. Here he was, in a cold, cramped garage in the industrial section of Santa Monica, arguing that he knew how to save the lives of hundreds of thousands of women around the world. And he wanted to do that by making young women menopausal through a chemical regimen sniffed every morning out of a bottle. This was, to say the least, a bold idea. Could he strike the right balance between the hormone levels women need to stay healthy and those that ultimately make them sick? Was progestin really so important in breast cancer? There are cancer specialists who remain skeptical. And, most of all, what would women think? John Rock, at least, had lent the cause of birth control his Old World manners and distinguished white hair and appeals from theology; he took pains to make the Pill seem like the least radical of interventions—nature’s contraceptive, something that could be slipped inside a woman’s purse and pass without notice. Pike was going to take the whole forty-year mythology of natural and sweep it aside. “Women are going to think, I’m being manipulated here. And it’s a perfectly reasonable thing to think.” Pike’s South African accent gets a little stronger as he becomes more animated. “But the modern way of living represents an extraordinary change in female biology. Women are going out and becoming lawyers, doctors, presidents of countries. They need to understand that what we are trying to do isn’t abnormal. It’s just as normal as when someone hundreds of years ago had menarche at seventeen and had five babies and had three hundred fewer menstrual cycles than most women have today. The world is not the world it was. And some of the risks that go with the benefits of a woman getting educated and not getting pregnant all the time are breast cancer and ovarian cancer, and we need to deal with it. I have three daughters. The earliest grandchild I had was when one of them was thirty-one. That’s the way many women are now. They ovulate from twelve or thirteen until their early thirties. Twenty years of uninterrupted ovulation before their first child! That’s a brand-new phenomenon!”

5.

John Rock’s long battle on behalf of his birth-control pill forced the Church to take notice. In the spring of 1963, just after Rock’s book was published, a meeting was held at the Vatican between high officials of the Catholic Church and Donald B. Straus, the chairman of Planned Parenthood. That summit was followed by another, on the campus of the University of Notre Dame. In the summer of 1964, on the eve of the feast of St. John the Baptist, Pope Paul VI announced that he would ask a committee of Church officials to reexamine the Vatican’s position on contraception. The group met first at the Collegio San Jose, in Rome, and it was clear that a majority of the committee were in favor of approving the Pill. Committee reports leaked to the National Catholic Reporter suggested that Rock’s side was winning. Rock was elated. Newsweek put him on its cover, and ran a picture of the Pope inside. “Not since the Copernicans suggested in the sixteenth century that the sun was the center of the planetary system has the Roman Catholic Church found itself on such a perilous collision course with a new body of knowledge,” the article concluded. Paul VI, however, was unmoved. He stalled, delaying a verdict for months, and then years. Some said he fell under the sway of conservative elements within the Vatican. In the interim, theologians began exposing the holes in Rock’s arguments. The rhythm method “ ‘prevents’ conception by abstinence, that is, by the non-performance of the conjugal act during the fertile period,” the Catholic journal America concluded in a 1964 editorial. “The Pill prevents conception by suppressing ovulation and by thus abolishing the fertile period. No amount of word juggling can make abstinence from sexual relations and the suppression of ovulation one and the same thing.” On July 29, 1968, in the “Humanae Vitae” encyclical, the Pope broke his silence, declaring all “artificial” methods of contraception to be against the teachings of the Church.

In hindsight, it is possible to see the opportunity that Rock missed. If he had known what we know now and had talked about the Pill not as a contraceptive but as a cancer drug—not as a drug to prevent life but as one that would save life—the Church might well have said yes. Hadn’t Pius XII already approved the Pill for therapeutic purposes? Rock would only have had to think of the Pill as Pike thinks of it: as a drug whose contraceptive aspects are merely a means of attracting users, of getting, as Pike put it, “people who are young to take a lot of stuff they wouldn’t otherwise take.”

But Rock did not live long enough to understand how things might have been. What he witnessed, instead, was the terrible time at the end of the sixties when the Pill suddenly stood accused—wrongly—of causing blood clots, strokes, and heart attacks. Between the midseventies and the early eighties, the number of women in the United States using the Pill fell by half. Harvard Medical School, meanwhile, took over Rock’s Reproductive Clinic and pushed him out. His Harvard pension paid him only seventy-five dollars a year. He had almost no money in the bank and had to sell his house in Brookline. In 1971, Rock left Boston and retreated to a farmhouse in the hills of New Hampshire. He swam in the stream behind the house. He listened to John Philip Sousa marches. In the evening, he would sit in the living room with a pitcher of martinis. In 1983, he gave his last public interview, and it was as if the memory of his achievements were now so painful that he had blotted it out.

He was asked what the most gratifying time of his life was. “Right now,” the inventor of the Pill answered, incredibly. He was sitting by the fire in a crisp white shirt and tie, reading The Origin, Irving Stone’s fictional account of the life of Darwin. “It frequently occurs to me, gosh, what a lucky guy I am. I have no responsibilities, and I have everything I want. I take a dose of equanimity every twenty minutes. I will not be disturbed about things.”

Once, John Rock had gone to seven-o’clock Mass every morning and kept a crucifix above his desk. His interviewer, the writer Sara Davidson, moved her chair closer to his and asked him whether he still believed in an afterlife.

“Of course I don’t,” Rock answered abruptly. Though he didn’t explain why, his reasons aren’t hard to imagine. The Church could not square the requirements of its faith with the results of his science, and if the Church couldn’t reconcile them, how could Rock be expected to? John Rock always stuck to his conscience, and in the end his conscience forced him away from the thing he loved most. This was not John Rock’s error. Nor was it his Church’s. It was the fault of the haphazard nature of science, which all too often produces progress in advance of understanding. If the order of events in the discovery of what was natural had been reversed, his world, and our world, too, would have been a different place.

“Heaven and Hell, Rome, all the Church stuff—that’s for the solace of the multitude,” Rock said. He had only a year to live. “I was an ardent practicing Catholic for a long time, and I really believed it all then, you see.”

March 13, 2000

What the Dog Saw

CESAR MILLAN AND THE MOVEMENTS OF MASTERY

1.

In the case of Sugar v. Forman, Cesar Millan knew none of the facts before arriving at the scene of the crime. That is the way Cesar prefers it. His job was to reconcile Forman with Sugar, and, since Sugar was a good deal less adept in making her case than Forman, whatever he learned beforehand might bias him in favor of the aggrieved party.

The Forman residence was in a trailer park in Mission Hills, just north of Los Angeles. Dark wood paneling, leather couches, deep-pile carpeting. The air-conditioning was on, even though it was one of those ridiculously pristine Southern California days. Lynda Forman was in her sixties, possibly older, a handsome woman with a winning sense of humor. Her husband, Ray, was in a wheelchair, and looked vaguely ex-military. Cesar sat across from them, in black jeans and a blue shirt, his posture characteristically perfect.

“So how can I help?” he said.

“You can help our monster turn into a sweet, lovable dog,” Lynda replied. It was clear that she had been thinking about how to describe Sugar to Cesar for a long time. “She’s ninety percent bad, ten percent the love…She sleeps with us at night. She cuddles.” Sugar meant a lot to Lynda. “But she grabs anything in sight that she can get, and tries to destroy it. My husband is disabled, and she destroys his room. She tears clothes. She’s torn our carpet. She bothers my grandchildren. If I open the door, she will run.” Lynda pushed back her sleeves and exposed her forearms. They were covered in so many bites and scratches and scars and scabs that it was as if she had been tortured. “But I love her. What can I say?”

Cesar looked at her arms and blinked. “Wow.”

Cesar is not a tall man. He is built like a soccer player. He is in his midthirties, and has large, wide eyes, olive skin, and white teeth. He crawled across the border from Mexico fourteen years ago, but his English is exceptional, except when he gets excited and starts dropping his articles—which almost never happens, because he rarely gets excited. He saw the arms and he said, “Wow,” but it was a “wow” in the same calm tone of voice as “So how can I help?”

Cesar began to ask questions. Did Sugar urinate in the house? She did. She had a particularly destructive relationship with newspapers, television remotes, and plastic cups. Cesar asked about walks. Did Sugar travel, or did she track—and when he said track he did an astonishing impersonation of a dog sniffing. Sugar tracked. What about discipline?

“Sometimes I put her in a crate,” Lynda said. “And it’s only for a fifteen-minute period. Then she lays down and she’s fine. I don’t know how to give discipline. Ask my kids.”

“Did your parents discipline you?”

“I didn’t need discipline. I was perfect.”

“So you had no rules…What about using physical touch with Sugar?”

“I have used it. It bothers me.”

“What about the bites?”

“I can see it in the head. She gives me that look.”

“She’s reminding you who rules the roost.”

“Then she will lick me for half an hour where she has bit me.”

“She’s not apologizing. Dogs lick each other’s wounds to heal the pack, you know.”

Lynda looked a little lost. “I thought she was saying sorry.”

“If she was sorry,” Cesar said softly, “she wouldn’t do it in the first place.”

It was time for the defendant. Lynda’s granddaughter, Carly, came in, holding a beagle as if it were a baby. Sugar was cute, but she had a mean, feral look in her eyes. Carly put Sugar on the carpet, and Sugar swaggered over to Cesar, sniffing his shoes. In front of her, Cesar placed a newspaper, a plastic cup, and a television remote.

Sugar grabbed the newspaper. Cesar snatched it back. Sugar picked up the newspaper again. She jumped on the couch. Cesar took his hand and “bit” Sugar on the shoulder, firmly and calmly. “My hand is the mouth,” he explained. “My fingers are the teeth.” Sugar jumped down. Cesar stood, and firmly and fluidly held Sugar down for an instant. Sugar struggled, briefly, then relaxed. Cesar backed off. Sugar lunged at the remote. Cesar looked at her and said, simply and briefly, “Sh-h-h.” Sugar hesitated. She went for the plastic cup. Cesar said, “Sh-h-h.” She dropped it. Cesar motioned for Lynda to bring a jar of treats into the room. He placed it in the middle of the floor and hovered over it. Sugar looked at the treats and then at Cesar. She began sniffing, inching closer, but an invisible boundary now stood between her and the prize. She circled and circled but never came closer than three feet. She looked as if she were about to jump on the couch. Cesar shifted his weight, and blocked her. He took a step toward her. She backed up, head lowered, into the furthest corner of the room. She sank down on her haunches, then placed her head flat on the ground. Cesar took the treats, the remote, the plastic cup, and the newspaper and placed them inches from her lowered nose. Sugar, the onetime terror of Mission Hills, closed her eyes in surrender.

“She has no rules in the outside world, no boundaries,” Cesar said, finally. “You practice exercise and affection. But you’re not practicing exercise, discipline, and affection. When we love someone, we fulfill everything about them. That’s loving. And you’re not loving your dog.” He stood up. He looked around.

“Let’s go for a walk.”

Lynda staggered into the kitchen. In five minutes, her monster had turned into an angel. “Unbelievable,” she said.

2.

Cesar Millan runs the Dog Psychology Center out of a converted auto mechanic’s shop in the industrial zone of South-Central Los Angeles. The center is situated at the end of a long narrow alley, off a busy street lined with bleak warehouses and garages. Behind a high green chain-link fence is a large concrete yard, and everywhere around the yard there are dogs. Dogs basking in the sun. Dogs splashing in a pool. Dogs lying on picnic tables. Cesar takes in people’s problem dogs; he keeps them for a minimum of two weeks, integrating them into the pack. He has no formal training. He learned what he knows growing up in Mexico on his grandfather’s farm in Sinaloa. As a child, he was called el Perrero, “the dog boy”; he watched and studied until he felt that he could put himself inside the mind of a dog. In the mornings, Cesar takes the pack on a four-hour walk in the Santa Monica Mountains: Cesar in front, the dogs behind him; the pit bulls and the Rottweilers and the German shepherds with backpacks, so that when the little dogs get tired Cesar can load them up on the big dogs’ backs. Then they come back and eat. Exercise, then food. Work, then reward.

“I have forty-seven dogs right now,” Cesar said. He opened the door, and they came running over, a jumble of dogs, big and small. Cesar pointed to a bloodhound. “He was aggressive with humans, really aggressive,” he said. In a corner of the compound, a wheaten terrier had just been given a bath. “She’s stayed here six months because she could not trust men,” Cesar explained. “She was beat up severely.” He idly scratched a big German shepherd. “My girlfriend here, Beauty. If you were to see the relationship between her and her owner.” He shook his head. “A very sick relationship. A Fatal Attraction kind of thing. Beauty sees her and she starts scratching her and biting her, and the owner is, like, ‘I love you, too.’ That one killed a dog. That one killed a dog, too. Those two guys came from New Orleans. They attacked humans. That pit bull over there with a tennis ball killed a Labrador in Beverly Hills. And look at this one—one eye. Lost the eye in a dogfight. But look at him now.” Now he was nuzzling a French bulldog. He was happy, and so was the Labrador killer from Beverly Hills, who was stretched out in the sun, and so was the aggressive-toward-humans bloodhound, who was lingering by a picnic table with his tongue hanging out. Cesar stood in the midst of all the dogs, his back straight and his shoulders square. It was a prison yard. But it was the most peaceful prison yard in all of California. “The whole point is that everybody has to stay calm, submissive, no matter what,” he said. “What you are witnessing right now is a group of dogs who all have the same state of mind.”

Cesar Millan is the host of Dog Whisperer, on the National Geographic television channel. In every episode, he arrives amid canine chaos and leaves behind peace. He is the teacher we all had in grade school who could walk into a classroom filled with rambunctious kids and get everyone to calm down and behave. But what did that teacher have? If you’d asked us back then, we might have said that we behaved for Mr. Exley because Mr. Exley had lots of rules and was really strict. But the truth is that we behaved for Mr. DeBock as well, and he wasn’t strict at all. What we really mean is that both of them had that indefinable thing called presence—and if you are going to teach a classroom full of headstrong ten-year-olds, or run a company, or command an army, or walk into a trailer home in Mission Hills where a beagle named Sugar is terrorizing its owners, you have to have presence or you’re lost.

Behind the Dog Psychology Center, between the back fence and the walls of the adjoining buildings, Cesar has built a dog run—a stretch of grass and dirt as long as a city block. “This is our Chuck E. Cheese,” Cesar said. The dogs saw Cesar approaching the back gate, and they ran, expectantly, toward him, piling through the narrow door in a hodgepodge of whiskers and wagging tails. Cesar had a bag over his shoulder, filled with tennis balls, and a long orange plastic ball scoop in his right hand. He reached into the bag with the scoop, grabbed a tennis ball, and flung it in a smooth practiced motion off the wall of an adjoining warehouse. A dozen dogs set off in ragged pursuit. Cesar wheeled and threw another ball, in the opposite direction, and then a third, and then a fourth, until there were so many balls in the air and on the ground that the pack had turned into a yelping, howling, leaping, charging frenzy. Woof. Woof, woof, woof. Woof.

“The game should be played five or ten minutes, maybe fifteen minutes,” Cesar said. “You begin. You end. And you don’t ask, ‘Please stop.’ You demand that it stop.” With that, Cesar gathered himself, stood stock still, and let out a short whistle: not a casual whistle but a whistle of authority. Suddenly, there was absolute quiet. All forty-seven dogs stopped charging and jumping and stood as still as Cesar, their heads erect, eyes trained on their ringleader. Cesar nodded, almost imperceptibly, toward the enclosure, and all forty-seven dogs turned and filed happily back through the gate.

3.

In the fall of 2005, Cesar filmed an episode of Dog Whisperer at the Los Angeles home of a couple named Patrice and Scott. They had a Korean Jindo named JonBee, a stray that they had found and adopted. Outside, and on walks, JonBee was well behaved and affectionate. Inside the house, he was a terror, turning viciously on Scott whenever he tried to get the dog to submit.

“Help us tame the wild beast,” Scott says to Cesar. “We’ve had two trainers come out, one of whom was doing this domination thing, where he would put JonBee on his back and would hold him until he submits. It went on for a good twenty minutes. This dog never let up. But, as soon as he let go, JonBee bit him four times…The guy was bleeding, both hands and his arms. I had another trainer come out, too, and they said, ‘You’ve got to get rid of this dog.’”

Cesar goes outside to meet JonBee. He walks down a few steps to the backyard. Cesar crouches down next to the dog. “The owner was a little concerned about me coming here by myself,” he says. “To tell you the truth, I feel more comfortable with aggressive dogs than insecure dogs, or fearful dogs, or panicky dogs. These are actually the guys who put me on the map.”

JonBee comes up and sniffs him. Cesar puts a leash on him. JonBee eyes Cesar nervously and starts to poke around. Cesar then walks JonBee into the living room. Scott puts a muzzle on him. Cesar tries to get the dog to lie on its side—and all hell breaks loose. JonBee turns and snaps and squirms and spins and jumps and lunges and struggles. His muzzle falls off. He bites Cesar. He twists his body up into the air, in a cold, vicious fury. The struggle between the two goes on and on. Patrice covers her face. Cesar asks her to leave the room. He is standing up, leash extended. He looks like a wrangler, taming a particularly ornery rattlesnake. Sweat is streaming down his face. Finally, Cesar gets the dog to sit, then to lie down, and then, somehow, to lie on its side. JonBee slumps, defeated. Cesar massages JonBee’s stomach. “That’s all we wanted,” he says.

What happened between Cesar and JonBee? One explanation is that they had a fight, alpha male versus alpha male. But fights don’t come out of nowhere. JonBee was clearly reacting to something in Cesar. Before he fought, he sniffed and explored and watched Cesar—the last of which is most important, because everything we know about dogs suggests that, in a way that is true of almost no other animals, dogs are students of human movement.

The anthropologist Brian Hare has done experiments with dogs, for example, where he puts a piece of food under one of two cups, placed several feet apart. The dog knows that there is food to be had, but has no idea which of the cups holds the prize. Then Hare points at the right cup, taps on it, looks directly at it. What happens? The dog goes to the right cup virtually every time. Yet when Hare did the same experiment with chimpanzees—an animal that shares 98.6 percent of our genes—the chimps couldn’t get it right. A dog will look at you for help, and a chimp won’t.

“Primates are very good at using the cues of the same species,” Hare explained. “So if we were able to do a similar game, and it was a chimp or another primate giving a social cue, they might do better. But they are not good at using human cues when you are trying to cooperate with them. They don’t get it: ‘Why would you ever tell me where the food is?’ The key specialization of dogs, though, is that dogs pay attention to humans, when humans are doing something very human, which is sharing information about something that someone else might actually want.” Dogs aren’t smarter than chimps; they just have a different attitude toward people. “Dogs are really interested in humans,” Hare went on. “Interested to the point of obsession. To a dog, you are a giant walking tennis ball.”

A dog cares, deeply, which way your body is leaning. Forward or backward? Forward can be seen as aggressive; backward—even a quarter of an inch—means nonthreatening. It means you’ve relinquished what ethologists call an intention movement to proceed forward. Cock your head, even slightly, to the side, and a dog is disarmed. Look at him straight on and he’ll read it like a red flag. Standing straight, with your shoulders squared, rather than slumped, can mean the difference between whether your dog obeys a command or ignores it. Breathing evenly and deeply—rather than holding your breath—can mean the difference between defusing a tense situation and igniting it. “I think they are looking at our eyes and where our eyes are looking, and what our eyes look like,” the ethologist Patricia McConnell, who teaches at the University of Wisconsin, Madison, says. “A rounded eye with a dilated pupil is a sign of high arousal and aggression in a dog. I believe they pay a tremendous amount of attention to how relaxed our face is and how relaxed our facial muscles are, because that’s a big cue for them with each other. Is the jaw relaxed? Is the mouth slightly open? And then the arms. They pay a tremendous amount of attention to where our arms go.”

In the book The Other End of the Leash, McConnell decodes one of the most common of all human-dog interactions—the meeting between two leashed animals on a walk. To us, it’s about one dog sizing up another. To her, it’s about two dogs sizing up each other after first sizing up their respective owners. The owners “are often anxious about how well the dogs will get along,” she writes, “and if you watch them instead of the dogs, you’ll often notice that the humans will hold their breath and round their eyes and mouths in an ‘on alert’ expression. Since these behaviors are expressions of offensive aggression in canine culture, I suspect that the humans are unwittingly signaling tension. If you exaggerate this by tightening the leash, as many owners do, you can actually cause the dogs to attack each other. Think of it: the dogs are in a tense social encounter, surrounded by support from their own pack, with the humans forming a tense, staring, breathless circle around them. I don’t know how many times I’ve seen dogs shift their eyes toward their owner’s frozen faces, and then launch growling at the other dog.”

When Cesar walked down the stairs of Patrice and Scott’s home, then, and crouched down in the backyard, JonBee looked at him, intently. And what he saw was someone who moved in a very particular way. Cesar is fluid. “He’s beautifully organized intraphysically,” Karen Bradley, who heads the graduate dance program at the University of Maryland, said when she first saw tapes of Cesar in action. “That lower-unit organization—I wonder whether he was a soccer player.” Movement experts like Bradley use something called Laban Movement Analysis to make sense of movement, describing, for instance, how people shift their weight, or how fluid and symmetrical they are when they move, or what kind of effort it involves. Is it direct or indirect—that is, what kind of attention does the movement convey? Is it quick or slow? Is it strong or light—that is, what is its intention? Is it bound or free—that is, how much precision is involved? If you want to emphasize a point, you might bring your hand down across your body in a single, smooth motion. But how you make that motion greatly affects how your point will be interpreted by your audience. Ideally, your hand would come down in an explosive, bound movement—that is, with accelerating force, ending abruptly and precisely—and your head and shoulders would descend simultaneously, so posture and gesture would be in harmony. Suppose, though, that your head and shoulders moved upward as your hand came down, or your hand came down in a free, implosive manner—that is, with a kind of a vague, decelerating force. Now your movement suggests that you are making a point on which we all agree, which is the opposite of your intention. Combinations of posture and gesture are called phrasing, and the great communicators are those who match their phrasing with their communicative intentions—who understand, for instance, that emphasis requires them to be bound and explosive. To Bradley, Cesar had beautiful phrasing.

There he is, talking to Patrice and Scott. He has his hands in front of him, in what Laban analysts call the sagittal plane—that is, the area directly in front of and behind the torso. He then leans forward for emphasis. But as he does, he lowers his hands to waist level, and draws them toward his body, to counterbalance the intrusion of his posture. And, when he leans backward again, the hands rise up, to fill the empty space. It’s not the kind of thing you’d ever notice. But, when it’s pointed out, its emotional meaning is unmistakable. It is respectful and reassuring. It communicates without being intrusive. Bradley was watching Cesar with the sound off, and there was one sequence she returned to again and again, in which Cesar was talking to a family, and his right hand swung down in a graceful arc across his chest. “He’s dancing,” Bradley said. “Look at that. It’s gorgeous. It’s such a gorgeous little dance.

“The thing is, his phrases are of mixed length,” she went on. “Some of them are long. Some of them are very short. Some of them are explosive phrases, loaded up in the beginning and then trailing off. Some of them are impactive—building up, and then coming to a sense of impact at the end. What they are is appropriate to the task. That’s what I mean by versatile.”

Movement analysts tend to like watching, say, Bill Clinton or Ronald Reagan; they had great phrasing. George W. Bush does not. During this year’s State of the Union address, Bush spent the entire speech swaying metronomically, straight down through his lower torso, a movement underscored, unfortunately, by the presence of a large vertical banner behind him. “Each shift ended with this focus that channels toward a particular place in the audience,” Bradley said. She mimed, perfectly, the Bush gaze—the squinty, fixated look he reserves for moments of great solemnity—and gently swayed back and forth. “It’s a little primitive, a little regressed.” The combination of the look, the sway, and the gaze was, to her mind, distinctly adolescent. When people say of Bush that he seems eternally boyish, this is in part what they’re referring to. He moves like a boy, which is fine, except that, unlike such movement masters as Reagan and Clinton, he can’t stop moving like a boy when the occasion demands a more grown-up response.

“Mostly what we see in the normal population is undifferentiated phrasing,” Bradley said. “And then you have people who are clearly preferential in their phrases, like my husband. He’s Mr. Horizontal. When he’s talking in a meeting, he’s back. He’s open. He just goes into this, this same long thing”—she leaned back, and spread her arms out wide and slowed her speech—“and it doesn’t change very much. He works with people who understand him, fortunately.” She laughed. “When we meet someone like this”—she nodded at Cesar, on the television screen—“what do we do? We give them their own TV series. Seriously. We reward them. We are drawn to them, because we can trust that we can get the message. It’s not going to be hidden. It contributes to a feeling of authenticity.”

4.

Back to JonBee, from the beginning—only this time with the sound off. Cesar walks down the stairs. It’s not the same Cesar who whistled and brought forty-seven dogs to attention. This occasion calls for subtlety. “Did you see the way he walks? He drops his hands. They’re close to his side.” The analyst this time was Suzi Tortora, the author of The Dancing Dialogue. Tortora is a New York dance-movement psychotherapist, a tall, lithe woman with long dark hair and beautiful phrasing. She was in her office on lower Broadway, a large, empty, paneled room. “He’s very vertical,” Tortora said. “His legs are right under his torso. He’s not taking up any space. And he slows down his gait. He’s telling the dog, ‘I’m here by myself. I’m not going to rush. I haven’t introduced myself yet. Here I am. You can feel me.’” Cesar crouches down next to JonBee. His body is perfectly symmetrical, the center of gravity low. He looks stable, as though you couldn’t knock him over, which conveys a sense of calm.

JonBee was investigating Cesar, squirming nervously. When JonBee got too jumpy, Cesar would correct him, with a tug on the leash. Because Cesar was talking and the correction was so subtle, it was easy to miss. Stop. Rewind. Play. “Do you see how rhythmic it is?” Tortora said. “He pulls. He waits. He pulls. He waits. He pulls. He waits. The phrasing is so lovely. It’s predictable. To a dog that is all over the place, he’s bringing a rhythm. But it isn’t a panicked rhythm. It has a moderate tempo to it. There was room to wander. And it’s not attack, attack. It wasn’t long and sustained. It was quick and light. I would bet that with dogs like this, where people are so afraid of them being aggressive and so defensive around them, there is a lot of aggressive strength directed at them. There is no aggression here. He’s using strength without it being aggressive.”

Cesar moves into the living room. The fight begins. “Look how he involves the dog,” Tortora said. “He’s letting the dog lead. He’s giving the dog room.” This was not a Secret Service agent wrestling an assailant to the ground. Cesar had his body vertical, and his hand high above JonBee holding the leash, and, as JonBee turned and snapped and squirmed and spun and jumped and lunged and struggled, Cesar seemed to be moving along with him, providing a loose structure for his aggression. It may have looked like a fight, but Cesar wasn’t fighting. And what was JonBee doing? Child psychologists talk about the idea of regulation. If you expose healthy babies, repeatedly, to a very loud noise, eventually they will be able to fall asleep. They’ll become habituated to the noise: the first time the noise is disruptive, but, by the second or third time, they’ve learned to handle the disruption, and block it out. They’ve regulated themselves. Children throwing tantrums are said to be in a state of dysregulation. They’ve been knocked off kilter in some way, and cannot bring themselves back to baseline. JonBee was dysregulated. He wasn’t fighting; he was throwing a tantrum. And Cesar was the understanding parent. When JonBee paused, to catch his breath, Cesar paused with him. When JonBee bit Cesar, Cesar brought his finger to his mouth, instinctively, but in a smooth and fluid and calm motion that betrayed no anxiety. “Timing is a big part of Cesar’s repertoire,” Tortora went on. “His movements right now aren’t complex. There aren’t a lot of efforts together at one time. His range of movement qualities is limited. Look at how he’s narrowing. Now he’s enclosing.” As JonBee calmed down, Cesar began caressing him. His touch was firm but not aggressive; not so strong as to be abusive and not so light as to be insubstantial and irritating. Using the language of movement—the plainest and most transparent of all languages—Cesar was telling JonBee that he was safe. Now JonBee was lying on his side, mouth relaxed, tongue out. “Look at that, look at the dog’s face,” Tortora said. This was not defeat; this was relief.

Later, when Cesar tried to show Scott how to placate JonBee, Scott couldn’t do it, and Cesar made him stop. “You’re still nervous,” Cesar told him. “You are still unsure. That’s how you become a target.” It isn’t as easy as it sounds to calm a dog. “There, there” in a soothing voice, accompanied by a nice belly scratch, wasn’t enough for JonBee, because he was reading gesture and posture and symmetry and the precise meaning of touch. He was looking for clarity and consistency. Scott didn’t have it. “Look at the tension and aggression in his face,” Tortora said, when the camera turned to Scott. It was true. Scott had a long and craggy face, with high, wide cheekbones and pronounced lips, and his movements were taut and twitchy. “There’s a bombardment of actions, quickness combined with tension, a quality in how he is using his eyes and focus—a darting,” Tortora said. “He gesticulates in a way that is complex. There is a lot going on. So many different qualities of movement happening at the same time. It leads those who watch him to get distracted.” Scott is a character actor, with a list of credits going back thirty years. The tension and aggression in his manner made him interesting and complicated—which works for Hollywood but doesn’t work for a troubled dog. Scott said he loved JonBee, but the quality of his movement did not match his emotions.

For a number of years, Tortora has worked with Eric (not his real name), an autistic boy with severe language and communication problems. Tortora videotaped some of their sessions, and in one, four months after they started to work together, Eric is standing in the middle of Tortora’s studio in Cold Spring, New York, a beautiful dark-haired three-and-a-half-year-old wearing only a diaper. His mother is sitting to the side, against the wall. In the background, you can hear the sound track to Riverdance, which happens to be Eric’s favorite album. Eric is having a tantrum.

He gets up and runs toward the stereo. Then he runs back and throws himself down on his stomach, arms and legs flailing. Tortora throws herself down on the ground, just as he did. He sits up. She sits up. He twists. She twists. He squirms. She squirms. “When Eric is running around, I didn’t say, ‘Let’s put on quiet music.’ I can’t turn him off, because he can’t turn off,” Tortora said. “He can’t go from zero to sixty and then back down to zero. With a typical child, you might say, ‘Take a deep breath. Reason with me’—and that might work. But not with children like this. They are in their world by themselves. I have to go in there and meet them and bring them back out.”

Tortora sits up on her knees, and faces Eric. His legs are moving in every direction, and she takes his feet in her hands. Slowly, and subtly, she begins to move his legs in time with the music. Eric gets up and runs to the corner of the room and back again. Tortora gets up and mirrors his action, but this time she moves more fluidly and gracefully than he did. She takes his feet again. This time, she moves Eric’s entire torso, opening the pelvis in a contralateral twist. “I’m standing above him, looking directly at him. I am very symmetrical. So I’m saying to him, ‘I’m stable. I’m here. I’m calm.’ I’m holding him at the knees and giving him sensory input. It’s firm and clear. Touch is an incredible tool. It’s another way to speak.”

She starts to rock his knees from side to side. Eric begins to calm down. He begins to make slight adjustments to the music. His legs move more freely, more lyrically. His movement is starting to get organized. He goes back into his mother’s arms. He’s still upset, but his cry has softened. Tortora sits and faces him—stable, symmetrical, direct eye contact.

His mother says, “You need a tissue?”

Eric nods.

Tortora brings him a tissue. Eric’s mother says that she needs a tissue. Eric gives his tissue to his mother.

“Can we dance?” Tortora asks him.

“OK,” he says in a small voice.

It was impossible to see Tortora with Eric and not think of Cesar with JonBee: here was the same extraordinary energy and intelligence and personal force marshaled on behalf of the helpless, the same calm in the face of chaos, and, perhaps most surprising, the same gentleness. When we talk about people with presence, we often assume that they have a strong personality—that they sweep us all up in their own personal whirlwind. Our model is the Pied Piper, who played his irresistible tune and every child in Hamelin blindly followed. But Cesar Millan and Suzi Tortora play different tunes, in different situations. And they don’t turn their back, and expect others to follow. Cesar let JonBee lead; Tortora’s approaches to Eric were dictated by Eric. Presence is not just versatile; it’s also reactive. Certain people, we say, “command our attention,” but the verb is all wrong. There is no commanding, only soliciting. The dogs in the dog run wanted someone to tell them when to start and stop; they were refugees from anarchy and disorder. Eric wanted to enjoy Riverdance. It was his favorite music. Tortora did not say, “Let us dance.” She asked, “Can we dance?”

Then Tortora gets a drum and starts to play. Eric’s mother stands up and starts to circle the room, in an Irish step dance. Eric is lying on the ground, and slowly his feet start to tap in time with the music. He gets up. He walks to the corner of the room, disappears behind a partition, and then reenters, triumphant. He begins to dance, playing an imaginary flute as he circles the room.

5.

When Cesar was twenty-one, he traveled from his hometown to Tijuana, and a “coyote” took him across the border for a hundred dollars. They waited in a hole, up to their chests in water, and then ran over the mudflats, through a junkyard, and across a freeway. A taxi took him to San Diego. After a month on the streets, grimy and dirty, he walked into a dog-grooming salon and got a job, working with the difficult cases and sleeping in the offices at night. He moved to Los Angeles, and took a day job detailing limousines while he ran his dog-psychology business out of a white Chevy Astrovan. When he was twenty-three, he fell in love with an American girl named Illusion. She was seventeen, small, dark, and very beautiful. A year later, they got married.

“Cesar was a machoistic, egocentric person who thought the world revolved around him,” Illusion recalled, of their first few years together. “His view was that marriage was where a man tells a woman what to do. Never give affection. Never give compassion or understanding. Marriage is about keeping the man happy, and that’s where it ends.”

Early in their marriage, Illusion got sick, and was in the hospital for three weeks. “Cesar visited once, for less than two hours,” she said. “I thought to myself, This relationship is not working out. He just wanted to be with his dogs.” They had a new baby and no money. They separated. Illusion told Cesar that she would divorce him if he didn’t get into therapy. He agreed, reluctantly. “The therapist’s name was Wilma,” Illusion went on. “She was a strong African American woman. She said, ‘You want your wife to take care of you, to clean the house. Well, she wants something, too. She wants your affection and love.’” Illusion remembers Cesar scribbling furiously on a pad. “He wrote that down. He said, ‘That’s it! It’s like the dogs. They need exercise, discipline, and affection.’” Illusion laughed. “I looked at him, upset, because why the hell are you talking about your dogs when you should be talking about us?”

“I was fighting it,” Cesar said. “Two women against me, blah, blah, blah. I had to get rid of the fight in my mind. That was very difficult. But that’s when the lightbulb came on. Women have their own psychology.”

Cesar could calm a stray off the street, yet, at least in the beginning, he did not grasp the simplest of truths about his own wife. “Cesar related to dogs because he didn’t feel connected to people,” Illusion said. “His dogs were his way of feeling like he belonged in the world, because he wasn’t people-friendly. And it was hard for him to get out of that.” In Mexico, on his grandfather’s farm, dogs were dogs and humans were humans: each knew its place. But in America, dogs were treated like children, and owners had shaken up the hierarchy of human and animal. Sugar’s problem was Lynda. JonBee’s problem was Scott. Cesar calls that epiphany in the therapist’s office the most important moment in his life, because it was the moment when he understood that to succeed in the world he could not be just a dog whisperer. He needed to be a people whisperer.

For his show, Cesar once took a case involving a Chihuahua named Bandit. Bandit had a large, rapper-style diamond-encrusted necklace around his neck spelling “Stud.” His owner was Lori, a voluptuous woman with an oval face and large, pleading eyes. Bandit was out of control, terrorizing guests and menacing other dogs. Three trainers had failed to get him under control.

Lori was on the couch in her living room as she spoke to Cesar. Bandit was sitting in her lap. Her teenage son, Tyler, was sitting next to her.

“About two weeks after his first visit with the vet, he started to lose a lot of hair,” Lori said. “They said that he had Demodex mange.” Bandit had been sold to her as a show-quality dog, she recounted, but she had the bloodline checked and learned that he had come from a puppy mill. “He didn’t have any human contact,” she went on. “So for three months he was getting dipped every week to try to get rid of the symptoms.” As she spoke, her hands gently encased Bandit. “He would hide inside my shirt and lay his head right by my heart, and stay there.” Her eyes were moist. “He was right here on my chest.”

“So your husband cooperated?” Cesar asked. He was focused on Lori, not on Bandit. This is what the new Cesar understood that the old Cesar did not.

“He was our baby. He was in need of being nurtured and helped and he was so scared all the time.”

“Do you still feel the need of feeling sorry about him?”

“Yeah. He’s so cute.”

Cesar seemed puzzled. He didn’t know why Lori would still feel sorry for her dog.

Lori tried to explain. “He’s so small and he’s helpless.”

“But do you believe that he feels helpless?”

Lori still had her hands over the dog, stroking him. Tyler was looking at Cesar, and then at his mother, and then down at Bandit. Bandit tensed. Tyler reached over to touch the dog, and Bandit leaped out of Lori’s arms and attacked him, barking and snapping and growling. Tyler, startled, jumped back. Lori, alarmed, reached out, and—this was the critical thing—put her hands around Bandit in a worried, caressing motion, and lifted him back into her lap. It happened in an instant.

Cesar stood up. “Give me the space,” he said, gesturing for Tyler to move aside. “Enough dogs attacking humans, and humans not really blocking him, so he is only becoming more narcissistic. It is all about him. He owns you.” Cesar was about as angry as he ever gets. “It seems like you are favoring the dog, and hopefully that is not the truth…If Tyler kicked the dog, you would correct him. The dog is biting your son, and you are not correcting hard enough.” Cesar was in emphatic mode now, his phrasing sure and unambiguous. “I don’t understand why you are not putting two and two together.”

Bandit was nervous. He started to back up on the couch. He started to bark. Cesar gave him a look out of the corner of his eye. Bandit shrank. Cesar kept talking. Bandit came at Cesar. Cesar stood up. “I have to touch,” he said, and he gave Bandit a sharp nudge with his elbow. Lori looked horrified.

Cesar laughed, incredulously. “You are saying that it is fair for him to touch us but not fair for us to touch him?” he asked. Lori leaned forward to object. “You don’t like that, do you?” Cesar said, in his frustration speaking to the whole room now. “It’s not going to work. This is a case that is not going to work, because the owner doesn’t want to allow what you normally do with your kids…The hardest part for me is that the father or mother chooses the dog instead of the son. That’s hard for me. I love dogs. I’m the dog whisperer. You follow what I’m saying? But I would never choose a dog over my son.”

He stopped. He had had enough of talking. There was too much talking anyhow. People saying “I love you” with a touch that didn’t mean “I love you.” People saying “There, there” with gestures that did not soothe. People saying “I’m your mother” while reaching out to a Chihuahua instead of their own flesh and blood. Tyler looked stricken. Lori shifted nervously in her seat. Bandit growled. Cesar turned to the dog and said “Sh-h-h.” And everyone was still.

May 22, 2006

PART TWO

Theories, Predictions, and Diagnoses

“It was like driving down an interstate looking through a soda straw.”

Open Secrets

ENRON, INTELLIGENCE, AND THE PERILS OF TOO MUCH INFORMATION

1.

On the afternoon of October 23, 2006, Jeffrey Skilling sat at a table at the front of a federal courtroom in Houston, Texas. He was wearing a navy blue suit and a tie. He was fifty-two years old, but looked older. Huddled around him were eight lawyers from his defense team. Outside, television-satellite trucks were parked up and down the block.

“We are here this afternoon,” Judge Simeon Lake began, “for sentencing in United States of America versus Jeffrey K. Skilling, Criminal Case Number H-04-25.” He addressed the defendant directly: “Mr. Skilling, you may now make a statement and present any information in mitigation.”

Skilling stood up. Enron, the company he had built into an energy-trading leviathan, had collapsed into bankruptcy almost exactly five years before. In May, he had been convicted by a jury of fraud. Under a settlement agreement, almost everything he owned had been turned over to a fund to compensate former shareholders.

He spoke haltingly, stopping in midsentence. “In terms of remorse, Your Honor, I can’t imagine more remorse,” he said. He had “friends who have died, good men.” He was innocent—“innocent of every one of these charges.” He spoke for two or three minutes and sat down.

Judge Lake called on Anne Beliveaux, who worked as the senior administrative assistant in Enron’s tax department for eighteen years. She was one of nine people who had asked to address the sentencing hearing.

“How would you like to be facing living off of sixteen hundred dollars a month, and that is what I’m facing,” she said to Skilling. Her retirement savings had been wiped out by the Enron bankruptcy. “And, Mr. Skilling, that only is because of greed, nothing but greed. And you should be ashamed of yourself.”

The next witness said that Skilling had destroyed a good company, the third witness that Enron had been undone by the misconduct of its management; another lashed out at Skilling directly. “Mr. Skilling has proven to be a liar, a thief, and a drunk,” a woman named Dawn Powers Martin, a twenty-two-year veteran of Enron, told the court. “Mr. Skilling has cheated me and my daughter of our retirement dreams. Now it’s his time to be robbed of his freedom to walk the earth as a free man.” She turned to Skilling and said, “While you dine on Chateaubriand and champagne, my daughter and I clip grocery coupons and eat leftovers.” And on and on it went.

The judge asked Skilling to rise.

“The evidence established that the defendant repeatedly lied to investors, including Enron’s own employees, about various aspects of Enron’s business,” the judge said. He had no choice but to be harsh: Skilling would serve 292 months in prison—twenty-four years. The man who headed a firm that Fortune ranked among the “most admired” in the world had received one of the heaviest sentences ever given to a white-collar criminal. He would leave prison an old man, if he left prison at all.

“I only have one request, Your Honor,” Daniel Petrocelli, Skilling’s lawyer, said. “If he received ten fewer months, which shouldn’t make a difference in terms of the goals of sentencing, if you do the math and you subtract fifteen percent for good time, he then qualifies under Bureau of Prisons policies to be able to serve his time at a lower facility. Just a ten-month reduction in sentence…”

It was a plea for leniency. Skilling wasn’t a murderer or a rapist. He was a pillar of the Houston community, and a small adjustment in his sentence would keep him from spending the rest of his life among hardened criminals.

“No,” Judge Lake said.

2.

The national security expert Gregory Treverton has famously made a distinction between puzzles and mysteries. Osama bin Laden’s whereabouts are a puzzle. We can’t find him because we don’t have enough information. The key to the puzzle will probably come from someone close to bin Laden, and until we can find that source, bin Laden will remain at large.

The problem of what would happen in Iraq after the toppling of Saddam Hussein was, by contrast, a mystery. It wasn’t a question that had a simple, factual answer. Mysteries require judgments and the assessment of uncertainty, and the hard part is not that we have too little information but that we have too much. The CIA had a position on what a post-invasion Iraq would look like, and so did the Pentagon and the State Department and Colin Powell and Dick Cheney and any number of political scientists and journalists and think tank fellows. For that matter, so did every cabdriver in Baghdad.

The distinction is not trivial. If you consider the motivation and methods behind the attacks of September 11 to be mainly a puzzle, for instance, then the logical response is to increase the collection of intelligence, recruit more spies, add to the volume of information we have about Al Qaeda. If you consider September 11 a mystery, though, you’d have to wonder whether adding to the volume of information will only make things worse. You’d want to improve the analysis within the intelligence community; you’d want more thoughtful and skeptical people with the skills to look more closely at what we already know about Al Qaeda. You’d want to send the counterterrorism team from the CIA on a golfing trip twice a month with the counterterrorism teams from the FBI and the NSA and the Defense Department, so they could get to know one another and compare notes.

If things go wrong with a puzzle, identifying the culprit is easy: it’s the person who withheld information. Mysteries, though, are a lot murkier: sometimes the information we’ve been given is inadequate, and sometimes we aren’t very smart about making sense of what we’ve been given, and sometimes the question itself cannot be answered. Puzzles come to satisfying conclusions. Mysteries often don’t.

If you sat through the trial of Jeffrey Skilling, you’d think that the Enron scandal was a puzzle. The company, the prosecution said, conducted shady side deals that no one quite understood. Senior executives withheld critical information from investors. Skilling, the architect of the firm’s strategy, was a liar, a thief, and a drunk. “We were not told enough”—the classic puzzle premise—was the central assumption of the Enron prosecution.

“This is a simple case, ladies and gentlemen,” the lead prosecutor for the Department of Justice said in his closing arguments to the jury:

Because it’s so simple, I’m probably going to end before my allotted time. It’s black-and-white. Truth and lies. The shareholders, ladies and gentlemen… buy a share of stock, and for that they’re not entitled to much but they’re entitled to the truth. They’re entitled for the officers and employees of the company to put their interests ahead of their own. They’re entitled to be told what the financial condition of the company is. They are entitled to honesty, ladies and gentlemen.

But the prosecutor was wrong. Enron wasn’t really a puzzle. It was a mystery.

3.

In late July of 2000, Jonathan Weil, a reporter at the Dallas bureau of the Wall Street Journal, got a call from someone he knew in the investment-management business. Weil wrote the stock column called “Heard in Texas” for the paper’s regional edition, and he had been closely following the big energy firms based in Houston—Dynegy, El Paso, and Enron. His caller had a suggestion. “He said, ‘You really ought to check out Enron and Dynegy and see where their earnings come from,’” Weil recalled. “So I did.”

Weil was interested in Enron’s use of what is called mark-to-market accounting, which is a technique used by companies that engage in complicated financial trading. Suppose, for instance, that you are an energy company and you enter into a $100 million contract with the state of California to deliver a billion kilowatt hours of electricity in 2016. How much is that contract worth? You aren’t going to get paid for another ten years, and you aren’t going to know until then whether you’ll show a profit on the deal or a loss. Nonetheless, that $100 million promise clearly matters to your bottom line. If electricity steadily drops in price over the next several years, the contract is going to become a hugely valuable asset. But if electricity starts to get more expensive as 2016 approaches, you could be out tens of millions of dollars. With mark-to-market accounting, you estimate how much revenue the deal is going to bring in and put that number in your books at the moment you sign the contract. If, down the line, the estimate changes, you adjust the balance sheet accordingly.

When a company using mark-to-market accounting says it has made a profit of $10 million on revenues of $100 million, then, it could mean one of two things. The company may actually have $100 million in its bank accounts, of which $10 million will remain after it has paid its bills. Or it may be guessing that it will make $10 million on a deal where money may not actually change hands for years. Weil’s source wanted him to see how much of the money Enron said it was making was “real.”
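To make those two readings concrete, here is a minimal sketch in Python, with invented figures; the $90 million and $120 million cost estimates below are hypothetical, chosen for illustration, and are not drawn from Enron’s filings.

```python
# Sketch only: contrast cash accounting (book money when it arrives)
# with mark-to-market accounting (book the estimate at signing).
# All figures are invented for illustration.

estimated_revenue = 100_000_000   # the contract's face value
estimated_costs = 90_000_000      # hypothetical cost estimate

# Mark-to-market: the expected profit goes on the books the day the
# contract is signed, even though no cash has changed hands.
mtm_profit_booked = estimated_revenue - estimated_costs
print(f"Mark-to-market profit booked at signing: ${mtm_profit_booked:,}")

# Cash accounting: nothing is booked until payments actually arrive.
cash_received = 0
print(f"Cash profit booked at signing: ${cash_received:,}")

# Years later, suppose costs are re-estimated at $120 million. The
# mark-to-market books must then be adjusted downward.
revised_costs = 120_000_000
adjustment = (estimated_revenue - revised_costs) - mtm_profit_booked
print(f"Adjustment to the books: {adjustment:,} dollars")  # -30,000,000
```

Under the first convention, a $10 million profit can appear on the books on a day when actual cash receipts are zero; that gap between booked profit and realized cash is what Weil’s source wanted him to measure.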

Weil got copies of the firm’s annual reports and quarterly filings and began comparing the income statements and the cash-flow statements. “It took me a while to figure out everything I needed to,” Weil said. “It probably took a good month or so. There was a lot of noise in the financial statements, and to zero in on this particular issue you needed to cut through a lot of that.” Weil spoke to Thomas Linsmeier, then an accounting professor at Michigan State, and they talked about how some finance companies in the 1990s had used mark-to-market accounting on subprime loans—that is, loans made to higher-credit-risk consumers—and when the economy declined and consumers defaulted or paid off their loans more quickly than expected, the lenders suddenly realized that their estimates of how much money they were going to make were far too generous. Weil spoke to someone at the Financial Accounting Standards Board, to an analyst at the Moody’s investment-rating agency, and to a dozen or so others. Then he went back to Enron’s financial statements. His conclusions were sobering. In the second quarter of 2000, $747 million of the money Enron said it had made was unrealized—that is, it was money that executives thought they were going to make at some point in the future. If you took that imaginary money away, Enron had shown a significant loss in the second quarter. This was one of the most admired companies in the United States, a firm that was then valued by the stock market as the seventh-largest corporation in the country, and there was practically no cash coming into its coffers.

Weil’s story ran in the Journal on September 20, 2000. A few days later, it was read by a Wall Street financier named James Chanos. Chanos is a short-seller—an investor who tries to make money by betting that a company’s stock will fall. “It pricked up my ears,” Chanos said. “I read the 10-K and the 10-Q that first weekend,” he went on, referring to the financial statements that public companies are required to file with federal regulators. “I went through it pretty quickly. I flagged right away the stuff that was questionable. I circled it. That was the first run-through. Then I flagged the pages and read the stuff I didn’t understand, and reread it two or three times. I remember I spent a couple hours on it.” Enron’s profit margins and its return on equity were plunging, Chanos saw. Cash flow—the lifeblood of any business—had slowed to a trickle, and the company’s rate of return was less than its cost of capital: it was as if you had borrowed money from the bank at 9 percent interest and invested it in a savings bond that paid you 7 percent interest. “They were basically liquidating themselves,” Chanos said.

In November of that year, Chanos began shorting Enron stock. Over the next few months, he spread the word that he thought the company was in trouble. He tipped off a reporter for Fortune, Bethany McLean. She read the same reports that Chanos and Weil had, and came to the same conclusion. Her story, under the headline “IS ENRON OVERPRICED?,” ran in March of 2001. More and more journalists and analysts began taking a closer look at Enron, and the stock began to fall. In August, Skilling resigned. Enron’s credit rating was downgraded. Banks became reluctant to lend Enron the money it needed to make its trades. By December, the company had filed for bankruptcy.

Enron’s downfall has been documented so extensively that it is easy to overlook how peculiar it was. Compare Enron, for instance, with Watergate, the prototypical scandal of the 1970s. To expose the White House cover-up, Bob Woodward and Carl Bernstein used a source—Deep Throat—who had access to many secrets, and whose identity had to be concealed. He warned Woodward and Bernstein that their phones might be tapped. When Woodward wanted to meet with Deep Throat, he would move a flowerpot with a red flag in it to the back of his apartment balcony. That evening, he would leave by the back stairs, take multiple taxis to make sure he wasn’t being followed, and meet his source in an underground parking garage at 2 a.m. Here, from All the President’s Men, is Woodward’s climactic encounter with Deep Throat:

“Okay,” he said softly. “This is very serious. You can safely say that fifty people worked for the White House and CRP to play games and spy and sabotage and gather intelligence. Some of it is beyond belief, kicking at the opposition in every imaginable way.”

Deep Throat nodded confirmation as Woodward ran down items on a list of tactics that he and Bernstein had heard were used against the political opposition: bugging, following people, false press leaks, fake letters, cancelling campaign rallies, investigating campaign workers’ private lives, planting spies, stealing documents, planting provocateurs in political demonstrations.

“It’s all in the files,” Deep Throat said. “Justice and the Bureau know about it, even though it wasn’t followed up.”

Woodward was stunned. Fifty people directed by the White House and CRP to destroy the opposition, no holds barred?

Deep Throat nodded.

The White House had been willing to subvert—was that the right word?—the whole electoral process? Had actually gone ahead and tried to do it?

Another nod. Deep Throat looked queasy.

And hired fifty agents to do it?

“You can safely say more than fifty,” Deep Throat said. Then he turned, walked up the ramp and out. It was nearly 6:00 a.m.

Watergate was a classic puzzle: Woodward and Bernstein were searching for a buried secret, and Deep Throat was their guide.

Did Jonathan Weil have a Deep Throat? Not really. He had a friend in the investment-management business with some suspicions about energy-trading companies like Enron, but the friend wasn’t an insider. Nor did Weil’s source direct him to files detailing the clandestine activities of the company. He just told Weil to read a series of public documents that had been prepared and distributed by Enron itself. Woodward met with his secret source in an underground parking garage in the hours before dawn. Weil called up an accounting expert at Michigan State.

When Weil had finished his reporting, he called Enron for comment. “They had their chief accounting officer and six or seven people fly up to Dallas,” Weil says. They met in a conference room at the Journal’s offices. The Enron officials acknowledged that the money they said they earned was virtually all money that they hoped to earn. Weil and the Enron officials then had a long conversation about how certain Enron was about its estimates of future earnings. “They were telling me how brilliant the people who put together their mathematical models were,” Weil says. “These were MIT PhDs. I said, ‘Were your mathematical models last year telling you that the California electricity markets would be going berserk this year? No? Why not?’ They said, ‘Well, this is one of those crazy events.’ It was late September 2000 so I said, ‘Who do you think is going to win? Bush or Gore?’ They said, ‘We don’t know.’ I said, ‘Don’t you think it will make a difference to the market whether you have an environmentalist Democrat in the White House or a Texas oilman?’” It was all very civil. “There was no dispute about the numbers,” Weil went on. “There was only a difference in how you should interpret them.”

Of all the moments in the Enron unraveling, this meeting is surely the strangest. The prosecutor in the Enron case told the jury to send Jeffrey Skilling to prison because Enron had hidden the truth: You’re “entitled to be told what the financial condition of the company is,” the prosecutor had said. But what truth was Enron hiding here? Everything Weil learned for his Enron exposé came from Enron, and when he wanted to confirm his numbers, the company’s executives got on a plane and sat down with him in a conference room in Dallas.

Nixon never went to see Woodward and Bernstein at the Washington Post. He hid in the White House.

4.

The second, and perhaps more consequential, problem with Enron’s accounting was its heavy reliance on what are called special-purpose entities, or SPEs.

An SPE works something like this. Your company isn’t doing well; sales are down and you are heavily in debt. If you go to a bank to borrow $100 million, it will probably charge you an extremely high interest rate, if it agrees to lend to you at all. But you’ve got a bundle of oil leases that over the next four or five years are almost certain to bring in $100 million. So you hand them over to a partnership—the SPE—that you have set up with some outside investors. The bank then lends $100 million to the partnership, and the partnership gives the money to you. That bit of financial maneuvering makes a big difference. This kind of transaction did not (at the time) have to be reported in the company’s balance sheet. So a company could raise capital without increasing its indebtedness. And because the bank is almost certain the leases will generate enough money to pay off the loan, it’s willing to lend its money at a much lower interest rate. SPEs have become commonplace in corporate America.
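A rough sketch of the maneuver, with invented figures and hypothetical entities, shows why it mattered to the balance sheet.

```python
# Sketch of the off-balance-sheet effect of an SPE.
# All figures and entities are invented for illustration.

company = {"debt": 500_000_000, "cash": 10_000_000}
oil_leases = 100_000_000   # expected to bring in $100M over 4-5 years

# Step 1: the company hands the leases over to the partnership (the SPE).
spe = {"assets": oil_leases, "debt": 0}

# Step 2: the bank lends $100M to the SPE, secured by the leases.
spe["debt"] += 100_000_000

# Step 3: the SPE passes the loan proceeds to the company.
company["cash"] += 100_000_000

# The loan sits on the SPE's books, not the company's, so the company's
# reported indebtedness is unchanged even though it holds the cash.
print(f"Company debt reported: ${company['debt']:,}")  # unchanged
print(f"Company cash on hand:  ${company['cash']:,}")  # up $100 million
print(f"Debt carried by SPE:   ${spe['debt']:,}")
```

The last three lines are the point: the company raises $100 million against its own assets, and, under the reporting rules of the day, none of the new debt appears on its own balance sheet.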

Enron introduced all kinds of twists into the SPE game. It didn’t always put blue-chip assets into the partnerships—like oil leases that would reliably generate income. It sometimes sold off less-than-sterling assets. Nor did it always sell those assets to outsiders, who presumably would raise questions about the value of what they were buying. Enron had its own executives manage these partnerships. And the company would make the deals work—that is, get the partnerships and the banks to play along—by guaranteeing that, if whatever they had to sell declined in value, Enron would make up the difference with its own stock. In other words, Enron didn’t sell parts of itself to an outside entity; it effectively sold parts of itself to itself—a strategy that was not only legally questionable but extraordinarily risky. It was Enron’s tangle of financial obligations to the SPEs that ended up triggering the collapse.

When the prosecution in the Skilling case argued that the company had misled its investors, they were referring, in part, to these SPEs. Enron’s management, the argument went, had an obligation to reveal the extent to which it had staked its financial livelihood on these shadowy side deals. As the Powers Committee, a panel charged with investigating Enron’s demise, noted, the company “failed to achieve a fundamental objective: they did not communicate the essence of the transactions in a sufficiently clear fashion to enable a reader of [Enron’s] financial statements to understand what was going on.” In short, we weren’t told enough.

Here again, though, the lessons of the Enron case aren’t nearly so straightforward. The public became aware of the nature of these SPEs through the reporting of several of Weil’s colleagues at the Wall Street Journal—principally John Emshwiller and Rebecca Smith—starting in the late summer of 2001. And how was Emshwiller tipped off to Enron’s problems? The same way Jonathan Weil and Jim Chanos were: he read what Enron had reported in its own public filings. Here is the description of Emshwiller’s epiphany, as described in Kurt Eichenwald’s Conspiracy of Fools, the definitive history of the Enron debacle. (Note the verb scrounged, which Eichenwald uses to describe how Emshwiller found the relevant Enron documents. What he means by that is downloaded.)

It was section eight, called “Related Party Transactions,” that got John Emshwiller’s juices flowing.

After being assigned to follow the Skilling resignation, Emshwiller had put in a request for an interview, then scrounged up a copy of Enron’s most recent SEC filing in search of any nuggets.

What he found startled him. Words about some partnerships run by an unidentified “senior officer.” Arcane stuff, maybe, but the numbers were huge. Enron reported more than $240 million in revenues in the first six months of the year from its dealings with them.

Enron’s SPEs were, by any measure, evidence of extraordinary recklessness and incompetence. But you can’t blame Enron for covering up the existence of its side deals. It didn’t; it disclosed them. The argument against the company, then, is more accurately that it didn’t tell its investors enough about its SPEs. But what is enough? Enron had some three thousand SPEs, and the paperwork for each one probably ran in excess of a thousand pages. It scarcely would have helped investors if Enron had made all three million pages public. What about an edited version of each deal? Steven Schwarcz, a professor at Duke Law School, recently examined a random sample of twenty SPE disclosure statements from various corporations—that is, summaries of the deals put together for interested parties—and found that on average they ran to forty single-spaced pages. So a summary of Enron’s SPEs would have come to a hundred and twenty thousand single-spaced pages. What about a summary of all those summaries? That’s what the bankruptcy examiner in the Enron case put together, and it took up a thousand pages. Well, then, what about a summary of the summary of the summaries? That’s what the Powers Committee put together. The committee looked only at the “substance of the most significant transactions,” and its accounting still ran to two hundred numbingly complicated pages and, as Schwarcz points out, that was “with the benefit of hindsight and with the assistance of some of the finest legal talent in the nation.”

A puzzle grows simpler with the addition of each new piece of information: if I tell you that Osama bin Laden is hiding in Peshawar, I make the problem of finding him an order of magnitude easier, and if I add that he’s hiding in a neighborhood in the northwest corner of the city, the problem becomes simpler still. But here the rules seem different. According to the Powers report, many on Enron’s board of directors failed to understand “the economic rationale, the consequences, and the risks” of their company’s SPE deals—and the directors sat in meetings where those deals were discussed in detail. In Conspiracy of Fools, Eichenwald convincingly argues that Andrew Fastow, Enron’s chief financial officer, didn’t understand the full economic implications of the deals, either, and he was the one who put them together.

“These were very, very sophisticated, complex transactions,” says Anthony Catanach, who teaches accounting at the Villanova University School of Business and has written extensively on the Enron case. Referring to Enron’s accounting firm, he said, “I’m not even sure any of Arthur Andersen’s field staff at Enron would have been able to understand them, even if it was all in front of them. This is senior-management-type stuff. I spent two months looking at the Powers report, just diagramming it. These deals were really convoluted.”

Enron’s SPEs, it should be noted, would have been this hard to understand even if they were standard issue. SPEs are by nature difficult. A company creates an SPE because it wants to reassure banks about the risks of making a loan. To provide that reassurance, the company gives its lenders and partners very detailed information about a specific portion of its business. And the more certainty a company creates for the lender—the more guarantees and safeguards and explanations it writes into the deal—the less comprehensible the transaction becomes to outsiders. Schwarcz writes that Enron’s disclosure was “necessarily imperfect.” You can try to make financial transactions understandable by simplifying them, in which case you run the risk of smoothing over some of their potential risks, or you can try to disclose every potential pitfall, in which case you’ll make the disclosure so unwieldy that no one will be able to understand it. To Schwarcz, all Enron proves is that in an age of increasing financial complexity, the “disclosure paradigm”—the idea that the more a company tells us about its business, the better off we are—has become an anachronism.

5.

During the summer of 1943, Nazi propaganda broadcasts boasted that the German military had developed a devastating “super weapon.” Immediately, the Allied intelligence services went to work. Spies confirmed that the Germans had built a secret weapons factory. Aerial photographs taken over northern France showed a strange new concrete installation pointed in the direction of England. The Allies were worried. Bombing missions were sent to try to disrupt the mysterious operation, and plans were drawn up to deal with the prospect of devastating new attacks on English cities. Nobody was sure, though, whether the weapon was real. There seemed to be weapons factories there, but it wasn’t evident what was happening inside them. And there was a launching pad in northern France, but it might have been just a decoy, designed to distract the Allies from bombing real targets. The German secret weapon was a puzzle, and the Allies didn’t have enough information to solve it. There was another way to think about the problem, though, which ultimately proved far more useful: treat the German secret weapon as a mystery.

The mystery solvers of the Second World War were small groups of analysts whose job was to listen to the overseas and domestic propaganda broadcasts of Japan and Germany. The British outfit had been around since shortly before the Second World War and was run by the BBC. The American operation was known as the Screwball Division, the historian Stephen Mercado writes, and in the early 1940s had been housed in a nondescript office building on K Street, in Washington. The analysts listened to the same speeches that anyone with a shortwave radio could listen to. They simply sat at their desks with headphones on, working their way through hours and hours of Nazi broadcasts. Then they tried to figure out how what the Nazis said publicly—about, for instance, the possibility of a renewed offensive against Russia—revealed what they felt about, say, invading Russia. One journalist at the time described the propaganda analysts as “the greatest collection of individualists, international rolling stones, and slightly batty geniuses ever gathered together in one organization.” And they had very definite thoughts about the Nazis’ secret weapon.

The German leadership, first of all, was boasting about the secret weapon in domestic broadcasts. That was important. Propaganda was supposed to boost morale. If the Nazi leadership said things that turned out to be misleading, its credibility would fall. When German U-boats started running into increasingly effective Allied resistance in the spring of 1943, for example, Joseph Goebbels, the Nazi minister of propaganda, tacitly acknowledged the bad news, switching his emphasis from trumpeting recent victories to predicting long-term success, and blaming the weather for hampering U-boat operations. Up to that point, Goebbels had never lied to his own people about that sort of news. So if he said that Germany had a devastating secret weapon it meant, in all likelihood, that Germany had a devastating secret weapon.

Starting from that premise, the analysts then mined the Nazis’ public pronouncements for more insights. It was, they concluded, “beyond reasonable doubt” that as of November 1943 the weapon existed, that it was of an entirely new type, that it could not be easily countered, that it would produce striking results, and that it would shock the civilian population upon whom it would be used. It was, furthermore, “highly probable” that the Germans were past the experimental stage as of May of 1943, and that something had happened in August of that year that significantly delayed deployment. The analysts based this inference, in part, on the fact that, in August, the Nazis abruptly stopped mentioning their secret weapon for ten days, and that when they started again, their threats took on a new, less certain, tone. Finally, it could be tentatively estimated that the weapon would be ready between the middle of January and the middle of April, with a month’s margin of error on either side. That inference came, in part, from Nazi propaganda in late 1943, which suddenly became more serious and specific in tone; it seemed unlikely that Goebbels would raise hopes in this way if he couldn’t deliver within a few months. The secret weapon was the Nazis’ fabled V-1 flying bomb, and virtually every one of the propaganda analysts’ predictions turned out to be true.

The political scientist Alexander George described the sequence of V-1 inferences in his 1959 book Propaganda Analysis, and the striking thing about his account is how contemporary it seems. The spies were fighting a nineteenth-century war. The analysts belonged to our age, and the lesson of their triumph is that the complex, uncertain issues that the modern world throws at us require the mystery paradigm.

Diagnosing prostate cancer used to be a puzzle, for example: the doctor would do a rectal exam and feel for a lumpy tumor on the surface of the patient’s prostate. These days, though, we don’t wait for patients to develop the symptoms of prostate cancer. Doctors now regularly test middle-aged men for elevated levels of PSA, a substance associated with prostate changes, and, if the results look problematic, they use ultrasound imaging to take a picture of the prostate. Then they perform a biopsy, removing tiny slices of the gland and examining the extracted tissue under a microscope. Much of that flood of information, however, is inconclusive: elevated levels of PSA don’t always mean that you have cancer, and normal levels of PSA don’t always mean that you don’t—and, in any case, there’s debate about what constitutes a normal PSA level. Nor is the biopsy definitive: because what a pathologist is looking for is early evidence of cancer—and in many cases merely something that might one day turn into cancer—two equally skilled pathologists can easily look at the same sample and disagree about whether there is any cancer present. Even if they do agree, they may disagree about the benefits of treatment, given that most prostate cancers grow so slowly that they never cause problems. The urologist is now charged with the task of making sense of a maze of unreliable and conflicting claims. He is no longer confirming the presence of a malignancy. He’s predicting it, and the certainties of his predecessors have been replaced with outcomes that can only be said to be “highly probable” or “tentatively estimated.” What medical progress has meant for prostate cancer—and, as the physician H. Gilbert Welch argues in his book Should I Be Tested for Cancer?, for virtually every other cancer as well—is the transformation of diagnosis from a puzzle to a mystery.

That same transformation is happening in the intelligence world as well. During the Cold War, the broad context of our relationship with the Soviet bloc was stable and predictable. What we didn’t know were the details. As Gregory Treverton, a former vice-chair of the National Intelligence Council, writes in his book Reshaping National Intelligence for an Age of Information:

Then the pressing questions that preoccupied intelligence were puzzles, ones that could, in principle, have been answered definitively if only the information had been available: How big was the Soviet economy? How many missiles did the Soviet Union have? Had it launched a “bolt from the blue” attack? These puzzles were intelligence’s stock-in-trade during the Cold War.

With the collapse of the Eastern bloc, Treverton and others have argued, the situation facing the intelligence community has turned upside down. Now most of the world is open, not closed. Intelligence officers aren’t dependent on scraps from spies. They are inundated with information. Solving puzzles remains critical: we still want to know precisely where Osama bin Laden is hiding and where North Korea’s nuclear-weapons facilities are situated. But mysteries increasingly take center stage. The stable and predictable divisions of East and West have been shattered. Now the task of the intelligence analyst is to help policymakers navigate the disorder. Several years ago, Admiral Bobby R. Inman was asked by a congressional commission what changes he thought would strengthen America’s intelligence system. Inman used to head the National Security Agency, the nation’s premier puzzle-solving authority, and was once the deputy director of the CIA. He was the embodiment of the Cold War intelligence structure. His answer: revive the State Department, the one part of the US foreign-policy establishment that isn’t considered to be in the intelligence business at all. In a post-Cold War world of “openly available information,” Inman said, “what you need are observers with language ability, with understanding of the religions, cultures of the countries they’re observing.” Inman thought we needed fewer spies and more slightly batty geniuses.

6.

Enron revealed that the financial community needs to make the same transition. “In order for an economy to have an adequate system of financial reporting, it is not enough that companies make disclosures of financial information,” the Yale law professor Jonathan Macey wrote in a landmark law review article that encouraged many to rethink the Enron case. “In addition, it is vital that there be a set of financial intermediaries, who are at least as competent and sophisticated at receiving, processing, and interpreting financial information… as the companies are at delivering it.” Puzzles are “transmitter-dependent”; they turn on what we are told. Mysteries are “receiver-dependent”; they turn on the skills of the listener. Macey argues that, as Enron’s business practices grew more complicated, it was Wall Street’s responsibility to keep pace.

Victor Fleischer, who teaches at the University of Colorado Law School, points out that one of the critical clues about Enron’s condition lay in the fact that it paid no income tax in four of its last five years. Enron’s use of mark-to-market accounting and SPEs was an accounting game that made the company look as though it were earning far more money than it was. But the IRS doesn’t accept mark-to-market accounting; you pay tax on income when you actually receive that income. And, from the IRS’s perspective, all of Enron’s fantastically complex maneuvering around its SPEs was, as Fleischer puts it, “a non-event”: until the partnership actually sells the asset—and makes either a profit or a loss—an SPE is just an accounting fiction. Enron wasn’t paying any taxes because, in the eyes of the IRS, Enron wasn’t making any money.

If you looked at Enron from the perspective of the tax code, that is, you would have seen a very different picture of the company than if you had looked through the more traditional lens of the accounting profession. But in order to do that you would have to be trained in the tax code and be familiar with its particular conventions and intricacies, and know what questions to ask. “The fact of the gap between [Enron’s] accounting income and taxable income was easily observed,” Fleischer notes, but not the source of the gap. “The tax code requires special training.”

Woodward and Bernstein didn’t have any special training. They were in their twenties at the time of Watergate. In All the President’s Men, they even joke about their inexperience: Woodward’s expertise was mainly in office politics; Bernstein was a college dropout. But it hardly mattered, because cover-ups, whistle-blowers, secret tapes, and exposés—the principal elements of the puzzle—all require the application of energy and persistence, which are the virtues of youth. Mysteries demand experience and insight. Woodward and Bernstein would never have broken the Enron story.

“There have been scandals in corporate history where people are really making stuff up, but this wasn’t a criminal enterprise of that kind,” Macey says. “Enron was vanishingly close, in my view, to having complied with the accounting rules. They were going over the edge, just a little bit. And this kind of financial fraud—where people are simply stretching the truth—falls into the area that analysts and short-sellers are supposed to ferret out. The truth wasn’t hidden. But you’d have to look at their financial statements, and you would have to say to yourself, ‘What’s that about?’ It’s almost as if they were saying, ‘We’re doing some really sleazy stuff in footnote 42, and if you want to know more about it, ask us.’ And that’s the thing. Nobody did.”

Alexander George, in his history of propaganda analysis, looked at hundreds of the inferences drawn by the American analysts about the Nazis, and concluded that an astonishing 81 percent of them were accurate. George’s account, however, spends almost as much time on the propaganda analysts’ failures as on their successes. It was the British, for example, who did the best work on the V-1 problem. They systematically tracked the “occurrence and volume” of Nazi reprisal threats, which is how they were able to pinpoint things like the setback suffered by the V-1 program in August of 1943 (it turned out that Allied bombs had caused serious damage) and the date of the first V-1 launch. K Street’s analysis was lackluster in comparison. George writes that the Americans “did not develop analytical techniques and hypotheses of sufficient refinement,” relying instead on “impressionistic” analysis. George was himself one of the slightly batty geniuses of K Street, and, of course, he could easily have excused his former colleagues. They never left their desks, after all. All they had to deal with was propaganda, and their big source was Goebbels, who was a liar, a thief, and a drunk. But that is puzzle thinking. In the case of puzzles, we put the offending target, the CEO, in jail for twenty-four years and assume that our work is done. Mysteries require that we revisit our list of culprits and be willing to spread the blame a little more broadly. Because if you can’t find the truth in a mystery—even a mystery shrouded in propaganda—it’s not just the fault of the propagandist. It’s your fault as well.

In the spring of 1998, Macey notes, a group of six students at Cornell University’s business school decided to do their term project on Enron. “It was for an advanced financial-statement-analysis class taught by a guy at Cornell called Charles Lee, who is pretty famous in financial circles,” one member of the group, Jay Krueger, recalls. In the first part of the semester, Lee had led his students through a series of intensive case studies, teaching them techniques and sophisticated tools to make sense of the vast amounts of information that companies disclose in their annual reports and SEC filings. Then the students picked a company and went off on their own. “One of the second-years had a summer-internship interview with Enron, and he was very interested in the energy sector,” Krueger went on. “So he said, ‘Let’s do them.’ It was about a six-week project, half a semester. Lots of group meetings. It was a ratio analysis, which is pretty standard business-school fare. You know, take fifty different financial ratios, then lay that on top of every piece of information you could find out about the company, the businesses, how their performance compared to other competitors.”

The people in the group reviewed Enron’s accounting practices as best they could. They analyzed each of Enron’s businesses, in succession. They used statistical tools, designed to find telltale patterns in the company’s financial performance—the Beneish model, the Lev and Thiagarajan indicators, the Edwards-Bell-Ohlson analysis—and made their way through pages and pages of footnotes. “We really had a lot of questions about what was going on with their business model,” Krueger said. The students’ conclusions were straightforward. Enron was pursuing a far riskier strategy than its competitors. There were clear signs that “Enron may be manipulating its earnings.” The stock was then at $48—at its peak, two years later, it was almost double that—but the students found it overvalued. The report was posted on the website of the Cornell University business school, where it has been, ever since, for anyone who cares to read twenty-three pages of analysis. The students’ recommendation was on the first page, in boldfaced type: “Sell.”[4]
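For the curious, here is a minimal sketch of the kind of screen the students ran. The Beneish model combines eight year-over-year financial indices into a single manipulation score; the coefficients below are the commonly cited ones from Beneish’s published model, and the input values are invented for illustration; they are not Enron’s actual figures.

    def beneish_m_score(dsri, gmi, aqi, sgi, depi, sgai, tata, lvgi):
        # Each argument is an index: this year's ratio divided by last year's
        # (e.g., DSRI = days' sales in receivables index; 1.0 means unchanged).
        # A score above roughly -2.22 is conventionally read as a red flag.
        return (-4.84 + 0.920 * dsri + 0.528 * gmi + 0.404 * aqi
                + 0.892 * sgi + 0.115 * depi - 0.172 * sgai
                + 4.679 * tata - 0.327 * lvgi)

    # Hypothetical company: receivables, sales, and accruals all growing
    # faster than the year before.
    score = beneish_m_score(1.3, 1.1, 1.2, 1.4, 1.0, 0.9, 0.05, 1.1)
    print(round(score, 2))  # about -1.5, above the -2.22 threshold: flagged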

January 8, 2007

Million-Dollar Murray

WHY PROBLEMS LIKE HOMELESSNESS MAY BE EASIER TO SOLVE THAN TO MANAGE

1.

Murray Barr was a bear of a man, an ex-Marine, six feet tall and heavyset, and when he fell down—which he did nearly every day—it could take two or three grown men to pick him up. He had straight black hair and olive skin. On the street, they called him Smokey. He was missing most of his teeth. He had a wonderful smile. People loved Murray.

His chosen drink was vodka. Beer he called “horse piss.” On the streets of downtown Reno, where he lived, he could buy a 250-milliliter bottle of cheap vodka for $1.50. If he was flush, he could go for the 750-milliliter bottle, and if he was broke, he could always do what many of the other homeless people of Reno did, which is to walk through the casinos and finish off the half-empty glasses of liquor left at the gaming tables.

“If he was on a runner, we could pick him up several times a day,” Patrick O’Bryan, who is a bicycle cop in downtown Reno, said. “And he’s gone on some amazing runners. He would get picked up, get detoxed, then get back out a couple of hours later and start up again. A lot of the guys on the streets who’ve been drinking, they get so angry. They are so incredibly abrasive, so violent, so abusive. Murray was such a character and had such a great sense of humor that we somehow got past that. Even when he was abusive, we’d say, ‘Murray, you know you love us,’ and he’d say, ‘I know’—and go back to swearing at us.”

“I’ve been a police officer for fifteen years,” O’Bryan’s partner, Steve Johns, said. “I picked up Murray my whole career. Literally.”

Johns and O’Bryan pleaded with Murray to quit drinking. A few years ago, he was assigned to a treatment program in which he was under the equivalent of house arrest, and he thrived. He got a job and worked hard. But then the program ended. “Once he graduated out, he had no one to report to, and he needed that,” O’Bryan said. “I don’t know whether it was his military background. I suspect that it was. He was a good cook. One time, he accumulated savings of over six thousand dollars. Showed up for work religiously. Did everything he was supposed to do. They said, ‘Congratulations,’ and put him back on the street. He spent that six thousand in a week or so.”

Often, he was too intoxicated for the drunk tank at the jail, and he’d get sent to the emergency room at either Saint Mary’s or Washoe Medical Center. Marla Johns, who was a social worker in the emergency room at Saint Mary’s, saw him several times a week. “The ambulance would bring him in. We would sober him up, so he would be sober enough to go to jail. And we would call the police to pick him up. In fact, that’s how I met my husband.” Marla Johns is married to Steve Johns.

“He was like the one constant in an environment that was ever changing,” she went on. “In he would come. He would grin that half-toothless grin. He called me ‘my angel.’ I would walk in the room, and he would smile and say, ‘Oh, my angel, I’m so happy to see you.’ We would joke back and forth, and I would beg him to quit drinking and he would laugh it off. And when time went by and he didn’t come in, I would get worried and call the coroner’s office. When he was sober, we would find out, oh, he’s working someplace, and my husband and I would go and have dinner where he was working. When my husband and I were dating, and we were going to get married, he said, ‘Can I come to the wedding?’ And I almost felt like he should. My joke was ‘If you are sober you can come, because I can’t afford your bar bill.’ When we started a family, he would lay a hand on my pregnant belly and bless the child. He really was this kind of light.”

In the fall of 2003, the Reno Police Department started an initiative designed to limit panhandling in the downtown core. There were articles in the newspapers, and the police department came under harsh criticism on local talk radio. The crackdown on panhandling amounted to harassment, the critics said. The homeless weren’t an imposition on the city; they were just trying to get by. “One morning, I’m listening to one of the talk shows, and they’re just trashing the police department and going on about how unfair it is,” O’Bryan said. “And I thought, Wow, I’ve never seen any of these critics in one of the alleyways in the middle of the winter looking for bodies.” O’Bryan was angry. In downtown Reno, food for the homeless was plentiful: there was a Gospel kitchen and Catholic Services, and even the local McDonald’s fed the hungry. The panhandling was for liquor, and the liquor was anything but harmless. He and Johns spent at least half their time dealing with people like Murray; they were as much caseworkers as police officers. And they knew they weren’t the only ones involved. When someone passed out on the street, there was a “One down” call to the paramedics. There were four people in an ambulance, and the patient sometimes stayed at the hospital for days, because living on the streets in a state of almost constant intoxication was a reliable way of getting sick. None of that, surely, could be cheap.

O’Bryan and Johns called someone they knew at an ambulance service and then contacted the local hospitals. “We came up with three names that were some of our chronic inebriates in the downtown area, that got arrested the most often,” O’Bryan said. “We tracked those three individuals through just one of our two hospitals. One of the guys had been in jail previously, so he’d only been on the streets for six months. In those six months, he had accumulated a bill of a hundred thousand dollars—and that’s at the smaller of the two hospitals near downtown Reno. It’s pretty reasonable to assume that the other hospital had an even larger bill. Another individual came from Portland and had been in Reno for three months. In those three months, he had accumulated a bill for sixty-five thousand dollars. The third individual actually had some periods of being sober, and had accumulated a bill of fifty thousand.”

The first of those people was Murray Barr, and Johns and O’Bryan realized that if you toted up all his hospital bills for the ten years that he had been on the streets—as well as substance-abuse-treatment costs, doctors’ fees, and other expenses—Murray Barr probably ran up a medical bill as large as anyone’s in the state of Nevada.

“It cost us one million dollars not to do something about Murray,” O’Bryan said.

2.

In the wake of the Rodney King beating, the Los Angeles Police Department was in crisis. It was accused of racial insensitivity and ill discipline and violence, and the assumption was that those problems had spread broadly throughout the rank and file. In the language of statisticians, it was thought that the LAPD’s troubles had a “normal” distribution—that if you graphed them, the result would look like a bell curve, with a small number of officers at one end of the curve, a small number at the other end, and the bulk of the problem situated in the middle. The bell-curve assumption has become so much a part of our mental architecture that we tend to use it to organize experience automatically.

But when the LAPD was investigated by a special commission headed by Warren Christopher, a very different picture emerged. Between 1986 and 1990, allegations of excessive force or improper tactics were made against eighteen hundred of the eighty-five hundred officers in the LAPD. The broad middle had scarcely been accused of anything. Furthermore, more than fourteen hundred officers had only one or two allegations made against them—and bear in mind that these were not proven charges, that they happened in a four-year period, and that allegations of excessive force are an inevitable feature of urban police work. (The NYPD receives about three thousand such complaints a year.) A hundred and eighty-three officers, however, had four or more complaints against them, forty-four officers had six or more complaints, sixteen had eight or more, and one had sixteen complaints. If you were to graph the troubles of the LAPD, it wouldn’t look like a bell curve. It would look more like a hockey stick. It would follow what statisticians call a power-law distribution—where all the activity is not in the middle but at one extreme.
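(To put the commission’s figures in perspective: the forty-four officers with six or more complaints amounted to about half of one percent of an eighty-five-hundred-officer force.)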

The Christopher Commission’s report repeatedly comes back to what it describes as the extreme concentration of problematic officers. One officer had been the subject of thirteen allegations of excessive use of force, five other complaints, twenty-eight “use of force reports” (that is, documented internal accounts of inappropriate behavior), and one shooting. Another had six excessive-force complaints, nineteen other complaints, ten use-of-force reports, and three shootings. A third had twenty-seven use-of-force reports, and a fourth had thirty-five. Another had a file full of complaints for doing things like “striking an arrestee on the back of the neck with the butt of a shotgun for no apparent reason while the arrestee was kneeling and handcuffed,” beating up a thirteen-year-old juvenile, and throwing an arrestee from his chair and kicking him in the back and side of the head while he was handcuffed and lying on his stomach.

The report gives the strong impression that if you fired those forty-four cops, the LAPD would suddenly become a pretty well-functioning police department. But the report also suggests that the problem is tougher than it seems, because those forty-four bad cops were so bad that the institutional mechanisms in place to get rid of bad apples clearly weren’t working. If you made the mistake of assuming that the department’s troubles fell into a normal distribution, you’d propose solutions that would raise the performance of the middle—like better training or better hiring—when the middle didn’t need help. For those hard-core few who did need help, meanwhile, the medicine that helped the middle wouldn’t be nearly strong enough.

In the 1980s, when homelessness first surfaced as a national issue, the assumption was that the problem fit a normal distribution: that the vast majority of the homeless were in the same state of semipermanent distress. It was an assumption that bred despair: if there were so many homeless, with so many problems, what could be done to help them? Then, in the early 1990s, a young Boston College graduate student named Dennis Culhane lived in a shelter in Philadelphia for seven weeks as part of the research for his dissertation. A few months later he went back and was surprised to discover that he couldn’t find any of the people he had recently spent so much time with. “It made me realize that most of these people were getting on with their own lives,” he said.

Culhane then put together a database—the first of its kind—to track who was coming in and out of the shelter system. What he discovered profoundly changed the way homelessness is understood. Homelessness doesn’t have a normal distribution, it turned out. It has a power-law distribution. “We found that eighty percent of the homeless were in and out really quickly,” he said. “In Philadelphia, the most common length of time that someone is homeless is one day. And the second most common length is two days. And they never come back. Anyone who ever has to stay in a shelter involuntarily knows that all you think about is how to make sure you never come back.”

The next 10 percent were what Culhane calls episodic users. They would come for three weeks at a time, and return periodically, particularly in the winter. They were quite young, and they were often heavy drug users. It was the last 10 percent—the group at the farthest edge of the curve—that interested Culhane the most. They were the chronically homeless, who lived in the shelters, sometimes for years at a time. They were older. Many were mentally ill or physically disabled, and when we think about homelessness as a social problem—the people sleeping on the sidewalk, aggressively panhandling, lying drunk in doorways, huddled on subway grates and under bridges—it’s this group that we have in mind. In the early 1990s, Culhane’s database suggested that New York City had a quarter of a million people who were homeless at some point in the previous half decade—which was a surprisingly high number. But only about twenty-five hundred were chronically homeless.

It turns out, furthermore, that this group costs the health-care and social-services systems far more than anyone had ever anticipated. Culhane estimates that in New York at least $62 million was being spent annually to shelter just those twenty-five hundred hard-core homeless. “It costs twenty-four thousand dollars a year for one of these shelter beds,” Culhane said. “We’re talking about a cot eighteen inches away from the next cot.” Boston Health Care for the Homeless Program, a leading service group for the homeless in Boston, recently tracked the medical expenses of a hundred and nineteen chronically homeless people. In the course of five years, thirty-three people died and seven more were sent to nursing homes, and the group still accounted for 18,834 emergency-room visits—at a minimum cost of $1,000 a visit. The University of California, San Diego, Medical Center followed fifteen chronically homeless inebriates and found that over eighteen months, those fifteen people were treated at the hospital’s emergency room 417 times, and ran up bills that averaged $100,000 each. One person—San Diego’s counterpart to Murray Barr—came to the emergency room eighty-seven times.
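(Culhane’s numbers are internally consistent: $62 million a year spent on twenty-five hundred people works out to roughly $24,800 per person, in line with the $24,000-a-year figure he cites for a single shelter bed.)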

“If it’s a medical admission, it’s likely to be the guys with the really complex pneumonia,” James Dunford, the city of San Diego’s emergency medical director and the author of the observational study, said. “They are drunk and they aspirate and get vomit in their lungs and develop a lung abscess, and they get hypothermia on top of that, because they’re out in the rain. They end up in the intensive-care unit with these very complicated medical infections. These are the guys who typically get hit by cars and buses and trucks. They often have a neurosurgical catastrophe as well. So they are very prone to just falling down and cracking their head and getting a subdural hematoma, which, if not drained, could kill them, and it’s the guy who falls down and hits his head who ends up costing you at least fifty thousand dollars. Meanwhile, they are going through alcohol withdrawal and have devastating liver disease that only adds to their inability to fight infections. There is no end to the issues. We do this huge drill. We run up big lab fees, and the nurses want to quit, because they see the same guys come in over and over, and all we’re doing is making them capable of walking down the block.”

The homelessness problem is like the LAPD’s bad-cop problem. It’s a matter of a few hard cases, and that’s good news, because when a problem is that concentrated you can wrap your arms around it and think about solving it. The bad news is that those few hard cases are hard. They are falling-down drunks with liver disease and complex infections and mental illness. They need time and attention and lots of money. But enormous sums of money are already being spent on the chronically homeless, and Culhane saw that the kind of money it would take to solve the homeless problem could well be less than the kind of money it took to ignore it. Murray Barr used more health-care dollars, after all, than almost anyone in the state of Nevada. It would probably have been cheaper to give him a full-time nurse and his own apartment.

The leading exponent of the power-law theory of homelessness is Philip Mangano, who, since he was appointed by President Bush in 2002, has been the executive director of the US Interagency Council on Homelessness, a group that oversees the programs of twenty federal agencies. Mangano is a slender man, with a mane of white hair and a magnetic presence, who got his start as an advocate for the homeless in Massachusetts. He is on the road constantly, crisscrossing the United States, educating local mayors and city councils about the real shape of the homelessness curve. Simply running soup kitchens and shelters, he argues, allows the chronically homeless to remain chronically homeless. You build a shelter and a soup kitchen if you think that homelessness is a problem with a broad and unmanageable middle. But if it’s a problem at the fringe it can be solved. So far, Mangano has convinced more than two hundred cities to radically reevaluate their policy for dealing with the homeless.

“I was in St. Louis recently,” Mangano said, back in June, when he dropped by New York on his way to Boise, Idaho. “I spoke with people doing services there. They had a very difficult group of people they couldn’t reach no matter what they offered. So I said, ‘Take some of your money and rent some apartments and go out to those people, and literally go out there with the key and say to them, “This is the key to an apartment. If you come with me right now I am going to give it to you, and you are going to have that apartment.” ’ And so they did. And one by one those people were coming in. Our intent is to take homeless policy from the old idea of funding programs that serve homeless people endlessly and invest in results that actually end homelessness.”

Mangano is a history buff, a man who sometimes falls asleep listening to old Malcolm X speeches, and who peppers his remarks with references to the civil-rights movement and the Berlin Wall and, most of all, the fight against slavery. “I am an abolitionist,” he says. “My office in Boston was opposite the monument to the 54th Regiment on the Boston Common, up the street from the Park Street Church, where William Lloyd Garrison called for immediate abolition, and around the corner from where Frederick Douglass gave that famous speech at the Tremont Temple. It is very much ingrained in me that you do not manage a social wrong. You should be ending it.”

3.

The old YMCA in downtown Denver is on Sixteenth Street, just east of the central business district. The main building is a handsome six-story stone structure that was erected in 1906, and next door is an annex that was added in the 1950s. On the ground floor are a gym and exercise rooms. On the upper floors there are several hundred apartments—brightly painted one-bedrooms, efficiencies, and SRO-style rooms with microwaves and refrigerators and central air-conditioning—and for the past several years those apartments have been owned and managed by the Colorado Coalition for the Homeless.

Even by big-city standards, Denver has a serious homelessness problem. The winters are relatively mild, and the summers aren’t nearly as hot as those of neighboring New Mexico or Utah, which has made the city a magnet for the indigent. By the city’s estimates, it has roughly a thousand chronically homeless people, of whom three hundred spend their time downtown, along the central Sixteenth Street shopping corridor or in nearby Civic Center Park. Many of the merchants downtown worry that the presence of the homeless is scaring away customers. A few blocks north, near the hospital, a modest, low-slung detox center handles twenty-eight thousand admissions a year, many of them homeless people who have passed out on the streets, either from liquor or—as is increasingly the case—from mouthwash. “Dr. Tichenor’s—Dr. Tich, they call it—is the brand of mouthwash they use,” says Roxane White, the manager of the city’s social services. “You can imagine what that does to your gut.”

Eighteen months ago the city signed up with Mangano. With a mixture of federal and local funds, the CCH inaugurated a new program that has so far enrolled 106 people. It is aimed at the Murray Barrs of Denver, the people costing the system the most. CCH went after the people who had been on the streets the longest, who had a criminal record, who had a problem with substance abuse or mental illness. “We have one individual in her early sixties, but looking at her you’d think she’s eighty,” Rachel Post, the director of substance treatment at the CCH, said. (Post changed some details about her clients in order to protect their identity.) “She’s a chronic alcoholic. A typical day for her is, she gets up and tries to find whatever she’s going to drink that day. She falls down a lot. There’s another person who came in during the first week. He was on methadone maintenance. He’d had psychiatric treatment. He was incarcerated for eleven years, and lived on the streets for three years after that, and, if that’s not enough, he had a hole in his heart.”

The recruitment strategy was as simple as the one that Mangano had laid out in St. Louis: Would you like a free apartment? The enrollees got either an efficiency at the YMCA or an apartment rented for them in a building somewhere else in the city, provided they agreed to work within the rules of the program. In the basement of the Y, where the racquetball courts used to be, the coalition built a command center, staffed with ten caseworkers. Five days a week, between eight-thirty and ten in the morning, the caseworkers meet and painstakingly review the status of everyone in the program. On the wall around the conference table are several large whiteboards, with lists of doctor’s appointments and court dates and medication schedules. “We need a staffing ratio of one to ten to make it work,” Post said. “You go out there and you find people and assess how they’re doing in their residence. Sometimes we’re in contact with someone every day. Ideally, we want to be in contact every couple of days. We’ve got about fifteen people we’re really worried about now.”

The cost of services comes to about $10,000 per homeless client per year. An efficiency apartment in Denver averages $376 a month, or just over $4,500 a year, which means that you can house and care for a chronically homeless person for at most $15,000, or about a third of what he or she would cost on the street. The idea is that once the people in the program get stabilized, they will find jobs, and start to pick up more and more of their own rent, which would bring someone’s annual cost to the program closer to $6,000. As of today, seventy-five supportive-housing slots have already been added, and the city’s homeless plan calls for eight hundred more over the next ten years.
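The arithmetic behind those figures is easy to make explicit. Here is a back-of-the-envelope version, using only the numbers quoted above; note that the street-cost baseline is implied by the “about a third” claim rather than stated directly.

    services = 10_000            # caseworker services per client per year
    rent = 376 * 12              # efficiency apartment at $376 a month
    program_cost = services + rent
    print(program_cost)          # 14512 -- hence "at most $15,000" a year
    print(program_cost * 3)      # 43536 -- the implied annual cost of leaving
                                 # the same person on the street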

The reality, of course, is hardly that neat and tidy. The idea that the very sickest and most troubled of the homeless can be stabilized and eventually employed is only a hope. Some of them plainly won’t be able to get there: these are, after all, hard cases. “We’ve got one man, he’s in his twenties,” Post said. “Already, he has cirrhosis of the liver. One time he blew a blood alcohol of .49, which is enough to kill most people. The first place we had, he brought over all his friends, and they partied and trashed the place and broke a window. Then we gave him another apartment, and he did the same thing.”

Post said that the man had been sober for several months. But he could relapse at some point and perhaps trash another apartment, and they’d have to figure out what to do with him next. Post had just been on a conference call with some people in New York City who run a similar program, and they talked about whether giving clients so many chances simply encourages them to behave irresponsibly. For some people, it probably does. But what was the alternative? If this young man was put back on the streets, he would cost the system even more money. The current philosophy of welfare holds that government assistance should be temporary and conditional, to avoid creating dependency. But someone who blows .49 on a Breathalyzer and has cirrhosis of the liver at the age of twenty-seven doesn’t respond to incentives and sanctions in the usual way. “The most complicated people to work with are those who have been homeless for so long that going back to the streets just isn’t scary to them,” Post said. “The summer comes along and they say, ‘I don’t need to follow your rules.’” Power-law homeless policy has to do the opposite of normal-distribution social policy. It should create dependency: you want people who have been outside the system to come inside and rebuild their lives under the supervision of those ten caseworkers in the basement of the YMCA.

That is what is so perplexing about power-law homeless policy. From an economic perspective the approach makes perfect sense. But from a moral perspective it doesn’t seem fair. Thousands of people in the Denver area no doubt live day to day, work two or three jobs, and are eminently deserving of a helping hand—and no one offers them the key to a new apartment. Yet that’s just what the guy screaming obscenities and swigging Dr. Tich gets. When the welfare mom’s time on public assistance runs out, we cut her off. Yet when the homeless man trashes his apartment, we give him another. Social benefits are supposed to have some kind of moral justification. We give them to widows and disabled veterans and poor mothers with small children. Giving the homeless guy passed out on the sidewalk an apartment has a different rationale. It’s simply about efficiency.

We also believe that the distribution of social benefits should not be arbitrary. We don’t give only to some poor mothers, or to a random handful of disabled veterans. We give to everyone who meets a formal criterion, and the moral credibility of government assistance derives, in part, from this universality. But the Denver homelessness program doesn’t help every chronically homeless person in Denver. There is a waiting list of six hundred for the supportive-housing program; it will be years before all those people get apartments, and some may never get one. There isn’t enough money to go around, and to try to help everyone a little bit—to observe the principle of universality—isn’t as cost-effective as helping a few people a lot. Being fair, in this case, means providing shelters and soup kitchens, and shelters and soup kitchens don’t solve the problem of homelessness. Our usual moral intuitions are of little use, then, when it comes to a few hard cases. Power-law problems leave us with an unpleasant choice. We can be true to our principles or we can fix the problem. We cannot do both.

4.

A few miles northwest of the old YMCA in downtown Denver, on the Speer Boulevard off-ramp from I-25, there is a big electronic sign by the side of the road, connected to a device that remotely measures the emissions of the vehicles driving past. When a car with properly functioning pollution-control equipment passes, the sign flashes “Good.” When a car passes that is well over the acceptable limits, the sign flashes “Poor.” If you stand at the Speer Boulevard exit and watch the sign for any length of time, you’ll find that virtually every car scores “Good.” An Audi A4—“Good.” A Buick Century—“Good.” A Toyota Corolla—“Good.” A Ford Taurus—“Good.” A Saab 9-5—“Good,” and on and on, until after twenty minutes or so, some beat-up old Ford Escort or tricked-out Porsche drives by and the sign flashes “Poor.” The picture of the smog problem you get from watching the Speer Boulevard sign and the picture of the homelessness problem you get from listening in on the morning staff meetings at the YMCA are pretty much the same. Auto emissions follow a power-law distribution, and the air-pollution example offers another look at why we struggle so much with problems centered on a few hard cases.

Most cars, especially new ones, are extraordinarily clean. A 2004 Subaru in good working order has an exhaust stream that’s just .06 percent carbon monoxide, which is negligible. But on almost any highway, for whatever reason—age, ill repair, deliberate tampering by the owner—a small number of cars have carbon-monoxide levels in excess of 10 percent, which is almost two hundred times higher. In Denver, 5 percent of the vehicles on the road produce 55 percent of the automobile pollution.

“Let’s say a car is fifteen years old,” Donald Stedman says. Stedman is a chemist and automobile-emissions specialist at the University of Denver. His laboratory put up the sign on Speer Boulevard. “Obviously, the older a car is, the more likely it is to become broken. It’s the same as human beings. And by broken we mean any number of mechanical malfunctions—the computer’s not working anymore, fuel injection is stuck open, the catalyst died. It’s not unusual that these failure modes result in high emissions. We have at least one car in our database which was emitting seventy grams of hydrocarbon per mile, which means that you could almost drive a Honda Civic on the exhaust fumes from that car. It’s not just old cars. It’s new cars with high mileage, like taxis. One of the most successful and least publicized control measures was done by a district attorney in L.A. back in the nineties. He went to LAX and discovered that all of the Bell Cabs were gross emitters. One of those cabs emitted more than its own weight of pollution every year.”

In Stedman’s view, the current system of smog checks makes little sense. A million motorists in Denver have to go to an emissions center every year—take time from work, wait in line, pay $15 or $25—for a test that more than 90 percent of them don’t need. “Not everybody gets tested for breast cancer,” Stedman says. “Not everybody takes an AIDS test.” On-site smog checks, furthermore, do a pretty bad job of finding and fixing the few outliers. Car enthusiasts—with high-powered, high-polluting sports cars—have been known to drop a clean engine into their car on the day they get it tested. Others register their car in a faraway town without emissions testing or arrive at the test site “hot”—having just come off hard driving on the freeway—which is a good way to make a dirty engine appear to be clean. Still others randomly pass the test when they shouldn’t, because dirty engines are highly variable and sometimes burn cleanly for short durations. There is little evidence, Stedman says, that the city’s regime of inspections makes any difference in air quality.

He proposes mobile testing instead. In the early 1980s, he invented a device the size of a suitcase that uses infrared light to instantly measure and then analyze the emissions of cars as they drive by on the highway. The Speer Boulevard sign is attached to one of Stedman’s devices. He says that cities should put half a dozen or so of his devices in vans, park them on freeway off-ramps around the city, and have a police car poised to pull over anyone who fails the test. A half-dozen vans could test thirty thousand cars a day. For the same $25 million that Denver’s motorists now spend on on-site testing, Stedman estimates, the city could identify and fix twenty-five thousand truly dirty vehicles every year, and within a few years cut automobile emissions in the Denver metropolitan area by somewhere between 35 and 40 percent. The city could stop managing its smog problem and start ending it.
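Stedman’s own numbers show why the proposal is attractive; a quick sketch of the arithmetic, using only the figures in the text:

    annual_budget = 25_000_000   # what Denver motorists now spend on smog checks
    dirty_fixed = 25_000         # vehicles Stedman estimates could be found and fixed
    daily_screens = 30_000       # cars a half-dozen vans could test in a day
    print(annual_budget // dirty_fixed)  # $1,000 per genuinely dirty vehicle repaired
    print(daily_screens * 365)           # 10,950,000 roadside screenings a year,
                                         # versus about a million station tests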

Why don’t we all adopt the Stedman method? There’s no moral impediment here. We’re used to the police pulling people over for having a blown headlight or a broken side mirror, and it wouldn’t be difficult to have them add pollution-control devices to their list. Yet it does run counter to an instinctive social preference for thinking of pollution as a problem to which we all contribute equally. We have developed institutions that move reassuringly quickly and forcefully on collective problems. Congress passes a law. The Environmental Protection Agency promulgates a regulation. The auto industry makes its cars a little cleaner, and—presto—the air gets better. But Stedman doesn’t much care about what happens in Washington and Detroit. The challenge of controlling air pollution isn’t so much about the laws as it is about compliance with them. It’s a policing problem, rather than a policy problem, and there is something ultimately unsatisfying about his proposed solution. He wants to end air pollution in Denver with a half-dozen vans outfitted with a contraption about the size of a suitcase. Can such a big problem have such a small-bore solution?

That’s what made the findings of the Christopher Commission so unsatisfying. We put together blue-ribbon panels when we’re faced with problems that seem too large for the normal mechanisms of bureaucratic repair. We want sweeping reforms. But what was the commission’s most memorable observation? It was the story of an officer with a known history of doing things like beating up handcuffed suspects who nonetheless received a performance review from his superior stating that he “usually conducts himself in a manner that inspires respect for the law and instills public confidence.” This is what you say about an officer when you haven’t actually read his file, and the implication of the Christopher Commission’s report was that the LAPD might help solve its problem simply by getting its police captains to read the files of their officers. The LAPD’s problem was a matter not of policy but of compliance. The department needed to adhere to the rules it already had in place, and that’s not what a public hungry for institutional transformation wants to hear. Solving problems that have power-law distributions doesn’t just violate our moral intuitions; it violates our political intuitions as well. It’s hard not to conclude, in the end, that the reason we treated the homeless as one hopeless undifferentiated group for so long is not simply that we didn’t know better. It’s that we didn’t want to know better. It was easier the old way.

Power-law solutions have little appeal to the right, because they involve special treatment for people who do not deserve special treatment; and they have little appeal to the left, because their emphasis on efficiency over fairness suggests the cold number-crunching of Chicago school cost-benefit analysis. Even the promise of millions of dollars in savings or cleaner air or better police departments cannot entirely compensate for such discomfort. In Denver, John Hickenlooper, the city’s enormously popular mayor, has worked on the homelessness issue tirelessly during the past couple of years. He spent more time on the subject in his annual State of the City address this past summer than on any other topic. He gave the speech, with deliberate symbolism, in the city’s downtown Civic Center Park, where homeless people gather every day with their shopping carts and garbage bags. He has gone on local talk radio on many occasions to discuss what the city is doing about the issue. He has commissioned studies to show what a drain on the city’s resources the homeless population has become. But, he says, “there are still people who stop me going into the supermarket and say, ‘I can’t believe you’re going to help those homeless people, those bums.’”

5.

Early one morning, a few years ago, Marla Johns got a call from her husband, Steve. He was at work. “He called and woke me up,” Johns remembers. “He was choked up and crying on the phone. And I thought that something had happened with another police officer. I said, ‘Oh, my gosh, what happened?’ He said, ‘Murray died last night.’” He died of intestinal bleeding. At the police department that morning, some of the officers gave Murray a moment of silence.

“There are not many days that go by that I don’t have a thought of him,” she went on. “Christmas comes—and I used to buy him a Christmas present. Make sure he had warm gloves and a blanket and a coat. There was this mutual respect. There was a time when another intoxicated patient jumped off the gurney and was coming at me, and Murray jumped off his gurney and shook his fist and said, ‘Don’t you touch my angel.’ You know, when he was monitored by the system, he did fabulously. He would be on house arrest and he would get a job and he would save money and go to work every day, and he wouldn’t drink. He would do all the things he was supposed to do. There are some people who can be very successful members of society if someone monitors them. Murray needed someone to be in charge of him.”

But, of course, Reno didn’t have a place where Murray could be given the structure he needed. Someone must have decided that it cost too much.

“I told my husband that I would claim his body if no one else did,” she said. “I would not have him in an unmarked grave.”

February 13, 2006

The Picture Problem

MAMMOGRAPHY, AIR POWER, AND THE LIMITS OF LOOKING

1.

At the beginning of the first Gulf war, the United States Air Force dispatched two squadrons of F-15E Strike Eagle fighter jets to find and destroy the Scud missiles that Iraq was firing at Israel. The rockets were being launched, mostly at night, from the backs of modified flatbed tractor-trailers, moving stealthily around a four-hundred-square-mile “Scud box” in the western desert. The plan was for the fighter jets to patrol the box from sunset to sunrise. When a Scud was launched, it would light up the night sky. An F-15E pilot would fly toward the launch point, follow the roads that crisscrossed the desert, and then locate the target using a state-of-the-art, $4.6 million device called a LANTIRN navigation and targeting pod, capable of taking a high-resolution infrared photograph of a four-and-a-half-mile swath below the plane. How hard could it be to pick up a hulking tractor-trailer in the middle of an empty desert?

Almost immediately, reports of Scud kills began to come back from the field. The Desert Storm commanders were elated. “I remember going out to Nellis Air Force Base after the war,” Barry Watts, a former Air Force colonel, says. “They did a big static display, and they had all the Air Force jets that flew in Desert Storm, and they had little placards in front of them, with a box score, explaining what this plane did and that plane did in the war. And, when you added up how many Scud launchers they claimed each got, the total was about a hundred.” Air Force officials were not guessing at the number of Scud launchers hit; as far as they were concerned, they knew. They had a $4.6 million camera that took a nearly perfect picture, and there are few cultural reflexes more deeply ingrained than the idea that a picture has the weight of truth. “That photography not only does not, but cannot, lie is a matter of belief, an article of faith,” Charles Rosen and Henri Zerner have written. “We tend to trust the camera more than our own eyes.” Thus was victory declared in the Scud hunt—until hostilities ended and the Air Force appointed a team to determine the effectiveness of the air campaign in Desert Storm. The actual number of definite Scud kills, the team said, was zero.

The problem was that the pilots were operating at night, when depth perception is impaired. LANTIRN could see in the dark, but the camera worked only when it was pointed in the right place, and the right place wasn’t obvious. Meanwhile, the pilot had only about five minutes to find his quarry, because after launch the Iraqis would immediately hide in one of the many culverts underneath the highway between Baghdad and Jordan, and the screen the pilot was using to scan all that desert measured just six inches by six inches. “It was like driving down an interstate looking through a soda straw,” Major General Mike DeCuir, who flew numerous Scud-hunt missions throughout the war, recalled. Nor was it clear what a Scud launcher looked like on that screen. “We had an intelligence photo of one on the ground. But you had to imagine what it would look like on a black-and-white screen from twenty thousand feet up and five or more miles away,” DeCuir went on. “With the resolution we had at the time, you could tell something was a big truck and that it had wheels, but at that altitude it was hard to tell much more than that.” The postwar analysis indicated that a number of the targets the pilots had hit were actually decoys, constructed by the Iraqis from old trucks and spare missile parts. Others were tanker trucks transporting oil on the highway to Jordan. A tanker truck, after all, is a tractor-trailer hauling a long, shiny cylindrical object, and, from twenty thousand feet up at four hundred miles an hour on a six-by-six-inch screen, a long, shiny cylindrical object can look a lot like a missile. “It’s a problem we’ve always had,” Watts, who served on the team that did the Gulf war analysis, said. “It’s night out. You think you’ve got something on the sensor. You roll out your weapons. Bombs go off. It’s really hard to tell what you did.”

You can build a high-tech camera capable of taking pictures in the middle of the night, in other words, but the system works only if the camera is pointed in the right place, and even then the pictures are not self-explanatory. They need to be interpreted, and the human task of interpretation is often a bigger obstacle than the technical task of picture taking. This was the lesson of the Scud hunt: pictures promise to clarify but often confuse. The Zapruder film intensified rather than dispelled the controversy surrounding John F. Kennedy’s assassination. The videotape of the beating of Rodney King led to widespread uproar about police brutality; it also served as the basis for a jury’s decision to acquit the officers charged with the assault. Perhaps nowhere have these issues been so apparent, however, as in the arena of mammography. Radiologists developed state-of-the-art X-ray cameras and used them to scan women’s breasts for tumors, reasoning that, if you can take a nearly perfect picture, you can find and destroy tumors before they go on to do serious damage. Yet there remains a great deal of confusion about the benefits of mammography. Is it possible that we place too much faith in pictures?

2.

The head of breast imaging at Memorial Sloan-Kettering Cancer Center, in New York City, is a physician named David Dershaw, a youthful man in his fifties, who bears a striking resemblance to the actor Kevin Spacey. One morning not long ago, he sat down in his office at the back of the Sloan-Kettering Building and tried to explain how to read a mammogram.

Dershaw began by putting an X-ray on a light box behind his desk. “Cancer shows up as one of two patterns,” he said. “You look for lumps and bumps, and you look for calcium. And, if you find it, you have to make a determination: is it acceptable, or is it a pattern that might be due to cancer?” He pointed at the X-ray. “This woman has cancer. She has these tiny little calcifications. Can you see them? Can you see how small they are?” He took out a magnifying glass and placed it over a series of white flecks; as a cancer grows, it produces calcium deposits. “That’s the stuff we are looking for,” he said.

Then Dershaw added a series of slides to the light box and began to explain all the varieties that those white flecks came in. Some calcium deposits are oval and lucent. “They’re called eggshell calcifications,” Dershaw said. “And they’re basically benign.” Another kind of calcium runs like a railway track on either side of the breast’s many blood vessels—that’s benign, too. “Then there’s calcium that’s thick and heavy and looks like popcorn,” Dershaw went on. “That’s just dead tissue. That’s benign. There’s another calcification that’s little sacs of calcium floating in liquid. It’s called ‘milk of calcium.’ That’s another kind of calcium that’s always benign.” He put a new set of slides against the light. “Then we have calcium that looks like this—irregular. All of these are of different density and different sizes and different configurations. Those are usually benign, but sometimes they are due to cancer. Remember you saw those railway tracks? This is calcium laid down inside a tube as well, but you can see that the outside of the tube is irregular. That’s cancer.” Dershaw’s explanations were beginning to get confusing. “There are certain calcifications in benign tissues that are always benign,” he said. “There are certain kinds that are always associated with cancer. But those are the ends of the spectrum, and the vast amount of calcium is somewhere in the middle. And making that differentiation, between whether the calcium is acceptable or not, is not clear-cut.”

The same is true of lumps. Some lumps are simply benign clumps of cells. You can tell they are benign because the walls of the mass look round and smooth; in a cancer, cells proliferate so wildly that the walls of the tumor tend to be ragged and to intrude into the surrounding tissue. But sometimes benign lumps resemble tumors, and sometimes tumors look a lot like benign lumps. And sometimes you have lots of masses that, taken individually, would be suspicious but are so pervasive that the reasonable conclusion is that this is just how the woman’s breast looks. “If you have a CAT scan of the chest, the heart always looks like the heart, the aorta always looks like the aorta,” Dershaw said. “So when there’s a lump in the middle of that, it’s clearly abnormal. Looking at a mammogram is conceptually different from looking at images elsewhere in the body. Everything else has anatomy—anatomy that essentially looks the same from one person to the next. But we don’t have that kind of standardized information on the breast. The most difficult decision I think anybody needs to make when we’re confronted with a patient is: Is this person normal? And we have to decide that without a pattern that is reasonably stable from individual to individual, and sometimes even without a pattern that is the same from the left side to the right.”

Dershaw was saying that mammography doesn’t fit our normal expectations of pictures. In the days before the invention of photography, for instance, a horse in motion was represented in drawings and paintings according to the convention of ventre à terre, or “belly to the ground.” Horses were drawn with their front legs extended beyond their heads, and their hind legs stretched straight back, because that was the way, in the blur of movement, a horse seemed to gallop. Then, in the 1870s, came Eadweard Muybridge, with his famous sequential photographs of a galloping horse, and that was the end of ventre à terre. Now we knew how a horse galloped. The photograph promised that we would now be able to capture reality itself.

The situation with mammography is different. The way in which we ordinarily speak about calcium and lumps is clear and unambiguous. But the picture demonstrates how blurry those seemingly distinct categories actually are. Joann Elmore, a physician and epidemiologist at the University of Washington Harborview Medical Center, once asked ten board-certified radiologists to look at 150 mammograms—of which 27 had come from women who developed breast cancer, and 123 from women who were known to be healthy. One radiologist caught 85 percent of the cancers the first time around. Another caught only 37 percent. One looked at the same X-rays and saw suspicious masses in 78 percent of the cases. Another doctor saw “focal asymmetric density” in half of the cancer cases; yet another saw no “focal asymmetric density” at all. There was one particularly perplexing mammogram that three radiologists thought was normal, two thought was abnormal but probably benign, four couldn’t make up their minds about, and one was convinced was cancer. (The patient was fine.) Some of these differences are a matter of skill, and there is good evidence that with more rigorous training and experience radiologists can become better at reading breast X-rays. But so much of what can be seen on an X-ray falls into a gray area that interpreting a mammogram is also, in part, a matter of temperament. Some radiologists see something ambiguous and are comfortable calling it normal. Others see something ambiguous and get suspicious.

Does that mean radiologists ought to be as suspicious as possible? You might think so, but caution simply creates another kind of problem. The radiologist in the Elmore study who caught the most cancers also recommended immediate workups—a biopsy, an ultrasound, or additional X-rays—on 64 percent of the women who didn’t have cancer. In the real world, a radiologist who needlessly subjected such an extraordinary percentage of healthy patients to the time, expense, anxiety, and discomfort of biopsies and further testing would find himself seriously out of step with his profession. Mammography is not a form of medical treatment, where doctors are justified in going to heroic lengths on behalf of their patients. Mammography is a form of medical screening: it is supposed to exclude the healthy, so that more time and attention can be given to the sick. If screening doesn’t screen, it ceases to be useful.
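
Read as a pair of error rates, the Elmore results describe the basic tradeoff of any screening test: a reader who catches more cancers necessarily flags more healthy women. A minimal sketch in Python, using the study’s published figures plus one assumed value (the cautious reader’s false-positive rate, which the passage does not give), makes the tension explicit.

```python
# Screening tradeoff from the Elmore study: 27 cancer films, 123 healthy.
# The 85%, 37%, and 64% figures come from the passage; the cautious
# reader's 10% false-positive rate is an assumption added for contrast.

CANCERS = 27    # mammograms from women who developed breast cancer
HEALTHY = 123   # mammograms from women known to be healthy

def reader_profile(name, cancers_caught, healthy_flagged):
    sensitivity = cancers_caught / CANCERS
    false_positive_rate = healthy_flagged / HEALTHY
    print(f"{name}: catches {sensitivity:.0%} of cancers, "
          f"recommends workups for {false_positive_rate:.0%} of healthy women")

reader_profile("Suspicious reader", round(0.85 * CANCERS), round(0.64 * HEALTHY))
reader_profile("Cautious reader", round(0.37 * CANCERS), round(0.10 * HEALTHY))
```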

Gilbert Welch, a medical-outcomes expert at Dartmouth Medical School, has pointed out that, given current breast-cancer mortality rates, nine out of every thousand sixty-year-old women will die of breast cancer in the next ten years. If every one of those women had a mammogram every year, that number would fall to six. The radiologist seeing those thousand women, in other words, would read ten thousand X-rays over a decade in order to save three lives—and that’s using the most generous possible estimate of mammography’s effectiveness. The reason a radiologist is required to assume that the overwhelming number of ambiguous things are normal, in other words, is that the overwhelming number of ambiguous things really are normal. Radiologists are, in this sense, a lot like baggage screeners at airports. The chances are that the dark mass in the middle of the suitcase isn’t a bomb, because you’ve seen a thousand dark masses like it in suitcases before, and none of those were bombs—and if you flagged every suitcase with something ambiguous in it, no one would ever make his flight. But that doesn’t mean, of course, that it isn’t a bomb. All you have to go on is what it looks like on the X-ray screen—and the screen seldom gives you quite enough information.
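
Welch’s arithmetic is worth restating explicitly, since every number in it comes straight from the passage; the sketch below simply works out the ratio of films read to lives saved.

```python
# Welch's numbers: one thousand sixty-year-old women, screened
# annually for a decade. All inputs are taken from the passage.

women, years = 1_000, 10
deaths_unscreened = 9   # breast-cancer deaths over ten years, no screening
deaths_screened = 6     # with annual mammography

xrays = women * years                              # 10,000 films to read
lives_saved = deaths_unscreened - deaths_screened  # 3

print(f"{xrays:,} X-rays read, {lives_saved} lives saved")
print(f"roughly {xrays // lives_saved:,} films per life saved")
print(f"{women - deaths_unscreened} of every {women:,} women screened "
      f"were never going to die of breast cancer anyway")
```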

3.

Dershaw picked up a new X-ray and put it on the light box. It belonged to a forty-eight-year-old woman. Mammograms indicate density in the breast: the denser the tissue is, the more the X-rays are absorbed, creating the variations in black and white that make up the picture. Fat hardly absorbs the beam at all, so it shows up as black. Breast tissue, particularly the thick breast tissue of younger women, shows up on an X-ray as shades of light gray or white. This woman’s breasts consisted of fat at the back and denser, glandular tissue toward the front, so the X-ray was mostly black, with what looked like a large, white, dense cloud behind the nipple. Clearly visible, in the black, fatty portion of the left breast, was a white spot. “Now, that looks like a cancer, that little smudgy, irregular, infiltrative thing,” Dershaw said. “It’s about five millimeters across.” He looked at the X-ray for a moment. This was mammography at its best: a clear picture of a problem that needed to be fixed. Then he took a pen and pointed to the thick cloud just to the right of the tumor. The cloud and the tumor were exactly the same color. “That cancer only shows up because it’s in the fatty part of the breast,” he said. “If you take that cancer and put it in the dense part of the breast, you’d never see it, because the whiteness of the mass is the same as the whiteness of normal tissue. If the tumor was over there, it could be four times as big and we still wouldn’t see it.”

What’s more, mammography is especially likely to miss the tumors that do the most harm. A team led by the research pathologist Peggy Porter analyzed 429 breast cancers that had been diagnosed over five years at the Group Health Cooperative of Puget Sound. Of those, 279 were picked up by mammography, and the bulk of them were detected very early, at what is called Stage One. (Cancer is classified into four stages, according to how far the tumor has spread from its original position.) Most of the tumors were small, less than two centimeters. Pathologists grade a tumor’s aggression according to such measures as the “mitotic count”—the rate at which the cells are dividing—and the screen-detected tumors were graded “low” in almost 70 percent of the cases. These were the kinds of cancers that could probably be treated successfully. “Most tumors develop very, very slowly, and those tend to lay down calcium deposits—and what mammograms are doing is picking up those calcifications,” Leslie Laufman, a hematologist-oncologist in Ohio, who served on a recent National Institutes of Health breast-cancer advisory panel, said. “Almost by definition, mammograms are picking up slow-growing tumors.”

A hundred and fifty cancers in Porter’s study, however, were missed by mammography. Some of these were tumors the mammogram couldn’t see—that were, for instance, hiding in the dense part of the breast. The majority, though, simply didn’t exist at the time of the mammogram. These cancers were found in women who had had regular mammograms, and who were legitimately told that they showed no sign of cancer on their last visit. In the interval between X-rays, however, either they or their doctor had manually discovered a lump in their breast, and these “interval” cancers were twice as likely to be in Stage Three and three times as likely to have high mitotic counts; 28 percent had spread to the lymph nodes, as opposed to 18 percent of the screen-detected cancers. These tumors were so aggressive that they had gone from undetectable to detectable in the interval between two mammograms.

The problem of interval tumors explains why the overwhelming majority of breast-cancer experts insist that women in the critical fifty-to-sixty-nine age group get regular mammograms. In Porter’s study, the women were X-rayed at intervals as great as every three years, and that created a window large enough for interval cancers to emerge. Interval cancers also explain why many breast-cancer experts believe that mammograms must be supplemented by regular and thorough clinical breast exams. (Thorough is defined as palpation of the area from the collarbone to the bottom of the rib cage, one dime-size area at a time, at three levels of pressure—just below the skin, the midbreast, and up against the chest wall—by a specially trained practitioner for a period not less than five minutes per breast.) In a major study of mammography’s effectiveness—one of a pair of Canadian trials conducted in the 1980s—women who were given regular, thorough breast exams but no mammograms were compared with those who had thorough breast exams and regular mammograms, and no difference was found in the death rates from breast cancer between the two groups. The Canadian studies are controversial, and some breast-cancer experts are convinced that they may have understated the benefits of mammography. But there is no denying the basic lessons of the Canadian trials: that a skilled pair of fingertips can find out an extraordinary amount about the health of a breast, and that we should not automatically value what we see in a picture over what we learn from our other senses.

“The finger has hundreds of sensors per square centimeter,” says Mark Goldstein, a sensory psychophysicist who cofounded MammaCare, a company devoted to training nurses and physicians in the art of the clinical exam. “There is nothing in science or technology that has even come close to the sensitivity of the human finger with respect to the range of stimuli it can pick up. It’s a brilliant instrument. But we simply don’t trust our tactile sense as much as our visual sense.”

4.

On August 17, 1943, two hundred B-17 bombers from the United States Eighth Air Force set out from England for the German city of Schweinfurt. Two months later, 228 B-17s set out to strike Schweinfurt a second time. The two raids were among the heaviest bombing operations of the war, and the Allied experience at Schweinfurt is an example of a more subtle—but in some cases more serious—problem with the picture paradigm.

The Schweinfurt raids grew out of the United States military’s commitment to bombing accuracy. As Stephen Budiansky writes in his wonderful recent book Air Power, the chief lesson of aerial bombardment in the First World War was that hitting a target from eight or ten thousand feet was a prohibitively difficult task. In the thick of battle, the bombardier had to adjust for the speed of the plane, the speed and direction of the prevailing winds, and the pitching and rolling of the plane, all while keeping the bombsight level with the ground. It was an impossible task, requiring complex trigonometric calculations. For a variety of reasons, including the technical challenges, the British simply abandoned the quest for precision: in both the First World War and the Second, the British military pursued a strategy of morale or area bombing, in which bombs were simply dropped, indiscriminately, on urban areas, with the intention of killing, dispossessing, and dispiriting the German civilian population.

But the American military believed that the problem of bombing accuracy was solvable, and a big part of the solution was something called the Norden bombsight. This breakthrough was the work of a solitary, cantankerous genius named Carl Norden, who operated out of a factory in New York City. Norden built a fifty-pound mechanical computer called the Mark XV, which used gears and wheels and gyroscopes to calculate airspeed, altitude, and crosswinds in order to determine the correct bomb-release point. The Mark XV, Norden’s business partner boasted, could put a bomb in a pickle barrel from twenty thousand feet. The United States spent $1.5 billion developing it, which, as Budiansky points out, was more than half the amount that was spent building the atomic bomb. “At air bases, the Nordens were kept under lock and key in secure vaults, escorted to their planes by armed guards, and shrouded in a canvas cover until after takeoff,” Budiansky recounts. The American military, convinced that its bombers could now hit whatever they could see, developed a strategic approach to bombing: identifying and selectively destroying targets that were critical to the Nazi war effort. In early 1943, General Henry (Hap) Arnold—the head of the Army Air Forces—assembled a group of prominent civilians to analyze the German economy and recommend critical targets. The Advisory Committee on Bombardment, as it was called, determined that the United States should target Germany’s ball-bearing factories, since ball bearings were critical to the manufacture of airplanes. And the center of the German ball-bearing industry was Schweinfurt. Allied losses from the two raids were staggering. Thirty-six B-17s were shot down in the August attack, 62 bombers were shot down in the October raid, and between the two operations, a further 138 planes were badly damaged. Yet, with the war in the balance, this was considered worth the price. When the damage reports came in, Arnold exulted, “Now we have got Schweinfurt!” He was wrong.

The problem was not, as in the case of the Scud hunt, that the target could not be found, or that what was thought to be the target was actually something else. The B-17s, aided by their Norden Mark XVs, hit the ball-bearing factories hard. The problem was that the picture Air Force officers had of their target didn’t tell them what they really needed to know. The Germans, it emerged, had ample stockpiles of ball bearings. They also had no difficulty increasing their imports from Sweden and Switzerland, and, through a few simple design changes, they were able to greatly reduce their need for ball bearings in aircraft production. What’s more, although the factory buildings were badly damaged by the bombing, the machinery inside wasn’t. Ball-bearing equipment turned out to be surprisingly hardy. “As it was, not a tank, plane, or other piece of weaponry failed to be produced because of lack of ball bearings,” Albert Speer, the Nazi production chief, wrote after the war. Seeing a problem and understanding it, then, are two different things.

In recent years, with the rise of highly accurate long-distance weaponry, the Schweinfurt problem has become even more acute. If you can aim at and hit the kitchen at the back of a house, after all, you don’t have to bomb the whole building. So your bomb can be two hundred pounds rather than a thousand. That means, in turn, that you can fit five times as many bombs on a single plane and hit five times as many targets in a single sortie, which sounds good—except that now you need to get intelligence on five times as many targets. And that intelligence has to be five times more specific, because if the target is in the bedroom and not the kitchen, you’ve missed him.

This is the issue that the US command faced in the most recent Iraq war. Early in the campaign, the military mounted a series of air strikes against specific targets, where Saddam Hussein or other senior Baathist officials were thought to be hiding. There were fifty of these so-called decapitation attempts, each taking advantage of the fact that modern-day GPS-guided bombs can be delivered from a fighter to within thirteen meters of their intended target. The strikes were dazzling in their precision. In one case, a restaurant was leveled. In another, a bomb burrowed down into a basement. But, in the end, every single strike failed. “The issue isn’t accuracy,” Watts, who has written extensively on the limitations of high-tech weaponry, says. “The issue is the quality of targeting information. The amount of information we need has gone up an order of magnitude or two in the last decade.”

5.

Mammography has a Schweinfurt problem as well. Nowhere is that more evident than in the case of the breast lesion known as ductal carcinoma in situ, or DCIS, which shows up as a cluster of calcifications inside the ducts that carry milk to the nipple. It’s a tumor that hasn’t spread beyond those ducts, and it is so tiny that without mammography few women with DCIS would ever know they have it. In the past couple of decades, as more and more people have received regular breast X-rays and the resolution of mammography has increased, diagnoses of DCIS have soared. About fifty thousand new cases are now found every year in the United States, and virtually every DCIS lesion detected by mammography is promptly removed. But what has the targeting and destruction of DCIS meant for the battle against breast cancer? You’d expect that if we’ve been catching fifty thousand early-stage cancers every year, we should be seeing a corresponding decrease in the number of late-stage invasive cancers. It’s not clear whether we have. During the past twenty years, the incidence of invasive breast cancer has continued to rise by the same small, steady increment every year.

In 1987, pathologists in Denmark performed a series of autopsies on women in their forties who had not been known to have breast cancer when they died of other causes. The pathologists looked at an average of 275 samples of breast tissue in each case, and found some evidence of cancer—usually DCIS—in nearly 40 percent of the women. Since breast cancer accounts for less than 4 percent of female deaths, clearly the overwhelming majority of these women, had they lived longer, would never have died of breast cancer. “To me, that indicates that these kinds of genetic changes happen really frequently, and that they can happen without having an impact on women’s health,” Karla Kerlikowske, a breast-cancer expert at the University of California at San Francisco, says. “The body has this whole mechanism to repair things, and maybe that’s what happened with these tumors.” Gilbert Welch, the medical-outcomes expert, thinks that we fail to understand the hit-or-miss nature of cancerous growth, and assume it to be a process that, in the absence of intervention, will eventually kill us. “A pathologist from the International Agency for Research on Cancer once told me that the biggest mistake we ever made was attaching the word ‘carcinoma’ to DCIS,” Welch says. “The minute carcinoma got linked to it, it all of a sudden drove doctors to recommend therapy, because what was implied was that this was a lesion that would inexorably progress to invasive cancer. But we know that that’s not always the case.”

In some percentage of cases, however, DCIS does progress to something more serious. Some studies suggest that this happens very infrequently. Others suggest that it happens frequently enough to be of major concern. There is no definitive answer, and it’s all but impossible to tell, simply by looking at a mammogram, whether a given DCIS tumor is among those lesions that will grow out from the duct, or part of the majority that will never amount to anything. That’s why some doctors feel that we have no choice but to treat every DCIS as life-threatening, and in 30 percent of cases that means a mastectomy, and in another 35 percent it means a lumpectomy and radiation. Would taking a better picture solve the problem? Not really, because the problem is that we don’t know for sure what we’re seeing, and as pictures have become better we have put ourselves in a position where we see more and more things that we don’t know how to interpret. When it comes to DCIS, the mammogram delivers information without true understanding. “Almost half a million women have been diagnosed and treated for DCIS since the early nineteen-eighties—a diagnosis virtually unknown before then,” Welch writes in his new book, Should I Be Tested for Cancer?, a brilliant account of the statistical and medical uncertainties surrounding cancer screening. “This increase is the direct result of looking harder—in this case with ‘better’ mammography equipment. But I think you can see why it is a diagnosis that some women might reasonably prefer not to know about.”

6.

The disturbing thing about DCIS, of course, is that our approach to this tumor seems like a textbook example of how the battle against cancer is supposed to work. Use a powerful camera. Take a detailed picture. Spot the tumor as early as possible. Treat it immediately and aggressively. The campaign to promote regular mammograms has used this early-detection argument with great success because it makes intuitive sense. The danger posed by a tumor is represented visually. Large is bad; small is better—less likely to have metastasized. But here, too, tumors defy our visual intuitions.

According to Donald Berry, who is the chairman of the Department of Biostatistics and Applied Mathematics at M. D. Anderson Cancer Center, in Houston, a woman’s risk of death increases only by about 10 percent for every additional centimeter in tumor length. “Suppose there is a tumor size above which the tumor is lethal, and below which it’s not,” Berry says. “The problem is that the threshold varies. When we find a tumor, we don’t know whether it has metastasized already. And we don’t know whether it’s tumor size that drives the metastatic process or whether all you need is a few million cells to start sloughing off to other parts of the body. We do observe that it’s worse to have a bigger tumor. But not amazingly worse. The relationship is not as great as you’d think.”
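
One way to make Berry’s figure concrete is to compound it: if each additional centimeter raises the relative risk of death by about 10 percent (the passage gives only the per-centimeter figure; treating it as multiplicative is my assumption), tumor size separates outcomes far less sharply than intuition suggests.

```python
# Toy reading of Berry's figure: ~10 percent more risk of death per
# additional centimeter of tumor size. Compounding the increase is an
# assumption; the passage states only the per-centimeter rate.

def relative_risk(size_cm, per_cm=0.10):
    """Risk multiplier relative to a 1 cm tumor."""
    return (1 + per_cm) ** (size_cm - 1)

for size in (1, 2, 3, 5):
    print(f"{size} cm: {relative_risk(size):.2f}x the risk of a 1 cm tumor")

# Even a 5 cm tumor carries only about 1.46x the risk of a 1 cm one:
# worse, as Berry says, "but not amazingly worse."
```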

In a recent genetic analysis of breast-cancer tumors, scientists selected women with breast cancer who had been followed for many years, and divided them into two groups—those whose cancer had gone into remission, and those whose cancer had spread to the rest of their body. Then the scientists went back to the earliest moment that each cancer became apparent and analyzed thousands of genes in order to determine whether it was possible to predict, at that moment, who was going to do well and who wasn’t. Early detection presumes that it isn’t possible to make that prediction: a tumor is removed before it becomes truly dangerous. But scientists discovered that even with tumors in the one-centimeter range—the range in which cancer is first picked up by a mammogram—the fate of the cancer seems already to have been set. “What we found is that there is biology that you can glean from the tumor, at the time you take it out, that is strongly predictive of whether or not it will go on to metastasize,” Stephen Friend, a member of the gene-expression team at Merck, says. “We like to think of a small tumor as an innocent. The reality is that in that innocent lump are a lot of behaviors that spell a potential poor or good prognosis.”

The good news here is that it might eventually be possible to screen breast cancers on a genetic level, using other kinds of tests—even blood tests—to look for the biological traces of those genes. This might also help with the chronic problem of overtreatment in breast cancer. If we can single out that small percentage of women whose tumors will metastasize, we can spare the rest the usual regimen of surgery, radiation, and chemotherapy. Gene-signature research is one of a number of reasons that many scientists are optimistic about the fight against breast cancer. But it is an advance that has nothing to do with taking more pictures, or taking better pictures. It has to do with going beyond the picture.

Under the circumstances, it is not hard to understand why mammography draws so much controversy. The picture promises certainty, and it cannot deliver on that promise. Even after forty years of research, there remains widespread disagreement over how much benefit women in the critical fifty-to-sixty-nine age bracket receive from breast X-rays, and further disagreement about whether there is enough evidence to justify regular mammography in women under fifty and over seventy. Is there any way to resolve the disagreement? Donald Berry says that there probably isn’t—that a clinical trial that could definitively answer the question of mammography’s precise benefits would have to be so large (involving more than five hundred thousand women) and so expensive (costing billions of dollars) as to be impractical. The resulting confusion has turned radiologists who do mammograms into one of the chief targets of malpractice litigation. “The problem is that mammographers—radiology groups—do hundreds of thousands of these mammograms, giving women the illusion that these things work and they are good, and if a lump is found and in most cases if it is found early, they tell women they have the probability of a higher survival rate,” says E. Clay Parker, a Florida plaintiff’s attorney, who recently won a $5.1 million judgment against an Orlando radiologist. “But then, when it comes to defending themselves, they tell you that the reality is that it doesn’t make a difference when you find it. So you scratch your head and say, ‘Well, why do you do mammography, then?’”

The answer is that mammograms do not have to be infallible to save lives. A modest estimate of mammography’s benefit is that it reduces the risk of dying from breast cancer by about 10 percent—which works out, for the average woman in her fifties, to be about three extra days of life, or, to put it another way, a health benefit on a par with wearing a helmet on a ten-hour bicycle trip. That is not a trivial benefit. Multiplied across the millions of adult women in the United States, it amounts to thousands of lives saved every year, and, in combination with a medical regimen that includes radiation, surgery, and new and promising drugs, it has helped brighten the prognosis for women with breast cancer. Mammography isn’t as good as we’d like it to be. But we are still better off than we would be without it.
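
The “three extra days” figure can be roughly reproduced with a back-of-envelope calculation. The passage supplies only the 10 percent relative reduction; the other two inputs below, a ten-year breast-cancer mortality of about four per thousand for women in their fifties and some twenty remaining years of life per averted death, are assumptions of mine, chosen to show how such an estimate is built rather than to assert the true values.

```python
# Back-of-envelope check of the "three extra days" estimate. Only the
# 10 percent relative reduction comes from the passage; the mortality
# and life-expectancy inputs are illustrative assumptions.

ten_year_mortality = 0.004     # assumed: ~4 per 1,000 women in their fifties
relative_reduction = 0.10      # from the passage
years_per_averted_death = 20   # assumed remaining life expectancy

extra_days = ten_year_mortality * relative_reduction * years_per_averted_death * 365
print(f"expected gain: about {extra_days:.0f} days of life")  # ~3 days
```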

“There is increasingly an understanding among those of us who do this a lot that our efforts to sell mammography may have been overvigorous,” Dershaw said, “and that although we didn’t intend to, the perception may have been that mammography accomplishes even more than it does.” He was looking, as he spoke, at the mammogram of the woman whose tumor would have been invisible had it been a few centimeters to the right. Did looking at an X-ray like that make him nervous? Dershaw shook his head. “You have to respect the limitations of the technology,” he said. “My job with the mammogram isn’t to find what I can’t find with a mammogram. It’s to find what I can find with a mammogram. If I’m not going to accept that, then I shouldn’t be reading mammograms.”

7.

In February of 2003, just before the start of the Iraq war, Secretary of State Colin Powell went before the United Nations to declare that Iraq was in defiance of international law. He presented transcripts of telephone conversations between senior Iraqi military officials, purportedly discussing attempts to conceal weapons of mass destruction. He told of eyewitness accounts of mobile biological-weapons facilities. And, most persuasive of all, he presented a series of images—carefully annotated, high-resolution satellite photographs of what he said was the Taji Iraqi chemical-munitions facility.

“Let me say a word about satellite images before I show a couple,” Powell began. “The photos that I am about to show you are sometimes hard for the average person to interpret, hard for me. The painstaking work of photo analysis takes experts with years and years of experience, poring for hours and hours over light tables. But as I show you these images, I will try to capture and explain what they mean, what they indicate, to our imagery specialists.” The first photograph was dated November 10, 2002, just three months earlier, and years after the Iraqis were supposed to have rid themselves of all weapons of mass destruction. “Let me give you a closer look,” Powell said as he flipped to a closeup of the first photograph. It showed a rectangular building, with a vehicle parked next to it. “Look at the image on the left. On the left is a closeup of one of the four chemical bunkers. The two arrows indicate the presence of sure signs that the bunkers are storing chemical munitions. The arrow at the top that says ‘Security’ points to a facility that is a signature item for this kind of bunker. Inside that facility are special guards and special equipment to monitor any leakage that might come out of the bunker.” Then he moved to the vehicle next to the building. It was, he said, another signature item. “It’s a decontamination vehicle in case something goes wrong…It is moving around those four and it moves as needed to move as people are working in the different bunkers.”

Powell’s analysis assumed, of course, that you could tell from the picture what kind of truck it was. But pictures of trucks, taken from above, are not always as clear as we would like; sometimes trucks hauling oil tanks look just like trucks hauling Scud launchers, and, while a picture is a good start, if you really want to know what you’re looking at, you probably need more than a picture. I looked at the photographs with Patrick Eddington, who for many years was an imagery analyst with the CIA. Eddington examined them closely. “They’re trying to say that those are decontamination vehicles,” he told me. He had a photo up on his laptop, and he peered closer to get a better look. “But the resolution is sufficient for me to say that I don’t think it is—and I don’t see any other decontamination vehicles down there that I would recognize.” The standard decontamination vehicle was a Soviet-made box-body van, Eddington said. This truck was too long. For a second opinion, Eddington recommended Ray McGovern, a twenty-seven-year CIA analyst, who had been one of George H. W. Bush’s personal intelligence briefers when he was vice president. “If you’re an expert, you can tell one hell of a lot from pictures like this,” McGovern said. He’d heard another interpretation. “I think,” he said, “that it’s a fire truck.”

December 13, 2004

Something Borrowed

SHOULD A CHARGE OF PLAGIARISM RUIN YOUR LIFE?

1.

One day in the spring of 2004, a psychiatrist named Dorothy Lewis got a call from her friend Betty, who works in New York City. Betty had just seen a Broadway play called Frozen, written by the British playwright Bryony Lavery. “She said, ‘Somehow it reminded me of you. You really ought to see it,’” Lewis recalled. Lewis asked Betty what the play was about, and Betty said that one of the characters was a psychiatrist who studied serial killers. “And I told her, ‘I need to see that as much as I need to go to the moon.’”

Lewis has studied serial killers for the past twenty-five years. With her collaborator, the neurologist Jonathan Pincus, she has published a great many research papers, showing that serial killers tend to suffer from predictable patterns of psychological, physical, and neurological dysfunction: that they were almost all the victims of harrowing physical and sexual abuse as children, and that almost all of them have suffered some kind of brain injury or mental illness. In 1998, she published a memoir of her life and work entitled Guilty by Reason of Insanity. She was the last person to visit Ted Bundy before he went to the electric chair. Few people in the world have spent as much time thinking about serial killers as Dorothy Lewis, so when her friend Betty told her that she needed to see Frozen it struck her as a busman’s holiday.

But the calls kept coming. Frozen was winning raves on Broadway, and it had been nominated for a Tony. Whenever someone who knew Dorothy Lewis saw it, they would tell her that she really ought to see it, too. In June, she got a call from a woman at the theater where Frozen was playing. “She said she’d heard that I work in this field, and that I see murderers, and she was wondering if I would do a talk-back after the show,” Lewis said. “I had done that once before, and it was a delight, so I said sure. And I said, ‘Would you please send me the script, because I want to read the play.’”

The script came, and Lewis sat down to read it. Early in the play, something caught her eye, a phrase: “it was one of those days.” One of the murderers Lewis had written about in her book had used that same expression. But she thought it was just a coincidence. “Then, there’s a scene of a woman on an airplane, typing away to her friend. Her name is Agnetha Gottmundsdottir. I read that she’s writing to her colleague, a neurologist called David Nabkus. And with that I realized that more was going on, and I realized as well why all these people had been telling me to see the play.”

Lewis began underlining line after line. She had worked at New York University School of Medicine. The psychiatrist in Frozen worked at New York School of Medicine. Lewis and Pincus did a study of brain injuries among fifteen death-row inmates. Gottmundsdottir and Nabkus did a study of brain injuries among fifteen death-row inmates. Once, while Lewis was examining the serial killer Joseph Franklin, he sniffed her, in a grotesque, sexual way. Gottmundsdottir is sniffed by the play’s serial killer, Ralph. Once, while Lewis was examining Ted Bundy, she kissed him on the cheek. Gottmundsdottir, in some productions of Frozen, kisses Ralph. “The whole thing was right there,” Lewis went on. “I was sitting at home reading the play, and I realized that it was I. I felt robbed and violated in some peculiar way. It was as if someone had stolen—I don’t believe in the soul, but, if there was such a thing, it was as if someone had stolen my essence.”

Lewis never did the talk-back. She hired a lawyer. And she came down from New Haven to see Frozen. “In my book,” she said, “I talk about where I rush out of the house with my black carry-on, and I have two black pocketbooks, and the play opens with her”—Agnetha—“with one big black bag and a carry-on, rushing out to do a lecture.” Lewis had written about biting her sister on the stomach as a child. Onstage, Agnetha fantasized out loud about attacking a stewardess on an airplane and “biting out her throat.” After the play was over, the cast came onstage and took questions from the audience. “Somebody in the audience said, ‘Where did Bryony Lavery get the idea for the psychiatrist?’” Lewis recounted. “And one of the cast members, the male lead, said, ‘Oh, she said that she read it in an English medical magazine.’” Lewis is a tiny woman, with enormous, childlike eyes, and they were wide open now with the memory. “I wouldn’t have cared if she did a play about a shrink who’s interested in the frontal lobe and the limbic system. That’s out there to do. I see things week after week on television, on Law & Order or C.S.I., and I see that they are using material that Jonathan and I brought to light. And it’s wonderful. That would have been acceptable. But she did more than that. She took things about my own life, and that is the part that made me feel violated.”

At the request of her lawyer, Lewis sat down and drew up a chart detailing what she felt were the questionable parts of Lavery’s play. The chart was fifteen pages long. The first part was devoted to thematic similarities between Frozen and Lewis’s book Guilty by Reason of Insanity. The other, more damning section listed twelve instances of almost verbatim similarities—totaling perhaps 675 words—between passages from Frozen and passages from a 1997 magazine profile of Lewis. The profile was called “Damaged.” It appeared in the February 24, 1997, issue of The New Yorker. It was written by me.

2.

Words belong to the person who wrote them. There are few simpler ethical notions than this one, particularly as society directs more and more energy and resources toward the creation of intellectual property. In the past thirty years, copyright laws have been strengthened. Courts have become more willing to grant intellectual-property protections. Fighting piracy has become an obsession with Hollywood and the recording industry, and, in the worlds of academia and publishing, plagiarism has gone from being bad literary manners to something much closer to a crime. When, two years ago, Doris Kearns Goodwin was found to have lifted passages from several other historians, she was asked to resign from the board of the Pulitzer Prize committee. And why not? If she had robbed a bank, she would have been fired the next day.

I’d worked on “Damaged” through the fall of 1996. I would visit Dorothy Lewis in her office at Bellevue Hospital and watch the videotapes of her interviews with serial killers. At one point, I met up with her in Missouri. Lewis was testifying at the trial of Joseph Franklin, who claims responsibility for shooting, among others, the civil-rights leader Vernon Jordan and the pornographer Larry Flynt. In the trial, a videotape was shown of an interview that Franklin once gave to a television station. He was asked whether he felt any remorse. I wrote:

“I can’t say that I do,” he said. He paused again, then added, “The only thing I’m sorry about is that it’s not legal.”

“What’s not legal?”

Franklin answered as if he’d been asked the time of day: “Killing Jews.”

That exchange, almost to the word, was reproduced in Frozen.

Lewis, the article continued, didn’t feel that Franklin was fully responsible for his actions. She viewed him as a victim of neurological dysfunction and childhood physical abuse. “The difference between a crime of evil and a crime of illness,” I wrote, “is the difference between a sin and a symptom.” That line was in Frozen, too—not once but twice. I faxed Bryony Lavery a letter:

I am happy to be the source of inspiration for other writers, and had you asked for my permission to quote—even liberally—from my piece, I would have been delighted to oblige. But to lift material, without my approval, is theft.

Almost as soon as I’d sent the letter, though, I began to have second thoughts. The truth was that, although I said I’d been robbed, I didn’t feel that way. Nor did I feel particularly angry. One of the first things I had said to a friend after hearing about the echoes of my article in Frozen was that this was the only way I was ever going to get to Broadway—and I was only half joking. On some level, I considered Lavery’s borrowing to be a compliment. A savvier writer would have changed all those references to Lewis, and rewritten the quotes from me, so that their origin was no longer recognizable. But how would I have been better off if Lavery had disguised the source of her inspiration?

Dorothy Lewis, for her part, was understandably upset. She was considering a lawsuit. And, to increase her odds of success, she asked me to assign her the copyright to my article. I agreed, but then I changed my mind. Lewis had told me that she “wanted her life back.” Yet in order to get her life back, it appeared, she first had to acquire it from me. That seemed a little strange.

Then I got a copy of the script for Frozen. I found it breathtaking. I realize that this isn’t supposed to be a relevant consideration. And yet it was: instead of feeling that my words had been taken from me, I felt that they had become part of some grander cause. In late September, the story broke. The Times, the Observer in England, and the Associated Press all ran stories about Lavery’s alleged plagiarism, and the articles were picked up by newspapers around the world. Bryony Lavery had seen one of my articles, responded to what she read, and used it as she constructed a work of art. And now her reputation was in tatters. Something about that didn’t seem right.

3.

In 1992, the Beastie Boys released a song called “Pass the Mic,” which begins with a six-second sample taken from the 1976 composition “Choir” by the jazz flutist James Newton. The sample was an exercise in what is called multiphonics, where the flutist “overblows” into the instrument while simultaneously singing in a falsetto. In the case of “Choir,” Newton played a C on the flute, then sang C, D-flat, C—and the distortion of the overblown C combined with his vocalizing created a surprisingly complex and haunting sound. In “Pass the Mic,” the Beastie Boys repeated the Newton sample more than forty times. The effect was riveting.

In the world of music, copyrighted works fall into two categories—the recorded performance and the composition underlying that performance. If you write a rap song, and you want to sample the chorus from Billy Joel’s “Piano Man,” you have to first get permission from the record label to use the “Piano Man” recording, and then get permission from Billy Joel (or whoever owns his music) to use the underlying composition. In the case of “Pass the Mic,” the Beastie Boys got the first kind of permission—the rights to use the recording of “Choir”—but not the second. Newton sued, and he lost—and the reason he lost serves as a useful introduction to how to think about intellectual property.

At issue in the case wasn’t the distinctiveness of Newton’s performance. The Beastie Boys, everyone agreed, had properly licensed Newton’s performance when they paid the copyright recording fee. And there was no question about whether they had copied the underlying music to the sample. At issue was simply whether the Beastie Boys were required to ask for that secondary permission: was the composition underneath those six seconds so distinctive and original that Newton could be said to own it? The court said that it wasn’t.

The chief expert witness for the Beastie Boys in the “Choir” case was Lawrence Ferrara, who is a professor of music at New York University, and when I asked him to explain the court’s ruling, he walked over to the piano in the corner of his office and played those three notes: C, D-flat, C. “That’s it!” he shouted. “There ain’t nothing else! That’s what was used. You know what this is? It’s no more than a mordent, a turn. It’s been done thousands upon thousands of times. No one can say they own that.”

Ferrara then played the most famous four-note sequence in classical music, the opening of Beethoven’s Fifth: G, G, G, E-flat. This was unmistakably Beethoven. But was it original? “That’s a harder case,” Ferrara said. “Actually, though, other composers wrote that. Beethoven himself wrote that in a piano sonata, and you can find figures like that in composers who predate Beethoven. It’s one thing if you’re talking about da-da-da dummm, da-da-da dummm—those notes, with those durations. But just the four pitches, G, G, G, E-flat? Nobody owns those.”

Ferrara once served as an expert witness for Andrew Lloyd Webber, who was being sued by Ray Repp, a composer of Catholic folk music. Repp said that the opening few bars of Lloyd Webber’s 1984 “Phantom Song,” from The Phantom of the Opera, bore an overwhelming resemblance to his composition “Till You,” written six years earlier, in 1978. As Ferrara told the story, he sat down at the piano again and played the beginning of both songs, one after the other; sure enough, they sounded strikingly similar. “Here’s Lloyd Webber,” he said, calling out each note as he played it. “Here’s Repp. Same sequence. The only difference is that Andrew writes a perfect fourth and Repp writes a sixth.”

But Ferrara wasn’t quite finished. “I said, let me have everything Andrew Lloyd Webber wrote prior to 1978—Jesus Christ Superstar, Joseph, Evita.” He combed through every score, and in Joseph and the Amazing Technicolor Dreamcoat he found what he was looking for. “It’s the song ‘Benjamin Calypso.’” Ferrara started playing it. It was immediately familiar. “It’s the first phrase of ‘Phantom Song.’ It’s even using the same notes. But wait—it gets better. Here’s ‘Close Every Door,’ from a 1969 concert performance of Joseph.” Ferrara is a dapper, animated man, with a thin, well-manicured mustache, and thinking about the Lloyd Webber case was almost enough to make him jump up and down. He began to play again. It was the second phrase of “Phantom.” “The first half of ‘Phantom’ is in ‘Benjamin Calypso.’ The second half is in ‘Close Every Door.’ They are identical. On the button. In the case of the first theme, in fact, ‘Benjamin Calypso’ is closer to the first half of the theme at issue than the plaintiff’s song. Lloyd Webber writes something in 1984, and he borrows from himself.”

In the “Choir” case, the Beastie Boys’ copying didn’t amount to theft because it was too trivial. In the “Phantom” case, what Lloyd Webber was alleged to have copied didn’t amount to theft because the material in question wasn’t original to his accuser. Under copyright law, what matters is not that you copied someone else’s work. What matters is what you copied, and how much you copied. Intellectual-property doctrine isn’t a straightforward application of the ethical principle “Thou shalt not steal.” At its core is the notion that there are certain situations where you can steal. The protections of copyright, for instance, are time-limited; once something passes into the public domain, anyone can copy it without restriction. Or suppose that you invented a cure for breast cancer in your basement lab. Any patent you received would protect your intellectual property for twenty years, but after that anyone could take your invention. You get an initial monopoly on your creation because we want to provide economic incentives for people to invent things like cancer drugs. But everyone gets to steal your breast-cancer cure—after a decent interval—because it is also in society’s interest to let as many people as possible copy your invention; only then can others learn from it, and build on it, and come up with better and cheaper alternatives. This balance between the protecting and the limiting of intellectual property is, in fact, enshrined in the Constitution: “Congress shall have the power to promote the Progress of Science and useful Arts, by securing for limited”—note that specification, limited—“Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”

4.

So is it true that words belong to the person who wrote them, just as other kinds of property belong to their owners? Actually, no. As the Stanford law professor Lawrence Lessig argues in his book Free Culture:

In ordinary language, to call a copyright a “property” right is a bit misleading, for the property of copyright is an odd kind of property…I understand what I am taking when I take the picnic table you put in your backyard. I am taking a thing, the picnic table, and after I take it, you don’t have it. But what am I taking when I take the good idea you had to put a picnic table in the backyard—by, for example, going to Sears, buying a table, and putting it in my backyard? What is the thing that I am taking then?

The point is not just about the thingness of picnic tables versus ideas, though that is an important difference. The point instead is that in the ordinary case—indeed, in practically every case except for a narrow range of exceptions—ideas released to the world are free. I don’t take anything from you when I copy the way you dress—though I might seem weird if I do it every day…Instead, as Thomas Jefferson said (and this is especially true when I copy the way someone dresses), “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.”

Lessig argues that, when it comes to drawing this line between private interests and public interests in intellectual property, the courts and Congress have, in recent years, swung much too far in the direction of private interests. He writes, for instance, about the fight by some developing countries to get access to inexpensive versions of Western drugs through what is called parallel importation—buying drugs from another developing country that has been licensed to produce patented medicines. The move would save countless lives. But it has been opposed by the United States not on the ground that it would cut into the profits of Western pharmaceutical companies (they don’t sell that many patented drugs in developing countries anyway) but on the ground that it violates the sanctity of intellectual property. “We as a culture have lost this sense of balance,” Lessig writes. “A certain property fundamentalism, having no connection to our tradition, now reigns in this culture.”

Even what Lessig decries as intellectual-property extremism, however, acknowledges that intellectual property has its limits. The United States didn’t say that developing countries could never get access to cheap versions of American drugs. It said only that they would have to wait until the patents on those drugs expired. The arguments that Lessig has with the hard-core proponents of intellectual property are almost all arguments about where and when the line should be drawn between the right to copy and the right to protection from copying, not whether a line should be drawn.

But plagiarism is different, and that’s what’s so strange about it. The ethical rules that govern when it’s acceptable for one writer to copy another are even more extreme than the most extreme position of the intellectual-property crowd: when it comes to literature, we have somehow decided that copying is never acceptable. Not long ago, the Harvard law professor Laurence Tribe was accused of lifting material from the historian Henry Abraham for his 1985 book, God Save This Honorable Court. What did the charge amount to? In an exposé that appeared in the conservative publication The Weekly Standard, Joseph Bottum produced a number of examples of close paraphrasing, but his smoking gun was this one borrowed sentence: “Taft publicly pronounced Pitney to be a ‘weak member’ of the Court to whom he could not assign cases.” That’s it. Nineteen words.

Not long after I learned about Frozen, I went to see a friend of mine who works in the music industry. We sat in his living room on the Upper East Side, facing each other in easy chairs, as he worked his way through a mountain of CDs. He played “Angel,” by the reggae singer Shaggy, and then “The Joker,” by the Steve Miller Band, and told me to listen very carefully to the similarity in bass lines. He played Led Zeppelin’s “Whole Lotta Love” and then Muddy Waters’s “You Need Love,” to show the extent to which Led Zeppelin had mined the blues for inspiration. He played “Twice My Age,” by Shabba Ranks and Krystal, and then the saccharine ’70s pop standard “Seasons in the Sun,” until I could hear the echoes of the second song in the first. He played “Last Christmas,” by Wham!, followed by Barry Manilow’s “Can’t Smile Without You,” to explain why Manilow might have been startled when he first heard that song, and then “Joanna,” by Kool and the Gang, because, in a different way, “Last Christmas” was an homage to Kool and the Gang as well. “That sound you hear in Nirvana,” my friend said at one point, “that soft and then loud kind of exploding thing, a lot of that was inspired by the Pixies. Yet Kurt Cobain”—Nirvana’s lead singer and songwriter—“was such a genius that he managed to make it his own. And ‘Smells Like Teen Spirit’?”—here he was referring to perhaps the best-known Nirvana song. “That’s Boston’s ‘More Than a Feeling.’” He began to hum the riff of the Boston hit, and said, “The first time I heard ‘Teen Spirit,’ I said, ‘That guitar lick is from “More Than a Feeling.” ’ But it was different—it was urgent and brilliant and new.”

He played another CD. It was Rod Stewart’s “Do Ya Think I’m Sexy,” a huge hit from the 1970s. The chorus has a distinctive, catchy hook—the kind of tune that millions of Americans probably hummed in the shower the year it came out. Then he put on “Taj Mahal,” by the Brazilian artist Jorge Ben Jor, which was recorded several years before the Rod Stewart song. In his twenties, my friend was a DJ at various downtown clubs, and at some point he’d become interested in world music. “I caught it back then,” he said. A small, sly smile spread across his face. The opening bars of “Taj Mahal” were very South American, a world away from what we had just listened to. And then I heard it. It was so obvious and unambiguous that I laughed out loud; virtually note for note, it was the hook from “Do Ya Think I’m Sexy.” It was possible that Rod Stewart had independently come up with that riff, because resemblance is not proof of influence. It was also possible that he’d been in Brazil, listened to some local music, and liked what he heard.

My friend had hundreds of these examples. We could have sat in his living room playing at musical genealogy for hours. Did the examples upset him? Of course not, because he knew enough about music to know that these patterns of influence—cribbing, tweaking, transforming—were at the very heart of the creative process. True, copying could go too far. There were times when one artist was simply replicating the work of another, and to let that pass inhibited true creativity. But it was equally dangerous to be overly vigilant in policing creative expression, because if Led Zeppelin hadn’t been free to mine the blues for inspiration, we wouldn’t have got “Whole Lotta Love,” and if Kurt Cobain couldn’t listen to “More Than a Feeling” and pick out and transform the part he really liked, we wouldn’t have “Smells Like Teen Spirit”—and, in the evolution of rock, “Smells Like Teen Spirit” was a real step forward from “More Than a Feeling.” A successful music executive has to understand the distinction between borrowing that is transformative and borrowing that is merely derivative, and that distinction, I realized, was what was missing from the discussion of Bryony Lavery’s borrowings. Yes, she had copied my work. But no one was asking why she had copied it, or what she had copied, or whether her copying served some larger purpose.

5.

Bryony Lavery came to see me in early October of that year. It was a beautiful Saturday afternoon, and we met at my apartment. She is in her fifties, with short, tousled blond hair and pale blue eyes, and was wearing jeans and a loose green shirt and clogs. There was something rugged and raw about her. In the Times the previous day, the theater critic Ben Brantley had not been kind to her new play, Last Easter. This was supposed to be her moment of triumph. Frozen had been nominated for a Tony. Last Easter had opened Off Broadway. And now? She sat down heavily at my kitchen table. “I’ve had the absolute gamut of emotions,” she said, playing nervously with her hands as she spoke, as if she needed a cigarette. “I think when one’s working, one works between absolute confidence and absolute doubt, and I got a huge dollop of each. I was terribly confident that I could write well after Frozen, and then this opened a chasm of doubt.” She looked up at me. “I’m terribly sorry,” she said.

Lavery began to explain: “What happens when I write is that I find that I’m somehow zoning in on a number of things. I find that I’ve cut things out of newspapers because the story or something in them is interesting to me, and seems to me to have a place onstage. Then it starts coagulating. It’s like the soup starts thickening. And then a story, which is also a structure, starts emerging. I’d been reading thrillers like The Silence of the Lambs, about fiendishly clever serial killers. I’d also seen a documentary of the victims of the Yorkshire killers, Myra Hindley and Ian Brady, who were called the Moors Murderers. They spirited away several children. It seemed to me that killing somehow wasn’t fiendishly clever. It was the opposite of clever. It was as banal and stupid and destructive as it could be. There are these interviews with the survivors, and what struck me was that they appeared to be frozen in time. And one of them said, ‘If that man was out now, I’m a forgiving man but I couldn’t forgive him. I’d kill him.’ That’s in Frozen. I was thinking about that. Then my mother went into hospital for a very simple operation, and the surgeon punctured her womb, and therefore her intestine, and she got peritonitis and died.”

When Lavery started talking about her mother, she stopped, and had to collect herself. “She was seventy-four, and what occurred to me is that I utterly forgave him. I thought it was an honest mistake. I’m very sorry it happened to my mother, but it’s an honest mistake.” Lavery’s feelings confused her, though, because she could think of people in her own life whom she had held grudges against for years, for the most trivial of reasons. “In a lot of ways, Frozen was an attempt to understand the nature of forgiveness,” she said.

Lavery settled, in the end, on a play with three characters. The first is a serial killer named Ralph who kidnaps and murders a young girl. The second is the murdered girl’s mother, Nancy. The third is a psychiatrist from New York, Agnetha, who goes to England to examine Ralph. In the course of the play, the three lives slowly intersect—and the characters gradually change and become “unfrozen” as they come to terms with the idea of forgiveness. For the character of Ralph, Lavery says that she drew on a book about a serial killer titled The Murder of Childhood, by Ray Wyre and Tim Tate. For the character of Nancy, she drew on an article written in the Guardian by a woman named Marian Partington, whose sister had been murdered by the serial killers Frederick and Rosemary West. And, for the character of Agnetha, Lavery drew on a reprint of my article that she had read in a British publication. “I wanted a scientist who would understand,” Lavery said—a scientist who could explain how it was possible to forgive a man who had killed your daughter, who could explain that a serial killing was not a crime of evil but a crime of illness. “I wanted it to be accurate,” she added.

So why didn’t she credit me and Lewis? How could she have been so meticulous about accuracy but not about attribution? Lavery didn’t have an answer. “I thought it was OK to use it,” she said with an embarrassed shrug. “It never occurred to me to ask you. I thought it was news.”

She was aware of how hopelessly inadequate that sounded, and when she went on to say that my article had been in a big folder of source material that she had used in the writing of the play, and that the folder had got lost during the play’s initial run, in Birmingham, she was aware of how inadequate that sounded, too.

But then Lavery began to talk about Marian Partington, her other important inspiration, and her story became more complicated. While she was writing Frozen, Lavery said, she wrote to Partington to inform her of how much she was relying on Partington’s experiences. And when Frozen opened in London, she and Partington met and talked. In reading through articles on Lavery in the British press, I found this, from the Guardian two years ago, long before the accusations of plagiarism surfaced:

Lavery is aware of the debt she owes to Partington’s writing and is eager to acknowledge it. “I always mention it, because I am aware of the enormous debt that I owe to the generosity of Marian Partington’s piece… You have to be hugely careful when writing something like this, because it touches on people’s shattered lives and you wouldn’t want them to come across it unawares.”

Lavery wasn’t indifferent to other people’s intellectual property, then; she was just indifferent to my intellectual property. That’s because, in her eyes, what she took from me was different. It was, as she put it, “news.” She copied my description of Dorothy Lewis’s collaborator, Jonathan Pincus, conducting a neurological examination. She copied the description of the disruptive neurological effects of prolonged periods of high stress. She copied my transcription of the television interview with Franklin. She reproduced a quote that I had taken from a study of abused children, and she copied a quotation from Lewis on the nature of evil. She didn’t copy my musings, or conclusions, or structure. She lifted sentences like “It is the function of the cortex—and, in particular, those parts of the cortex beneath the forehead, known as the frontal lobes—to modify the impulses that surge up from within the brain, to provide judgment, to organize behavior and decision-making, to learn and adhere to rules of everyday life.” It is difficult to have pride of authorship in a sentence like that. My guess is that it’s a reworked version of something I read in a textbook. Lavery knew that failing to credit Partington would have been wrong. Borrowing the personal story of a woman whose sister was murdered by a serial killer matters because that story has real emotional value to its owner. As Lavery put it, it touches on someone’s shattered life. Are boilerplate descriptions of physiological functions in the same league?

It also matters how Lavery chose to use my words. Borrowing crosses the line when it is used for a derivative work. It’s one thing if you’re writing a history of the Kennedys, like Doris Kearns Goodwin, and borrow, without attribution, from another history of the Kennedys. But Lavery wasn’t writing another profile of Dorothy Lewis. She was writing a play about something entirely new—about what would happen if a mother met the man who killed her daughter. And she used my descriptions of Lewis’s work and the outline of Lewis’s life as a building block in making that confrontation plausible. Isn’t that the way creativity is supposed to work? Old words in the service of a new idea aren’t the problem. What inhibits creativity is new words in the service of an old idea.

And this is the second problem with plagiarism. It is not merely extremist. It has also become disconnected from the broader question of what does and does not inhibit creativity. We accept the right of one writer to engage in a full-scale knockoff of another—think how many serial-killer novels have been cloned from The Silence of the Lambs. Yet, when Kathy Acker incorporated parts of a Harold Robbins sex scene verbatim in a satiric novel, she was denounced as a plagiarist (and threatened with a lawsuit). When I worked at a newspaper, we were routinely dispatched to “match” a story from the Times: to do a new version of someone else’s idea. But had we “matched” any of the Times’ words—even the most banal of phrases—it could have been a firing offense. The ethics of plagiarism have turned into the narcissism of small differences: because journalism cannot own up to its heavily derivative nature, it must enforce originality on the level of the sentence.

Dorothy Lewis says that one of the things that hurt her most about Frozen was that Agnetha turns out to have had an affair with her collaborator, David Nabkus. Lewis feared that people would think she had had an affair with her collaborator, Jonathan Pincus. “That’s slander,” Lewis told me. “I’m recognizable in that. Enough people have called me and said, ‘Dorothy, it’s about you,’ and if everything up to that point is true, then the affair becomes true in the mind. So that is another reason that I feel violated. If you are going to take the life of somebody, and make them absolutely identifiable, you don’t create an affair, and you certainly don’t have that as a climax of the play.”

It is easy to understand how shocking it must have been for Lewis to sit in the audience and see her “character” admit to that indiscretion. But the truth is that Lavery has every right to create an affair for Agnetha, because Agnetha is not Dorothy Lewis. She is a fictional character, drawn from Lewis’s life but endowed with a completely imaginary set of circumstances and actions. In real life, Lewis kissed Ted Bundy on the cheek, and in some versions of Frozen, Agnetha kisses Ralph. But Lewis kissed Bundy only because he kissed her first, and there’s a big difference between responding to a kiss from a killer and initiating one. When we first see Agnetha, she’s rushing out of the house and thinking murderous thoughts on the airplane. Dorothy Lewis also charges out of her house and thinks murderous thoughts. But the dramatic function of that scene is to make us think, in that moment, that Agnetha is crazy. And the one inescapable fact about Lewis is that she is not crazy: she has helped get people to rethink their notions of criminality because of her unshakable command of herself and her work. Lewis is upset not just about how Lavery copied her life story, in other words, but about how Lavery changed her life story. She’s not merely upset about plagiarism. She’s upset about art—about the use of old words in the service of a new idea—and her feelings are perfectly understandable, because the alterations of art can be every bit as unsettling and hurtful as the thievery of plagiarism. It’s just that art is not a breach of ethics.

When I read the original reviews of Frozen, I noticed that time and again critics would use, without attribution, some version of the sentence “The difference between a crime of evil and a crime of illness is the difference between a sin and a symptom.” That’s my phrase, of course. I wrote it. Lavery borrowed it from me, and now the critics were borrowing it from her. The plagiarist was being plagiarized. In this case, there is no “art” defense: nothing new was being done with that line. And this was not “news.” Yet do I really own “sins and symptoms”? There is a quote by Gandhi, it turns out, using the same two words, and I’m sure that if I were to plow through the body of English literature I would find the path littered with crimes of evil and crimes of illness. The central fact about the “Phantom” case is that Ray Repp, if he was borrowing from Andrew Lloyd Webber, certainly didn’t realize it, and Andrew Lloyd Webber didn’t realize that he was borrowing from himself. Creative property, Lessig reminds us, has many lives—the newspaper arrives at our door, it becomes part of the archive of human knowledge, then it wraps fish. And, by the time ideas pass into their third and fourth lives, we lose track of where they came from, and we lose control of where they are going. The final dishonesty of the plagiarism fundamentalists is to encourage us to pretend that these chains of influence and evolution do not exist, and that a writer’s words have a virgin birth and an eternal life. I suppose that I could get upset about what happened to my words. I could also simply acknowledge that I had a good, long ride with that line—and let it go.

“It’s been absolutely bloody, really, because it attacks my own notion of my character,” Lavery said, sitting at my kitchen table. A bouquet of flowers she had brought was on the counter behind her. “It feels absolutely terrible. I’ve had to go through the pain for being careless. I’d like to repair what happened, and I don’t know how to do that. I just didn’t think I was doing the wrong thing… and then the article comes out in the New York Times and every continent in the world.” There was a long silence. She was heartbroken. But, more than that, she was confused, because she didn’t understand how 675 rather ordinary words could bring the walls tumbling down. “It’s been horrible and bloody.” She began to cry. “I’m still composting what happened. It will be for a purpose… whatever that purpose is.”

November 22, 2004

Connecting the Dots

THE PARADOXES OF INTELLIGENCE REFORM

1.

In the fall of 1973, the Syrian army began to gather a large number of tanks, artillery batteries, and infantry along its border with Israel. Simultaneously, to the south, the Egyptian army canceled all leaves, called up thousands of reservists, and launched a massive military exercise, building roads and preparing anti-aircraft and artillery positions along the Suez Canal. On October 4, an Israeli aerial reconnaissance mission showed that the Egyptians had moved artillery into offensive positions. That evening, Aman, the Israeli military intelligence agency, learned that portions of the Soviet fleet near Port Said and Alexandria had set sail, and that the Soviet government had begun airlifting the families of Soviet advisers out of Cairo and Damascus. Then, at four o’clock in the morning on October 6, Israel’s director of military intelligence received an urgent telephone call from one of the country’s most trusted intelligence sources. Egypt and Syria, the source said, would attack later that day. Top Israeli officials immediately called a meeting. Was war imminent? The head of Aman, Major General Eli Zeira, looked over the evidence and said he didn’t think so. He was wrong. That afternoon, Syria attacked from the east, overwhelming the thin Israeli defenses in the Golan Heights, and Egypt attacked from the south, bombing Israeli positions and sending eight thousand infantry streaming across the Suez. Despite all the warnings of the previous weeks, Israeli officials were caught by surprise. Why couldn’t they connect the dots?

If you start on the afternoon of October 6 and work backward, the trail of clues pointing to an attack seems obvious; you’d have to conclude that something was badly wrong with the Israeli intelligence service. On the other hand, if you start several years before the Yom Kippur War and work forward, re-creating what people in Israeli intelligence knew in the same order that they knew it, a very different picture emerges. In the fall of 1973, Egypt and Syria certainly looked as if they were preparing to go to war. But, in the Middle East of the time, countries always looked as if they were going to war. In the fall of 1971, for instance, both Egypt’s president and its minister of war stated publicly that the hour of battle was approaching. The Egyptian army was mobilized. Tanks and bridging equipment were sent to the canal. Offensive positions were readied. And nothing happened. In December of 1972, the Egyptians mobilized again. The army furiously built fortifications along the canal. A reliable source told Israeli intelligence that an attack was imminent. Nothing happened. In the spring of 1973, the president of Egypt told Newsweek that everything in his country “is now being mobilized in earnest for the resumption of battle.” Egyptian forces were moved closer to the canal. Extensive fortifications were built along the Suez. Blood donors were rounded up. Civil-defense personnel were mobilized. Blackouts were imposed throughout Egypt. A trusted source told Israeli intelligence that an attack was imminent. It didn’t come. Between January and October of 1973, the Egyptian army mobilized nineteen times without going to war. The Israeli government couldn’t mobilize its army every time its neighbors threatened war. Israel is a small country with a citizen army. Mobilization was disruptive and expensive, and the Israeli government was acutely aware that if its army was mobilized and Egypt and Syria weren’t serious about war, the very act of mobilization might cause them to become serious about war.

Nor did the other signs seem remarkable. The fact that the Soviet families had been sent home could have signified nothing more than a falling-out between the Arab states and Moscow. Yes, a trusted source called at four in the morning, with definite word of a late-afternoon attack, but his last two attack warnings had been wrong. What’s more, the source said that the attack would come at sunset, and an attack so late in the day wouldn’t leave enough time for opening air strikes. Israeli intelligence didn’t see the pattern of Arab intentions, in other words, because, until Egypt and Syria actually attacked, on the afternoon of October 6, 1973, their intentions didn’t form a pattern. They formed a Rorschach blot. What is clear in hindsight is rarely clear before the fact. It’s an obvious point, but one that nonetheless bears repeating, particularly when we’re in the midst of assigning blame for the surprise attack of September 11.

2.

Of the many postmortems conducted after September 11, the one that has received the most attention is The Cell: Inside the 9/11 Plot, and Why the F.B.I. and C.I.A. Failed to Stop It by John Miller, Michael Stone, and Chris Mitchell. The authors begin their tale with El Sayyid Nosair, the Egyptian who was arrested in November of 1990 for shooting Rabbi Meir Kahane, the founder of the Jewish Defense League, in the ballroom of the Marriott Hotel in midtown Manhattan. Nosair’s apartment in New Jersey was searched, and investigators found sixteen boxes of files, including training manuals from the Army Special Warfare School; copies of teletypes that had been routed to the Joint Chiefs of Staff; bomb-making manuals; and maps, annotated in Arabic, of landmarks like the Statue of Liberty, Rockefeller Center, and the World Trade Center. According to The Cell, Nosair was connected to gunrunners and to Islamic radicals in Brooklyn, who were in turn behind the World Trade Center bombing two and a half years later, which was masterminded by Ramzi Yousef, who then showed up in Manila in 1994, apparently plotting to kill the pope, crash a plane into the Pentagon or the CIA, and bomb as many as twelve transcontinental airliners simultaneously. And who was Yousef associating with in the Philippines? Mohammed Khalifa, Wali Khan Amin-Shah, and Ibrahim Munir, all of whom had fought alongside, pledged a loyalty oath to, or worked for a shadowy Saudi Arabian millionaire named Osama bin Laden.

Miller was a network-television correspondent throughout much of the past decade, and the best parts of The Cell recount his own experiences in covering the terrorist story. He is an extraordinary reporter. At the time of the first World Trade Center attack, in February of 1993, he clapped a flashing light on the dashboard of his car and followed the wave of emergency vehicles downtown. (At the bombing site, he was continuously trailed by a knot of reporters—I was one of them—who had concluded that the best way to learn what was going on was to try to overhear his conversations.) Miller became friends with the FBI agents who headed the New York counterterrorist office—Neil Herman and John O’Neill, in particular—and he became as obsessed with Al Qaeda as they were. He was in Yemen, with the FBI, after Al Qaeda bombed the U.S.S. Cole. In 1998, at the Marriott in Islamabad, he and his cameraman met someone known to them only as Akhtar, who spirited them across the border into the hills of Afghanistan to interview Osama bin Laden. In The Cell, the period from 1990 through September 11 becomes a seamless, devastating narrative: the evolution of Al Qaeda. “How did this happen to us?” the book asks in its opening pages. The answer, the authors argue, can be found by following the “thread” connecting Kahane’s murder to September 11. In the events of the past decade, they declare, there is a clear “recurring pattern.”

The same argument is made by Senator Richard Shelby, vice chairman of the Senate Select Committee on Intelligence, in his investigative report on September 11, released this past December. The report is a lucid and powerful document, in which Shelby painstakingly points out all the missed or misinterpreted signals pointing to a major terrorist attack. The CIA knew that two suspected Al Qaeda operatives, Khalid al-Mihdhar and Nawaf al-Hazmi, had entered the country, but the CIA didn’t tell the FBI or the NSC. An FBI agent in Phoenix sent a memo to headquarters that began with the sentence “The purpose of this communication is to advise the Bureau and New York of the possibility of a coordinated effort by Osama Bin Laden to send students to the United States to attend civilian aviation universities and colleges.” But the FBI never acted on the information, and failed to connect it with reports that terrorists were interested in using airplanes as weapons. The FBI took into custody the suspected terrorist Zacarias Moussaoui, on account of his suspicious behavior at flight school, but was unable to integrate his case into a larger picture of terrorist behavior. “The most fundamental problem… is our Intelligence Community’s inability to ‘connect the dots’ available to it before September 11, 2001, about terrorists’ interest in attacking symbolic American targets,” the Shelby report states. The phrase “connect the dots” appears so often in the report that it becomes a kind of mantra. There was a pattern, as plain as day in retrospect, yet the vaunted American intelligence community simply could not see it.

None of these postmortems, however, answer the question raised by the Yom Kippur War: was this pattern obvious before the attack? This question—whether we revise our judgment of events after the fact—is something that psychologists have paid a great deal of attention to. For example, on the eve of Richard Nixon’s historic visit to China, the psychologist Baruch Fischhoff asked a group of people to estimate the probability of a series of possible outcomes of the trip. What were the chances that the trip would lead to permanent diplomatic relations between China and the United States? That Nixon would meet with the leader of China, Mao Tse-tung, at least once? That Nixon would call the trip a success? As it turned out, the trip was a diplomatic triumph, and Fischhoff then went back to the same people and asked them to recall what their estimates of the different outcomes of the visit had been. He found that the subjects now, overwhelmingly, “remembered” being more optimistic than they had actually been. If you originally thought that it was unlikely that Nixon would meet with Mao, afterward, when the press was full of accounts of Nixon’s meeting with Mao, you’d “remember” that you had thought the chances of a meeting were pretty good. Fischhoff calls this phenomenon “creeping determinism”—the sense that grows on us, in retrospect, that what has happened was actually inevitable—and the chief effect of creeping determinism, he points out, is that it turns unexpected events into expected events. As he writes, “The occurrence of an event increases its reconstructed probability and makes it less surprising than it would have been had the original probability been remembered.”

To read the Shelby report, or the seamless narrative from Nosair to bin Laden in The Cell, is to be convinced that if the CIA and the FBI had simply been able to connect the dots, what happened on September 11 should not have been a surprise at all. Is this a fair criticism or is it just a case of creeping determinism?

3.

On August 7, 1998, two Al Qaeda terrorists detonated a cargo truck filled with explosives outside the US embassy in Nairobi, killing 213 people and injuring more than four thousand. Miller, Stone, and Mitchell see the Kenyan embassy bombing as a textbook example of intelligence failure. The CIA, they tell us, had identified an Al Qaeda cell in Kenya well before the attack, and its members were under surveillance. They had an eight-page letter, written by an Al Qaeda operative, speaking of the imminent arrival of “engineers”—the code word for bomb makers—in Nairobi. The US ambassador to Kenya, Prudence Bushnell, had begged Washington for more security. A prominent Kenyan lawyer and legislator says that the Kenyan intelligence service warned US intelligence about the plot several months before August 7, and in November of 1997 a man named Mustafa Mahmoud Said Ahmed, who worked for one of Osama bin Laden’s companies, walked into the US embassy in Nairobi and told American intelligence of a plot to blow up the building. What did our officials do? They forced the leader of the Kenyan cell—a US citizen—to return home, and then abruptly halted their surveillance of the group. They ignored the eight-page letter. They allegedly showed the Kenyan intelligence service’s warning to the Mossad, which dismissed it, and after questioning Ahmed, they decided that he wasn’t credible. After the bombing, The Cell tells us, a senior State Department official phoned Bushnell and asked, “How could this have happened?”

“For the first time since the blast,” Miller, Stone, and Mitchell write, “Bushnell’s horror turned to anger. There was too much history. ‘I wrote you a letter,’ she said.”

This is all very damning, but doesn’t it fall into the creeping-determinism trap? It’s an edited version of the past. What we don’t hear about is all the other people whom American intelligence had under surveillance, how many other warnings they received, and how many other tips came in that seemed promising at the time but led nowhere. The central challenge of intelligence gathering has always been the problem of “noise”: the fact that useless information is vastly more plentiful than useful information. Shelby’s report mentions that the FBI’s counterterrorism division has sixty-eight thousand outstanding and unassigned leads dating back to 1995. And, of those, probably no more than a few hundred are useful. Analysts, in short, must be selective, and the decisions made in Kenya, by that standard, do not seem unreasonable. Surveillance on the cell was shut down, but, then, its leader had left the country. Bushnell warned Washington—but, as The Cell admits, there were bomb warnings in Africa all the time. Officials at the Mossad thought the Kenyan intelligence was dubious, and the Mossad ought to know. Ahmed may have worked for bin Laden but he failed a polygraph test, and it was also learned that he had previously given similar—groundless—warnings to other embassies in Africa. When a man comes into your office, fails a lie-detector test, and is found to have shopped the same unsubstantiated story all over town, can you be blamed for turning him out?

Miller, Stone, and Mitchell make the same mistake when they quote from a transcript of a conversation that was recorded by Italian intelligence in August of 2001 between two Al Qaeda operatives, Abdel Kader Es Sayed and a man known as al Hilal. This, they say, is yet another piece of intelligence that “seemed to forecast the September 11 attacks.”

“I’ve been studying airplanes,” al Hilal tells Es Sayed. “If God wills, I hope to be able to bring you a window or a piece of a plane the next time I see you.”

“What, is there a jihad planned?” Es Sayed asks.

“In the future, listen to the news and remember these words: ‘Up above,’” al Hilal replies. Es Sayed thinks that al Hilal is referring to an operation in his native Yemen, but al Hilal corrects him: “But the surprise attack will come from the other country, one of those attacks you will never forget.”

A moment later al Hilal says about the plan, “It is something terrifying that goes from south to north, east to west. The person who devised this plan is a madman, but a genius. He will leave them frozen [in shock].”

This is a tantalizing exchange. It would now seem that it refers to September 11. But in what sense was it a “forecast”? It gave neither time nor place nor method nor target. It suggested only that there were terrorists out there who liked to talk about doing something dramatic with an airplane—which did not, it must be remembered, reliably distinguish them from any other terrorists of the past thirty years.

In the real world, intelligence is invariably ambiguous. Information about enemy intentions tends to be short on detail. And information that’s rich in detail tends to be short on intentions. In April of 1941, for instance, the Allies learned that Germany had moved a huge army up to the Russian front. The intelligence was beyond dispute: the troops could be seen and counted. But what did it mean? Churchill concluded that Hitler wanted to attack Russia. Stalin concluded that Hitler was serious about attacking, but only if the Soviet Union didn’t meet the terms of the German ultimatum. The British foreign secretary, Anthony Eden, thought that Hitler was bluffing, in the hope of winning further Russian concessions. British intelligence thought—at least, in the beginning—that Hitler simply wanted to reinforce his eastern frontier against a possible Soviet attack. The only way for this piece of intelligence to have been definitive would have been if the Allies had had a second piece of intelligence—like the phone call between al Hilal and Es Sayed—that demonstrated Germany’s true purpose. Similarly, the only way the al Hilal phone call would have been definitive is if we’d also had intelligence as detailed as the Allied knowledge of German troop movements. But rarely do intelligence services have the luxury of both kinds of information. Nor are their analysts mind readers. It is only with hindsight that human beings acquire that skill.

The Cell tells us that, in the final months before September 11, Washington was frantic with worry:

A spike in phone traffic among suspected Al Qaeda members in the early part of the summer [of 2001], as well as debriefings of [an Al Qaeda operative in custody] who had begun cooperating with the government, convinced investigators that bin Laden was planning a significant operation—one intercepted Al Qaeda message spoke of a “Hiroshima-type” event—and that he was planning it soon. Through the summer, the CIA repeatedly warned the White House that attacks were imminent.

The fact that these worries did not protect us is not evidence of the limitations of the intelligence community. It is evidence of the limitations of intelligence.

4.

In the early 1970s, a professor of psychology at Stanford University named David L. Rosenhan gathered together a painter, a graduate student, a pediatrician, a psychiatrist, a housewife, and three psychologists. He told them to check into different psychiatric hospitals under aliases, with the complaint that they had been hearing voices. They were instructed to say that the voices were unfamiliar, and that they heard words like empty, thud, and hollow. Apart from that initial story, the pseudo patients were instructed to answer every question truthfully, to behave as they normally would, and to tell the hospital staff—at every opportunity—that the voices were gone and that they had experienced no further symptoms. The eight subjects were hospitalized, on average, for nineteen days. One was kept for almost two months. Rosenhan wanted to find out if the hospital staffs would ever see through the ruse. They never did.

Rosenhan’s test is, in a way, a classic intelligence problem. Here was a signal (a sane person) buried in a mountain of conflicting and confusing noise (a mental hospital), and the intelligence analysts (the doctors) were asked to connect the dots—and they failed spectacularly. In the course of their hospital stay, the eight pseudo patients were given a total of twenty-one hundred pills. They underwent psychiatric interviews, and sober case summaries documenting their pathologies were written up. They were asked by Rosenhan to take notes documenting how they were treated, and this quickly became part of their supposed pathology. “Patient engaging in writing behavior,” one nurse ominously wrote in her notes. Having been labeled as ill upon admission, they could not shake the diagnosis. “Nervous?” a friendly nurse asked one of the subjects as he paced the halls one day. “No,” he corrected her, to no avail, “bored.”

The solution to this problem seems obvious enough. Doctors and nurses need to be made alert to the possibility that sane people sometimes get admitted to mental hospitals. So Rosenhan went to a research-and-teaching hospital and informed the staff that at some point in the next three months, he would once again send over one or more of his pseudo patients. This time, of the hundred and ninety-three patients admitted in the three-month period, forty-one were identified by at least one staff member as being almost certainly sane. Once again, however, they were wrong. Rosenhan hadn’t sent anyone over. In attempting to solve one kind of intelligence problem (overdiagnosis), the hospital simply created another problem (underdiagnosis). This is the second, and perhaps more serious, consequence of creeping determinism: in our zeal to correct what we believe to be the problems of the past, we end up creating new problems for the future.

Pearl Harbor, for example, was widely considered to be an organizational failure. The United States had all the evidence it needed to predict the Japanese attack, but the signals were scattered throughout the various intelligence services. The army and the navy didn’t talk to each other. They spent all their time arguing and competing. This was, in part, why the Central Intelligence Agency was created, in 1947—to ensure that all intelligence would be collected and processed in one place. Twenty years after Pearl Harbor, the United States suffered another catastrophic intelligence failure, at the Bay of Pigs: the Kennedy administration grossly underestimated the Cubans’ capacity to fight and their support for Fidel Castro. This time, however, the diagnosis was completely different. As Irving L. Janis concluded in his famous study of “groupthink,” the root cause of the Bay of Pigs fiasco was that the operation was conceived by a small, highly cohesive group whose close ties inhibited the beneficial effects of argument and competition. Centralization was now the problem. One of the most influential organizational sociologists of the postwar era, Harold Wilensky, went out of his way to praise the “constructive rivalry” fostered by Franklin D. Roosevelt, which, he says, is why the President had such formidable intelligence on how to attack the economic ills of the Great Depression. In his classic 1967 work Organizational Intelligence, Wilensky pointed out that Roosevelt would

use one anonymous informant’s information to challenge and check another’s, putting both on their toes; he recruited strong personalities and structured their work so that clashes would be certain… In foreign affairs, he gave Moley and Welles tasks that overlapped those of Secretary of State Hull; in conservation and power, he gave Ickes and Wallace identical missions; in welfare, confusing both functions and initials, he assigned PWA to Ickes, WPA to Hopkins; in politics, Farley found himself competing with other political advisors for control over patronage. The effect: the timely advertisement of arguments, with both the experts and the President pressured to consider the main choices as they came boiling up from below.

The intelligence community that we had prior to September 11 was the direct result of this philosophy. The FBI and the CIA were supposed to be rivals, just as Ickes and Wallace were rivals. But now we’ve changed our minds. The FBI and the CIA, Senator Shelby tells us disapprovingly, argue and compete with one another. The September 11 story, his report concludes, “should be an object lesson in the perils of failing to share information promptly and efficiently between (and within) organizations.” Shelby wants recentralization and more focus on cooperation. He wants a “central national level knowledge-compiling entity standing above and independent from the disputatious bureaucracies.” He thinks the intelligence service should be run by a small, highly cohesive group, and so he suggests that the FBI be removed from the counterterrorism business entirely. The FBI, according to Shelby, is governed by

deeply entrenched individual mind-sets that prize the production of evidence-supported narratives of defendant wrongdoing over the drawing of probabilistic inferences based on incomplete and fragmentary information in order to support decision-making… Law enforcement organizations handle information, reach conclusions, and ultimately just think differently than intelligence organizations. Intelligence analysts would doubtless make poor policemen, and it has become very clear that policemen make poor intelligence analysts.

In his 2003 State of the Union message, President George W. Bush did what Shelby wanted, and announced the formation of the Terrorist Threat Integration Center—a special unit combining the antiterrorist activities of the FBI and the CIA. The cultural and organizational diversity of the intelligence business, once prized, is now despised.

The truth is, though, that it is just as easy, in the wake of September 11, to make the case for the old system. Isn’t it an advantage that the FBI doesn’t think like the CIA? It was the FBI, after all, that produced two of the most prescient pieces of analysis—the request by the Minneapolis office for a warrant to secretly search Zacarias Moussaoui’s belongings, and the now famous Phoenix memo. In both cases, what was valuable about the FBI’s analysis was precisely the way in which it differed from the traditional “big picture,” probabilistic inference making of the analyst. The FBI agents in the field focused on a single case, dug deep, and came up with an “evidence-supported narrative of defendant wrongdoing” that spoke volumes about a possible Al Qaeda threat.

The same can be said for the alleged problem of rivalry. The Cell describes what happened after police in the Philippines searched the apartment that Ramzi Yousef shared with his coconspirator, Abdul Hakim Murad. Agents from the FBI’s counterterrorism unit immediately flew to Manila and “bumped up against the CIA.” As the old adage about the Bureau and the Agency has it, the FBI wanted to string Murad up, and the CIA wanted to string him along. The two groups eventually worked together, but only because they had to. It was a relationship “marred by rivalry and mistrust.” But what’s wrong with this kind of rivalry? As Miller, Stone, and Mitchell tell us, the real objection of Neil Herman—the FBI’s former domestic counterterrorism chief—to “working with the CIA had nothing to do with procedure. He just didn’t think the Agency was going to be of any help in finding Ramzi Yousef. ‘Back then, I don’t think the CIA could have found a person in a bathroom,’” Herman says. “ ‘Hell, I don’t think they could have found the bathroom.’” The assumption of the reformers is always that the rivalry between the FBI and the CIA is essentially marital, that it is the dysfunction of people who ought to work together but can’t. But it could equally be seen as a version of the marketplace rivalry that leads to companies working harder and making better products.

There is no such thing as a perfect intelligence system, and every seeming improvement involves a trade-off. A couple of months ago, for example, a suspect in custody in Canada, who was wanted in New York on forgery charges, gave police the names and photographs of five Arab immigrants, who he said had crossed the border into the United States. The FBI put out an alert on December 29, posting the names and photographs on its website, in the “war on terrorism” section. Even President Bush joined in, saying, “We need to know why they have been smuggled into the country, what they’re doing in the country.” As it turned out, the suspect in Canada had made the story up. Afterward, an FBI official said that the agency circulated the photographs in order to “err on the side of caution.” Our intelligence services today are highly sensitive. But this kind of sensitivity is not without its costs. As the political scientist Richard K. Betts wrote in his essay “Analysis, War, and Decision: Why Intelligence Failures Are Inevitable,” “Making warning systems more sensitive reduces the risk of surprise, but increases the number of false alarms, which in turn reduces sensitivity.” When we run out and buy duct tape to seal our windows against chemical attack, and nothing happens, and when the government’s warning light is orange for weeks on end, and nothing happens, we soon begin to doubt every warning that comes our way. Why was the Pacific fleet at Pearl Harbor so unresponsive to signs of an impending Japanese attack? Because, in the week before December 7, 1941, they had checked out seven reports of Japanese submarines in the area—and all seven were false. Rosenhan’s psychiatrists used to miss the sane; then they started to see sane people everywhere. That is a change, but it is not exactly progress.

5.

In the wake of the Yom Kippur War, the Israeli government appointed a special investigative commission, and one of the witnesses called was Major General Zeira, the head of Aman. Why, they asked, had he insisted that war was not imminent? His answer was simple:

The Chief of Staff has to make decisions, and his decisions must be clear. The best support that the head of Aman can give the Chief of Staff is to give a clear and unambiguous estimate, provided that it is done in an objective fashion. To be sure, the clearer and sharper the estimate, the clearer and sharper the mistake—but this is a professional hazard for the head of Aman.

The historians Eliot A. Cohen and John Gooch, in their book Military Misfortunes, argue that it was Zeira’s certainty that had proved fatal: “The culpable failure of Aman’s leaders in September and October 1973 lay not in their belief that Egypt would not attack but in their supreme confidence, which dazzled decision-makers… Rather than impress upon the prime minister, the chief of staff and the minister of defense the ambiguity of the situation, they insisted—until the last day—that there would be no war, period.”

But, of course, Zeira gave an unambiguous answer to the question of war because that is what politicians and the public demanded of him. No one wants ambiguity. Today, the FBI gives us color-coded warnings and speaks of increased chatter among terrorist operatives, and the information is infuriating to us because it is so vague. What does increased chatter mean? We want a prediction. We want to believe that the intentions of our enemies are a puzzle that intelligence services can piece together, so that a clear story emerges. But there rarely is a clear story—at least, not until afterward, when some enterprising journalist or investigative committee decides to write one.

March 10, 2003

The Art of Failure

WHY SOME PEOPLE CHOKE AND OTHERS PANIC

1.

There was a moment in the third and deciding set of the 1993 Wimbledon final when Jana Novotna seemed invincible. She was leading 4-1 and serving at 40-30, meaning that she was one point from winning the game, and just five points from the most coveted championship in tennis. She had just hit a backhand to her opponent, Steffi Graf, that skimmed the net and landed so abruptly on the far side of the court that Graf could only watch, in flat-footed frustration. The stands at Centre Court were packed. The Duke and Duchess of Kent were in their customary places in the royal box. Novotna was in white, poised and confident, her blond hair held back with a headband—and then something happened. She served the ball straight into the net. She stopped and steadied herself for the second serve—the toss, the arch of the back—but this time it was worse. Her swing seemed halfhearted, all arm and no legs and torso. Double fault. On the next point, she was slow to react to a high shot by Graf and badly missed on a forehand volley. At game point, she hit an overhead straight into the net. Instead of 5-1, it was now 4-2. Graf to serve: an easy victory, 4-3. Novotna to serve. She wasn’t tossing the ball high enough. Her head was down. Her movements had slowed markedly. She double-faulted once, twice, three times. Pulled wide by a Graf forehand, Novotna inexplicably hit a low, flat shot directly at Graf, instead of a high crosscourt forehand that would have given her time to get back into position: 4-4. Did she suddenly realize how terrifyingly close she was to victory? Did she remember that she had never won a major tournament before? Did she look across the net and see Steffi Graf—Steffi Graf!—the greatest player of her generation?

On the baseline, awaiting Graf’s serve, Novotna was now visibly agitated, rocking back and forth, jumping up and down. She talked to herself under her breath. Her eyes darted around the court. Graf took the game at love; Novotna, moving as if in slow motion, did not win a single point: 5-4 Graf. On the sidelines, Novotna wiped her racquet and her face with a towel, and then each finger individually. It was her turn to serve. She missed a routine volley wide, shook her head, talked to herself. She missed her first serve, made the second, then, in the resulting rally, mis-hit a backhand so badly that it sailed off her racquet as if launched into flight. Novotna was unrecognizable, not an elite tennis player but a beginner again. She was crumbling under pressure, but exactly why was as baffling to her as it was to all those looking on. Isn’t pressure supposed to bring out the best in us? We try harder. We concentrate harder. We get a boost of adrenaline. We care more about how well we perform. So what was happening to her?

At championship point, Novotna hit a low, cautious, and shallow lob to Graf. Graf answered with an unreturnable overhead smash, and, mercifully, it was over. Stunned, Novotna moved to the net. Graf kissed her twice. At the awards ceremony, the Duchess of Kent handed Novotna the runner-up’s trophy, a small silver plate, and whispered something in her ear, and what Novotna had done finally caught up with her. There she was, sweaty and exhausted, looming over the delicate white-haired Duchess in her pearl necklace. The Duchess reached up and pulled her head down onto her shoulder, and Novotna started to sob.

2.

Human beings sometimes falter under pressure. Pilots crash and divers drown. Under the glare of competition, basketball players cannot find the basket and golfers cannot find the pin. When that happens, we say variously that people have panicked or, to use the sports colloquialism, choked. But what do those words mean? Both are pejoratives. To choke or panic is considered to be as bad as to quit. But are all forms of failure equal? And what do the forms in which we fail say about who we are and how we think? We live in an age obsessed with success, with documenting the myriad ways by which talented people overcome challenges and obstacles. There is as much to be learned, though, from documenting the myriad ways in which talented people sometimes fail.

Choking sounds like a vague and all-encompassing term, yet it describes a very specific kind of failure. For example, psychologists often use a primitive video game to test motor skills. They’ll sit you in front of a computer with a screen that shows four boxes in a row, and a keyboard that has four corresponding buttons in a row. One at a time, x’s start to appear in the boxes on the screen, and you are told that every time this happens you are to push the key corresponding to the box. According to Daniel Willingham, a psychologist at the University of Virginia, if you’re told ahead of time about the pattern in which those x’s will appear, your reaction time in hitting the right key will improve dramatically. You’ll play the game very carefully for a few rounds, until you’ve learned the sequence, and then you’ll get faster and faster. Willingham calls this explicit learning. But suppose you’re not told that the x’s appear in a regular sequence, and even after playing the game for a while, you’re not aware that there is a pattern. You’ll still get faster: you’ll learn the sequence unconsciously. Willingham calls that implicit learning—learning that takes place outside of awareness. These two learning systems are quite separate, based in different parts of the brain. Willingham says that when you are first taught something—say, how to hit a backhand or an overhead forehand—you think it through in a very deliberate, mechanical manner. But as you get better, the implicit system takes over: you start to hit a backhand fluidly, without thinking. The basal ganglia, where implicit learning partially resides, are concerned with force and timing, and when that system kicks in, you begin to develop touch and accuracy, the ability to hit a drop shot or place a serve at a hundred miles per hour. “This is something that is going to happen gradually,” Willingham says. “You hit several thousand forehands, after a while you may still be attending to it. But not very much. In the end, you don’t really notice what your hand is doing at all.”

Under conditions of stress, however, the explicit system sometimes takes over. That’s what it means to choke. When Jana Novotna faltered at Wimbledon, it was because she began thinking about her shots again. She lost her fluidity, her touch. She double-faulted on her serves and mis-hit her overheads, the shots that demand the greatest sensitivity in force and timing. She seemed like a different person—playing with the slow, cautious deliberation of a beginner—because, in a sense, she was a beginner again: she was relying on a learning system that she hadn’t used to hit serves and overhead forehands and volleys since she was first taught tennis, as a child. The same thing has happened to Chuck Knoblauch, the New York Yankees’ second baseman, who inexplicably has had trouble throwing the ball to first base. Under the stress of playing in front of forty thousand fans at Yankee Stadium, Knoblauch finds himself reverting to explicit mode, throwing like a Little Leaguer again.

Panic is something else altogether. Consider the following account of a scuba-diving accident, recounted to me by Ephimia Morphew, a human-factors specialist at NASA: “It was an open-water certification dive, Monterey Bay, California, about ten years ago. I was nineteen. I’d been diving for two weeks. This was my first time in the open ocean without the instructor. Just my buddy and I. We had to go about forty feet down, to the bottom of the ocean, and do an exercise where we took our regulators out of our mouth, picked up a spare one that we had on our vest, and practiced breathing out of the spare. My buddy did hers. Then it was my turn. I removed my regulator. I lifted up my secondary regulator. I put it in my mouth, exhaled, to clear the lines, and then I inhaled, and, to my surprise, it was water. I inhaled water. Then the hose that connected that mouthpiece to my tank, my air source, came unlatched and air from the hose came exploding into my face.

“Right away, my hand reached out for my partner’s air supply, as if I was going to rip it out. It was without thought. It was a physiological response. My eyes are seeing my hand do something irresponsible. I’m fighting with myself. Don’t do it. Then I searched my mind for what I could do. And nothing came to mind. All I could remember was one thing: if you can’t take care of yourself, let your buddy take care of you. I let my hand fall back to my side, and I just stood there.”

This is a textbook example of panic. In that moment, Morphew stopped thinking. She forgot that she had another source of air, one that worked perfectly well and that, moments before, she had taken out of her mouth. She forgot that her partner had a working air supply as well, which could easily be shared, and she forgot that grabbing her partner’s regulator would imperil both of them. All she had was her most basic instinct: get air. Stress wipes out short-term memory. People with lots of experience tend not to panic, because when the stress suppresses their short-term memory they still have some residue of experience to draw on. But what did a novice like Morphew have? I searched my mind for what I could do. And nothing came to mind.

Panic also causes what psychologists call perceptual narrowing. In one study, from the early seventies, a group of subjects were asked to perform a visual-acuity task while undergoing what they thought was a sixty-foot dive in a pressure chamber. At the same time, they were asked to push a button whenever they saw a small light flash on and off in their peripheral vision. The subjects in the pressure chamber had much higher heart rates than the control group, indicating that they were under stress. That stress didn’t affect their accuracy at the visual-acuity task, but they were only half as good as the control group at picking up the peripheral light. “You tend to focus or obsess on one thing,” Morphew says. “There’s a famous airplane example, where the landing light went off, and the pilots had no way of knowing if the landing gear was down. The pilots were so focused on that light that no one noticed the autopilot had been disengaged, and they crashed the plane.” Morphew reached for her buddy’s air supply because it was the only air supply she could see.

Panic, in this sense, is the opposite of choking. Choking is about thinking too much. Panic is about thinking too little. Choking is about loss of instinct. Panic is reversion to instinct. They may look the same, but they are worlds apart.

3.

Why does this distinction matter? In some instances, it doesn’t much. If you lose a close tennis match, it’s of little moment whether you choked or panicked; either way, you lost. But there are clearly cases when how failure happens is central to understanding why failure happens.

Take the plane crash in which John F. Kennedy, Jr., was killed. The details of the flight are well known. On a Friday evening in July of 1999, Kennedy took off with his wife and sister-in-law for Martha’s Vineyard. The night was hazy, and Kennedy flew along the Connecticut coastline, using the trail of lights below him as a guide. At Westerly, Rhode Island, he left the shoreline, heading straight out over Rhode Island Sound, and at that point, apparently disoriented by the darkness and haze, he began a series of curious maneuvers: He banked his plane to the right, farther out into the ocean, and then to the left. He climbed and descended. He sped up and slowed down. Just a few miles from his destination, Kennedy lost control of the plane, and it crashed into the ocean.

Kennedy’s mistake, in technical terms, was that he failed to keep his wings level. That was critical, because when a plane banks to one side it begins to turn and its wings lose some of their vertical lift. Left unchecked, this process accelerates. The angle of the bank increases, the turn gets sharper and sharper, and the plane starts to dive toward the ground in an ever-narrowing corkscrew. Pilots call this the graveyard spiral. And why didn’t Kennedy stop the dive? Because, in times of low visibility and high stress, keeping your wings level—indeed, even knowing whether you are in a graveyard spiral—turns out to be surprisingly difficult. Kennedy failed under pressure.

Had Kennedy been flying during the day or with a clear moon, he would have been fine. If you are the pilot, looking straight ahead from the cockpit, the angle of your wings will be obvious from the straight line of the horizon in front of you. But when it’s dark outside, the horizon disappears. There is no external measure of the plane’s bank. On the ground, we know whether we are level even when it’s dark, because of the motion-sensing mechanisms in the inner ear. In a spiral dive, though, the effect of the plane’s G-force on the inner ear means that the pilot feels perfectly level even if his plane is not. Similarly, when you are in a jetliner that is banking at thirty degrees after takeoff, the book on your neighbor’s lap does not slide into your lap, nor will a pen on the floor roll toward the “down” side of the plane. The physics of flying is such that an airplane in the midst of a turn always feels perfectly level to someone inside the cabin.

This is a difficult notion, and to understand it I went flying with William Langewiesche, the author of a superb book on flying, Inside the Sky. We met at San Jose Airport, in the jet center where the Silicon Valley billionaires keep their private planes. Langewiesche is a rugged man in his forties, deeply tanned, and handsome in the way that pilots (at least since the movie The Right Stuff) are supposed to be. We took off at dusk, heading out toward Monterey Bay, until we had left the lights of the coast behind and night had erased the horizon. Langewiesche let the plane bank gently to the left. He took his hands off the stick. The sky told me nothing now, so I concentrated on the instruments. The nose of the plane was dropping. The gyroscope told me that we were banking, first fifteen, then thirty, then forty-five degrees. “We’re in a spiral dive,” Langewiesche said calmly. Our airspeed was steadily accelerating, from 180 to 190 to 200 knots. The needle on the altimeter was moving down. The plane was dropping like a stone, at three thousand feet per minute. I could hear, faintly, a slight increase in the hum of the engine, and the wind noise as we picked up speed. But if Langewiesche and I had been talking, I would have caught none of that. Had the cabin been unpressurized, my ears might have popped, particularly as we went into the steep part of the dive. But beyond that? Nothing at all. In a spiral dive, the G-load—the force of inertia—is normal. As Langewiesche puts it, the plane likes to spiral-dive. The total time elapsed since we started diving was no more than six or seven seconds. Suddenly, Langewiesche straightened the wings and pulled back on the stick to get the nose of the plane up, breaking out of the dive. Only now did I feel the full force of the G-load, pushing me back in my seat. “You feel no G-load in a bank,” Langewiesche said. “There’s nothing more confusing for the uninitiated.”

I asked Langewiesche how much longer we could have fallen. “Within five seconds, we would have exceeded the limits of the airplane,” he replied, by which he meant that the force of trying to pull out of the dive would have broken the plane into pieces. I looked away from the instruments and asked Langewiesche to spiral-dive again, this time without telling me. I sat and waited. I was about to tell Langewiesche that he could start diving anytime, when, suddenly, I was thrown back in my chair. “We just lost a thousand feet,” he said.

This inability to sense, experientially, what your plane is doing is what makes night flying so stressful. And this was the stress that Kennedy must have felt when he turned out across the water at Westerly, leaving the guiding lights of the Connecticut coastline behind him. A pilot who flew into Nantucket that night told the National Transportation Safety Board that when he descended over Martha’s Vineyard, he looked down and there was “nothing to see. There was no horizon and no light…I thought the island might [have] suffered a power failure.” Kennedy was now blind, in every sense, and he must have known the danger he was in. He had very little experience in flying strictly by instruments. Most of the time when he had flown up to the Vineyard, the horizon or lights had still been visible. That strange, final sequence of maneuvers was Kennedy’s frantic search for a clearing in the haze. He was trying to pick up the lights of Martha’s Vineyard, to restore the lost horizon. Between the lines of the National Transportation Safety Board’s report on the crash, you can almost feel his desperation:

About 2138 the target began a right turn in a southerly direction. About 30 seconds later, the target stopped its descent at 2200 feet and began a climb that lasted another 30 seconds. During this period of time, the target stopped the turn, and the airspeed decreased to about 153 KIAS. About 2139, the target leveled off at 2500 feet and flew in a southeasterly direction. About 50 seconds later, the target entered a left turn and climbed to 2600 feet. As the target continued in the left turn, it began a descent that reached a rate of about 900 fpm.

But was he choking or panicking? Here the distinction between those two states is critical. Had he choked, he would have reverted to the mode of explicit learning. His movements in the cockpit would have become markedly slower and less fluid. He would have gone back to the mechanical, self-conscious application of the lessons he had first received as a pilot—and that might have been a good thing. Kennedy needed to think, to concentrate on his instruments, to break away from the instinctive flying that served him when he had a visible horizon.

But instead, from all appearances, he panicked. At the moment when he needed to remember the lessons he had been taught about instrument flying, his mind—like Morphew’s when she was underwater—must have gone blank. Instead of reviewing the instruments, he seems to have been focused on one question: Where are the lights of Martha’s Vineyard? His gyroscope and his other instruments may well have become as invisible as the peripheral lights in the underwater-panic experiments. He had fallen back on his instincts—on the way the plane felt—and in the dark, of course, instinct can tell you nothing. The NTSB report says that the last time the Piper’s wings were level was seven seconds past 9:40, and the plane hit the water at about 9:41, so the critical period here was less than sixty seconds. At twenty-five seconds past the minute, the plane was tilted at an angle greater than forty-five degrees. Inside the cockpit it would have felt normal. At some point, Kennedy must have heard the rising wind outside, or the roar of the engine as it picked up speed. Again, relying on instinct, he might have pulled back on the stick, trying to raise the nose of the plane. But pulling back on the stick without first leveling the wings only makes the spiral tighter and the problem worse. It’s also possible that Kennedy did nothing at all, and that he was frozen at the controls, still frantically searching for the lights of the Vineyard, when his plane hit the water. Sometimes pilots don’t even try to make it out of a spiral dive. Langewiesche calls that “one G all the way down.”
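A rough consistency check on that final minute, using only figures quoted in this piece: the arithmetic is mine, and the three-thousand-feet-per-minute rate is the one from Langewiesche’s demonstration dive, not a figure from the NTSB report.

```latex
% Altitude when the final left turn began (NTSB excerpt): about 2,600 ft.
% Descent rate, taking the demonstration dive as illustrative: 3,000 ft/min.
\[
  t \;\approx\; \frac{2600\ \text{ft}}{3000\ \text{ft/min}}
    \;\approx\; 0.87\ \text{min} \;\approx\; 52\ \text{seconds},
\]
% which fits the report's window: wings last level at 9:40:07,
% impact at about 9:41, less than sixty seconds later.
```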

4.

What happened to Kennedy that night illustrates a second major difference between panicking and choking. Panicking is conventional failure, of the sort we tacitly understand. Kennedy panicked because he didn’t know enough about instrument flying. If he’d had another year in the air, he might not have panicked, and that fits with what we believe—that performance ought to improve with experience, and that pressure is an obstacle that the diligent can overcome. But choking makes little intuitive sense. Novotna’s problem wasn’t lack of diligence; she was as superbly conditioned and schooled as anyone on the tennis tour. And what did experience do for her? In 1995, in the third round of the French Open, Novotna choked even more spectacularly than she had against Graf, losing to Chanda Rubin after surrendering a 5-0 lead in the third set. There seems little doubt that part of the reason for her collapse against Rubin was her collapse against Graf—that the second failure built on the first, making it possible for her to be up 5-0 in the third set and yet entertain the thought I can still lose. If panicking is conventional failure, choking is paradoxical failure.

Claude Steele, a psychologist at Stanford University, and his colleagues have done a number of experiments in recent years looking at how certain groups perform under pressure, and their findings go to the heart of what is so strange about choking. Steele and Joshua Aronson found that when they gave a group of Stanford undergraduates a standardized test and told them that it was a measure of their intellectual ability, the white students did much better than their black counterparts. But when the same test was presented simply as an abstract laboratory tool, with no relevance to ability, the scores of blacks and whites were virtually identical. Steele and Aronson attribute this disparity to what they call “stereotype threat”: when black students are put into a situation where they are directly confronted with a stereotype about their group—in this case one having to do with intelligence—the resulting pressure causes their performance to suffer.

Steele and others have found stereotype threat at work in any situation where groups are depicted in negative ways. Give a group of qualified women a math test and tell them it will measure their quantitative ability and they’ll do much worse than equally skilled men will; present the same test simply as a research tool and they’ll do just as well as the men. Or consider a handful of experiments conducted by one of Steele’s former graduate students, Julio Garcia, a professor at Tufts University. Garcia gathered together a group of white, athletic students and had a white instructor lead them through a series of physical tests: to jump as high as they could, to do a standing broad jump, and to see how many pushups they could do in twenty seconds. The instructor then asked them to do the tests a second time, and, as you’d expect, Garcia found that the students did a little better on each of the tasks the second time around. Then Garcia ran a second group of students through the tests, this time replacing the instructor between the first and second trials with an African-American. Now the white students ceased to improve on their vertical leaps. He did the experiment again, only this time he replaced the white instructor with a black instructor who was much taller and heavier than the previous black instructor. In this trial, the white students actually jumped less high than they had the first time around. Their performance on the pushups, though, was unchanged in each of the conditions. There is no stereotype, after all, that suggests that whites can’t do as many pushups as blacks. The task that was affected was the vertical leap, because of what our culture says: white men can’t jump.

It doesn’t come as news, of course, that black students aren’t as good at test-taking as white students, or that white students aren’t as good at jumping as black students. The problem is that we’ve always assumed that this kind of failure under pressure is panic. What is it we tell underperforming athletes and students? The same thing we tell novice pilots or scuba divers: to work harder, to buckle down, to take the tests of their ability more seriously. But Steele says that when you look at the way black or female students perform under stereotype threat, you don’t see the wild guessing of a panicked test taker. “What you tend to see is carefulness and second-guessing,” he explains. “When you go and interview them, you have the sense that when they are in the stereotype-threat condition they say to themselves, ‘Look, I’m going to be careful here. I’m not going to mess things up.’ Then, after having decided to take that strategy, they calm down and go through the test. But that’s not the way to succeed on a standardized test. The more you do that, the more you will get away from the intuitions that help you, the quick processing. They think they did well, and they are trying to do well. But they are not.” This is choking, not panicking. Garcia’s athletes and Steele’s students are like Novotna, not Kennedy. They failed because they were good at what they did: only those who care about how well they perform ever feel the pressure of stereotype threat. The usual prescription for failure—to work harder and take the test more seriously—would only make their problems worse.

That is a hard lesson to grasp, but harder still is the fact that choking requires us to concern ourselves less with the performer and more with the situation in which the performance occurs. Novotna herself could do nothing to prevent her collapse against Graf. The only thing that could have saved her is if—at that critical moment in the third set—the television cameras had been turned off, the Duke and Duchess had gone home, and the spectators had been told to wait outside. In sports, of course, you can’t do that. Choking is a central part of the drama of athletic competition, because the spectators have to be there—and the ability to overcome the pressure of the spectators is part of what it means to be a champion. But the same ruthless inflexibility need not govern the rest of our lives. We have to learn that sometimes a poor performance reflects not the innate ability of the performer but the complexion of the audience; and that sometimes a poor test score is the sign not of a poor student but of a good one.

5.

Through the first three rounds of the 1996 Masters golf tournament, Greg Norman held a seemingly insurmountable lead over his nearest rival, the Englishman Nick Faldo. He was the best player in the world. His nickname was the Shark. He didn’t saunter down the fairways; he stalked the course, blond and broad-shouldered, his caddy behind him, struggling to keep up. But then came the ninth hole on the tournament’s final day. Norman was paired with Faldo, and the two hit their first shots well. They were now facing the green. In front of the pin, there was a steep slope, so that any ball hit short would come rolling back down the hill into oblivion. Faldo shot first, and the ball landed safely long, well past the cup.

Norman was next. He stood over the ball. “The one thing you guard against here is short,” the announcer said, stating the obvious. Norman swung and then froze, his club in midair, following the ball in flight. It was short. Norman watched, stone-faced, as the ball rolled thirty yards back down the hill, and with that error something inside of him broke.

At the tenth hole, he hooked the ball to the left, hit his third shot well past the cup, and missed a makeable putt. At eleven, Norman had a three-and-a-half-foot putt for par—the kind he had been making all week. He shook out his hands and legs before grasping the club, trying to relax. He missed: his third straight bogey. At twelve, Norman hit the ball straight into the water. At thirteen, he hit it into a patch of pine needles. At sixteen, his movements were so mechanical and out of synch that, when he swung, his hips spun out ahead of his body and the ball sailed into another pond. At that, he took his club and made a frustrated scythelike motion through the grass, because what had been obvious for twenty minutes was now official: he had fumbled away the chance of a lifetime.

Faldo had begun the day six strokes behind Norman. By the time the two started their slow walk to the eighteenth hole, through the throng of spectators, Faldo had a four-stroke lead. But he took those final steps quietly, giving only the smallest of nods, keeping his head low. He understood what had happened on the greens and fairways that day. And he was bound by the particular etiquette of choking, the understanding that what he had earned was something less than a victory and what Norman had suffered was something less than a defeat.

When it was all over, Faldo wrapped his arms around Norman. “I don’t know what to say—I just want to give you a hug,” he whispered, and then he said the only thing you can say to a choker: “I feel horrible about what happened. I’m so sorry.” With that, the two men began to cry.

August 21 and 28, 2000

Blowup

WHO CAN BE BLAMED FOR A DISASTER LIKE THE CHALLENGER EXPLOSION? NO ONE, AND WE’D BETTER GET USED TO IT

1.

In the technological age, there is a ritual to disaster. When planes crash or chemical plants explode, each piece of physical evidence—of twisted metal or fractured concrete—becomes a kind of fetish object, painstakingly located, mapped, tagged, and analyzed, with findings submitted to boards of inquiry that then probe and interview and soberly draw conclusions. It is a ritual of reassurance, based on the principle that what we learn from one accident can help us prevent another, and a measure of its effectiveness is that Americans did not shut down the nuclear industry after Three Mile Island and do not abandon the skies after each new plane crash. But the rituals of disaster have rarely been played out so dramatically as they were in the case of the Challenger space shuttle, which blew up off the Florida coast on January 28, 1986.

Fifty-five minutes after the explosion, when the last of the debris had fallen into the ocean, recovery ships were on the scene. They remained there for the next three months, as part of what turned into the largest maritime salvage operation in history, combing a hundred and fifty thousand square nautical miles for floating debris, while the ocean floor surrounding the crash site was inspected by submarines. In mid-April of 1986, the salvage team found several chunks of charred metal that confirmed what had previously been only suspected: the explosion was caused by a faulty seal in one of the shuttle’s rocket boosters, which had allowed a stream of flame to escape and ignite an external fuel tank.

Armed with this confirmation, a special presidential investigative commission concluded the following June that the deficient seal reflected shoddy engineering and lax management at NASA and its prime contractor, Morton Thiokol. Properly chastised, NASA returned to the drawing board, to emerge thirty-two months later with the shuttle Discovery, refitted according to the lessons learned from the disaster. During that first post-Challenger flight, as America watched breathlessly, the crew of the Discovery held a short commemorative service. “Dear friends,” the mission commander, Captain Frederick H. Hauck, said, addressing the seven dead Challenger astronauts, “your loss has meant that we could confidently begin anew.” The ritual was complete. NASA was back.

But what if the assumptions that underlie our disaster rituals aren’t true? What if these public postmortems don’t help us avoid future accidents? Over the past few years, a group of scholars has begun making the unsettling argument that the rituals that follow things like plane crashes or the Three Mile Island crisis are as much exercises in self-deception as they are genuine opportunities for reassurance. For these revisionists, high-technology accidents may not have clear causes at all. They may be inherent in the complexity of the technological systems we have created.

This revisionism has now been extended to the Challenger disaster, with the publication of The Challenger Launch Decision, by the sociologist Diane Vaughan, which is the first truly definitive analysis of the events leading up to January 28, 1986. The conventional view is that the Challenger accident was an anomaly, that it happened because people at NASA had not done their job. But the study’s conclusion is the opposite: it says that the accident happened because people at NASA had done exactly what they were supposed to do. “No fundamental decision was made at NASA to do evil,” Vaughan writes. “Rather, a series of seemingly harmless decisions were made that incrementally moved the space agency toward a catastrophic outcome.”

No doubt Vaughan’s analysis will be hotly disputed, but even if she is only partly right, the implications of this kind of argument are enormous. We have surrounded ourselves in the modern age with things like power plants and nuclear weapons systems and airports that handle hundreds of planes an hour, on the understanding that the risks they represent are, at the very least, manageable. But if the potential for catastrophe is actually found in the normal functioning of complex systems, this assumption is false. Risks are not easily manageable, accidents are not easily preventable, and the rituals of disaster have no meaning. The first time around, the story of the Challenger was tragic. In its retelling, a decade later, it is merely banal.

2.

Perhaps the best way to understand the argument over the Challenger explosion is to start with an accident that preceded it—the near disaster at the Three Mile Island (TMI) nuclear-power plant in March of 1979. The conclusion of the president’s commission that investigated the TMI accident was that it was the result of human error, particularly on the part of the plant’s operators. But the truth of what happened there, the revisionists maintain, is a good deal more complicated than that, and their arguments are worth examining in detail.

The trouble at TMI started with a blockage in what is called the plant’s polisher—a kind of giant water filter. Polisher problems were not unusual at TMI, or particularly serious. But in this case the blockage caused moisture to leak into the plant’s air system, inadvertently tripping two valves and shutting down the flow of cold water into the plant’s steam generator.

As it happens, TMI had a backup cooling system for precisely this situation. But on that particular day, for reasons that no one really knows, the valves for the backup system weren’t open. They had been closed, and an indicator in the control room showing they were closed was blocked by a repair tag hanging from a switch above it. That left the reactor dependent on another backup system, a special sort of relief valve. But, as luck would have it, the relief valve wasn’t working properly that day, either. It stuck open when it was supposed to close, and, to make matters even worse, a gauge in the control room which should have told the operators that the relief valve wasn’t working was itself not working. By the time TMI’s engineers realized what was happening, the reactor had come dangerously close to a meltdown.

Here, in other words, was a major accident caused by five discrete events. There is no way the engineers in the control room could have known about any of them. No glaring errors or spectacularly bad decisions were made that exacerbated those events. And all the malfunctions—the blocked polisher, the shut valves, the obscured indicator, the faulty relief valve, and the broken gauge—were in themselves so trivial that individually they would have created no more than a nuisance. What caused the accident was the way minor events unexpectedly interacted to create a major problem.

This kind of disaster is what the Yale University sociologist Charles Perrow has famously called a normal accident. By normal, Perrow does not mean that it is frequent; he means that it is the kind of accident one can expect in the normal functioning of a technologically complex operation. Modern systems, Perrow argues, are made up of thousands of parts, all of which interrelate in ways that are impossible to anticipate. Given that complexity, he says, it is almost inevitable that some combinations of minor failures will eventually amount to something catastrophic. In a classic 1984 treatise on accidents, Perrow takes examples of well-known plane crashes, oil spills, chemical-plant explosions, and nuclear-weapons mishaps and shows how many of them are best understood as normal. If you saw the movie Apollo 13, in fact, you have seen a perfect illustration of one of the most famous of all normal accidents: the Apollo flight went awry because of the interaction of failures of the spacecraft’s oxygen and hydrogen tanks, and an indicator light that diverted the astronauts’ attention from the real problem.

Had this been a “real” accident—if the mission had run into trouble because of one massive or venal error—the story would have made for a much inferior movie. In real accidents, people rant and rave and hunt down the culprit. They do, in short, what people in Hollywood thrillers always do. But what made Apollo 13 unusual was that the dominant emotion was not anger but bafflement—bafflement that so much could go wrong for so little apparent reason. There was no one to blame, no dark secret to unearth, no recourse but to re-create an entire system in place of one that had inexplicably failed. In the end, the normal accident was the more terrifying one.

3.

Was the Challenger explosion a normal accident? In a narrow sense, the answer is no. Unlike what happened at TMI, its explosion was caused by a single, catastrophic malfunction: the so-called O-rings that were supposed to prevent hot gases from leaking out of the rocket boosters didn’t do their job. But Vaughan argues that the O-ring problem was really just a symptom. The cause of the accident was the culture of NASA, she says, and that culture led to a series of decisions about the Challenger that very much followed the contours of a normal accident.

The heart of the question is how NASA chose to evaluate the problems it had been having with the rocket boosters’ O-rings. These are the thin rubber bands that run around the lips of each of the rocket’s four segments, and each O-ring was meant to work like the rubber seal on the top of a bottle of preserves, making the fit between each part of the rocket snug and airtight. But from as far back as 1981, on one shuttle flight after another, the O-rings had shown increasing problems. In a number of instances, the rubber seal had been dangerously eroded—a condition suggesting that hot gases had almost escaped. What’s more, O-rings were strongly suspected to be less effective in cold weather, when the rubber would harden and not give as tight a seal. On the morning of January 28, 1986, the shuttle launchpad was encased in ice, and the temperature at liftoff was just above freezing. Anticipating these low temperatures, engineers at Morton Thiokol, the manufacturer of the shuttle’s rockets, recommended that the launch be delayed. Morton Thiokol brass and NASA, however, overruled the recommendation, and that decision led both the president’s commission and numerous critics since to accuse NASA of egregious—if not criminal—misjudgment.

Vaughan doesn’t dispute that the decision was fatally flawed. But, after reviewing thousands of pages of transcripts and internal NASA documents, she can’t find any evidence of people acting negligently, or nakedly sacrificing safety in the name of politics or expediency. The mistakes that NASA made, she says, were made in the normal course of operation. For example, in retrospect it may seem obvious that cold weather impaired O-ring performance. But it wasn’t obvious at the time. A previous shuttle flight that had suffered worse O-ring damage had been launched in 75-degree heat. And on a series of previous occasions when NASA had proposed—but eventually scrubbed for other reasons—shuttle launches in weather as cold as 41 degrees, Morton Thiokol had not said a word about the potential threat posed by the cold, so its pre-Challenger objection had seemed to NASA not reasonable but arbitrary. Vaughan confirms that there was a dispute between managers and engineers on the eve of the launch but points out that in the shuttle program, disputes of this sort were commonplace. And, while the president’s commission was astonished by NASA’s repeated use of the phrases acceptable risk and acceptable erosion in internal discussion of the rocket-booster joints, Vaughan shows that flying with acceptable risks was a standard part of NASA culture. The lists of acceptable risks on the space shuttle, in fact, filled six volumes. “Although [O-ring] erosion itself had not been predicted, its occurrence conformed to engineering expectations about large-scale technical systems,” she writes. “At NASA, problems were the norm. The word anomaly was part of everyday talk…The whole shuttle system operated on the assumption that deviation could be controlled but not eliminated.”

What NASA had created was a closed culture that, in her words, “normalized deviance” so that to the outside world, decisions that were obviously questionable were seen by NASA’s management as prudent and reasonable. It is her depiction of this internal world that makes her book so disquieting: when she lays out the sequence of decisions that led to the launch—each decision as trivial as the string of failures that led to the near disaster at TMI—it is difficult to find any precise point where things went wrong or where things might be improved next time. “It can truly be said that the Challenger launch decision was a rule-based decision,” she concludes. “But the cultural understandings, rules, procedures, and norms that always had worked in the past did not work this time. It was not amorally calculating managers violating rules that were responsible for the tragedy. It was conformity.”

4.

There is another way to look at this problem, and that is from the standpoint of how human beings handle risk. One of the assumptions behind the modern disaster ritual is that when a risk can be identified and eliminated, a system can be made safer. The new booster joints on the shuttle, for example, are so much better than the old ones that the overall chances of a Challenger-style accident’s ever happening again must be lower, right? This is such a straightforward idea that questioning it seems almost impossible. But that is just what another group of scholars has done, under what is called the theory of risk homeostasis. It should be said that within the academic community, there are huge debates over how widely the theory of risk homeostasis can and should be applied. But the basic idea, which has been laid out brilliantly by the Canadian psychologist Gerald Wilde in his book Target Risk, is quite simple: under certain circumstances, changes that appear to make a system or an organization safer in fact don’t. Why? Because human beings have a seemingly fundamental tendency to compensate for lower risks in one area by taking greater risks in another.

Consider, for example, the results of a famous experiment conducted several years ago in Germany. Part of a fleet of taxicabs in Munich was equipped with antilock brake systems (ABS), a technological innovation that vastly improves braking, particularly on slippery surfaces. The rest of the fleet was left alone, and the two groups—which were otherwise perfectly matched—were placed under careful and secret observation for three years. You would expect the better brakes to make for safer driving. But that is exactly the opposite of what happened. Giving some drivers ABS made no difference at all in their accident rate; in fact, it turned them into markedly inferior drivers. They drove faster. They made sharper turns. They showed poorer lane discipline. They braked harder. They were more likely to tailgate. They didn’t merge as well, and they were involved in more near misses. In other words, the ABS systems were not used to reduce accidents; instead, the drivers used the additional element of safety to enable them to drive faster and more recklessly without increasing their risk of getting into an accident. As economists would say, they consumed the risk reduction, they didn’t save it.

Risk homeostasis doesn’t happen all the time. Often—as in the case of seat belts, say—compensatory behavior only partly offsets the risk reduction of a safety measure. But it happens often enough that it must be given serious consideration. Why are more pedestrians killed crossing the street at marked crosswalks than at unmarked crosswalks? Because they compensate for the “safe” environment of a marked crossing by being less vigilant about oncoming traffic. Why did the introduction of childproof lids on medicine bottles lead, according to one study, to a substantial increase in fatal child poisonings? Because adults became less careful in keeping pill bottles out of the reach of children.

Risk homeostasis also works in the opposite direction. In the late 1960s, Sweden changed over from driving on the left-hand side of the road to driving on the right, a switch that one would think would create an epidemic of accidents. But, in fact, the opposite was true. People compensated for their unfamiliarity with the new traffic patterns by driving more carefully. During the next twelve months, traffic fatalities dropped 17 percent before returning slowly to their previous levels. As Wilde only half-facetiously argues, countries truly interested in making their streets and highways safer should think about switching over from one side of the road to the other on a regular basis.

It doesn’t take much imagination to see how risk homeostasis applies to NASA and the space shuttle. In one frequently quoted phrase, Richard Feynman, the Nobel Prize-winning physicist who served on the Challenger commission, said that at NASA decision-making was “a kind of Russian roulette.” When the O-rings began to have problems and nothing happened, the agency began to believe that “the risk is no longer so high for the next flights,” Feynman said, and that “we can lower our standards a little bit because we got away with it last time.” But fixing the O-rings doesn’t mean that this kind of risk-taking stops. There are six whole volumes of shuttle components that are deemed by NASA to be as risky as O-rings. It is entirely possible that better O-rings just give NASA the confidence to play Russian roulette with something else.

This is a depressing conclusion, but it shouldn’t come as a surprise. The truth is that our stated commitment to safety, our faithful enactment of the rituals of disaster, has always masked a certain hypocrisy. We don’t really want the safest of all possible worlds. The national 55-mile-per-hour speed limit probably saved more lives than any other single government intervention of the past generation. But the fact that Congress lifted it last month with a minimum of argument proves that we would rather consume the recent safety advances of things like seat belts and air bags than save them. The same is true of the dramatic improvements that have been made in recent years in the design of aircraft and flight-navigation systems. Presumably, these innovations could be used to bring down the airline accident rate as low as possible. But that is not what consumers want. They want air travel to be cheaper, more reliable, or more convenient, and so those safety advances have been at least partly consumed by flying and landing planes in worse weather and heavier traffic conditions.

What accidents like the Challenger should teach us is that we have constructed a world in which the potential for high-tech catastrophe is embedded in the fabric of day-to-day life. At some point in the future—for the most mundane of reasons, and with the very best of intentions—a NASA spacecraft will again go down in flames. We should at least admit this to ourselves now. And if we cannot—if the possibility is too much to bear—then our only option is to start thinking about getting rid of things like space shuttles altogether.

January 22, 1996

PART THREE

Personality, Character, and Intelligence

“‘He’ll be wearing a double-breasted suit. Buttoned.’—And he was.”

Late Bloomers

WHY DO WE EQUATE GENIUS WITH PRECOCITY?

1.

Ben Fountain was an associate in the real-estate practice at the Dallas offices of Akin, Gump, Strauss, Hauer & Feld, just a few years out of law school, when he decided he wanted to write fiction. The only thing Fountain had ever published was a law-review article. His literary training consisted of a handful of creative-writing classes in college. He had tried to write when he came home at night from work, but usually he was too tired to do much. He decided to quit his job.

“I was tremendously apprehensive,” Fountain recalls. “I felt like I’d stepped off a cliff and I didn’t know if the parachute was going to open. Nobody wants to waste their life, and I was doing well at the practice of law. I could have had a good career. And my parents were very proud of me—my dad was so proud of me…It was crazy.”

He began his new life on a February morning—a Monday. He sat down at his kitchen table at 7:30 a.m. He made a plan. Every day, he would write until lunchtime. Then he would lie down on the floor for twenty minutes to rest his mind. Then he would return to work for a few more hours. He was a lawyer. He had discipline. “I figured out very early on that if I didn’t get my writing done I felt terrible. So I always got my writing done. I treated it like a job. I did not procrastinate.” His first story was about a stockbroker who uses inside information and crosses a moral line. It was sixty pages long and took him three months to write. When he finished that story, he went back to work and wrote another—and then another.

In his first year, Fountain sold two stories. He gained confidence. He wrote a novel. He decided it wasn’t very good, and he ended up putting it in a drawer. Then came what he describes as his dark period, when he adjusted his expectations and started again. He got a short story published in Harper’s. A New York literary agent saw it and signed him up. He put together a collection of short stories titled Brief Encounters with Che Guevara, and Ecco, a HarperCollins imprint, published it. The reviews were sensational. The Times Book Review called it “heartbreaking.” It won the Hemingway Foundation/PEN award. It was named a No. 1 Book Sense Pick. It made major regional bestseller lists, was named one of the best books of the year by the San Francisco Chronicle, the Chicago Tribune, and Kirkus Reviews, and drew comparisons to Graham Greene, Evelyn Waugh, Robert Stone, and John le Carré.

Ben Fountain’s rise sounds like a familiar story: the young man from the provinces suddenly takes the literary world by storm. But Ben Fountain’s success was far from sudden. He quit his job at Akin, Gump in 1988. For every story he published in those early years, he had at least thirty rejections. The novel that he put away in a drawer took him four years. The dark period lasted for the entire second half of the 1990s. His breakthrough with Brief Encounters came in 2006, eighteen years after he first sat down to write at his kitchen table. The “young” writer from the provinces took the literary world by storm at the age of forty-eight.

2.

Genius, in the popular conception, is inextricably tied up with precocity—doing something truly creative, we’re inclined to think, requires the freshness and exuberance and energy of youth. Orson Welles made his masterpiece, Citizen Kane, at twenty-five. Herman Melville wrote a book a year through his late twenties, culminating, at age thirty-two, with Moby-Dick. Mozart wrote his breakthrough Piano Concerto No. 9 in E-flat major at the age of twenty-one. In some creative forms, like lyric poetry, the importance of precocity has hardened into an iron law. How old was T. S. Eliot when he wrote “The Love Song of J. Alfred Prufrock” (“I grow old… I grow old”)? Twenty-three. “Poets peak young,” the creativity researcher James Kaufman maintains. Mihály Csíkszentmihályi, the author of Flow, agrees: “The most creative lyric verse is believed to be that written by the young.” According to the Harvard psychologist Howard Gardner, a leading authority on creativity, “Lyric poetry is a domain where talent is discovered early, burns brightly, and then peters out at an early age.”

A few years ago, an economist at the University of Chicago named David Galenson decided to find out whether this assumption about creativity was true. He looked through forty-seven major poetry anthologies published since 1980 and counted the poems that appear most frequently. Some people, of course, would quarrel with the notion that literary merit can be quantified. But Galenson simply wanted to poll a broad cross-section of literary scholars about which poems they felt were the most important in the American canon. The top eleven are, in order, T. S. Eliot’s “Prufrock,” Robert Lowell’s “Skunk Hour,” Robert Frost’s “Stopping by Woods on a Snowy Evening,” William Carlos Williams’s “Red Wheelbarrow,” Elizabeth Bishop’s “The Fish,” Ezra Pound’s “The River Merchant’s Wife,” Sylvia Plath’s “Daddy,” Pound’s “In a Station of the Metro,” Frost’s “Mending Wall,” Wallace Stevens’s “The Snow Man,” and Williams’s “The Dance.” Those eleven were composed at the ages of twenty-three, forty-one, forty-eight, forty, twenty-nine, thirty, thirty, twenty-eight, thirty-eight, forty-two, and fifty-nine, respectively. There is no evidence, Galenson concluded, for the notion that lyric poetry is a young person’s game. Some poets do their best work at the beginning of their careers. Others do their best work decades later. Forty-two percent of Frost’s anthologized poems were written after the age of fifty. For Williams, it’s 44 percent. For Stevens, it’s 49 percent.
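For concreteness, here is a quick reader’s-side summary of the eleven composition ages listed above. The snippet and its variable names are mine, not Galenson’s, and it uses only the numbers already quoted:

```python
# Ages at which the eleven most frequently anthologized American poems
# (per Galenson's tally, listed above) were composed.
ages = [23, 41, 48, 40, 29, 30, 30, 28, 38, 42, 59]

ages_sorted = sorted(ages)            # [23, 28, 29, 30, 30, 38, 40, 41, 42, 48, 59]
median = ages_sorted[len(ages) // 2]  # middle value of eleven entries: 38
mean = sum(ages) / len(ages)          # about 37.1
forty_or_older = sum(1 for a in ages if a >= 40)  # 5 of the 11

print(f"median: {median}, mean: {mean:.1f}, composed at 40+: {forty_or_older}/11")
```

The median is thirty-eight, and five of the eleven poems were written at forty or later, which is the spread Galenson’s conclusion rests on.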

The same was true of film, Galenson points out in his study “Old Masters and Young Geniuses: The Two Life Cycles of Artistic Creativity.” Yes, there was Orson Welles, peaking as a director at twenty-five. But then there was Alfred Hitchcock, who made Dial M for Murder, Rear Window, To Catch a Thief, The Trouble with Harry, Vertigo, North by Northwest, and Psycho—one of the greatest runs by a director in history—between his fifty-fourth and sixty-first birthdays. Mark Twain published Adventures of Huckleberry Finn at forty-nine. Daniel Defoe wrote Robinson Crusoe at fifty-eight.

The examples that Galenson could not get out of his head, however, were Picasso and Cézanne. He was an art lover, and he knew their stories well. Picasso was the incandescent prodigy. His career as a serious artist began with a masterpiece, Evocation: The Burial of Casagemas, produced at age twenty. In short order, he painted many of the greatest works of his career—including Les Demoiselles d’Avignon, at the age of twenty-six. Picasso fit our usual ideas about genius perfectly.

Cézanne didn’t. If you go to the Cézanne room at the Musée d’Orsay, in Paris—the finest collection of Cézannes in the world—the masterpieces you’ll find arrayed along the back wall were all painted at the end of his career. Galenson did a simple economic analysis, tabulating the prices paid at auction for paintings by Picasso and Cézanne with the ages at which they created those works. A painting done by Picasso in his midtwenties was worth, he found, an average of four times as much as a painting done in his sixties. For Cézanne, the opposite was true. The paintings he created in his midsixties were valued fifteen times as highly as the paintings he created as a young man. The freshness, exuberance, and energy of youth did little for Cézanne. He was a late bloomer—and for some reason in our accounting of genius and creativity we have forgotten to make sense of the Cézannes of the world.

3.

The first day that Ben Fountain sat down to write at his kitchen table went well. He knew how the story about the stockbroker was supposed to start. But the second day, he says, he “completely freaked out.” He didn’t know how to describe things. He felt as if he were back in first grade. He didn’t have a fully formed vision, waiting to be emptied onto the page. “I had to create a mental image of a building, a room, a facade, haircut, clothes—just really basic things,” he says. “I realized I didn’t have the facility to put those into words. I started going out and buying visual dictionaries, architectural dictionaries, and going to school on those.”

He began to collect articles about things he was interested in, and before long he realized that he had developed a fascination with Haiti. “The Haiti file just kept getting bigger and bigger,” Fountain says. “And I thought, OK, here’s my novel. For a month or two I said, I really don’t need to go there, I can imagine everything. But after a couple of months I thought, Yeah, you’ve got to go there, and so I went, in April or May of ’91.”

He spoke little French, let alone Haitian Creole. He had never been abroad. Nor did he know anyone in Haiti. “I got to the hotel, walked up the stairs, and there was this guy standing at the top of the stairs,” Fountain recalls. “He said, ‘My name is Pierre. You need a guide.’ I said, ‘You’re sure as hell right, I do.’ He was a very genuine person, and he realized pretty quickly I didn’t want to go see the girls, I didn’t want drugs, I didn’t want any of that other stuff,” Fountain went on. “And then it was, boom, ‘I can take you there. I can take you to this person.’”

Fountain was riveted by Haiti. “It’s like a laboratory, almost,” he says. “Everything that’s gone on in the last five hundred years—colonialism, race, power, politics, ecological disasters—it’s all there in very concentrated form. And also I just felt, viscerally, pretty comfortable there.” He made more trips to Haiti, sometimes for a week, sometimes for two weeks. He made friends. He invited them to visit him in Dallas. (“You haven’t lived until you’ve had Haitians stay in your house,” Fountain says.) “I mean, I was involved. I couldn’t just walk away. There’s this very nonrational, nonlinear part of the whole process. I had a pretty specific time era that I was writing about, and certain things that I needed to know. But there were other things I didn’t really need to know. I met a fellow who was with Save the Children, and he was on the Central Plateau, which takes about twelve hours to get to on a bus, and I had no reason to go there. But I went up there. Suffered on that bus, and ate dust. It was a hard trip, but it was a glorious trip. It had nothing to do with the book, but it wasn’t wasted knowledge.”

In Brief Encounters with Che Guevara, four of the stories are about Haiti, and they are the strongest in the collection. They feel like Haiti; they feel as if they’ve been written from the inside looking out, not the outside looking in. “After the novel was done, I don’t know, I just felt like there was more for me, and I could keep going, keep going deeper there,” Fountain recalls. “Always there’s something—always something—here for me. How many times have I been? At least thirty times.”

Prodigies like Picasso, Galenson argues, rarely engage in that kind of open-ended exploration. They tend to be “conceptual,” Galenson says, in the sense that they start with a clear idea of where they want to go, and then they execute it. “I can hardly understand the importance given to the word research,” Picasso once said in an interview with the artist Marius de Zayas. “In my opinion, to search means nothing in painting. To find is the thing.” He continued, “The several manners I have used in my art must not be considered as an evolution or as steps toward an unknown ideal of painting…I have never made trials or experiments.”

But late bloomers, Galenson says, tend to work the other way around. Their approach is experimental. “Their goals are imprecise, so their procedure is tentative and incremental,” Galenson writes in “Old Masters and Young Geniuses,” and he goes on:

The imprecision of their goals means that these artists rarely feel they have succeeded, and their careers are consequently often dominated by the pursuit of a single objective. These artists repeat themselves, painting the same subject many times, and gradually changing its treatment in an experimental process of trial and error. Each work leads to the next, and none is generally privileged over others, so experimental painters rarely make specific preparatory sketches or plans for a painting. They consider the production of a painting as a process of searching, in which they aim to discover the image in the course of making it; they typically believe that learning is a more important goal than making finished paintings. Experimental artists build their skills gradually over the course of their careers, improving their work slowly over long periods. These artists are perfectionists and are typically plagued by frustration at their inability to achieve their goal.

Where Picasso wanted to find, not search, Cézanne said the opposite: “I seek in painting.”

An experimental innovator would go back to Haiti thirty times. That’s how that kind of mind figures out what it wants to do. When Cézanne was painting a portrait of the critic Gustave Geffroy, he made him endure eighty sittings, over three months, before announcing the project a failure. (The result is one of that string of masterpieces in the Musée d’Orsay.) When Cézanne painted his dealer, Ambroise Vollard, he made Vollard arrive at eight in the morning and sit on a rickety platform until eleven-thirty, without a break, on 150 occasions—before abandoning the portrait. He would paint a scene, then repaint it, then paint it again. He was notorious for slashing his canvases to pieces in fits of frustration.

Mark Twain was the same way. Galenson quotes the literary critic Franklin Rogers on Twain’s trial-and-error method: “His routine procedure seems to have been to start a novel with some structural plan which ordinarily soon proved defective, whereupon he would cast about for a new plot which would overcome the difficulty, rewrite what he had already written, and then push on until some new defect forced him to repeat the process once again.” Twain fiddled and despaired and revised and gave up on Huckleberry Finn so many times that the book took him nearly a decade to complete. The Cézannes of the world bloom late not as a result of some defect in character, or distraction, or lack of ambition, but because the kind of creativity that proceeds through trial and error necessarily takes a long time to come to fruition.

One of the best stories in Brief Encounters is called “Near-Extinct Birds of the Central Cordillera.” It’s about an ornithologist taken hostage by the FARC guerrillas of Colombia. Like so much of Fountain’s work, it reads with an easy grace. But there was nothing easy or graceful about its creation. “I struggled with that story,” Fountain says. “I always try to do too much. I mean, I probably wrote five hundred pages of it in various incarnations.” Fountain is at work right now on a novel. It was supposed to come out this year. It’s late.

4.

Galenson’s idea that creativity can be divided into these types—conceptual and experimental—has a number of important implications. For example, we sometimes think of late bloomers as late starters. They don’t realize they’re good at something until they’re fifty, so of course they achieve late in life. But that’s not quite right. Cézanne was painting almost as early as Picasso was. We also sometimes think of them as artists who are discovered late; the world is just slow to appreciate their gifts. In both cases, the assumption is that the prodigy and the late bloomer are fundamentally the same, and that late blooming is simply genius under conditions of market failure. What Galenson’s argument suggests is something else—that late bloomers bloom late because they simply aren’t much good until late in their careers.

“All these qualities of his inner vision were continually hampered and obstructed by Cézanne’s incapacity to give sufficient verisimilitude to the personae of his drama,” the great English art critic Roger Fry wrote of the early Cézanne. “With all his rare endowments, he happened to lack the comparatively common gift of illustration, the gift that any draughtsman for the illustrated papers learns in a school of commercial art; whereas, to realize such visions as Cézanne’s required this gift in high degree.” In other words, the young Cézanne couldn’t draw. Of The Banquet, which Cézanne painted at thirty-one, Fry writes, “It is no use to deny that Cézanne has made a very poor job of it.” Fry goes on, “More happily endowed and more integral personalities have been able to express themselves harmoniously from the very first. But such rich, complex, and conflicting natures as Cézanne’s require a long period of fermentation.” Cézanne was trying something so elusive that he couldn’t master it until he’d spent decades practicing.

This is the vexing lesson of Fountain’s long attempt to get noticed by the literary world. On the road to great achievement, the late bloomer will resemble a failure: while the late bloomer is revising and despairing and changing course and slashing canvases to ribbons after months or years, what he or she produces will look like the kind of thing produced by the artist who will never bloom at all. Prodigies are easy. They advertise their genius from the get-go. Late bloomers are hard. They require forbearance and blind faith. (Let’s just be thankful that Cézanne didn’t have a guidance counselor in high school who looked at his primitive sketches and told him to try accounting.) Whenever we find a late bloomer, we can’t help but wonder how many others like him or her we have thwarted because we prematurely judged their talents. But we also have to accept that there’s nothing we can do about it. How can we ever know which of the failures will end up blooming?

Not long after meeting Ben Fountain, I went to see the novelist Jonathan Safran Foer, the author of the 2002 bestseller Everything Is Illuminated. Fountain is a graying man, slight and modest, who looks, in the words of a friend of his, like a “golf pro from Augusta, Georgia.” Foer is in his early thirties and looks barely old enough to drink. Fountain has a softness to him, as if years of struggle have worn away whatever sharp edges he once had. Foer gives the impression that if you touched him while he was in full conversational flight, you would get an electric shock.

“I came to writing really by the back door,” Foer said. “My wife is a writer, and she grew up keeping journals—you know, parents said, ‘Lights out, time for bed,’ and she had a little flashlight under the covers, reading books. I don’t think I read a book until much later than other people. I just wasn’t interested in it.”

Foer went to Princeton and took a creative-writing class in his freshman year with Joyce Carol Oates. It was, he explains, “sort of on a whim, maybe out of a sense that I should have a diverse course load.” He’d never written a story before. “I didn’t really think anything of it, to be honest, but halfway through the semester I arrived to class early one day, and she said, ‘Oh, I’m glad I have this chance to talk to you. I’m a fan of your writing.’ And it was a real revelation for me.”

Oates told him that he had the most important of writerly qualities, which was energy. He had been writing fifteen pages a week for that class, an entire story for each seminar.

“Why does a dam with a crack in it leak so much?” he said, with a laugh. “There was just something in me, there was like a pressure.”

As a sophomore, he took another creative-writing class. During the following summer, he went to Europe. He wanted to find the village in Ukraine where his grandfather had come from. After the trip, he went to Prague. There he read Kafka, as any literary undergraduate would, and sat down at his computer.

“I was just writing,” he said. “I didn’t know that I was writing until it was happening. I didn’t go with the intention of writing a book. I wrote three hundred pages in ten weeks. I really wrote. I’d never done it like that.”

It was a novel about a boy named Jonathan Safran Foer who visits the village in Ukraine where his grandfather had come from. Those three hundred pages were the first draft of Everything Is Illuminated—the exquisite and extraordinary novel that established Foer as one of the most distinctive literary voices of his generation. He was nineteen years old.

Foer began to talk about the other way of writing books, where you painstakingly honed your craft, over years and years. “I couldn’t do that,” he said. He seemed puzzled by it. It was clear that he had no understanding of how being an experimental innovator would work. “I mean, imagine if the craft you’re trying to learn is to be an original. How could you learn the craft of being an original?”

He began to describe his visit to Ukraine. “I went to the shtetl where my family came from. It’s called Trachimbrod, the name I use in the book. It’s a real place. But you know what’s funny? It’s the single piece of research that made its way into the book.” He wrote the first sentence, and he was proud of it, and then he went back and forth in his mind about where to go next. “I spent the first week just having this debate with myself about what to do with this first sentence. And once I made the decision, I felt liberated to just create—and it was very explosive after that.”

If you read Everything Is Illuminated, you end up with the same feeling you get when you read Brief Encounters with Che Guevara—the sense of transport you experience when a work of literature draws you into its own world. Both are works of art. It’s just that, as artists, Fountain and Foer could not be less alike. Fountain went to Haiti thirty times. Foer went to Trachimbrod just once. “I mean, it was nothing,” Foer said. “I had absolutely no experience there at all. It was just a springboard for my book. It was like an empty swimming pool that had to be filled up.” Total time spent getting inspiration for his novel: three days.

5.

Ben Fountain did not make the decision to quit the law and become a writer all by himself. He is married and has a family. He met his wife, Sharon, when they were both in law school at Duke. When he was doing real-estate work at Akin, Gump, she was on the partner track in the tax practice at Thompson & Knight. The two actually worked in the same building in downtown Dallas. They got married in 1985, and had a son in April of 1987. Sharie, as Fountain calls her, took four months of maternity leave before returning to work. She made partner by the end of that year.

“We had our son in a day care downtown,” she recalls. “We would drive in together, one of us would take him to day care, the other one would go to work. One of us would pick him up, and then, somewhere around eight o’clock at night, we would have him bathed, in bed, and then we hadn’t even eaten yet, and we’d be looking at each other, going, ‘This is just the beginning.’” She made a face. “That went on for maybe a month or two, and Ben’s like, ‘I don’t know how people do this.’ We both agreed that continuing at that pace was probably going to make us all miserable. Ben said to me, ‘Do you want to stay home?’ Well, I was pretty happy in my job, and he wasn’t, so as far as I was concerned it didn’t make any sense for me to stay home. And I didn’t have anything besides practicing law that I really wanted to do, and he did. So I said, ‘Look, can we do this in a way that we can still have some day care and so you can write?’ And so we did that.”

Ben could start writing at seven-thirty in the morning because Sharie took their son to day care. He stopped working in the afternoon because that was when he had to pick him up, and then he did the shopping and the household chores. In 1989, they had a second child, a daughter. Fountain was a full-fledged North Dallas stay-at-home dad.

“When Ben first did this, we talked about the fact that it might not work, and we talked about, generally, ‘When will we know that it really isn’t working?’ and I’d say, ‘Well, give it ten years,’” Sharie recalled. To her, ten years didn’t seem unreasonable. “It takes a while to decide whether you like something or not,” she says. And when ten years became twelve and then fourteen and then sixteen, and the kids were off in high school, she stood by him, because, even during that long stretch when Ben had nothing published at all, she was confident that he was getting better. She was fine with the trips to Haiti, too. “I can’t imagine writing a novel about a place you haven’t at least tried to visit,” she says. She even went with him once, and on the way into town from the airport there were people burning tires in the middle of the road.

“I was making pretty decent money, and we didn’t need two incomes,” Sharie went on. She has a calm, unflappable quality about her. “I mean, it would have been nice, but we could live on one.”

Sharie was Ben’s wife. But she was also—to borrow a term from long ago—his patron. That word has a condescending edge to it today, because we think it far more appropriate for artists (and everyone else for that matter) to be supported by the marketplace. But the marketplace works only for people like Jonathan Safran Foer, whose art emerges, fully realized, at the beginning of their career, or Picasso, whose talent was so blindingly obvious that an art dealer offered him a hundred-and-fifty-franc-a-month stipend the minute he got to Paris, at age twenty. If you are the type of creative mind that starts without a plan, and has to experiment and learn by doing, you need someone to see you through the long and difficult time it takes for your art to reach its true level.

This is what is so instructive about any biography of Cézanne. Accounts of his life start out being about Cézanne, and then quickly turn into the story of Cézanne’s circle. First and foremost is always his best friend from childhood, the writer Emile Zola, who convinces the awkward misfit from the provinces to come to Paris, and who serves as his guardian and protector and coach through the long, lean years.

Here is Zola, already in Paris, in a letter to the young Cézanne back in Provence. Note the tone, more paternal than fraternal:

You ask me an odd question. Of course one can work here, as anywhere else, if one has the will. Paris offers, further, an advantage you can’t find elsewhere: the museums in which you can study the old masters from 11 to 4. This is how you must divide your time. From 6 to 11 you go to a studio to paint from a live model; you have lunch, then from 12 to 4 you copy, in the Louvre or the Luxembourg, whatever masterpiece you like. That will make up nine hours of work. I think that ought to be enough.

Zola goes on, detailing exactly how Cézanne could manage financially on a monthly stipend of a hundred and twenty-five francs:

I’ll reckon out for you what you should spend. A room at 20 francs a month; lunch at 18 sous and dinner at 22, which makes two francs a day, or 60 francs a month…Then you have the studio to pay for: the Atelier Suisse, one of the least expensive, charges, I think, 10 francs. Add 10 francs for canvas, brushes, colors; that makes 100. So you’ll have 25 francs left for laundry, light, the thousand little needs that turn up.

Camille Pissarro was the next critical figure in Cézanne’s life. It was Pissarro who took Cézanne under his wing and taught him how to be a painter. For years, there would be periods in which they went off into the country and worked side by side.

Then there was Ambroise Vollard, the sponsor of Cézanne’s first one-man show, at the age of fifty-six. At the urging of Pissarro, Renoir, Degas, and Monet, Vollard hunted down Cézanne in Aix. He spotted a still-life in a tree, where it had been flung by Cézanne in disgust. He poked around the town, putting the word out that he was in the market for Cézanne’s canvases. In Lost Earth: A Life of Cézanne, the biographer Philip Callow writes about what happened next:

Before long someone appeared at his hotel with an object wrapped in a cloth. He sold the picture for 150 francs, which inspired him to trot back to his house with the dealer to inspect several more magnificent Cézannes. Vollard paid a thousand francs for the job lot, then on the way out was nearly hit on the head by a canvas that had been overlooked, dropped out the window by the man’s wife. All the pictures had been gathering dust, half buried in a pile of junk in the attic.

All this came before Vollard agreed to sit 150 times, from eight in the morning to eleven-thirty, without a break, for a picture that Cézanne disgustedly abandoned. Once, Vollard recounted in his memoir, he fell asleep, and toppled off the makeshift platform. Cézanne berated him, incensed: “Does an apple move?” This is called friendship.

Finally, there was Cézanne’s father, the banker Louis-Auguste. From the time Cézanne first left Aix, at the age of twenty-two, Louis-Auguste paid his bills, even when Cézanne gave every indication of being nothing more than a failed dilettante. But for Zola, Cézanne would have remained an unhappy banker’s son in Provence; but for Pissarro, he would never have learned how to paint; but for Vollard (at the urging of Pissarro, Renoir, Degas, and Monet), his canvases would have rotted away in some attic; and, but for his father, Cézanne’s long apprenticeship would have been a financial impossibility. That is an extraordinary list of patrons. The first three—Zola, Pissarro, and Vollard—would have been famous even if Cézanne had never existed, and the fourth was an unusually gifted entrepreneur who left Cézanne four hundred thousand francs when he died. Cézanne didn’t just have help. He had a dream team in his corner.

This is the final lesson of the late bloomer: his or her success is highly contingent on the efforts of others. In biographies of Cézanne, Louis-Auguste invariably comes across as a kind of grumpy philistine, who didn’t appreciate his son’s genius. But Louis-Auguste didn’t have to support Cézanne all those years. He would have been within his rights to make his son get a real job, just as Sharie might well have said no to her husband’s repeated trips to the chaos of Haiti. She could have argued that she had some right to the lifestyle of her profession and status—that she deserved to drive a BMW, which is what power couples in North Dallas drive, instead of a Honda Accord, which is what she settled for.

But she believed in her husband’s art, or perhaps, more simply, she believed in her husband, the same way Zola and Pissarro and Vollard and—in his own querulous way—Louis-Auguste must have believed in Cézanne. Late bloomers’ stories are invariably love stories, and this may be why we have such difficulty with them. We’d like to think that mundane matters like loyalty, steadfastness, and the willingness to keep writing checks to support what looks like failure have nothing to do with something as rarefied as genius. But sometimes genius is anything but rarefied; sometimes it’s just the thing that emerges after twenty years of working at your kitchen table.

“Sharie never once brought up money, not once—never,” Fountain said. She was sitting next to him, and he looked at her in a way that made it plain that he understood how much of the credit for Brief Encounters belonged to his wife. His eyes welled up with tears. “I never felt any pressure from her,” he said. “Not even covert, not even implied.”

October 20, 2008

Most Likely to Succeed

HOW DO WE HIRE WHEN WE CAN’T TELL WHO’S RIGHT FOR THE JOB?

1.

On the day of the big football game between the University of Missouri Tigers and the Cowboys of Oklahoma State, a football scout named Dan Shonka sat in his hotel in Columbia, Missouri, with a portable DVD player. Shonka has worked for three National Football League teams. Before that, he was a football coach, and before that, he played linebacker—although, he says, “that was three knee operations and a hundred pounds ago.” Every year, he evaluates somewhere between eight hundred and twelve hundred players around the country, helping professional teams decide whom to choose in the college draft, which means that over the last thirty years he has probably seen as many football games as anyone else in America. In his DVD player was his homework for the evening’s big game—an edited video of the Tigers’ previous contest, against the University of Nebraska Cornhuskers.

Shonka methodically made his way through the video, stopping and rewinding whenever he saw something that caught his eye. He liked Jeremy Maclin and Chase Coffman, two of the Mizzou receivers. He loved William Moore, the team’s bruising strong safety. But most of all, he was interested in the Tigers’ quarterback and star, a stocky, strong-armed senior named Chase Daniel.

“I like to see that the quarterback can hit a receiver in stride so he doesn’t have to slow for the ball,” Shonka began. He had a stack of evaluation forms next to him, and as he watched the game, he was charting and grading every throw that Daniel made. “Then judgment. Hey, if it’s not there, throw it away and play another day. Will he stand in there and take a hit, with a guy breathing down his face? Will he be able to step right in there, throw, and still take that hit? Does the guy throw better when he’s in the pocket, or does he throw equally well when he’s on the move? You want a great competitor. Durability. Can they hold up, their strength, toughness? Can they make big plays? Can they lead a team down the field and score late in the game? Can they see the field? When your team’s way ahead, that’s fine. But when you’re getting your ass kicked, I want to see what you’re going to do.”

He pointed to his screen. Daniel had thrown a dart, and, just as he did, a defensive player had hit him squarely. “See how he popped up?” Shonka said. “He stood right there and threw the ball in the face of that rush. This kid has got a lot of courage.” Daniel was six feet tall and weighed 225 pounds: thick through the chest and trunk. He carried himself with a self-assurance that bordered on cockiness. He threw quickly and in rhythm. He nimbly evaded defenders. He made short throws with touch and longer throws with accuracy. By the game’s end, he had completed an astonishing 78 percent of his passes, and handed Nebraska its worst home defeat in fifty-three years. “He can zip it,” Shonka said. “He can really gun when he has to.” Shonka had seen all the promising college quarterbacks, charted and graded their throws, and to his mind Daniel was special: “He might be one of the best college quarterbacks in the country.”

But then Shonka began to talk about when he was on the staff of the Philadelphia Eagles, in 1999. Five quarterbacks were taken in the first round of the college draft that year, and each looked as promising as Chase Daniel did now. But only one of them, Donovan McNabb, ended up fulfilling that promise. Of the rest, one descended into mediocrity after a decent start. Two were complete busts, and the last was so awful that after failing out of the NFL he ended up failing out of the Canadian Football League as well.

The year before, the same thing happened with Ryan Leaf, who was the Chase Daniel of 1998. The San Diego Chargers made him the second player taken overall in the draft, and gave him an $11 million signing bonus. Leaf turned out to be terrible. In 2002, it was Joey Harrington’s turn. Harrington was a golden boy out of the University of Oregon, and the third player taken in the draft. Shonka still can’t get over what happened to him.

“I tell you, I saw Joey live,” he said. “This guy threw lasers, he could throw under tight spots, he had the arm strength, he had the size, he had the intelligence.” Shonka got as misty as a 280-pound ex-linebacker in a black tracksuit can get. “He’s a concert pianist, you know? I really—I mean, I really—liked Joey.” And yet Harrington’s career consisted of a failed stint with the Detroit Lions and a slide into obscurity. Shonka looked back at the screen, where the young man he felt might be the best quarterback in the country was marching his team up and down the field. “How will that ability translate to the National Football League?” He shook his head slowly. “Shoot.”

This is the quarterback problem. There are certain jobs where almost nothing you can learn about candidates before they start predicts how they’ll do once they’re hired. So how do we know whom to choose in cases like that? In recent years, a number of fields have begun to wrestle with this problem, but none with such profound social consequences as the profession of teaching.

2.

One of the most important tools in contemporary educational research is value-added analysis. It uses standardized test scores to look at how much the academic performance of students in a given teacher’s classroom changes between the beginning and the end of the school year. Suppose that Mrs. Brown and Mr. Smith both teach a classroom of third graders who score at the fiftieth percentile on math and reading tests on the first day of school in September. When the students are retested in June, Mrs. Brown’s class scores at the seventieth percentile, while Mr. Smith’s students have fallen to the fortieth percentile. That change in the students’ rankings, value-added theory says, is a meaningful indicator of how much more effective Mrs. Brown is as a teacher than Mr. Smith.
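
To make the bookkeeping concrete, here is a minimal sketch of the calculation as just described; the class names and percentile figures are the hypothetical ones from the Brown-and-Smith example, not real data.

```python
# A minimal sketch of the value-added bookkeeping described above. The
# class names and percentile figures are the hypothetical ones from the
# Brown-and-Smith example, not real data.

def value_added(start_percentile, end_percentile):
    """Change in a class's percentile ranking over one school year."""
    return end_percentile - start_percentile

classes = {
    "Mrs. Brown": (50, 70),  # 50th percentile in September, 70th in June
    "Mr. Smith": (50, 40),   # 50th percentile in September, 40th in June
}

for teacher, (start, end) in classes.items():
    print(f"{teacher}: {value_added(start, end):+d} percentile points")
# Mrs. Brown: +20 percentile points
# Mr. Smith: -10 percentile points
```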

It’s only a crude measure, of course. A teacher is not solely responsible for how much is learned in a classroom, and not everything of value that a teacher imparts to his or her students can be captured on a standardized test. Nonetheless, if you follow Brown and Smith for three or four years, their effect on their students’ test scores starts to become predictable: with enough data, it is possible to identify who the very good teachers are and who the very poor teachers are. What’s more—and this is the finding that has galvanized the educational world—the difference between good teachers and poor teachers turns out to be vast.

Eric Hanushek, an economist at Stanford, estimates that the students of a very bad teacher will learn, on average, half a year’s worth of material in one school year. The students in the class of a very good teacher will learn a year and a half’s worth of material. That difference amounts to a year’s worth of learning in a single year. Teacher effects dwarf school effects: your child is actually better off in a bad school with an excellent teacher than in an excellent school with a bad teacher. Teacher effects are also much stronger than class-size effects. You’d have to cut the average class almost in half to get the same boost that you’d get if you switched from an average teacher to a teacher in the eighty-fifth percentile. And remember that a good teacher costs as much as an average one, whereas halving class size would require that you build twice as many classrooms and hire twice as many teachers.
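
Hanushek’s figures are easy to check. The sketch below uses only the per-year numbers quoted above (half a year of material versus a year and a half) and compounds them; the three-year horizon is an assumption chosen purely for illustration.

```python
# Illustrative arithmetic for the teacher-quality gap described above.
# The per-year figures (0.5 and 1.5 years of material) come from the
# text; the three-year horizon is an assumption, purely for illustration.

BAD_TEACHER = 0.5   # years of material a very bad teacher's class learns per year
GOOD_TEACHER = 1.5  # years of material a very good teacher's class learns per year

years = 3
per_year_gap = GOOD_TEACHER - BAD_TEACHER
print(f"per-year difference: {per_year_gap:.1f} year(s) of material")
print(f"after {years} consecutive years: {per_year_gap * years:.1f} year(s) of material")
# per-year difference: 1.0 year(s); after three years, a 3.0-year gap.
```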

Hanushek recently did a back-of-the-envelope calculation about what even a rudimentary focus on teacher quality could mean for the United States. If you rank the countries of the world in terms of the academic performance of their schoolchildren, the United States is just below average, half a standard deviation below a clump of relatively high-performing countries like Canada and Belgium. According to Hanushek, the United States could close that gap simply by replacing the bottom 6 percent to 10 percent of public-school teachers with teachers of average quality. After years of worrying about issues like school funding levels, class size, and curriculum design, many reformers have come to the conclusion that nothing matters more than finding people with the potential to be great teachers. But there’s a hitch: no one knows what a person with the potential to be a great teacher looks like. The school system has a quarterback problem.

3.

Kickoff time for Missouri’s game against Oklahoma State was seven o’clock. It was a perfect evening for football: cloudless skies and a light fall breeze. For hours, fans had been tailgating in the parking lots around the stadium. Cars lined the roads leading to the university, many with fuzzy yellow-and-black Tiger tails hanging from their trunks. It was one of Mizzou’s biggest games in years. The Tigers were undefeated and had a chance to become the No. 1 college football team in the country. Shonka made his way through the milling crowds and took a seat in the press box. Below him, the players on the field looked like pieces on a chessboard.

The Tigers held the ball first. Chase Daniel stood a good seven yards behind his offensive line. He had five receivers, two to his left and three to his right, spaced from one side of the field to the other. His linemen were widely spaced as well. In play after play, Daniel caught the snap from his center, planted his feet, and threw the ball in quick seven- and eight-yard diagonal passes to one of his five receivers.

The style of offense that the Tigers run is called the spread, and most of the top quarterbacks in college football—the players who will be drafted into the pros—are spread quarterbacks. By spacing out the offensive linemen and wide receivers, the system makes it easy for the quarterback to figure out the intentions of the opposing defense before the ball is snapped: he can look up and down the line, “read” the defense, and decide where to throw the ball before anyone has moved a muscle. Daniel had been playing in the spread since high school; he was its master. “Look how quickly he gets the ball out,” Shonka said. “You can hardly go a thousand and one, a thousand and two, and it’s out of his hand. He knows right where he’s going. When everyone is spread out like that, the defense can’t disguise its coverage. Chase knows right away what they are going to do. The system simplifies the quarterback’s decisions.”

But for Shonka this didn’t help matters. It had always been hard to predict how a college quarterback would fare in the pros. The professional game was, simply, faster and more complicated. With the advent of the spread, though, the correspondence between the two levels of play had broken down almost entirely. NFL teams don’t run the spread. They can’t. The defenders in the pros are so much faster than their college counterparts that they would shoot through those big gaps in the offensive line and flatten the quarterback. In the NFL, the offensive line is bunched closely together. Daniel wouldn’t have five receivers. Most of the time, he’d have just three or four. He wouldn’t have the luxury of standing seven yards behind the center, planting his feet, and knowing instantly where to throw. He’d have to crouch right behind the center, take the snap directly, and run backward before planting his feet to throw. The onrushing defenders wouldn’t be seven yards away. They would be all around him, from the start. The defense would no longer have to show its hand, because the field would not be so spread out. It could now disguise its intentions. Daniel wouldn’t be able to read the defense before the snap was taken. He’d have to read it in the seconds after the play began.

“In the spread, you see a lot of guys wide open,” Shonka said. “But when a guy like Chase goes to the NFL, he’s never going to see his receivers that open—only in some rare case, like someone slips or there’s a bust in the coverage. When that ball’s leaving your hands in the pros, if you don’t use your eyes to move the defender a little bit, they’ll break on the ball and intercept it. The athletic ability that they’re playing against in the league is unbelievable.”

As Shonka talked, Daniel was moving his team down the field. But he was almost always throwing those quick, diagonal passes. In the NFL, he would have to do much more than that—he would have to throw long, vertical passes over the top of the defense. Could he make that kind of throw? Shonka didn’t know. There was also the matter of his height. Six feet was fine in a spread system, where the big gaps in the offensive line gave Daniel plenty of opportunity to throw the ball and see downfield. But in the NFL, there wouldn’t be gaps, and the linemen rushing at him would be six five, not six one.

“I wonder,” Shonka went on. “Can he see? Can he be productive in a new kind of offense? How will he handle that? I’d like to see him set up quickly from center. I’d like to see his ability to read coverages that are not in the spread. I’d like to see him in the pocket. I’d like to see him move his feet. I’d like to see him do a deep dig, or deep comeback. You know, like a throw twenty to twenty-five yards down the field.”

It was clear that Shonka didn’t feel the same hesitancy in evaluating the other Mizzou stars—the safety Moore, the receivers Maclin and Coffman. The game that they would play in the pros would also be different from the game they were playing in college, but the difference was merely one of degree. They had succeeded at Missouri because they were strong and fast and skilled, and these traits translate in kind to professional football.

A college quarterback joining the NFL, by contrast, has to learn to play an entirely new game. Shonka began to talk about Tim Couch, the quarterback taken first in that legendary draft of 1999. Couch set every record imaginable in his years at the University of Kentucky. “They used to put five garbage cans on the field,” Shonka recalled, shaking his head, “and Couch would stand there and throw and just drop the ball into every one.” But Couch was a flop in the pros. It wasn’t that professional quarterbacks didn’t need to be accurate. It was that the kind of accuracy required to do the job well could be measured only in a real NFL game.

Similarly, all quarterbacks drafted into the pros are required to take an IQ test—the Wonderlic Personnel Test. The theory behind the test is that the pro game is so much more cognitively demanding than the college game that high intelligence should be a good predictor of success. But when the economists David Berri and Rob Simmons analyzed the scores—which are routinely leaked to the press—they found that Wonderlic scores are all but useless as predictors. Of the five quarterbacks taken in round one of the 1999 draft, Donovan McNabb, the only one with a shot at the Hall of Fame, had the lowest Wonderlic score. And who else had IQ scores in the same range as McNabb? Dan Marino and Terry Bradshaw, two of the greatest quarterbacks ever to play the game.

We’re used to dealing with prediction problems by going back and looking for better predictors. We now realize that being a good doctor requires the ability to communicate, listen, and empathize—and so there is increasing pressure on medical schools to pay attention to interpersonal skills as well as to test scores. We can have better physicians if we’re just smarter about how we choose medical school students. But no one is saying that Dan Shonka is somehow missing some key ingredient in his analysis; that if he were only more perceptive he could predict Chase Daniel’s career trajectory. The problem with picking quarterbacks is that Chase Daniel’s performance can’t be predicted. The job he’s being groomed for is so particular and specialized that there is no way to know who will succeed at it and who won’t. In fact, Berri and Simmons found no connection between where a quarterback was taken in the draft—that is, how highly he was rated on the basis of his college performance—and how well he played in the pros.
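
A reader who wants to see what a test like Berri and Simmons’s looks like mechanically can sketch it in a few lines: compute a rank correlation between draft position and some measure of professional performance. The six data points below are invented for illustration; only the method, not the numbers, reflects their study.

```python
# A toy illustration of the kind of test described above: a Spearman rank
# correlation between draft position and later performance. The six
# quarterbacks' numbers are invented; only the mechanics are the point.

def ranks(values):
    """Rank values from 1..n (1 = smallest); assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman's rho for tie-free data: 1 - 6*sum(d^2) / (n*(n^2-1))."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

draft_position = [1, 2, 3, 11, 26, 32]                 # hypothetical draft slots
pro_rating = [71.4, 88.2, 63.0, 90.1, 75.6, 80.3]      # hypothetical ratings

print(f"rho = {spearman(draft_position, pro_rating):+.2f}")
# A rho near zero means draft position tells you little about professional
# performance -- which is what Berri and Simmons found in the real data.
```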

The entire time that Chase Daniel was on the field against Oklahoma State, his backup, Chase Patton, stood on the sidelines, watching. Patton didn’t play a single down. In his four years at Missouri, up to that point, he had thrown a total of twenty-six passes. And yet there were people in Shonka’s world who thought that Patton would end up as a better professional quarterback than Daniel. The week of the Oklahoma State game, ESPN The Magazine even put the two players on its cover, with the title “CHASE DANIEL MIGHT WIN THE HEISMAN. HIS BACKUP COULD WIN THE SUPER BOWL”—the Heisman being the trophy given to college football’s best player. Why did everyone like Patton so much? It wasn’t clear. Maybe he looked good in practice. Maybe it was because this season in the NFL a quarterback who had also never started a single college game was playing superbly for the New England Patriots. It sounds absurd to put an athlete on the cover of a magazine for no particular reason. But perhaps that’s just the quarterback problem taken to an extreme. If college performance doesn’t tell us anything, why shouldn’t we value someone who hasn’t had the chance to play as highly as we do someone who plays as well as anyone in the land?

4.

Picture a young preschool teacher, sitting on a classroom floor surrounded by seven children. She is holding an alphabet book, and working through the letters with the children, one by one: “A is for apple…C is for cow.” The session has been taped, and the videotape is being watched by a group of experts, who are charting and grading each of the teacher’s moves.

After thirty seconds, the leader of the group—Bob Pianta, the dean of the University of Virginia’s Curry School of Education—stopped the tape. He pointed to two little girls on the right side of the circle. They are unusually active, leaning into the circle and reaching out to touch the book.

“What I’m struck by is how lively the affect is in this room,” Pianta said. “One of the things the teacher is doing is creating a holding space for that. And what distinguishes her from other teachers is that she flexibly allows the kids to move and point to the book. She’s not rigidly forcing the kids to sit back.”

Pianta’s team has developed a system for evaluating various competencies relating to student-teacher interaction. Among them is “regard for student perspective”; that is, a teacher’s knack for allowing students some flexibility in how they become engaged in the classroom. Pianta stopped and rewound the tape twice, until what the teacher had managed to achieve became plain: the children were active, but somehow the class hadn’t become a free-for-all.

“A lesser teacher would have responded to the kids’ leaning over as misbehavior,” Pianta went on. “ ‘We can’t do this right now. You need to be sitting still.’ She would have turned this off.”

Bridget Hamre, one of Pianta’s colleagues, chimed in: “These are three- and four-year-olds. At this age, when kids show their engagement it’s not like the way we show our engagement, where we look alert. They’re leaning forward and wriggling. That’s their way of doing it. And a good teacher doesn’t interpret that as bad behavior. You can see how hard it is to teach new teachers this idea, because the minute you teach them to have regard for the student’s perspective, they think you have to give up control of the classroom.”

The lesson continued. Pianta pointed out how the teacher managed to personalize the material. “C is for cow” turned into a short discussion of which of the kids had ever visited a farm. “Almost every time a child says something, she responds to it, which is what we describe as teacher sensitivity,” Hamre said.

The teacher then asks the children if anyone’s name begins with that letter. “Calvin,” a boy named Calvin says. The teacher nods, and says, “Calvin starts with C.” A little girl in the middle says, “Me!” The teacher turns to her. “Your name’s Venisha. Letter V. Venisha.”

It was a key moment. Of all the teacher elements analyzed by the Virginia group, feedback—a direct, personal response by a teacher to a specific statement by a student—seems to be most closely linked to academic success. Not only did the teacher catch the “Me!” amid the wiggling and tumult; she addressed it directly.

“Mind you, that’s not great feedback,” Hamre said. “High-quality feedback is where there is a back-and-forth exchange to get a deeper understanding.” The perfect way to handle that moment would have been for the teacher to pause and pull out Venisha’s name card, point to the letter V, show her how different it is from C, and make the class sound out both letters. But the teacher didn’t do that—either because it didn’t occur to her or because she was distracted by the wiggling of the girls to her right.

“On the other hand, she could have completely ignored the girl, which happens a lot,” Hamre went on. “The other thing that happens a lot is the teacher will just say, ‘You’re wrong.’ Yes-no feedback is probably the predominant kind of feedback, which provides almost no information for the kid in terms of learning.”

Pianta showed another tape, of a nearly identical situation: a circle of preschoolers around a teacher. The lesson was about how we can tell when someone is happy or sad. The teacher began by acting out a short conversation between two hand puppets, Henrietta and Twiggle: Twiggle is sad until Henrietta shares some watermelon with him.

“The idea that the teacher is trying to get across is that you can tell by looking at somebody’s face how they’re feeling, whether they’re feeling sad or happy,” Hamre said. “What kids of this age tend to say is you can tell how they’re feeling because of something that happened to them. They lost their puppy and that’s why they’re sad. They don’t really get this idea. So she’s been challenged, and she’s struggling.”

The teacher begins, “Remember when we did something and we drew our face?” She touches her face, pointing out her eyes and mouth. “When somebody is happy, their face tells us that they’re happy. And their eyes tell us.” The children look on blankly. The teacher plunges on: “Watch, watch.” She smiles broadly. “This is happy! How can you tell that I’m happy? Look at my face. Tell me what changes about my face when I’m happy. No, no, look at my face…No…”

A little girl next to her says, “Eyes,” providing the teacher with an opportunity to use one of her students to draw the lesson out. But the teacher doesn’t hear her. Again, she asks, “What’s changed about my face?” She smiles and she frowns, as if she can reach the children by sheer force of repetition. Pianta stopped the tape. One problem, he pointed out, was that Henrietta made Twiggle happy by sharing watermelon with him, which doesn’t illustrate what the lesson is about.

“You know, a better way to handle this would be to anchor something around the kids,” Pianta said. “She should ask, ‘What makes you feel happy?’ The kids could answer. Then she could say, ‘Show me your face when you have that feeling. OK, what does So-and-So’s face look like? Now tell me what makes you sad. Show me your face when you’re sad. Oh, look, her face changed!’ You’ve basically made the point. And then you could have the kids practice, or something. But this is going to go nowhere.”

“What’s changed about my face?” the teacher repeated, for what seemed like the hundredth time. One boy leaned forward into the circle, trying to engage himself in the lesson, in the way that little children do. His eyes were on the teacher. “Sit up!” she snapped at him.

As Pianta played one tape after another, the patterns started to become clear. Here was a teacher who read out sentences in a spelling test, and every sentence came from her own life—“I went to a wedding last week”—which meant she was missing an opportunity to say something that engaged her students. Another teacher walked over to a computer to do a PowerPoint presentation, only to realize that she hadn’t turned it on. As she waited for it to boot up, the classroom slid into chaos.

Then there was the superstar—a young high-school math teacher in jeans and a green polo shirt. “So let’s see,” he began, standing up at the blackboard. “Special right triangles. We’re going to do practice with this, just throwing out ideas.” He drew two triangles. “Label the length of the side, if you can. If you can’t, we’ll all do it.” He was talking and moving quickly, which Pianta said might be interpreted as a bad thing, because this was trigonometry. It wasn’t easy material. But his energy seemed to infect the class. And all the time he offered the promise of help. If you can’t, we’ll all do it. In a corner of the room was a student named Ben, who’d evidently missed a few classes. “See what you can remember, Ben,” the teacher said. Ben was lost. The teacher quickly went to his side: “I’m going to give you a way to get to it.” He made a quick suggestion: “How about that?” Ben went back to work. The teacher slipped over to the student next to Ben, and glanced at her work. “That’s all right!” He went to a third student, then a fourth. Two and a half minutes into the lesson—the length of time it took that subpar teacher to turn on the computer—he had already laid out the problem, checked in with nearly every student in the class, and was back at the blackboard, to take the lesson a step further.

“In a group like this, the standard MO would be: he’s at the board, broadcasting to the kids, and has no idea who knows what he’s doing and who doesn’t know,” Pianta said. “But he’s giving individualized feedback. He’s off the charts on feedback.” Pianta and his team watched in awe.

5.

Educational-reform efforts typically start with a push for higher standards for teachers—that is, for the academic and cognitive requirements for entering the profession to be as stiff as possible. But after you’ve watched Pianta’s tapes and seen how complex the elements of effective teaching are, this emphasis on book smarts suddenly seems peculiar. The preschool teacher with the alphabet book was sensitive to her students’ needs and knew how to let the two girls on the right wiggle and squirm without disrupting the rest of the students; the trigonometry teacher knew how to complete a circuit of his classroom in two and a half minutes and make everyone feel that he or she was getting his personal attention. But these aren’t cognitive skills.

A group of researchers—Thomas J. Kane, an economist at Harvard’s school of education; Douglas Staiger, an economist at Dartmouth; and Robert Gordon, a policy analyst at the Center for American Progress—have investigated whether it helps to have a teacher who has earned a teaching certification or a master’s degree. Both are expensive, time-consuming credentials that almost every district expects teachers to acquire; neither makes a difference in the classroom. Test scores, graduate degrees, and certifications—as much as they appear related to teaching prowess—turn out to be about as useful in predicting success as having a quarterback throw footballs into a bunch of garbage cans.

Another educational researcher, Jacob Kounin, once did an analysis of “desist” events, in which a teacher has to stop some kind of misbehavior. In one instance, “Mary leans toward the table to her right and whispers to Jane. Both she and Jane giggle. The teacher says, ‘Mary and Jane, stop that!’” That’s a desist event. But how a teacher desists—her tone of voice, her attitudes, her choice of words—appears to make no difference at all in maintaining an orderly classroom. How can that be? Kounin went back over the videotape and noticed that forty-five seconds before Mary whispered to Jane, Lucy and John had started whispering. Then Robert had noticed and joined in, making Jane giggle, whereupon Jane said something to John. Then Mary whispered to Jane. It was a contagious chain of misbehavior, and what really was significant was not how a teacher stopped the deviancy at the end of the chain but whether she was able to stop the chain before it started. Kounin called that ability withitness, which he defined as “a teacher’s communicating to the children by her actual behavior (rather than by verbally announcing: ‘I know what’s going on’) that she knows what the children are doing, or has the proverbial eyes in the back of her head.” It stands to reason that to be a great teacher you have to have withitness. But how do you know whether someone has withitness until she stands up in front of a classroom of twenty-five wiggly Janes, Lucys, Johns, and Roberts and tries to impose order?

6.

Perhaps no profession has taken the implications of the quarterback problem more seriously than the financial-advice field, and the experience of financial advisers is a useful guide to what could happen in teaching as well. There are no formal qualifications for entering the field except a college degree. Financial-services firms don’t look for only the best students or require graduate degrees or specify a list of prerequisites. No one knows beforehand what makes a high-performing financial adviser different from a low-performing one, so the field throws the door wide open.

“A question I ask is, ‘Give me a typical day,’” Ed Deutschlander, the co-president of North Star Resource Group, in Minneapolis, says. “If that person says, ‘I get up at five-thirty, hit the gym, go to the library, go to class, go to my job, do homework until eleven,’ that person has a chance.” Deutschlander, in other words, begins by looking for the same general traits that every corporate recruiter looks for.

Deutschlander says that last year his firm interviewed about a thousand people, and found forty-nine it liked, a ratio of twenty interviewees to one candidate. Those candidates were put through a four-month “training camp,” in which they tried to act like real financial advisers. “They should be able to obtain in that four-month period a minimum of ten official clients,” Deutschlander said. “If someone can obtain ten clients, and is able to maintain a minimum of ten meetings a week, that means that person has gathered over a hundred introductions in that four-month period. Then we know that person is at least fast enough to play this game.”

Of the forty-nine people invited to the training camp, twenty-three made the cut and were hired as apprentice advisers. Then the real sorting began. “Even with the top performers, it really takes three to four years to see whether someone can make it,” Deutschlander says. “You’re just scratching the surface at the beginning. Four years from now, I expect to hang on to at least thirty to forty percent of that twenty-three.”
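
Laid end to end, Deutschlander’s numbers describe a winnowing funnel. A small sketch, using only the stage counts quoted above; the survivor range is his thirty-to-forty-percent estimate applied to the twenty-three hires.

```python
# The North Star winnowing funnel, using the stage counts quoted above.
# Only the counts come from the text; the helper itself is illustrative.

funnel = [
    ("interviewed", 1000),
    ("invited to training camp", 49),
    ("hired as apprentices", 23),
]

for (stage, n), (_, prev) in zip(funnel[1:], funnel):
    print(f"{stage}: {n} ({n / prev:.1%} of previous stage)")

# Deutschlander expects to keep 30-40 percent of the 23 after four years:
low, high = round(0.30 * 23), round(0.40 * 23)
print(f"expected survivors after four years: {low}-{high}")
# roughly 7 to 9 advisers out of the original thousand interviews
```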

People like Deutschlander are referred to as gatekeepers, a title that suggests that those at the door of a profession are expected to discriminate—to select who gets through the gate and who doesn’t. But Deutschlander sees his role as keeping the gate as wide open as possible: to find ten new financial advisers, he’s willing to interview a thousand people. The equivalent of that approach in the NFL would be for a team to give up trying to figure out who the best college quarterback is, and, instead, try out three or four good candidates.

In teaching, the implications are even more profound. They suggest that we shouldn’t be raising standards. We should be lowering them, because there is no point in raising standards if standards don’t track with what we care about. Teaching should be open to anyone with a pulse and a college degree—and teachers should be judged after they have started their jobs, not before. That means that the profession needs to start the equivalent of Ed Deutschlander’s training camp. It needs an apprenticeship system that allows candidates to be rigorously evaluated. Kane and Staiger have calculated that, given the enormous differences between the top and the bottom of the profession, you’d probably have to try out four candidates to find one good teacher. That means tenure can’t be routinely awarded, the way it is now. Currently, the salary structure of the teaching profession is highly rigid, and that would also have to change in a world where we want to rate teachers on their actual performance. An apprentice should get apprentice wages. But if we find eighty-fifth-percentile teachers who can teach a year and a half’s material in one year, we’re going to have to pay them a lot—both because we want them to stay and because the only way to get people to try out for what will suddenly be a high-risk profession is to offer those who survive the winnowing a healthy reward.

Is this solution to teaching’s quarterback problem politically possible? Taxpayers might well balk at the costs of trying out four teachers to find one good one. Teachers’ unions have been resistant to even the slightest move away from the current tenure arrangement. But all the reformers want is for the teaching profession to copy what firms like North Star have been doing for years. Deutschlander interviews a thousand people to find ten advisers. He spends large amounts of money to figure out who has the particular mixture of abilities to do the job. “Between hard and soft costs,” he says, “most firms sink between a hundred thousand dollars and two hundred and fifty thousand dollars on someone in their first three or four years,” and in most cases, of course, that investment comes to naught. But if you are willing to make that kind of investment and show that kind of patience, you wind up with a truly high-performing financial adviser. “We have a hundred and twenty-five full-time advisers,” Deutschlander says. “Last year, we had seventy-one of them qualify for the Million Dollar Round Table”—the industry’s association of its most successful practitioners. “We’re seventy-one out of a hundred and twenty-five in that elite group.” What does it say about a society that it devotes more care and patience to the selection of those who handle its money than of those who handle its children?
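
One can put a rough price on that patience. The sketch below combines Deutschlander’s hard-and-soft-cost range with the cohort numbers above; the midpoint cost and the survivor count of eight are assumptions layered on his figures, purely back-of-the-envelope.

```python
# Back-of-the-envelope cost per successful adviser, using the figures
# quoted above. The midpoint cost and the survivor count are assumptions
# layered on Deutschlander's numbers, purely for illustration.

cost_low, cost_high = 100_000, 250_000  # sunk per hire over 3-4 years
hires = 23                              # apprentices from one cohort
survivors = 8                           # midpoint of the 7-9 expected

midpoint_cost = (cost_low + cost_high) / 2
total_invested = hires * midpoint_cost
print(f"total invested in cohort: ${total_invested:,.0f}")
print(f"per surviving adviser:    ${total_invested / survivors:,.0f}")
# Roughly half a million dollars of patience per high performer.
```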

7.

Midway through the fourth quarter of the Oklahoma State-Missouri game, the Tigers were in trouble. For the first time all year, they were behind late in the game. They needed to score, or they’d lose any chance of a national championship. Daniel took the snap from his center and planted his feet to pass. His receivers were covered. He began to run. The Oklahoma State defenders closed in on him. He was under pressure, something that rarely happened to him in the spread. Desperate, he heaved the ball downfield, right into the arms of a Cowboy defender.

Shonka jumped up. “That’s not like him!” he cried out. “He doesn’t throw stuff up like that.”

Next to Shonka, a scout for the Kansas City Chiefs looked crestfallen. “Chase never throws something up for grabs!”

It was tempting to see Daniel’s mistake as definitive. The spread had broken down. He was finally under pressure. This was what it would be like to be an NFL quarterback, wasn’t it? But there is nothing like being an NFL quarterback except being an NFL quarterback. A prediction, in a field where prediction is not possible, is no more than a prejudice. Maybe that interception means that Daniel won’t be a good professional quarterback, or maybe he made a mistake that he’ll learn from. “In a great big piece of pie,” Shonka said, “that was just a little slice.”

December 15, 2008

Dangerous Minds

CRIMINAL PROFILING MADE EASY

1.

On November 16, 1940, workers at the Consolidated Edison building on West 64th Street in Manhattan found a homemade pipe bomb on a windowsill. Attached was a note: “Con Edison crooks, this is for you.” In September of 1941, a second bomb was found, on 19th Street, just a few blocks from Con Edison’s headquarters, near Union Square. It had been left in the street, wrapped in a sock. A few months later, the New York police received a letter promising to “bring the Con Edison to justice—they will pay for their dastardly deeds.” Sixteen other letters followed, between 1941 and 1946, all written in block letters, many repeating the phrase dastardly deeds and all signed with the initials F.P. In March of 1950, a third bomb—larger and more powerful than the others—was found on the lower level of Grand Central Terminal. The next was left in a phone booth at the New York Public Library. It exploded, as did one placed in a phone booth in Grand Central. In 1954, the Mad Bomber—as he came to be known—struck four times, once in Radio City Music Hall, sending shrapnel throughout the audience. In 1955, he struck six times. The city was in an uproar. The police were getting nowhere. Late in 1956, in desperation, Inspector Howard Finney, of the New York City Police Department’s crime laboratory, and two plainclothesmen paid a visit to a psychiatrist by the name of James Brussel.

Brussel was a Freudian. He lived on 12th Street, in the West Village, and smoked a pipe. In Mexico, early in his career, he had done counterespionage work for the FBI. He wrote many books, including Instant Shrink: How to Become an Expert Psychiatrist in Ten Easy Lessons. Finney put a stack of documents on Brussel’s desk: photographs of unexploded bombs, pictures of devastation, photostats of F.P.’s neatly lettered missives. “I didn’t miss the look in the two plainclothesmen’s eyes,” Brussel writes in his memoir, Casebook of a Crime Psychiatrist. “I’d seen that look before, most often in the Army, on the faces of hard, old-line, field-grade officers who were sure this newfangled psychiatry business was all nonsense.”

He began to leaf through the case materials. For sixteen years, F.P. had been fixated on the notion that Con Ed had done him some terrible injustice. Clearly, he was clinically paranoid. But paranoia takes some time to develop. F.P. had been bombing since 1940, which suggested that he was now middle-aged. Brussel looked closely at the precise lettering of F.P.’s notes to the police. This was an orderly man. He would be cautious. His work record would be exemplary. Further, the language suggested some degree of education. But there was a stilted quality to the word choice and the phrasing. Con Edison was often referred to as the Con Edison. And who still used the expression dastardly deeds? F.P. seemed to be foreign-born. Brussel looked more closely at the letters and noticed that they were all perfect block capitals, except the Ws. They were misshapen, like two U’s. To Brussel’s eye, those Ws looked like a pair of breasts. He flipped to the crime-scene descriptions. When F.P. planted his bombs in movie theaters, he would slit the underside of the seat with a knife and stuff his explosives into the upholstery. Didn’t that seem like a symbolic act of penetrating a woman, or castrating a man—or perhaps both? F.P. had probably never progressed beyond the Oedipal stage. He was unmarried, a loner. Living with a mother figure. Brussel made another leap. F.P. was a Slav. Just as the use of a garrote would have suggested someone of Mediterranean extraction, the bomb-knife combination struck him as Eastern European. Some of the letters had been posted from Westchester County, but F.P. wouldn’t have mailed the letters from his hometown. Still, a number of cities in southeastern Connecticut had a large Slavic population. And didn’t you have to pass through Westchester to get to the city from Connecticut?

Brussel waited a moment, and then, in a scene that has become legendary among criminal profilers, he made a prediction:

“One more thing.” I closed my eyes because I didn’t want to see their reaction. I saw the Bomber: impeccably neat, absolutely proper. A man who would avoid the newer styles of clothing until long custom had made them conservative. I saw him clearly—much more clearly than the facts really warranted. I knew I was letting my imagination get the better of me, but I couldn’t help it.

“One more thing,” I said, my eyes closed tight. “When you catch him—and I have no doubt you will—he’ll be wearing a double-breasted suit.”

“Jesus!” one of the detectives whispered.

“And it will be buttoned,” I said. I opened my eyes. Finney and his men were looking at each other.

“A double-breasted suit,” said the Inspector.

“Yes.”

“Buttoned.”

“Yes.”

He nodded. Without another word, they left.

A month later, George Metesky was arrested by police in connection with the New York City bombings. His name had been changed from Milauskas. He lived in Waterbury, Connecticut, with his two older sisters. He was unmarried. He was unfailingly neat. He attended Mass regularly. He had been employed by Con Edison from 1929 to 1931, and claimed to have been injured on the job. When he opened the door to the police officers, he said, “I know why you fellows are here. You think I’m the Mad Bomber.” It was midnight, and he was in his pajamas. The police asked that he get dressed. When he returned, his hair was combed into a pompadour and his shoes were newly shined. He was also wearing a double-breasted suit—buttoned.

2.

In Inside the Mind of BTK, the eminent FBI criminal profiler John Douglas tells the story of a serial killer who stalked the streets of Wichita, Kansas, in the 1970s and ’80s. Douglas was the model for Agent Jack Crawford in The Silence of the Lambs. He was the protégé of the pioneering FBI profiler Howard Teten, who helped establish the bureau’s Behavioral Science Unit, at Quantico, in 1972, and who was a protégé of Brussel—which, in the close-knit fraternity of profilers, is like being analyzed by the analyst who was analyzed by Freud. To Douglas, Brussel was the father of criminal profiling, and, in both style and logic, Inside the Mind of BTK pays homage to Casebook of a Crime Psychiatrist at every turn.

BTK stood for “Bind, Torture, Kill”—the three words that the killer used to identify himself in his taunting notes to the Wichita police. He had struck first in January 1974, when he killed thirty-eight-year-old Joseph Otero in his home, along with his wife, Julie, their son, Joey, and their eleven-year-old daughter, who was found hanging from a water pipe in the basement with semen on her leg. The following April, he stabbed a twenty-four-year-old woman. In March 1977, he bound and strangled another young woman, and over the next few years, he committed at least four more murders. The city of Wichita was in an uproar. The police were getting nowhere. In 1984, in desperation, two police detectives from Wichita paid a visit to Quantico.

The meeting, Douglas writes, was held in a first-floor conference room of the FBI’s forensic-science building. He was then nearly a decade into his career at the Behavioral Science Unit. His first two bestsellers, Mindhunter: Inside the FBI’s Elite Serial Crime Unit and Obsession: The FBI’s Legendary Profiler Probes the Psyches of Killers, Rapists, and Stalkers and Their Victims and Tells How to Fight Back, were still in the future. Working 150 cases a year, he was on the road constantly, but BTK was never far from his thoughts. “Some nights I’d lie awake, asking myself, ‘Who the hell is this BTK?’” he writes. “What makes a guy like this do what he does? What makes him tick?”

Roy Hazelwood sat next to Douglas. A lean chain-smoker, Hazelwood specialized in sex crimes, and went on to write the bestsellers Dark Dreams and The Evil That Men Do. Beside Hazelwood was an ex-Air Force pilot named Ron Walker. Walker, Douglas writes, was “whip smart” and an “exceptionally quick study.” The three bureau men and the two detectives sat around a massive oak table. “The objective of our session was to keep moving forward until we ran out of juice,” Douglas writes. They would rely on the typology developed by their colleague Robert Ressler, himself the author of the true-crime bestsellers Whoever Fights Monsters and I Have Lived in the Monster. The goal was to paint a picture of the killer—of what sort of man BTK was, and what he did, and where he worked, and what he was like—and with that scene Inside the Mind of BTK begins.

We are now so familiar with crime stories told through the eyes of the profiler that it is easy to lose sight of how audacious the genre is. The traditional detective story begins with the body and centers on the detective’s search for the culprit. Leads are pursued. A net is cast, widening to encompass a bewilderingly diverse pool of suspects: the butler, the spurned lover, the embittered nephew, the shadowy European. That’s a whodunit. In the profiling genre, the net is narrowed. The crime scene doesn’t initiate our search for the killer. It defines the killer for us. The profiler sifts through the case materials, looks off into the distance, and knows. “Generally, a psychiatrist can study a man and make a few reasonable predictions about what the man may do in the future—how he will react to such-and-such a stimulus, how he will behave in such-and-such a situation,” Brussel writes. “What I have done is reverse the terms of the prophecy. By studying a man’s deeds, I have deduced what kind of man he might be.” Look for a middle-aged Slav in a double-breasted suit. Profiling stories aren’t whodunits; they’re hedunits.

In the hedunit, the profiler does not catch the criminal. That’s for local law enforcement. He takes the meeting. Often, he doesn’t write down his predictions. It’s up to the visiting police officers to take notes. He does not feel the need to involve himself in the subsequent investigation, or even, it turns out, to justify his predictions. Once, Douglas tells us, he drove down to the local police station and offered his services in the case of an elderly woman who had been savagely beaten and sexually assaulted. The detectives working the crime were regular cops, and Douglas was a bureau guy, so you can imagine him perched on the edge of a desk, the others pulling up chairs around him.

“ ‘Okay,’ I said to the detectives…‘Here’s what I think,’” Douglas begins. “It’s a sixteen- or seventeen-year-old high school kid…He’ll be disheveled-looking, he’ll have scruffy hair, generally poorly groomed.” He went on: a loner, kind of weird, no girlfriend, lots of bottled-up anger. He comes to the old lady’s house. He knows she’s alone. Maybe he’s done odd jobs for her in the past. Douglas continues:

I pause in my narrative and tell them there’s someone who meets this description out there. If they can find him, they’ve got their offender.

One detective looks at another. One of them starts to smile. “Are you a psychic, Douglas?”

“No,” I say, “but my job would be a lot easier if I were.”

“Because we had a psychic, Beverly Newton, in here a couple of weeks ago, and she said just about the same things.”

You might think that Douglas would bridle at that comparison. He is, after all, an agent of the Federal Bureau of Investigation, who studied with Teten, who studied with Brussel. He is an ace profiler, part of a team that restored the FBI’s reputation for crime-fighting, inspired countless movies, television shows, and bestselling thrillers, and brought the modern tools of psychology to bear on the savagery of the criminal mind—and some cop is calling him a psychic. But Douglas doesn’t object. Instead, he begins to muse on the ineffable origins of his insights, at which point the question arises of what exactly this mysterious art called profiling is, and whether it can be trusted. Douglas writes,

What I try to do with a case is to take in all the evidence I have to work with… and then put myself mentally and emotionally in the head of the offender. I try to think as he does. Exactly how this happens, I’m not sure, any more than the novelists such as Tom Harris who’ve consulted me over the years can say exactly how their characters come to life. If there’s a psychic component to this, I won’t run from it.

3.

In the late 1970s, John Douglas and his FBI colleague Robert Ressler set out to interview the most notorious serial killers in the country. They started in California, since, as Douglas says, “California has always had more than its share of weird and spectacular crimes.” On weekends and days off, over the next months, they stopped by one federal prison after another, until they had interviewed thirty-six murderers.

Douglas and Ressler wanted to know whether there was a pattern that connected a killer’s life and personality with the nature of his crimes. They were looking for what psychologists would call a homology, an agreement between character and action, and after comparing what they learned from the killers with what they already knew about the characteristics of their murders, they became convinced that they’d found one.

Serial killers, they concluded, fall into one of two categories. Some crime scenes show evidence of logic and planning. The victim has been hunted and selected in order to fulfill a specific fantasy. The recruitment of the victim might involve a ruse or a con. The perpetrator maintains control throughout the offense. He takes his time with the victim, carefully enacting his fantasies. He is adaptable and mobile. He almost never leaves a weapon behind. He meticulously conceals the body. Douglas and Ressler, in their respective books, call that kind of crime organized.

In a disorganized crime, the victim isn’t chosen logically. She’s seemingly picked at random and “blitz-attacked,” not stalked and coerced. The killer might grab a steak knife from the kitchen and leave the knife behind. The crime is so sloppily executed that the victim often has a chance to fight back. The crime might take place in a high-risk environment. “Moreover, the disorganized killer has no idea of, or interest in, the personalities of his victims,” Ressler writes in Whoever Fights Monsters. “He does not want to know who they are, and many times takes steps to obliterate their personalities by quickly knocking them unconscious or covering their faces or otherwise disfiguring them.”

Each of these styles, the argument goes, corresponds to a personality type. The organized killer is intelligent and articulate. He feels superior to those around him. The disorganized killer is unattractive and has a poor self-image. He often has some kind of disability. He’s too strange and withdrawn to be married or have a girlfriend. If he doesn’t live alone, he lives with his parents. He has pornography stashed in his closet. If he drives at all, his car is a wreck.

“The crime scene is presumed to reflect the murderer’s behavior and personality in much the same way as furnishings reveal the homeowner’s character,” we’re told in a crime manual that Douglas and Ressler helped write. The more they learned, the more precise the associations became. If the victim was white, the killer would be white. If the victim was old, the killer would be sexually immature.

“In our research, we discovered that… frequently serial offenders had failed in their efforts to join police departments and had taken jobs in related fields, such as security guard or night watchman,” Douglas writes. Given that organized rapists were preoccupied with control, it made sense that they would be fascinated by the social institution that symbolizes control. Out of that insight came another prediction: “One of the things we began saying in some of our profiles was that the UNSUB”—the unknown subject—“would drive a policelike vehicle, say a Ford Crown Victoria or Chevrolet Caprice.”

4.

On the surface, the FBI’s system seems extraordinarily useful. Consider a case study widely used in the profiling literature. The body of a twenty-six-year-old special-education teacher was found on the roof of her Bronx apartment building. She was apparently abducted just after she left her house for work, at six-thirty in the morning. She had been beaten beyond recognition and tied up with her stockings and belt. The killer had mutilated her sexual organs, chopped off her nipples, covered her body with bites, written obscenities across her abdomen, masturbated, and then defecated next to the body.

Let’s pretend that we’re an FBI profiler. First question: race. The victim is white, so let’s call the offender white. Let’s say he’s in his midtwenties to early thirties, which is when the thirty-six men in the FBI’s sample started killing. Is the crime organized or disorganized? Disorganized, clearly. It’s on a rooftop, in the Bronx, in broad daylight—high risk. So what is the killer doing in the building at six-thirty in the morning? He could be some kind of serviceman, or he could live in the neighborhood. Either way, he appears to be familiar with the building. He’s disorganized, though, so he’s not stable. If he is employed, it’s blue-collar work at best. He probably has a prior offense, having to do with violence or sex. His relationships with women will be either nonexistent or deeply troubled. And the mutilation and the defecation are so strange that he’s probably mentally ill or has some kind of substance-abuse problem. How does that sound? As it turns out, it’s spot-on. The killer was Carmine Calabro, age thirty, a single, unemployed, deeply troubled actor who, when he was not in a mental institution, lived with his widowed father on the fourth floor of the building where the murder took place.

But how useful is that profile really? The police already had Calabro on their list of suspects: if you’re looking for the person who killed and mutilated someone on the roof, you don’t really need a profiler to tell you to check out the disheveled, mentally ill guy living with his father on the fourth floor.

That’s why the FBI’s profilers have always tried to supplement the basic outlines of the organized/disorganized system with telling details—something that lets the police zero in on a suspect. In the early 1980s, Douglas gave a presentation to a roomful of police officers and FBI agents in Marin County about the Trailside Killer, who was murdering female hikers in the hills north of San Francisco. In Douglas’s view, the killer was a classic disorganized offender—a blitz attacker, white, early to midthirties, blue collar, probably with “a history of bed-wetting, fire-starting, and cruelty to animals.” Then he went back to how asocial the killer seemed. Why did all the killings take place in heavily wooded areas miles from the road? Douglas reasoned that the killer required such seclusion because he had some condition that he was deeply self-conscious about. Was it something physical, like a missing limb? But then how could he hike miles into the woods and physically overpower his victims? Finally, it came to him: “ ‘Another thing,’ I added after a pregnant pause, ‘the killer will have a speech impediment.’”

And so he did. Now, that’s a useful detail. Or is it? Douglas then tells us that he pegged the offender’s age as early thirties, and he turned out to be fifty. Detectives use profiles to narrow down the range of suspects. It doesn’t do any good to get a specific detail right if you get general details wrong.

In the case of Derrick Todd Lee, the Baton Rouge serial killer, the FBI profile described the offender as a white male blue-collar worker between twenty-five and thirty-five years old who “wants to be seen as someone who is attractive and appealing to women.” The profile went on, “However, his level of sophistication in interacting with women, especially women who are above him in the social strata, is low. Any contact he has had with women he has found attractive would be described by these women as ‘awkward.’” The FBI was right about the killer being a blue-collar male between twenty-five and thirty-five. But Lee turned out to be charming and outgoing, the sort to put on a cowboy hat and snakeskin boots and head for the bars. He was an extrovert with a number of girlfriends and a reputation as a ladies’ man. And he wasn’t white. He was black.

A profile isn’t a test, where you pass if you get most of the answers right. It’s a portrait, and all the details have to cohere in some way if the image is to be helpful. In the mid-nineties, the British Home Office analyzed 184 crimes to see how many times profiles led to the arrest of a criminal. The profile worked in five of those cases. That’s just 2.7 percent, which makes sense if you consider the position of the detective on the receiving end of a profiler’s list of conjectures. Do you believe the stuttering part? Or do you believe the thirty-year-old part? Or do you throw up your hands in frustration?

5.

There is a deeper problem with FBI profiling. Douglas and Ressler didn’t interview a representative sample of serial killers to come up with their typology. They talked to whoever happened to be in the neighborhood. Nor did they interview their subjects according to a standardized protocol. They just sat down and chatted, which isn’t a particularly firm foundation for a psychological system. So you might wonder whether serial killers can really be categorized by their level of organization.

Not long ago, a group of psychologists at the University of Liverpool decided to test the FBI’s assumptions. First, they made a list of crime-scene characteristics generally considered to show organization: perhaps the victim was alive during the sex acts, or the body was posed in a certain way, or the murder weapon was missing, or the body was concealed, or torture and restraints were involved. Then they made a list of characteristics showing disorganization: perhaps the victim was beaten, the body was left in an isolated spot, the victim’s belongings were scattered, or the murder weapon was improvised.

If the FBI was right, they reasoned, the crime-scene details on each of those two lists should co-occur—that is, if you see one or more organized traits in a crime, there should be a reasonably high probability of seeing other organized traits. When they looked at a sample of a hundred serial crimes, however, they couldn’t find any support for the FBI’s distinction. Crimes don’t fall into one camp or the other. It turns out that they’re almost always a mixture of a few key organized traits and a random array of disorganized traits. Laurence Alison, one of the leaders of the Liverpool group and the author of The Forensic Psychologist’s Casebook, told me, “The whole business is a lot more complicated than the FBI imagines.”

Alison and another of his colleagues also looked at homology. If Douglas was right, then a certain kind of crime should correspond to a certain kind of criminal. So the Liverpool group selected a hundred stranger rapes in the United Kingdom, classifying them according to twenty-eight variables, such as whether a disguise was worn, whether compliments were given, whether there was binding, gagging, or blindfolding, whether there was apologizing or the theft of personal property, and so on. They then looked at whether the patterns in the crimes corresponded to attributes of the criminals—like age, type of employment, ethnicity, level of education, marital status, number of prior convictions, type of prior convictions, and drug use. Were rapists who bind, gag, and blindfold more like one another than they were like rapists who, say, compliment and apologize? The answer is no—not even slightly.

“The fact is that different offenders can exhibit the same behaviors for completely different reasons,” Brent Turvey, a forensic scientist who has been highly critical of the FBI’s approach, says. “You’ve got a rapist who attacks a woman in the park and pulls her shirt up over her face. Why? What does that mean? There are ten different things it could mean. It could mean he doesn’t want to see her. It could mean he doesn’t want her to see him. It could mean he wants to see her breasts, he wants to imagine someone else, he wants to incapacitate her arms—all of those are possibilities. You can’t just look at one behavior in isolation.”

A few years ago, Alison went back to the case of the teacher who was murdered on the roof of her building in the Bronx. He wanted to know why, if the FBI’s approach to criminal profiling was based on such simplistic psychology, it continues to have such a sterling reputation. The answer, he suspected, lay in the way the profiles were written, and, sure enough, when he broke down the rooftop-killer analysis, sentence by sentence, he found that it was so full of unverifiable and contradictory and ambiguous language that it could support virtually any interpretation.

Astrologers and psychics have known these tricks for years. The magician Ian Rowland, in his classic The Full Facts Book of Cold Reading, itemizes them one by one, in what could easily serve as a manual for the beginner profiler. First is the Rainbow Ruse—the “statement which credits the client with both a personality trait and its opposite.” (“I would say that on the whole you can be rather a quiet, self-effacing type, but when the circumstances are right, you can be quite the life and soul of the party if the mood strikes you.”) The Jacques Statement, named for the character in As You Like It who gives the Seven Ages of Man speech, tailors the prediction to the age of the subject. To someone in his late thirties or early forties, for example, the psychic says, “If you are honest about it, you often get to wondering what happened to all those dreams you had when you were younger.” There is the Barnum Statement, the assertion so general that anyone would agree, and the Fuzzy Fact, the seemingly factual statement couched in a way that “leaves plenty of scope to be developed into something more specific.” (“I can see a connection with Europe, possibly Britain, or it could be the warmer, Mediterranean part?”) And that’s only the start: there is the Greener Grass technique, the Diverted Question, the Russian Doll, Sugar Lumps, not to mention Forking and the Good Chance Guess—all of which, when put together in skillful combination, can convince even the most skeptical observer that he or she is in the presence of real insight.

“Moving on to career matters, you don’t work with children, do you?” Rowland will ask his subjects, in an example of what he dubs the “Vanishing Negative.”

No, I don’t.

“No, I thought not. That’s not really your role.”

Of course, if the subject answers differently, there’s another way to play the question: “Moving on to career matters, you don’t work with children, do you?”

I do, actually, part time.

“Yes, I thought so.”

After Alison had analyzed the rooftop-killer profile, he decided to play a version of the cold-reading game. He gave the details of the crime, the profile prepared by the FBI, and a description of the offender to a group of senior police officers and forensic professionals in England. How did they find the profile? Highly accurate. Then Alison gave the same packet of case materials to another group of police officers, but this time he invented an imaginary offender, one who was altogether different from Calabro. The new killer was thirty-seven years old. He was an alcoholic. He had recently been laid off from his job with the water board and had met the victim before on one of his rounds. What’s more, Alison claimed, he had a history of violent relationships with women, and prior convictions for assault and burglary. How accurate did a group of experienced police officers find the FBI’s profile when it was matched with the phony offender? Every bit as accurate as when it was matched to the real offender.

James Brussel didn’t really see the Mad Bomber in that pile of pictures and photostats, then. That was an illusion. As the literary scholar Donald Foster pointed out in his 2000 book Author Unknown, Brussel cleaned up his predictions for his memoirs. He actually told the police to look for the bomber in White Plains, sending the NYPD’s bomb unit on a wild goose chase in Westchester County, sifting through local records. Brussel also told the police to look for a man with a facial scar, which Metesky didn’t have. He told them to look for a man with a night job, and Metesky had been largely unemployed since leaving Con Edison in 1931. He told them to look for someone between forty and fifty, and Metesky was over fifty. He told them to look for someone who was an “expert in civil or military ordnance” and the closest Metesky came to that was a brief stint in a machine shop. And Brussel, despite what he wrote in his memoir, never said that the bomber would be a Slav. He actually told the police to look for a man “born and educated in Germany,” a prediction so far off the mark that the Mad Bomber himself was moved to object. At the height of the police investigation, when the New York Journal American offered to print any communications from the Mad Bomber, Metesky wrote in huffily to say that “the nearest to my being ‘Teutonic’ is that my father boarded a liner in Hamburg for passage to this country—about sixty-five years ago.”

The true hero of the case wasn’t Brussel; it was a woman named Alice Kelly, who had been assigned to go through Con Edison’s personnel files. In January 1957, she ran across an employee complaint from the early 1930s: a generator wiper at the Hell Gate plant had been knocked down by a backdraft of hot gases. The worker said that he was injured. The company said that he wasn’t. And in the flood of angry letters from the ex-employee Kelly spotted a threat—to “take justice in my own hands”—that had appeared in one of the Mad Bomber’s letters. The name on the file was George Metesky.

Brussel did not really understand the mind of the Mad Bomber. He seems to have understood only that, if you make a great number of predictions, the ones that were wrong will soon be forgotten, and the ones that turn out to be true will make you famous. The hedunit is not a triumph of forensic analysis. It’s a party trick.

6.

“Here’s where I’m at with this guy,” Douglas said, kicking off the profiling session with which Inside the Mind of BTK begins. It was 1984. The killer was still at large. Douglas, Hazelwood, and Walker and the two detectives from Wichita were all seated around the oak table. Douglas took off his suit jacket and draped it over his chair. “Back when he started in 1974, he was in his mid to late twenties,” Douglas began. “It’s now ten years later, so that would put him in his mid to late thirties.”

It was Walker’s turn: BTK had never engaged in any sexual penetration. That suggested to him someone with an “inadequate, immature sexual history.” He would have a “lone-wolf type of personality. But he’s not alone because he’s shunned by others—it’s because he chooses to be alone…He can function in social settings, but only on the surface. He may have women friends he can talk to, but he’d feel very inadequate with a peer-group female.” Hazelwood was next. BTK would be “heavily into masturbation.” He went on, “Women who have had sex with this guy would describe him as aloof, uninvolved, the type who is more interested in her servicing him than the other way around.”

Douglas followed his lead. “The women he’s been with are either many years younger, very naive, or much older and depend on him as their meal ticket,” he ventured. What’s more, the profilers determined, BTK would drive a “decent” automobile, but it would be “nondescript.”

At this point, the insights began piling up. Douglas said he’d been thinking that BTK was married. But now maybe he was thinking he was divorced. He speculated that BTK was lower middle class, probably living in a rental. Walker felt BTK was in a “lower-paying white-collar job, as opposed to blue-collar.” Hazelwood saw him as “middle class” and “articulate.” The consensus was that his IQ was somewhere between 105 and 145. Douglas wondered whether he was connected with the military. Hazelwood called him a “now” person, who needed “instant gratification.”

Walker said that those who knew him “might say they remember him, but didn’t really know much about him.” Douglas then had a flash—“It was a sense, almost a knowing”—and said, “I wouldn’t be surprised if, in the job he’s in today, that he’s wearing some sort of uniform…This guy isn’t mental. But he is crazy like a fox.”

They had been at it for almost six hours. The best minds in the FBI had given the Wichita detectives a blueprint for their investigation. Look for an American male with a possible connection to the military. His IQ will be above 105. He will like to masturbate and will be aloof and selfish in bed. He will drive a decent car. He will be a “now” person. He won’t be comfortable with women. But he may have women friends. He will be a lone wolf. But he will be able to function in social settings. He won’t be unmemorable. But he will be unknowable. He will be either never married, divorced, or married, and if he was or is married, his wife will be younger or older. He may or may not live in a rental, and might be lower class, upper lower class, lower middle class, or middle class. And he will be crazy like a fox as opposed to being mental. If you’re keeping score, that’s a Jacques Statement, two Barnum Statements, four Rainbow Ruses, a Good Chance Guess, two predictions that aren’t really predictions because they could never be verified—and nothing even close to the salient fact that BTK was a pillar of his community, the president of his church, and the married father of two.

“This thing is solvable,” Douglas told the detectives as he stood up and put on his jacket. “Feel free to pick up the phone and call us if we can be of any further assistance.” You can imagine him taking the time for an encouraging smile and a slap on the back. “You’re gonna nail this guy.”*

November 12, 2007

The Talent Myth

ARE SMART PEOPLE OVERRATED?

1.

At the height of the dot-com boom of the 1990s, several executives at McKinsey & Company, America’s largest and most prestigious management-consulting firm, launched what they called the War for Talent. Thousands of questionnaires were sent to managers across the country. Eighteen companies were singled out for special attention, and the consultants spent up to three days at each firm, interviewing everyone from the CEO down to the human-resources staff. McKinsey wanted to document how the top-performing companies in America differed from other firms in the way they handled matters like hiring and promotion. But, as the consultants sifted through the piles of reports and questionnaires and interview transcripts, they grew convinced that the difference between winners and losers was more profound than they had realized. “We looked at one another and suddenly the lightbulb blinked on,” the three consultants who headed the project—Ed Michaels, Helen Handfield-Jones, and Beth Axelrod—write in their book, also called The War for Talent. The very best companies, they concluded, had leaders who were obsessed with the talent issue. They recruited ceaselessly, finding and hiring as many top performers as possible. They singled out and segregated their stars, rewarding them disproportionately, and pushing them into ever more senior positions. “Bet on the natural athletes, the ones with the strongest intrinsic skills,” the authors approvingly quote one senior General Electric executive as saying. “Don’t be afraid to promote stars without specifically relevant experience, seemingly over their heads.” Success in the modern economy, according to Michaels, Handfield-Jones, and Axelrod, requires “the talent mind-set”: the “deep-seated belief that having better talent at all levels is how you outperform your competitors.”

This “talent mind-set” is the new orthodoxy of American management. It is the intellectual justification for why such a high premium is placed on degrees from first-tier business schools, and why the compensation packages for top executives have become so lavish. In the modern corporation, the system is considered only as strong as its stars, and in the past few years, this message has been preached by consultants and management gurus all over the world. None, however, have spread the word quite so ardently as McKinsey, and, of all its clients, one firm took the talent mind-set closest to heart. It was a company where McKinsey conducted twenty separate projects, where McKinsey’s billings topped $10 million a year, where a McKinsey director regularly attended board meetings, and where the CEO himself was a former McKinsey partner. The company, of course, was Enron.

The Enron scandal is now almost a year old. The reputations of Jeffrey Skilling and Kenneth Lay, the company’s two top executives, have been destroyed. Arthur Andersen, Enron’s auditor, has been all but driven out of business, and now investigators have turned their attention to Enron’s investment bankers. The one Enron partner that has escaped largely unscathed is McKinsey, which is odd, given that it essentially created the blueprint for the Enron culture. Enron was the ultimate “talent” company. When Skilling started the corporate division known as Enron Capital and Trade, in 1990, he “decided to bring in a steady stream of the very best college and MBA graduates he could find to stock the company with talent,” Michaels, Handfield-Jones, and Axelrod tell us. During the nineties, Enron was bringing in 250 newly minted MBAs a year. “We had these things called Super Saturdays,” one former Enron manager recalls. “I’d interview some of these guys who were fresh out of Harvard, and these kids could blow me out of the water. They knew things I’d never heard of.” Once at Enron, the top performers were rewarded inordinately, and promoted without regard for seniority or experience. Enron was a star system. “The only thing that differentiates Enron from our competitors is our people, our talent,” Lay, Enron’s former chairman and CEO, told the McKinsey consultants when they came to the company’s headquarters, in Houston. Or, as another senior Enron executive put it to Richard Foster, a McKinsey partner who celebrated Enron in his 2001 book, Creative Destruction, “We hire very smart people and we pay them more than they think they are worth.”

The management of Enron, in other words, did exactly what the consultants at McKinsey said that companies ought to do in order to succeed in the modern economy. It hired and rewarded the very best and the very brightest—and it is now in bankruptcy. The reasons for its collapse are complex, needless to say. But what if Enron failed not in spite of its talent mind-set but because of it? What if smart people are overrated?

2.

At the heart of the McKinsey vision is a process that the War for Talent advocates refer to as differentiation and affirmation. Employers, they argue, need to sit down once or twice a year and hold a “candid, probing, no-holds-barred debate about each individual,” sorting employees into A, B, and C groups. The A’s must be challenged and disproportionately rewarded. The B’s need to be encouraged and affirmed. The C’s need to shape up or be shipped out. Enron followed this advice almost to the letter, setting up internal Performance Review Committees. The members got together twice a year, and graded each person in their section on ten separate criteria, using a scale of 1 to 5. The process was called rank and yank. Those graded at the top of their unit received bonuses two-thirds higher than those in the next 30 percent; those who ranked at the bottom received no bonuses and no extra stock options—and in some cases were pushed out.

How should that ranking be done? Unfortunately, the McKinsey consultants spend very little time discussing the matter. One possibility is simply to hire and reward the smartest people. But the link between, say, IQ and job performance is distinctly underwhelming. On a scale where 0.1 or below means virtually no correlation and 0.7 or above implies a strong correlation (your height, for example, has a 0.7 correlation with your parents’ height), the correlation between IQ and occupational success is between 0.2 and 0.3. “What IQ doesn’t pick up is effectiveness at common-sense sorts of things, especially working with people,” Richard Wagner, a psychologist at Florida State University, says. “In terms of how we evaluate schooling, everything is about working by yourself. If you work with someone else, it’s called cheating. Once you get out in the real world, everything you do involves working with other people.”

Wagner and Robert Sternberg, a psychologist at Yale University, have developed tests of this practical component, which they call tacit knowledge. Tacit knowledge involves things like knowing how to manage yourself and others and how to navigate complicated social situations. Here is a question from one of their tests:

You have just been promoted to head of an important department in your organization. The previous head has been transferred to an equivalent position in a less important department. Your understanding of the reason for the move is that the performance of the department as a whole has been mediocre. There have not been any glaring deficiencies, just a perception of the department as so-so rather than very good. Your charge is to shape up the department. Results are expected quickly. Rate the quality of the following strategies for succeeding at your new position.

a) Always delegate to the most junior person who can be trusted with the task.

b) Give your superiors frequent progress reports.

c) Announce a major reorganization of the department that includes getting rid of whomever you believe to be “dead wood.”

d) Concentrate more on your people than on the tasks to be done.

e) Make people feel completely responsible for their work.

Wagner finds that how well people do on a test like this predicts how well they will do in the workplace: good managers pick (b) and (e); bad managers tend to pick (c). Yet there’s no clear connection between such tacit knowledge and other forms of knowledge and experience. The process of assessing ability in the workplace is a lot messier than it appears.

An employer really wants to assess not potential but performance. Yet that’s just as tricky. In The War for Talent, the authors talk about how the Royal Air Force used the A, B, and C ranking system for its pilots during the Battle of Britain. But ranking fighter pilots—for whom there are limited and relatively objective performance criteria (enemy kills, for example, and the ability to get their formations safely home)—is a lot easier than assessing how the manager of a new unit is doing at, say, marketing or business development. And whom do you ask to rate the manager’s performance? Studies show that there is very little correlation between how someone’s peers rate him and how his boss rates him. The only rigorous way to assess performance, according to human-resources specialists, is to use criteria that are as specific as possible. Managers are supposed to take detailed notes on their employees throughout the year, in order to remove subjective personal reactions from the process of assessment. You can grade someone’s performance only if you know their performance. And, in the freewheeling culture of Enron, this was all but impossible. People deemed talented were constantly being pushed into new jobs and given new challenges. Annual turnover from promotions was close to 20 percent. Lynda Clemmons, the so-called weather babe who started Enron’s weather derivatives business, jumped, in seven quick years, from trader to associate to manager to director and, finally, to head of her own business unit. How do you evaluate someone’s performance in a system where no one is in a job long enough to allow such evaluation?

The answer is that you end up doing performance evaluations that aren’t based on performance. Among the many glowing books about Enron written before its fall was the bestseller Leading the Revolution, by the management consultant Gary Hamel, which tells the story of Lou Pai, who launched Enron’s power-trading business. Pai’s group began with a disaster: it lost tens of millions of dollars trying to sell electricity to residential consumers in newly deregulated markets. The problem, Hamel explains, is that the markets weren’t truly deregulated: “The states that were opening their markets to competition were still setting rules designed to give their traditional utilities big advantages.” It doesn’t seem to have occurred to anyone that Pai ought to have looked into those rules more carefully before risking millions of dollars. He was promptly given the chance to build the commercial electricity-outsourcing business, where he ran up several more years of heavy losses before cashing out of Enron with $270 million. Because Pai had “talent,” he was given new opportunities, and when he failed at those new opportunities he was given still more opportunities… because he had “talent.” “At Enron, failure—even of the type that ends up on the front page of the Wall Street Journal—doesn’t necessarily sink a career,” Hamel writes, as if that were a good thing. Presumably, companies that want to encourage risk-taking must be willing to tolerate mistakes. Yet if talent is defined as something separate from an employee’s actual performance, what use is it exactly?

3.

What the War for Talent amounts to is an argument for indulging A employees, for fawning over them. “You need to do everything you can to keep them engaged and satisfied—even delighted,” Michaels, Handfield-Jones, and Axelrod write. “Find out what they would most like to be doing, and shape their career and responsibilities in that direction. Solve any issues that might be pushing them out the door, such as a boss that frustrates them or travel demands that burden them.” No company was better at this than Enron. In one oft-told story, Louise Kitchin, a twenty-nine-year-old gas trader in Europe, became convinced that the company ought to develop an online-trading business. She told her boss, and she began working in her spare time on the project, until she had 250 people throughout Enron helping her. After six months, Skilling was finally informed. “I was never asked for any capital,” Skilling said later. “I was never asked for any people. They had already purchased the servers. They had already started ripping apart the building. They had started legal reviews in twenty-two countries by the time I heard about it.” It was, Skilling went on approvingly, “exactly the kind of behavior that will continue to drive this company forward.”

Kitchin’s qualification for running EnronOnline, it should be pointed out, was not that she was good at it. It was that she wanted to do it, and Enron was a place where stars did whatever they wanted. “Fluid movement is absolutely necessary in our company. And the type of people we hire enforces that,” Skilling told the team from McKinsey. “Not only does this system help the excitement level for each manager, it shapes Enron’s business in the direction that its managers find most exciting.” Here is Skilling again: “If lots of [employees] are flocking to a new business unit, that’s a good sign that the opportunity is a good one…If a business unit can’t attract people very easily, that’s a good sign that it’s a business Enron shouldn’t be in.” You might expect a CEO to say that if a business unit can’t attract customers very easily, that’s a good sign it’s a business the company shouldn’t be in. A company’s business is supposed to be shaped in the direction that its managers find most profitable. But at Enron the needs of the customers and the shareholders were secondary to the needs of its stars.

In the early 1990s, the psychologists Robert Hogan, Robert Raskin, and Dan Fazzini wrote a brilliant essay called “The Dark Side of Charisma.” It argued that flawed managers fall into three types. One is the High Likability Floater, who rises effortlessly in an organization because he never takes any difficult decisions or makes any enemies. Another is the Homme de Ressentiment, who seethes below the surface and plots against his enemies. The most interesting of the three is the Narcissist, whose energy and self-confidence and charm lead him inexorably up the corporate ladder. Narcissists are terrible managers. They resist accepting suggestions, thinking it will make them appear weak, and they don’t believe that others have anything useful to tell them. “Narcissists are biased to take more credit for success than is legitimate,” Hogan and his coauthors write, and “biased to avoid acknowledging responsibility for their failures and shortcomings for the same reasons that they claim more success than is their due.” Moreover:

Narcissists typically make judgments with greater confidence than other people… and, because their judgments are rendered with such conviction, other people tend to believe them and the narcissists become disproportionately more influential in group situations. Finally, because of their self-confidence and strong need for recognition, narcissists tend to “self-nominate”; consequently, when a leadership gap appears in a group or organization, the narcissists rush to fill it.

Tyco Corporation and WorldCom were the Greedy Corporations: they were purely interested in short-term financial gain. Enron was the Narcissistic Corporation—a company that took more credit for success than was legitimate, that did not acknowledge responsibility for its failures, that shrewdly sold the rest of us on its genius, and that substituted self-nomination for disciplined management. At one point in Leading the Revolution, Hamel tracks down a senior Enron executive, and what he breathlessly recounts—the braggadocio, the self-satisfaction—could be an epitaph for the talent mind-set:

“You cannot control the atoms within a nuclear fusion reaction,” said Ken Rice when he was head of Enron Capital and Trade Resources (ECT), America’s largest marketer of natural gas and largest buyer and seller of electricity. Adorned in a black T-shirt, blue jeans, and cowboy boots, Rice drew a box on an office whiteboard that pictured his business unit as a nuclear reactor. Little circles in the box represented its “contract originators,” the gunslingers charged with doing deals and creating new businesses. Attached to each circle was an arrow. In Rice’s diagram the arrows were pointing in all different directions. “We allow people to go in whichever direction that they want to go.”

The distinction between the Greedy Corporation and the Narcissistic Corporation matters, because the way we conceive our attainments helps determine how we behave. Carol Dweck, a psychologist at Columbia University, has found that people generally hold one of two fairly firm beliefs about their intelligence: they consider it either a fixed trait or something that is malleable and can be developed over time. Dweck once did a study at the University of Hong Kong, where all classes are conducted in English. She and her colleagues approached a large group of social-sciences students, told them their English-proficiency scores, and asked them if they wanted to take a course to improve their language skills. One would expect all those who scored poorly to sign up for the remedial course. The University of Hong Kong is a demanding institution, and it is hard to do well in the social sciences without strong English skills. Curiously, however, only the ones who believed in malleable intelligence expressed interest in the class. The students who believed that their intelligence was a fixed trait were so concerned about appearing to be deficient that they preferred to stay home. “Students who hold a fixed view of their intelligence care so much about looking smart that they act dumb,” Dweck writes, “for what could be dumber than giving up a chance to learn something that is essential for your own success?”

In a similar experiment, Dweck gave a class of preadolescent students a test filled with challenging problems. After they were finished, one group was praised for its effort and another group was praised for its intelligence. Those praised for their intelligence were reluctant to tackle difficult tasks, and their performance on subsequent tests soon began to suffer. Then Dweck asked the children to write a letter to students at another school, describing their experience in the study. She discovered something remarkable: 40 percent of those students who were praised for their intelligence lied about how they had scored on the test, adjusting their grade upward. They weren’t naturally deceptive people, and they weren’t any less intelligent or self-confident than anyone else. They simply did what people do when they are immersed in an environment that celebrates them solely for their innate “talent.” They begin to define themselves by that description, and when times get tough and that self-image is threatened, they have difficulty with the consequences. They will not take the remedial course. They will not stand up to investors and the public and admit that they were wrong. They’d sooner lie.

4.

The broader failing of McKinsey and its acolytes at Enron is their assumption that an organization’s intelligence is simply a function of the intelligence of its employees. They believe in stars, because they don’t believe in systems. In a way, that’s understandable, because our lives are so obviously enriched by individual brilliance. Groups don’t write great novels, and a committee didn’t come up with the theory of relativity. But companies work by different rules. They don’t just create; they execute and compete and coordinate the efforts of many different people, and the organizations that are most successful at that task are the ones where the system is the star.

There is a wonderful example of this in the story of the so-called Eastern Pearl Harbor of the Second World War. During the first nine months of 1942, the United States Navy suffered a catastrophe. German U-boats, operating just off the Atlantic coast and in the Caribbean, were sinking our merchant ships almost at will. U-boat captains marveled at their good fortune. “Before this sea of light, against this footlight glare of a carefree new world were passing the silhouettes of ships recognizable in every detail and sharp as the outlines in a sales catalogue,” one U-boat commander wrote. “All we had to do was press the button.”

What made this such a puzzle is that, on the other side of the Atlantic, the British had much less trouble defending their ships against U-boat attacks. The British, furthermore, eagerly passed on to the Americans everything they knew about sonar and depth-charge throwers and the construction of destroyers. And still the Germans managed to paralyze America’s coastal zones.

You can imagine what the consultants at McKinsey would have concluded: they would have said that the Navy did not have a talent mind-set, that President Roosevelt needed to recruit and promote top performers into key positions in the Atlantic command. In fact, he had already done that. At the beginning of the war, he had pushed out the solid and unspectacular Admiral Harold R. Stark as Chief of Naval Operations and replaced him with the legendary Ernest Joseph King. “He was a supreme realist with the arrogance of genius,” Ladislas Farago writes in The Tenth Fleet, a history of the Navy’s U-boat battles in the Second World War. “He had unbounded faith in himself, in his vast knowledge of naval matters and in the soundness of his ideas. Unlike Stark, who tolerated incompetence all around him, King had no patience with fools.”

The Navy had plenty of talent at the top, in other words. What it didn’t have was the right kind of organization. As Eliot A. Cohen, a scholar of military strategy at Johns Hopkins, writes in his brilliant book Military Misfortunes:

To wage the antisubmarine war well, analysts had to bring together fragments of information, direction-finding fixes, visual sightings, decrypts, and the “flaming datum” of a U-boat attack—for use by a commander to coordinate the efforts of warships, aircraft, and convoy commanders. Such synthesis had to occur in near “real time”—within hours, even minutes in some cases.

The British excelled at the task because they had a centralized operational system. The controllers moved the British ships around the Atlantic like chess pieces, in order to outsmart U-boat “wolf packs.” By contrast, Admiral King believed strongly in a decentralized management structure: he held that managers should never tell their subordinates “how as well as what to ‘do.’” In today’s jargon, we would say he was a believer in “loose-tight” management, of the kind celebrated by the McKinsey consultants Thomas J. Peters and Robert H. Waterman in their 1982 bestseller, In Search of Excellence. But “loose-tight” doesn’t help you find U-boats. Throughout most of 1942, the Navy kept trying to act smart by relying on technical know-how, and stubbornly refused to take operational lessons from the British. The Navy also lacked the organizational structure necessary to apply the technical knowledge it did have to the field. Only when the Navy set up the Tenth Fleet—a single unit to coordinate all antisubmarine warfare in the Atlantic—did the situation change. In the year and a half before the Tenth Fleet was formed, in May of 1943, the Navy sank thirty-six U-boats. In the six months afterward, it sank seventy-five. “The creation of the Tenth Fleet did not bring more talented individuals into the field of ASW”—antisubmarine warfare—“than had previous organizations,” Cohen writes. “What Tenth Fleet did allow, by virtue of its organization and mandate, was for these individuals to become far more effective than previously.” The talent myth assumes that people make organizations smart. More often than not, it’s the other way around.

5.

There is ample evidence of this principle among America’s most successful companies. Southwest Airlines hires very few MBAs, pays its managers modestly, and gives raises according to seniority, not “rank and yank.” Yet it is by far the most successful of all United States airlines, because it has created a vastly more efficient organization than its competitors have. At Southwest, the time it takes to get a plane that has just landed ready for takeoff—a key index of productivity—is, on average, twenty minutes, and requires a ground crew of four, and two people at the gate. (At United Airlines, by contrast, turnaround time is closer to thirty-five minutes, and requires a ground crew of twelve, and three agents at the gate.)

In the case of the giant retailer Wal-Mart, one of the most critical periods in its history came in 1976, when Sam Walton “unretired,” pushing out his handpicked successor, Ron Mayer. Mayer was just over forty. He was ambitious. He was charismatic. He was, in the words of one Walton biographer, “the boy-genius financial officer.” But Walton was convinced that Mayer was, as people at McKinsey would say, “differentiating and affirming” in the corporate suite, in defiance of Wal-Mart’s inclusive culture. Mayer left, and Wal-Mart survived. After all, Wal-Mart is an organization, not an all-star team. Walton brought in David Glass, late of the Army and Southern Missouri State University, as CEO; the company is now ranked No. 1 on the Fortune 500 list.

Procter & Gamble doesn’t have a star system, either. How could it? Would the top MBA graduates of Harvard and Stanford move to Cincinnati to work on detergent when they could make three times as much reinventing the world in Houston? Procter & Gamble isn’t glamorous. Its CEO is a lifer—a former Navy officer who began his corporate career as an assistant brand manager for Joy dishwashing liquid—and if Procter & Gamble’s best played Enron’s best at Trivial Pursuit, no doubt the team from Houston would win handily. But Procter & Gamble has dominated the consumer-products field for close to a century, because it has a carefully conceived managerial system, and a rigorous marketing methodology that has allowed it to win battles for brands like Crest and Tide decade after decade. In Procter & Gamble’s Navy, Admiral Stark would have stayed. But a cross-divisional management committee would have set the Tenth Fleet in place before the war ever started.

6.

Among the most damning facts about Enron, in the end, was something its managers were proudest of. They had what, in McKinsey terminology, is called an open market for hiring. In the open-market system—McKinsey’s assault on the very idea of a fixed organization—anyone could apply for any job that he or she wanted, and no manager was allowed to hold anyone back. Poaching was encouraged. When an Enron executive named Kevin Hannon started the company’s global broadband unit, he launched what he called Project Quick Hire. A hundred top performers from around the company were invited to the Houston Hyatt to hear Hannon give his pitch. Recruiting booths were set up outside the meeting room. “Hannon had his fifty top performers for the broadband unit by the end of the week,” Michaels, Handfield-Jones, and Axelrod write, “and his peers had fifty holes to fill.” Nobody, not even the consultants who were paid to think about the Enron culture, seemed worried that those fifty holes might disrupt the functioning of the affected departments, that stability in a firm’s existing businesses might be a good thing, that the self-fulfillment of Enron’s star employees might possibly be in conflict with the best interests of the firm as a whole.

These are the sorts of concerns that management consultants ought to raise. But Enron’s management consultant was McKinsey, and McKinsey was as much a prisoner of the talent myth as its clients were. In 1998, Enron hired ten Wharton MBAs; that same year, McKinsey hired forty. In 1999, Enron hired twelve from Wharton; McKinsey hired sixty-one. The consultants at McKinsey were preaching at Enron what they believed about themselves. “When we would hire them, it wouldn’t just be for a week,” one former Enron manager recalls, of the brilliant young men and women from McKinsey who wandered the hallways at the company’s headquarters. “It would be for two to four months. They were always around.” They were there looking for people who had the talent to think outside the box. It never occurred to them that, if everyone had to think outside the box, maybe it was the box that needed fixing.

July 22, 2002

The New-Boy Network

WHAT DO JOB INTERVIEWS REALLY TELL US?

1.

Nolan Myers grew up in Houston, the elder of two boys in a middle-class family. He went to Houston’s High School for the Performing and Visual Arts and then Harvard, where he intended to major in history and science. After discovering the joys of writing code, though, he switched to computer science. “Programming is one of those things you get involved in, and you just can’t stop until you finish,” Myers says. “You get involved in it, and all of a sudden you look at your watch and it’s four in the morning! I love the elegance of it.” Myers is short and slightly stocky and has pale-blue eyes. He smiles easily, and when he speaks he moves his hands and torso for emphasis. He plays in a klezmer band called the Charvard Chai Notes. He talks to his parents a lot. He gets Bs and B-pluses.

In the last stretch of his senior year, Myers spent a lot of time interviewing for jobs with technology companies. He talked to a company named Trilogy, down in Texas, but he didn’t think he would fit in. “One of Trilogy’s subsidiaries put ads out in the paper saying that they were looking for the top tech students, and that they’d give them two hundred thousand dollars and a BMW,” Myers said, shaking his head in disbelief. In another of his interviews, a recruiter asked him to solve a programming problem, and he made a stupid mistake and the recruiter pushed the answer back across the table to him, saying that his “solution” accomplished nothing. As he remembers the moment, Myers blushes. “I was so nervous. I thought, Hmm, that sucks!” The way he says that, though, makes it hard to believe that he really was nervous, or maybe what Nolan Myers calls nervous the rest of us call a tiny flutter in the stomach. Myers doesn’t seem like the sort to get flustered. He’s the kind of person you would call the night before the big test in seventh grade when nothing made sense and you had begun to panic.

I like Nolan Myers. He will, I am convinced, be very good at whatever career he chooses. I say those two things even though I have spent no more than ninety minutes in his presence. We met only once, on a sunny afternoon just before his graduation at the Au Bon Pain in Harvard Square. He was wearing sneakers and khakis and a polo shirt in a dark-green pattern. He had a big backpack, which he plopped on the floor beneath the table. I bought him an orange juice. He fished around in his wallet and came up with a dollar to try to repay me, which I refused. We sat by the window. Previously, we had talked for perhaps three minutes on the phone, setting up the interview. Then I e-mailed him, asking him how I would recognize him at Au Bon Pain. He sent me the following message, with what I’m convinced—again, on the basis of almost no evidence—is typical Myers panache: “22ish, five foot seven, straight brown hair, very good-looking.:).” I have never talked to his father, his mother, or his little brother, or any of his professors. I have never seen him ecstatic or angry or depressed. I know nothing of his personal habits, his tastes, or his quirks. I cannot even tell you why I feel the way I do about him. He’s good-looking and smart and articulate and funny, but not so good-looking and smart and articulate and funny that there is some obvious explanation for the conclusions I’ve drawn about him. I just like him, and I’m impressed by him, and if I were an employer looking for bright young college graduates, I’d hire him in a heartbeat.

I heard about Nolan Myers from Hadi Partovi, an executive with Tellme, a highly touted Silicon Valley startup offering Internet access through the telephone. If you were a computer-science major at MIT, Harvard, Stanford, Caltech, or the University of Waterloo this spring, looking for a job in software, Tellme was probably at the top of your list. Partovi and I talked in the conference room at Tellme’s offices, just off the soaring, open floor where all the firm’s programmers and marketers and executives sit, some of them with bunk beds built over their desks. (Tellme recently moved into an old printing plant—a low-slung office building with a huge warehouse attached—and, in accordance with new-economy logic, promptly turned the old offices into a warehouse and the old warehouse into offices.) Partovi is a handsome man of twenty-seven, with olive skin and short curly black hair, and throughout our entire interview he sat with his chair tilted precariously at a forty-five-degree angle. At the end of a long riff about how hard it is to find high-quality people, he blurted out one name: Nolan Myers. Then, from memory, he rattled off Myers’s telephone number. He very much wanted Myers to come to Tellme.

Partovi had met Myers in January of Myers’s senior year, during a recruiting trip to Harvard. “It was a heinous day,” Partovi remembers. “I started at seven and went until nine. I’d walk one person out and walk the other in.” The first fifteen minutes of every interview he spent talking about Tellme—its strategy, its goals, and its business. Then he gave everyone a short programming puzzle. For the rest of the hour-long meeting, Partovi asked questions. He remembers that Myers did well on the programming test, and after talking to him for thirty to forty minutes he became convinced that Myers had, as he puts it, “the right stuff.” Partovi spent even less time with Myers than I did. He didn’t talk to Myers’s family, or see him ecstatic or angry or depressed, either. He knew that Myers had spent last summer as an intern at Microsoft and was about to graduate from an Ivy League school. But virtually everyone recruited by a place like Tellme has graduated from an elite university, and the Microsoft summer-internship program has more than six hundred people in it. Partovi didn’t even know why he liked Myers so much. He just did. “It was very much a gut call,” he says.

This wasn’t so very different from the experience Nolan Myers had with Steve Ballmer, the CEO of Microsoft. Earlier that year, Myers attended a party for former Microsoft interns called Gradbash. Ballmer gave a speech there, and at the end of his remarks Myers raised his hand. “He was talking a lot about aligning the company in certain directions,” Myers told me, “and I asked him about how that influences his ability to make bets on other directions. Are they still going to make small bets?” Afterward, a Microsoft recruiter came up to Myers and said, “Steve wants your e-mail address.” Myers gave it to him, and soon he and Ballmer were e-mailing. Ballmer, it seems, badly wanted Myers to come to Microsoft. “He did research on me,” Myers says. “He knew which group I was interviewing with, and knew a lot about me personally. He sent me an e-mail saying that he’d love to have me come to Microsoft, and if I had any questions I should contact him. So I sent him a response, saying thank you. After I visited Tellme, I sent him an e-mail saying I was interested in Tellme, here were the reasons, that I wasn’t sure yet, and if he had anything to say I said I’d love to talk to him. I gave him my number. So he called, and after playing phone tag we talked—about career trajectory, how Microsoft would influence my career, what he thought of Tellme. I was extremely impressed with him, and he seemed very genuinely interested in me.”

What convinced Ballmer he wanted Myers? A glimpse! He caught a little slice of Nolan Myers in action and—just like that—the CEO of a $400 billion company was calling a college senior in his dorm room. Ballmer somehow knew he liked Myers, the same way Hadi Partovi knew, and the same way I knew after our little chat at Au Bon Pain. But what did we know? What could we know? By any reasonable measure, surely none of us knew Nolan Myers at all.

It is a truism of the new economy that the ultimate success of any enterprise lies with the quality of the people it hires. At many technology companies, employees are asked to all but live at the office, in conditions of intimacy that would have been unthinkable a generation ago. The artifacts of the prototypical Silicon Valley office—the videogames, the espresso bar, the bunk beds, the basketball hoops—are the elements of the rec room, not the workplace. And in the rec room you want to play only with your friends. But how do you find out who your friends are? Today, recruiters canvass the country for résumés. They analyze employment histories and their competitors’ staff listings. They call references and then do what I did with Nolan Myers: sit down with a perfect stranger for an hour and a half and attempt to draw conclusions about that stranger’s intelligence and personality. The job interview has become one of the central conventions of the modern economy. But what, exactly, can you know about a stranger after sitting down and talking with him for an hour?

2.

Some years ago, an experimental psychologist at Harvard University, Nalini Ambady, together with Robert Rosenthal, set out to examine the nonverbal aspects of good teaching. As the basis of her research, she used videotapes of teaching fellows that had been made during a training program at Harvard. Her plan was to have outside observers look at the tapes with the sound off and rate the effectiveness of the teachers by their expressions and physical cues. Ambady wanted to have at least a minute of film to work with. When she looked at the tapes, though, there was really only about ten seconds when the teachers were shown apart from the students. “I didn’t want students in the frame, because obviously it would bias the ratings,” Ambady says. “So I went to my adviser, and I said, ‘This isn’t going to work.’”

But it did. The observers, presented with a ten-second silent video clip, had no difficulty rating the teachers on a fifteen-item checklist of personality traits. In fact, when Ambady cut the clips back to five seconds, the ratings were the same. They were the same even when she showed her raters just two seconds of videotape. That sounds unbelievable unless you actually watch Ambady’s teacher clips, as I did, and realize that the eight seconds that distinguish the longest clips from the shortest are superfluous: anything beyond the first flash of insight is unnecessary. When we make a snap judgment, it is made in a snap. It’s also, very clearly, a judgment: we get a feeling that we have no difficulty articulating.

Ambady’s next step led to an even more remarkable conclusion. She compared those snap judgments of teacher effectiveness with evaluations made, after a full semester of classes, by students of the same teachers. The correlation between the two, she found, was astoundingly high. A person watching a two-second silent video clip of a teacher he has never met will reach conclusions about how good that teacher is that are very similar to those of a student who sits in the teacher’s class for an entire semester.

Recently, a comparable experiment was conducted by Frank Bernieri, a psychologist at the University of Toledo. Bernieri, working with one of his graduate students, Neha Gada-Jain, selected two people to act as interviewers, and trained them for six weeks in the proper procedures and techniques of giving an effective job interview. The two then interviewed ninety-eight volunteers of various ages and backgrounds. The interviews lasted between fifteen and twenty minutes, and afterward each interviewer filled out a six-page, five-part evaluation of the person he’d just talked to. Originally, the intention of the study was to find out whether applicants who had been coached in certain nonverbal behaviors designed to ingratiate themselves with their interviewers—like mimicking the interviewers’ physical gestures or posture—would get better ratings than applicants who behaved normally. As it turns out, they didn’t. But then another of Bernieri’s students, an undergraduate named Tricia Prickett, decided that she wanted to use the interview videotapes and the evaluations that had been collected to test out the adage that the handshake is everything.

“She took fifteen seconds of videotape showing the applicant as he or she knocks on the door, comes in, shakes the hand of the interviewer, sits down, and the interviewer welcomes the person,” Bernieri explained. Then, like Ambady, Prickett got a series of strangers to rate the applicants based on the handshake clip, using the same criteria that the interviewers had used. Once more, against all expectations, the ratings were very similar to those of the interviewers. “On nine out of the eleven traits the applicants were being judged on, the observers significantly predicted the outcome of the interview,” Bernieri says. “The strength of the correlations was extraordinary.”

This research takes Ambady’s conclusions one step further. In the Toledo experiment, the interviewers were trained in the art of interviewing. They weren’t dashing off a teacher evaluation on their way out the door. They were filling out a formal, detailed questionnaire, of the sort designed to give the most thorough and unbiased account of an interview. And still their ratings weren’t all that different from those of people off the street who saw just the greeting.

This is why Hadi Partovi, Steve Ballmer, and I all agreed on Nolan Myers. Apparently, human beings don’t need to know someone in order to believe that they know someone. Nor does it make that much difference, apparently, that Partovi reached his conclusion after putting Myers through the wringer for an hour, I reached mine after ninety minutes of amiable conversation at Au Bon Pain, and Ballmer reached his after watching and listening as Myers asked a question.

Bernieri and Ambady believe that the power of first impressions suggests that human beings have a particular kind of prerational ability for making searching judgments about others. In Ambady’s teacher experiments, when she asked her observers to perform a potentially distracting cognitive task—like memorizing a set of numbers—while watching the tapes, their judgments of teacher effectiveness were unchanged. But when she instructed her observers to think hard about their ratings before they made them, their accuracy suffered substantially. Thinking only gets in the way. “The brain structures that are involved here are very primitive,” Ambady speculates. “All of these affective reactions are probably governed by the lower brain structures.” What we are picking up in that first instant would seem to be something quite basic about a person’s character, because what we conclude after two seconds is pretty much the same as what we conclude after twenty minutes or, indeed, an entire semester. “Maybe you can tell immediately whether someone is extroverted, or gauge the person’s ability to communicate,” Bernieri says. “Maybe these clues or cues are immediately accessible and apparent.” Bernieri and Ambady are talking about the existence of a powerful form of human intuition. In a way, that’s comforting, because it suggests that we can meet a perfect stranger and immediately pick up on something important about him. It means that I shouldn’t be concerned that I can’t explain why I like Nolan Myers, because, if such judgments are made without thinking, then surely they defy explanation.

But there’s a troubling suggestion here as well. I believe that Nolan Myers is an accomplished and likable person. But I have no idea from our brief encounter how honest he is, or whether he is self-centered, or whether he works best by himself or in a group, or any number of other fundamental traits. That people who simply see the handshake arrive at the same conclusions as people who conduct a full interview also implies, perhaps, that those initial impressions matter too much—that they color all the other impressions that we gather over time.

For example, I asked Myers if he felt nervous about the prospect of leaving school for the workplace, which seemed like a reasonable question, since I remember how anxious I was before my first job. Would the hours scare him? Oh no, he replied, he was already working between eighty and a hundred hours a week at school. “Are there things that you think you aren’t good at that make you worry?” I continued.

His reply was sharp: “Are there things that I’m not good at, or things that I can’t learn? I think that’s the real question. There are a lot of things I don’t know anything about, but I feel comfortable that given the right environment and the right encouragement I can do well at.” In my notes, next to that reply, I wrote “Great answer!” and I can remember at the time feeling the little thrill you experience as an interviewer when someone’s behavior conforms with your expectations. Because I had decided, right off, that I liked him, what I heard in his answer was toughness and confidence. Had I decided early on that I didn’t like Nolan Myers, I would have heard in that reply arrogance and bluster. The first impression becomes a self-fulfilling prophecy: we hear what we expect to hear. The interview is hopelessly biased in favor of the nice.

3.

When Ballmer and Partovi and I met Nolan Myers, we made a prediction. We looked at the way he behaved in our presence—at the way he talked and acted and seemed to think—and drew conclusions about how he would behave in other situations. I had decided, remember, that Myers was the kind of person you called the night before the big test in seventh grade. Was I right to make that kind of generalization?

This is a question that social psychologists have looked at closely. In the late 1920s, in a famous study, the psychologist Theodore Newcomb analyzed extroversion among adolescent boys at a summer camp. He found that how talkative a boy was in one setting—say, at lunch—was highly predictive of how talkative that boy would be in the same setting in the future. A boy who was curious at lunch on Monday was likely to be curious at lunch on Tuesday. But his behavior in one setting told you almost nothing about how he would behave in a different setting: from how someone behaved at lunch, you couldn’t predict how he would behave during, say, afternoon playtime. In a more recent study, of conscientiousness among students at Carleton College, the researchers Walter Mischel, Neil Lutsky, and Philip K. Peake showed that how neat a student’s assignments were or how punctual he was told you almost nothing about how often he attended class or how neat his room or his personal appearance was. How we behave at any one time, evidently, has less to do with some immutable inner compass than with the particulars of our situation.

This conclusion, obviously, is at odds with our intuition. Most of the time, we assume that people display the same character traits in different situations. We habitually underestimate the large role that context plays in people’s behavior. In the Newcomb summer-camp experiment, for example, the results showing how little consistency there was from one setting to another in talkativeness, curiosity, and gregariousness were tabulated from observations made and recorded by camp counselors on the spot. But when, at the end of the summer, those same counselors were asked to give their final impressions of the kids, they remembered the children’s behavior as being highly consistent.

“The basis of the illusion is that we are somehow confident that we are getting what is there, that we are able to read off a person’s disposition,” Richard Nisbett, a psychologist at the University of Michigan, says. “When you have an interview with someone and have an hour with them, you don’t conceptualize that as taking a sample of a person’s behavior, let alone a possibly biased sample, which is what it is. What you think is that you are seeing a hologram, a small and fuzzy image but still the whole person.”

Then Nisbett mentioned his frequent collaborator, Lee Ross, who teaches psychology at Stanford. “There was one term when he was teaching statistics and one term when he was teaching a course with a lot of humanistic psychology. He gets his teacher evaluations. The first referred to him as cold, rigid, remote, finicky, and uptight. And the second described this wonderful warmhearted guy who was so deeply concerned with questions of community and getting students to grow. It was Jekyll and Hyde. In both cases, the students thought they were seeing the real Lee Ross.”

Psychologists call this tendency—to fixate on supposedly stable character traits and overlook the influence of context—the Fundamental Attribution Error, and if you combine this error with what we know about snap judgments, the interview becomes an even more problematic encounter. Not only had I let my first impressions color the information I gathered about Myers, but I had also assumed that the way he behaved with me in an interview setting was indicative of the way he would always behave. It isn’t that the interview is useless; what I learned about Myers—that he and I get along well—is something I could never have gotten from a résumé or by talking to his references. It’s just that our conversation turns out to have been less useful, and potentially more misleading, than I had supposed. That most basic of human rituals—the conversation with a stranger—turns out to be a minefield.

4.

Not long after I met with Nolan Myers, I talked with a human-resources consultant from Pasadena named Justin Menkes. Menkes’s job is to figure out how to extract meaning from face-to-face encounters, and with that in mind he agreed to spend an hour interviewing me the way he thinks interviewing ought to be done. It felt, going in, not unlike a visit to a shrink, except that instead of having months, if not years, to work things out, Menkes was set on stripping away my secrets in one session. Consider, he told me, a commonly asked question like “Describe a few situations in which your work was criticized. How did you handle the criticism?” The problem, Menkes said, is that it’s much too obvious what the interviewee is supposed to say. “There was a situation where I was working on a project, and I didn’t do as well as I could have,” he said, adopting a mock-sincere singsong. “My boss gave me some constructive criticism. And I redid the project. It hurt. Yet we worked it out.” The same is true of the question “What would your friends say about you?”—to which the correct answer (preferably preceded by a pause, as if to suggest that it had never dawned on you that someone would ask such a question) is “My guess is that they would call me a people person—either that or a hard worker.”

Myers and I had talked about obvious questions, too. “What is your greatest weakness?” I asked him. He answered, “I tried to work on a project my freshman year, a children’s festival. I was trying to start a festival as a benefit here in Boston. And I had a number of guys working with me. I started getting concerned with the scope of the project we were working on—how much responsibility we had, getting things done. I really put the brakes on, but in retrospect I really think we could have done it and done a great job.”

Then Myers grinned and said, as an aside, “Do I truly think that is a fault? Honestly, no.” And, of course, he’s right. All I’d really asked him was whether he could describe a personal strength as if it were a weakness, and in answering as he did, he had merely demonstrated his knowledge of the unwritten rules of the interview.

But, Menkes said, what if those questions were rephrased so that the answers weren’t obvious? For example: “At your weekly team meetings, your boss unexpectedly begins aggressively critiquing your performance on a current project. What do you do?”

I felt a twinge of anxiety. What would I do? I remembered a terrible boss I’d had years ago. “I’d probably be upset,” I said. “But I doubt I’d say anything. I’d probably just walk away.” Menkes gave no indication whether he was concerned or pleased by that answer. He simply pointed out that another person might well have said something like “I’d go and see my boss later in private, and confront him about why he embarrassed me in front of my team.” I was saying that I would probably handle criticism—even inappropriate criticism—from a superior with stoicism; in the second case, the applicant was saying he or she would adopt a more confrontational style. Or, at least, we were each telling the interviewer how we believed criticism from a superior ought to be handled—and to Menkes these are revealing and pertinent pieces of information.

Menkes moved on to another area—handling stress. A typical question in this area is something like “Tell me about a time when you had to do several things at once. How did you handle the situation? How did you decide what to do first?” Menkes says this is also too easy. “I just had to be very organized,” he began again in his mock-sincere singsong. “I had to multitask. I had to prioritize and delegate appropriately. I checked in frequently with my boss.” Here’s how Menkes rephrased it: “You’re in a situation where you have two very important responsibilities that both have a deadline that is impossible to meet. You cannot accomplish both. How do you handle that situation?”

“Well,” I said, “I would look at the two and decide what I was best at, and then go to my boss and say, ‘It’s better that I do one well than both poorly,’ and we’d figure out who else could do the other task.”

Menkes immediately seized on a telling detail in my answer. I was interested in what job I would do best. But isn’t the key issue what job the company most needed to have done? With that comment, I had revealed something valuable: that in a time of work-related crisis I start from a self-centered consideration. “Perhaps you are a bit of a solo practitioner,” Menkes said diplomatically. “That’s an essential bit of information.”

Menkes deliberately wasn’t drawing any broad conclusions. If we are not people who are shy or talkative or outspoken but people who are shy in some contexts, talkative in other situations, and outspoken in still other areas, then what it means to know someone is to catalog and appreciate all those variations. Menkes was trying to begin that process of cataloging. This interviewing technique is known as structured interviewing, and in studies by industrial psychologists it has been shown to be the only kind of interviewing that has any success at all in predicting performance in the workplace. In the structured interviews, the format is fairly rigid. Each applicant is treated in precisely the same manner. The questions are scripted. The interviewers are carefully trained, and each applicant is rated on a series of predetermined scales.

What is interesting about the structured interview is how narrow its objectives are. When I interviewed Nolan Myers I was groping for some kind of global sense of who he was; Menkes seemed entirely uninterested in arriving at that same general sense of me—he seemed to realize how foolish that expectation was for an hour-long interview. The structured interview works precisely because it isn’t really an interview; it isn’t about getting to know someone, in a traditional sense. It’s as much concerned with rejecting information as it is with collecting it.

Not surprisingly, interview specialists have found it extraordinarily difficult to persuade most employers to adopt the structured interview. It just doesn’t feel right. For most of us, hiring someone is essentially a romantic process, in which the job interview functions as a desexualized version of a date. We are looking for someone with whom we have a certain chemistry, even if the coupling that results ends in tears and the pursuer and the pursued turn out to have nothing in common. We want the unlimited promise of a love affair. The structured interview, by contrast, seems to offer only the dry logic and practicality of an arranged marriage.

5.

Nolan Myers agonized over which job to take. He spent half an hour on the phone with Steve Ballmer, and Ballmer was very persuasive. “He gave me very, very good advice,” Myers says of his conversations with the Microsoft CEO. “He felt that I should go to the place that excited me the most and that I thought would be best for my career. He offered to be my mentor.” Myers says he talked to his parents every day about what to do. In February, he flew out to California and spent a Saturday going from one Tellme executive to another, asking and answering questions. “Basically, I had three things I was looking for. One was long-term goals for the company. Where did they see themselves in five years? Second, what position would I be playing in the company?” He stopped and burst out laughing. “And I forget what the third one is.” In March, Myers committed to Tellme.

Will Nolan Myers succeed at Tellme? I think so, although I honestly have no idea. It’s a harder question to answer now than it would have been thirty or forty years ago. If this were 1965, Nolan Myers would have gone to work at IBM and worn a blue suit and sat in a small office and kept his head down, and the particulars of his personality would not have mattered so much. It was not so important that IBM understood who you were before it hired you, because you understood what IBM was. If you walked through the door at Armonk or at a branch office in Illinois, you knew what you had to be and how you were supposed to act. But to walk through the soaring, open offices of Tellme, with the bunk beds over the desks, is to be struck by how much more demanding the culture of Silicon Valley is. Nolan Myers will not be provided with a social script, that blue suit, or an organization chart. Tellme, like any technology startup these days, wants its employees to be part of a fluid team, to be flexible and innovative, to work with shifting groups in the absence of hierarchy and bureaucracy, and in that environment, where the workplace doubles as the rec room, the particulars of your personality matter a great deal.

This is part of the new economy’s appeal, because Tellme’s soaring warehouse is a more productive and enjoyable place to work than the little office boxes of the old IBM. But the danger here is that we will be led astray in judging these newly important particulars of character. If we let personability—some indefinable, prerational intuition, magnified by the Fundamental Attribution Error—bias the hiring process today, then all we will have done is replace the old-boy network, where you hired your nephew, with the new-boy network, where you hire whoever impressed you most when you shook his hand. Social progress, unless we’re careful, can merely be the means by which we replace the obviously arbitrary with the not so obviously arbitrary.

Myers has spent much of the past year helping to teach Introduction to Computer Science. He realized, he says, that one of the reasons that students were taking the course was that they wanted to get jobs in the software industry. “I decided that, having gone through all this interviewing, I had developed some expertise, and I would like to share that. There is a real skill and art in presenting yourself to potential employers. And so what we did in this class was talk about the kinds of things that employers are looking for—what are they looking for in terms of personality. One of the most important things is that you have to come across as being confident in what you are doing and in who you are. How do you do that? Speak clearly and smile.” As he said that, Nolan Myers smiled. “For a lot of people, that’s a very hard skill to learn. But for some reason I seem to understand it intuitively.”

May 29, 2000

Troublemakers

WHAT PIT BULLS CAN TEACH US ABOUT CRIME

1.

One sunny winter afternoon, Guy Clairoux picked up his two-and-a-half-year-old son, Jayden, from day care and walked him back to their house in the west end of Ottawa, Ontario. They were almost home. Jayden was straggling behind, and, as his father’s back was turned, a pit bull jumped over a backyard fence and lunged at Jayden. “The dog had his head in its mouth and started to do this shake,” Clairoux’s wife, JoAnn Hartley, said later. As she watched in horror, two more pit bulls jumped over the fence, joining in the assault. She and Clairoux came running, and he punched the first of the dogs in the head, until it dropped Jayden, and then he threw the boy toward his mother. Hartley fell on her son, protecting him with her body. “JoAnn!” Clairoux cried out, as all three dogs descended on his wife. “Cover your neck, cover your neck.” A neighbor, sitting by her window, screamed for help. Her partner and a friend, Mario Gauthier, ran outside. A neighborhood boy grabbed his hockey stick and threw it to Gauthier. He began hitting one of the dogs over the head, until the stick broke. “They wouldn’t stop,” Gauthier said. “As soon as you’d stop, they’d attack again. I’ve never seen a dog go so crazy. They were like Tasmanian devils.” The police came. The dogs were pulled away, and the Clairouxes and one of the rescuers were taken to the hospital. Five days later, the Ontario legislature banned the ownership of pit bulls. “Just as we wouldn’t let a great white shark in a swimming pool,” the province’s attorney general, Michael Bryant, had said, “maybe we shouldn’t have these animals on the civilized streets.”

Pit bulls, descendants of the bulldogs used in the nineteenth century for bull baiting and dogfighting, have been bred for “gameness,” and thus a lowered inhibition to aggression. Most dogs fight as a last resort, when staring and growling fail. A pit bull is willing to fight with little or no provocation. Pit bulls seem to have a high tolerance for pain, making it possible for them to fight to the point of exhaustion. Whereas guard dogs like German shepherds usually attempt to restrain those they perceive to be threats by biting and holding, pit bulls try to inflict the maximum amount of damage on an opponent. They bite, hold, shake, and tear. They don’t growl or assume an aggressive facial expression as warning. They just attack. “They are often insensitive to behaviors that usually stop aggression,” one scientific review of the breed states. “For example, dogs not bred for fighting usually display defeat in combat by rolling over and exposing a light underside. On several occasions, pit bulls have been reported to disembowel dogs offering this signal of submission.” In epidemiological studies of dog bites, the pit bull is overrepresented among dogs known to have seriously injured or killed human beings, and as a result, pit bulls have been banned or restricted in several Western European countries, China, and numerous cities and municipalities across North America. Pit bulls are dangerous.

Of course, not all pit bulls are dangerous. Most don’t bite anyone. Meanwhile, Dobermans and Great Danes and German shepherds and Rottweilers are frequent biters as well, and the dog that recently mauled a Frenchwoman so badly that she was given the world’s first face transplant was, of all things, a Labrador retriever. When we say that pit bulls are dangerous, we are making a generalization, just as insurance companies use generalizations when they charge young men more for car insurance than the rest of us (even though many young men are perfectly good drivers), and doctors use generalizations when they tell overweight middle-aged men to get their cholesterol checked (even though many overweight middle-aged men won’t experience heart trouble). Because we don’t know which dog will bite someone or who will have a heart attack or which drivers will get in an accident, we can make predictions only by generalizing. As the legal scholar Frederick Schauer has observed, “painting with a broad brush” is “an often inevitable and frequently desirable dimension of our decision-making lives.”

Another word for generalization, though, is stereotype, and stereotypes are usually not considered desirable dimensions of our decision-making lives. The process of moving from the specific to the general is both necessary and perilous. A doctor could, with some statistical support, generalize about men of a certain age and weight. But what if generalizing from other traits—such as high blood pressure, family history, and smoking—saved more lives? Behind each generalization is a choice of what factors to leave in and what factors to leave out, and those choices can prove surprisingly complicated. After the attack on Jayden Clairoux, the Ontario government chose to make a generalization about pit bulls. But it could also have chosen to generalize about powerful dogs, or about the kinds of people who own powerful dogs, or about small children, or about backyard fences—or, indeed, about any number of other things to do with dogs and people and places. How do we know when we’ve made the right generalization?

2.

In July of 2005, following a series of bombings in subways and on buses in London, the New York City Police Department announced that it would send officers into the subways to conduct random searches of passengers’ bags. On the face of it, doing random searches in the hunt for terrorists—as opposed to being guided by generalizations—seems like a silly idea. As a columnist in New York magazine wrote at the time, “Not just ‘most’ but nearly every jihadi who has attacked a Western European or American target is a young Arab or Pakistani man. In other words, you can predict with a fair degree of certainty what an Al Qaeda terrorist looks like. Just as we have always known what Mafiosi look like—even as we understand that only an infinitesimal fraction of Italian-Americans are members of the mob.”

But wait: do we really know what mafiosi look like? In The Godfather, where most of us get our knowledge of the Mafia, the male members of the Corleone family were played by Marlon Brando, who was of Irish and French ancestry, James Caan, who is Jewish, and two Italian-Americans, Al Pacino and John Cazale. To go by The Godfather, mafiosi look like white men of European descent, which, as generalizations go, isn’t terribly helpful. Figuring out what an Islamic terrorist looks like isn’t any easier. Muslims are not like the Amish: they don’t come dressed in identifiable costumes. And they don’t look like basketball players; they don’t come in predictable shapes and sizes. Islam is a religion that spans the globe.

“We have a policy against racial profiling,” Raymond Kelly, New York City’s police commissioner, told me. “I put it in here in March of the first year I was here. It’s the wrong thing to do, and it’s also ineffective. If you look at the London bombings, you have three British citizens of Pakistani descent. You have Germaine Lindsay, who is Jamaican. You have the next crew, on July 21, who are East African. You have a Chechen woman in Moscow in early 2004 who blows herself up in the subway station. So whom do you profile? Look at New York City. Forty percent of New Yorkers are born outside the country. Look at the diversity here. Who am I supposed to profile?”

Kelly was pointing out what might be called profiling’s “category problem.” Generalizations involve matching a category of people to a behavior or trait—overweight middle-aged men to heart-attack risk, young men to bad driving. But, for that process to work, you have to be able both to define and to identify the category you are generalizing about. “You think that terrorists aren’t aware of how easy it is to be characterized by ethnicity?” Kelly went on. “Look at the 9/11 hijackers. They came here. They shaved. They went to topless bars. They wanted to blend in. They wanted to look like they were part of the American dream. These are not dumb people. Could a terrorist dress up as a Hasidic Jew and walk into the subway, and not be profiled? Yes. I think profiling is just nuts.”

3.

Pit bull bans involve a category problem, too, because pit bulls, as it happens, aren’t a single breed. The name refers to dogs belonging to a number of related breeds, such as the American Staffordshire terrier, the Staffordshire bull terrier, and the American pit bull terrier—all of which share a square and muscular body, a short snout, and a sleek, short-haired coat. Thus the Ontario ban prohibits not only these three breeds but any “dog that has an appearance and physical characteristics that are substantially similar” to theirs; the term of art is “pit bull-type” dogs. But what does that mean? Is a cross between an American pit bull terrier and a golden retriever a pit bull-type dog or a golden retriever-type dog? If thinking about muscular terriers as pit bulls is a generalization, then thinking about dangerous dogs as anything substantially similar to a pit bull is a generalization about a generalization. “The way a lot of these laws are written, pit bulls are whatever they say they are,” Lora Brashears, a kennel manager in Pennsylvania, says. “And for most people it just means big, nasty, scary dog that bites.”

The goal of pit bull bans, obviously, isn’t to prohibit dogs that look like pit bulls. The pit bull appearance is a proxy for the pit bull temperament—for some trait that these dogs share. But “pit bull-ness” turns out to be elusive as well. The supposedly troublesome characteristics of the pit bull type—its gameness, its determination, its insensitivity to pain—are chiefly directed toward other dogs. Pit bulls were not bred to fight humans. On the contrary: a dog that went after spectators, or its handler, or the trainer, or any of the other people involved in making a dogfighting dog a good dogfighter was usually put down. (The rule in the pit bull world was “Man-eaters die.”)

A Georgia-based group called the American Temperament Test Society has put twenty-five thousand dogs through a ten-part standardized drill designed to assess a dog’s stability, shyness, aggressiveness, and friendliness in the company of people. A handler takes a dog on a six-foot lead and judges its reaction to stimuli such as gunshots, an umbrella opening, and a weirdly dressed stranger approaching in a threatening way. Eighty-four percent of the pit bulls that have been given the test have passed, which ranks pit bulls ahead of beagles, Airedales, bearded collies, and all but one variety of dachshund. “We have tested somewhere around a thousand pit bull-type dogs,” Carl Herkstroeter, the president of the ATTS, says. “I’ve tested half of them. And of the number I’ve tested I have disqualified one pit bull because of aggressive tendencies. They have done extremely well. They have a good temperament. They are very good with children.” It can even be argued that the same traits that make the pit bull so aggressive toward other dogs are what make it so nice to humans. “There are a lot of pit bulls these days who are licensed therapy dogs,” the writer Vicki Hearne points out. “Their stability and resoluteness make them excellent for work with people who might not like a more bouncy, flibbertigibbet sort of dog. When pit bulls set out to provide comfort, they are as resolute as they are when they fight, but what they are resolute about is being gentle. And, because they are fearless, they can be gentle with anybody.”

Then which are the pit bulls that get into trouble? “The ones that the legislation is geared toward have aggressive tendencies that are either bred in by the breeder, trained in by the trainer, or reinforced in by the owner,” Herkstroeter says. A mean pit bull is a dog that has been turned mean, by selective breeding, by being cross-bred with bigger, human-aggressive breeds like German shepherds or Rottweilers, or by being conditioned in such a way that it begins to express hostility to human beings. A pit bull is dangerous to people, then, not to the extent that it expresses its essential pit bull-ness but to the extent that it deviates from it. A pit-bull ban is a generalization about a generalization about a trait that is not, in fact, general. That’s a category problem.

4.

One of the puzzling things about New York City is that, after the enormous and well-publicized reductions in crime in the mid-1990s, the crime rate has continued to fall. From 2004 to 2006, for instance, murder in New York declined by almost 10 percent, rape by 12 percent, and burglary by more than 18 percent. To pick another random year, in 2005 auto theft went down 11.8 percent. On a list of two hundred and forty cities in the United States with a population of a hundred thousand or more, New York City ranks two hundred and twenty-second in crime, down near the bottom with Fontana, California, and Port St. Lucie, Florida. In the 1990s, the crime decrease was attributed to big obvious changes in city life and government—the decline of the drug trade, the gentrification of Brooklyn, the successful implementation of broken windows policing. But all those big changes happened a decade ago. Why is crime still falling?

The explanation may have to do with a shift in police tactics. The NYPD has a computerized map showing, in real time, precisely where serious crimes are being reported, and at any moment the map typically shows a few dozen constantly shifting high-crime hot spots, some as small as two or three blocks square. What the NYPD has done, under Commissioner Kelly, is to use the map to establish impact zones, and to direct newly graduated officers—who used to be distributed proportionally to precincts across the city—to these zones, in some cases doubling the number of officers in the immediate neighborhood. “We took two-thirds of our graduating class and linked them with experienced officers, and focused on those areas,” Kelly said. “Well, what has happened is that over time we have averaged about a thirty-five-percent crime reduction in impact zones.”

For years, experts have maintained that the incidence of violent crime is inelastic relative to police presence—that people commit serious crimes because of poverty and psychopathology and cultural dysfunction, along with spontaneous motives and opportunities. The presence of a few extra officers down the block, it was thought, wouldn’t make much difference. But the NYPD experience suggests otherwise. More police means that some crimes are prevented, others are more easily solved, and still others are displaced—pushed out of the troubled neighborhood—which Kelly says is a good thing, because it disrupts the patterns and practices and social networks that serve as the basis for lawbreaking. In other words, the relation between New York City (a category) and criminality (a trait) is unstable, and this kind of instability is another way in which our generalizations can be derailed.

Why, for instance, is it a useful rule of thumb that Kenyans are good distance runners? It’s not just that it’s statistically supportable today. It’s that it has been true for almost half a century, and that in Kenya the tradition of distance running is sufficiently rooted that something cataclysmic would have to happen to dislodge it. By contrast, the generalization that New York City is a crime-ridden place was once true and now, manifestly, isn’t. People who moved to sunny retirement communities like Port St. Lucie because they thought they were much safer there than in New York are suddenly in the position of having made the wrong bet.

The instability issue is a problem for profiling in law enforcement as well. The law professor David Cole once tallied up some of the traits that Drug Enforcement Administration agents have used over the years in making generalizations about suspected smugglers. Here is a sample:

Arrived late at night; arrived early in the morning; arrived in afternoon; one of the first to deplane; one of the last to deplane; deplaned in the middle; purchased ticket at the airport; made reservation on short notice; bought coach ticket; bought first-class ticket; used one-way ticket; used round-trip ticket; paid for ticket with cash; paid for ticket with small denomination currency; paid for ticket with large denomination currency; made local telephone calls after deplaning; made long-distance telephone call after deplaning; pretended to make telephone call; traveled from New York to Los Angeles; traveled to Houston; carried no luggage; carried brand-new luggage; carried a small bag; carried a medium-sized bag; carried two bulky garment bags; carried two heavy suitcases; carried four pieces of luggage; overly protective of luggage; disassociated self from luggage; traveled alone; traveled with a companion; acted too nervous; acted too calm; made eye contact with officer; avoided making eye contact with officer; wore expensive clothing and jewelry; dressed casually; went to restroom after deplaning; walked rapidly through airport; walked slowly through airport; walked aimlessly through airport; left airport by taxi; left airport by limousine; left airport by private car; left airport by hotel courtesy van.

Some of these reasons for suspicion are plainly absurd, suggesting that there’s no particular rationale to the generalizations used by DEA agents in stopping suspected drug smugglers. A way of making sense of the list, though, is to think of it as a catalog of unstable traits. Smugglers may once have tended to buy one-way tickets in cash and carry two bulky suitcases. But they don’t have to. They can easily switch to round-trip tickets bought with a credit card, or a single carry-on bag, without losing their capacity to smuggle. There’s a second kind of instability here as well. Maybe the reason some of them switched from one-way tickets and two bulky suitcases was that law enforcement got wise to those habits, so the smugglers did the equivalent of what the jihadis seemed to have done in London when they switched to East Africans because the scrutiny of young Arab and Pakistani men grew too intense. It doesn’t work to generalize about a relationship between a category and a trait when that relationship isn’t stable—or when the act of generalizing may itself change the basis of the generalization.

Before Kelly became the New York City police commissioner, he served as the head of the US Customs Service, and while he was there, he overhauled the criteria that border-control officers use to identify and search suspected smugglers. There had been a list of forty-three suspicious traits. He replaced it with a list of six broad criteria. Is there something suspicious about their physical appearance? Are they nervous? Is there specific intelligence targeting this person? Does the drug-sniffing dog raise an alarm? Is there something amiss in their paperwork or explanations? Has contraband been found that implicates this person?

You’ll find nothing here about race or gender or ethnicity, and nothing here about expensive jewelry or deplaning at the middle or the end, or walking briskly or walking aimlessly. Kelly removed all the unstable generalizations, forcing customs officers to make generalizations about things that don’t change from one day or one month to the next. Some percentage of smugglers will always be nervous, will always get their story wrong, and will always be caught by the dogs. That’s why those kinds of inferences are more reliable than the ones based on whether smugglers are white or black, or carry one bag or two. After Kelly’s reforms, the number of searches conducted by the Customs Service dropped by about 75 percent, but the number of successful seizures improved by 25 percent. The officers went from making fairly lousy decisions about smugglers to making pretty good ones. “We made them more efficient and more effective at what they were doing,” Kelly said.

5.

Does the notion of a pit bull menace rest on a stable or an unstable generalization? The best data we have on breed dangerousness are fatal dog bites, which serve as a useful indicator of just how much havoc certain kinds of dogs are causing. Between the late 1970s and the late 1990s, more than twenty-five breeds were involved in fatal attacks in the United States. Pit bull breeds led the pack, but the variability from year to year is considerable. For instance, in the period from 1981 to 1982, fatalities were caused by five pit bulls, three mixed breeds, two St. Bernards, two German shepherd mixes, a pure-bred German shepherd, a husky-type dog, a Doberman, a Chow Chow, a Great Dane, a wolf-dog hybrid, a husky mix, and a pit bull mix—but no Rottweilers. In 1995 and 1996, the list included ten Rottweilers, four pit bulls, two German shepherds, two huskies, two Chow Chows, two wolf-dog hybrids, two shepherd mixes, a Rottweiler mix, a mixed breed, a Chow Chow mix, and a Great Dane. The kinds of dogs that kill people change over time, because the popularity of certain breeds changes over time. The one thing that doesn’t change is the total number of people killed by dogs. When we have more problems with pit bulls, it’s not necessarily a sign that pit bulls are more dangerous than other dogs. It could just be a sign that pit bulls have become more numerous.

“I’ve seen virtually every breed involved in fatalities, including Pomeranians and everything else, except a beagle or a basset hound,” Randall Lockwood, a senior vice president of the ASPCA and one of the country’s leading dog-bite experts, told me. “And there’s always one or two deaths attributable to malamutes or huskies, although you never hear people clamoring for a ban on those breeds. When I first started looking at fatal dog attacks, they largely involved dogs like German shepherds and shepherd mixes and St. Bernards—which is probably why Stephen King chose to make Cujo a St. Bernard, not a pit bull. I haven’t seen a fatality involving a Doberman for decades, whereas in the 1970s they were quite common. If you wanted a mean dog, back then, you got a Doberman. I don’t think I even saw my first pit bull case until the middle to late 1980s, and I didn’t start seeing Rottweilers until I’d already looked at a few hundred fatal dog attacks. Now those dogs make up the preponderance of fatalities. The point is that it changes over time. It’s a reflection of what the dog of choice is among people who want to own an aggressive dog.”

There is no shortage of more stable generalizations about dangerous dogs, though. A 1991 study in Denver, for example, compared 178 dogs that had a history of biting people with a random sample of 178 dogs with no history of biting. The breeds were scattered: German shepherds, Akitas, and Chow Chows were among those most heavily represented. (There were no pit bulls among the biting dogs in the study, because Denver banned pit bulls in 1989.) But a number of other, more stable factors stand out. The biters were 6.2 times as likely to be male as female, and 2.6 times as likely to be intact as neutered. The Denver study also found that biters were 2.8 times as likely to be chained as unchained. “About twenty percent of the dogs involved in fatalities were chained at the time, and had a history of long-term chaining,” Lockwood said. “Now, are they chained because they are aggressive or aggressive because they are chained? It’s a bit of both. These are animals that have not had an opportunity to become socialized to people. They don’t necessarily even know that children are small human beings. They tend to see them as prey.”

In many cases, vicious dogs are hungry or in need of medical attention. Often, the dogs had a history of aggressive incidents, and, overwhelmingly, dog-bite victims were children (particularly small boys) who were physically vulnerable to attack and may also have unwittingly done things to provoke the dog, like teasing it, or bothering it while it was eating. The strongest connection of all, though, is between the trait of dog viciousness and certain kinds of dog owners. In about a quarter of fatal dog-bite cases, the dog owners were previously involved in illegal fighting. The dogs that bite people are, in many cases, socially isolated because their owners are socially isolated, and they are vicious because they have owners who want a vicious dog. The junkyard German shepherd—which looks as if it would rip your throat out—and the German-shepherd guide dog are the same breed. But they are not the same dog, because they have owners with different intentions.

“A fatal dog attack is not just a dog bite by a big or aggressive dog,” Lockwood went on. “It is usually a perfect storm of bad human-canine interactions—the wrong dog, the wrong background, the wrong history in the hands of the wrong person in the wrong environmental situation. I’ve been involved in many legal cases involving fatal dog attacks, and, certainly, it’s my impression that these are generally cases where everyone is to blame. You’ve got the unsupervised three-year-old child wandering in the neighborhood killed by a starved, abused dog owned by the dogfighting boyfriend of some woman who doesn’t know where her child is. It’s not old Shep sleeping by the fire who suddenly goes bonkers. Usually there are all kinds of other warning signs.”

6.

Jayden Clairoux was attacked by Jada, a pit bull terrier, and her two pit bull-bullmastiff puppies, Agua and Akasha. The dogs were owned by a twenty-one-year-old man named Shridev Café, who worked in construction and did odd jobs. Five weeks before the Clairoux attack, Café’s three dogs got loose and attacked a sixteen-year-old boy and his four-year-old half brother while they were ice skating. The boys beat back the animals with a snow shovel and escaped into a neighbor’s house. Café was fined, and he moved the dogs to his seventeen-year-old girlfriend’s house. This was not the only time that he had run into trouble; a few months later, he was charged with domestic assault and, in another incident involving a street brawl, with aggravated assault. “Shridev has personal issues,” Cheryl Smith, a canine-behavior specialist who consulted on the case, says. “He’s certainly not a very mature person.” Agua and Akasha were now about seven months old. The court order in the wake of the first attack required that they be muzzled when they were outside the home and kept in an enclosed yard. But Café did not muzzle them, because, he said later, he couldn’t afford muzzles, and apparently no one from the city ever came by to force him to comply. A few times, he talked about taking his dogs to obedience classes, but he never did. The subject of neutering them also came up—particularly Agua, the male—but neutering cost a hundred dollars, which he evidently thought was too much money, and when the city temporarily confiscated his animals after the first attack, it did not neuter them, either, because Ottawa does not have a policy of preemptively neutering dogs that bite people.

On the day of the second attack, according to some accounts, a visitor came by the house of Café’s girlfriend, and the dogs got wound up. They were put outside, where the snowbanks were high enough that the backyard fence could be readily jumped. Jayden Clairoux stopped and stared at the dogs, saying, “Puppies, puppies.” His mother called out to his father. His father came running, which is the kind of thing that will rile up an aggressive dog. The dogs jumped the fence, and Agua took Jayden’s head in his mouth and started to shake. It was a textbook dog-biting case: unneutered, ill-trained, charged-up dogs with a history of aggression and an irresponsible owner somehow get loose and set upon a small child. The dogs had already passed through the animal bureaucracy of Ottawa, and the city could easily have prevented the second attack with the right kind of generalization—a generalization based not on breed but on the known and meaningful connection between dangerous dogs and negligent owners. But that would have required someone to track down Shridev Café and check to see whether he had bought muzzles, and someone to send the dogs to be neutered after the first attack, and an animal-control law that ensured that those whose dogs attack small children forfeit their right to have a dog. It would have required, that is, a more exacting set of generalizations to be more exactingly applied. It’s always easier just to ban the breed.

February 6, 2006

Acknowledgments

Every one of these stories was rigorously perfected by the copy and fact-checking departments of The New Yorker magazine. They are all wizards. Thank you.

About the Author

Malcolm Gladwell has been a staff writer with The New Yorker magazine since 1996, and all of the essays in What the Dog Saw first appeared in the pages of that magazine. He is the author of three previous books, The Tipping Point: How Little Things Can Make a Big Difference; Blink: The Power of Thinking Without Thinking; and Outliers: The Story of Success, all of which were number one New York Times bestsellers. Prior to joining The New Yorker, he was a reporter with the Washington Post, where he covered business and science and then served as the newspaper’s New York City bureau chief. Gladwell was born in England, grew up in rural Ontario, and now lives in New York City.

For more information about Malcolm Gladwell, visit his website at www.gladwell.com.
