A Cautionary Tale
A week after the attacks of 9/11, as most American stocks plummeted, a few companies, with products particularly well suited for a new and anxious age, soared in value. One of the fastest-growing stocks was Visionics, whose price more than tripled. The New Jersey company is an industry leader in the fledgling science of biometrics, a method of identifying people by scanning and quantifying their unique physical characteristics—their facial structures, for example, or their retinal patterns. Visionics manufactures a face-recognition technology called FaceIt, which creates identification codes for individuals based on eighty unique aspects of their facial structures, like the width of the nose and the location of the temples. FaceIt can instantly compare an image of any individual’s face with a database of the faces of suspected terrorists, or anyone else.
Visionics was quick to understand that the terrorist attacks represented not only a tragedy but also a business opportunity. On the afternoon of 9/11, the company sent out an e-mail message to reporters, announcing that its founder and CEO, Joseph Atick, “has been speaking worldwide about the need for biometric systems to catch known terrorists and wanted criminals.” On September 20, Atick testified before a special government committee appointed by the secretary of transportation. Atick’s message—that security in airports and embassies could be improved using face-recognition technology as part of a comprehensive national surveillance plan that he called Operation Noble Shield—was greeted enthusiastically by members of the committee. To identify terrorists concealed in the crowd, Atick proposed to wire up Reagan National Airport in Washington and other vulnerable airports throughout the country with more than 300 cameras each. Cameras would scan the faces of passengers standing in line, and biometric technology would be used to analyze their faces and make sure they were not on an international terrorist watch list. More cameras unobtrusively installed throughout the airport could identify passengers as they walked through metal detectors and public areas. And a final scan could ensure that no suspected terrorist boarded a plane. “We have created a biometric network platform that turns every camera into a Web browser submitting images to a database in Washington, querying for matches,” Atick said. “If a match occurs, it will set off an alarm in Washington, and someone will make a decision to wire the image to marshals at the airport.”
Of course, protecting airports is only one aspect of homeland security: A terrorist could be lurking on any corner in America. In the wake of the 9/11 attacks, Howard Safir, the former New York police commissioner, recommended the installation of 100 biometric surveillance cameras in Times Square to scan the faces of pedestrians and compare them with a database of suspected terrorists. Atick told me that since the attacks, he has been approached by local and federal authorities from across the country about the possibility of installing biometric surveillance cameras in stadiums and subway systems and near national monuments. “The Office of Homeland Security might be the overall umbrella that will coordinate with local police forces” to install cameras linked to a biometric network throughout American cities, Atick suggested. “How can we be alerted when someone is entering the subway? How can we be sure when someone is entering Madison Square Garden? How can we protect monuments? We need to create an invisible fence, an invisible shield.”
Before 9/11, the idea that Americans would voluntarily agree to live their lives under the gaze of a network of biometric surveillance cameras, peering at them in government buildings, shopping malls, subways, and stadiums, would have seemed unthinkable, a dystopian fantasy of a society that had surrendered privacy and anonymity. But after 9/11, the fear of terrorism was so overwhelming that people were happy to give up privacy without experiencing a corresponding increase in security. More concerned about feeling safe than actually being safe, they demanded the construction of vast technological architectures of surveillance even though the most reliable empirical studies suggested that the proliferation of surveillance cameras had “no effect on violent crime” or terrorism. In this regard, however, America was at least a decade behind the times. In the 1990s, Britain experienced similar public demands for surveillance cameras as a feel-good response to fears of terrorism. And in Britain, the cameras were implemented on a wide scale, providing a cautionary tale about the dangers of constructing ineffective but popular architectures of surveillance that continue to expand after the initial fears that led to their installation have passed.
At the beginning of September 2001, I had gone to Britain to answer a question that seems far more pertinent today than it did when I arrived: Why would a free and flourishing Western democracy wire itself up with so many closed-circuit television cameras that it resembled the set of Real World or The Truman Show? The answer, I discovered, was fear of terrorism. In 1993 and 1994, two terrorist bombs planted by the IRA exploded in London’s financial district, a historic and densely packed square mile known as the City of London. In response to widespread public anxiety about terrorism, the government decided to install a “ring of steel”—a network of closed-circuit television cameras mounted on the eight official entry gates that control access to the City. Anxiety about terrorism didn’t go away, and the cameras in Britain continued to multiply. In 1993, a two-year-old boy named Jamie Bulger was kidnapped and murdered by two ten-year-old schoolboys, and surveillance cameras captured a grainy shot of the killers leading their victim out of a shopping center. Bulger’s assailants couldn’t, in fact, be identified on camera—they were caught because they boasted to their friends—but the video footage, replayed over and over again on television, shook the country to its core. Riding a wave of enthusiasm for closed-circuit television, or CCTV, created by the attacks, John Major’s Conservative government decided to devote more than three-quarters of its crime-prevention budget to encourage local authorities to install CCTV. The promise of cameras as a magic bullet against crime and terrorism inspired one of Major’s most successful campaign slogans: “If you’ve got nothing to hide, you’ve got nothing to fear.”
Instead of being perceived as an Orwellian intrusion, the cameras in Britain proved to be extremely popular. They were hailed as the people’s technology, a friendly eye in the sky, not Big Brother but a kindly and watchful uncle or aunt. Local governments couldn’t get enough of them; each hamlet and fen in the British countryside wanted its own CCTV surveillance system, even when the most serious threat to public safety was coming from rampaging soccer fans. In 1994, 79 city centers had surveillance networks; by 1998, 440 city centers were wired, including all the major cities with a population over 500,000. By the late 1990s, as part of its center-left campaign to be tough on crime, Tony Blair’s New Labour government decided to support the cameras with a vengeance. Between 1996 and 1998, CCTV became the “single most heavily funded non-criminal justice crime prevention measure.” There are now so many cameras attached to so many different surveillance systems in the United Kingdom that people have stopped counting. According to one estimate, there are 4.2 million surveillance cameras in Britain, and, in fact, there may be far more.
As I filed through customs at Heathrow Airport, there were cameras concealed in domes in the ceiling. There were cameras pointing at the ticket counters, at the escalators, and at the tracks as I waited for the Heathrow Express to Paddington Station. When I got out at Paddington, there were cameras on the platform and cameras on the pillars in the main terminal. Cameras followed me as I walked from the main station to the underground, and there were cameras at each of the stations on the way to King’s Cross. Outside King’s Cross, there were cameras trained on the bus stand and the taxi stand and the sidewalk, and still more cameras in the station. There were cameras on the backs of buses to record people who crossed into the wrong traffic lane. Throughout Britain today, there are speed cameras and red-light cameras, cameras in lobbies and elevators, in hotels and restaurants, in nursery schools and high schools. There are even cameras in hospitals. (After a raft of “baby thefts” in the early 1990s, the government gave hospitals money to install cameras in waiting rooms, maternity wards, and operating rooms.) And everywhere there are warning signs, announcing the presence of cameras with a jumble of different icons, slogans, and exhortations, from the bland “CCTV in Operation” to the peppy “CCTV: Watching for You!” By one estimate, the average Briton is now photographed by more than 300 separate cameras from 30 separate CCTV networks in a single day.
Britain’s experience under the watchful eye of the CCTV cameras is a vision of what Americans can expect if we choose to go down the same road in our efforts to achieve homeland security. Although the cameras in Britain were initially justified as a way of combating terrorism, they soon came to serve very different functions: Seven hundred cameras now record the license plate number of every car that enters central London during peak hours, to confirm that the drivers have paid a £5 traffic-abatement tax. (Those who haven’t paid are charged a fine.) The cameras are designed not to produce arrests but to make people feel that they are being watched at all times. Instead of keeping terrorists off planes, biometric surveillance is being used to keep punks out of shopping malls. The people behind the live video screens are zooming in on unconventional behavior in public that, in fact, has nothing to do with terrorism. And rather than thwarting serious crime, the cameras are being used for different purposes that Americans may prefer to avoid.
The dream of a biometric surveillance system that can identify people’s faces in public places and separate the innocent from the guilty is not new. Clive Norris, a criminologist at the University of Hull, is Britain’s leading authority on the social effects of CCTV. In his definitive study, The Maximum Surveillance Society: The Rise of CCTV, Norris notes that in the nineteenth century, police forces in England and France began to focus on how to distinguish the casual offender from the “habitual criminal” who might evade detection by moving from town to town. In the 1870s, Alphonse Bertillon, a records clerk at the prefecture of police in Paris, used his knowledge of statistics and anthropometric measurements to create a system for comparing the thousands of photographs of arrested suspects in Parisian police stations. He took a series of measurements—of skull size, for example, and the distance between the ear and chin—and created a unique code for every suspect whom the police had photographed. Photographs were then grouped according to the codes, and a new suspect could be compared only with the photos that had similar measurements, instead of with the entire portfolio. A procedure that had taken hours or days was now reduced to a few minutes. Although widely adopted, Bertillon’s system was hard for unskilled clerks to administer. For this reason, it was edged out, as an identification system, by the fingerprint, championed by Francis Galton, the founder of the eugenics movement, who saw fingerprints as a way of classifying “hereditary” criminals.
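Bertillon's filing scheme is, in modern terms, a bucketed index: quantize each measurement into a coarse band, use the tuple of bands as a key, and compare a new suspect only against records filed under the same key. The sketch below illustrates the idea; the measurement names, band widths, and record format are all assumptions for the example, not details from Bertillon's actual system.

```python
# Illustrative sketch of Bertillon-style indexing: records are bucketed by a
# code derived from quantized body measurements, so a new suspect is compared
# only against records sharing the same code, not the whole archive.
from collections import defaultdict

# Each measurement is divided into coarse bands; the tuple of band numbers
# is the suspect's code. Names and band widths here are hypothetical.
BANDS = (("skull_mm", 50), ("ear_chin_mm", 20))

def bertillon_code(measurements):
    """Quantize each measurement into a band; the tuple of bands is the code."""
    return tuple(measurements[name] // width for name, width in BANDS)

archive = defaultdict(list)  # code -> list of (suspect_id, measurements)

def file_record(suspect_id, measurements):
    """File a photographed suspect under the bucket for their code."""
    archive[bertillon_code(measurements)].append((suspect_id, measurements))

def candidates_for(measurements):
    """Return only the records in the matching bucket, a fraction of the archive."""
    return archive[bertillon_code(measurements)]

file_record("A-101", {"skull_mm": 188, "ear_chin_mm": 142})
file_record("A-102", {"skull_mm": 162, "ear_chin_mm": 130})

# A new arrestee with similar measurements is compared against one bucket only,
# turning an hours-long search of the portfolio into a short comparison.
matches = candidates_for({"skull_mm": 190, "ear_chin_mm": 145})
```

The speedup Norris describes comes entirely from the bucketing: the clerk never inspects photographs whose codes differ, which is also why the scheme demanded skilled, consistent measurement to work.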
It wasn’t until the 1980s, with the development of computerized biometric and other face-recognition systems, that Bertillon’s dream became feasible on a broad scale. In the course of studying how biometric scanning could be used to authenticate the identities of people who sought admission to secure buildings, innovators like Joseph Atick realized that the same technology could be used to pick suspects or license plates out of a crowd. It’s the license plate technology that the London police have found most attractive, because it tends to be more reliable: In 1996, the City of London adopted a predecessor to the current automated license-plate-recognition system that records the plates of all cars entering and leaving the city. The stored license plate numbers are compared with a database of those of stolen cars, and the system can set off alarms whenever a suspicious car enters the city. By contrast with this relatively effective license-plate-recognition system, a test of the best face-recognition systems, funded by the U.S. Department of Defense, found that they failed to identify matches a third of the time. And a review by the International Biometrics Group, an impartial industry trade organization, found that facial-scan technologies have very high false rejection rates over time, and that they have trouble identifying people with darker skin, as well as people who change their hairstyles or facial hair.
Soon after arriving in London, I visited the CCTV monitoring room in the City of London police station, where the British war against terrorism began. On the corner of Love Lane, the station has two cameras pointed at the entrance and a sign by the door inviting citizens to “Rat on a Rat: Call Crime Stoppers Anonymously.” I was met by the press officer, Tim Parsons, and led up to the control station, a modest-size installation that looks like an air-traffic-control room, with uniformed officers scanning two rows of monitors in search of car thieves and traffic offenders. “The technology here is geared up to terrorism,” Parsons told me. “The fact that we’re getting ordinary people—burglars stealing cars—as a result of it is sort of a bonus.”

“Have you caught any terrorists?” I asked.

“No, not using this technology, no,” he replied.
As we watched the monitors, rows of slow-moving cars filed through the gates into the City, and cameras recorded their license plate numbers and the faces of the drivers. After several minutes, one monitor set off a soft, pinging alarm. We had a match! But, no, it was a false alarm. The license plate that set off the system was 8620bmc, but the number of the stolen car recorded in the database was 8670amc. After a few more mismatches, the machine finally found an offender, though not a serious one. A red van had gone through a speed camera, and the local authority that issued the ticket couldn’t identify the driver. An alert went out on the central police national computer, and it set off the alarm when the van entered the City. “We’re not going to do anything about it because it’s not a desperately important call,” said the sergeant on duty.
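The false alarm above (8620bmc read against the stolen plate 8670amc) is characteristic of plate matchers that tolerate a character or two of camera-reading error: accepting near misses catches plates the optics misread, at the cost of alarms on innocent cars. The sketch below is a minimal illustration of that trade-off, not the City of London's actual system; the watch list, tolerance, and function names are assumptions.

```python
# Illustrative sketch: a plate matcher that allows up to TOLERANCE differing
# characters will fire on near misses like the one described in the text.

def differing_positions(a: str, b: str) -> int:
    """Count character positions where two equal-length plate readings differ."""
    return sum(x != y for x, y in zip(a, b))

WATCH_LIST = {"8670amc"}  # hypothetical stolen-car plate database
TOLERANCE = 2             # assumed allowance for camera misreads

def check_plate(reading: str):
    """Return any watch-list plates within TOLERANCE of the camera's reading."""
    return [p for p in WATCH_LIST if differing_positions(reading, p) <= TOLERANCE]

# The reading from the scene above differs from the stolen plate in two
# positions (2 vs. 7, b vs. a), so a tolerant matcher alarms: a false positive.
near_miss = check_plate("8620bmc")
# A plate that shares almost nothing with the watch list stays silent.
no_alarm = check_plate("8111xyz")
```

Tightening the tolerance to zero would have silenced that alarm, but would also mean a single misread character lets a genuinely stolen car through the gates unnoticed.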
Because the cameras on the “ring of steel” surrounding the City take clear pictures of each driver’s face, I asked whether the City used the biometric facial-recognition technology that American airports are now being urged to adopt. “We’re experimenting with it to see if we could pick faces out of the crowd, but the technology is not sufficiently good enough,” Parsons said. “The system that I saw demonstrated two or three years ago, a lot of the time it couldn’t differentiate between a man and a woman.” Nevertheless, Parsons insisted that the technology will become more accurate. “It’s just a matter of time. Then we can use it to detect the presence of criminals on foot in the city,” he said.
In the future, as face-recognition technology becomes more accurate, it will become even more intrusive, because of pressures to expand the biometric database. I mentioned to Joseph Atick of Visionics that the City of London was thinking about using his technology to establish a database that would include not only terrorists but also all British citizens whose faces were registered with the national driver’s license bureau. If that occurs, every citizen who walks the streets of the City could be instantly identified by the police and evaluated in light of his or her past misdeeds, no matter how trivial. With the impatience of a rationalist, Atick dismissed the possibility. “Technically, they won’t be able to do it without coming back to me,” he said. “They will have to justify it to me.” Atick struck me as a refined and thoughtful man (he is the former director of the Computational Neuroscience Laboratory at Rockefeller University), but it seems odd to put the liberties of a democracy in the hands of one unelected scientist, no matter how well supervised he may be.
Atick says that his technology is an enlightened alternative to racial and ethnic profiling, and if the faces in the biometric database were, in fact, restricted to known terrorists, his argument might be convincing. Instead of stopping all passengers who appear to be Middle Eastern and victimizing thousands of innocent people, the system could focus with laserlike precision on a handful of the guilty. (This assumes that the terrorists aren’t cunning enough to disguise themselves.) But when I asked whether any of the existing biometric databases in England or America are limited to suspected terrorists, Atick confessed that they aren’t. There is a simple reason for this: Few terrorists are suspected in advance of their crimes. For this reason, cities in England and elsewhere have tried to justify their investment in face-recognition systems by filling their databases with those troublemakers whom the authorities can easily identify: local criminals. When FaceIt technology was used to scan the faces of the thousands of fans entering the Super Bowl in Tampa, for example, the matches produced by the database weren’t terrorists. They were low-level ticket scalpers and pickpockets.
Biometrics is a feel-good technology that is being marketed based on a false promise—that the database will be limited to suspected terrorists. But the FaceIt technology, as it’s now being used in England, isn’t really intended to catch terrorists at all. It’s intended to scare local hoodlums into thinking they might be setting off alarms even when the cameras are turned off. I came to understand this “Wizard of Oz” aspect of the technology when I visited Bob Lack’s monitoring station in the London borough of Newham. A former London police officer, Lack attracted national attention—including a visit from Tony Blair—by pioneering the use of face-recognition technology before other people were convinced that it was entirely reliable. What Lack grasped early on was that reliability was in many ways beside the point.
Lack installed his first CCTV system in 1997, and he intentionally exaggerated its powers from the beginning. “We put one camera out and twelve signs” announcing the presence of cameras, Lack told me. “We reduced crime by sixty percent in the area where we posted the signs. Then word on the street went out that we had dummy cameras.” So Lack turned his attention to face-recognition technology and tried to create the impression that far more people’s faces were in the database than actually were. “We’ve designed a poster now about making Newham a safe place for a family,” he said. “And we’re telling the criminal we have this information on him: We know his name; we know his address; we know what crimes he commits.” It’s not true, Lack admitted, “but then, we’re entitled to disinform some people, aren’t we?”

“So you’re telling the criminal that you know his name even though you don’t,” I asked.

“Right,” Lack replied. “Pretty much that’s about advertising, isn’t it?”
Lack was elusive when I asked him who, exactly, was in his database. “I don’t know,” he replied, noting that the local police chief decided who went into the database. He would only make an “educated guess” that the database contained 100 “violent street robbers” under the age of eighteen. “You have to have been convicted of a crime—nobody suspected goes on, unless they’re a suspected murderer—and there has to be sufficient police intelligence to say you are committing those crimes and have been so in the last twelve weeks.” When I asked for the written standards that determined who, precisely, was put in the database, and what crimes they had to have committed, Lack never produced them.
From Lack’s point of view, it didn’t matter who was in his database, because his system wasn’t designed to catch terrorists or violent criminals. In the three years that the system had been up and running, it hadn’t resulted in a single arrest. “I’m not in the business of having people arrested,” Lack said. “The deterrent value has far exceeded anything you imagine.” The alarms went off an average of three times a day during the month of August 2001, but the only people Lack would conclusively identify were local youths who had volunteered to be put in the database as part of an “intensive surveillance supervision program,” as an alternative to serving a custodial sentence. “The public statements about the efficacy of the Newham facial-recognition system bear little relationship to its actual operational capabilities, which are rather weak and poor,” I was told by Clive Norris of the University of Hull. “They want everyone to believe that they are potentially under scrutiny. Its effectiveness, perhaps, is based on a lie.”
This lie has a venerable place in the philosophy of surveillance. In his preface to Panopticon, Jeremy Bentham imagined the social benefits of a ring-shaped “inspection-house” whose inmates could be subject to constant surveillance by monitors in a central inspection tower who were concealed by venetian blinds. Uncertain about whether or not they were being watched, the inhabitants would be inhibited from engaging in antisocial behavior. Michel Foucault described the purpose of the Panopticon—“to induce in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power.” Foucault predicted that this condition of ubiquitous, unverifiable surveillance would come to define the modern age.
Britain, at the moment, is not quite the Panopticon, because its various camera networks aren’t linked and there aren’t enough operators to watch all the cameras. But over the next few years, that seems likely to change, as Britain moves toward an integrated Web-based surveillance system. Today, for example, the surveillance systems for the London underground and the British police feed into separate control rooms, but Sergio Velastin, a computer-vision scientist, says he believes the two systems will eventually be linked, using digital technology. Velastin is working on behavioral-recognition technology for the London and Paris subway systems that can look for unusual movements in crowds, setting off an alarm, for example, when people appear to be fighting or trying to jump on the tracks. (Because human CCTV operators are easily bored and distracted, automatic alarms are viewed as the wave of the future.) After ten years of research, Velastin and his colleagues have produced a system called the Modular Intelligent Pedestrian Surveillance Architecture, which can be programmed to detect unusual situations—such as stationary loiterers or unaccompanied bags—and to alert the operator with automated alarms. “Imagine you see a piece of unattended baggage which might contain a bomb,” Velastin told me. “You can back-drag on the image and locate the person who left it there. You can say, ‘Where did that person come from and where is that person now?’ You can conceive in the future that you might be able to do that for every person in every place in the system.” Without social agreement and legal restrictions on how the system could be deployed, it could create a kind of ubiquitous surveillance that the government could use to harass its political enemies or that citizens could use, with the help of subpoenas, to blackmail or embarrass one another.
Once thousands of cameras from hundreds of separate CCTV systems are able to feed their digital images to a central monitoring station, and the images can be analyzed with face- and behavioral-recognition software to identify unusual patterns, then the possibilities of the Panopticon will suddenly become very real. And few people doubt that connectivity is around the corner. Stephen Graham has predicted gloomily that CCTV will become the fifth utility, following the pattern of gas, electricity, water, and telecommunications, which began as disconnected networks in the nineteenth century and were eventually standardized, integrated, and ubiquitous.
At the moment, there is only one fully integrated CCTV system in Britain: It transmits digital images over a broadband wireless network, like the one Joseph Atick has proposed for American airports, rather than relying on traditional video cameras that are chained to dedicated cables. It is located in the fading city of Hull, Britain’s leading timber port, about three hours northeast of London. Hull has traditionally been associated not with dystopian fantasies but with fantasies of a more basic sort: For hundreds of years, it has been the prostitution capital of northeastern Britain. Six years ago, a heroin epidemic created an influx of addicted young women who took to streetwalking to sustain their drug habits. Nearly two years ago, the residents association of a low-income housing project called the Goodwin Centre hired a likable and enterprising young civil engineer named John Marshall to address the problem of underage prostitutes having sex on people’s windowsills. When Marshall met me at the Hull railway station, he identified himself by carrying a CCTV warning sign. Armed with more than $1 million in public financing from the European Union, Marshall decided to build what he calls the world’s first Ethernet-based, wireless CCTV system.
Excerpted from The Naked Crowd by Jeffrey Rosen. Copyright © 2004 by Jeffrey Rosen. Excerpted by permission of Random House Trade Paperbacks, a division of Random House LLC. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.