WASHINGTON, WE HAVE A PROBLEM
Although it wasn’t so long ago, the autumn of 1999 belongs to an era now past. Today, cyber security is a serious issue that touches nearly everyone personally; in those days, it was still seen mostly as a nuisance issue. Today, there are armies of specialists charged with taking it seriously; in those days, there were few. Then, some of us were housed in a place now called the Eisenhower Executive Office Building. But even in those old days, its name was “Old.”
The Old Executive Office Building, or OEOB, was built in Washington, D.C., in the 1870s. One of the few surviving examples of Second Empire architecture in the U.S., it looks like an ornate layer cake with a mansard roof. It stands a few feet away from the West Wing of the more classical White House, and shares with the White House both the black iron fence surrounding the White House Complex and the strict security that greets any visitor who, passing the brass cannon on the grounds (captured from the Manila arsenal by Admiral Dewey in the Spanish American War), enters its lobby.
My office in November 1999 was OEOB 302. One Colonel Oliver North had been a previous occupant. North was notorious for taking policy into his own hands during the Iran-Contra affair of the 1980s, and when I moved into this office I found he’d also taken architecture into his own hands, destroying the magnificent twenty-foot ceilings to install a warren of desks in a loft. However, he couldn’t take away the view. On one particularly beautiful day that fall, my view of the Washington Monument, and in the distance the Jefferson Memorial and the planes landing at National Airport, never looked better.
I was Senior Director for Critical Infrastructure Protection on the staff of the National Security Council. On this November 1999 afternoon I was talking with a visitor from Pittsburgh: Rich Pethia, head of the federal government’s Computer Emergency Response Team Coordination Center (CERT/CC) based at Carnegie Mellon University, which was and still is the closest thing the United States has to a headquarters for Internet security.1 That made Rich Pethia the closest thing to the country’s chief cyber-inspector.2
“You might be interested in a small workshop that we’ve just had,” Rich told me. “It’s about a set of new attack tools we’ve spotted on the Internet.” It’s not a figure of speech to say my ears perked up. One of the tools, he went on, was a software program that went by the name Stacheldraht – German for “barbed wire.” The program could launch against a Web site a “distributed denial-of-service attack.” A major attack of this type could shut off all traffic to and from the site, as if a barbed-wire fence had been thrown around the perimeter.
The basic means of working such mischief was (and is) fairly simple. Web sites respond to incoming signals. When you enter a Web address in your browser, you send a signal requesting that site to display a page view, and as you click through the site you may ask it to perform transactions: download a file, sell you a book, play a video. However, even big Web sites can’t process an unlimited volume of traffic at the same time. They can be overloaded. When this overloading is done intentionally, by someone (through some program) flooding a site with signals that induce it to use up its resources doing useless tasks, you have what is called a denial-of-service attack: legitimate users can’t be served. Depending on the effectiveness of the attack, results may range from a mere slowing of the site to a system-crashing takedown for hours or days.
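The overload mechanics just described can be sketched in a few lines. The toy model below is entirely my own illustration (the class, names, and numbers are invented for the example, not taken from any real attack tool): a server with finite capacity simply drops requests once its queue is full, so legitimate users are denied service.

```python
# Toy model of denial of service: a server with a finite request queue.
# Once flood traffic fills the queue, legitimate requests are dropped.
from collections import deque

class ToyServer:
    def __init__(self, capacity: int):
        self.queue = deque()      # pending requests
        self.capacity = capacity  # maximum requests the server can hold
        self.dropped = 0          # requests refused ("service denied")

    def receive(self, request: str) -> None:
        if len(self.queue) < self.capacity:
            self.queue.append(request)
        else:
            self.dropped += 1  # no room left: the request is refused

server = ToyServer(capacity=100)

# A flood of junk requests arrives faster than the server can drain them...
for i in range(1000):
    server.receive(f"junk-{i}")

# ...so a legitimate page view arriving now is simply turned away.
server.receive("legitimate-page-view")
print(server.dropped)  # 901
```

In a real attack the flood arrives from many sources at once and the requests are crafted to look legitimate, which is what makes the filtering problem so hard.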
Many hackers had tried the trick before.3 However, Pethia and his crew were now seeing ever-more-powerful forms of the insidious variation just mentioned: the distributed denial-of-service (DDOS) attack. Instead of attacking a site directly from their own computers, some hackers had figured out how to have armies of “zombie” computers do their bidding. As Internet use grew, millions of poorly protected machines, many of them home PCs (personal computers), were coming online. It wasn’t hard to find, say, a few thousand – and then to implant upon them small, automated program modules called bots (short for “robots”). The owners of the PCs would never know they were harbouring these bots, and each machine could be made to send signals to any Web site that the botmaster might target. This gave the dark side thousands of points from which to launch an attack.
DDOS attacks remain very difficult to defend against. When a Web site is bombarded by signals from all over the Internet, with various tricks thrown in to hide the many sources of attack, filtering out the bad stuff becomes a tall order. And there is little that systems administrators can do in terms of prevention. As with many challenges in computer security, preventing a DDOS attack requires making everyone else on the Internet secure from intrusion. This still remains impossible to achieve.
The first DDOS attacks had been noticed in the summer of 1998. They were primitive and fairly small-scale attempts, not serious enough to merit much concern. But by the fall of 1999, sophisticated master tools like Stacheldraht had appeared and victims were starting to feel the effects. One attack had brought down a crucial Web server at the University of Minnesota, knocking related systems at the school out of kilter as well. Recovery to normal Web service had taken several days.4
Worse, Pethia told me, new attack tools were evolving. His CERT/CC team had seen evidence of scary advances as recently as October, while planning the workshop. Some tools were being put into easy-to-use form and then passed around, almost like packaged commercial software, so that even inexpert hackers (called “script kiddies” for their lack of skill in writing code) could dream of being DDOS commandos. Meanwhile, CERT/CC was labouring to put together a public report on its findings. What could be said that might be useful, other than warning the world to brace for major DDOS incidents?
To me this was bad news at a bad time. In November 1999, like most people working with computer security and network reliability, I was busy preparing for the Y2K event. There, of course, we were dealing with a software problem that was self-inflicted. In the past, to save memory space in computers, a lot of code had been written with date fields that registered the year in two digits: 76 for 1976, and so on. Important functions in many programs were tied to the tracking of the current date, and yet for a long while, few of us worried about whether those functions might get boggled when the calendar seemed to flip backward from 99 to 00. We assumed our programs would be obsolete and out of use by then. Thus, as the late 90s ticked away, we all became caught up in a massive act of penance for this sin of omission: trying to track down and patch countless programs that were still in everyday use, often for critical work such as managing power grids and air traffic.
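The two-digit date trap can be shown in a toy calculation (a hypothetical sketch of the kind of logic involved, not code from any actual Y2K-era system):

```python
# Classic two-digit-year bug: storing 1976 as "76" saves memory,
# but date arithmetic breaks when the calendar rolls past 1999.

def years_elapsed(start_yy: int, end_yy: int) -> int:
    """Naive elapsed-years calculation on two-digit years, as in much legacy code."""
    return end_yy - start_yy

# A record created in '76, checked in '99: the answer is correct.
print(years_elapsed(76, 99))  # 23

# The same record checked in '00: the result goes negative, and any
# logic keyed to it (billing cycles, expiry checks) misfires.
print(years_elapsed(76, 0))   # -76
```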
Some argued that Y2K fears were overblown – The Economist published a cover cartoon of a tiny Y2K “bug” grinning under a magnifying glass that made him look monstrous. But no one was certain what could occur, and none in my circle cared to take chances. My position had put me in the midst of coordinating Y2K efforts in the U.S. In the last weeks before 00 hour there were myriad lists to be gone over once more, and the conversation with Rich Pethia started a new list forming in my mind. It was a list of reasons why some people might find the Y2K moment a perfect moment to launch waves of DDOS attacks.
Following my meeting with Rich, I briefed my colleagues and then the National Security Advisor, Sandy Berger. We pulled together a team of experts from government and industry to confront this new threat. As December 31 drew near, we worked around the clock, learning a great deal about DDOS tools but failing to come up with an effective defence. (In hindsight this is not surprising, given that only partial and imperfect defences have been developed in all the years since.)
Fortunately, there were no major attacks or serious Y2K glitches of any kind come the new millennium.5 Exhausted, I fell asleep before midnight on New Year’s Eve. By then we knew that computer systems across Europe had survived the turnover from 1999 to 2000 without crashing or letting planes fall from the sky. I awoke to a world of blessed normality on January 1.
The calm before the storm lasted not quite six weeks. On February 8, 2000, the public Web site of Yahoo! was taken down by a DDOS attack. The next few days brought similar attacks against Amazon, Buy.com, CNN, eBay, Datek, E*Trade, Excite@Home, America Online, and others. Whoever was doing it couldn’t be traced. Although the service disruptions lasted hours, not days or weeks, the attacks set off ripple effects that magnified their impact. The targeted companies saw their stock prices drop; collectively they lost billions in market valuation. A media frenzy ensued. Moreover, the dotcom boom was then in full swing (though it would end soon), and the targeted firms were all among the darlings of the new economy. The fact that they had somehow turned up on a hit list – and had been shown to be vulnerable to a new menace – was unsettling.
That’s why, less than ten days later, I found myself sitting behind President Clinton in the Cabinet Room of the White House.

THE PLAN
Few people get the opportunity to examine the back of the President’s head for a couple of hours. Bill Clinton has a full head of hair, silvered, with a nice swirl in it. The Cabinet Room, with chairs marked for every Cabinet member (they get to buy them upon leaving office), opens out onto the Rose Garden and sits next to the Oval Office. As the President remarked, “The room is smaller than it looks on television,” 6 and on that day it was quite full.
The occasion was a meeting with industry and civic leaders to announce joint actions for strengthening Internet and network security. Present were top executives from information-technology companies – Microsoft, IBM, Cisco, and others – plus experts from security firms, research universities, and civil liberties groups. President Clinton had brought his Chief of Staff, John Podesta; Attorney General Janet Reno; Commerce Secretary Bill Daley; and his Science Advisor, Neal Lane; also there were National Security Advisor Sandy Berger and the National Coordinator for Security, Infrastructure Protection, and Counter-Terrorism, Dick Clarke.
I had organized the meeting. It was the first time – and to date the only time – that the U.S. President and many of his Cabinet met with industry and academia because the Internet was shown to be insecure. We were using the power of the White House as a bully pulpit to declare a high-level, widespread commitment to improving cyber security.
Practical steps had not been ignored, either. We had made sure an actual groundwork for policy was in place before taking our seats in the Cabinet Room that day. The Administration’s “National Plan for Information Systems Protection, Version 1.0” (a.k.a. “the Plan”) had been put into draft form and released just a month before. It was the first-ever such policy statement. Having had a key hand in crafting it, I was proud of it.
The plan had been written in consultation with those now gathered around the table, so I was confident of their agreement. Essentially, the plan called for the federal government to take a seed-and-support role through actions such as funding research and setting good examples through its own practice. It would largely refrain from regulating, or intervening in, the private sector on cyber security. In return, the private sector would come together to do the lion’s share of the work needed.
Along with research funds, government steps announced by President Clinton included new programs for education in cyber security and for protection of the government’s own computers. A new Institute for Information Infrastructure Protection (I3P) would be formed; I later served as its vice-chair. The purpose of this consortium of universities, government labs, and nonprofits would be to identify and study problems in cyber security that fell outside the direct mission of any existing agency – in other words, to plug gaps in the R&D portfolio. Also, the government’s Cabinet officers were to meet with members of the business community.
Individually, business leaders pledged to work to make each firm’s products more secure. Collectively, they would create an industry mechanism to share information on cyber attacks in order to better respond to them. Nearly forty leading IT companies and most major industry associations signed the agreement.
The course was set and publicly declared. The government would fund, facilitate and educate, and become a model for others to emulate, but it would neither regulate nor dictate. For its part, the private sector would collaborate, marshalling its problem-solvers for the challenges that now loomed.
It was a policy in keeping with the market-driven, deregulating spirit of the times, and one whose basic wisdom seemed to be affirmed by recent events. Government and industry had just worked together well along similar lines to address the Y2K issue. The IT industries were booming and innovating brilliantly. The U.S. government had turned its own huge budget deficit into a surplus and seemed to have perfected the art of deftly nudging a free-market economy ever upward. Surely the emerging threats in cyber security could be conquered – or at least brought under control – before long.
Through the first decade of the 2000s, these policies have been carried forward in the United States with almost no real changes. Approaches in other countries have differed a bit in the organizational details, but except for countries like Singapore and China, most countries in the Organisation for Economic Co-operation and Development (OECD) have followed the non-regulatory government-business partnership model pioneered by the U.S. In both Canada and the European Union, policies are less explicit than in the U.S. In Canada, various government departments and agencies, including the RCMP, deal with cyber crime, and there is also an integrated partnership between international, federal, and provincial law enforcement agencies.7 And the EU has moved to strengthen its countries’ cyber-policing systems (which are just one aspect of cyber security, by the way) – forging ties between police and the private sector, creating a common alert platform for coordinating with Europol, and allowing more latitude for cross-border searches (whereby investigators in one country can pry remotely into computers in others, searching for evidence).8 Still, there are no national differences in cyber-security policy that are anything like the differences in, say, health care policy between the U.S. and Canada or Europe. Although the Internet is worldwide and not governed by any single entity, the U.S. has long been the de facto leader in matters related to the Net, so that what is done in the U.S. largely shapes what’s done elsewhere.
The question is: Has it worked?

THE SHORT ANSWER, THE REAL STORY
The short answer is no. Many would say the obvious answer is no. The record for “biggest data theft in history” keeps being broken. Business and government computers are frequently hacked into, making it ever more likely that your personal information has been stolen at least once. Your own computer may be hacked into use as part of a botnet, a malicious hacking network. The odds of this may still be less than fifty-fifty, but the estimates have kept rising, with various sources saying that anywhere from one of every ten to one of every four PCs harbours a bot.9 If you are Estonian, you saw your entire country crippled by massive waves of DDOS attacks in 2007. If you live anywhere else and work in national security, you know that your country could be the next target of cyber war or cyber terror and you can think of scenarios in which the effects would be far more serious than they were in Estonia.
Leading periodicals are handing in their verdicts – “Internet security is broken” (New York Times),10 “Just Another Oxymoron: Internet Security” (PC World).11 This is not hysteria. This and the following chapters will revisit some major incidents and trends on the cyber-security front since the first plan was launched. By inviting you behind the scenes, I hope to show you the real story that has led so many of us to conclude that the cyber-security system is, indeed, broken.

ARE WE DOING THE BEST WE CAN?
The question of whether policies have worked is tricky, as another question lurks behind it. How does a society decide if policies are “working”? Ideally goals and metrics are clearly defined. For example, “Our goal is to reduce drunk driving by 25 per cent over the next five years, as measured by police reports. To achieve this, we will . . .” Our policies for cyber security do not provide such clarity, however. They tend to set “process” goals – create this new organization, fund that program. Or they are hortatory: We hereby exhort and encourage others to do something. “Output” or “performance” metrics are mostly lacking, and those that exist are fuzzy. For example, a key goal of U.S. policy has been to “minimize disruptions” to critical infrastructure (including the Internet and other not-so-public networks).
Just asking the question about “working” reveals one of the key underlying problems in Internet security. It is hard to set specific goals or even measure progress when statistics and data-gathering are spotty. For tracking physical crime there are excellent records such as the Uniform Crime Reports in the U.S., but for cyber crime we have a hodgepodge of data ranging from the reliable to the dubious, plagued by gaps and under- or over-reporting. What evidence we do have is mostly bad news. It shows marked increases in both the frequency and the scale of many kinds of cyber incidents since 2000, along with a rapid growth of new kinds.
DDOS extortion – pay us or we’ll shut down your Web site – was once unheard of but now is rampant, say law officers in England and elsewhere. Spam and phishing attacks have gone from unheard of to nearly uncountable.
But noting that cyber crime has “gone up” is just one metric and a rather simplistic one at that. There are two more tests we ought to apply, two further questions that any keen observer would ask.
First: Isn’t it possible that the cyber-security people are doing the best they can, against a very tough and fast-growing variety of threats?
Not surprisingly, this line of thinking is often promoted by software firms and other firms in the IT industries, which have a natural interest in persuading customers that they’re doing everything they can. At one press event, when Microsoft’s security chief was badgered about vulnerabilities in the software, he came back with a sly twist on the old what-if-Microsoft-made-cars jokes: How well do you think your car would hold up if it were attacked every fifteen minutes?
Still, it is well worth exploring whether business and government are in fact doing “the best that they can.” In public policy, when the results are disappointing, one must always look at the effort side of the equation – to ask if appropriate strategies are in place and are being carried out properly; to ask if all reasonable options have been tried. So are we doing the best that we can in cyber security?
The answer appears to be “not by a long shot.”
Having pledged to become a model of good security practice after that White House meeting in 2000, the U.S. government set internal standards. It also began issuing a Federal Computer Security Report Card, giving “grades” to its major branches – not for actual security outcomes but just for implementing good practice and adhering to the standards. Some two dozen departments and agencies have been graded yearly. In 2008 the Report Card gave grades of D or lower to ten of these, including the Department of Defense (D minus), Treasury (F), and the Nuclear Regulatory Commission (F).
Some experts say the government’s approach was flawed from the start – that in cyber security, it doesn’t work well to prescribe common, do-it-this-way procedures for organizations with different IT systems and different functions. Still, the grades are not good.
Nor does the private sector seem to have done better. IT vendors, the firms that make the software and hardware we all buy, have uneven records of making their products less vulnerable to intrusion. Software companies are often slow to issue patches for known vulnerabilities; sometimes months pass before a fix appears.
Internet service providers (ISPs), the companies that link users to the Internet, have been criticized for not deploying security features that are within their purview. In a survey in 2008, the chief security engineers at major ISPs worldwide reported progress in some respects, but expressed pessimism and frustration overall. According to Arbor Networks, the private firm that did the survey, more than half of these engineers “believe serious security threats will increase . . . while their security groups make do with ‘fewer resources, less management support and increased workload.’”12
In another 2008 survey, many end-user firms – big companies using the Internet – were found to have boards of directors that gave little or no thought or oversight to information security. This survey was done by the CyLab research centre at Carnegie Mellon. The researchers spoke of their concern at finding large numbers of board members who, in their view, still “don’t get it,” who still don’t seem to grasp the importance of cyber security.13
Across the public and private sectors there are countless stories of security improvements that could have been made, should have been made, but that were left undone or poorly done. One is the story of the so-called SCADA systems that control electric power grids and other vital infrastructure such as chemical plants, oil refineries, and pipelines. SCADA stands for “supervisory control and data acquisition,” and it is frightening how easy it is to hack into many of these systems. A U.S. government security review of the control systems at TVA, America’s largest public power company, resulted in a searing indictment of the network security systems, or lack thereof.14
When SCADA systems fail, the consequences can be significant. In August 2003, an alarm processor in the control system of an Ohio-based electric utility failed, so that control room operators were not adequately alerted to critical changes to the electrical grid. Then the regional electric grid management system failed, compounding the problem. With these two systems compromised, when several key transmission lines in northern Ohio “tripped” from contact with trees, it was enough to set off a cascading failure of electric power across eight states and a Canadian province.
SCADA software comes from firms like Siemens that are not active in the personal computing market, and until recently industrial plants using SCADA have felt themselves free of security threats.15 But as industrial processes such as pipelines have connected to the Internet, these systems have become fully open to cyber intrusions. So far only accidental SCADA failures have occurred, but the vulnerabilities are there.
Another story about a missed opportunity in security is that of Internet Protocol version 6, or IPv6, a “new” set of technical standards meant to change some of the digital rules for transmitting data over the Internet. Among other benefits, if all software and hardware were set up to follow these standards, it would help make the Internet more secure. The trouble is, though the IPv6 standards are actually not new – they were written and duly promulgated by the Internet Engineering Task Force in 1998 – a sweep of Internet traffic in 2008 showed that hardly anyone was using them.16 With no one to orchestrate a mass conversion (the IETF has neither the power nor the means), it simply didn’t happen. Thus for ten years IPv6 remained little more than an urban legend, discussed fondly and poked at skeptically, like tales of a superhero who might someday come along to help save Gotham.
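As a small illustration of how optional IPv6 remained, here is one way to check whether a machine’s networking stack can even open an IPv6 socket, using Python’s standard socket module. (This is a sketch of my own; it tests local support only, not end-to-end IPv6 connectivity to the wider Internet.)

```python
# Check whether the local networking stack supports IPv6 at all.
import socket

def ipv6_supported() -> bool:
    """Return True if this machine can create an IPv6 socket."""
    if not socket.has_ipv6:  # Python built without IPv6 support
        return False
    try:
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        s.close()
        return True
    except OSError:
        # The OS refused to create an IPv6 socket.
        return False

print(ipv6_supported())
```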
One can find many areas like these where we are doing far from our best. In nearly all of them, it’s clear that what are needed are new public policies or policy changes – to manage the adoption of new core technologies and protocols for the Internet; to truly protect critical infrastructure rather than counting on staying lucky. Recent history shows that adopting new technologies for networks is not inherently impossible: Canada and the U.S. have both shifted from analogue to digital TV transmissions. Mobile telephones worldwide have migrated toward 3G and 4G cellular technologies. The poor adoption of new Internet technologies is a failure of policy.
And in cyber security, just as in economics, we need policies that can correct for market failures in the private sector. After years of relying largely on private initiative and market forces to address security, it is apparent that the market alone just does not respond well. I work with many security experts from the private sector, brilliant people who work long and hard. They are also aware of the complex perversities of the security marketplace. Markets alone may drive companies to push to increase revenues or to rush a breakthrough product to market, but they will not necessarily drive companies to do their utmost in security. Public policies must provide the carrots and sticks that will balance market forces.

BUT IS IT GOOD ENOUGH?
So we haven’t been doing “the best that we can.” But we are left with the next question: Is what we are doing, perhaps, “good enough”?
Although that too is a fuzzy standard (and highly subjective), it is the ultimate test of any policy, the one that can lead to political regimes being toppled or not. Are the results good enough to satisfy people, good enough to keep society functioning at a level we’d like, good enough to be, well, acceptable?
Some would argue that in cyber security, yes, the results are good enough. After all, despite alarming increases in cyber crime, the cyber city has been ticking merrily along. Despite rumours of imminent disaster, the sky hasn’t fallen. For many of us, the main issue is not securing the Internet but getting more of it.
We can’t stand to be away from cyberhome; we want to access the Internet from our cellphones and a growing array of other mobile devices; we want to transform Africa with one laptop per child so the children can be on the Internet. Yes, one can find room for improvement in security, but isn’t it obviously good enough?
Unfortunately, there is evidence that the name of this tune is “Fiddling While Rome Burns” – so much evidence that it’s hard to know where to start.
In the realm of security, be it the physical kind or the cyber kind, we actually have a pretty fair definition of “good enough.” The security consultant Noam Eppel used it in the article I mentioned earlier, which argued that cyber security has broken down utterly. He said that people should feel able to “conduct ‘normal and common’ activities” without being victimized, so long as they take reasonable and not-too-onerous precautions.
Eppel gave a parked-car example. If you park on a public street in daytime and lock your car, with no valuables left visible, the risk of a break-in should be very small. But if someone does break in, it’s probably because you are in a bad neighbourhood where the security is not up to par and, though the locals may have grown used to toughing it out, this is a place where the prospects are not good.
Is the Internet a place where we feel able to “conduct ‘normal and common’ activities” without being victimized? Hardly. We hesitate to click on a link or open a file, knowing that we could be opening the door to strangers. We’ve learned that even if it looks like a friend at the door, it could be a bad guy in disguise.
And the precautions we are advised to take are extremely onerous. If you are diligent about security on your home PC, you may spend so much time surfing the Web to learn about the latest worms and viruses that you feel like a hypochondriac browsing WebMD for symptoms. If you want the best possible protection, you will be spending even more time (and now money) shopping for and installing arcane items such as anti-spyware software, server certificates, and secure FTP clients. (What, you don’t have those last two? Or even know what they are?)
For administrators of big networks with hundreds or thousands of computers, such work is vastly multiplied. And still we aren’t secure. Even the gated communities are not safe. I’ve lived in one of the best – while writing this book, I was on the faculty at Carnegie Mellon, behind the cyber firewalls at one of the most technically savvy universities in the world. Yet while I was working on this very chapter, I asked another professor to join me for coffee and he declined because “my computer has been completely taken over by something, and I can’t use it. I need to get this sorted out.”
Sorting it out can be tough when you can’t trust anyone. At the university I received a courteous but firm e-mail from the security staff, asking me to please do a series of tasks on my office computer to assure that it was guarded against the latest round of threats. I almost did. It was a perfectly official-looking e-mail like others I had seen, but at the risk of seeming paranoid I placed a couple of phone calls instead. The security staff had sent no such e-mail; it was a phishing attack.
Things have gotten to the point where many experts wouldn’t think of engaging in some “normal and common activities.” At a conference of cyber-security experts I decided to ask the attendees if they used online banking. While some thought it was fine to do so, about one-third said they did not and would not. Another third said they did, or would, but only while taking extra security measures far beyond the ken of lay Internet users.
Things have gotten to the point where normal and common activities are being disrupted or, in some cases, prevented from happening at all. In late 2008 and early 2009, the Conficker worm raced across the Internet. The potential for damage was so great that government systems in France and hospital systems in England were shut down to clean out the worm, even though it had not yet done any harm. And after it was learned that the Conficker worm could not only propagate over the Internet but also ride on the USB flash drives that people like to use for moving files from one computer to another, some big institutions (including the U.S. Department of Defense) took drastic steps: they banned USB drives in their offices and sealed up the USB ports on computers with cement.
Paranoia? Perhaps, but paranoia is one of the inevitable results of security that’s not good enough. Two security researchers, Klaus Kursawe of Switzerland and Stefan Katzenbeisser of Germany, co-authored a paper whose title aptly described what the security situation has come to even in the eyes of dispassionate experts – “Computing Under Occupation.”
If in early 2009 you had gone to Professor Katzenbeisser’s Web page at the Technische Universität Darmstadt, thinking of contacting him for further information, you would have found this notice (in English) under his e-mail address:
Due to the large amount of spam I use rigorous anti-spam filtering. If you suspect that your mail does not reach me, please contact me by phone or fax.
Why not use Morse code and send a telegram? Delivery is said to be reliable.
CSIS (the Center for Strategic and International Studies in the U.S.) is a think tank devoted to subjects it considers of great consequence, such as nuclear proliferation. In December 2008, aiming to catch the eye of then President-elect Barack Obama, CSIS released a report titled “Securing Cyberspace for the 44th Presidency.” It said that “inadequate cyber security and loss of information has inflicted unacceptable damage to U.S. national and economic security.”17 Inadequate, unacceptable.
Notable breaches of U.S. security have ranged from the Titan Rain cyber-espionage attacks (see Chapter 5) to incidents in which U.S. businesses and trade delegations have gone overseas to find that the negotiators across the table from them somehow seemed to know all their key negotiating points, and what the Americans would settle for. They’d been hacked.
Nor are damages limited to the U.S. At the 2009 World Economic Forum in Davos, Switzerland, the cyber-security firm McAfee presented a survey and rough estimate of the global costs of cyber insecurity. It was a shocking one trillion dollars per year. Granted, that estimate could be off by a few hundred billion either way. But it counted only the costs incurred by business firms, and only from a certain type of cyber incident, data breaches leading to the loss or compromise of valuable information. If the number is anywhere close to correct, it would represent a stunning burden on firms operating in the world’s already-troubled economies. Is that acceptable? Good enough?
Or look at a series of events around the history-making times of Barack Obama’s election and inauguration.
• November 4, 2008: Mr. Obama wins the election, becoming the first American whose run to the Presidency touched off a mass epidemic of phishing scams. E-mails with startling messages have been flooding the Internet for months; now the tide is rising higher yet. The teaser lines range from “Obama Sex Scandal!” to “Amazing Speech!” and each e-mail has a link to a fake Web site such as greatobamaonline.com. The Web sites look very convincing; some use actual campaign graphics. A visitor who clicks to get the real news gets a malicious download – perhaps a keystroke-logger that will record every password the victim uses on the Internet, so others can get in to see what’s worth stealing. Of course, not everyone takes the bait. But still these complex schemes have all the earmarks of professional, well-planned criminal operations.*
* You may also enjoy a footnote. In this election Sarah Palin, the losing candidate for Vice-President, won a consolation prize. Phishing counts showed that phishy e-mails themed to Ms. Palin outnumbered those themed to the less colourful Joe Biden, Obama’s VP. The ratio was about five to four.
• December 9, 2008: “Patch Tuesday.” On the second Tuesday of every month, big software firms such as Microsoft and Oracle release patches to fix recently found security flaws in their products. IT administrators at companies and nonprofits around the world brace for Patch Tuesday. They may be sent dozens of patches, for various software programs, all to be installed on thousands of PCs under their care. On this Tuesday, several critical patches are for the Internet Explorer program. Installing doesn’t help. Within days, hackers are exploiting or working around the patches.
• January 20, 2009: Barack Obama is inaugurated. A firm called Heartland Payment Systems is criticized for trying to bury its own bad news by announcing it on that day. Heartland – one of the world’s largest online processors of credit- and debit-card transactions – admits that hackers have broken into its system and stolen an estimated one hundred million customer card numbers, breaking the existing record for cyber theft of personal financial data.
If you have ever used a card to pay for a restaurant meal or buy a gift, your number might have been part of that haul. Heartland’s niche is clearing payments for restaurants, bars, and small retailers: the clerk swipes the card through the machine, the data goes up into the system, someone swipes the data. What’s done on the Internet doesn’t stay on the Internet – the trouble can follow you anywhere.
• March 1, 2009: This day’s data-breach news story concerns the special military helicopter used to carry President Obama on short trips. Copies of the blueprints and other sensitive data for this aircraft have been stolen, a security audit has shown. The files were stored on a computer at a defence contractor’s office. Apparently, someone at the office downloaded a file-sharing program to receive and send free music over the Internet, which allowed everything else on the person’s hard drive to be inadvertently “shared” as well. The audit found that hackers in Iran and other countries came window-shopping via the Net, and helped themselves to the details of Mr. Obama’s high-security helicopter.18

THE END OF AN ERA
We have come a long way – in the wrong direction – since the days in early February of 2000, when a then-unprecedented wave of DDOS attacks prompted a White House meeting and the announcement of the first National Plan for cyber security.
The perpetrators of those attacks turned out to be just one person: a fifteen-year-old boy in the suburbs of Montreal. Michael Calce, who went by the online name Mafiaboy, wasn’t really Mafia, or even a computer prodigy. He was just a software-literate teenager, a script kiddie, who got his hands on some of the easy-to-use automated attack tools that had started to circulate. It appears that he brought down Web sites across North America mainly to see if he could do it and then brag about it online.
Which he did. Bragging was not as risky as it might seem because the news coverage of the attacks meant that all sorts of people were likely to claim credit, giving investigators a maze of false leads. But then Mafiaboy bragged about hitting the Dell, Inc. Web site, which had indeed been one of his targets but not one publicly reported in the news accounts, and he was traced and caught.
Mafiaboy was a historic figure in more ways than one. When so many Web sites went down in February 2000, we thought we were seeing the start of a grim new era in cyber attacks, and this was certainly true in terms of the scale and scope of the attacks.
But really these attacks were farewell salvos. Like the climax of a fireworks display, they signalled the end of an era, for Mafiaboy was part of the last great wave of “mischief hackers.” His attacks were among the last significant cyber exploits done for thrills, for bragging rights. Since 2000, the teenaged vandals have been replaced by real mafiosi.
The nature of the adversary has changed profoundly and the world’s cyber-security policies – its laws and regulations – have not kept up.

Notes
1. Although the US-CERT, based in Washington, is now officially the central response-coordinating organization, much of its real work is done by the CERT/CC at Carnegie Mellon.
2. According to the US-CERT Web site: “Worldwide, there are more than 250 organizations that use the name ‘CERT’ or a similar name and deal with cyber security response. US-CERT is independent of these groups, though we may coordinate with them on security incidents. The first of these types of organizations is the CERT® Coordination Center (CERT/CC), established at Carnegie Mellon University in 1988. When the Department of Homeland Security (DHS) created US-CERT, it called upon the CERT/CC to contribute expertise for protecting the nation’s information infrastructure by coordinating defense against and response to cyber attacks. Through US-CERT, DHS and the CERT/CC work jointly on these activities. US-CERT is the operational arm of the National Cyber Security Division (NCSD) at the Department of Homeland Security (DHS).” (www.us-cert.gov)
3. “Results of the Distributed-Systems Intruder Tools Workshop, Pittsburgh, PA, USA, November 2–4, 1999.” Pittsburgh, PA: CERT Coordination Center, Software Engineering Institute, Carnegie Mellon University, 7 December 1999.
5. CERT Advisory CA-2000-01, “Denial-of-Service Developments,” original release date January 3, 2000, www.cert.org/advisories/CA-2000-01.html.
6. Remarks by the President in photo opportunity with leaders of high-tech industry and experts in computer security, 15 February 2000.
7. “Cybercrime,” International Crime and Terrorism Web page, Foreign Affairs and International Trade Canada, http://www.dfait-maeci.gc.ca/foreign_policy/internationalcrime-old/cybercrime-en.asp.
8. “Fight Against Cyber Crime: Cyber Patrols and Internet Investigation Teams to Reinforce the EU Strategy,” Press release, Reference IP/08/1827, Brussels: Council of Ministers of the European Union, 27 November 2008.
9. One paper in 2006 noted that “we discovered evidence of botnet infections in 11% of the 800,000 DNS domains” investigated; another found botnet infections at the rate of 25% in China, 14% in the EU, and 8% in the USA. No figures for Canada were provided. See Moheeb Abu Rajab, Jay Zarfoss, Fabian Monrose, and Andreas Terzis, “A Multifaceted Approach to Understanding the Botnet Phenomenon,” in Proceedings of the 6th ACM SIGCOMM conference, 2006, pp. 41–52; and Rick Wesson, “Botnets and the Global Infection Rate: Anticipating Security Failures,” EE Department Systems Colloquium, Stanford University, 6 June 2007. For a more recent comment on the ranges of bot estimates see Brian Krebs, “Oprah, KFC and the Great PC Cleanup?,” Washington Post, 11 May 2009, at http://voices.washingtonpost.com/securityfix/2009/05/oprah_kfc_and_the_great_pc_cle.html.
10. John Markoff, “Thieves Winning Online War, Maybe in Your PC,” The New York Times, 6 December 2008.
11. Dan Tynan, “The 15 Biggest Tech Disappointments of 2007,” PC World, 17 December 2007.
12. Arbor Networks, Worldwide Infrastructure Security Report, October 2008, p. 28.
13. Michael S. Mimoso, “IT Security Risks Dismissed by Boards, Survey Finds,” SearchSecurity.com, 4 December 2008. The report itself: Jody R. Westby and Richard Power, Governance of Enterprise Security Survey: CyLab 2008 Report, Carnegie Mellon CyLab, 1 December 2008.
14. “Largest US Power Company Is a Network Security Black Hole,” Layer 8 by Michael Cooney, 21 May 2008, NetworkWorld Blogs & Columns, http://www.networkworld.com/community/node/28031.
15. The results of the Institute for Information Infrastructure Protection’s research on SCADA are available on their Web site, http://www.thei3p.org/.
16. James Niccolai, “IPv6 Adoption Sluggish: Study; Vendor-sponsored Survey Shows Slow Migration Rate,” Computerworld, Fairfax [NZ] Media Group, 25 August 2008, http://computerworld.co.nz/news.nsf/tech/8CF2F74925C98009CC2574AC00750583.
17. James A. Lewis et al., “Securing Cyberspace for the 44th Presidency,” Center for Strategic and International Studies, Washington, D.C., December 2008.
18. The stories of false sites related to Obama’s campaign and election are from news reports and public Web postings, such as Ed Dickson, “Fake Obama Site Is a Malware Booby Trap,” blogspot.com, 19 January 2009, http://fraudwar.blogspot.com/2009/01/fake-obama-site-is-malware-booby-trap.html, and many similar. “Patch Tuesday”: Dan Goodin, “Microsoft Issues Emergency IE Patch as Attacks Escalate,” The Register, 17 December 2008, http://www.theregister.co.uk/2008/12/17/emergency_microsoft_patch/; John Leyden, “Bumper MS Patch Batch Spells Client-side Misery / IE Still Vulnerable after Bombardment,” The Register, 10 December 2008, http://www.theregister.co.uk/2008/12/10/ms_patch_tuesday_december/. Heartland breach: Brian Krebs, “Payment Processor Breach May Be Largest Ever,” Security Fix blog, washingtonpost.com, 20 January 2009, http://voices.washingtonpost.com/securityfix/2009/01/payment_processor_breach_may_b.html. Helicopter security breach: Angela Moscaritolo, “Blueprints of Obama’s Marine Helicopter Leaked on P2P,” Secure Computing Magazine, 3 March 2009, http://www.securecomputing.net.au/News/138741,blueprints-of-obamas-marine-one-helicopter-leaked-on-p2p.aspx, and many similar.
Excerpted from Creeping Failure by Jeffrey Hunker. Copyright © 2010 by Jeffrey Hunker. Excerpted by permission of McClelland & Stewart, a division of Random House, Inc. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.