The ignorance of one voter in a democracy impairs the security of all.
--John F. Kennedy, 1963
If I knew then what I know now, I might have recognized the sound of the phone ringing on my desk as something like the bell at the start of a wild horse race. My life, relatively calm on that warm day in July 2003, was about to change in ways I could never have imagined, and sometimes still can't quite believe. Even so, if I knew then what I know now, I probably would have answered the call anyway.
The call came from David Dill, like me a professor of computer science, a man I had never met but knew by reputation. David was one of the leading voices in the computer science community to speak up with concerns about electronic voting. He was calling from his office at Stanford, and I was speaking from mine at Johns Hopkins in Baltimore. He asked me if I knew that Diebold's source code was available for download on the Internet. I was a little embarrassed that my first thought was, What's Diebold?--until I told my wife the story and she asked, "What's source code?" David explained that Diebold was the country's leading manufacturer of electronic voting machines and that in the midterm elections of 2002 its AccuVote-TS and TSx machines had been used in thirty-seven states.
"You mean the actual elections, where people go to the polls and vote?" I asked.
I hadn't realized that e-voting had already caught on to such an extent. I asked David if the code he was talking about was part of the back-end processing system, the code in the individual voting machines themselves, or some other part of the larger system. He told me that the source code for the individual terminals themselves, the machines that millions of Americans would use to cast their votes, was available for download from a New Zealand-based website. An activist named Bev Harris had found the files, completely unprotected, on Diebold's own servers and had posted them.
I couldn't believe it. The companies that make such machines are notoriously--and fanatically--secretive about their systems. Outsiders could never examine the source code except under ironclad nondisclosure agreements, and inside the companies very few people had access to the code. Even professionals like us had no idea how the systems were designed and developed, and we didn't believe there was much chance we'd ever find out. As unnerving as David's news was, it was also exciting. Conceivably, we could have our first glimpse into the inner workings of the machines at the heart of our electoral future. David let me know that he had called a number of computer scientists with this news but had been particularly interested in contacting me.
My special expertise is in computer security. I had become involved in electronic voting somewhat haphazardly in the late 1990s, when a colleague enlisted me in an analysis of computerized voting commissioned by the government of Costa Rica. That project led to my appearance before a National Science Foundation (NSF) panel on the subject, after which I wrote up my comments for publication and began giving academic talks on it. When David Dill circulated a petition advocating verified voting--the simple idea that when people go to the polls to cast their votes, they should be able to verify with a receipt that their votes have been recorded correctly--I was among the first to sign. It wasn't that I had any special passion for politics, but the principle seemed straightforward to me, and the problems caused by computerized voting seemed obvious, as they did to most computer scientists.
I was already aware of several compelling arguments against electronic voting machines, notably that because the systems were proprietary, they hadn't been proven to be tamperproof; nor was the voting verifiable after the fact. If it were now possible to do a security analysis of the Diebold terminals, it would be an opportunity to address the first problem publicly. As for the second, it was the computer security community, ironically, that first recognized that achieving the goal of verifiability might involve old-fashioned, low-tech paper ballots.
There are several levels of code, that is, the instructions created by programmers to run computers. Source code, which is written in a variety of commonly used and readable programming languages, instructs the computer to perform specific tasks. Once source code is written, a program called a compiler is used to convert it into object code, also referred to as executables, the seemingly endless stream of ones and zeros that is comprehensible only to a machine and is what actually runs inside a computer. Professionals are trained to read source code, but even for us it can be profoundly difficult to decipher and understand a complete, complex program. Computer and software companies look upon their source code as their most precious asset and guard it like Coca-Cola guards its secret formula. That any company, especially one as sophisticated as Diebold, would allow such a basic breakdown in security that its source code would be available for all the world to see was unimaginable to me.
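For readers who want to see the distinction concretely, here is a minimal sketch, in Python rather than the language of any voting system, chosen purely for convenience. Python compiles source to bytecode for a virtual machine rather than to native object code, but the contrast the passage describes, readable source text versus an opaque compiled form, is analogous. The `add` function here is an invented example, not anything from the Diebold code.

```python
# Human-readable source code: a programmer (or a security researcher)
# can read this and understand exactly what it does.
source = "def add(a, b):\n    return a + b\n"

# The compile() step plays the role of the compiler described above:
# it converts readable text into instructions for the (virtual) machine.
code_obj = compile(source, "<example>", "exec")

# The compiled program still behaves exactly as the source dictates.
namespace = {}
exec(code_obj, namespace)
print(namespace["add"](2, 3))  # prints 5

# The compiled form, by contrast, is opaque: raw bytes, not prose.
# This is (loosely) the "stream of ones and zeros" the text refers to.
print(type(code_obj.co_consts[0].co_code))  # a bytes object
```

Having the source code, as the Hopkins group did, is what makes meaningful analysis possible; with only the compiled form, even experts must painstakingly reverse-engineer the program's behavior.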
I had to see for myself. I opened up Google and typed in "Diebold source code." Sure enough, the very first hit was a website in New Zealand that appeared to contain all of the source code and documentation for an electronic voting machine. The documentation was password-protected, but the source code itself, the real treasure, was not. Apparently, whoever had set up the site had copied the code from Diebold's site before the company wised up and shut it down. If this was what it seemed to be, it was indeed a stunning discovery.
In that first phone call, David Dill suggested that we assemble a group of computer security experts to analyze the Diebold code. While David got busy contacting colleagues by phone and e-mail, I turned instinctively to Adam and Yoshi, the two PhD students who were working with me that summer. If anyone was going to get excited about getting their hands on this code, it would be these two guys.
I've worked with a lot of brilliant students, but none have impressed me like Adam Stubblefield. Adam was brought to my attention by Dan Wallach, his undergraduate adviser at Rice University. I introduced myself to Adam at a computer security research conference where, though only nineteen, he had presented a fascinating paper he co-authored with some well-established and accomplished researchers. His modesty, charm, and obvious intelligence immediately won me over, and on the spot I offered him a summer internship working with me at AT&T Labs. It was a remarkably successful and productive summer, with major national media taking notice of the work we did (professor-speak for the work he did). Adam was the first person to crack the security of the wireless networking protocol that was designed to protect Wi-Fi communication. And it took him less than a week. Adam and I had developed a great relationship, and I invited him to come with me when I went to Johns Hopkins.
A few years older than Adam, and equally impressive, Tadayoshi Kohno, who goes by "Yoshi," brought to our team experience and discipline that nicely complemented Adam's raw, untamed genius. Yoshi, who had been a PhD student in computer science at the University of California at San Diego, had approached me at a conference and asked if I was hiring any summer interns. This was just before I was to leave AT&T for Hopkins, so I jokingly told him he'd be welcome to come spend the summer with me, earning the lowly wages of an academic. Amazingly, Yoshi wasted no time, subletting an apartment in Baltimore and leaving his wife on the other side of the country for the summer. Later Yoshi even lived for a while in a spare room in our house. He was often off to work before we got up in the morning, and still out working by the time we went to bed at night, but somehow he found time to develop a strong and lasting friendship with our four-year-old daughter.
With Yoshi in my office and Adam at home connected by speakerphone, I described David Dill's phone call, suggested that the three of us form the core of the group doing the analysis, and asked if they were willing to drop everything else we were working on to concentrate on this project. Adam, who must have been sitting at his computer, replied that he was already downloading the Diebold code. Without missing a beat, Yoshi asked if he could go do the same. We hadn't even gotten started, and I could tell these guys were hooked.
Meanwhile, David Dill had assembled a formidable team of interested computer scientists and got about seven of us together on a conference call to discuss the logistics of a collaborative and distributed analysis. It's hard enough to work on a big project with people right down the hall. I couldn't imagine how a group this size, spread all over the country, could work effectively, but I was eager to get started and went along with things. It was agreed that results would all be shared on an e-mail list.
At Hopkins, Adam and Yoshi dove right in and spent the first few hours poring over the source code. They compiled the code and were able to run it on the Windows machines in our lab, literally creating a working voting machine. Adam began posting their results to the e-mail list, but only Dill responded. The pair made astonishing headway over the next several days, while we continued to hear nothing from any of the other members of the group. I decided to move ahead on the project as a small Hopkins team rather than keep up the pretense that it was the work of a larger group. It may have been a rash decision, and it was not without its cost. At least one of the people on the original call bore me a grudge for a very long time.
Also in that first conference call, someone raised the specter of the DMCA. The Digital Millennium Copyright Act, a law signed by Bill Clinton in 1998 that, among other things, makes it illegal to circumvent a digital copyright protection system, can be a great thorn in the side of computer security researchers, especially those whose work focuses on exposing the weaknesses in those very systems. In the past, the law has been interpreted broadly and has often been invoked to hinder the kind of research or analysis that I like to do. It reinforces the notion that companies like Diebold never need to disclose the code that runs their machines, not even to the government and not even when the machines serve critical public functions. A young Russian computer scientist went to jail in 2001 for demonstrating how to break one of Adobe's protection schemes. He was a PhD candidate with two small children, presenting at an academic conference in Las Vegas, arrested for doing what he was trained to do. He was released on bail but not permitted to return to his family for six months. The DMCA is a sword hanging over our heads.
Knowing I needed to understand the legal risks better, I contacted Cindy Cohn, a sharp, tenacious attorney known as an effective defender of computer scientists against just this kind of legislation and legal trickery. Cindy works for the Electronic Frontier Foundation (EFF), an organization founded in 1990 to help safeguard civil liberties in the age of digital communications technology. I brought her up to speed on our investigation into the Diebold code and asked if we needed to worry about the DMCA. Cindy immediately--although already too late--said that she wanted to have a conversation with me, Adam, and Yoshi before any of us started looking at the code. When she had us all on the phone, she explained a point that should have been self-evident: since there was no protection scheme attached to the code, it probably wasn't relevant in this situation. The documentation, however, was another story. It was password-protected, and even though we had easily found the passwords online, she instructed us not to use them. The documentation would have saved us considerable time, but we chose not to look at it. Cindy added that she needed time to investigate whether or not the code represented a trade secret of Diebold's, like Coca-Cola's recipe. If so, we would be upping the ante considerably if we were to publicly expose any flaws. Cindy offered to represent us pro bono, and I can hardly express how much her reassuring presence comforted me. I'm not sure we would have continued without her.
In the end, Cindy concluded that there was no real case to be made for the code as a trade secret. The code had been copied from Diebold's own website and had been available online for months. By checking with the host of the site in New Zealand and with that person's Internet service provider (ISP), Cindy established that Diebold had made no attempt to get its code removed. We could safely assume that Diebold knew about the online availability of its intellectual property, yet had made no effort to protect it. Cindy felt this pretty well deflated any trade-secret argument the company might ever hope to make, and she gave us the green light. The legal staff at Johns Hopkins confirmed Cindy's opinion: the documentation was off-limits, but the code itself was fair game.
Adam and Yoshi were consumed by the dissection of the code. Almost immediately, they flagged some serious problems. It seemed like every hour or two one of them would pop into my office and breathlessly tell me about a new wrinkle he had discovered. We were stunned by some of these discoveries and began to sense how big this thing might be. When I called Cindy after a few days to tell her that we would soon be ready to go public with our findings, she was amazed at how quickly we had moved. She mobilized the EFF legal team to be ready to review our drafts and come to our rescue if necessary.
I'll never forget what Adam and Yoshi put themselves through that week. They got almost no sleep, and Yoshi seemed to barely register the presence of his visiting wife, whom he hadn't seen in weeks. I've gotten most of the recognition for this work, but these two did the heavy lifting.
One other researcher soon joined our little core group. Dan Wallach, the professor who had been Adam Stubblefield's adviser at Rice, had been in on the conference call with David Dill and had also begun assembling a team of students to analyze the code. He had no way of knowing that Adam and Yoshi had moved like demons and were preparing a report by the end of that first week. When Dan e-mailed me with some thoughts on the project, I had to explain how far along we already were at Johns Hopkins. Dan didn't take it well that we had moved ahead unilaterally, but after hearing about the sacrifices, not to mention the discoveries, that the two grad students had made, he softened. The truth was that the project needed Dan, not only because he could help with the analysis, but also because he's a terrific writer and could bring much-needed clarity to the report, an invaluable contribution given how widely it was distributed.
Dan was also well acquainted with the legal issues surrounding security research, as were the lawyers at Rice. There had been complications relating to the DMCA with a controversial paper he co-authored about digital music, and now he correctly surmised that he would need clearance if his name was going on a report as explosive as ours seemed it would be. It was the middle of summer, however, and he couldn't get the attention he needed from the legal staff. This seemingly small roadblock would have unfortunate consequences for him down the road. Our work ultimately became known as the "Hopkins Report" instead of the "Hopkins/Rice Report."
Excerpted from Brave New Ballot by Aviel Rubin, Ph.D. Copyright © 2006 by Aviel David Rubin. Excerpted by permission of Broadway Books, a division of Random House, Inc. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.