Education’s History of Technotopia
“I believe that the motion picture is destined to revolutionize our educational system,” Thomas Edison said in 1922, “and that in a few years it will supplant largely, if not entirely, the use of textbooks. I should say that on the average we get only about two percent efficiency out of textbooks as they are written today.” A decade earlier, Edison had been even more pedagogically expansive, saying that film makes it “possible to touch every branch of human knowledge.” Now he added: “The education of the future, as I see it, will be conducted through the medium of the motion picture, a visualized education, where it should be possible to obtain one hundred percent efficiency.” Three years later, Edison’s vision was undiluted: “In ten years textbooks as the principal medium of teaching will be as obsolete as the horse and carriage are now. . . . There is no limitation to the camera.”
Almost as curious as this snippet of grandiose soothsaying from one of America’s greatest inventors is the context in which it was presented. Edison’s outlook was reported in a 1939 book, by which time the author had already found reason to be skeptical of technologists’ promises to schools. The book was entitled Motion Pictures As an Aid In Teaching American History, by Harry Arthur Wise, who used Edison’s quotes to illustrate a maxim. “Like many new educative devices,” Wise wrote, “the motion picture was received into the school with a confidence and an enthusiasm not well founded.” Educators’ faith in films was particularly unjustifiable, Wise asserted, because it was “more far-reaching and all-inclusive than can be justified by the findings of more recent educational research.” Wise, a specialist on this subject, arrived at this conclusion after reviewing seven previous studies of teaching through films and finding mixed results; he then conducted his own study, which carefully used equivalent experimental and control groups and other measures of scientific validity current at the time. Here’s what he found: The group treated to films did the best, with test gains deemed “statistically significant.” The films proved particularly valuable in engaging the students’ imagination and in giving them a sense of the historical atmosphere of the period. The boys benefited more than the girls did. The films encouraged low-ability students to learn factual information while helping high-ability students in “acquiring spirit and atmosphere.”
Overall, however, Wise found the benefit of classroom films so dependent on the circumstances—the particular subject matter, the course objectives, the students’ knowledge base, and the skill of the teacher—that they could be endorsed only for use “as a supplement.” Nor, Wise counseled, should teachers feel pressured to abandon their normal routines. “The teacher who is interested in making effective use of any type of visual aid does not need to assume that existing courses of study should be thrown aside and that new units should be built up around particular devices.” Not surprisingly, Wise closed by stressing the need for better teacher training. The film, he noted, “is not self-operating and its use requires much time for preparation if it is to function effectively.” Instructing teachers in how to do just that, he said, “is a matter of paramount importance . . .” As the years progressed, most schools did not follow Wise’s advice. Classroom films—as most of us remember—eventually became a rare occurrence, treated more as a welcome moment of relaxation and entertainment than a study aid.
In 1945, six years after Wise’s book came out, William Levenson, the director of the Cleveland public schools’ radio station, had a whole new technological vision. He claimed that “the time may come when a portable radio receiver will be as common in the classroom as is the blackboard. Radio instruction will be integrated into school life as an accepted educational medium.” It wasn’t long before the famous psychologist B. F. Skinner joined the chorus. “I was soon saying,” Skinner observed while reflecting on the first days of one of his great inventions—the behavioral “teaching machines” of the late 1950s and early 1960s—“that, with the help of teaching machines and programmed instruction, students could learn twice as much in the same time and with the same effort as in a standard classroom.”
Soon after the teaching machines’ moment in the sun, President Lyndon Johnson spoke, in 1968, to a school in American Samoa about the next technological hope. The “one requirement for a good and universal education,” Johnson said, “is an inexpensive and readily available means of teaching children. Unhappily, the world has only a fraction of the teachers it needs. Samoa has met this problem through educational television.”
Johnson’s remarks were something of an understatement. Education in American Samoa was in such a shambles that students were being taught in antiquated one-room schools, and not a single teacher on the island possessed a teaching certificate from the U.S. mainland. In response, Samoa’s governor, H. Rex Lee, had made the overhaul of the school system his top priority. In so doing, Lee rejected the standard solutions—pouring money into the school system, retooling the curriculum, hiring mainland teachers, and retraining Samoan instructors—all of which had been recommended by his aides and by local educators. But Lee wanted fast and total change. So he set out to invest in television. In 1964, Congress came to Lee’s aid, giving American Samoa $1 million for a system of televised instruction—an amount equivalent to approximately $200 per student. Two years later, four of every five Samoan students were spending from a quarter to a third of their class time watching TV. The rest of their day was spent preparing for the telecasts and, later, following up with activities related to the shows they’d watched.
Unfortunately, the proliferation of classroom televisions in American Samoa far outpaced indications of academic achievement. By 1972, three out of four high school teachers and administrators wanted to cut back heavily on classroom telecasts, and over half the elementary school students and 70 percent of the high school students agreed with them. Samoan policy makers soon began returning more control for school management to the teachers, which did not bode well for the TV campaign. In 1979, Wilbur Schramm, a mass-communications specialist, concluded that classroom telecasts had been relegated to “a supplemental enrichment service, to be used when and if the teacher decided it was appropriate.”
Back on the mainland, the attitude toward televised learning was on a similar trajectory. By 1961, $20 million had been invested in classroom television by the Ford Foundation’s Fund for the Advancement of Education. A year later, President John F. Kennedy plowed another $32 million into the venture. As of 1971, public and private sources had spent a total of $100 million on classroom TV.
The reader of this abbreviated history will undoubtedly notice the parallels to today’s hot classroom technology. One could rewrite each of these anecdotes, substituting the word computers for the words motion pictures or radios or televisions and most people would think they were recent news reports. Not too many years ago, in fact, President Bill Clinton was in the news with his own rendition of technology’s old song. In 1995, during his second presidential campaign, he pitched the nation on “a bridge to the twenty-first century . . . where computers are as much a part of the classroom as blackboards.” Despite the fact that this latest technological messiah was estimated at the time to cost somewhere between $40 billion and $100 billion over the next five years, Clinton’s Republican adversaries were happy to sing along. Newt Gingrich, talking about computers to the Republican National Committee as Speaker of the House in 1996, said, “We could do so much to make education available twenty-four hours a day, seven days a week, that people could literally have a whole different attitude toward learning.”
If history is again repeating itself, the schools are in serious trouble. In a 1986 book, Teachers and Machines: The Classroom Use of Technology Since 1920, Larry Cuban, a professor of education at Stanford University and a former school superintendent, observed a pattern in how schools handled each round of technology that mirrored and elaborated Harry Wise’s tale. The cycle always began with big promises, backed by the technology developers’ research. In the classroom, teachers never really embraced the new tools, and no significant academic improvement occurred. This provoked consistent responses from technology promoters: The problem was money, or teacher resistance, or the paralyzing school bureaucracy. Meanwhile, few people questioned the technology advocates’ claims. As results continued to lag, the blame was finally laid on the machines. Soon schools were sold on the next generation of technology, and the lucrative cycle started all over again.
Today’s technology evangels commonly argue that we’ve learned our lesson from past mistakes. As in each previous round, they say that when today’s technology (the computer) is compared with yesterday’s machine, today’s is better. “It can do the same things, plus,” Richard Riley, the former secretary of education, told The Atlantic Monthly in 1997. In a 2002 interview, John Bailey, the director of educational technology under President George W. Bush, bolstered Riley’s view. There is a great opportunity with computers, he argued, that is not yet realized but seems entirely possible: to “personalize and individualize” instruction—pinpointing certain students’ weaknesses, for example, or customizing homework assignments—in ways that their mass-media predecessors couldn’t.
Considering the obvious power of today’s personal computers, Riley and Bailey might appear to be right. However, since schools have been badly burned by so many of technology’s unfulfilled promises, it’s worth pausing a moment to ask an obvious question: What does the record on school computing so far really show? Apparently, hindsight has airbrushed its history quite heavily.
A NEW DAWN, TAKE ONE
In January 1975, a new machine appeared on the cover of Popular Mechanics. It was a funny-looking device—a square box with flip switches on its front plate, connected to a Teletype machine. The machine was called the Altair 8800 personal computer kit and was manufactured by H. Edward Roberts, the president of a small outfit in Albuquerque, New Mexico, called Micro Instrumentation Telemetry Systems, or MITS. The Altair offered 256 bytes of memory, an immeasurable speck by today’s standards, and sold for $397. MITS was promptly bombarded with thousands of orders, and the personal computer was born. One of the people captivated by the Popular Mechanics story was a young Harvard student named Bill Gates, who subsequently ditched Harvard and traveled to Albuquerque in search of a job. Before long, the Altair was shipping with an old mainframe programming language specially adapted for this machine by Gates and his future partner, Paul Allen. Soon, Microsoft too was born, along with a whole new industry, called software. Two years later, in 1977, a handful of machines, called microcomputers, most of which looked like today’s large microwave ovens, were shown off at the inaugural West Coast Computer Faire. One of these was a long, slim, sloping package called the Apple II. The world was suddenly treated to an assortment of computers being manufactured in a form that was relatively small and inexpensive (the first Apple IIs, equipped with a mere 4K of memory, sold for $1,298, a sharp drop from the $18,000 to $20,000 for which a mini-computer had been selling). It was only natural now to start promoting these nimble machines in the nation’s classrooms.
Perhaps it’s a reflection of the personal computer’s increasingly compressed intensity; perhaps it’s simply fate. Whatever the case, this machine’s history in schools repeats, in quickened and more dynamic form, technology’s entire education story. As the years have rolled on, the aspirations attached to each version and function of the computer have washed over the schools in noisy successions of swells and crashes. Indeed, the somewhat quieter discussion we see at the turn of the new century about classroom technology (and computer technology in general) is a predictable ebb, sliding back from the high computer frenzy of the late 1990s. The pattern is not terribly different, in fact, from the nation’s first big flood of computopia, which hit in the early 1980s.
From the personal computer’s debut in 1975 through most of 1981, its manufacturers (and their enthusiasts in the schools) primarily busied themselves with getting their houses in order. Commercial breakthroughs and innovative programs steadily popped up, and schools began the slow process of buying and installing these new machines—in computer labs, in school libraries, and, occasionally, in a few classrooms. Not surprisingly, some of the first innovative classroom uses of the PC arose in its Silicon Valley seedbed. Some of those early visions were quite ambitious, aiming for the same pedagogical goals that twenty-first-century technology leaders would be striving toward two decades later.
An example was the Crittenden Middle School in Mountain View, on the northern edge of the valley. In 1981, Crittenden was already using PCs to make simple graphs, execute geometry exercises, and navigate problem-solving activities. As one teacher, Steve King, put it, the computer let students simulate science experiments “that otherwise would be too expensive or difficult to perform.” King also said he was taken with the computer’s apparent ability to adapt to each student’s individual pace—the same gold mine that John Bailey, George W. Bush’s technology chief, would still be dreaming about in 2002. Crittenden never found that pot of gold, for reasons that the John Baileys of the world might do well to remember. In fact, despite Crittenden’s herculean efforts, computer technology continually failed to take hold at the school in general. Steve King, the technology enthusiast, is of course long gone by now. But Sue Nelson, a longtime language arts teacher at Crittenden, remembers those days quite clearly. “We never got any support,” Nelson said. “Anyone who is any good with computers is out working in industry making big bucks.” Nelson, who had been at Crittenden since the early 1980s, recalled numerous attempts to scale up the school’s computer program that were continually foiled by system crashes, which have continued to this day. “The teachers then spend a day and a half rebuilding everything. The computers have been absolutely frustrating. We’ve got all this potential with three labs on campus, but that’s all it is—a lot of potential.”
While plenty of technology obviously did take hold elsewhere in Silicon Valley, it was years before California launched any organized campaign for computers in schools. It wasn’t even the first state to do so; Minnesota had seen the digital writing on the wall almost a decade earlier. In 1973, it formed the Minnesota Educational Computing Consortium (MECC), an ambitious and, in time, a nationally influential cooperative of state agencies and Minnesota colleges and universities. As an early sign of the coming academic attitude, Don Rawitsch, MECC’s manager of user services, told a reporter in late 1981, “We’ve got to get computers away from the image of being separate from everything else.”
Excerpted from The Flickering Mind by Todd Oppenheimer. Copyright © 2003 by Todd Oppenheimer. Excerpted by permission of Random House Trade Paperbacks, a division of Random House LLC. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.