Internet, yay! Internet, oh no!—surely, it’s obvious by now that there is as much reason for hope as there is for fear from our technological future. A rational and nuanced criticism will seek to define our true circumstances, identify dangers, and encourage beneficial progress. Thus far, however, tech critics have tended to extremes, either for or against the Internet: wringing their hands à la Nicholas Carr (The Shallows), or busting out the pompoms in the manner of Jeff Jarvis (What Would Google Do?). This simple-minded stuff will no longer do. It’s into the vacuum of a powerfully felt need that contemporary theorists like Evgeny Morozov and Jaron Lanier have been drawn.
Morozov is a noted critic and scholar whose influential 2011 book, The Net Delusion, took issue with the idea of the Internet as a tool for encouraging the spread of democracy. He made the useful point that the political dimensions of our new technologies can be as dangerous and as deceptive as they are liberating. Authoritarian regimes, for example, can use online tools to quash dissent quite as easily as dissidents can take to Twitter to organize and to disseminate their views.
Morozov’s perspective was initially grounded in foreign policy and government, but he has since made a name for himself as a techno-gadfly, writing a boatload of articles in newspapers across the globe with titles like “Amordazados por los robots informáticos” (“Gagged by Computer Robots”), “Gli oggetti «smart» che ci rendono stupidi” (“The ‘Smart’ Objects That Make Us Stupid”), and “Not By Memes Alone.” His new book continues in this generally “boo, Internet” vein.
To Save Everything, Click Here is a polemic against what Morozov calls “solutionism” and “Internet-centrism.” “Solutionism” is the tendency to assume that technology can solve any problem efficiently and free of unintended consequences (“an unhealthy preoccupation with sexy, monumental, and narrow-minded solutions”); “Internet-centrism” is the belief that “the Internet” will fix everything (gratingly, Morozov puts the word in quotes throughout the book, in order to remind us that the falsely unifying concept itself keeps us from a true understanding of technology’s effects: “[A]nyone who is desperately trying to understand how today’s digital platforms work is much better off simply assuming that ‘the Internet does not exist.'”).
I made it through the first pages, despite their absurd denunciations of spy trash cans and scolding techno-kitchens (nonexistent products that will never exist), still following gamely along. My engagement, however, was deeply compromised very shortly thereafter.
“… Silicon Valley innovators [...] are the same people who are planning to scan all the world’s books and mine asteroids. Ten years ago, both ideas would have seemed completely crazy; today, only one of them does.”
“Ten years ago” would mean 2003. In fact this vision, and the practical work of digitizing the world’s books, began more than thirty years before that: in 1971, with the late Michael Hart, the founder of Project Gutenberg (who was in no way a “Silicon Valley innovator,” then or ever): he observed that “the greatest value created by computers would not be computing, but would be the storage, retrieval, and searching of what was stored in our libraries.” By 2003, Project Gutenberg, a free resource for public domain texts, had already digitized and made available over 10,000 books; only a very blinkered observer wouldn’t already have known—for a decade at least—where the project of scanning all the world’s books was headed.
This brought home to me that Morozov does not describe the Internet I know at all. My Internet is not only the Mark Zuckerberg Internet, or the Kleiner Perkins Internet; it’s the Internet of Michael Hart and Brewster Kahle, Aaron Swartz and the Electronic Frontier Foundation, the Public Library of Science and the new Digital Public Library of America, JSTOR and countless public archives and library and museum sites all over the world. It’s the Internet of preservationists and digital humanitarians, of scholars and intellectuals of all kinds.
So it makes no sense to me at all to hear nihilist talk of how “solutionism” is particularly rooted in the Internet. If the Internet were a world, Morozov would be blithely ignoring whole continents, whole oceans, in order to criticize certain aspects of one small province—Silicon Valley—and then extrapolating from them to encompass the rest.
Whatever may be wrong with Silicon Valley startup culture (and I am the first to agree that there is plenty) is not wrong with the whole of the Internet. Nor is there anything stopping anyone with a better idea of how to manage online political activism or recycling or restaurant reviews from trying it out this very minute. That’s the beauty part of the Internet (which, yes, is a thing that exists, despite Morozov’s ludicrous claims to the contrary).
Morozov fails to make a meaningful distinction between the particular frailties of technologists and the regular human kind. Human beings have always had problems; have always attempted to solve them using available means; have sometimes, to some degree, fallen prey to irrational optimism about their chances of success (and prey, as well, to Eeyore-like melancholy and pessimism regarding those same chances).
His reputation as a pugnacious enemy of techno-utopianism nevertheless obliges Morozov to round up all the Internet theorists and bash them en masse, whether the underlying premise of his assault makes sense or not. The big trouble here is that Morozov often has to distort and/or misread the work of these authors in order to buttress his points. Anyone crazy enough to call attention to favorable aspects of the Internet’s role in our lives can and will be subjected to beatings with the same stick he hauls out for authors of the most far-out views, such as those of Singularity University co-founder Peter Diamandis—who apparently believes that all of humanity’s major problems will be solved via technology within the next twenty-five years or so.
There are a lot of examples of Morozov’s misreading in To Save Everything, but I’ll illustrate with just one from Clay Shirky, the tech critic with whom Morozov seems to have the most particular beef. (Shirky is mentioned sixty-eight times in the book.)
It appears that in 2000, an attorney and food writer named Steven Shaw took to the pages of Commentary to rail at the then-ubiquitous Zagat restaurant guides (which had been compiled, beginning in 1979, through the earlier, mail-order form of crowdsourcing). Shaw detailed the shortcomings of the Zagat system, and scoffed at the very idea that the “top restaurant” chosen by Zagat reviewers in New York for four years running should have been the unpretentious Union Square Cafe, when you could have been going to Lespinasse, Jean Georges or Daniel instead.
In his book Cognitive Surplus, Shirky cited Shaw in order to illustrate the effects of crowdsourced opinion on the professional kind.
Shaw is unwilling to condemn Union Square as a bad restaurant; it’s just not the kind of restaurant people like him prefer, which is to say people who eat in restaurants professionally and are happy to have a little intimidation with their appetizers. [...] [But] when we can all now find an aggregate answer to the question “What is your favorite restaurant?” we want that information, and we may even prefer it to judgments produced by professional critics.
Inexplicably, Morozov’s interpretation of this is that Shirky “brims with populist, antiestablishment rage against professional critics and promises that, thanks to ‘the Internet,’ the masses can finally dispense with their highbrow pretensions.” For Morozov, Shirky’s message is that “pre-Internet meant expertise, post-Internet means populism; we are post-Internet, hence, populism.”
Except that I got no such thing from reading Shirky’s original remarks. He poked a little fun at the pretentiousness of fancy restaurant critics, it’s true, but his main point was that if we want to know places where lots of people like to go, well, now we can find out, and the availability of this knowledge necessarily alters the role of professional critics.
So I contacted Shirky directly and asked him: were you saying that Yelp is superior to professional restaurant criticism, or that professional restaurant critics should be done away with? He replied:
Obviously not true. The point is that the conditions in which a professional has to exist have to be relevant to what the public they’re serving is interested in. [...] The argument for the professional should be: We can add a kind of value that the aggregate mass of opinion can’t.
Shirky’s message for professional food critics was simply: add value. (And it seems to me that they really have, too, since 2000; to a far greater degree, restaurant critics have become leaders and allies rather than authorities, addressing not a distant public, but a dynamic, engaged group of fellow-travelers.)
All these nuances are lost on Morozov: all he seems to see is, Shirky thinks the Internet always knows best, and he’s always wrong. Neither of those premises is correct.
A number of Morozov’s recent targets have responded with similar attempts at disentanglement; many have added that at bottom, there really isn’t so much disagreement between themselves and their attacker. Farhad Manjoo, addressing Morozov as a colleague, in Slate: “I and pretty much everyone else who thinks or writes about the digital world—both skeptics and boosters—rely on broad terms like ‘technology,’ ‘the Internet’ [...] You may be right that such generalizations sometimes obscure rather than illuminate our conversations.” Tim Wu, in The Washington Post: “[...] tech thinkers do have a bad tendency to believe a little magic dust can fix any problem. [...] And I tend to agree with Morozov that writers such as Jeff Jarvis [...] are entirely too forgiving of firms such as Facebook.”
In a separate (and wearying) 16,000-word piece in The Baffler’s 22nd issue, Morozov took aim at open source software advocate Tim O’Reilly, whom he accuses of being a “meme hustler.” O’Reilly responded in a Google+ post that the piece “[is] well researched and captures many of my ideas, but then twists each of them in order to serve Morozov’s own ends. Truth and untruth are so cleverly mixed [...] I suspect Morozov and I agree on many things about the Internet and its effect on society, though you’d never think so from what he’s written.”
(Also: how is Morozov’s peddling of the idea of “solutionism” not itself “meme hustling”?)
Shirky added: “What Evgeny wants is to make certain kinds of conversations harder to have; he doesn’t want to add to the debate so much as to stop it from happening. His MO is essentially to say, there are a bunch of people thinking about the Internet, and here is one of them who is visibly crazy, right? [...] and therefore, all of them believe this. That strategy will [be effective] to the degree that he can convince people not to read our work. You wouldn’t know from reading him that I’d put a critique of slacktivism in a book in 2008; you would not know that he and I agree about WikiLeaks. It’s this kind of burning down the house.”
* * *
The second and worse disruption in my engagement with this book occurred a little later, with a seemingly throwaway remark to do with the history of science. We must be mindful of history; this is a constant refrain of Morozov’s: okay, so far so good.
As historian of science Steven Shapin argues, “The past is not transformed into the ‘modern world’ at any single moment: we should never be surprised to find that seventeenth-century scientific practitioners often had about them as much of the ancient as the modern.” Our contemporary framing of those changes as an event or series of events—as a well-contained “revolution” with start and end dates—is a relatively recent phenomenon; the very phrase “scientific revolution” was probably coined by philosopher Alexandre Koyré in 1939.
1939?! Alexandre Koyré?! Who he? That’s the sole mention of him in the book; Koyré doesn’t even appear in the index. What purpose can this baldly unsupported remark possibly serve? It advances no argument, it is just dangling out there, maybe as a bit of would-be intellectual braggadocio. But mainly I thought, no way, that cannot be true. And it isn’t: it was the work of five minutes on Google Books to discover the phrase “scientific revolution,” used in the sense given, in Popular Science in 1896 and 1909. Expand your search to French and you will find, among dozens of other nineteenth-century citations, one from Théophile Gautier in the Revue de Paris of 1857, and again, applied to the works of Newton and Descartes, in Le Producteur in 1826.
At this point I admit that though I slogged through the book, I could no longer take it seriously, particularly when Morozov continued, as he does throughout, to chide other, better writers for failing to attend to history.
Because of the Internet’s meteoric growth and global reach, because of the near-magical changes it has already wrought in everything from buying a plane ticket to reading the news, it’s true that some theorists and entrepreneurs exhibit irrational exuberance about its future potential. Pretty much everyone who writes about technology at the moment appears to agree that there is too much techno-utopianism in the air, too much Internet cheerleading, and that a more deeply questioning, more nuanced tech criticism is needed. So, because Morozov approaches technology as a self-avowed skeptic—taking the position currently fashionable among tech critics—pointed criticism against Morozov himself is in effect preemptively blunted.
For example, writing in The Atlantic, Alexis Madrigal called To Save Everything, Click Here “a delight: [...] a high-wire performance, a feat of intellectual daring”; the headline of his review is “Toward a Complex, Realistic, and Moral Tech Criticism.” Who’s going to argue with that as a goal? Nobody! It sounds great. Though a closer look at Madrigal’s long review reveals a whole lot of caveats regarding Morozov’s approach and his conclusions, he seems to believe that any weaknesses are outweighed by the value of the much-needed soul-searching engendered by the book’s challenges to techno-utopianism. Fair enough, I guess.
But Morozov’s criticism seems to me to be neither complex, nor realistic, nor moral. He is anything but a dialectician. He has little respect for anyone whose opinion deviates from his own, and respect for opposing views is surely the first quality required of a real scholar. He maintains a pyrotechnically unfortunate Twitter feed, wherein he will totally yell at and abuse anybody who contradicts him: to my mind that reflects very badly on Morozov, and not at all on his targets. You’d think a person who seemingly wants to be congratulated on his scholarship would adhere to the most basic obligations of scholarship: paying close attention to the arguments he is attacking, giving his opponents their due, representing their ideas fairly and accurately, listening to their responses and attending to them respectfully, and so on. Ha! Well. This book!—basically, it’s a glorious victory over an army of straw men.
* * *
Much has been written about the storied career of computer scientist, virtual reality pioneer and musician Jaron Lanier (including a New Yorker profile two years ago). He entered college at age 13; he hung out with physicist Richard Feynman at Caltech, and with Marvin Minsky, the artificial intelligence pioneer, at MIT. In 1983, he programmed a deliriously spacey music-generating art video game called Moon Dust for the Commodore 64. He worked at Atari, and co-founded startups that later became parts of Oracle, Adobe, and Google. He does research at Microsoft. He collects and plays rare musical instruments, and he performs and gives talks (“I have a 200-year-old Chinese flute with a concealed dagger!”). He is an eccentric, a visionary and a polymath.
My first exposure to Jaron Lanier the writer was with the influential 2006 essay “Digital Maoism: The Hazards of the New Online Collectivism.” It contains the memorable remark, “[T]he hive mind is for the most part stupid and boring. Why pay attention to it?” I found this essay arrogant and wrongheaded; I disagreed with Lanier almost entirely about Wikipedia, and never quite shook that first impression of him as an elitist thinker, someone no dedicated egalitarian could ever enjoy reading.
Man, was I ever wrong. Lanier’s new book, Who Owns the Future? is rich in ideas, imagination and humanity. Despite quite a lot of loopy bits, this self-described “book of hypotheticals, speculation, advocacy” succeeds in proposing the beginnings of a possible—even practical—way out of the soup of wealth inequality and economic decline in which we have unhappily landed ourselves in the information age.
Such a simple idea: redesign the Internet so that all who participate, whether by providing personal data or by writing influential blog posts, make money by their participation. This would require implementing a system of universal micropayments to create what Lanier calls a “humanistic information economy.”
Our data has monetary value, Lanier argues; we should all be compensated for it. We’ve been tempted into contributing data and content for free, thereby enabling the development of massive online monopolies with the lure of things like cheap or free music, books, games and/or social interactions, but we don’t see the real motivation for offering these things: “the creation of ultrasecret mega-dossiers about what others are doing, and using this information to concentrate money and power. It doesn’t matter whether the concentration is called a social network, an insurance company, a derivatives fund, a search engine, or an online store. It’s all fundamentally the same.”
“We love our treats,” Lanier concludes, “but will eventually discover we are depleting our own value.”
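Lanier sketches the economics of this proposal, not the engineering, so the following toy sketch is mine, not his; every name in it (Ledger, record_contribution, distribute_revenue) is hypothetical, invented purely to illustrate the core mechanism he describes: keeping provenance links from products back to the people whose data built them, and routing micropayments along those links.

```python
# Illustrative sketch only -- Lanier proposes no concrete API; all names
# and structures here are invented for the example.
from collections import defaultdict

class Ledger:
    """Tracks who contributed data to a product, and routes micropayments
    back to those contributors when the product earns revenue."""

    def __init__(self):
        self.provenance = defaultdict(dict)   # product -> {contributor: weight}
        self.balances = defaultdict(float)    # contributor -> accrued payments

    def record_contribution(self, product, contributor, weight=1.0):
        # Provenance is the key idea: every datum keeps a link to its source,
        # rather than being absorbed anonymously into the product.
        current = self.provenance[product].get(contributor, 0.0)
        self.provenance[product][contributor] = current + weight

    def distribute_revenue(self, product, amount):
        # Pay contributors pro rata to their recorded share of the product.
        shares = self.provenance[product]
        total = sum(shares.values())
        for contributor, weight in shares.items():
            self.balances[contributor] += amount * weight / total

ledger = Ledger()
ledger.record_contribution("translation-model", "alice", weight=3.0)
ledger.record_contribution("translation-model", "bob", weight=1.0)
ledger.distribute_revenue("translation-model", 100.0)
# alice's balance is now 75.0, bob's 25.0
```

The hard parts of Lanier’s proposal, of course, are exactly what a sketch like this elides: measuring each contribution’s real value, and persuading the “siren servers” to keep the provenance records at all.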
Lanier doesn’t tell the story of the nineteen-year-old Mark Zuckerberg, but it sprang to mind immediately. Even as a teenager, Zuckerberg understood the value of private information, and exhibited the tendency to control and hoard it that would later characterize his company, both in its management and in its treatment of users. His infamous line—“They trust me—dumb fucks”—resonates for a reason, and subsequent events suggest that Zuckerberg’s attitude has altered but little in the intervening years. It’s very refreshing to see a Valley insider like Lanier willing to reveal the Zuckerberg brand of information profiteering—in the guise of providing a service—for what it really is.
There are a few serious weaknesses in the book: Lanier’s views on higher education (and the alleged ease and practicality of replacing it with online tools) are highly questionable; he shares with many technologists a low degree of respect for, or understanding of, the work of teachers; otherwise, how could he write: “Why are we still bothering with higher education in the network age? We have Wikipedia and a world of other tools. You can educate yourself without paying a university. All it takes is discipline.” Maybe that is relatively easy to say for someone who had access to teachers like Feynman and Minsky as a young man.
Lanier’s position as a top technologist may also blind him to the potentially growing difficulty of herding consumers who’ve grown up on the Internet. Facebook fatigue, for example, is increasing among people of college age and younger who’ve already spent hundreds or thousands of hours on Facebook and Instagram as children. For them, social networks are no longer new toys; they’re old ones, perhaps ready to be discarded. Tomorrow’s Internet users may yet develop into far more discerning citizens, online and off, than the mahoffs of Sand Hill Road are giving them credit for.
But when Lanier sticks to expertly tracing today’s economic ills back to the growth of large networks, or revealing the arcane workings of Silicon Valley’s startup culture, or describing exactly how Big Data is mined and used, his insights are arresting and persuasive, even brilliant. As in this passage, a capsule explanation of the fundamental cognitive dissonance of Silicon Valley:
Someone might be playing the technological triumphalist, celebrating the brashest entrepreneurs of the moment, but then end up imagining a weirdly socialist utopia in the future. This is one of the most common switchbacks, one that never fails to amaze me. “Free Google tools and free Twitter are leading to a world where everything is free because people share, but isn’t it great that we can corner billions of dollars by gathering data no one else has?” If everything will be free, why are we trying to corner anything?
He can also be pretty funny. A favorite passage recasts Pascal’s Wager in the image of Captain James T. Kirk in order to make the case that in projecting a future for humanity, optimism, foolish as it may seem, is in fact—like Pascal’s belief in God—merely a good bet.
Pascal suggested that one ought to believe in God because if God exists, it will have been the correct choice, while if God turns out to not exist, little harm will have been done by holding a false metaphysical belief. Does optimism really affect outcomes? The best bet is to believe that the answer is “Yes.” I suppose the vulgar construction “Kirk’s Wager” is a workable moniker for it.
(Just for the record, I don’t find that construction the least bit vulgar.)
The end of the book, which describes possible avenues for the implementation of Lanier’s ideas, provides a good point of departure, but there are problems here, too. Those who are already extracting billions from the information economy are not going down without a fight, and I think Lanier underestimates the fury with which they will try to squash any attempt to deprive them of so much as a nickel. Startups may arise to challenge their hegemony, but it will take a monumental amount of grassroots support to dethrone our current information oligarchs. Nor does it appear that Lanier sees a role for the fourth estate in creating the humanistic information economy, an oversight that strikes me as short-sighted.
But the aims of this book, and much of its underlying reasoning, are really exciting and great. In short, count me on the side of Captain Kirk.
* * *
Last month, Nieman Lab published a piece by Nicco Mele and John Wihbey, in advance of Mele’s new book The End of Big. In it, they advance the notion that news organizations should “move from brands to platforms for talent,” capitalizing on the star power of individual writers to generate new business models for media.
The most serious problem with this proposal is its neglect of the scourge of punditry, which is a dire threat to the quality of public discourse. Once a writer suffers the misfortune of stardom he is in danger of becoming a caricature of himself, inextricably trapped in his signature style, opinions or ideas; examples of this abound in television news and on the op-ed pages. With the enlargement of a writer’s fame comes the lessening of broader editorial imperatives to which he must conform. Thus do famous authors sometimes calcify. Their capacity for nuance deteriorates and gives way to a tired old bag of tricks. Punditry, in short, is apt to create monsters.
But it needn’t do so. Though a great many of our tech critics (Morozov among them) are calcifying before our very eyes, it is encouraging to find that an authority of the stature of Jaron Lanier can still be more concerned with the truth—complex, contradictory, and difficult as it is—than with his own place in the galaxy of media stars.