“We Don’t Want the Program”: Jill Lepore on How Tech Can’t Fix Democracy


The interview you are about to read took place on September 17, 2020, inside the hallowed walls of Zoom. Dozens of people gathered around the virtual fireplace to celebrate the launch of Jill Lepore’s latest book, If Then, a book that has been described as a revelatory account of the Cold War origins of the data-mad, algorithmic 21st century.

Writing in the Financial Times, Hannah Murphy said, “[The] Harvard professor and New Yorker writer meticulously chronicles how Simulmatics laid some of the earliest foundations for the field of predictive analytics, today wielded by internet platforms, advertisers, and political strategists to help sell consumer products or election candidates. The chaotic band of scientists, psychologists, and slick Madison Avenue advertising prophets have been—until now—the unknown grandfathers of Facebook and Google, and all of their whizzy algorithms.” This book provides an exquisite foundation for understanding the relationship between American electoral politics and data science, an understanding that is critical as we enter the final throes of an election season that will be unquestionably marred by epistemic debates. We can only hope that the New York Times does not punish us with yet another wiggly dial predicting the outcome of the election.

Jill Lepore is the David Woods Kemper ’41 Professor of American History and an affiliate professor of law at Harvard University. She’s also a staff writer at the New Yorker and host of the podcast The Last Archive. Her many books include These Truths: A History of the United States, an international best seller that was named one of Time magazine’s top-10 nonfiction books of the decade.


Jill Lepore (JL): To get this conversation started, let me begin by telling you a little bit about the Simulmatics Corporation. It’s an extraordinarily obscure story that I fell into when I opened an archival box at the MIT Library. The contents of that box, of hundreds of boxes, really, answered a lot of questions that I didn’t even know I had. And so, I felt obligated to write a book.

The Simulmatics Corporation was founded in 1959 by a guy named Ed Greenfield, a dazzlingly charismatic Madison Avenue adman who was also a devoted liberal philanthropist and a very devout supporter of civil rights causes who had worked on Democratic Party campaigns throughout the 1950s.

Greenfield was also a really smart guy. He was very drawn to the kinds of men that David Halberstam called—with considerable irony—the “Best and the Brightest.” Greenfield was particularly interested in the research being done in the behavioral sciences and in the emerging field of computer science in the 1950s. He was like Danny Ocean: he put together this incredible team of people—in this case, to design an election simulator.

Today, people simulate elections all the time, but it was brand new in the 1950s. When you think about it, that makes a lot of sense: if you were interested in trying to undertake the creation of a predictive model for human behavior in the 1950s, voting would be the thing that you’d most likely want to work on, because it’s one of the few realms of human behavior for which there was, at the time, a vast quantity of data: census data, public-opinion surveys, polls, and election returns. Democracy generates its own data. And so, people working in the quantitative social sciences were really drawn to the study of voting behavior. Greenfield’s company, Simulmatics, operated on the idea that, once you could perfect a model to predict voting behavior, you could use it to predict all sorts of behavior, including consumer choices.

The company’s first client was the DNC, and its next client was the John F. Kennedy campaign, for which it provided advice on how to defeat Richard Nixon in 1960. And after that project, Simulmatics worked in nearly every realm where, today, you can find predictive analytics at work. It provided advertising advice for companies like Colgate-Palmolive and Ralston Purina; it provided media advice for television stations. It did a big project for the New York Times on data analysis on election night. And it conducted a number of projects for the federal government.

But the company was suffering by the middle of the 1960s, because there just wasn’t enough data for most of the projects they wanted to do—at least, not enough data to do those projects well. Not to mention, computers weren’t fast enough or cheap enough to use to make this feasible as a business model. So although Simulmatics had the idea to use computer technology to predict human behavior and sell those predictions as a product, this was a hard sell.

In 1965, the company turned to a new kind of work. Simulmatics set up an office in Saigon and undertook contracts for the US Department of Defense, collecting and analyzing public-opinion data among the peasants of South Vietnam. That work was extraordinarily controversial; it led, in many ways, to the company’s decline and eventual bankruptcy in 1970.

I’m really interested in the degree to which everything the company did badly has since been done well. When I came across the story, it was a little bit like uncovering an unexploded landmine. Here is this thing that was buried a long time ago, but that’s now blowing up.

 

danah boyd (db): If Then is a brilliant and beautiful book. It is the story of a corporation and its people and its tentacles. But it’s also the story of American democracy’s very complicated relationship with data and technology.

This relationship, as I’ve seen from a lot of your other books, goes way back. But being able to dive deep into the Simulmatics Corporation is eye-opening. Through this angle, your book allows us to look at a period of time when our politicians were very willing to embrace not just data but the mirage of the technology. They didn’t necessarily want to look under the hood and understand the details of it; they didn’t want to understand technology’s limitations. Instead, they were interested in the performance of data, what data could say if it could speak. And that’s a really interesting place for data to be, especially in such high-stakes contexts as you describe in your book.

What is it about the American psyche and the political structures that make that obsession with having data speak come so alive repeatedly over time, especially in this period?

 

JL: American democracy depends on demography. Our political system is mathematical; the Constitution requires a census because this republic works like it’s a math problem, even down to this problem of slavery, which was “solved” with a fraction: three-fifths. But of course the government of the people is more than just a math problem; it’s a problem that can be influenced and is in fact very vulnerable to the influence of changes in technologies of communication. There is a pattern across all of American history: every new technology of communication is followed by a period of political disequilibrium.

With a new technology, communication is faster, more people can communicate, information is freer. The people rise up because they have this new power. And then that new power is contained, or constrained in some meaningful ways, and equilibrium is restored. But before, really, the mainframe computer in the 1950s, these technologies of communication are not inscrutable or difficult to understand. Anybody can understand how a telegraph machine operates. A telephone: it is a little spooky, it’s kind of invisible, but it’s not hard. The radio seems kind of like a telephone call. They’re not self-mystifying technologies.

That begins to change by the time you get to the UNIVAC in 1951 (which was built to count the 1950 census). The culture has really wrapped itself around men of science. It seems like part of the Cold War mandate. The Cold War, the Space Race: the federal government is investing an enormous amount of money in the pursuit of science for the aims of national security. Somehow, it becomes necessary to worship at the altar of engineers from MIT, so much so that merely making fun of them counts as a joke. Consider, for instance, the film Desk Set from 1957, in which Spencer Tracy plays an MIT systems engineer and Katharine Hepburn pokes fun at him. Part of the reason the film works, as a comedy, is: Oh, my gosh, she gets to say that to him, even though he’s a man of science?

That is the culture that gave birth to the scientists of the Simulmatics Corporation. One of the things that still fascinates me about Simulmatics is these guys are trying to build a machine to predict human behavior. But they’re generally using it to predict the behavior of only three groups. One is Black voters, because their first project is a study of Black voters. One is women, housewives. And the third is Vietnamese peasants.

To me, as a humanist, I look at that, and I think: Really? You’re going to build a machine to do these things? In 1960 they construct a mathematical model and write a computer program in FORTRAN to understand Black voters. I mean, you could watch the Greensboro lunch counter sit-ins on television, but they decide to build that mathematical model and program a computer instead.

This hubris is born of midcentury white liberalism at the height of a particular technocratic moment, put in place by the research agenda of the national security state.

db: That’s super important. Early on in the book you quote from Eugene Burdick:

The new underworld is made up of innocent and well-intentioned people who work with slide rules and calculating machines and computers. Most of these people are highly educated, many of them are Ph.D.s, and none that I have met have malignant political designs on the American public. They may, however, radically reconstruct the American political system, build a new politics, and even modify revered and venerable American institutions—facts of which they are blissfully innocent.

As you point out, this is the story of technocrats and, with a modern eye, the story of how the culture of white supremacy gets upheld through these bureaucratic systems. Even so, many readers, I imagine, care a lot about how to do good with technology. So what should they take from this story? What should they learn from a technology that was “designed for good,” but in fact upheld so many systems of repression?

 

JL: Burdick wrote that in 1964. He was a University of California, Berkeley, political theorist who had worked for Greenfield in 1956. He was then asked to work for Simulmatics, but instead he wrote a novel indicting Greenfield’s company.

That’s why I study history. I read that Burdick passage in the library and I thought: Wow. Somebody figured this out in 1964. He predicted that American politics as we know it would be destroyed by the prediction of human behavior, and that other venerable institutions, for instance the local newspaper, might be destroyed as well. That you could foresee that!

It is important to remember that the reason Burdick could foresee that is because he was a political theorist and a writer, and his study of the American political order led him to believe American democracy to be inconsistent with the computer simulation of elections. Start-ups: they need people like Burdick. Philosophers, political theorists, historians, poets. Critics. But, instead, the people who run those companies reject the liberal arts, as if entire disciplines, the entirety of the humanities, were like knowing how to tie a bow tie, rather than understanding that these bodies of knowledge and methods of analysis are elemental to the human condition.

One of the things I love about this Simulmatics story is that these scientists weren’t bad people. They’re not villains; they were trying to get the Democratic Party to take a stronger position on civil rights. They were idealists. But they didn’t think about the implications of what they were doing. Burdick did, and refused to participate in it.

db: This is also such a fascinating time because there’s a rearrangement of political parties during this period, and on page 57, you note that conservatives damned the godlessness and moral idiocy of behavioral science, seeing in its technocratic postures a species of socialism: the control of the people, even their very minds, by the state.

I’m fascinated by the different political attitudes toward scientific and social-scientific methods during this period, and how this is getting structured. Before this talk, Erica Robles-Anderson prepared a question for you: Given the attitudes toward social science, how do we understand the realignments happening in the 1960s as Southern Democrats and Black voters switch parties? And so, did these pieces all come together in that realignment?

 

JL: That’s a quite interesting question. Politics was fundamentally changed by the modern public-opinion polling industry, which really emerges in the 1930s, a decade that was itself a period of major realignment.

In 1932, when FDR ran for president and was elected, he won on the back of what is called the New Deal Coalition: he was able to pull in Black voters, who, since Lincoln and Emancipation, since the 15th Amendment, had voted Republican. By the 1930s, Jim Crow laws mean that Blacks can’t vote in the South, but, in places where they could vote, they voted Republican. Until FDR.

The modern polling industry began in 1935, with George Gallup, and it distorts the electorate. Gallup, for instance, refused to ask white people questions about civil rights, and he did not poll Black voters. There were sit-ins throughout the 1930s; there were anti-lynching bills in Congress every year. And still Gallup didn’t ask people questions about civil rights. That’s because he had a nationally syndicated newspaper column called “America Speaks,” and Southern newspapers didn’t want to run columns about civil rights, and they didn’t want to hear about the opinions of Black voters in the North. Gallup, not wanting to lose those newspapers, obliged them.

Anyway, by the 1950s, FDR’s New Deal Coalition has begun to fall apart, because Dwight Eisenhower, a Republican elected president in 1952 and reelected in 1956, took a stronger position on civil rights than his Democratic challenger, Adlai Stevenson. Black voters left the New Deal Coalition and voted for Eisenhower in huge numbers. This is where Ed Greenfield comes in and says, “Hey, what if we run a simulation of the election of 1960 and actually count Black voters? We could take those results to Stevenson, or whoever becomes the Democratic nominee against Eisenhower’s successor, Richard Nixon, and convince him to take a stronger position on civil rights.” Simulmatics’ work relied on an increasing sophistication at pitting demographic groups against one another. Simulmatics’ “People Machine” sorted American voters into 480 possible voter types—and then used those different categories for its simulations.

That’s why Burdick called his book about Simulmatics The 480. He objected not only to the simulation but to the sorting. You live in Brooklyn, you’re upper-class, you’re Asian American, and you voted for Obama twice: that’s a voter type. From Burdick’s perspective as a political theorist, if you divide the population into voter types, and custom-fit political messages by type—like this is a message for you, and you alone—then you are actually dividing Americans against themselves. You are defeating the philosophy behind our form of representative government.

Because I am not supposed to go to the polls and vote as a middle-aged Catholic New England woman who voted for Obama twice. Instead, I am supposed to go to the polls and think about who on the ballot can best represent everyone’s interests, whose policy positions are in the public interest and for the common good. I can’t do that if I’ve never received a message that says, “Here is my vision for everyone.”

That’s what Burdick meant by saying that the Simulmatics scientists didn’t actually understand our political system. Because if you understood our political system, you wouldn’t do this—because it will destroy it.

db: Part of what becomes so painful in the story is that the Simulmatics Corporation worked hard to model Black voters to increase enfranchisement. Yet, later, it would go on to play a role in the war in Vietnam; its work fed propaganda campaigns against the Viet Cong.

Charlton McIlwain, who is a vice provost at NYU and the author of Black Software, is fascinated by these different components. He wants to know: How transparent—internally or externally—was Simulmatics about its active role in shaping and even fomenting the anti-Black racial politics of the ’60s? How would you characterize Simulmatics’ long-term impact on civil rights, beyond the particular period in which they were working?

 

JL: Any of the people who worked for the corporation in the 1960s would be shocked to imagine that anyone could think of it as a company that was opposed to civil rights.

These were some of the most progressive liberals in the country working in the social sciences. Most of the scientists who had helped to found the company refused to go to Vietnam because they disagreed with the one scientist who really supported that effort. They tended to be people who were opposed to the war, who marched against the war, who urged their universities to withdraw support for research related to the war.

These people were trying to address inequality. That’s what they saw themselves as doing. For example: attempting to predict race riots. They were trying to forestall violence and they were also trying to amplify the voices of people who were protesting on the streets because they were protesting police brutality. What emerges from that is something super creepy, but it’s largely unintended. So, would any of these people have thought of themselves as advancing an anti–civil rights cause or advancing some kind of federal racism? No, they wouldn’t have understood themselves that way at all.


db: In my own research, I am currently focused on the 2020 census. As we grapple with the spectacle that is underway about the legitimacy of that data, one thing that Dan Bouk and I have been struggling with concerns the unintended consequences of being able to see infrastructure. In other words, even if we all know that census data are made—not found—it’s not always pretty to see how the sausage is made. And yet, here we are, learning about the sausage.

You make a reference to Barrington Moore’s notion of being blinded by the illusion of technical omnipotence. That notion doesn’t just apply to computers; sometimes the illusion of technical omnipotence is also what allows us to take for granted that data is infrastructure.

As you’re going through the history of Simulmatics, do you think it is even possible to talk about the sausage of large data projects without delegitimizing the organizations or institutions that produce them? Obviously there are times when they should be delegitimized. But what are the unexpected consequences of revealing the system behind the data? How do we balance these moments of being able to see and understanding the place of data and technology within a broader set of contexts?

 

JL: I was recently reading two different things that kind of speak to this question. One is an essay from the ’60s called, I believe, “The Political Consequences of the Rise of Science,” and the other is an essay by Danielle Allen, my colleague at Harvard, called “The Road from Serfdom.” Both essays make the same point: our constitutional system was devised by lawyers, 18th-century lawyers. I would argue they were also historians. When they set about drafting the Constitution, instead of coming up with a secret document that they hid away and said, from on high, Here are the rules!, they published it. They sent it to the people for ratification. And they encouraged people to convene and have conversations, and then they had formal conventions where people debated it, and then it went through a ratification process.

The Constitution is extremely complicated. It’s only [about] four thousand words, but it embraces a lot of ideas. These men really did have faith, even though their notion of who “the People” were is, from our vantage, very small. But from their vantage, “the People” is unbelievably democratic. And they believed that the people could decide.

They called the Constitution a machine and they thought that it was an engineered device. A device that was legible and transparent to everybody.

But at some point in the 20th century, the people who acted as the engineers of society became … economists. And then, scientists replaced economists. But there’s a problem when people doing very complicated scientific work are making decisions about how the government should work. Let’s say with preventing a pandemic. Because, yes, we can all read the Constitution. But can we all understand epidemiology? And then, consider the strangeness of when the people who are actually engineering how people behave toward one another aren’t lawyers, or economists, or scientists—they’re Mark Zuckerbergs. They cherish the obfuscation, the mystification: that’s part of their business model. It’s also why regulation has been so elusive. Remember this summer when the heads of Facebook, Google, Amazon, and Apple appeared before Congress? And it was very clear that most members of Congress had not the least idea how any of those businesses work. If we have systems, even technologically sophisticated systems, that are driving our politics—such that the people can’t understand them—then we no longer have a democracy.

db: Legal scholar Kate Klonick refers to the CEOs of the tech companies as the new governors, as a way of capturing some of their struggle. And yet they themselves don’t know what to make of their own role in all of this.

Another question that we received in advance of the talk tonight was from Satya Nadella, the CEO of Microsoft. He wants to know how he should be thinking about the role of industry in building systems to support democracy rather than break it. And what are the points of light for tech from your own work? Particularly for those who, as executives of major tech companies, oversee so much of its future. What would you tell him about how he should be thinking about his responsibilities?

 

JL: The question at least indirectly rests on the assumption that more code will fix it, that we just need to have better code. We just need to do some debugging. There is a debugging mentality behind the very question. And that mentality misses the much deeper critique that many people are making, which is: we don’t want the program at all. We don’t want a program that works better. We don’t want a program that works faster. We don’t want the program.

I appreciate the question and I by no means dispute its earnestness. But it rests on the supposition that CEOs of tech companies should be fixing our democracy. They shouldn’t be.

The people need to fix the democracy and the people elected to office need to fix the democracy. The CEOs of tech companies should try to do less harm to our democracy. But I didn’t elect them to fix it. No one elected them.

 

This article was commissioned by Caitlin Zaloom.

Featured image: Photograph of Jill Lepore courtesy of Jill Lepore
