Public Thinker: B. R. Cohen on How Food Became “Pure”


Bright pink margarine. “Olive oil” made from cottonseed in Tennessee. A cross-country police chase to arrest a sugar scammer’s widow. In Pure Adulteration, Benjamin Cohen brings us into the corrupt, contaminated, deceptive world of food adulteration in the late 19th century. In this conversation, Cohen, a professor at Lafayette College, explains how the United States arrived at the analytic, standardized, and regulated food system we know today—and why we still face questions about which foods should count as pure and which as impure. And he discusses how his own work crosses boundaries between pure academia and the messier world of public-facing scholarship.


David Schleifer (DS): Pure Adulteration starts by explaining that concerns about food purity and how to regulate food are age old, citing as examples the Kosher laws in the Bible and Plato’s protocols for dealing with food-market “rogueries and adulterations.” You write that someone was actually burned at the stake in 14th-century Nuremberg for selling fake saffron.

 

Benjamin R. Cohen (BRC): Yes. Harsh.

 

DS: Well, saffron’s expensive.

 

BRC: His partner was buried alive.

 

DS: If food frauds, fakery, and cheating—or adulterations, in general—have been problems for centuries, why focus your book on the 19th-century United States? What was happening then that brought concerns about food purity to the forefront in activism, science, and the law?

 

BRC: They called it “the pure food crusades” at the time, and much of it had to do with new forms of distance, both in the cultural and physical sense. Increasing geographic distance is the common villain. When there’s more space between producer and consumer, that consumer finds it harder to recognize their food’s source or identity.

It’s that timeless question: Do you know what you’re eating? With physical distance, consumers fear that the people who make food and the companies that sell it have opportunities to deceive you—they can water down milk or cut lard with horse fat or try to pass off corn syrup as honey.

But you could deceive someone face to face, too; it’s not only about physical distance. Take immigration, continental expansion, global settlement, and new modes of urbanization and mobility. Add those up and the United States was in an unprecedented period of redefining norms of interaction. That cultural distance fractured prevailing senses of character and trust, fueling a debate about the breakdown of communities.

 

DS: In other words, there were many more reasons to distrust someone.

 

BRC: Right. I use an epigraph from La Rochefoucauld, who was writing in 1660s France about face values: that you can know someone by reading their face, that the surface is a window onto the interior truth. But that way of knowing one another (or your food) was crumbling in the 19th-century United States.

Add the rise of industrialization in the later 1800s—making food in factories instead of from the fields—and that’s a whole other layer to contend with. To your question, that combination of foods from factories, of expansion and mobility, of fractured forms of community trust and familiarity—it all led me to find that, for a story about the origins of modern food regulation, the later 19th century was the place to look.

 

DS: You use three main case studies as the core of the book. Let’s talk about your first big case: butter and margarine. Margarine was invented in 1869. Some US states went so far as to outlaw it entirely. What was so offensive about margarine? Why did it become one of the most litigated, regulated products in American history?

 

BRC: With margarine, the argument over purity and adulteration became a proxy for an argument over natural versus artificial. As a product of a factory, not a farm, margarine struck people as essentially artificial, and therefore essentially bad.

 

DS: Right.

BRC: At first, margarine was made of beef fat and milk. The advantage was you could use less milk to make it than if you were making straight butter. There were even patents where the claim was: We’re just doing God’s work by merely shortcutting the process. We’re taking the fat out of the cow before the cow expresses it into milk, and aren’t we clever?

But for others, because it was made with beef fat it was associated with the meatpacking industry, which was known for being pretty unsavory. And the challenge to the dominance of the dairy industry certainly drove the dairy lobby crazy, generating this host of brawls over the legal status of margarine. There were so many new laws, I had to make a map showing the spread and intensity of antimargarine laws in states over a quarter century.

And that was before the 20th century. Granted, by the 1900s, most margarine was made with vegetable oils and not with animal fats. But the idea stuck that it was inherently unnatural.

 

DS: What was really going on with that claim about natural and unnatural?

 

BRC: To me, this kind of debate revealed fault lines in how people made claims about what “natural” meant. It couldn’t be about humans not intervening in nature, because all agriculture is about intervening. It was more about the presumed right way to intervene. And when people are fighting over moral claims about rightness, they’re also fighting over who gets to say so.

With industrialization in the 19th century expanding so rapidly, products like margarine exacerbated and challenged people’s sense of what they thought was natural, which had long been tied up with what they thought was pure.

But the whole premise of agriculture includes artifice. It is based on humans acting to produce something which wasn’t there without us. After all, cows don’t make butter, people do.

 

DS: You write that butter was sometimes dyed yellow to make it look more buttery!

 

BRC: That’s right, and why would you add color to butter if it was already naturally yellow? That’s because people in the 19th century knew butter to be different colors at different times of year. The environmental context of traditional dairy farms was an integrated system of farm and land management. You had cows so you could fertilize the soil. The richer the soil, the healthier the grass, the better the wheat, and so on, which was also good for the government’s expansionist settler goals across the continent. Cows that ate the best grass gave milk that made butter of the best color, June yellow being the ideal. So, butter from other times of year would be dyed to look more like June yellow.

 

DS: And adding dyes didn’t diminish the claim to purity?

 

BRC: For them, it didn’t. We can sit here 140 years later and say that dyed butter sounds like something fake or at least intentionally deceptive. Because if the dairymen were mad at margarine for coloring their product to look like butter—which they were—then how come it’s OK to color your butter?  But at the time, dairymen said it was OK, because their sense of natural and pure was seasonal and agricultural; this practice was part of a stable agricultural system on their farms.

 

DS: There’s a similar story about natural artifice and manufactured artifice with sugar, right? In the 19th century, “glucose”—another one of your cases—was the name for sugar made mostly from corn. It was framed as inherently suspect even though “real” sugar made from sugar cane is actually pretty nasty stuff. It’s highly processed.

 

BRC: And cane sugar was labor intensive; specifically, slave-labor intensive.

 

DS: Producing cane sugar involved slavery, colonialism, and massive deforestation. But somehow cane sugar was marketed as pure and honest—its purity gauged by how white it was—while glucose was derided as something devilish.

BRC: It could be baffling. There was a beet-sugar industry in the United States, which one author in the 1880s talks about as being very patriotic, because it was made in the United States and didn’t rely on slave labor. But those in the industry were also making racial claims that they were pure, because their sugar wasn’t touched by “dirty” hands.

The racial connotations of the beet-sugar industry’s purity claims point to ideas about adulteration as related to racial ideologies and tropes. So, while Pure Adulteration takes food purity as its narrative anchor, you can’t get at that narrative without seeing it within broader, contested cultural questions about ethnic and racial purity.

Like I was saying earlier, moral fights over rightness are also fights over who gets to say so. Here, the dominant class was making claims about what counted as pure, culturally, in ways that influenced how they thought foods counted as pure, agriculturally.

That connects back to butter, too. From our perspective, looking back, we think: All right, people were mad about margarine, because that’s fake, but dairymen were coloring butter and people were OK with that. People were mad at glucose because it was “fake” sugar, but the moral complications of what they considered pure are equally problematic, if not more so.

 

DS: At least you take “it’s complicated” as a premise instead of a conclusion.

 

BRC: Yeah, the thing that makes me mad is when people try to make the story simple, when they—whether at the time, or historians thereafter—act like distinctions between pure and adulterated are plain and easily delineated, instead of ongoing and politically tense.

 

DS: What role did water rights and environmental pollution play in the portrayal of cane sugar as somehow cleaner or purer than glucose syrup?

 

BRC: Making glucose syrup from corn was a very water-intensive manufacturing process. Glucose-syrup manufacturers would draw water from rivers and lakes in places like upstate New York and Iowa. But people started saying: you’re ruining these waterways because you’re both sucking up our water and polluting it with the refuse of your production facilities. This added to the sense that glucose syrup was dirty.

 

DS: But glucose ultimately became a more legitimate product. We know it today as corn syrup, although debate pops up now and then about whether corn syrup should be considered dangerous or healthy. Like a lot of these “adulterated” foods, it was ultimately labeled and marketed as exactly what it was, without trying to pretend to be something else.

It seems like that type of transparency also opened the door to things like cottonseed oil and margarine having a legitimate place in the market. Cottonseed oil is one of your other cases: it became a well-established product used to make shortening, which substituted for lard. Margarine was advertised as margarine. You even point to Purity brand margarine, in the early 1900s.

 

BRC: How great is that, Purity margarine? It was proudly and purely artificial, like those old TV ads for real cubic-zirconia diamonds. They were exactly what they said they were.

 

DS: So rather than inherently impure, they were just different. And artificial started to have a less clear-cut meaning. Is that it?

 

BRC: Yes, you put your finger on it. The largest arc of the book is a shift in the concept of purity itself. Sure, it’s about the pure-food movement. Sure, it’s about these environmental impacts and networks; it’s about character and trust and faith. But the concept of purity also shifted.

Into the mid-1800s, a dominant view of purity was based on provenance, as in, you knew where it was from. And knowing where it was from was based on a stable and acceptable view of certain agricultural activities: they were community based and agrarian. The process mattered a lot. You still had people cheating the system (the poor saffron guys, for example), but they were policed within well-understood community norms.

By the early 20th century, though, with the aid of chemical analysis and public-health institutions, purity had become a scientific concept. Process didn’t matter as much, because purity was gauged by testing the finished product. Now, the question was: If you give me this product on a table and I analyze it, is it what you said it is? That change in how people defined purity isn’t some abstract conceptual thing; it had practical political and environmental consequences, because it divorced the entire lifecycle of agricultural activity from what “purity” means.

DS: You talk about the ways in which policing claims about purity came to be aimed at the consumer end of products’ lifecycles and not the producer end.

 

BRC: Because we were becoming a consumer instead of a producer society, with fewer farmers and a greater urban population, most people by the early 20th century understood the veracity of a food based on the package at the store. The story that follows for the rest of the 20th century was about developing more and more testing protocols and labeling laws.

It’s no surprise, then, that the book ends with the establishment of the FDA, which is a consumer-focused agency, as opposed to the USDA, which is producer focused. And at the center of the FDA’s bureaucratic standing is scientific analysis, which solved one problem by giving assurance about what products are, but also created new problems. If we don’t have to know about the agricultural process and we can just trust the analysis, then we’ve created yet more distance from the landscape.

 

DS: The book could have just ended with the idea that, in fact, there’s no such thing as natural or artificial, because it’s all just a question of detection. But you don’t end with that “it’s all relative” argument. Instead, you say that we have to make judgments about the quality of our food, and we also have to trust institutions and other people to make those judgments, because individual consumers are not really in a position to do that ourselves.

How do you see that process of making judgments about food purity now?  Do we just have to trust these distant institutions to figure it all out for us?

 

BRC: I don’t know that it’s about trusting distant institutions so much as it’s about building trust mechanisms in ways that allow more, rather than fewer, voices, and bring in greater environmental regard, rather than less.

I’ll give the example I used in the epilogue, which is that the entire time I was working on this book, anytime I talked about it, genetically modified organisms always came up. People would say to me, this sounds a lot like GMOs. Or, what do you have to say about GMOs?

What I found while writing this book—what I found that speaks to contemporary relevance—is that the debate sounds similar. Or at least half the debate, I should say. Whether you’re arguing about the problems of GMOs today or pure food in the 1800s, that debate is largely structured as “poisoning versus cheating.” That is: Do I avoid certain foods because they’re making me sick or do I avoid them because I’m being cheated? With GMOs, you don’t hear “poisoning” so much as you hear unhealthy, or not nutritious, or something, though the sense of personal threat to the consumer is similar to fears over impure foods.

But both of those framings are about the end product. They don’t consider the process. They’ve already cut out half the debate by focusing only on the consumer end of the lifecycle and not the impact on workers; on land, water, and air; or on power in the food system.

That’s how the last couple of decades of GMO debates have gone. A prominent form of public debate has been about whether it’s in my food or not. Can we have labeling laws? Can you tell me if it’s genetically modified or not, and then I can make the choice? Are these things making me sick? Are they hurting me? All these questions assume that we’re only consumers. That we have no other way to understand what’s good or bad.

I would prefer if we could think more broadly. Even if we are consumers, we can still think more about our connections to agricultural systems, to environmental networks that we’re implicated in and that we’re part of and that we ignore because we only look at the label at the store.


DS: This is a book that speaks to different audiences: readers in policy, history, science and technology studies, and food studies. A lot of your work has been directed not at your academic colleagues but at various audiences outside academia. Your writing and podcasting, and your work with local food-justice organizations, all extend out from pure academic publishing. Even this interview does that. You’ve conducted a lot of interviews like this yourself, including for Public Books. How did you get into that?

 

BRC: The interviews began at The Believer, where, thanks to Vendela Vida, I had the chance to do longer-form conversations. That whole thing was me being sort of greedy, intellectually. I liked talking to people about their work and I realized it was also an easier, more direct way to learn a lot, because when you ask people about their specialties and passions, it’s usually an easy and generative exchange.

 

DS: Who did you get to interview?

 

BRC: The first interview was with a 95-year-old philosopher of science, Marjorie Grene, who had studied with Heidegger (she hated him), introduced Sartre to US readers (but thought his philosophy untenable), and then founded the philosophy of biology. After that, it was just people I admired and was reading and learning about at the time.

I got lucky talking to Michael Pollan before The Omnivore’s Dilemma went into the stratosphere. I got lucky talking to Rebecca Solnit before so many others knew they should be talking to her. Then, when I moved over here to Public Books, early interviews with Jill Lepore and Jackson Lears helped me work on writing, thinking, and questions of audience and narrative.

 

DS: Did it feel like a distraction from your “real” academic work?

 

BRC: Not really. I said it was kind of greedy, and it was. It’s been a way for me to learn about people, which helps me understand their work. And since I was talking to them, and since they were so interesting, I thought other people might like to hear, too.

I had a blog back when that was a thing and did a bunch of interviews of authors there, too. It’s true, too, I did have a podcast for a while—no surprise, because I am a white male with a beard and that was apparently required. And my cohost and I interviewed close to a hundred people on our campus and in the nearby Lehigh Valley, trying to get at the same thing.

 

DS: Which was?

 

BRC: People are interesting. Everyone knows things that are fascinating. I’m not a hard-hitting interviewer, that’s pretty obvious. I’m not interrogating. It’s more about curiosity.

DS: You have a good bibliography of other nonacademic writings, too: short pieces in various forums, a bunch of things at McSweeney’s. How do they hang together?

 

BRC: I don’t know that they do. Except, I guess, they all radiate out from some core curiosity about what people know and how listening to them opens up new views of the world.

 

DS: I’m wondering, did your interviewing skills play any role in Pure Adulteration, even though all the players are dead and gone?

 

BRC: That’s a good question. I always feel like I don’t know enough, and I need to find out more. That’s not uncommon for academics, or maybe anyone, but I think rather than throwing out ignorance, I try to harness it. Sometimes I lean into not knowing as the impetus for writing about something, so that the writing is a way for me to figure it out.

My friend John Warner says something like that, about how writing is learning. That’s maybe why the kinds of questions I tend toward in interviews are the kinds I ask in research.

So, if I want to know, why did these farmers turn against margarine? I can’t ask them directly, but I can ask all the trade papers. I can ask the congressional testimony. I can ask the newspaper parodies and satires. I can ask the meeting minutes from these local organizations. Then, when I hear what they say and put it together, it’s the process of putting it together in writing that helps me learn it.

 

DS: You do both scholarly and nonscholarly writing—are there things you can do in one type of writing that you can’t do in the other? How you might frame a subject or the way you approach it? Are there things you can write about for academics that you can’t write about for other audiences, and vice versa?

 

BRC: There are, for sure. For one thing, there’s so much less play in academic writing. Play like the play on a door hinge—as in, flexibility; as in, movement and generous interpretation. It has to appeal to people you don’t know, the ones vetting your work to be published, and they usually are not into it for the play. There are often good reasons for that, especially since peer-reviewed work needs to stand the test of time and withstand critical response, so I don’t necessarily begrudge it. It’s just less fun, in the more common sense of play. Self-amusement doesn’t have a lot of room there.

In general, though, finding different audiences to write for means recognizing that it takes different strategies and tactics to appeal to them. With Pure Adulteration, I tried to lace it with small stories throughout, so that there were hooks in each chapter. So, even in this book with endnotes and archival research, I was still trying to reach at least a modestly larger audience than a traditional academic one.

 

DS: Plus, the jokes. I don’t mean you making jokes, but you draw from a lot of satire, parody, and characters at least trying to crack jokes.

 

BRC: That’s pretty true. I like the way people in the past joked about the topics I’m writing about, not because I necessarily think these topics are funny, but because it’s usually funny to me that those people thought it was funny. It’s revealing. Some of the things just seem ridiculous to me.

 

DS: Such as?

 

BRC: I use a lot of cartoons from Puck, which was the Onion of the late 1800s. And, to go back to our discussion earlier on fake color, they made a sarcastic crack once that margarine should color itself pink for its royal pretensions. They thought they were hilarious. But at the same time, someone read it and apparently didn’t get the joke, because within a few years some state legislators were demanding that margarine makers do just that—color their product pink.

DS: What else differentiates writing for academics from writing for other types of audiences?

 

BRC: Maybe it’s that in academia, you might really admire somebody’s writing, but if somebody points out a critical flaw in their thinking, then somehow you’re supposed to not like it. Whereas, with a nonacademic writer, I can respect and appreciate and grow from somebody’s writing, whether or not there are critical flaws or gaps in their thinking. Or if their theory isn’t as filled out as it might be. If they help me see something anew or open up a different vantage point, then that’s good by me.

 

DS: Academics have this critical reflex. They tend to dismiss each other’s work entirely if they catch one flaw. Which isn’t always helpful.

 

BRC: It’s a real baby-with-the-bathwater thing, right? If somebody’s critical register is not within the scope of somebody else’s sense of pure argument, if there’s one perceived flaw to it, then the whole thing somehow fails. It’s kind of like a purity test.

 

DS: It is kind of a purity test!

 

BRC: Did we force the pun?

 

This article was commissioned by Caitlin Zaloom.

Featured-image photograph by Adam Atkinson


