Public Thinker: Shobita Parthasarathy on Why We Need to Diversify Expertise


To converse with Shobita Parthasarathy is to be enveloped in two kinds of warmth: that of a generous-spirited, energetic, and passionate human, and that of a mind on fire with knowledge and ideas. It’s that combination, I suspect, that drives her commitment to publicly engaged scholarship. In fact, I have to confess that Parthasarathy is the only academic I can think of who talks about scholarship (as she often does) in a way that doesn’t put my teeth on edge. For what allows her to engage effectively in public problems is a commitment to deep understanding of the specifics and the intricacies of dilemmas at the intersection of technological change, politics, and human well-being.

A professor of public policy and women’s studies at the University of Michigan, she also cohosts The Received Wisdom, a podcast on science, technology, and society, and provides expert advice to civil-society groups, legislators, and advisory committees. Of course she also writes scholarly books and articles and is a phenomenally creative and assiduous researcher, but you can learn more about all that here. What I wanted to talk to her about was how on earth she managed to get the Supreme Court to read STS, and how the bioethics community lost its soul. Among other interesting things.


Daniel Sarewitz (DS): You’re a science and technology studies scholar in a public policy school, which is a weird thing to be and a weird place to be to do it. So can you talk a little bit about how that happened?

 

Shobita Parthasarathy (SP): It is weird. And it is worth reflecting on what it means to be a specialist in science and technology in a policy school, because we are a very, very, very, very, very rare breed.

When I went to college, I first thought I was going to be a lawyer, and then I played around with five thousand different majors. Now, the University of Chicago has a really strong core curriculum, so I was taking classes in lots of different areas. I eventually became a biology major because I loved the science. But none of the usual options seemed right. I thought about med school, and that didn’t really thrill me.

I really liked working in the lab and doing research on heart disease and diabetes. But what really interested me was the relationship between what I was doing in the lab and what was going on out in the world. What were the impacts? I wanted there to be more of a connection, because it felt very far removed. It felt very mechanical, that work.

So I found a handful of classes in bioethics and in health care and poverty. And I really, really enjoyed those classes, because they seemed to me to be tying these worlds together a little bit more. Yet I found myself a little bit frustrated—this was near the end of the human genome project—and I thought: Fine, these ethical frameworks are fine. They are interesting. Even so, this is actually happening out there in the world. People are being impacted by this. We need policies to help people deal with this genetic information and to think about these technologies.

It hit me: I wanted to do science and technology policy, which wasn’t yet a thing. Now it is a thing, but it definitely wasn’t then.

What I decided to do was paper DC with my resume. It took a while, and I didn’t have the resources to take an unpaid internship, but, in a huge stroke of luck, I sent my resume to the staff for the Advisory Committee on Human Radiation Experiments. This was a White House-level committee, working on both uncovering the human radiation experiments that had been conducted during the Cold War and looking at what human-subjects research looked like today.

 

 

DS: So this was during the Clinton administration?

 

SP: Yes. And it was an extraordinary experience; I learned so many things. I learned, for example, that policy is made by 20-year-olds.

 

DS: Frightening.

 

SP: I worked all night with all of my 20-year-old friends and colleagues. There was a massive history and information and policy infrastructure that I knew nothing about.

I didn’t know that there was a thing called history of science. I didn’t know that was a thing you could study. But I also observed the politics of science for the first time.

So, one of the committee’s jobs was deciding whether or not the participants in the radiation research—who might not have known they were participants, who were children at the time of the research, or fetuses—whether they should be notified that they were part of these human radiation experiments. And they had to make these determinations based on risk calculations. Not just quantified specific risk—how likely it was that they were going to get cancer—but also psychological risk: for example, [the effects of] finding out that you had been a subject in this study.

And as I saw in real time how they made these determinations, I realized how science and risk assessment were political. And I realized that that didn’t mean it was bad or wrong; it’s just that these committees had to weigh all sorts of things that many scientists would never think about. I still remember those moments so distinctly.

DS: You’ve spoken about how social scientists often don’t intervene in science and technology, either because they’re not seen as relevant experts in the area or because they are focused on other areas of society or policy. But one of the distinctive things about your work is that you found a really interesting place to intervene: in the Supreme Court litigation over human gene patents.

How did you come to that?

 

SP: It all started when I learned, as a graduate student, that human genes were patentable.

And I remember thinking, That’s crazy. I’ve done DNA sequencing in the lab. I understand human genetics; how can genes be inventions and not discoveries? What is going on, such that that is the law?

This was in the back of my head while writing my PhD dissertation, which became my first book. It was about the development of genetic testing for breast cancer in the US and Britain. In particular, I was interested in the genes linked to breast and ovarian cancer, and that was what the patents were on.

In the US, the owner of that gene patent—Myriad Genetics—used its patent rights to shut down all of the other providers of this genetic test. This meant that not only did they have a monopoly, but they were able to charge a ton of money to get access to their test. (And it turns out that while they claimed to have a very fancy test, it had significant errors, so this also didn’t provide women with the high-quality, gold-standard results that the company had promised.)

While researching these patented genes, I realized two things. The first was that it was virtually impossible to figure out who did the inventing in the case of the human gene. This research is usually done by scientists at universities, in collaborative fashion across continents. The person who puts the last brick in the wall gets to claim the wall; yet even that metaphor doesn’t work, because a bricklayer actually builds the wall, whereas humans aren’t building a gene.

The second realization—which became my second book—was that, of course, the patents on those genes had these massive social impacts, public-health impacts, moral impacts. But they also had research impacts. Scientists say that those gene patents impacted their work. That they changed research trajectories, at the very least. And so they had an impact on scientific work.

 

DS: So these patented genes not only significantly influenced access to testing for the presence of the BRCA gene mutations, which can cause breast and ovarian cancer in women, but also deflected scientific research in ways that no one had considered before.

 

SP: Exactly.

DS: One’s moral intuition around this might be that it was outrageous for Myriad to be able to have monopoly power over the BRCA gene. And while of course there are legal mechanisms for addressing that outrage, I have to ask, where was the bioethics community on this? Did they do their job and blow the whistle?

 

SP: I’ll say something controversial. Bioethics tends to not interrogate the details of science, let alone the more technical questions. There were certainly a handful of colleagues who were very critical of this. But the mainstream community was generally in favor of what scientists were in favor of. And at the time, US scientists basically said: We need this, this is how we engage in innovation, this is what we need for innovation, and therefore we need to move forward.

 

DS: Why wasn’t it the job of bioethicists to step outside of that role and say—with Myriad Genetics specifically and patentable genes generally—here is a tension between a moral cost and a commitment to innovation, and we have to openly debate that?

 

SP: In the history of how bioethicists have figured in discussions around intellectual property, their position has almost uniformly been: we need to support science, that’s our job.

 

DS: Because that’s their job.

 

SP: As early as the 1980s, farmers, environmental activists, and religious figures were raising alarms about the first set of patents on “life forms,” on genetically engineered animals. But in congressional hearings on the subject, bioethicists kept testifying that such patents were needed for science and technology. And the mainstream of that field has essentially maintained that line ever since. Really, how can you patent a human gene? It turns out there is a whole domain of people who have essentially bought into that, whose job it is to keep these legal fictions, keep this set of understandings, keep this definition of knowledge and expertise and policy logics in place.

The only time they wavered was during debates about patents on stem cells, when scientists said these patents were a problem. But what’s so weird about it, and I talk about this in my second book, is that American bioethicists never discussed the moral dimensions of commodifying life. This was the center of the controversy in Europe. And what’s weirder still is that they have become increasingly out of step with your average person on the street in the US. If you were to survey people on the street, they would say: Are you insane? How can we patent plants and animals, how can genes be patented?

Yet the biotech companies in the domain can still claim moral superiority because they have been endorsed by bioethicists—and that allows them to delegitimize these unwashed patient activists.

DS: You are not simply a scholar, you are an engaged scholar, and your work actually played a part in the legal decision that was ultimately made in the finding against Myriad. (It is worth also mentioning that the case wasn’t brought by academics, it was brought by civil-society actors, especially the ACLU.)

That must have been incredibly satisfying to see your work cited. It sounds like you didn’t write that first book with the intention of being cited in a Supreme Court case. And yet you nailed it.

 

SP: I may have achieved more policy impact before the age of 40 than I might ever experience again.

So, my first book, Building Genetic Medicine: Breast Cancer Technology and the Comparative Politics of Health Care, came out in 2007. It looked at the development of genetic testing for breast cancer in the US and the UK, and Myriad Genetics was the patent holder and the monopoly provider of that test in the US. Myriad had gotten patents on the breast-cancer genes in 1996 and 1997.

As I said earlier, most people in the US scientific and medical community said, this is just the price of innovation. This is annoying, yes, it’s expensive, and it’s frustrating, but we need to deal with it.

When that book came out, I was starting to do research on my second book on the politics of the US and European patent systems. I learned while I was doing an interview that the ACLU was considering a lawsuit against Myriad Genetics, challenging these patents. Patents last 20 years. They were only 10 years gone, with 10 years still to go.

So I contacted someone at the ACLU. I said I’d be happy to help. And it turned out that people there had already been reading my book! They invited me up to New York for the day, and so I went to a strategy meeting. It was mostly law professors, some of the civil-society activists, and me. And the law professors by and large were against the lawsuit, because they felt that there needed to be more incremental change; so they suggested smaller bites of the apple. They didn’t think that anybody would go for this.

I said, I can’t advise you on legal strategy, but I can advise you on political strategy. I can tell you who might make for good plaintiffs and friends of the court, etc. And thereafter, from what I understand, my book informed their work, it informed that process of soliciting who would be involved in the case in various ways.

They asked me to write an expert declaration in the case, about the impacts of the patents on research and health care. And I was nervous about that because I was about to go up for tenure and even though I am in a policy school, still, that was really sticking my neck out there. But I decided that—no matter what happened—this was extremely important to me personally. And I had actually done the research.

Ultimately, the court at the district level cited me really heavily. In fact, when the first decision of the district court came out, I sent the decision to my mom, I just forwarded it. I skimmed it and then I forwarded it, because I was doing something else at the time. And a couple of hours later, my mom calls me and says: Your name is all over this thing, there is 50 pages of you! And that was how I discovered that I had …

 

DS: Impact.

 

SP: That they had relied on my declaration. This definitely informed how I think about scholarly interventions: the fact that you can have this impact.

 

DS: It’s an important lesson that most academics refuse to internalize. That you can be curious about something that is both deeply intellectually challenging and incredibly important and practical, and you can think about the system that you are interested in and do so in ways that lead you to study things that make a difference. This is simply not the way that most academics are taught to choose problems.

 

SP: Exactly. And it is important to note, historically, that when the ACLU started working on the case, it didn’t have a lot of supporters. It became a snowball effect. Today everybody thinks that their position was somehow the natural choice, even though it was actually very risky at the time.


DS: Right now, there is a narrative loose in the land that expertise has fallen on hard times and that people aren’t taking it seriously enough. And then there is a coda to that narrative, which is now that COVID is here, people are rediscovering the importance of expertise.

 

SP: There has actually been distrust of the technical establishment for a long time. It’s just that the communities who lacked that trust didn’t have the power of the communities who started to seem to matter around 2016.

We have African American communities who have been hurt by—and therefore are distrustful of—the medical establishment, the technical establishment, for decades. In my backyard, we have the Flint water crisis. Environmental scientists and public-health experts were initially telling citizens of Flint that they didn’t know what they were talking about when they saw and smelled and felt that their water was bad. Telling them, in fact, that they were somehow more susceptible to certain kinds of diseases, and that is why they were getting them. Not from the bad water.

This has been happening—which has made communities feel alienated from experts—for decades, at the very least. It only seemed to start to matter when it had these kinds of huge impacts, for example, on the presidential election in the US and similar elections elsewhere.

But for me the lesson from, for example, the Tuskegee syphilis studies and the Flint water crisis and the many, many ways marginalized communities have been hurt by the technical establishment in the past is different: that there is something wrong with the way that we define expertise. Whose knowledge and expertise matters for the purposes of policy and public health is too limited. It doesn’t include enough alternative voices who can offer deep kinds of knowledge about, for example, these communities. It doesn’t include the contextual understanding of different people’s worlds that should accompany scientific and technical expertise. Technical expertise does not equal social wisdom.

There are questions around COVID testing, for example, that are not solely answerable by technical experts. There are social dimensions too. When you think about supply chains, the psychological and social role testing might play, how social and family structures or community dynamics might shape tracing and isolation efforts, how systemic inequality shapes whether and how the technology is designed and implemented—those are questions that epidemiologists and biostatisticians can’t really help us with. An epidemiologist can model things for us; they can tell us what we might be able to expect given certain assumptions; they can provide us with some parameters by which we might make some decisions. But we also need an in-depth understanding of communities and social trust and political cultures, for example, in order to know what kinds of interventions will work and how to design them. These are things that we need a much wider array of experts to address.

 

DS: But you are actually making a broader argument, then, beyond perhaps a more self-serving one: that we just need people like you and me. Instead, you actually have a much more expansive sense of what the necessary expertise is; it’s not just social scientists studying communities.

 

SP: Whenever we get into a situation where only the social scientist speaks for the community, and the community doesn’t get to speak for itself, we have a problem.

DS: What is the role of the intellectual in society now, especially in a society that is in the ferment that ours is in, and, of course, in a culture that has long been considered to be anti-intellectual? Even though, nonetheless, ideas have really mattered.

 

SP: Well, humility is the first thing. We need to reach out to and learn from communities more, and let their concerns drive our work rather than the other way around.

I’m so thrilled that in the last years there has been much, much more attention socially to the kinds of things that I care about at the intersection of technology and social justice and public policy. That work has emboldened advocacy, civil-society groups, etc. That is really, really important work.

And I think there is just a crucial role for more critical intellectual engagement when it comes to science and tech. Too often we think we can “tech” our way out of every problem, without thinking about the problem that the tech is supposed to be solving or whether the solution really works or creates more problems. And now citizens and policy makers are starting to see that tech isn’t always the answer, but they don’t necessarily have the tools to think about things differently or to make connections between tech and other things happening in society. That’s something that we as intellectuals can do. That’s why I started the podcast with Jack Stilgoe.

But at the same time, there is the private sector; there are policy makers, most of whom have managed to deflect a lot of that critique and community engagement by either making small changes or dismissing those critics. That’s why you still need to do really detailed research and analysis. You really have to know the technical details of what you are talking about, so when you get the most vicious question from a technical expert you can respond to it.

There aren’t going to be that many people who read my books, for example, because they are detailed. But by the same token, the books are more likely to convince the hardest critics. And the knowledge that I gain in the process allows me to push back and make the interventions, as in the Supreme Court case, in which I did end up having that impact. My hope is to keep making interventions that those decision makers will heed, or at least listen to.

 

DS: So one could even say you are making an appeal to common notions of rationality. This, in a way, is a quite optimistic vision. You are not just saying something tactical, you are really saying that if you get in there and you look, you can see things.

 

SP: I do think it is optimistic. I’m hopeful. I believe that people can be convinced.

 

This article was commissioned by B. R. Cohen.

Featured image: Photograph by Peter Smith / Gerald R. Ford School of Public Policy
