In the days following the January 2017 inauguration of President Donald Trump, jaws dropped around the world as Kellyanne Conway’s infamous, Orwellian interview with NBC’s Chuck Todd went viral. Conway had suggested that Press Secretary Sean Spicer had used “alternative facts” to describe the size of the crowd at the event. Hardly a day of this young Administration has gone by without journalists and political pundits alike using terms like “alternative fact,” “post-truth,” or “fake news” to pejoratively describe the current, depressing state of American politics.
For purposes of this essay, however, I do not seek to interrogate this rhetoric at the government level (although it is certainly in need of interrogation); rather, I am particularly interested in how quickly this discourse has been co-opted and appropriated by scientists and science communicators themselves. What follows is a critical overview of this phenomenon from a social studies of science perspective. After providing several examples, I touch on questions such as: How does this compare to previous debates in science and technology studies? Who gets to decide what facts are “fake” or “alternative”? What does it mean for something to be true or false? How is this discourse employed as a tool of practice in science communication? What are the consequences of deploying these terms—especially in such a seemingly ordinary, casual, natural-language, almost cavalier way?
“Until the OSTP is adequately staffed and the director position filled by a qualified, objective scientist who understands the difference between alternative news peddled on alt-right websites and legitimate well-vetted scientific facts, we fear that you will continue to be vulnerable to misinformation and fake news. Relying on factual technical and scientific data has helped make America the greatest nation in the world.”
—Letter from House Science Committee sent to President Trump, 18 May 2017
“Science is the antithesis of fake news.”
—Cristián Samper, Wildlife Conservation Society President, in The Guardian
“I’ve never seen the scientific community so concerned. This goes way beyond funding. When fake news is accepted as just one of the alternate approaches, then there are serious problems to be addressed.”
—Rush Holt, chief executive of AAAS in The Washington Post
Save for the fancy new buzzwords we cannot stop talking about, marketing about, and hashtagging about, are we really in a #posttruth, #fakenews, #alternativefact era? What would this look like, and what does it mean? Oxford Dictionaries, which named “post-truth” its 2016 word of the year, defines it as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” Social epistemologist Steve Fuller (2016) suggests that “science has always been a bit ‘post-truth,’” writing that even Thomas Kuhn’s 1960s account of science implied that truth is not the arbiter of legitimate power, but rather just a resource, or mask, for achieving that power. Is this really new enough to be called “post”-anything?
As science communicators well know from decades of work in psychology, cultural cognition, and other social sciences, scientific facts alone do little to change minds. Reason is limited, and staking claims to the truth may not be the most efficient way to call out bullsh*t. Likewise, Sergio Sismondo (2017) is somewhat more hesitant to say that we have yet reached a point of post-truth, or that today’s politics are decidedly different from what came before. He notes, however, that “STS suggests that the emergence of a post-truth era might be more possible than most people would imagine.”
To be sure, the 2016 political environment, ranging from the “Brexit” referendum to the U.S. presidential election, was particularly divisive and, may I say, unpresidented—as is much debate over contentious scientific and technological controversy, and the communication of it, today. But have we ever lived in a world where it is possible to separate facts from falsehoods? What about a world where we don’t value “truth”? When has emotion not dictated the decisions we make, and in many cases, taken precedence over everything else in the making of those decisions? Have we ever actually been able to take politics out of decision-making and knowledge sharing?
In their classic text Opening Pandora’s Box (1984), Gilbert and Mulkay contend that discourse analysis is a necessary component of the social study of science. Traditional forms of analysis, the authors write, erase and condense a multitude of voices from the social world into the authoritative voice of the unquestioned sociologist. “Where ethnomethodologists argued that science studies missed the specifics, the haecceity, of science,” Steven Yearley writes in Making Sense of Science (2004), “Mulkay accused science studies of ‘vassalage’, of merely telling stories woven together out of fragments of scientists’ own discourse” (p. 92). Such a method eliminates the possibility of acknowledging the interpretive variability among different actors—who may describe the “same” event or phenomenon differently. “It is a myth that scientists are neutral and disinterested actors when they engage in research,” and without taking scientists’ discourse as a topic—rather than as a resource—for sociological analysis, the complexity of science is flattened (p. 3). One of the fundamental tensions here is that if language is both our topic and our resource for understanding that language, what do we do? Gilbert and Mulkay write that “it becomes clear that the traditional analysis of social action cannot be successful without a systematic understanding of discourse” (p. 13). Such an analysis of science communicators themselves would reveal this tension around the proliferation of “alternative facts” as a strategic tool of communication.
Symmetry of facticity
Indeed, this is a tension scholars of science and technology studies know something about; we are put in a woefully precarious, cognitive dissonance-inducing position of trying to remain committed both to our critical discipline and to the reality of our political culture. Is science really the “antithesis” of fake news? Is science really the “rational achievement par excellence” Talcott Parsons suggests it may be (Schütz, p. 29)? Sismondo writes: “We in STS know that epistemic competition is as much about choosing which truths can be considered salient and important as about which claims can be considered true and false, and these choices have important consequences.” The STS sensibility is founded, at the most basic level, on questioning the social and cultural meanings of expertise and knowledge production, including the boundary work that goes into building credibility and trust. Simply put, STS seeks to interrogate what it means for something to be real or true—and who gets to decide what is real or true. In these terms, I am uncomfortable with the notion that one particular group of people—here, journalists and/or scientists—are the arbiters of facts, as the privileged position of their membership categories tends to suggest. The sociological understanding of something as truth or falsehood is not black and white, especially when we are functioning in an entirely different milieu of reality (i.e., 2017 politics). Whose reality matters, then?
The legacy of the sociology of scientific knowledge (SSK) is increasingly apparent within these intellectual conversations. SSK analyzes not only science as a social activity, but also the content of scientific knowledge itself. This sociological program challenges the truth and rationality often privileged in explanations of scientific knowledge, helping one to identify interpretive variations that may be linked to political, historical, and other cultural circumstances. “The treatment of scientific knowledge as a social construction implies that there is nothing epistemologically special about the nature of scientific knowledge: It is merely one in a whole series of knowledge cultures” (Bijker and Pinch, p. 19). Because of this, it is difficult to separate out categories of experts.
Of particular relevance to the extension of the sociology of knowledge into the sciences is the tenet of symmetry: the social analyst must be impartial with respect to the truth or falsity of beliefs, and offering different kinds of explanation for what is taken to be scientific “truth” versus scientific “falsehood” is no longer the objective of the analysis (Pinch & Bijker, p. 18). Social theorists have long posited a correspondence relationship between language and reality: true statements are true because they accurately describe the world, and establishing this link is fundamental to determining the truth (or falsity) of a claim (Schütz). This symmetry is the foundation of empirical research on the processes of scientific knowledge construction. It is worth noting that an ethnomethodological—and more broadly, a sociological—approach to conversation and the production of knowledge often leaves us unintentionally more skeptical of facts: the close attention paid to the production of data and models, and to the construction of evidentiary claims and arguments, frequently undermines the epistemic status of the “facts” produced in the first place.
But it is precisely this history of scrutinizing the construction of facts, and the power structures that lead to the acknowledgment of a knowledge claim’s facticity, that has caused many natural scientists to view STS as a dangerous attack on truth, reality, and rationality—and on the very tenets of expertise on which their power and credibility depend.
Science Wars 2.0?
This fake news era is causing a crisis not only for journalists, but also for science studies scholars. While the cast of characters differs slightly, these intellectual debates strongly echo the Science Wars of the 1990s. During that time, scientific realists attacked the very notion of symmetry in the analysis of knowledge claims as a mere rejection of scientific objectivity and of the scientific method. They held these things to be fundamental to the power of scientific claims: truth existed and was real, and by refusing to make this common-sense judgment call, social constructivists were cast as radical anti-truth science deniers. This view seemed validated when physicist Alan Sokal, building on Norman Levitt and Paul Gross’s critique of the academic left, managed to publish a phony (fake news?) article on postmodernism in one of the field’s premier journals, challenging, at its core, the credibility of the literature on social construction (see Hilgartner, 1997).
But did these critics of the academic left have a point? Are we unintentionally empowering the science denialism typically attributed to the far right, simply by the types of questions we ask and the normative decisions we license ourselves to make? Am I—a Ph.D. student in a program training me to think of facts as social constructs and reality as inherently complicated; in a program training me to teach undergraduate STEM majors that “there is no such thing as natural, unmediated, unbiased access to truth” and asking them to think critically about their place in the knowledge economy—complicit in the very thinking that science activism and policy (which I largely support) is trying so hard to combat? Latour raised the proverbial white flag in an attempt to change the course of the Science Wars by famously writing: “Was I wrong to participate in the invention of this field known as science studies [where dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives against climate change]? Is it enough to say that we did not really mean what we meant?” (p. 227) Did Latour have a point?
More recently, these questions have been posed in a different way: As Steve Fuller puts it, “Is STS all Talk and no Walk?”
Where does #scicomm fit in?
After parsing the contested nature of this post-truth world and the social analyst’s place within it, it is worth considering how the discourse of “fake news” and “alternative facts” has been employed by science communicators as a strategic method of practice. Bernadette Bensaude-Vincent establishes in her 2014 work that so-called buzzwords are a communication tool that can do political work. How are scientists discursively assembling something they call an “alternative fact,” and what work is such a process doing? While Bensaude-Vincent’s work focuses on public engagement as a buzzword, the framework she builds can likewise be applied to the terms I evaluate in this post. Although the precise meaning of a buzzword is often unclear and better understood as a “word collective,” the ambiguity of buzzwords allows them to perform one of their key functions without constant interrogation: the bringing together of various stakeholders with different social and political agendas (p. 243; p. 245).
Buzzwords are similar to boundary objects in that they can enroll various actors and are subject to interpretive flexibility (p. 246). The buzzword “public engagement in science” brings together multiple actors—science managers, citizens, associations, and STS scholars—who engage in “strategic misunderstandings” (p. 244); the buzzwords “post-truth,” “alternative fact,” and “fake news” bring together scholars, activists, scientists, journalists, and anonymous commentators on Twitter, who often have competing interests or motivations. The networks of people enrolled behind buzzwords give buzzwords their power (p. 248). Despite the misunderstandings surrounding buzzwords, they can, according to Bensaude-Vincent, act as “pacifiers”: their interpretive flexibility allows them to “create peaceful collectives of people with competing agendas” (p. 246; p. 250), similar to a “trading zone.” Another way that buzzwords enroll actors is by indicating goals or “setting agendas” (p. 246). Buzzwords are “value-laden” but not authoritative; they prescribe a desirable course of action that reflects societal values, but they carry only a kind of “soft power” (p. 249; p. 250). Buzzwords, though good at preventing conflict, may conceal the diversity and complexity of the social and political interests that surround them (p. 250).
Who gets to judge what meets the requirements to carry and benefit from the power of these buzzwords? Harry Collins offers an academic typology of expertise—one based on inherent epistemic and cultural authority for constructing reality—but the actor category of “expert” as deployed in natural language also helps to establish credibility (and subsequently serves as a proxy for consensus). In his 1995 essay on credibility, Steven Shapin compares the modernist ideal of obvious truths with the obdurate reality of persuasion, using the metaphor of King Lear and his daughter Cordelia. Shapin writes: “The recognition of truth ought to be simple. The truth of knowing and the truth of being perhaps ought to be the same, but in practice we can never be quite sure that they are” (p. 256). He continues:
Once upon a time, so the story goes, students of science, too, believed that truth was its own recommendation, or if not that, something very like it. If one wanted to know, and one rarely did, why it was that true propositions were credible, one was referred back to their truth, to the evidence for them, or to those methodical procedures the unambiguous following of which testified to the truth of the product. Alternatively, if one wanted to know, as one usually did, why false claims achieved credibility, one pointed to an assortment of contingent circumstances that caused people to hold dear what was in fact worthless. That is to say, once upon a time pronouncements of validity were considered adequate responses to questions about credibility. And, indeed, it would be a very narrow and pedantic view of the matter to refuse to recognize that, for most students of science, so far as we know, for most laypeople, they still count as such.
“So we must ask ourselves: How do we maintain our stance that scientific knowledge is coproduced, that ‘facts’ could be otherwise, without endorsing the idea of ‘alternative facts’ as used by the Trump administration?” writes my colleague D.P. He summarizes my extremely dissonant feelings well: “Our work [as science and technology studies scholars] has become much more complicated and now requires a higher degree of nuance and attention because of the fresh relevance to global sociopolitical order.” Our ability to critically evaluate facticity and truth can itself be a resource in an era when the value of “truth,” and the reality it corresponds to, is being challenged at its foundation.
But I, perhaps naively, hope that the questions I have found myself asking above (but not exactly answering!) speak not just to STS scholars struggling to engage productively with this new rhetoric, but to socially engaged scientists and science communicators as well. Ultimately, the impact of this post-truth discourse does not lie in the actual truth or falsity of the information shared; it lies in the strategic work that is done when the terms are employed. There are significant consequences to the New York Times now being “fake”; to science we simply do not like being dismissed as “fake news” or “junk science” or “pseudoscience”; to expertise no longer being valued as credible.
The 2016 election of a climate-change-denying, oil-dependent, vaccine-skeptical President of the United States is certainly indicative of these challenges to the value and credibility of expertise. Scientists must seize this crisis as an opportunity, but should remain wary of political, rhetorical promises to #marchforscience or #standupforscience by defending science explicitly from #fakenews and #alternativefacts. Empty promises of [this type of] uncritical #scicomm may be what got us here.
Bensaude-Vincent, B. (2014). The politics of buzzwords at the interface of technoscience, market and society: The case of ‘public engagement in science’. Public Understanding of Science, 23(3), 238-253.
Fuller, S. (2016, December 15). The Guardian: Science has always been a bit ‘post-truth.’ https://www.theguardian.com/science/political-science/2016/dec/15/science-has-always-been-a-bit-post-truth
Fuller, S. (2016, December 26). Embrace the Inner Fox: Post-Truth as the STS Symmetry Principle Universalized. Social Epistemology Review & Reply Collective.
Gilbert, G. N., & Mulkay, M. (1984). Opening Pandora’s box: A sociological analysis of scientists’ discourse. Cambridge University Press.
Hilgartner, S. (1997). The Sokal affair in context. Science, Technology, & Human Values, 22(4), 506-522.
Latour, B. (2004). Why has critique run out of steam? From matters of fact to matters of concern. Critical Inquiry, 30(2), 225-248.
Milman, O. (2017, April 22). The Guardian: March for Science puts Earth Day focus on global opposition to Trump. https://www.theguardian.com/environment/2017/apr/22/march-for-science-earth-day-climate-change-trump
Pierre-Louis, K. (2017, May 18). Popular Science: Exclusive: House Science Committee members just sent a letter to President Trump insisting he stop relying on fake news. http://www.popsci.com/house-science-committee-letter
Pinch, T.J. and Bijker, W.E. (1984). “The social construction of facts and artefacts: Or how the sociology of science and the sociology of technology might benefit each other.” Social Studies of Science 14(3): 399-441.
Shapin, S. (1995). Cordelia’s love: Credibility and the social studies of science. Perspectives on Science, 3(3), 255-275.
Schütz, A. (1943). The problem of rationality in the social world. Economica, 10(38), 130-149.
Sismondo, S. (2017). Post-truth? Social Studies of Science, 47(1), 3-6.