Number Soup: Behind the Research

In this edition of our “Behind the Research” series, we discuss the methods and analytical tools used to collect, process, and interpret the data that served as the basis for “Number Soup: Case Studies of Quantitatively Dense News”—a study undertaken jointly with researchers at PBS NewsHour.

by Jena Barchas-Lichtenstein, Bennett Attaway, John Voiklis, and Elliott Bowen
Sep 16, 2022

In 2019, the PBS NewsHour-Knology Participatory Research Lab launched "Meaningful Math," a project that investigates how journalists use numbers and statistics in their reporting, and how news users make sense of that information. One major component of this work was a study of the extent and depth to which the news media communicates quantitative information. This resulted in two peer-reviewed papers: "Surveying the Landscape of Numbers in US News," which focuses on overall patterns, and "Number Soup," which explores the questions:

  • What kinds of characteristics do quantitatively dense news stories share?
  • What levels of numerical literacy are required to understand quantitatively dense media content?

For the study, we constructed a dataset of 230 news stories. Using a combination of quantitative and qualitative tools, we then broke these stories up into their constituent clauses (9,500 in all!) and assigned each clause codes based on the types of quantitative content it contained. The analysis for Number Soup focused on the stories and clauses with the highest number of codes.

The paper itself is published in Journalism Practice, and elsewhere on this site we offer a summary of our main findings. In this companion piece, we take a look behind the curtain, exploring the methods and tools the researchers used to collect, process, analyze, and interpret the data. To learn more, Elliott Bowen (Knology's writing and communications lead) spoke with three Knology researchers: Jena Barchas-Lichtenstein, Bennett Attaway, and John Voiklis. We've embedded a recording of the interview below, and a full transcript is also available.

Here are some of the highlights from the discussion:

What makes a story "quantitatively dense"?

JENA: We had a fairly complicated coding scheme. So we broke up articles into chunks, and each chunk could receive any of a whole bunch of codes: things like, this is about official statistics, or this is about percentages. And, really simply, any story that had an average of more than two codes per unit of text, we classified as dense. I think that was about a dozen stories. By contrast, there were stories in our dataset that had barely any codes at all. So two per chunk of text is really quite a lot.
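To make that rule concrete, here is a minimal sketch of what the density threshold looks like in code. The story data and function name are invented for illustration; the project's actual analysis pipeline was more involved.

```python
# A minimal sketch of the density rule described above (illustrative only).
from typing import List

def is_quantitatively_dense(codes_per_clause: List[int], threshold: float = 2.0) -> bool:
    """A story counts as "dense" if it averages more than `threshold` codes per clause."""
    if not codes_per_clause:
        return False
    return sum(codes_per_clause) / len(codes_per_clause) > threshold

# A 5-clause story carrying 12 codes overall averages 2.4 codes per clause:
print(is_quantitatively_dense([3, 2, 1, 4, 2]))  # True
print(is_quantitatively_dense([1, 0, 0, 1, 0]))  # False (average 0.4)
```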

What is a "clause," and how did you go about breaking down stories into their component clauses?

JOHN: Jena and I agonized over this, because there's a linguistic version of a clause. And there's what we did.

JENA: As a linguist, I have strong feelings. But at the end of the day, we needed something that would be consistent and relatively machine parsable.

BENNETT: So we ended up separating by periods and semicolons. And with the HTML that we scraped from the news websites, I was able to strip out a bunch of the formatting, and parse it into clauses. I still had to do some cleanup by hand afterwards, because we were getting content from a wide variety of news sites. And it's hard to account for all the potential differences in how things are laid out.
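A hypothetical sketch of the kind of preprocessing Bennett describes (not the project's actual script): strip the markup from a scraped page, then split the remaining text into "clauses" on periods and semicolons.

```python
# Illustrative sketch: strip HTML tags, then split on periods and semicolons.
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the text content of an HTML document, ignoring the markup."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def clauses_from_html(html: str) -> list:
    extractor = TextExtractor()
    extractor.feed(html)
    text = " ".join(extractor.chunks)
    # Split on periods and semicolons, then drop empty fragments.
    # Note: naive splitting also breaks on decimals like "3.5" -- one reason
    # hand cleanup was still needed afterwards.
    return [part.strip() for part in re.split(r"[.;]", text) if part.strip()]

print(clauses_from_html("<p>Cases rose 40%; officials urged caution.</p>"))
# ['Cases rose 40%', 'officials urged caution']
```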

What were the different codes you created when analyzing these clauses, and how did you go about assigning them?

BENNETT: So we had a number of different codes. We had some that were pretty basic, like "magnitude and scale," which just means a number, like "fifty dollars." And then we had "proportion or percentage," which is also really self-explanatory. A code that came up a lot in a lot of different stories was "comparison," which is pretty much what it says on the tin. We had "risk and probability," something that you hear about a lot, even if it's maybe not given as a specific number. You'll often hear things like "the unemployment rate is forecast to rise in the upcoming months." We had a code on "research methods," which could include enumerating everything or doing a sample. Because we did this data collection at the very beginning of the COVID pandemic, we had a lot of stories that reported on case counts. And those case counts often got another code that was for "official statistics," which is anything that's released by the government or a government-like organization. And then, we had a code for "central tendencies and exceptions." So, saying something like "the average American makes this many dollars," or, "this summer was abnormally hot compared to a typical year" would get that code. And then we had "variability," which was talking about how subgroups differ from the overall group. So, saying something like "White Americans agreed with this statement," or, "across the board, people are supporting this candidate." So, those are the codes that we had.
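For readers who like to see the structure, the codebook Bennett describes could be summarized as a simple lookup table. The example phrases below are paraphrased from the discussion or invented; and, as Jena noted above, in the actual coding a single clause could receive several of these codes at once.

```python
# A hypothetical rendering of the codebook, with one example phrase per code.
CODEBOOK = {
    "magnitude and scale": "fifty dollars",
    "proportion or percentage": "40 percent of respondents",
    "comparison": "twice as many as last year",
    "risk and probability": "the unemployment rate is forecast to rise",
    "research methods": "a sample of registered voters",
    "official statistics": "case counts released by the government",
    "central tendencies and exceptions": "the average American makes this many dollars",
    "variability": "White Americans agreed with this statement",
}

for code, example in CODEBOOK.items():
    print(f"{code}: e.g., {example}")
```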

As to how we assigned them: we went through several rounds, as a team, with stories that were not from this sample, but that we had collected earlier. We'd code these separately and then talk together to make sure we agreed about which codes should be assigned. And after we'd done a couple of practice rounds of that (we even checked interrater reliability, which just tells you how consistent people were with each other), I ended up being the one to assign codes to most of the stories and clauses in the dataset.
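The interview doesn't name the reliability statistic the team used, so as an illustration, here is Cohen's kappa, one common choice, run on invented labels and simplified to one code per clause. It assumes scikit-learn is installed.

```python
# Illustrative interrater-reliability check using Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

coder_a = ["comparison", "official statistics", "magnitude", "variability"]
coder_b = ["comparison", "official statistics", "proportion", "variability"]

# Kappa corrects raw agreement (3 of 4 clauses here) for chance agreement.
print(round(cohen_kappa_score(coder_a, coder_b), 2))  # 0.69
```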

JENA: One thing that I can add just really quickly is that the set of codes itself was developed iteratively, by reading news stories and talking about them. We had it reviewed by journalists, and we had it reviewed by math professors. A lot of different kinds of expert opinion went into deciding what the codes should be.

How did COVID impact the research?

JENA: I want to flag that as an ethics consideration that I wish people gave more attention to. We were doing this coding in the second half of March 2020. We'd just been sent home from work. And half of the stories were about COVID. And I would say a huge part of our weekly check-in—and really it was bi-directional—was both of us going, "how are you holding up?" "Is this topic too stressful?" "Do you need to do something else and take a break for a little while?" Because it's almost hard to remember now, but reading those stories at that time was—at least for me—viscerally anxiety-inducing. I couldn't read more than about two or three of them in a row without needing to do something else.

And I think sometimes we don't talk enough about the emotional work of certain kinds of research, and the ethical challenge of that. So that was something I really tried to be mindful of—that the topic was hard, especially at that time—and to make sure that we both built in the breaks we needed to be able to manage it.

JOHN: There's a lot of talk about what has been lost from not having offices. I'm not sure we would have been able to come up with the codebook without all of us really being together all the time. It would be interesting to see how we could do it virtually. Because that one was very much a process where we were all at each other's elbows.

BENNETT: I can say that all of the analysis and everything after that, we did remotely, and it went very smoothly.

JENA: Yeah, it worked out. I mean, we spent some time reading one another sentences over a Slack call. But that worked out. It was sort of like sitting next to each other.

How did you bring qualitative and quantitative methods together?

JENA: This is a John question, 100%.

JOHN: Yeah, so I'm trying to tamp down all the welling up of strong feelings about this, which is: I don't think qual and quant exist; they don't exist. They're just methods. They're just tools. We are just looking at behavior or data or something, or a phenomenon in the world. There are different lenses you can put on it, depending on the questions you ask. In the way that cognitive science and psychology are practiced nowadays, there has to be a constant seamless interplay between all sorts of methods and tools. We are looking at behavior, and there are many ways of understanding behavior. That said, there was some mindfulness about it here. We definitely wanted to bring together some of the tools of data science, because creating and going through the corpus is computational corpus analysis, and that's part of data science. But then we added the layer of hand coding. So that's human coding. That's a sort of human judgment coming into it. But even that, then, to assure replicability and to assure that we've reached the level of intersubjectivity—that's, again, a quantitative method.

JENA: And then this piece is—in some ways—almost pure discourse analysis, which is what I led, but the choice of what to analyze came directly from all of the steps that John just described. And not only were the quantitative and the qualitative seamless, but the link from the applied side of things was also very seamless. It's not a coincidence that half the authors of this paper are journalists. We couldn't have done it without that perspective either.

What methodological insights did you come away from the study with?

JENA: My biggest lesson is that thinking with someone who has very different training and a very different methods toolkit than you do is always a good idea. You don't have to know how to do everything. If you know a little bit about everything, then you can break it up so that you are doing the work that you're best at or most comfortable doing, but still get all the benefits of that larger toolbox. So I feel like my knowledge of many of John's methods is pretty abstract and high-level—I couldn't do most of it. But I know enough that we can have a really productive conversation about "how do we get at this thing?" and vice versa. So, you know, we spent a lot of time planning and preparing and sparring, and fun things happened at the end of it.

Funding

These materials were produced for Meaningful Math, a research project funded through National Science Foundation Award #DRL-1906802. The authors are solely responsible for the content on this page.

Photo by Annie Spratt on Unsplash

Transcript supported by Otter.ai
