Emily Kaiser
Published in BRIGHT Magazine · Apr 4, 2018

Photo illustration by Grant Cornett for BRIGHT Magazine

Hi, my name is Emily, and I’m a data geek.

My problem started when I became a financial journalist at Reuters. As a U.S. economics correspondent, I was immersed in a daily deluge of government data. I could tell you, to the third decimal place, how much Americans owed on their home mortgages. I knew how much time the average person spent eating (it’s approximately 1.17 hours per day; thanks, Labor Department!).

So it was a bit of a shock when I moved to Singapore as the Asia economics correspondent and couldn’t even reliably determine the size or growth rate of China’s economy.

Journalists aspire to produce accurate, fair stories. Yet we routinely rely on data that is neither of those things. It’s an even bigger challenge when covering international development. We rely on organizations such as the World Bank to collect and organize that data. But what happens when numbers don’t match what we observe?

xkcd via Flickr

Consider how we measure poverty: The accepted metric is the World Bank’s international poverty line, which currently stands at $1.90 per day. People living on less than that are considered extremely poor.

But that might not be true. Someone who lives in a barter economy, for example, might be counted as extremely poor even if they have adequate food, safe housing, and schools. This issue arose at Global Press, when a reporter questioned why an editor had added World Bank data classifying a region as extremely poor even though living standards were relatively high. (Full disclosure: I am chairman of Global Press and a rabid superfan of its accuracy standards.)

The World Bank data may have been technically accurate, but it wasn’t true because it didn’t account for goods and services exchanged for something other than cash.
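The gap described above is easy to see in miniature: the headline metric is a simple cash-income threshold, so any income in kind is invisible to it. Here is a minimal sketch; the $1.90 figure comes from this article, while the example incomes are hypothetical:

```python
POVERTY_LINE = 1.90  # World Bank international poverty line, USD per day


def is_extremely_poor(daily_cash_income_usd):
    """Classify a person by cash income alone, as the headline metric does."""
    return daily_cash_income_usd < POVERTY_LINE


# A subsistence farmer with adequate food and housing but little cash
# income is counted as extremely poor; a wage earner is not.
barter_farmer_cash_income = 0.50  # hypothetical
wage_earner_cash_income = 3.00    # hypothetical

print(is_extremely_poor(barter_farmer_cash_income))  # True
print(is_extremely_poor(wage_earner_cash_income))    # False
```

The sketch shows why the classification can be “technically accurate” yet misleading: nothing in the calculation accounts for goods and services exchanged outside the cash economy.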

As journalists, part of our job is judging the credibility of sources. We need to hold data sources to the same standard. Here are five ways to ensure you’re using the best available data effectively and fairly:

1. Numbers need context.

I once gave my son a 10,000 rupiah note from Indonesia. His eyes lit up. OK, that was cruel. I apologized (it’s currently worth $0.70 USD). But to this day, whenever we’re traveling, he asks about the exchange rate. Big numbers are hard for the human brain to comprehend without proper context. NPR did some good reporting on this during the 2008 financial crisis. Give a reference point. One of my favorite recent examples is the coverage of Jeff Bezos when he became the world’s richest man. It’s hard to wrap your brain around $105 billion. Fast Company put it this way: Bezos was worth more than the GDP of 125 of the world’s 195 nations.
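The reference-point trick is just arithmetic you can sanity-check yourself. A sketch using the figures from this section; the implied exchange rate is derived from the article’s $0.70 valuation, and the GDP list is hypothetical, standing in for real World Bank figures:

```python
# Implied exchange rate: a 10,000-rupiah note is worth about $0.70 USD.
idr_per_usd = 10_000 / 0.70  # roughly 14,286 rupiah per dollar


def idr_to_usd(rupiah):
    """Convert Indonesian rupiah to U.S. dollars at the implied rate."""
    return rupiah / idr_per_usd


# Magnitude comparison: how many economies is one fortune larger than?
net_worth_usd = 105e9  # $105 billion, as cited above
gdps_usd = {           # hypothetical GDPs, for illustration only
    "Country A": 50e9,
    "Country B": 90e9,
    "Country C": 200e9,
}
smaller_economies = [c for c, gdp in gdps_usd.items() if gdp < net_worth_usd]

print(round(idr_to_usd(10_000), 2))  # 0.7
print(sorted(smaller_economies))     # ['Country A', 'Country B']
```

Swapping in real GDP data would reproduce the Fast Company comparison; the point is that a one-line division or comparison is often all it takes to give readers a foothold.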

2. Mind the reliability gap.

Big-name institutions carry credibility. But that doesn’t mean their data is always reliable. Just as you would examine a source’s motivations or conflicts of interest, it’s equally important to interrogate data sources. Who funded the research? Is it meant to support a political position or a cause? A tourism-dependent government, for example, may be inclined to underestimate malaria outbreaks, while a donor-reliant NGO might overstate them. Read the footnotes. Accurate doesn’t necessarily mean true. Kalev Leetaru delves more deeply into statistical fallacies here.

3. Sweat the details.

Ask researchers how they collected the data. When you’re dealing with surveys, something as simple as language can skew the results. Did the interviewers speak the same language as the survey subjects or did they use translators? Did survey subjects understand what they were being asked and why? At Fundacion Paraguaya, for example, poverty data is collected through a visual survey, using photographs. This helps prevent information from getting lost in translation, even when interview subjects can’t read.

4. Trust your expertise.

As journalists, we have a tendency to look for someone else to say something for us. But sometimes you are in the best position to collect the data yourself. There’s a name for that. It’s called reporting. Don’t be afraid to build your own data set. Be honest with your audience about how you assembled the data. Be specific. How many people did you contact? By what means? One of the best examples of this comes from ProPublica, which collected nearly 5,000 stories about maternal harm. ProPublica sets the gold standard on building databases. Few organizations can match its resources. Fortunately, ProPublica also sets the gold standard on sharing data. (Full disclosure: I donate to ProPublica.)

5. Follow your observations.

Anecdotes should never replace solid data. But when what you see clashes with the available statistics, it’s worth a closer examination. Make an extra phone call to find out how, when and where data was collected. With academic studies, ask researchers whether they share results with NGOs and policymakers on the ground. That can provide an added layer of authenticity when local experts have reviewed the findings and consider them sound enough to guide policy decisions.

Please subscribe to our weekly newsletter, and follow us on Facebook and Twitter. If you would like to reproduce this story, please contact us at hello@honeyguidemedia.org.
