Have you ever doubted the numbers in a news article? That is not unreasonable: misinterpreting the results of a study or a large data file is easily done. As a consequence, the news article built on those numbers will spread false assertions. In some cases the error is so obvious that you will doubt the information, but most of the time readers will assume the claims are true. To reduce such false assertions, the Dutch newspaper NRC Next introduced a new section a few years ago. In this section, called ‘Next checkt’, editors check whether particular newsworthy assertions are true or not.
They are not the only ones with this kind of section; in the United States, entire organisations already specialise in fact-checking the media. In this blog we will look at some mistakes that can be made when turning numbers into a news article. At the end we will look at an example of a news article in which it went wrong.
The risks of interpreting data
In data journalism there are five important W questions: Who?, What?, When?, Where? and Why? (Rogers). First of all, it is important to answer the question ‘Where did the data come from?’. Can you say it is a trustworthy source? The more reliable the source, the greater the chance that the data is correct. Besides that, transparency about your source matters too: if readers can see where you got the data, they will trust you more. You can already make a mistake in this first step by believing a seemingly trustworthy source immediately. So even when a source is reliable, it is necessary to question all the numbers in the dataset. The second step in the process is to create a clear story with the numbers, one that readers can follow easily. It is tempting to write a good story when you have amazing numbers, but stay close to the real figures. The third question deals with how old the collected data is: the more recent the data, the more up to date your story. The last two questions are about whether you can find other data to combine into a new story, and whether you can correlate the different data collections.
The Dove self-esteem research
In November 2012 the Dutch newspaper Telegraaf published an article with the title ‘Low self-esteem young girls’. This assertion was based on a self-esteem survey carried out by Panelwizard, commissioned by Dove. Dove claimed that merely one out of ten girls considered themselves beautiful. If we can believe the editors of ‘Next checkt’, this claim paints a more negative picture of the situation than the Panelwizard research itself suggests (click here to read the article of ‘Next checkt’ about this topic). The online questionnaire literally asked the girls to assess themselves. 11.9 per cent of the 503 Dutch girls between 10 and 17 years old described themselves as ‘beautiful’. This is in line with the claim, but the editors point out that there were more positive words than ‘beautiful’ for the girls to select: attractive (3.5 per cent), fun (33.9 per cent), hot/flashing (1.3 per cent), natural (6.3 per cent) and sexy (0.6 per cent). Only a small group (3.7 per cent) chose negative words (unattractive, ugly and unremarkable). The editors suggest that when you add up all these percentages, you could say that 57.5 per cent of the 503 Dutch girls are positive about themselves. This is not entirely accurate either, because some girls selected more words than others. Still, you certainly cannot say that the girls’ self-esteem is very low. Furthermore, the editors note that the research did not define the term self-esteem. What is self-esteem: only appearance? Finally, you can ask yourself whether the results Dove presented were objective. A look at Dove’s website shows that they offer a programme to improve your self-esteem, which you can join as a parent, but also as a teacher or school.
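The editors’ recalculation is easy to reproduce. The sketch below (a minimal illustration, using the percentages as reported above) shows how the 57.5 per cent figure arises, and why it is only an upper bound: girls could pick more than one word, so simply summing the shares can count the same girl twice.

```python
# Percentages of girls selecting each positive word, as reported in the
# Panelwizard survey (via 'Next checkt').
positive_words = {
    "beautiful": 11.9,
    "attractive": 3.5,
    "fun": 33.9,
    "hot/flashing": 1.3,
    "natural": 6.3,
    "sexy": 0.6,
}

negative_total = 3.7  # unattractive, ugly and unremarkable combined

positive_total = sum(positive_words.values())
print(f"Positive words combined: {positive_total:.1f} per cent")

# Because respondents could select several words, this sum may count the
# same girl more than once; 57.5 per cent is therefore an upper bound on
# the share of girls who chose at least one positive word, not an exact
# count of 'positive' girls.
```

Even with that caveat, the recalculated figure points in the opposite direction of the headline: far more girls picked a positive word than a negative one.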
So… What went wrong with the W questions?
In this example the journalist already went wrong at the first question, ‘Who?’. The source was not reliable enough, because it also had an interest in the results. He did not check other sources, which, according to the editors of ‘Next checkt’, would have shown results opposite to Dove’s. Had he checked other sources, he would also have been able to verify the numbers, keeping the story closer to the real figures. On the other hand, the results would have been less fascinating than Dove presented them. In that case you can ask yourself whether the numbers would still have been newsworthy enough, or whether the article would have appeared at all.