Screening for Magazine Readers
Most magazines rely on advertising revenues to support their operations. To justify their advertising rate cards, magazine publishers need to document their readership. At the most fundamental level, publishers must supply their circulation figures: the number of printed copies minus the number of unsold and returned copies. Publishers are usually required to have independent auditors certify these circulation numbers. Even so, publishers recognize that circulation does not equal readership, since a single copy of a magazine in circulation may be read by many different people. More importantly, the circulation figure says nothing about the characteristics of magazine readers. Accordingly, magazine publishers turn to consumer surveys to document the quantity and quality of their readers.
As with much of survey research, the exact phrasing of a survey question may lead to quantitatively and qualitatively different answers. For example, the simple question "Do you read magazine X?" will elicit some very confused responses, as it does not refer to any issue, timeframe, location or degree of involvement. Around the world, there have been many studies on the appropriate methodology for measuring magazine readership. There is a class of methods labeled 'gold standards,' but they are too time-consuming and costly for commercial implementation. Instead, the most common approach is a two-step process. In the first step, there is a screen question such as "Have you read or looked into any issue of magazine X in the last six months?" For those who pass the screen question (and only for those people), there is a readership question such as "When was the last time that you read or looked into any issue of magazine X?" A reader is then defined as someone who has read the magazine within the publication period (that is, in the past seven days for a weekly, in the past month for a monthly, and so on). The preceding example describes the 'recency' method.
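To make the logic concrete, here is a minimal sketch of the recency classification in Python. The function name, the data layout and the day counts for each publication frequency are assumptions made for illustration; only the two-step logic comes from the description above.

    from typing import Optional

    # Publication period in days, by frequency (assumed mapping).
    PUBLICATION_PERIOD_DAYS = {
        "weekly": 7,
        "monthly": 30,   # approximating "in the past month"
        "quarterly": 91,
    }

    def is_reader(passed_screen: bool,
                  days_since_last_read: Optional[int],
                  frequency: str) -> bool:
        """Classify a respondent under the recency method."""
        if not passed_screen:
            # Respondents who fail the six-month screen are never
            # asked the readership question at all.
            return False
        if days_since_last_read is None:
            return False
        # A reader is someone whose last reading falls within the
        # magazine's publication period.
        return days_since_last_read <= PUBLICATION_PERIOD_DAYS[frequency]

    # Someone who screened in and last read a weekly five days ago
    # counts as a reader; ten days ago would not.
    print(is_reader(True, 5, "weekly"))   # True
    print(is_reader(True, 10, "weekly"))  # False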
Of course, one may wonder why two steps are necessary instead of proceeding directly to the readership question. It turns out that, without the screen, the readership numbers become significantly inflated. Psychologically, a survey respondent who is presented with a long list of magazines feels socially pressured to indicate some activity, which results in overclaimed readership. The screen question allows the respondent to give some of these socially desirable responses without feeling obliged to also check the more important readership question.
In most readership studies, the data for the screen questions are not released; the screen question is just an intermediary device without any value of its own. However, the screen question is very often used to monitor data quality. One of the key attributes of the screen data is that the ratio between the readership level and the screen level is fairly constant. For example, in the MARS study, the read-to-screen ratio (defined as 100 times the average readership level divided by the average screen level) is 50.0 in the year 2002 survey and 50.4 in the year 2001 survey. That is, about half the people who screened in are readers.
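For illustration, here is how the read-to-screen ratio would be computed in Python. The formula is the one defined above; the audience levels in the example are invented, chosen only so that the ratio comes out at the published value of 50.

    def read_to_screen_ratio(readership_pct: float, screen_pct: float) -> float:
        # 100 times the average readership level divided by the
        # average screen level, as defined in the text.
        return 100.0 * readership_pct / screen_pct

    # Hypothetical audience levels (percent of survey respondents).
    screen_level = 12.0       # screened in over the last six months
    readership_level = 6.0    # read within the publication period

    print(read_to_screen_ratio(readership_level, screen_level))  # 50.0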
Within a survey, there may be considerable differences in read-to-screen ratios by magazine. In the following figure, we show the frequency distributions of the read-to-screen ratios for the MARS study (year 2002 in blue and year 2001 in orange). While the average read-to-screen ratio in each of the two years was nearly 50, the spread is roughly plus or minus ten points.
These variations are not random fluctuations; they reflect the characteristics of the respective magazines. Imagine, first, a magazine that is received mostly through subscription at home (TV Guide is a good example). Most of its readers read it on a regular basis, so they will answer "Yes" to the screen question and also "Yes" to the readership question, resulting in a relatively high read-to-screen ratio. Imagine next a magazine that is most often read irregularly outside the home, in places such as doctors' offices, beauty salons and so on. Many people will answer "Yes" to the screen question but fewer will answer "Yes" to the readership question, resulting in a relatively low read-to-screen ratio.
In the next figure, we show the read-to-screen ratios in the MARS study for the same magazines in the years 2001 and 2002. There is a strong positive correlation between the two years. In this manner, the survey research service can track data quality over time, with the expectation that the read-to-screen ratio should be fairly constant for each magazine.
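A hypothetical sketch of that tracking in Python: compute the correlation of per-magazine read-to-screen ratios across two survey waves. The magazine names and ratio values below are invented; a stable, magazine-specific ratio would show up as a high correlation between waves.

    from statistics import correlation  # requires Python 3.10+

    # Invented per-magazine read-to-screen ratios for two waves.
    ratios_2001 = {"Magazine A": 58.0, "Magazine B": 44.0, "Magazine C": 51.0,
                   "Magazine D": 40.0, "Magazine E": 55.0}
    ratios_2002 = {"Magazine A": 57.0, "Magazine B": 46.0, "Magazine C": 50.0,
                   "Magazine D": 42.0, "Magazine E": 53.0}

    magazines = sorted(ratios_2001)
    x = [ratios_2001[m] for m in magazines]
    y = [ratios_2002[m] for m in magazines]

    # A correlation near 1 suggests the ratio is a stable,
    # magazine-specific trait rather than survey noise.
    print(round(correlation(x, y), 3))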
The constancy of the read-to-screen ratio has also been used to monitor the quality of interviewers. In many large-sample studies, hundreds of interviewers work across a wide area. Most companies have dedicated quality-control staff who conduct random checks, but it is obviously desirable to have indicators of potential problems, such as interviewers filling out the questionnaires themselves. As it turns out, the read-to-screen ratio is not an easy thing to fake, since interviewers will seldom know what the expected patterns and values ought to be. Thus, when the work of a particular interviewer shows a systematic departure from the historical norms for the read-to-screen ratio, further scrutiny is in order.
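One way such a check might be implemented is sketched below. The historical ratios, the interviewer figures and the three-standard-deviation cut-off are all assumptions for illustration; the point is simply to flag systematic departures from the historical norm for review.

    from statistics import mean, stdev

    # Read-to-screen ratios from past waves (invented values).
    historical_ratios = [50.4, 50.0, 49.2, 51.1, 50.7, 49.8]
    norm, spread = mean(historical_ratios), stdev(historical_ratios)

    # Ratios computed from each interviewer's completed questionnaires
    # in the current wave (invented values).
    interviewer_ratios = {"interviewer_01": 50.3,
                          "interviewer_02": 72.5,   # suspicious
                          "interviewer_03": 49.1}

    for who, ratio in interviewer_ratios.items():
        z = (ratio - norm) / spread
        if abs(z) > 3:  # assumed cut-off; a real study would tune this
            print(f"{who}: ratio {ratio} departs from norm {norm:.1f} -- review")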
(posted by Roland Soong on 02/16/2003)