On July 26, 2012, a complaint was filed in a New York court. The plaintiff in that lawsuit was our very own New Delhi Television Ltd, or NDTV. The defendants, most notably, included Nielsen Holdings and the Nielsen Company. To quote the core complaint against Nielsen, Kantar and their India-based joint venture, Television Audience Measurement (TAM), the case concerned “losses” apparently suffered by NDTV on account of alleged “rampant corruption” in the television audience measurement business.
To the puzzled reader wondering what this squabble between a television channel and a television viewership survey agency has to do with the debate at hand over social indicators in Gujarat and leftist propaganda in op-ed columns: let me call upon your patience and direct your attention to what, specifically, NDTV’s problem was with the Nielsen-Kantar joint venture, TAM.
NDTV’s first concern with TAM was the survey sample size, which it wanted increased from 8,000 to 30,000 homes. The lack of an adequate sample size is a recurrent theme in NDTV’s complaint. The complaints get more serious with the added element of corruption in data collection. The complaint quotes unnamed senior officials of Nielsen and Kantar admitting that television audience surveys in India were generating inaccurate data due to rampant corruption. A more insightful view emerges from the findings of a committee that looked into the whole area of television audience surveys in India. Specifically, the committee found “sample sizes to be inadequate” and a “lack of reliability” in the methods used to collect data from sample homes.
The limitation of sample sizes in surveying a demographically heterogeneous population is not confined to the world of television audience measurement. As we have seen in India, election after election, exit polls and pre-poll surveys diverge pitifully from the final outcomes in States with multi-polar contests and complex demographics.
Which leads us to the question: if surveying something as technologically straightforward as television audience measurement can be so faulty and corrupt in a country like India, how much faith can we place in surveys of something as sociologically complex as human development indicators?
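The arithmetic of sampling error makes the point concrete. Here is a minimal sketch using the standard margin-of-error formula for a sampled proportion; the 10% channel share and the 20-strata split are illustrative assumptions of mine, not figures from the complaint:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of n homes (standard normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical channel watched by 10% of homes, at the two panel
# sizes contested in the NDTV complaint:
for n in (8_000, 30_000):
    moe = margin_of_error(0.10, n)
    print(f"n={n}: 10% share measured as 10% ± {moe * 100:.2f} pts")
# n=8000:  ± 0.66 pts
# n=30000: ± 0.34 pts

# But split the same 8,000-home panel across, say, 20 demographic or
# regional strata and each stratum has only ~400 homes:
print(f"per-stratum (n=400): ± {margin_of_error(0.10, 400) * 100:.2f} pts")
# ± 2.94 pts -- the error swamps the differences being measured
```

The aggregate panel looks respectable, but the moment the sample is sliced by region, language or demographic, the per-slice error balloons, which is exactly why heterogeneous populations punish small samples.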
I hope the patient reader now appreciates the detour from the intellectually malnourished debate on the social indicators of Gujarat to the murky world of television ratings: it questions the very credibility, indeed the pseudo-science, behind how social indicators are sampled and projected.
There are sound scientific reasons to debunk the entire approach by these leftist commentators who have been abusing lagging social indicators to make politically convenient arguments.
Take the case of the ‘Sample Registration System’, or SRS, that is used to measure infant mortality among other social indices. It is not a population-wide measurement like the Census. Instead, it relies on about 7,000 sample units. The data is collected not by professionals, nor through technology-enabled automation, but manually by part-timers. There is usually a six-month lag before independent audits. The sample itself is drawn from a decennial Census, during which demographics may have altered significantly; the current sample, for example, is still based on the 2001 Census data.
Take the case of another index that was bandied about by activists of all sorts in op-ed columns during 2011 and 2012 – the so-called ‘Indian State Hunger Index’. It was based on 2004-05 sample data and a statistical model developed in the West, with no independent validation or relevance for Indian conditions. Even the UNDP’s HDI is at best a lagging indicator.
If samples and statistical models were found to be faulty and unreliable for television viewing habits and electoral preferences, why, pray, should we deem them any more scientifically accurate for measuring social indicators?
Lastly, let me draw attention to a recent story in The Times of India on how elite pockets of Mumbai were found to have a skewed sex ratio. The story was interesting not so much for its political import as for the granularity of the data cited, getting down to specific pockets within the city of Mumbai. This kind of granularity is far more actionable and useful, as it points out specific areas where welfare interventions by local Government (municipal or panchayat) could potentially make a real difference.
Contrast this with pointless State-level aggregated social indices on infant mortality or sex ratio.
They neither pinpoint the dark zones nor the bright spots. They can’t tell you whether Surat is performing better or whether a certain pocket of Patna has a problem. These leftist sociological constructs are at best lagging indicators that end up making the case for Centrally-sponsored schemes and statist top-down welfare models. While these social indices help in academic analyses and keep the Delhi-based extended ecosystem of NGOs and activists gainfully employed, they are barely actionable.
The TOI story on sex ratio disparity across the city of Mumbai is a pointer to how we must fundamentally alter the debate on social performance, taking it away from the voodoo statistics and pseudo-science of the political Left.
Today the Nielsen method of panel-based media audience measurement is being challenged by upstarts armed with big data analytics and population-wide, Census-based methods of audience measurement. Recently, the Obama campaign employed similar population-wide behavioural analytics to micro-target voters and ensure his re-election. There is no reason why we in India must not look to technology to devise ingenious methods for near real-time data collection and population-wide analytics of social performance. This will not only help micro-target and localise welfare interventions by local Governments (as opposed to centralised schemes) but will also shift the focus away from agenda-driven politicking based on lagging indicators, towards a debate on actionable interventions that can make a difference here and now.
Let me end by relating an experiment the World Bank attempted in 2010, when it opened up all of its data sets through open APIs for an ‘Apps for Development’ contest. There is a lesson in this experiment for our Governments on how to win back the debate from the agenda-driven Left by embracing the idea of ‘Open Government’.
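Those World Bank data sets are still queryable today through the World Bank’s public Indicators API. A minimal sketch of what ‘open data through open APIs’ looks like in practice; the URL pattern and JSON shape follow the API’s public format, while the sample payload and its values are made-up placeholders for offline illustration:

```python
from urllib.parse import urlencode

BASE = "https://api.worldbank.org/v2"

def indicator_url(country, indicator, per_page=100):
    """Build a World Bank Indicators API query URL (JSON output)."""
    query = urlencode({"format": "json", "per_page": per_page})
    return f"{BASE}/country/{country}/indicator/{indicator}?{query}"

def latest_value(payload):
    """Pick the most recent non-null observation from a decoded response.
    The API returns [metadata, observations]; each observation carries
    'date' and 'value' fields."""
    rows = [r for r in payload[1] if r["value"] is not None]
    return max(rows, key=lambda r: r["date"])

# SP.DYN.IMRT.IN is the World Bank's infant mortality indicator code.
url = indicator_url("IN", "SP.DYN.IMRT.IN")
print(url)

# Illustrative payload mirroring the API's response shape (values are made up):
sample = [{"page": 1}, [
    {"date": "2010", "value": 45.1},
    {"date": "2012", "value": 42.3},
    {"date": "2011", "value": None},
]]
print(latest_value(sample))
```

A few lines of code against an open endpoint: that is the contrast with indicator data locked inside reports that surface years after the fact.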
It is time real facts based on near real-time data made these voodoo statisticians and pseudo-social scientists redundant, perhaps even irrelevant.