The NSP: Do we know what's happening?
“What gets measured gets done.” This is arguably the most important thing Margaret Chan, Director-General of the World Health Organisation, has ever said. Monitoring and evaluation is the map that should guide all actors in the healthcare system. As Ben Gaunt puts it: “Managing the health system without information is like trying to fly a plane blind.” And for government, reporting health information is also about accountability. How is the Department of Health spending taxpayers’ money, and what has it achieved?
This issue of NSP Review takes a hard, honest look at what is getting measured in South Africa’s National Strategic Plan for HIV, STIs and TB. It describes how little we know about the country’s current progress toward achieving the goals of the NSP. The health system’s routine data collection is generally weak, tracks too many indicators and produces information that is often unreliable. The roll-out of the electronic antiretroviral register (the famous TIER.net e-register) to 2200 facilities across the country is encouraging, and so is the futuristic vision of the National Health Insurance, under which every patient will have a health ‘smart card’ feeding into a national electronic system.
There are, however, many obstacles to overcome, and much more investment in monitoring and evaluation is required to move beyond rhetoric to real-world implementation. For example, the unit tasked with monitoring and evaluation at the South African National AIDS Council (SANAC) is understaffed and has yet to produce a first progress report on the NSP 2012-2016, while the list of indicators still needs to be finalised. The failure to achieve significant improvements in NHI pilot districts adds to these concerns, as does the neglect of rural Eastern Cape clinics described by Anso Thom. The vicious cycle of rubbish-in, rubbish-out needs to be broken. When reports are not fed back to clinic level, many health workers do not see the benefit of data collection and hence do not prioritise accurate record keeping. This leads to poor-quality reporting, which in turn reinforces the perception that data collection is useless.
As Francois Venter puts it, when it comes to indicators, less is more; investing in the collection of a limited number of critical indicators will allow us to better understand what is happening and what is needed to improve the programme. To monitor the cascade from HIV testing to retention in care, we need to know numbers tested, linked to care, CD4 counts, initiations on ART, retention in care and viral load suppression per year on ART.
There are undeniable successes: more than 2.5 million people have started ART, and mother-to-child transmission has fallen below 3%. Yet these achievements are fragile, and there are worrying signs: increasing drug stockouts, the reversal of nurse-initiated management of ART in some areas, ongoing high transmission rates, and the failure to adequately support key population groups such as migrants, men who have sex with men and drug users. Key information on the programme is also missing. How many patients are still in care? How many of those in care have an undetectable viral load? Helen Schneider and Wim Van Damme argue that to enhance the response to the epidemic, there is a need to better integrate the information produced by the public health system, researchers and civil society. For this, increased transparency and dialogue are necessary.
In the end, what we measure (the indicators) and how well we measure it (the quality) tells us how much we really care about what we do. Yet as my friend Ernest Nyamato puts it, ‘In M&E we only think about teaching people how to seed, but they also need to harvest and learn to cook’.

Gilles van Cutsem, Medical Coordinator for Médecins Sans Frontières in South Africa