Slow progress at SANAC

Photo by Samantha Reinders, courtesy of Médecins Sans Frontières


The National Strategic Plan lists the HIV- and TB-related targets that South Africa aims to achieve by 2016. But more than a year into the latest NSP, we still have little idea of whether the country is on track to meet these goals. To complicate matters, the unit within the South African National AIDS Council (SANAC) tasked with monitoring progress is understaffed and is only now getting off the ground.

What are we monitoring?

South Africa monitors a wide range of health indicators, but the information used is often unreliable. There are many reasons for this weakness. The country relies mostly on paper-based systems, and data gathering is often done by over-burdened healthcare workers. Information compiled at the health facility level is forwarded to the district and provincial levels before finally reaching the national Department of Health. In systems like this there is potential for errors, delays, lost files, and the double counting of data. Official health data and statistics may therefore not accurately reflect what is happening on the ground. In addition, not all health centres collect the same data, which makes a comprehensive national picture of health burdens difficult to establish.

Collecting data is useful, but not all indicators are relevant or helpful for providing healthcare. Some even argue that South Africa may have too many indicators. Primary healthcare measurements, such as how many patients are seen at particular clinics, are useful in theory but are vulnerable to human error and do not reflect the quality of care. Requiring healthcare workers to record many indicators burdens them with further demands: it may degrade the quality of the reporting, and may even reduce the time already-stressed health workers have to care for patients.

A three-tier system

The Department of Health decides what data are to be collected and how this is done. Data are currently gathered via three different systems, or tiers. The first tier is a paper-based system, used in the majority of the country’s health facilities. The second tier is an electronic antiretroviral therapy (ART) register, currently used at 1,600 facilities. The third tier is an electronic health register that captures all available health information. This third form of data gathering is currently used at only 30 facilities.

Dr Yogan Pillay of South Africa’s Department of Health says that the comprehensive tier-three system will be rolled out across the country as part of the new National Health Insurance (NHI) system. Each patient will carry a ‘smart card’, which will look like a credit card and hold all of their information electronically. Data will be automatically uploaded into a centralised national electronic system, making it more easily accessible and more accurate, and allowing patient information to be transferred from facility to facility as needed.

But Pillay says there are challenges to implementing this system. The tier-three system relies on computers being available at every facility, which is problematic given that some rural facilities do not even have electricity, and staff need training in how to collect the data.

Keeping track of SANAC

While South Africa has many health goals to aspire to, those concerning HIV and TB are outlined in the country’s National Strategic Plan for HIV/AIDS, STIs and TB, otherwise known as the NSP. While these goals are laudable, they are not always well defined. This has made the country’s success harder to track, says Dr Nevilene Slingers of SANAC, the organisation responsible for monitoring the implementation of the Plan. Some of the concepts used in the NSP are difficult to quantify: how, for example, does one measure stigma? The NSP’s targets have also only been set within a five-year timeframe, with no guidance about exactly what should be achieved in each year.

The NSP includes a ‘Comprehensive M&E Framework’, which can be used to define a set of sub-indicators and enable progress to be tracked. But more than a year into the current NSP, this framework has not been clarified or implemented, and SANAC’s monitoring and evaluation (M&E) team remains small. At the time of writing, it consisted of Dr Fareed Abdullah, SANAC’s CEO, and a Senior Manager: NSP Implementation. A promised annual progress report has not yet been released.

SANAC CEO, Fareed Abdullah, chatting to some of his colleagues. Photo by Masi Losi, courtesy of the Treatment Action Campaign Archive


Still no final indicators

While some indicators, such as treatment goals, are already measured on an ongoing basis by the Department of Health, far less baseline information is available for non-biomedical targets. For example, prior to SANAC’s recent release of the first comprehensive count of sex workers in South Africa (an estimate of 153,000 people), almost no accurate information about sex workers was available, even though this group is identified as a key vulnerable population in the NSP. Abdullah says that technical task teams have been set up to determine indicators for non-biomedical areas. Government departments, development agencies, and all sectors of civil society will provide input.

Historical and organisational problems have contributed to delays in SANAC’s monitoring and evaluation of the NSP. As Slingers notes, it has taken time for SANAC to design its structures and appoint staff. She observes: “You can’t have any M&E plan until you have [these arrangements] in place.” But Abdullah acknowledges “that SANAC has not made enough progress in [M&E]” and adds that “this [problem] has already been identified as a priority for the Secretariat for the current year.”

Abdullah says that a final list of M&E indicators will be determined by SANAC’s Programme Review Committee in “the coming months”, and expects publication of the first annual report by September. Slingers and Abdullah note that South Africa’s proposal to the Global Fund could act as a guide for further defining the M&E indicators and annual targets.

“What we are expecting to do is to select the most important indicators and targets which will capture 80% of the NSP, and we hope to report on that annually,” says Abdullah. “We might be late with our first annual report but there will be a report every year, and a proper mid-term evaluation.” SANAC will rely on the mid-term evaluation to determine whether the NSP targets are being achieved, and whether any indicators need to be reconsidered.

For its annual and mid-term reviews, SANAC will consider data from government departments, gathered as part of annual performance monitoring and evaluation. Data issued by provincial AIDS councils and donors, as well as data published in the research literature, will also contribute to monitoring progress.

“In general,” comments Abdullah, “we think we should draw from secondary sources. There are one or two programmes for which we think we might [need] … our own system of data collection, like the national sex worker programme … because we don’t believe … anybody’s actually collecting that data in a systematic, unified way.”

While acknowledging the slow progress of SANAC’s M&E programmes, Abdullah is hopeful. He points to the M&E framework developed for the Global Fund, and a report issued by SANAC for UNAIDS which details the country’s progress towards meeting UN-defined goals. “We’re taking important first steps,” he says.

[box]

Monitoring and evaluation: When less information can tell us more

“Monitoring and evaluation (M&E) is a complicated field,” says Professor Francois Venter, Deputy Executive Director of the Wits Reproductive Health and HIV Institute. Many people, he suggests, seem intent on measuring everything, but doing so makes no sense.

“The World Health Organization in the past has put out documents with over 80 data fields … The Department of Health [in] Gauteng Province …  translated it into a … form that had to be filled out for every single patient,” he says. “This [was] unrealistic and quickly discarded … There is no way a health worker can afford to spend 30 minutes on filling out a form for every single patient.”

Most experts we spoke to agreed. Venter argues that it would make more sense to collect data on fewer critical indicators and to make sure that this is done well. We can learn more from higher-quality data.

Better critical indicators will help us to get a deeper understanding of what is happening at the district healthcare level. “For example,” he says, “what does it tell us if patients are recorded as being initiated at a low CD4 count? Does it not tell us more if we [know] … how many are being retained on treatment at one year or two years? If [the answer] is 80-90% [of patients], we know we are doing a good job. If it is 50%, [this] is not good.”

Similarly, knowing viral load suppression rates for patients at the facility level can tell us a lot about the quality of the treatment people are receiving. If viral loads are suppressed in 90% of patients, he adds, we can tell that a facility is doing a good job.

[/box]

By Mara Kardas-Nelson and Marcus Low
Mara Kardas-Nelson is a journalist with the Mail & Guardian’s Centre for Health Journalism. Marcus Low is joint editor of the NSP Review and editor of Equal Treatment magazine.