Essentials for Excellence: Researching, Monitoring and Evaluating Strategic Communication for Behaviour and Social Change with Special Reference to the Prevention and Control of Avian and Pandemic Influenza
The United Nations Children's Fund (UNICEF) operates from the conviction that strategic communication for behaviour and social change is a vital component of the avian influenza/pandemic influenza (AI/PI) global response. UNICEF staff attending an inter-regional AI/PI communication meeting in Bangkok, Thailand (August 2006), requested guidance on how to rapidly research, monitor, and evaluate (RM&E) strategic communication to strengthen UNICEF's efforts in supporting national counterparts in developing and implementing behaviour change communication (BCC) and social/community mobilisation strategies. These staff members asked: How do we know if strategic communication is addressing the right AI/PI issues among the right people, at the right time, in the right way? And how do we know if strategic communication is actually making a difference to AI/PI preparedness and response efforts? This 119-page publication seeks to provide a framework to respond to these questions.
"Essentials for Excellence" first became available in November 2006; since then, many of its ideas and tools have been put to use in several countries. This updated version reflects the lessons learned since then and new tools adapted from the field. The guide is intended for those who want straightforward answers to often complex questions (including sampling, research design, and pre-testing), who need handy tips, and who are looking for practical rather than academic advice. The guide's suggestions are meant to be adapted to suit each user's circumstances and needs.
The guide contains 5 modules:
1. How RM&E can help AI/PI strategic communication. It can:
- Ensure programmes are tailored to achieve highest sustainable impact at lowest cost;
- Offer forums for participation of various stakeholders;
- Test and assess programme effectiveness;
- Make a case to change programme inputs;
- Justify continued financial/political support;
- Answer stakeholder questions; and
- Provide feedback at all levels.
Within this first module, Table 1 outlines the types of research needed to guide and assess AI/PI strategic communication, Figure 1 illustrates how these research processes link together, and several paragraphs explore timing/personnel details related to RM&E.
2. How to conduct formative research at baseline - Formative research examines the current situation, develops objectives and baselines for subsequent measurement, and determines key concepts. It explores: Where are we now? Is strategic communication needed? Who needs it, why, how, when, and where? Formative research ranges from reviewing the findings of previous evaluations and analysing secondary data relevant to communication, through rapid appraisals of community organisation, to baseline knowledge, attitudes, and practices (KAP) surveys.
Listed here are 4 essential data collection tasks (LEAD) for formative research:
- Listen to and observe people to learn about their perceptions and grasp of the offered behaviours, including their sense of the benefits, costs, and convenience (time, effort, money) of the behaviours to their lives.
- Explore cultural, social, ecological, seasonal, gender-based, political, moral, legal, and spiritual factors that could influence the adoption of the proposed behaviours.
- Ascertain when, where, and from what or whom participant groups would like to receive information and advice on these recommended behaviours.
- Determine a baseline measure (or measures) for each intended result against which you can later remeasure to determine if change has occurred.
To facilitate RM&E planning, Table 2 provides a minimum method set for AI/PI communication formative research.
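Baseline KAP surveys of the kind described above raise an immediate sampling question: how many respondents are enough? As a minimal sketch (the function name and figures below are illustrative, not drawn from the guide), the standard sample-size formula for estimating a proportion can be applied like this:

```python
import math

def sample_size(p: float, margin: float, z: float = 1.96, deff: float = 1.0) -> int:
    """Minimum sample size to estimate a proportion p to within +/- margin
    at ~95% confidence (z = 1.96), inflated by a design effect (deff)
    when cluster rather than simple random sampling is used."""
    n = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n * deff)

# Worst-case assumption p = 0.5, +/- 5 percentage points, simple random sample:
print(sample_size(0.5, 0.05))            # 385
# Same precision under a cluster design effect of 2:
print(sample_size(0.5, 0.05, deff=2.0))  # 769
```

The worst-case p = 0.5 is the conventional default when no prior estimate of the indicator exists, since it maximises the required sample.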
3. How to assess immediate reactions to messages, materials, and proposed behaviours (pre-testing) - Key questions here are: Will this work? How should strategic communication best be carried out? Pre-testing:
- determines whether messages are clear and compelling;
- identifies unintended messages, totally unpredictable responses, and other aspects of materials that may require modification;
- helps select from among a range of potential messages and materials;
- offers some insight into whether messages and materials will generate the desired impact; and
- provides evidence, in support of subsequent monitoring and evaluation, that participant groups are paying attention to and comprehending the strategic communication.
UNICEF notes that materials such as posters and leaflets, and messages broadcast via the mass media, are especially important to assess before dissemination because they usually have to stand on their own, with few opportunities to explain what they mean.
Typical questions to ask members of participant groups are listed in Table 3. Basic methods to apply these questions and examples of samples are listed next. In brief, they include:
- Focus group discussions with participant groups (e.g., farmers). Advantages: can generate rich insights through dialogue. Disadvantages: difficult to quantify, need trained moderator, can be expensive.
- Intercept interviews of randomly selected individuals stopped (e.g., at poultry markets). Interviewers show examples of the materials and then conduct the interview. Advantages: easy way to collect data from large numbers of people quickly. Disadvantages: information is presented in an artificial setting, and results are not statistically representative of the population even if intercepts are random (e.g., they only capture the views of farmers who reach markets).
- Natural exposure testing of randomly selected individuals stopped at places where the materials are actually being displayed (in real-life settings such as clinic walls, bus stops, shops, market stalls, schools, etc.). The interviewer waits for respondents to "walk past" the material and then conducts an interview (to check for noticeability, recall, etc.). Advantages: easy way to collect data from large numbers of people quickly; materials are assessed in real situations. Disadvantages: can exaggerate people's attention to and comprehension of the materials.
UNICEF suggests that, for the most credible results, one should select a combination of the first method plus either the second or third one. "If you obtain 70% or above agreement on key questions among survey participants...you can be fairly confident that you have reached consensus among the sample."
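The 70% rule of thumb quoted above is straightforward to apply when tallying pre-test responses. A minimal sketch (the responses below are hypothetical, not data from the guide):

```python
def agreement_rate(answers):
    """Share of respondents giving the desired answer to a key pre-test
    question (e.g., 'Is the main message of this poster clear?').
    answers: list of 1 (yes) / 0 (no) codes."""
    return sum(answers) / len(answers)

# Hypothetical yes/no codes from 20 intercept interviews (1 = yes):
responses = [1] * 15 + [0] * 5
rate = agreement_rate(responses)
print(f"{rate:.0%}")  # 75%
print("consensus reached" if rate >= 0.70 else "revise material")  # consensus reached
```

In practice the rate would be computed per key question (Table 3) and per participant group, since a poster may test well with one audience and poorly with another.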
4. How to monitor processes and early changes - This process assesses reach, quality, participant satisfaction, and early indications of behavioural, organisational, and social change. It explores: How are we doing? To what extent are planned activities actually realised? How well is the information provided and dialogue supported? What early signs of progress can we detect? There are 4 main forms of monitoring:
- Implementation monitoring compares what is supposed to be happening with what is actually happening by tracking planned inputs and outputs, usually through a basic monitoring system such as a logical framework, workplan, or timetable.
- Process evaluation examines how well activities are being carried out according to parameters such as reach, quality, participant satisfaction, and levels of stakeholder participation.
- Behavioural monitoring measures intermediate behavioural results of programme activities, helping explain what is happening as a result of outputs such as training and how they link to longer-term changes as envisaged by programme results.
- Most Significant Change (MSC) monitoring allows for the systematic collection and participatory analysis of stories of change from the viewpoint of participants.
Table 4 outlines the key questions asked in each form of monitoring; how, with whom, or where these questions are asked; and suggested sample sizes.
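Behavioural monitoring as described above typically compares an indicator between the baseline and a later monitoring round. A minimal sketch of such a comparison, using a two-proportion z-test with illustrative numbers (not taken from the guide):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a change in a behavioural indicator
    (e.g., share of farmers reporting sick poultry) between a
    baseline (x1 of n1) and a monitoring round (x2 of n2).
    Returns (change in proportion, p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p2 - p1, p_value

# Illustrative figures: 120/400 at baseline vs. 180/400 at month 12
change, p = two_proportion_z(120, 400, 180, 400)
print(f"change = {change:+.0%}, p = {p:.4f}")
```

A statistically detectable change does not by itself attribute the change to the communication programme; that attribution question is the subject of the evaluation designs in module 5.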
5. How to measure and report impact - This stage measures behavioural, organisational, and social change results and determines the contribution of strategic communication to these results. Here one is asking: How did we do? What outcomes are observed? What do the outcomes mean? What difference did strategic communication make?
"There are at least two types of evaluation.
- Outcome evaluation focuses on whether strategic communication results, usually stated in terms of behavioural or social changes, have been achieved within a given time period; and
- Impact evaluation assesses the sustainability of changes identified in outcome evaluations some time after a programme has ended, determines whether its overall goal has been achieved, and analyzes the contribution strategic communication has made to this achievement.
At the very minimum, you want an evaluation that allows you to know that AI/PI communication is actually making a difference and that this difference is contributing to AI/PI preparedness and response. This type of research needs time, careful planning, investment of resources, and ideally should be conducted by a skilled evaluator or evaluation team..."
The remainder of this chapter presents detailed guidance for such a process - the "how-to's" of measuring the link between communication outputs (mass media, face-to-face communication, print, etc.) and results. In short, it is necessary, first, to assess whether the planned results of the AI/PI strategic communication have been achieved - whether the messages, materials, and proposed behaviours were well received, whether the programme has been implemented as planned, and whether participants have been reached, are satisfied, and so on. Second, "you must determine what contribution strategic communication has made to the outcomes/impacts. How much of the success (or failure) can be associated with strategic communication? Was the contribution worth the investment? Perhaps something else was influencing the observed changes such as an ongoing rural water, hygiene, and sanitation programme. Perhaps without AI/PI strategic communication, the observed changes would have occurred anyway, or would have occurred at a lower level or at a slower pace. In short, you need to assess your AI/PI strategic communication against an explicit 'counterfactual' - what happens when AI/PI strategic communication is not in place."
According to UNICEF, one of the strongest evaluation designs is to randomly assign participants to either intervention or non-intervention ("control") groups and then compare the same indicators in each group. Another credible way described here is to "match" those exposed to the AI/PI communication with those who are not exposed using a set of characteristics ("variables") such as ethnicity, socio-economic status, religion, geographic location, and so on. Propensity Score Matching (PSM) is next outlined here, and described as "also very useful, especially if your programme has invested heavily in mass media and therefore 'exposure' cannot be controlled." According to UNICEF, "When you have no possibility of comparing between intervention and non-intervention groups (and PSM is beyond your programme's technical capabilities), then the next best option is to collect a baseline amongst the intended intervention group before your AI/PI strategic communication commences and then re-measure the same indicators 12 or 24 months after implementation has commenced." Tips on how to analyse the contribution of AI/PI strategic communication, even in the absence of a baseline and comparison groups, are offered. A standard format for organising a research study or evaluation report is shown in Box 1.
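The matching logic UNICEF describes can be illustrated with a toy sketch of 1:1 nearest-neighbour matching on a propensity score. All scores and outcomes below are invented for illustration; in a real PSM analysis the scores would first be estimated from the matching variables (ethnicity, socio-economic status, etc.), e.g. by logistic regression:

```python
def att_by_matching(treated, control):
    """Average treatment effect on the treated (ATT) via 1:1
    nearest-neighbour matching on a pre-estimated propensity score.

    treated / control: lists of (propensity_score, outcome) pairs,
    where outcome is a 0/1 behavioural indicator."""
    diffs = []
    for score, outcome in treated:
        # match each exposed unit to the unexposed unit with the
        # closest propensity score
        _, matched_outcome = min(control, key=lambda c: abs(c[0] - score))
        diffs.append(outcome - matched_outcome)
    return sum(diffs) / len(diffs)

# Toy data: outcome = 1 means the household reports safe poultry handling
treated = [(0.82, 1), (0.61, 1), (0.35, 0)]
control = [(0.70, 0), (0.55, 1), (0.30, 0)]
print(att_by_matching(treated, control))
```

With samples this small the estimate is meaningless, of course; the sketch only shows the mechanics of matching and differencing that underlie PSM.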
Two toolboxes that provide basic advice on data collection methods and on sampling are included. A final Annex offers examples of research instruments for local adaptation that have been used in the Asia-Pacific region.
UNICEF Pacific Islands website, accessed October 22, 2009.