Monitoring and evaluation (M&E) plans are designed before a project begins and form part of the project design process itself. This page has the following sections:
- Key stages in the development of a project plan
- Identifying target outcomes
- Acceptability and satisfaction
- OAD ethical guidelines
- Project lifecycle
1. Problem definition: Given the risks of unintended consequences, if there is no problem it is probably most helpful not to interfere. It is therefore important to start by clearly and explicitly defining the problem a project is going to attempt to address. The nature of the problem should be clearly defined and its existence confirmed through grassroots knowledge and empirical research. For example, it is unsafe to assume that students in all contexts lack motivation or interest in science. In some places, lack of motivation or interest may reduce science participation rates; in other places, however, students may be extremely keen but unable to continue science participation for other reasons (e.g. lack of financial resources, transportation etc.).
2. Needs or Stakeholder Analysis (sometimes called ‘Front-End Evaluation’): before a project is designed, it is important to understand the scale and nature of the problem an intervention seeks to address. This is part of checking whether a problem not only exists but also is one that (a) people want assistance with and (b) might be feasibly addressed with available resources. Needs Analysis means using data and asking stakeholders about what is happening on the ground, what resources are available and what kind of intervention they want. It is part of a process that has been shown to minimise risk of harm and unintended consequences and increase the likelihood of a project’s sustainability.
3. Theory: a project’s theory of change, explaining why and how the project is expected to work. The theory should ideally draw on the results of the needs analysis and on existing research (e.g. behavioural science findings and previous evaluations in education and international development). This is where the evidence resources offered by the OAD (e.g. the database of systematic review findings) can be particularly useful.
4. Outcomes: a list of a project’s primary and secondary goals, meaning the outcomes it aims to affect, the (estimated) expected size and nature of the impact on each outcome and the timeline during which these effects are expected to occur. These outcomes can include theorised unobservable outcomes (e.g. inspiration) but should also include observable development-related outcomes (e.g. school attendance; participation in class; test scores; enrolment in a science degree).
5. Population: a specification of the project’s target population, detailing who will participate in and/or benefit from the project. The population should be selected from those who were involved in the needs analysis and definition of the problem, and amongst whom the intended outcomes will be observed. It is better to have a narrow, well-defined population for whom a project has been carefully designed than to try to reach everyone. It is also important to consider variety within a target population, even when the target population appears narrowly defined. For example, even in a class of 11 year-olds, students may differ in terms of gender, physical and cognitive abilities, ethnicity, language preferences, interests, learning styles etc.
6. Scope or Reach: an approximation of how many people, communities or organisations are expected to take part and/or benefit. OAD projects are intended to be ‘pilot tests’ of innovative ideas; not large-scale policy changes that change the world in one go! It is therefore better to be realistic and modest than to over-stretch a project, trying to reach a huge number of people and end up sacrificing high-quality implementation.
7. Project Planning & Design: once relevant data have been collected to understand the nature of the problem and formalise objectives, the project itself can be designed and planned. In the ideal case, all prior stages are written up alongside a ‘project manual’ describing the project activities and how the project will be delivered. This should be detailed enough to allow future projects to understand the logic underlying your project design and replicate what you did.
8. Reporting Requirements: these should be determined with the OAD and/or any other funders and stakeholders involved in the project. Schools, for example, may require you to report certain measures (e.g. depending on where the project is executed, these could include records of obtaining parental consent, records of school visits, criminal background checks for adults involved in project delivery etc.).
9. M&E Plan: The M&E plan builds on reporting requirements to specify which evaluation questions matter most and then to design a monitoring and evaluation approach that specifies which data will be collected, how (including the design of measurement instruments), by whom and at which stage of the project. All education interventions that intend to improve science skills or understanding, for example, should at a minimum include written pre- and post-project assessments.
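The pre/post comparison mentioned in Step 9 can be sketched in a few lines. The participant identifiers and scores below are invented for illustration; a real M&E plan would specify the assessment instrument and scoring separately.

```python
# Minimal pre/post assessment comparison; all data are invented for illustration.
pre  = {"P001": 45, "P002": 60, "P003": 52}   # pre-project test scores
post = {"P001": 58, "P002": 63, "P003": 70}   # post-project test scores

# Per-participant gain and the average gain across participants.
gains = {pid: post[pid] - pre[pid] for pid in pre}
mean_gain = sum(gains.values()) / len(gains)

print(f"mean pre/post gain: {mean_gain:.1f} points")
```

A comparison like this shows change over time, not causal impact: without a comparison group, some of the gain may reflect maturation or practice effects rather than the project itself.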
Steps 3 through 8 draw on the findings of Steps 1 and 2. The final step of developing a project design and plan then draws on all previous ones to specify which types of evaluation will be conducted.
Outcomes are specific to each project. Assuming that a project has been designed and its theory has been specified, planning must include specification of intended outcomes and how these will be measured.
It is important to distinguish between project ‘outputs’ and ‘outcomes’.
Outputs are services, activities or components delivered by the project. For example, the number of teachers who attended teacher-training workshops; how many workshops were delivered; and materials written for use in the workshops would all be outputs for a teacher training project. Similarly, the number of students eventually reached by the project is an output. Outputs are not the benefits or changes that a project brings about for its stakeholders—they are the project components that were used to bring about those changes.
Outcomes are all the changes brought about by the project, both positive and negative. Measured outcomes are related to the objectives of the project. For example, implementation of new teaching techniques in the classroom might be a target outcome for a teacher-training project. Outcomes include all changes, benefits, learning or other effects that resulted because of what the project did (i.e. that would not have occurred if the project had not been delivered). Outcomes can occur at different levels (or combinations of levels), including amongst individuals, households, communities or organisations.
Outcomes are identified on the basis of needs analysis and the project’s theory. The most important outcomes are observable changes that are directly related to the project’s objectives. While unobservable outcomes (e.g. inspiration, joy, happiness, peace and fulfilment) are all important, they do not constitute development outcomes. The Sustainable Development Goals (SDGs) focus on ways in which people’s lives can be observably, objectively and measurably improved. If we hope to use astronomy to impact on development, we therefore need to be mindful of the importance of observable outcomes.
For example, a project with the goal of “improving science education” might have the intention to inspire students using astronomy in order to increase their active participation in science classes or the rate at which they continue with science through high school. The outcomes of interest might then include class participation (e.g. attendance, number of questions asked); science persistence rates; or rates of final year science test enrolment. Inspiration would serve as a theorised intermediate outcome specified in the project’s theoretical framework rather than its list of target outcomes.
The following questions can be used as a starting point for identifying relevant outcomes:
- What are the project’s most important (primary) goals?
- What are the project’s additional (secondary) goals?
- For each goal, are there observable changes that will occur if the goal is achieved?
Once potentially observable outcomes have been identified, specific ways of measuring these (either directly or indirectly) can be identified or designed. This is returned to later.
Depending on the project’s theory, outcomes can occur over a long timespan, encompassing short-, medium- and long-term effects. Each of the goals and outcomes identified using the questions above should be reconsidered against a timeframe to determine which outcomes are expected to occur at which stage (or at multiple stages) after the project’s completion. This is an important part of outcome specification because it informs plans for outcome data collection.
An outcome of universal importance is how stakeholders feel about an intervention. Satisfaction with an intervention is particularly important when stakeholder support is needed to make a project sustainable. Participant satisfaction and dissatisfaction can also serve as a useful (and easy-to-obtain) indicator of whether and how a project’s design and implementation can be improved. The process of collecting data on how participants felt about a project can produce important insights on how a project might be improved in future.
It is very important to note, however, that satisfaction and impact are not the same: studies have repeatedly found that projects can lead to high levels of satisfaction and enjoyment but nevertheless produce negative or no changes in target outcomes, and/or be less effective than alternative projects that produced lower levels of satisfaction. For example, students’ satisfaction ratings are often negatively correlated with teaching effectiveness: although there are some exceptions, teachers who are disliked produce larger learning gains on average. This may be because teachers who assign challenging work and set high standards are less likely to make students feel satisfied or happy but also push students to work hard and learn more in their classes. Students’ ratings are also heavily affected by social prejudices, with female and ethnic minority professors at Universities systematically ‘downgraded’ even when they produce better (objectively measured) learning outcomes.
The importance placed on participant and stakeholder satisfaction should therefore be considered carefully in relation to project goals, target impacts and project values. In some cases, satisfaction may be as important as impact. The key principle is that the two should not be conflated and that projects should seek to collect multiple sources of information.
Satisfaction is typically measured using self-report questionnaires. Depending on the nature of the project, however, attendance and completion rates can also be used as indicators of satisfaction (see below). Self-report data has a number of very important limitations that should always be borne in mind. Wherever possible, observable behavioural data should be collected alongside it.
For example, University professors routinely collect ‘course evaluations’. These would be more informative if collected alongside administrative data reflecting students’ subsequent course choices and/or performance: if 95% of an Introductory Astronomy class reported “loving the class” and finding it “life changing” but then never enrolled in another science class while at University, the course may have been enjoyable but will probably have failed if its objective was to stimulate interest in science.
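The contrast between satisfaction data and behavioural data can be sketched as follows. The student identifiers and values are invented for illustration; the point is simply that the two rates are computed from different sources and can diverge.

```python
# Hypothetical course-evaluation responses and subsequent enrolment records
# for the same students; all values are invented for illustration.
satisfied = {"S1": True, "S2": True, "S3": True, "S4": False}
took_another_science_class = {"S1": False, "S2": False, "S3": True, "S4": False}

# Self-reported satisfaction vs the observable behavioural outcome.
satisfaction_rate = sum(satisfied.values()) / len(satisfied)
followup_rate = sum(took_another_science_class.values()) / len(took_another_science_class)

print(f"satisfaction: {satisfaction_rate:.0%}, further science enrolment: {followup_rate:.0%}")
```

A large gap between the two rates would suggest the course was enjoyable but did not achieve its objective of stimulating continued interest in science.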
Attendance and Participation Records
A key part of any monitoring plan is the collection of data on who participated in a project and for how long. These data are essential for all aspects of evaluation and for documenting who was actually reached by the project.
Monitoring data collected on participation will vary according to the nature and aims of the project. Typical types of data collected to monitor project participation include:
- Unique identifier (assigned when inputting data)
- Date of participation application and/or enrolment (include both if these are distinct phases)
- Demographic characteristics (e.g. age, gender, ethnicity, disability)
- Clinically-relevant characteristics (e.g. education level for an educational intervention; drug history for a substance abuse intervention; socio-economic indicators for a cash transfer programme and so on)
- Duration of exposure (e.g. how long the individual participated)
- Intensity of exposure (e.g. how many sessions were attended? how many hours of participation did the individual engage in?)
- Which project components did participants receive, experience, take part in etc.?
- Did the participant complete the project or leave early? (if relevant)
- Whether or not the participant agreed to give feedback at the end of the project
- Whether or not the participant agreed to be contacted / followed for post-project evaluation data collection
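The fields listed above can be captured in a simple record structure. The field names below are illustrative assumptions, not an OAD standard; each project would adapt them to its own monitoring plan.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative participation record; field names are assumptions, not an OAD standard.
@dataclass
class ParticipationRecord:
    participant_id: str              # unique identifier assigned when inputting data
    enrolment_date: str              # ISO date, e.g. "2024-03-01"
    age: Optional[int] = None        # demographic characteristics
    gender: Optional[str] = None
    education_level: Optional[str] = None  # project-relevant characteristic
    sessions_attended: int = 0       # intensity of exposure
    hours_participated: float = 0.0
    completed: Optional[bool] = None # completed vs left early (if relevant)
    agreed_feedback: bool = False    # agreed to give end-of-project feedback
    agreed_followup: bool = False    # agreed to be contacted for post-project evaluation

rec = ParticipationRecord(participant_id="P001", enrolment_date="2024-03-01",
                          sessions_attended=5, hours_participated=7.5,
                          completed=True, agreed_followup=True)
```

Note that the record deliberately contains no name or contact details; those belong in a separate linking file, as discussed under anonymisation below.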
Anonymising Data Collection
A key issue in the collection of attendance and participation data, as well as with evaluation data, is confidentiality. Ethical guidelines for M&E work are very clear on the need for organisations to safeguard the privacy of all individuals when collecting data. This means that, wherever possible, attendance and participation data should be collected anonymously. Anonymity is particularly important if sensitive data are collected (e.g. social or political opinions, engagement in any risky or anti-social behaviours, religious beliefs, academic test scores etc.).
The simplest way to protect participant anonymity is to keep separate files: (1) one file should include participation data and a unique identifying number for each participant but NOT participants’ names, contact details or any other clearly identifying information; (2) if non-anonymised data must be collected and stored (e.g. for long-term follow-up), a second file should be kept that links the unique participant identifier to the participant’s name, contact details etc. No other data should be kept in this second file.
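The two-file scheme can be sketched as follows. The filenames, field names and roster entries are hypothetical; the point is that the participation file carries only the unique identifier, while the linking file carries the identifier plus contact details and nothing else.

```python
import csv

# Hypothetical raw roster mixing identifying and participation data.
raw_rows = [
    {"name": "A. Learner", "contact": "a@example.org",
     "participant_id": "P001", "sessions_attended": "8", "completed": "yes"},
    {"name": "B. Student", "contact": "b@example.org",
     "participant_id": "P002", "sessions_attended": "3", "completed": "no"},
]

# File 1: participation data keyed only by the unique identifier (anonymous).
with open("participation.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["participant_id", "sessions_attended", "completed"])
    writer.writeheader()
    for row in raw_rows:
        writer.writerow({k: row[k] for k in ("participant_id", "sessions_attended", "completed")})

# File 2: the linking key only -- identifier plus identifying details, no other data.
with open("link_key.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["participant_id", "name", "contact"])
    writer.writeheader()
    for row in raw_rows:
        writer.writerow({k: row[k] for k in ("participant_id", "name", "contact")})
```

In practice the linking file would also be stored with stricter access controls (e.g. encrypted, accessible only to the evaluation lead) and deleted once follow-up data collection is complete.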
The OAD’s Ethics Guidelines and framework will provide detailed information on how to protect your project’s participants and adhere to good practice on consent and anonymity. OAD Ethical Guidelines are currently being developed.
The lifecycle of an evidence-based project with a monitoring and evaluation framework is summarised in the diagram shown below. These stages break down the connection between the use of “Resources” and the project design and optimisation phases in the OAD Impact Cycle, with the measurements and materials produced comprising the “Evaluation” phase of the cycle and eventually feeding back into the Resources.