Getting ‘Evaluation Ready’ - how to evaluate your program in 5 steps

In the previous instalment in our Evaluation Ready series of blogs, we covered some of the fundamental concepts around evaluation—the what (basic definitions), the why (common reasons to evaluate), the who (anyone!) and the when (ideally before you start implementing).

This post will focus on the how, using a simple yet structured process. It can be scaled up or down depending on the size of the program you’re evaluating.

The five steps to evaluating any health, social or environmental change program are:

  1. Define the program

  2. Identify your stakeholders

  3. Create evaluation questions

  4. Plan the data you need

  5. Analyse and report

Note: We use the terms program, project, service, intervention and any similar term interchangeably—basically, any planned activity that aims to bring about change.

The plan you develop that outlines how you will do these five steps is often called an evaluation framework.

It works best when you create this framework before you start implementing your program, so you can collect the data you need from the beginning. You can still develop an evaluation framework retrospectively (i.e. when your program is ending), but it makes it trickier to find data that you need but haven’t collected.

Step 1: Define the program

Defining the program you’re evaluating is a critical first step—it helps you focus on the original intent and how the program was designed to achieve it.

The best way to do this is through the development of a program logic model, which may also be called an impact map. A program logic model aims to visually represent how a program leads to desired changes.

Developing a program logic is not technically evaluation work—it is an important step in the project planning process. However, a comprehensive program logic model forms the basis of any good evaluation framework.

Our tip: Really explore the key assumptions behind your service model! Programs often fail to achieve their objectives because of a wrong assumption about what will work. Identifying the assumptions you’ve made makes it easier to test them later.
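If it helps to make the structure concrete, here’s a minimal sketch of a logic model in Python. The components follow the common inputs, activities, outputs, outcomes convention, and the walking-program entries are invented purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ProgramLogic:
    """A minimal program logic model: how resources are intended to lead to change."""
    inputs: list = field(default_factory=list)       # resources invested
    activities: list = field(default_factory=list)   # what the program does
    outputs: list = field(default_factory=list)      # direct products of activities
    outcomes: list = field(default_factory=list)     # short- and medium-term changes
    assumptions: list = field(default_factory=list)  # what must hold true for the chain to work

# Hypothetical example: a community walking program (all entries invented)
logic = ProgramLogic(
    inputs=["two part-time coordinators", "12 months of grant funding"],
    activities=["weekly guided group walks", "participant buddy matching"],
    outputs=["40 walks delivered", "120 participants enrolled"],
    outcomes=["increased physical activity", "reduced social isolation"],
    assumptions=["socially isolated residents will attend group activities"],
)
```

Note how the assumptions sit alongside the causal chain: listing them explicitly is what makes them testable later.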

Step 2: Identify stakeholders

Evaluation is generally best done as a collective effort. Different stakeholders will bring unique perspectives based on their expertise, experience (professional or lived), influence and role (e.g. funders, donors), so building the evaluation framework around what matters to them will make it meaningful.

The types of stakeholder groups that might be relevant at different stages of an evaluation include senior management, program funders (current and future), program participants/consumers, community members/advocates, media, politicians and broader sector networks.

You should identify which stakeholder groups are most relevant to your priorities, then consider:

What are they likely to value most?

How and when should they be involved?

Step 3: Create evaluation questions

Evaluation questions are used to guide the scope and focus of your evaluation. When crafted well, evaluation questions generate meaningful and credible findings. Conversely, poorly crafted evaluation questions can mean you miss measuring what you intended to, or spend time collecting and analysing data that isn’t relevant.

A comprehensive evaluation will look at your program’s performance from different perspectives—we call these domains. There are five evaluation domains:

Appropriateness: whether the design of the program addressed the identified need in the given context.

Effectiveness: whether the program achieved its stated objectives.

Implementation: whether the inputs led to meaningful outcomes, and how well the program was governed and managed.

Impact: whether the program contributed to any longer term or system-level changes.

Sustainability: whether the program’s benefits have continued (or will continue).

Evaluation questions should be drafted against each of the five evaluation domains above. The number of questions you have under each domain will depend on the size and scale of your evaluation and the resources available to you.

It is often useful to start with an exhaustive list of potential evaluation questions, perhaps by brainstorming suggestions with a group of different stakeholders, then prioritise the questions you will eventually use.
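To show what the end result of this prioritisation might look like, here’s a hypothetical set of questions organised by the five domains. The questions are invented for a fictional community walking program and would be replaced with whatever your stakeholders agree matters:

```python
# Hypothetical evaluation questions, one per domain (all invented for illustration).
evaluation_questions = {
    "Appropriateness": ["Did the program design respond to the identified community need?"],
    "Effectiveness":   ["To what extent did participants increase their weekly physical activity?"],
    "Implementation":  ["Was the program delivered as planned, and how well was it governed?"],
    "Impact":          ["Has the program contributed to broader, longer-term community health changes?"],
    "Sustainability":  ["Are the program's benefits likely to continue beyond the funding period?"],
}

for domain, questions in evaluation_questions.items():
    print(domain)
    for question in questions:
        print(f"  - {question}")
```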

Our tip: These questions will be used to decide what data to collect and how it will be collected, and are often used to structure your evaluation reporting, so it’s vital to make sure they are agreed upon before you proceed.

Step 4: Plan your data

The next step in developing a comprehensive evaluation approach is to plan the data you will need to collect, and how and when to collect it.

This is the step where people often come unstuck. The term ‘data’ for many people suggests lots of numbers, formulas and statistics, which can be a bit overwhelming. But it doesn’t have to be that complex!

Data can be quantitative in nature, which generally aims to measure value and is expressed using numbers, figures and proportions. However, data can also be qualitative in nature, which aims to describe value and is expressed using stories, opinions, feedback and case studies. Ideally, you should aim to use a mix of both, as this can help to explain some of the changes you might observe.

Developing a data collection plan will help you map out what data you need to answer your evaluation questions, along with how it will be collected, by whom, and when.
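As a rough sketch of what such a plan can look like, here is one row per evaluation question, mapping each to its data, method, owner and timing (all entries are hypothetical):

```python
# A minimal, hypothetical data collection plan (entries invented for illustration).
data_collection_plan = [
    {
        "question": "Did participants increase their weekly physical activity?",
        "data": "self-reported activity minutes (quantitative)",
        "method": "pre/post participant survey",
        "who": "program coordinator",
        "when": "at enrolment and at program exit",
    },
    {
        "question": "Was the program delivered as planned?",
        "data": "attendance records and facilitator reflections (qualitative)",
        "method": "routine administrative data plus session debriefs",
        "who": "session facilitators",
        "when": "after each session",
    },
]
```

Even a plan this simple makes gaps obvious: if an evaluation question has no row, you have no data for it.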

Our tip: Don’t overthink it! Often the best data is the simplest and most straightforward. And use what you’ve already got before creating new methods for collecting data.

Step 5: Analyse and report

When it comes time to analyse and report on the findings, you should aim to answer the following questions:

What happened? What were the key issues or themes that emerged?

So what? Why is this important?

Now what? What actions should happen next?

Analysing your data requires you to draw insight from what you observe. This can be a subjective process, so involving others to verify your findings is important.
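To make the ‘what happened?’ step concrete, here’s a tiny worked example using pandas (assumed to be available) on invented pre/post survey numbers. It’s a sketch of a simple quantitative summary, not a full analysis:

```python
import pandas as pd

# Hypothetical pre/post survey results (all figures invented for illustration).
responses = pd.DataFrame({
    "participant": ["A", "B", "C", "D"],
    "minutes_pre": [30, 45, 20, 60],    # weekly activity at enrolment
    "minutes_post": [90, 80, 50, 75],   # weekly activity at program exit
})

# "What happened?" -- summarise the change across participants.
change = responses["minutes_post"] - responses["minutes_pre"]
print(f"Average change in weekly activity: {change.mean():+.0f} minutes")
print(f"Participants who improved: {(change > 0).sum()} of {len(responses)}")
```

A qualitative counterpart, such as participant quotes or case studies, would then help explain why those numbers moved.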

Being able to effectively present your data visually can help people to quickly understand your findings. Use a mix of text, tables, graphs, imagery and quotes to help your audience easily grasp the key points.

Effectively reporting your findings is the last but most critical part of the process—otherwise, all your hard work will be for nothing. Think back to the ‘why’ for your evaluation, and who your audience is likely to be. Use a mix of reporting styles and formats for different stakeholder groups.

Our tip: Remember that it doesn’t always need to be a long, written report—there are many different, engaging methods of reporting, including infographics, data dashboards, slide decks, video stories and presentations.

Are you evaluation ready?

With some tools and templates to support your process, combined with the collective knowledge of your colleagues and stakeholders, you’ll be ready to start evaluating your work.

There are plenty of great resources out there to help you strengthen your evaluation approach, many of them free. Perhaps the best place to start is the Better Evaluation portal, or head along to one of the events, workshops or conferences held by the Australian Evaluation Society to hear and talk about what evaluation looks like in practice.

If you’re looking for some helpful templates to help formalise your evaluation approach, drop your contact details into the form to the right and we’ll send you ours!

Our team specialises in supporting both funders and service providers to evaluate their programs, whether that be planning and designing an evaluation approach, setting up data collection systems and protocols, or undertaking evaluations and program reviews that lead to meaningful and impactful reporting.

Where to next?

Evaluation Ready is a series of blog posts that captures some of the knowledge, processes and lessons we’ve assembled while supporting organisations to better evaluate their work, in the hope of prompting more people to see themselves as evaluators.

Our first post introduced some fundamental concepts around evaluation when done well (you can go back and read it here if you missed it). This second post provides an overview of the five-step process we use when undertaking any program evaluation. The third and final post will wrap things up by looking at how to ‘make it stick’ within organisations.

We also run Evaluation Ready seminars and workshops around South-East Queensland and are currently exploring delivering these in other locations and online.

Read the final part of this series here.

