Evaluation: What is it and why do it?

Evaluation. What associations does this word bring to mind? Do you see evaluation as an invaluable tool to improve your program? Or do you find it intimidating because you don't know much about it? Regardless of your perspective on evaluation, MEERA is here to help! The purpose of this introductory section is to provide you with some useful background information on evaluation.

What is evaluation?

Evaluation is a process that critically examines a program. It involves collecting and analyzing information about a program’s activities, characteristics, and outcomes. Its purpose is to make judgments about a program, to improve its effectiveness, and/or to inform programming decisions (Patton, 1987).

Should I evaluate my program?

Experts stress that evaluation can:


Improve program design and implementation.

It is important to periodically assess and adapt your activities to ensure they are as effective as they can be. Evaluation can help you identify areas for improvement and ultimately help you realize your goals more efficiently. Additionally, when you share your results about what was more and less effective, you help advance environmental education.

Demonstrate program impact.

Evaluation enables you to demonstrate your program’s success or progress. The information you collect allows you to better communicate your program's impact to others, which is critical for public relations, staff morale, and attracting and retaining support from current and potential funders.

Video: Why conduct evaluations? (approx. 2 minutes)

Gus Medina, Project Manager, Environmental Education and Training Partnership


There are some situations where evaluation may not be a good idea.


What type of evaluation should I conduct and when?

Evaluations fall into one of two broad categories: formative and summative. Formative evaluations are conducted during program development and implementation and are useful if you want direction on how to best achieve your goals or improve your program. Summative evaluations are conducted once your program is well established and tell you to what extent it is achieving its goals.

Within the categories of formative and summative, there are different types of evaluation.

Which of these evaluations is most appropriate depends on the stage of your program:

Formative:

1. Needs Assessment. Determines who needs the program, how great the need is, and what can be done to best meet the need. An EE needs assessment can help determine which audiences are not currently served by programs and provide insight into what characteristics new programs should have to meet these audiences’ needs.

For more information, see Needs Assessment Training, a practical training module that leads you through a series of interactive pages about needs assessment.

2. Process or Implementation Evaluation. Examines the process of implementing the program and determines whether the program is operating as planned. Can be done continuously or as a one-time assessment. Results are used to improve the program. A process evaluation of an EE program may focus on the number and type of participants reached and/or how satisfied these individuals are with the program.

Summative:

1. Outcome Evaluation. Investigates to what extent the program is achieving its outcomes. These outcomes are the short-term and medium-term changes in program participants that result directly from the program. For example, EE outcome evaluations may examine improvements in participants’ knowledge, skills, attitudes, intentions, or behaviors.

2. Impact Evaluation. Determines any broader, longer-term changes that have occurred as a result of the program. These impacts are the net effects, typically on the entire school, community, organization, society, or environment. EE impact evaluations may focus on the educational, environmental quality, or human health impacts of EE programs.

Make evaluation part of your program; don’t tack it on at the end!


Adapted from:

Norland, E. (2004, September). From education theory ... to conservation practice. Presented at the Annual Meeting of the International Association of Fish & Wildlife Agencies, Atlantic City, New Jersey.

Pancer, S. M., & Westhues, A. (1989). A developmental stage approach to program planning and evaluation. Evaluation Review, 13(1), 56-77.

Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach. Thousand Oaks, Calif.: Sage Publications.

For additional information on the differences between outcomes and impacts, including lists of potential EE outcomes and impacts, see MEERA's Outcomes and Impacts page.

What makes a good evaluation?

A well-planned and carefully executed evaluation will reap more benefits for all stakeholders than an evaluation that is thrown together hastily and retrospectively. Though you may feel that you lack the time, resources, and expertise to carry out an evaluation, learning about evaluation early on and planning carefully will help you navigate the process.

MEERA provides suggestions for all phases of an evaluation. But before you start, it will help to review the following characteristics of a good evaluation (list adapted from resource formerly available through the University of Sussex, Teaching and Learning Development Unit Evaluation Guidelines and John W. Evans' Short Course on Evaluation Basics):

Good evaluation is tailored to your program and builds on existing evaluation knowledge and resources.

Your evaluation should be crafted to address the specific goals and objectives of your EE program. However, it is likely that other environmental educators have created and field-tested similar evaluation designs and instruments. Rather than starting from scratch, looking at what others have done can help you conduct a better evaluation. See MEERA’s searchable database of EE evaluations to get started.

Good evaluation is inclusive.

It ensures that diverse viewpoints are taken into account and that results are as complete and unbiased as possible. Input should be sought from all of those involved in and affected by the evaluation, such as students, parents, teachers, program staff, or community members. One way to ensure your evaluation is inclusive is by following the practice of participatory evaluation.

Good evaluation is honest.

Evaluation results are likely to suggest that your program has strengths as well as limitations. Your evaluation should not be a simple declaration of program success or failure. Evidence that your EE program is not achieving all of its ambitious objectives can be hard to swallow, but it can also help you learn where to best put your limited resources.

Good evaluation is replicable and its methods are as rigorous as circumstances allow.

A good evaluation is one that is likely to be replicable, meaning that someone else should be able to conduct the same evaluation and get the same results. The higher the quality of your evaluation design, its data collection methods and its data analysis, the more accurate its conclusions and the more confident others will be in its findings.


Consider doing a “best practices” review of your program before proceeding with your evaluation.


How do I make evaluation an integral part of my program?

Making evaluation an integral part of your program means evaluation is a part of everything you do. You design your program with evaluation in mind, collect data on an on-going basis, and use these data to continuously improve your program.

Developing and implementing such an evaluation system has many benefits including helping you to:

  • better understand your target audiences' needs and how to meet these needs
  • design objectives that are more achievable and measurable
  • monitor progress toward objectives more effectively and efficiently
  • learn more from evaluation
  • increase your program's productivity and effectiveness

To build and support an evaluation system:

Couple evaluation with strategic planning.

As you set goals, objectives, and a desired vision of the future for your program, identify ways to measure these goals and objectives and how you might collect, analyze, and use this information. This process will help ensure that your objectives are measurable and that you are collecting information that you will use. Strategic planning is also a good time to create a list of questions you would like your evaluation to answer.

Revisit and update your evaluation plan and logic model

(see Step 2) to make sure you are on track. Update these documents on a regular basis: add new strategies, change unsuccessful ones, revise relationships in the model, and record unforeseen impacts of an activity (EMI, 2004).

Build an evaluation culture

by rewarding participation in evaluation, offering evaluation capacity building opportunities, providing funding for evaluation, communicating a convincing and unified purpose for evaluation, and celebrating evaluation successes.


The following resource provides more depth on integrating evaluation into program planning:

Best Practices Guide to Program Evaluation for Aquatic Educators (.pdf)
Beginner Intermediate
Recreational Boating and Fishing Foundation. (2006).

Chapter 2 of this guide, “Create a climate for evaluation,” gives advice on how to fully institutionalize evaluation into your organization. It describes features of an organizational culture, and explains how to build teamwork, administrative support and leadership for evaluation. It discusses the importance of developing organizational capacity for evaluation, linking evaluation to organizational planning and performance reviews, and unexpected benefits of evaluation to organizational culture.


If you want to learn more about how to institutionalize evaluation, check out the following resources on adaptive management. Adaptive management is an approach to conservation management based on systematic, ongoing monitoring and evaluation, in which programs are adapted and improved according to the findings.


How can I learn more?

  • Does your project make a difference? A guide to evaluating environmental education projects and programs.
    Sydney: Department of Environment and Conservation, Australia. (2004)
    Section 1 provides a useful introduction to evaluation in EE. It defines evaluation, and explains why it is important and challenging, with quotes about the evaluation experiences of several environmental educators.
  • Designing Evaluation for Education Projects (.pdf)
    NOAA Office of Education and Sustainable Development. (2004)
    In Section 3, “Why is evaluation important to project design and implementation?” nine benefits of evaluation are listed, including, for example, the value of using evaluation results for public relations and outreach.
  • Evaluating EE in Schools: A Practical Guide for Teachers (.pdf)
    Bennett, D.B. (1984). UNESCO-UNEP
    Beginner Intermediate
    The introduction of this guide explains four main benefits of evaluation in EE, including:
    1) building greater support for your program,
    2) improving your program,
    3) advancing student learning,
    4) promoting better environmental outcomes.
  • Guidelines for Evaluating Non-Profit Communications Efforts (.pdf)
    Communications Consortium Media Center. (2004)
    Beginner Intermediate
    A section titled “Overarching Evaluation Principles” describes twelve principles of evaluation, such as the importance of being realistic about the potential impact of a project, and being aware of how values shape evaluation. Another noteworthy section, “Acknowledging the Challenges of Evaluation,” outlines nine substantial challenges, including the difficulty in assessing complicated changes in multiple levels of society (school, community, state, etc.). This resource focuses on evaluating public communications efforts, though most of the content is relevant to EE.


EMI (Ecosystem Management Initiative). (2004). Measuring Progress: An Evaluation Guide for Ecosystem and Community-Based Projects. School of Natural Resources and Environment, University of Michigan. Retrieved September 20, 2006, from www.snre.umich.edu/ecomgt/evaluation/templates.htm

Patton, M. Q. (1987). Qualitative Research Evaluation Methods. Thousand Oaks, CA: Sage Publications.

Thomson, G., & Hoffman, J. (2003). Measuring the Success of EE Programs. Canadian Parks and Wilderness Society.