Your goal is to commission an evaluation that yields a body of evidence demonstrating to others that your program is effective.
To accomplish this goal, you'll want to develop a working relationship with the evaluator so that the final report accurately represents your program and, equally important, so that you understand the findings.
When getting started, there are certain elements you’ll want to discuss with your evaluator to ensure you both are starting with the same set of expectations and assumptions.
Fully explain the intervention or treatment you want to test. You’ll need to be able to describe how long the intervention will last, who will deliver the program to participants, and which materials or resources will be used. Discuss with your evaluator how you plan to monitor the program and which tools you can use to guarantee “implementation fidelity”—that is, making sure the program is delivered according to plan.
Explain how you plan to recruit individuals, treatment centers, schools, and so forth for the study. Ask whether the evaluator has experience recruiting these types of participants for studies.
For evidence-based registries, such as NREPP, a comparison group is needed, so discuss how you can recruit study participants who will not participate in the program. Also discuss how to assign study participants to the comparison group or the intervention group. You'll want to select a method, such as random assignment, that makes the groups as similar as possible before the program starts; note that random assignment cannot guarantee similarity, so emphasize that the evaluator needs to document exactly how similar the groups actually are at baseline.
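As a rough illustration of simple random assignment, the sketch below splits a participant list into two groups. This is a minimal example; the function name is hypothetical, and in practice your evaluator may use stratified or blocked randomization handled by dedicated software.

```python
import random

def randomly_assign(participant_ids, seed=42):
    """Randomly split participants into intervention and comparison groups.

    A minimal sketch of simple random assignment; the fixed seed makes
    the assignment reproducible for auditing purposes.
    """
    ids = list(participant_ids)
    rng = random.Random(seed)
    rng.shuffle(ids)  # random order removes systematic selection
    midpoint = len(ids) // 2
    return {"intervention": ids[:midpoint], "comparison": ids[midpoint:]}

groups = randomly_assign(range(1, 101))
print(len(groups["intervention"]), len(groups["comparison"]))  # prints: 50 50
```

Because every participant has the same chance of landing in either group, any remaining baseline differences are due to chance, which is what the evaluator then documents.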
Decide which measures, instruments, or scales you plan to use, and when and how the evaluator will collect the data. If you do not know which measures to use, ask the evaluator for recommendations of existing measures. You may also want to create your own measure, but this will require additional planning and expertise. It may be quicker and easier to use an existing, reliable measure.
After the treatment has ended and all information has been collected, your evaluator’s job is to conduct statistical analyses. You’ll want to ensure that the report shows a test for differences between the two groups, outcome by outcome. If the groups differ at all before the program starts, make sure your evaluator takes these differences into account by adjusting for them. If participants are “nested” within larger organizations (such as schools or treatment centers), the evaluator should take this into account by making cluster adjustments.
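To make the "test for differences between the two groups" concrete, here is a sketch of Welch's t statistic for comparing group means on one outcome. This is illustrative only: your evaluator would run such tests outcome by outcome in proper statistical software, with covariate and cluster adjustments the sketch omits.

```python
import statistics
from math import sqrt

def welch_t(sample_a, sample_b):
    """Welch's t statistic for the difference in means between two groups.

    Uses sample variances (n - 1 denominator); a larger |t| indicates a
    bigger difference relative to the variability in the data.
    """
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    standard_error = sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / standard_error

# Hypothetical outcome scores for two small groups
t = welch_t([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
print(round(t, 2))  # prints: -1.0
```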
Discuss what kind of report your evaluator will provide. It should deliver a level of detail that will enable registries to assess the quality of the evaluation and results. Tell your evaluator you would like the descriptive summary information included in the report, such as means, standard deviations, and sample sizes by group.
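The descriptive summary a registry expects can be sketched as follows. The function name and the scores are hypothetical; the point is simply that the report should tabulate mean, standard deviation, and sample size separately for each group.

```python
import statistics

def descriptive_summary(scores_by_group):
    """Mean, sample standard deviation, and sample size per group:
    the descriptive statistics registries expect in an evaluation report."""
    return {
        group: {
            "n": len(scores),
            "mean": statistics.mean(scores),
            "sd": statistics.stdev(scores),  # n - 1 denominator
        }
        for group, scores in scores_by_group.items()
    }

# Hypothetical post-program outcome scores
summary = descriptive_summary({
    "intervention": [12, 15, 14, 16, 13],
    "comparison": [10, 11, 12, 9, 13],
})
print(summary["intervention"]["mean"], summary["comparison"]["mean"])  # prints: 14 11
```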