When designing a training program, organizations should plan for evaluation up front. How the evaluation is designed determines how much confidence we can place in the results of the training program. While the ideal would be an experimental design with a control group, this is not always feasible, particularly in small organizations where only a handful of employees receive training at any given time.
Many times, trainers have to settle for assessing outcomes based on the small convenience sample of employees participating in the training program. The one-shot case study is most common in small organizations. In a design of this nature, employees receive a training program and are then tested to see whether the training was effective. This can cloud evaluation, however. For example, if new resources were made available to the group during the training but before the outcomes were measured, it would be unclear whether the training or the new resources produced the observed change.
This is called a history effect. Another concern is a maturation effect: even without the training, the trainees could simply have gotten better at their task through repetition over an extended period of time. An improvement on the case study design is the one-group pretest–posttest design. Again, this design may be the only option for those with limited resources or too few trainees to run a proper experiment. With this design, we take a pre-training measure and a post-training measure; a difference between the two suggests that something occurring between the measurements changed the training outcomes.
Unfortunately, this design does not eliminate the problems associated with maturation or history effects, and it adds a complication of its own: the trainees may be influenced by the pre-training measurement itself. When measured again after training, they may remember the questions and answer correctly the second time around. This is known as a testing effect.
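As a minimal sketch, the pretest–posttest comparison described above amounts to measuring each trainee before and after training and examining the gains. The scores and function name below are hypothetical, for illustration only:

```python
# Hypothetical one-group pretest-posttest evaluation sketch.
# The scores are illustrative, not real training data.
from statistics import mean

def score_gains(pre, post):
    """Per-trainee gains and the average gain across the group."""
    gains = [after - before for before, after in zip(pre, post)]
    return gains, mean(gains)

pre = [62, 70, 58, 75, 66]   # scores before training (hypothetical)
post = [71, 74, 65, 80, 70]  # scores after training (hypothetical)

gains, avg_gain = score_gains(pre, post)
# A positive average gain is consistent with an effective program,
# but this design alone cannot rule out history, maturation, or
# testing effects as alternative explanations for the improvement.
```

Note that the comparison only shows that scores changed between the two measurements; attributing that change to the training itself requires the stronger designs discussed above.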