Assessing whether learning is really being used on the job is challenging for many trainers. End-of-class level 2 evaluation is easy enough to do: the participant demonstrates skill or knowledge acquisition at the end of the training. We might also assess pre-training learning and compare. But determining whether skills and knowledge are actually being used on the job is another matter entirely.
The most straightforward approach is simply to ask participants whether, and to what extent, they are using what they learned. But does this actually give us an accurate measure? In a previous Sticky Note I mentioned neuroscience research showing that people often think they know, have experienced, or are experiencing something when in fact they have not. This illusion calls into question the practice of assessing on-the-job use of training simply by asking participants whether they are applying it. The key issues are:
Are participants actually aware of whether and how they are using what they learned in a training class?
Can participants distinguish what they are applying from a particular class from what they are doing for other reasons, such as a previous training class, intuition, or trial and error?
How much are these self-reports colored by the desire to give the appropriate response and to please the trainer, the boss, or the organization?
Studies of these types of self-reports indicate they are unreliable. One study found that participants’ self-assessments were 35% more positive than the assessments given by their managers. Fortunately, there are several ways to get a more accurate picture:
In the post-training reaction (level 1) evaluation, ask participants about their “intention to transfer”: whether they plan to use what they have learned, and how. Studies show a strong link between intention to transfer and later actual transfer.
After training, at a point when participants should have had an opportunity to use what they learned, ask their managers (via a quick survey or a more detailed focus group) whether their employees are using the skills learned in training and how. Six weeks and three months post-training are popular check-in points.
To reduce the tendency to give the desired positive response, ask managers and participants specific behavior-based questions using a Behavior Observation Scale (BOS). Each item describes a specific behavior linked to a class objective and is rated on a 5-point scale (1 = Almost Never, 5 = Almost Always). For example, one BOS item for a class on coaching is “Provides feedback regularly”; a BOS item for a sales training class might be “Reviews individual productivity results with manager”.
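Tallying BOS results is simple arithmetic: average each behavior's ratings across raters to see which skills are sticking. A minimal sketch in Python, where the rater names and the scores are hypothetical illustration data (the two item wordings come from the examples above):

```python
# Minimal sketch of scoring a Behavior Observation Scale (BOS).
# Rater names and ratings are hypothetical illustration data.
from statistics import mean

# BOS items: specific behaviors linked to class objectives.
bos_items = [
    "Provides feedback regularly",
    "Reviews individual productivity results with manager",
]

# Each rater scores each item from 1 (Almost Never) to 5 (Almost Always).
# ratings[rater] is a list of scores, one per BOS item, in order.
ratings = {
    "manager_a": [4, 3],
    "manager_b": [5, 4],
    "manager_c": [3, 3],
}

# Average each behavior's ratings across all raters.
for i, item in enumerate(bos_items):
    avg = mean(scores[i] for scores in ratings.values())
    print(f"{item}: {avg:.1f}")
```

Low-scoring items point to specific class objectives that are not transferring, which is far more actionable than an overall "are you using the training?" answer.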
Instead of, or in addition to, asking participants and their managers, poll a select group of individuals, perhaps one level above participants’ managers, who are in a position to observe many participants’ on-the-job behavior. One study paired an HR rep with each of these individuals; the HR rep’s role was to help complete the Behavior Observation Scales.
Instead of assessing learning application for every participant, assess a sample. In general, a sample of 30% of the total number of participants provides a reasonably accurate representation of all trainees in a particular training program.
Don’t rely on your “gut feelings” about whether trainees are using what they learn in training. Use popular, free survey software or features of your LMS to find out how much of your training is sticking!
Until Next Time…
PS: Join me at ATD (formerly ASTD) International Conference and Expo May 17-20. I will be presenting on Evidence-Based Techniques for Training Transfer.