Employee training programs often follow one of a few patterns. One common pattern is an organizational mandate to focus on a specific ability, because improving that ability should increase employees' competency in a relevant area or align employees with organizational values or goals. Based on this mandate, the learning and development team builds a course around the specified ability (e.g., having difficult conversations, leading through change, giving appropriate feedback, managing time, or developing interpersonal trust), and employees then attend a single-session training course that takes up a portion of their day. Unfortunately, this training could be a complete waste of time. Why? In this scenario, the organization did not attempt to measure anything about the training, including whether participants actually improved in the skills being trained.
Failing to use metrics, especially well-designed ones, to guide and assess employee training is a common issue in the healthcare space. Many organizations that do measure outcomes do so improperly, or in a way that provides no sound evidence about whether the training is working. Below are several ways to assess a training program you create. These can guide you in building your own assessments or equip you to ask consultants like Exeter informed questions.
First, it is important to measure whether participants enjoyed the training and whether they felt it was necessary. People who enjoy a training and see its necessity are generally more engaged, so it makes sense to assess enjoyment and perceived necessity qualitatively. Without this knowledge, measuring other outcomes such as skill development may be worth little: training content can be good enough to produce skill improvement yet fail to do so because no one made clear to participants why the training is crucial for them.
Second, it is vital to measure whether employee training improved the targeted skill. If the training does not improve the skill in question, why conduct it? Unfortunately, measuring skill improvement, especially with soft skills, can be tricky. One method Exeter finds useful is first measuring participants' baseline knowledge of the targeted subject and comparing it with their post-training knowledge. For example, if the training is on how to develop interpersonal trust, and that skill consists of three key components, it can be helpful to ask participants to identify the three components in assessments before and after training. Furthermore, testing participants on why these components matter for developing interpersonal trust can provide even more detailed information about gains in knowledge of the topic. A multiple-choice format, or something similar, is helpful because it lets you track knowledge changes objectively and consistently across participants.
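For readers who want to automate this, a pre/post knowledge check can be scored with a few lines of code. The sketch below uses Python; the questions, answer key, and responses are invented placeholders, not items from an actual Exeter assessment.

```python
# Hypothetical three-item answer key for a knowledge check on the
# components of interpersonal trust (items are placeholders).
ANSWER_KEY = {"Q1": "B", "Q2": "D", "Q3": "A"}

def score_quiz(responses, key=ANSWER_KEY):
    """Count how many responses match the answer key."""
    return sum(responses.get(q) == correct for q, correct in key.items())

# One participant's pre- and post-training responses (illustrative only)
pre_responses = {"Q1": "B", "Q2": "A", "Q3": "C"}
post_responses = {"Q1": "B", "Q2": "D", "Q3": "A"}

print(score_quiz(pre_responses), score_quiz(post_responses))  # 1 3
```

Scoring every participant the same way, against the same key, is what makes the comparison objective.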
Further, organizations should go beyond simply comparing average pre and post scores and determine whether the knowledge change was statistically significant. This requires some basic statistical skills but answers a more meaningful question: does this training create a real change in skill or knowledge? Once that question is answered, organizations can move on to asking how to improve retention of that skill or knowledge, a topic that will be covered in later blog posts.
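One common way to test pre/post change for significance is a paired t-test, which compares each participant against their own baseline. The sketch below uses only Python's standard library and invented scores; in practice a statistics package (e.g., SciPy) would do this in one call.

```python
import math
import statistics

# Illustrative quiz scores (out of 10) for ten participants
pre_scores = [4, 5, 3, 6, 5, 4, 5, 6, 4, 5]
post_scores = [7, 8, 6, 8, 7, 6, 8, 9, 6, 7]

# Paired t-test: work with each participant's own gain score
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
n = len(gains)
mean_gain = statistics.mean(gains)
sd_gain = statistics.stdev(gains)              # sample standard deviation
t_stat = mean_gain / (sd_gain / math.sqrt(n))  # t with n - 1 degrees of freedom

print(f"mean gain = {mean_gain:.2f} points, t({n - 1}) = {t_stat:.2f}")

# Compare t_stat against a critical value from a t-table; with 9 degrees
# of freedom, the two-tailed critical value at alpha = .05 is about 2.262.
```

If the computed t exceeds the critical value, the average gain is unlikely to be chance alone; reporting the effect size alongside it tells you whether the gain is also practically meaningful.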
Third, organizations rightly intend for the skills gained during employee training to transfer to the job. While outcomes such as overall job performance are poor measures of transfer, because they are affected by factors outside employees' control (e.g., resources), skill transfer itself can be measured in several ways. Take soft-skill transfer as the example. It can be assessed through behavioral observation, self-report, peer report, supervisor report, or a reporting system that tracks behavioral change. Ideally, this is done numerous times to get the most accurate measurement. Yet organizations rarely attempt to measure the application of training this way, often because of organizational constraints (e.g., lack of time) or poor employee participation at measurement points.
However, other methods exist. For example, organizations can develop test items designed specifically to assess transfer by creating behaviorally anchored skill scenarios and distributing them in the same pre-post manner as a knowledge assessment. These items ask individuals to imagine themselves in a specific scenario and then choose among behaviors someone might perform in it. Their choices offer a glimpse into whether knowledge from the training is likely to transfer to behavior.
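To make the idea concrete, here is one possible shape for a behaviorally anchored scenario item, sketched in Python. The scenario text, options, and anchor values are hypothetical examples, not items from an actual assessment.

```python
# One hypothetical scenario item: each option pairs a behavior with an
# anchor value (higher = more consistent with the trained skill).
scenario = {
    "prompt": "A colleague misses a deadline that affects your work. What do you do?",
    "options": {
        "A": ("Escalate immediately to their manager", 1),
        "B": ("Privately ask what happened and offer to help", 3),
        "C": ("Say nothing and quietly redo the work yourself", 2),
    },
}

def score_choice(item, choice):
    """Return the anchor value attached to the chosen behavior."""
    return item["options"][choice][1]

# Pre-post comparison for one participant (choices are illustrative)
pre_choice, post_choice = "A", "B"
transfer_gain = score_choice(scenario, post_choice) - score_choice(scenario, pre_choice)
print(transfer_gain)  # 2
```

A positive gain across several such items suggests the participant now recognizes the trained behavior as the better choice, which is a leading indicator of transfer rather than direct proof of it.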
Lastly, most organizations do not measure the return on investment (ROI) of employee training. This means most organizations are unsure whether training is worth it from a cost perspective. Keep an eye on our LinkedIn for a follow-up blog on ideas for calculating training ROI.
Kyle Bayes, MS, is an industrial-organizational psychologist who specializes in providing empirically backed HR solutions to healthcare companies. Passionate about using research to better organizations and the lives of the employees who work within them, his areas of specialty include providing a theoretical basis for training and coaching, creating measurement instruments, developing engagement strategies, building training strategies, and connecting research and practice to drive results across HR areas.