Did you know that 60% of companies indicate that they do not set any milestones or concrete goals for new hires to attain? (Harvard Business Review, 2018)

Read that again.

That is a big number: 60%. That is a lot of programs out there that cannot tell stakeholders, learners, or L&D whether the learners are succeeding or whether the initiative as a whole is.

Wait, let me get my other soapbox. I have one for analysis, as you well know. But I also have one for metrics and evaluation.

<SOAPBOX #2 RETRIEVED; STEPS UP CAREFULLY>

It is basically this: Why go to the expense of designing, developing, and implementing a training initiative and not collect data that demonstrates its success (or failure)? Or data that shows the people using the program are succeeding? Especially for a program-level initiative like onboarding?

This is a program gap that I see often, and I get it! Evaluation can be a pain. Most people have so much to do and are moving on to the next thing so fast that follow-up just doesn't happen.

I have some thoughts on how to make this less of a chore and I will share them here. And for the 60% out there NOT collecting metrics, it’s not too late. Keep reading. I have some ideas for you, too.

But first, let’s talk about why metrics are so important.

Everyone likes a win

Albert Bandura, a psychologist, learning theorist, and philosopher, coined the term “self-efficacy” in 1977. It is basically a person’s belief in their ability to be successful in certain situations. The power of positive thinking.

People like to feel good, and wins feel good. Wins motivate people to keep going. It makes me think of video games and gamification in general: complete a quest successfully and you get to level up and get your character new goodies.

It is the same for people learning new work tasks. According to the HBR article referenced above, setting achievable goals can provide new hires with a sense of satisfaction and the added benefit of a clearer picture of the task they completed.

So, how can you give new hires the wins they crave?

Start at the end

When conducting the analysis for your onboarding program (a-hem), start with the outcomes in mind. What are the business goals? What do new hires need to know and do? When do they need to know it and do it? What is expected after the first week? After two weeks? A month? Six weeks? You get the picture. When is a person considered “ramped-up” to a position and performing tasks on their own?

These are your metrics.

You are looking for the point where expectations for a new hire meet the performance metrics of an established employee. Where is the line?
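
If it helps to see this on paper, here is a minimal sketch of a milestone schedule in Python. Every task, target, and timeframe below is invented for illustration; yours should come out of your analysis.

```python
# A minimal sketch of a new-hire milestone schedule (hypothetical role).
# "target" is the fraction of an established employee's metric expected by that day.
NEW_HIRE_MILESTONES = [
    {"due_day": 7,  "task": "Complete system access; shadow a peer-buddy", "target": 0.00},
    {"due_day": 14, "task": "Handle tasks with a peer-buddy observing",    "target": 0.25},
    {"due_day": 30, "task": "Handle routine tasks solo",                   "target": 0.50},
    {"due_day": 42, "task": "Meet the standard quality score",             "target": 0.75},
    {"due_day": 90, "task": "Fully ramped: same metrics as the team",      "target": 1.00},
]
```

That "target" column is the line from the paragraph above: the point where new-hire expectations meet established-employee metrics.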

Onboarding participants need their own metrics

Asking a new hire to meet the same metrics as an established employee is not realistic. New hires need their own set of metrics, a clear picture of what those metrics are, and how to achieve them.

There is another audience to think about here as well. Mentors or peer-buddies—those people assisting in onboarding new people—also need their own metrics.

These people are being pulled from their day jobs to assist a new person. If they are held to performance metrics, especially ones that affect their paychecks, that assume 100% of their time goes to their own work, they are not going to provide a good onboarding experience.

So, plan for and write realistic metrics for new hires, and plan for and write adjusted metrics for those who are assisting newbies.
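
Even a back-of-the-napkin adjustment beats pretending mentors can do two jobs at once. A tiny sketch, with hypothetical numbers:

```python
# Sketch: adjusting a mentor's target while they onboard someone.
# Both numbers are made up; pick yours with the mentor's supervisor.
BASELINE_TARGET = 100      # e.g., tickets closed per week by an established employee
MENTOR_LOAD_FACTOR = 0.60  # fraction of normal output expected while mentoring

mentor_target = BASELINE_TARGET * MENTOR_LOAD_FACTOR
print(mentor_target)  # -> 60.0
```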

Did you know that employee onboarding statistics reveal that 77% of new hires who hit their first performance milestone went through formal onboarding training? (LinkedIn, 2017)

Additionally, 49% of those who failed to meet their first milestone had no formal onboarding training at all. Again, some big numbers.

No metrics? No worries

If you have a program in place and didn’t think about this, change it.

But before you do, talk to the people who have been onboarded and their supervisors. Just as you would have in an analysis (a-hem), you need to find out what is realistic and what isn’t.

Think about it: If you have a program that has been running for a little bit, you can talk to the people who have participated and ask them about the experience and what metrics would have motivated them. Ask the supervisors what metrics are important and when. Look at real-life metrics of these people to see what’s attainable.

One and done? Nope

To go along with the tip above, for those just starting out, it is important to note that the metrics you set may need to be adjusted. Programs should evolve as needs and goals change.

All sorts of things look great on paper—they are absolutely fabulous when running in an LMS! Alas, even pretty things can have stinky stats. I recommend you run your program for a few months, check the stats, and see what you are getting. You may find that the metrics are set too high or even too low. They may be timed incorrectly.

Collect the data. Analyze it. Develop an action plan, but don’t rush into making changes. If you have only had 10 people run through a program or even an individual course, that is probably not enough data to drive a change unless it is something really major, so weigh the cost of making the change now versus later. If it is not time, take note, run it for another few months, and check again to see if the new data validates what was observed before. THEN change it.
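
If you want a concrete picture of that check-the-stats-first step, here is a rough Python sketch. The field names and the minimum-sample threshold are assumptions on my part:

```python
# Rough sketch: summarize milestone attainment and flag thin samples
# before anyone rushes into changing the program.
from statistics import mean

MIN_SAMPLE = 30  # below this, take note and keep collecting instead of changing

def milestone_report(records):
    """records: dicts like {"milestone": "Handle routine tasks solo", "hit": True}"""
    by_milestone = {}
    for r in records:
        by_milestone.setdefault(r["milestone"], []).append(r["hit"])
    for name, hits in by_milestone.items():
        note = "  <- not enough data to act on yet" if len(hits) < MIN_SAMPLE else ""
        print(f"{name}: n={len(hits)}, hit rate={mean(hits):.0%}{note}")

milestone_report([
    {"milestone": "Handle routine tasks solo", "hit": True},
    {"milestone": "Handle routine tasks solo", "hit": False},
])
# -> Handle routine tasks solo: n=2, hit rate=50%  <- not enough data to act on yet
```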

So many stats, so little time

What data should you gather? That depends. Every instructional designer's favorite answer. I am going to conjure Kirkpatrick's model for this one. Dr. Donald Kirkpatrick is credited with creating his four-level evaluation model in the 1950s:

  • Level 1 (Reaction): Did the learner enjoy the training? ("Smile sheets")
  • Level 2 (Learning): Did learning happen?
  • Level 3 (Behavior): Are learners successfully applying what they learned?
  • Level 4 (Results): Did the training help the business meet its overall goals?

These statistics are different: they are gathered at mostly different times, and they measure very different things.

The one we see most often, as learners and developers, is Level 1. It usually takes the form of a survey link or a paper-based survey given at the end of a training course, and it is usually required in order to receive a completion certificate. These surveys generally ask learners whether they liked the training and whether they think it was relevant.

I like to mix some Level 2 in with my Level 1 questions. These usually invoke self-efficacy and are interwoven with the learning objectives. For example:

Learning objective: Demonstrate the steps for Process X.

Survey question: I feel confident performing Process X (usually answerable using a Likert scale).
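
If you tally those responses programmatically, it can be this simple. The objective-to-item pairing structure and the 5-point scale are my assumptions, not a standard:

```python
# Sketch: pair each learning objective with a self-efficacy survey item, then
# average the 5-point Likert responses (1 = strongly disagree, 5 = strongly agree).
from statistics import mean

SURVEY_ITEMS = {
    "Demonstrate the steps for Process X": "I feel confident performing Process X.",
}

def confidence_score(responses):
    """Average Likert rating (1-5) for one survey item."""
    return mean(responses)

print(confidence_score([4, 5, 3, 4]))  # -> 4
```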

Level 3 is about behavior change, and it is often skipped. Is the learner using what they learned to perform their job successfully? If you ask the learner, they are probably going to tell you yes. That is why I recommend taking this level to the learner's immediate supervisor. If the expectation is that the learner is going to leave the learning event and start performing this task (and it should be), then develop a survey for their supervisor to complete 30 days after the training takes place.
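
The mechanics of that 30-day follow-up are easy to pin down ahead of time; it is the planning that people skip. A minimal sketch, assuming you know the training date:

```python
# Sketch: compute when the supervisor survey should go out.
# The 30-day window comes from the recommendation above; the rest is illustrative.
from datetime import date, timedelta

FOLLOW_UP_DAYS = 30

def supervisor_survey_date(training_date: date) -> date:
    return training_date + timedelta(days=FOLLOW_UP_DAYS)

print(supervisor_survey_date(date(2024, 3, 1)))  # -> 2024-03-31
```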

See where it starts getting tricky? Supervisors are busy; L&D is busy. Who has time to follow up? That is why planning is key.

Level 4. What can you say about Level 4 other than that there is not much of it happening out in the field? This is where you decide if the program is meeting business goals. The high-level, lofty stuff. If the goal of an onboarding program is to retain 80% of new hires after one year, you can count heads to see who is still around. But you have to actually count the heads.
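
Counting the heads really is that simple; the cohort numbers below are invented for illustration:

```python
# Sketch: "counting heads" against a retention goal (hypothetical cohort).
GOAL = 0.80        # retain 80% of new hires after one year

hired = 50         # new hires in the cohort
still_here = 42    # headcount one year later

retention = still_here / hired
print(f"Retention: {retention:.0%} vs. goal {GOAL:.0%}")  # -> Retention: 84% vs. goal 80%
```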

Level 5. What? Oh, I didn't mention that one up top? Well, this one is relatively new, and it ties directly to return on investment (ROI) and return on expectations (ROE). If your onboarding goals are not directly tied to overarching business goals, you are pretty much dead in the water on this one. You can probably reverse engineer some stats to support some initiatives, but you are going to have a lot of trouble finding hard data. That is why you have to plan for it from the start.
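
For what it is worth, the ROI arithmetic itself is trivial; the hard part is finding credible numbers to feed it, which is exactly why you plan from the start. A sketch with made-up figures:

```python
# Sketch: the standard ROI calculation, with invented numbers.
# ROI (%) = (program benefits - program costs) / program costs * 100
program_costs = 120_000     # design, delivery, mentor time, ...
program_benefits = 180_000  # e.g., savings from reduced turnover and faster ramp-up

roi = (program_benefits - program_costs) / program_costs * 100
print(f"ROI: {roi:.0f}%")   # -> ROI: 50%
```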

As we have talked about before, onboarding initiatives can be expensive endeavors. If you want to develop a really successful program, you are going to need metrics. At the beginning, in the middle, and at the end.