If there is one topic that is pretty much guaranteed to spark interest, engagement and conversation among Learning and Development, Organizational Development and HR type folks, it is evaluation.
On Twitter recently, @sukhpabial and @robjonestring shared that they were debating the value of scoring-based evaluation forms and asked for other views. Then @changecontinuum asked if they could blog about it to get the debate going in more than 140 characters.
This is a topic I have already been writing a post about, so I figured I should publish early with my views and see what happens next. So, here goes:
Evaluation should be no more than a process
That is what I told the teams I led and the CIPD groups I facilitate. Why? Because if we have done the first bit (the Learning Needs Analysis) in enough detail, then we simply need to finalise the process that will allow us to see whether we have found what we were looking for. Then we use that process to see if it has happened or not.
Your evaluation strategy, techniques, tools and approaches must fit with what you are evaluating
I cannot abide it when people say ‘we have an evaluation strategy/system/approach that we use on all our programmes.’ Well done: you have successfully restricted your ability to know what value you may or may not have added by taking an approach that is fundamentally flawed. You cannot evaluate all your learning solutions in the same way.
By all means have at your disposal a selection of tools and techniques:
‘Focus groups, video analysis, interviews, questionnaires, observations, blog posts, reflective statements, work-based projects, line manager feedback, 360 reviews, performance reviews, customer/client comments and feedback, KPIs, performance objectives, competency frameworks, benchmark data’, to name just a few.
Then pick the appropriate tools, techniques and approaches that will allow you to collect the data you need.
Please, please, please get out of the habitual thinking that ‘levels’ are linked to ‘time’!
I cannot count the number of occasions I talk to people who say they will do:
Level 1 at the session
Level 2 after a week
Level 3 after 3 months
Level 4 after 6-12 months
I know that this example is very ‘Kirkpatrick’ based, and yet we seem to be stuck in a pattern where this is what we do because we always have (or someone else said so).
There is (in my humble opinion) no need to link these things together, and doing so actually hampers you.
Also, have you ever met a learner, manager, business owner (or L&D professional, for that matter) who wants to wait somewhere between 6 and 12 months to find out what impact the learning has had on performance? Exactly. You haven’t.
Stop trying to isolate stuff
Unless you want to spend your evaluation life trying to replicate clinical trials, stop isolating stuff. We live in a world where more than one thing happens at a time. Just because we may want to empirically demonstrate the value we have added doesn’t mean we can stop all other factors from being in play.
How about harnessing those factors instead? Rather than isolate stuff, find out what else is happening at the same time as your learning intervention and collaborate, share, work together!
There are times when approaches are appropriate and others when they are not!
For me this applies to:
Quantitative approaches
Qualitative approaches
A combination of both
There are many others too, these are just examples.
Similar to my comment above, use what is appropriate for that project and that context.
Telling a story can be more powerful than showing the numbers
It was a conversation with David Goddin that reminded me of this point, so thank you, sir! Stories are powerful, inspirational learning tools. That is why we read them and watch them on TV or at the cinema, and yet we often fail to harness their power in evaluation.
Getting somebody to talk about their experience, how it made them feel at the time, during the learning process and now looking back, can send an incredibly powerful message. Imagine the power if you then combined the reflections of a number of participants.
One of my favourite examples is when the senior management of an organisation asked the L&D team to present the outcomes of a particular programme. Instead of showing the classic data or slides in the meeting, the team simply invited three of the participants in and asked them to share their experiences. This was then followed up with a written report of the other measures. The senior managers were sold: the programme was indeed adding value.
So, what do you think?
Where do you agree? Where do you disagree? What suggestions do you have? What works for you?
I look forward to your comments!