So it is a packed session for ‘Making Learning Analytics Work’. I think this is a perennial issue for L&D folk. It is being framed as thought-provoking and practical… let’s see.
Our speakers are Jack Phillips and Darren Gleave from EDF Energy.
Jack kicks us off….
Introduce the concepts of ROI, impacts and analytics
Forecast and measure impact and ROI
Secure executive support and future investment in L&D activities
He begins with an anecdote about how they transferred the L&D budget to the Chief Engineer. This brought some interest and accountability into the programme/initiative at play.
Jack moves us on to think more about Human Capital Analytics (HCA). Why do people do HCA? Primarily it is about driving or improving business performance, secondly about improving programmes, and thirdly about solving business problems. Their survey also asked what types of projects people are pursuing. The responses were: measuring the impact and ROI of a programme, establishing relationships between variables, and developing predictive models.
Having explored why people do HCA, Jack goes into how we (or the ROI Institute) define value. He shares an image called ‘the value chain’, which gives a framework for depicting value. An important step sits between levels 4 (impact) and 5 (ROI), where the ‘isolation of the effects of the programme’ is done.
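For readers unfamiliar with level 5, the ROI Institute’s published approach expresses monetary value both as a benefit-cost ratio and as a net-benefit percentage. A minimal sketch of the two formulas; the figures are invented for illustration and are not from the session:

```python
def benefit_cost_ratio(benefits, costs):
    # BCR = programme benefits / programme costs
    return benefits / costs

def roi_percent(benefits, costs):
    # ROI (%) = (programme benefits - programme costs) / programme costs * 100
    return (benefits - costs) / costs * 100

# Illustrative numbers only:
print(benefit_cost_ratio(150_000, 100_000))  # 1.5
print(roi_percent(150_000, 100_000))         # 50.0
```

Note that both formulas only make sense after level 4 impacts have been converted to money and the programme’s share of them isolated, which is why that intermediate step matters.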
Jack takes a step back and looks at the links, or relationships, between the levels. He suggests that there are correlations between certain levels, e.g. that at level 1 (reaction) you can assess things like intent to use, relevance, recommendation to others, new information and importance to success, and that these (apparently) correlate strongly with people then applying their learning. The implication is that you don’t need to gather and analyse data at levels 2 and 3 (or can scale it back) if you have the data from level 1. I find that really risky. Jack does make a small passing reference to establishing the level 1 factors for your own organisation and validating the links. I disagree: if you are going to do this, it needs to be at a smaller scope than the organisation, at a minimum at programme level.
So next we have Darren Gleave talking about how, at EDF, he used the ROI methodology to evaluate a solution to the business problem of ‘we need to improve the way our advisors handle high consumption queries’. Initial analysis suggested that the default response to this sort of customer query was to book a visit for an engineer to check the accuracy of the meter. This is a time-consuming and costly process, especially as only 0.05% of meters were found to be incorrect.
There were two overall goals: 1) reduce meter inspections and 2) improve customer satisfaction with these sorts of queries. EDF/Darren built a V model to scope both the business reasons and the methods of evaluation.
Darren shared that the evaluation found that participants’ reactions to the training weren’t great; they weren’t recommending it. That said, the reason cited was that the method of delivery (e-learning) was new. When additional analysis (an end-of-training quiz) was done, it showed that people had learned what they needed to. Darren then went on to assess calls (they are recorded) to see what percentage of advisors were following the desired process. Looking at the organisational goals, a reduction in appointments and a reduction in complaints, both were achieved, and Darren also did an isolation approach (see pictures in reverse order below):
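For context, the isolation step in this methodology is commonly done by discounting the measured improvement by an estimated programme contribution and then by the estimator’s confidence in that estimate. A minimal sketch of that adjustment; the amounts and percentages are invented for illustration, not EDF’s figures:

```python
def isolated_benefit(total_benefit, contribution_pct, confidence_pct):
    # Discount the measured improvement by the share estimated to be
    # attributable to the programme, then by confidence in that estimate.
    return total_benefit * (contribution_pct / 100) * (confidence_pct / 100)

# Illustrative: £40,000 of avoided inspection cost, estimated 60%
# attributable to the training, with 80% confidence in that estimate.
print(isolated_benefit(40_000, 60, 80))  # 19200.0
```

The discounting is deliberately conservative, but as I note below the contribution and confidence figures are still estimates supplied by people.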
My thoughts… even though the assumptions being made were ‘conservative’, they are still assumptions.
I live-blogged this post, so please excuse any typos; I did my best to represent the session well and to add my commentary too.