
Rethinking elearning evaluation by including business focus


Measuring the success of elearning has always been important, but it is a hard nut to crack. Elearning evaluation is often half-hearted, or even ignored completely, but there are ways to do a better job.

In a recent interview for our blog, the Chair of the eLearning Network, John Curran, was asked what frustrated him most about elearning. Without hesitation, he cited the weakness of evaluation. It has been a recurring theme for some time now. In 2010, research by the eLearning Guild revealed that almost 10% of respondents didn’t use any measurement method, while 87% measured only completion rates. Skip forward and the statistics are equally worrying: data for 2014–2015 suggests only 15% of L&D leaders are measuring success against key performance indicators, and only one in five use learning analytics.

Despite this apparent lack of progress, there is a growing awareness among those involved in elearning that evaluation is an area requiring urgent improvement. How to achieve this is a source of much discussion with no easy answer, but here are some ideas for assessing results, gauging impact and proving success.

Learner Impact

Almost all elearning modules include some form of knowledge check, whether that is a test, a quiz or a game. Most also include some sort of learner feedback form. These may show that employees have completed the course and what they think of the elearning, but evaluation often goes no further, overlooking the impact on behaviour and the overall results for the business.

One way to rethink elearning evaluation is to work backwards and plan the required measurements for every stage of the project from the very beginning. As elearning expert Jane Bozarth points out, evaluation is too often merely an ‘afterthought’, or simply ignored if the information will take some effort to collect. By making it a priority and planning it from the start, it should be easier to gain meaningful data.

Another factor to consider is time. Elearning evaluation sometimes falls short because a module is treated in isolation as a one-off event. Learner knowledge is tested in the moment, but no attempt is made to find out whether that knowledge is retained. At Sponge, we tackle this particular issue by recommending a campaign approach to elearning that includes post-module checks. By testing employees weeks and months after they have completed an elearning module, we get a better understanding of whether they have absorbed the learning over a longer period.

Business-focused KPIs

It is possible to make elearning evaluation more meaningful by setting Key Performance Indicators (KPIs) that are linked to the overall objectives of the business. For example, if an organisation is looking to improve customer service, it would be a good idea to measure the number of customer complaints and compliments before, during and after an elearning programme that deals with customer relations. If getting this type of information is genuinely impractical, then measure what you can; setting some KPIs is better than making no attempt at all. Also, don’t be afraid to seek help in setting achievable and relevant KPIs, as there may be someone in the business who can make the process much easier.


Another way to measure and evaluate is to compare with other organisations. Sponge is an Ambassador for Towards Maturity, a highly respected independent research organisation that aims to move the L&D sector forward by sharing best practice. It collects data annually as part of a Benchmark Study so businesses can compare their approach with others and identify strengths and weaknesses. The study highlights the work of the top-performing learning organisations, allowing others to learn from what they are doing well, including how they measure their success.

By obtaining meaningful metrics wherever possible and moving evaluation up the list of priorities, the elearning industry as a whole should be able to improve how it measures impact and effectiveness. It would be heartening to think that in five years’ time, elearning evaluation will no longer be an area of concern but an embedded practice.