Results Are Missing
Unfortunately, the majority of results presented in case studies of learning through technology lack measurement at the levels executives need. Only occasionally are application data presented, measuring what individuals do with what they learn, and rarely is a credible connection to the business reported. Even rarer is an ROI calculation. In a recent review of award-winning e-learning and mobile learning case studies published by several prestigious organizations, the following observations about results were noted.
• No study was evaluated at the ROI level, where the monetary value of the impact is compared to the program's costs to calculate the ROI. Only two or three were evaluated on the cost savings of technology-based learning compared to facilitator-led learning, which may not be a credible evaluation.
• The benefits and results sections of some studies mentioned ROI but did not present it, using the concept of ROI to mean any value or benefit from the program. Mislabeling or misusing ROI creates concern among executives, who are accustomed to seeing ROI calculated in a very precise way by the finance and accounting team.
• Credible connections to the business were rare. Only one study attempted to show the impact of mobile learning using comparison groups, and even there, the details about how the groups were set up and the actual differences between them were left out. When the data are vague or missing, it raises a red flag.
• Many of the studies made the connection to the business based on anecdotal comments, often taken from very small samples. Sometimes the comments were observations from people far removed from the actual application, such as a corporate manager suggesting that e-learning is “making a difference in store performance.”
• Very few results were provided at the application level. Although application measures can be built in, they were rarely reported, usually appearing as anecdotal comments about the use of the content and the success users are having with it.
• Learning was measured in fewer than half of the studies. Learning is at the heart of the process, yet it was left out of many of these studies.
• Reaction was typically not addressed in these studies. Reaction measures such as “relevant to my work,” “important to my success,” “I would recommend it to others,” and “I intend to use this” are powerful predictors of success, but they were not measured very frequently in these technology solutions.
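The ROI calculation referenced above is the standard one used by finance and accounting teams: net program benefits (monetary benefits minus fully loaded costs) divided by costs, expressed as a percentage. A minimal sketch, using hypothetical figures purely for illustration:

```python
def roi_percent(monetary_benefits: float, program_costs: float) -> float:
    """Standard finance-style ROI: net benefits divided by fully
    loaded program costs, expressed as a percentage."""
    if program_costs <= 0:
        raise ValueError("program costs must be positive")
    return (monetary_benefits - program_costs) / program_costs * 100

# Hypothetical example: a program producing $250,000 in monetary
# benefits against $100,000 in fully loaded costs.
print(roi_percent(250_000, 100_000))  # 150.0, i.e., an ROI of 150%
```

This is why mislabeling matters: an executive reading "ROI" expects this precise ratio, not a general statement that the program delivered some benefit.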
Clearly, as this review of studies has shown, there is more talk than action when it comes to the value of technology-based learning. The pressure is on for proponents of technology-based learning to show value to funders and sponsors, and this accountability should include business impact and perhaps even ROI. Although business impact and ROI are critical to senior executives, proponents rarely conduct this level of analysis. Let's explore why.
Reasons for Lack of Data
In our analysis of technology-based learning programs, several major barriers have emerged. These obstacles keep the proponents from developing metrics to the levels desired by executives. Here are eight of the most significant ones.
1. Fear of results. Although few will admit it, the number one barrier is that the individuals who design, develop, or own a particular program are concerned about evaluation at the business impact and ROI levels. They fear that if the results are not there, the program may be discontinued, damaging their reputation and performance. They prefer not to know rather than take the time to make the connection. This fear can be reduced if process improvement is the goal, not performance evaluation of users, designers, developers, and owners.
2. This is not necessary. Some designers and developers suggest that investments in technology-based learning should be accepted on faith that they will make a difference. Though executives may want results at these levels, some contend that technology should not be subjected to that level of accountability; after all, technology is absolutely necessary in the situations outlined at the beginning of this chapter. Yet although the learning may have to be technology based, this doesn't preclude it from also delivering results.
3. Measuring at this level is not planned. Capturing business impact and developing the ROI starts at the very beginning, at the conception of the project or program. Planning at this point facilitates the process and even drives the needed results. Unfortunately, evaluation is often not given serious consideration until after the project is implemented, when it is too late for an effective evaluation.
4. Measurement is too difficult. Some feel it is too difficult to capture the data, or impossible to secure quality information. Because data collection was not built into the process, it takes extra steps to find the data. Once collected, it is difficult to connect the business data to the program and convert them to monetary value, and the ROI seems too complicated to even consider. Using systematic, easy steps helps with this process. Technology proponents tackle very difficult problems and create marvelous solutions requiring far more knowledge, expertise, and capability than the measurement side of the process demands. In reality, measurement is easy and doesn't require high-level mathematics, knowledge of statistics, or expertise in finance and accounting.
5. Impact and ROI are too expensive. By the time evaluation is considered, the investment in technology and development is high, and designers are unwilling to invest in measurement. The perception is that evaluation would be too expensive, adding cost to an already strained budget. In reality, the cost of evaluation is a very small part of the cost of the project, often less than 1 percent of the total cost for an ROI study.
6. Measurement is not the fun part of the process. Technology-based learning is amazing, awesome, and impressive. What can be accomplished is exciting for those involved and for those who use it. Gamification is taking hold; people love games because they're fun. Measuring application, impact, and ROI is not fun. Still, metrics could be made more interesting, even fun, using built-in tools and technology. Designers, developers, and owners need to step up to the responsibility of showing the value of these processes.
7. Not knowing which programs to evaluate at this level. Some technology proponents think that if they go down the ROI path, executives will want to see the ROI on every project and program. That prospect seems mind-boggling and almost impossible, and we agree it would be. The challenge is to select the particular projects and programs that need to be evaluated at this level.
8. Not prepared for this. The preparation of designers, developers, implementers, owners, and project managers does not usually include courses in metrics, evaluation, and analytics. Fortunately, things are changing. These issues are now addressed in formal education, and even ROI Certification is available for technology-based learning applications.
Because these barriers are perceived to be real, they inhibit evaluation at the levels desired by executives. But they are myths for the most part. Yes, evaluation will take more time and there will be a need for more planning. But the step-by-step process is logical. Technology owners bear the responsibility to show the value of what they own. The appropriate level of evaluation is achievable within the budget and it is feasible to accomplish. This book shows how it's done in simple, easy processes. The challenge is to take the initiative and be proactive—not wait for executives to force the issue. Owners and developers must build in accountability, measure successes, report results to the proper audiences, and make adjustments and improvements. This brings technology-based learning to the same level of accountability that IT faces in the implementation of its major systems and software packages. IT executives have to show the impact, and often the ROI, of those implementations. Technology-based learning should not escape this level of accountability.
This initial chapter covered the landscape of technology-based learning, revealing some of the trends and issues of this explosive phenomenon in the learning and development field. There is no doubt that learning through technology is the wave of the future. It has to be, with large, complex organizations and an ever-increasing need for learning. In a society willing to devote only small amounts of time to anything, learning through technology is just in time, just enough, and just for the user. However, it must be subjected to accountability guidelines. It must deliver value that is important to all groups, including those who fund it. Executives who fund large amounts of technology-based learning want to see the value of their programs and projects, and their definition of value is often application, impact, and ROI. The challenge is to move forward and accomplish this in the face of several barriers that can easily get in the way. The rest of the book will show how this is accomplished.