Because this project was very successful, communicating its results was not difficult. The first step was a briefing with the executive who had requested the ROI study. The 30-minute briefing gave the executive an opportunity to see the power of mobile learning technology and how it could affect business measures.
Data were sent to the 25 participants and their immediate managers within three weeks of data collection. Some minor adjustments were made to the program as a result of the evaluation; these were announced in the same communication.
An executive summary of the evaluation was provided to all sales associates to show them the success of the program and to entice others to get involved in this and future programs.
A brief article (about 1,000 words) was placed in the company newsletter for all employees to read. Results were presented at a technology-based learning conference as a case study. All sales and support managers received an executive summary. The learning and development team received a full copy of the study, along with a two-hour workshop.
The evaluation also underscored some barriers to success. These barriers led to minor adjustments to the program, including a reduction in length from four hours to three hours and 15 minutes. In addition, support for the program was strengthened.
This study yielded several important lessons.
1. Early planning, before any design and development took place, was crucial. Had the team waited until the program was designed, developed, and implemented before planning the evaluation, the evaluation would have been incomplete.
2. The objectives gave the designers, developers, and participants the proper focus. There was no mystery about what was expected of participants.
3. The control group versus experimental group method was the best way to isolate the effects of the program; however, there were some concerns about how well the groups were matched. The difficulty with matching groups was that the evaluation team was at the mercy of when participants signed up for the program. If everyone had been required to participate, the matching technique would not have worked, and other isolation methods would have been needed.
QUESTIONS FOR DISCUSSION
1. Is this study credible? Please explain.
2. What other methods might be used to isolate the effects of the program?
3. What other ways could data collection be accomplished? Please explain.
4. Is the three-month follow-up for impact data appropriate? Please explain.
5. Was a year of impact data appropriate? Please explain.
6. How should these data be presented to management in terms of sequencing, emphasis, and approach?
7. Could this study be replicated? Please explain.
ABOUT THE AUTHORS
Jack J. Phillips, PhD, is a world-renowned expert on accountability, measurement, and evaluation. Phillips provides consulting services for Fortune 500 companies and major global organizations. The author or editor of more than 50 books, he conducts workshops and presents at conferences throughout the world.
His expertise in measurement and evaluation is based on more than 27 years of corporate experience in the aerospace, textile, metals, construction materials, and banking industries. This background led Phillips to develop the ROI Methodology, a revolutionary process that provides bottom-line figures and accountability for all types of learning, performance improvement, human resource, technology, and public policy programs.
Patti Phillips, PhD, is president and CEO of the ROI Institute, Inc., the leading source of ROI competency building, implementation support, networking, and research. A renowned expert in measurement and evaluation, she helps organizations implement the ROI Methodology in 50 countries around the world.