Establish Metrics for Documenting RIE Improvements

Armed with information about what beneficiaries value and expect from the university process, the RIE team should establish metrics that document both how the process is performing currently and how it has improved following the RIE. These metrics supplement those identified during development of the team charter; together, they should measure expected improvements in the process accurately and reliably. Simply stated, if you do not know what benefits you expect from an improved process, or how to measure them, it will be difficult to document convincing evidence of the benefits realized from the RIE.



Below is a list of potential metrics for documenting RIE improvements drawn from both previous LHE studies and other Lean studies that may be of interest to higher education.55

Quality Metrics

Quality metrics focus on the quality of the outcomes of the process (e.g., service or product delivered) and the quality of the process that delivered these outcomes. These include objective metrics of process performance as well as subjective assessments by the beneficiaries of the process.

Metrics Related to the Quality of the Outcomes of the Process. Examples of performance metrics related to the quality of the outcome of a process include: time to completion (e.g., reduction in waiting time to receive mental health counseling services); percent of target goal (e.g., number of library books re-shelved per allocated staff hour); reduction in problems (e.g., the decrease in classroom scheduling change requests); measure of critical performance goal (e.g., percent of full-time freshmen successfully completing 30+ hours at the end of spring semester); and meeting demand rate or on-time delivery (e.g., percent success in delivering mandatory advising during the mid-semester 4-week course registration period to 2,000 students).

Metrics Related to the Quality of the Process. Examples of performance metrics related to the quality of the process include: percent complete and accurate (e.g., the required information and materials employees need to correctly calculate bursar charges are incorrect 15% of the time); frequency/percentage of errors (e.g., the number of times student records are misfiled, or the percentage of misfiled records); error rate (e.g., six errors made for every 50 graduate assistantship contracts processed, or a 12% error rate); significance of errors (e.g., faculty turnover attributed to delays in providing a promised startup package); and required rework (e.g., the number of faculty travel reimbursement requests that must be resubmitted because of mistakes or missing information). On-time delivery measures the percentage of time a process is completed within a promised or established unit of time (e.g., during peak periods of activity, university staff fail to prepare meeting rooms with requested tables and seating, technology support, and refreshments 13% of the time).
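Several of these quality-of-process metrics reduce to simple ratios. As a minimal sketch (the function names are illustrative, and the 85-of-100 bursar figure is an assumption implied by the 15% incorrect rate; the contract numbers come from the example above):

```python
# Sketch of two quality-of-process metrics as simple ratios.

def percent_complete_accurate(correct_items: int, total_items: int) -> float:
    """Share of incoming items that are complete and accurate, as a percentage."""
    return correct_items / total_items * 100

def error_rate(errors: int, items_processed: int) -> float:
    """Errors per item processed, expressed as a percentage."""
    return errors / items_processed * 100

# Bursar example: information is incorrect 15% of the time, so
# (hypothetically) 85 of every 100 submissions arrive complete and accurate.
print(percent_complete_accurate(85, 100))  # 85.0

# Graduate assistantship contracts: 6 errors per 50 contracts processed.
print(error_rate(6, 50))                   # 12.0
```

Tracking both before and after the RIE gives the team a defensible, numeric baseline rather than anecdotes.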

Metrics Related to Subjective Evaluations of the Process. Examples of performance metrics related to subjective evaluations of the process include: assessment/feedback instruments (e.g., student completion of a short assessment distributed at the time of an appointment at the writing skills center); survey data (e.g., graduate student satisfaction with the process for assigning assistantships); focus groups (e.g., facilitator-led evaluation by representative clients of the office of capital planning); and interviews (e.g., 78% of parents are either “very dissatisfied” or “dissatisfied” with the impersonal, menu-driven electronic phone attendant used for calls to the office of financial aid). Complementary metrics on the experiences of employees involved in the delivery of the process may also be relevant.

Operational Delivery Metrics

Operational delivery metrics focus on the internal operations of providing a process or service that can be used to help determine its efficiency (enhancing the ratio of steps, time, etc. that add value to the process or service relative to the non-value-added steps, time, etc. that are “waste”).

Metrics Related to the Time Required by the Process. Examples of performance metrics related to the time required by the process include: cycle time (e.g., the time it takes to provide a prospective student with admissions information); lead/total time (e.g., the time between calling to schedule an advising appointment and the completion of that appointment is, on average, 4.5 work days, inclusive of waiting, interruptions, etc.); and changeover time (e.g., the amount of time to reset classroom technology and seating configurations between different academic courses). In addition, process time (also known as value-creating time) measures the time it takes to complete a value-adding step of a process from beginning to end without interruption (e.g., the office of diversity initiatives can review a job posting/ad for compliance with federal, state, and university policies in an average of 15 minutes).56 Similarly, value-added time measures the percent of the total elapsed time of a process that adds value desired or expected by a beneficiary and is a useful operational metric for estimating the waste in a process. Value-added time is expressed as a percentage (value-added time = [aggregated times of steps in the process that add value ÷ total time] × 100). For example, in the office of diversity initiatives example above, if it took the office 12 hours before it returned the draft job posting to the submitter, the average value-added time would be 2% (15 minutes ÷ 720 minutes). Finally, “just-in-time” performance metrics such as Accumulated Inventory (also known as Batching Practices) measure the degree to which information or activity supporting a process (physical paperwork, pieces of mail in an outbox, projects underway but not yet completed, an accumulation of emails requiring action) queues up, detracting from the flow of the process. Accumulated Inventory contributes to an increase in total time and a concomitant decline in value-added time (e.g., research proposals requesting approval to use human subjects are “batched” until the monthly meeting of the university’s institutional review board).
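The value-added time calculation above can be sketched in a few lines. This is illustrative only, using the 15-minute process time and 720-minute total elapsed time from the diversity-initiatives example (the function name is an assumption):

```python
# Sketch of the value-added time metric: the percent of total elapsed
# time that actually adds value for the beneficiary.

def value_added_percent(value_adding_minutes: float, total_minutes: float) -> float:
    """Value-added time = (value-adding time / total elapsed time) * 100."""
    return value_adding_minutes / total_minutes * 100

process_time = 15      # minutes of uninterrupted, value-adding review
total_time = 12 * 60   # minutes elapsed before the posting is returned

print(round(value_added_percent(process_time, total_time)))  # 2
```

The low percentage signals that most of the elapsed time is waste (queuing, waiting, batching), which is exactly where an RIE looks for improvement.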

Metrics Related to the Number of Steps in the Process. Examples of performance metrics related to the number of steps in the process include: number of individual steps in the process (e.g., 23 steps from initiation to approval in issuing a summer teaching contract); number of persons/handoffs necessary to complete the service (e.g., how many individuals at the university are required to review and sign off on a grant application); and physical distance required by the process (e.g., the total distance in feet a faculty member must walk to the common printer/copier).

Metrics Related to the Adequacy of Resources for the Process. Examples of performance metrics in this area focus on ratios of the availability of people, equipment, and facilities versus the demand for people, equipment, and facilities to complete a process. Available time measures the amount of time the employee (or an office) is open and can commit to the process after subtracting break times, attendance at meetings, training time, etc. (e.g., a full-time support staff shared by two academic departments is available for each department approximately 3 hours/day exclusive of breaks, travel between departments, service on classified staff council, and required university training). The staffing capabilities metric reflects the ability to continue a process during absences (e.g., illness, training, and meetings) or periods of high demand (e.g., spending and personnel request authorizations are delayed due to outreach and fund-raising activities that require the business dean to be out of the office 50% of the workweek).
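The available time metric is a simple subtraction of commitments from scheduled time. A minimal sketch, assuming an 8-hour day and illustrative deduction figures consistent with the shared-staff example above (roughly 3 hours per department):

```python
# Sketch of the available-time metric: scheduled time minus commitments
# (breaks, travel, council service, training) that cannot go to the process.

def available_hours(scheduled: float, *commitments: float) -> float:
    """Hours an employee can actually commit to the process."""
    return scheduled - sum(commitments)

# Shared support staff: an 8-hour day less 1.0 h of breaks, 0.5 h of
# travel between departments, and 0.5 h of council/training time,
# split evenly across two academic departments.
per_department = available_hours(8.0, 1.0, 0.5, 0.5) / 2
print(per_department)  # 3.0
```

Comparing available time against the demand for the process shows whether a bottleneck is a process problem or simply a capacity shortfall.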

Reliability of equipment measures the percent of time that essential work-related equipment (hardware and software) is available when needed for a process (e.g., available internet bandwidth for the university network is at 65% of capacity during traditional workdays due to faculty and staff demand, thus limiting the updating of administrative systems to once daily during the overnight hours).
