LibQUAL+ (Association of Research Libraries, n.d.; Cook, 2001)
LibQUAL+ is a method of evaluating the quality of library services, developed in 1999 on the basis of SERVQUAL by Fred Heath (Thompson, 2007), head of the Texas A&M University Libraries, Colleen Cook and others. LibQUAL+ was continuously refined after its development; like SERVQUAL, the 2007 version consisted of 22 question items plus 11 additional items. The question items covered a total of six dimensions: the three foundational dimensions of ‘Information Control’, ‘Affect of Service’ and ‘Library as Place’, plus three additional ones. Eight dimensions were specified in 2000, four in 2002, and three in 2007, revealing a tendency to consolidate the dimensions.
Figure 3.6 Impacts of performance indicators, measurements and evaluations.
The LibQUAL+ questionnaire survey was conducted via a website, and participants responded on a nine-point Likert scale to the same three factors as in SERVQUAL, namely ‘my desired service level’, ‘my minimum service level’ and ‘perceived service performance’. In LibQUAL+, responses are collated and the results are reported in a colourful diagram (Webster & Heath, 2002).
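The arithmetic behind those three ratings can be illustrated with a short sketch. LibQUAL+ derives two gap scores per item: the adequacy gap (perceived minus minimum) and the superiority gap (perceived minus desired). The function and variable names below are illustrative, not part of any LibQUAL+ tooling.

```python
def gap_scores(minimum: int, desired: int, perceived: int) -> tuple[int, int]:
    """Return (adequacy_gap, superiority_gap) for one question item.

    Each rating is on the nine-point scale (1-9). A positive adequacy
    gap means perceived service exceeds the minimum acceptable level;
    a negative superiority gap means it falls short of the desired level.
    """
    adequacy_gap = perceived - minimum
    superiority_gap = perceived - desired
    return adequacy_gap, superiority_gap


# Example: a user rates minimum 4, desired 8, perceived 6 on one item.
print(gap_scores(4, 8, 6))  # -> (2, -2): adequate, but below the desired level
```

Scores falling between the two gaps define the user’s ‘zone of tolerance’ for that item, which is what the colourful reporting diagram visualises.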
Fig. 3.6 shows that the ALA’s evaluation theories (performance indicators, measurements and evaluations) continue to have a relatively large influence in the world of libraries. Their influence increased substantially from the 1970s to the 1990s and persists to this day.
By contrast, searches for the keyword ‘ISO 11620’ return some references in the database, but all of them are examples from outside the United States. It follows that ISO 11620 had only a minor influence within the United States. Furthermore, the number of references to MRAP and to Lancaster’s library management evaluation theories was also extremely low.
Case Study: Cornell University Libraries (Ross, 1976)
The Cornell University Libraries hired an outside management consultant in 1965 and joined the OCLC in 1966 to computerise their library operations. To assess the effect of this sequence of management activities, they conducted a performance evaluation in 1975 centred on expenses. The target was technical service operations, mainly computerisation, and the indicators were likewise set with a focus on technical services. However, much of the data needed for the performance evaluation was inaccessible, and aggregating the data proved troublesome.
Immediately after the systems were introduced and the libraries joined the OCLC, costs rose substantially, and the project was viewed with pessimism. Although costs were high in the first fiscal year, the following years revealed the streamlining effect of online cataloguing through the OCLC. On the basis of the performance evaluation results, therefore, computerisation came to be regarded as worthwhile.
Case Study: University of Massachusetts Amherst Libraries (Fretwell, 1976)
The University of Massachusetts Amherst Libraries started MRAP in July 1974, against the backdrop of a recession in the U.S. economy that brought large-scale budget cuts, a freeze on the recruitment of library staff, and drastic cuts to material and operational expenses. Under such circumstances, the university library managers had to attempt to optimise their management.
Based on the MRAP evaluation results, the library identified a large number of problem areas and proposed improvement plans. Chief among these were the following three points: (1) defining the library’s mission and setting goals by function; (2) creating a framework for drafting a systematic management plan; and (3) giving library staff fair and open opportunities for promotion. However, library management did not call on staff to engage in radical reform based on these problem areas, stopping instead at improvements that did no more than fill the gap between the evaluation results and the status quo. This was because, as the MRAP project dragged on, the circumstances surrounding the library changed dramatically.
The advantages of MRAP became evident to library staff when they viewed it from a managerial perspective. However, because MRAP consumed so much time, it was also clear that it would be difficult to implement continuously.
Case Study: University of Connecticut Library (Stevens, 1975)
The University of Connecticut Library implemented MRAP in 1973, mainly in order to evaluate its results in comparison with those of other libraries. However, MRAP took up a great deal of time, and a number of other problems became evident as well.
Case Study: Yale University Libraries (Nitecki & Hernon, 2000)
From 1998 to 1999, Hernon and Nitecki used SERVQUAL to evaluate the quality of services at the Yale University Libraries. Judging that some of the SERVQUAL evaluation items did not lead to improved library services, they added and revised items to adapt the instrument to the libraries. The analysis made clear that, of the five dimensions, users prioritise reliability, and that they may wish to perform searches themselves without depending on library staff. Hernon and Nitecki then conducted focus group interviews based on their analysis results and argued that the causes of the discrepancy between the service quality users expect and the services actually provided should be pursued.
Problems in implementing SERVQUAL included its huge costs and the heavy demands it placed on the working hours of library staff. So as not to demotivate the staff obliged to perform all this work, the alternative of using software to facilitate the analysis was proposed, but this proved even more expensive.
Case Study: Duke University Medical Center Library (Holmgren, Murphy, Peterson, & Thibodeau, 2004)
The Duke University Medical Center Library advanced the digitisation of its library materials, and almost all of its academic journals are now electronic. In 2002, to promote management reforms, the library used LibQUAL+ to evaluate the quality of its services. In the initial phase of the introduction, an ongoing survey was judged inappropriate given the characteristics of the management theory, and project members were assembled from a wide range of the university’s departments.
The survey was conducted by emailing around 12,500 recipients, comprising students, clinicians, academic staff and hospital staff. After the emails were sent, however, four problems emerged. First, even when the email was received, clicking the link to the LibQUAL+ questionnaire website did not always display the correct webpage. Second, many error emails were returned after sending. Third, some users did not proceed to the end of the questionnaire. Fourth, there were major problems with the questionnaire’s design: many respondents commented that it was stressful because it was complicated, contained too many questions, and had ambiguous content. One researcher well versed in research methodology even commented that the content of the questions ‘was a joke’, making it clear that the questionnaire was not sufficiently understood. The survey results showed that ‘Personal Control’ and ‘Access to Information’ were important to users.