Measuring and Assessing Reference Services and Resources: A Guide

Measuring and Assessing Reference Services and Resources: A Guide offers an expansive definition of reference service, assessment planning advice, and measurement tools to assist managers in evaluating reference services and resources. The measurement tools presented here are fully analyzed for validity and reliability in The Reference Assessment Manual (RASD and Pierian Press, 1995). Where formally validated tools were not available, bibliographic references to assessment methods reported in the literature are provided.

1.0 Definition of Reference

Reference Transactions are information consultations in which library staff recommend, interpret, evaluate, and/or use information resources to help others to meet particular information needs. Reference transactions do not include formal instruction or exchanges that provide assistance with locations, schedules, equipment, supplies, or policy statements.

Reference Work/Services includes reference transactions and other activities that involve the creation, management, and assessment of information or research resources, tools, and services.

Approved by RUSA Board of Directors, January 14, 2008

2.0 Planning Reference Assessment

Before beginning an assessment project, develop a clear statement of the specific questions you want to answer, the measurable data needed to answer your questions, and the performance or quality standards you will use to measure your success. Next, choose assessment tools that are relevant to your stated goals. Modify existing tools to meet your needs, and always pretest your tool on a small representative sample of data or subjects. Finally, to have greater confidence in the validity of your results, use more than one assessment tool.

Basic Questions to Consider When Assessing Reference Services and Sources

What questions are you trying to answer?

Clearly define your questions before proceeding to measurement, since the questions themselves will help determine the standards of performance or quality you will set, the instrument(s) you will use to collect data, and the techniques you will use to analyze your data.

What performance or quality standards will you use to measure your success?

Before beginning an assessment project, always develop goals and measurable objectives to use as benchmarks. Comparing your results to these standards will show whether your objectives have been met. RUSA provides a wide array of standards and guidelines that can be used to assess college libraries and reference services. Data from other colleges and universities, or from sources such as the Integrated Postsecondary Education Data System (IPEDS) surveys (http://nces.ed.gov/ipeds/), can also be used as benchmarks for comparative purposes.

How are you going to use the data generated?

Your questions will drive the type of data that you need to collect. In addition, the level of data collected (i.e., nominal, ordinal, interval, or ratio) will determine the power of the statistical tests you can use. For example, categorical data such as a respondent’s academic status or major permit grouping and counting, while continuous data such as GPA or the number of reference questions asked support numeric analyses such as means and correlations.
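
As a concrete illustration, the short Python sketch below shows how the level of measurement shapes analysis. It uses the pandas library, and all column names and values are invented for illustration.

```python
import pandas as pd

# Hypothetical survey responses; all values are invented for illustration.
responses = pd.DataFrame({
    "academic_status": ["freshman", "senior", "graduate", "senior"],  # nominal
    "satisfaction": [3, 5, 4, 2],                                     # ordinal (1-5 scale)
    "gpa": [3.1, 3.8, 3.5, 2.9],                                      # continuous (ratio)
    "questions_asked": [2, 0, 5, 1],                                  # continuous (ratio)
})

# Categorical (nominal) data permits grouping and counting...
print(responses.groupby("academic_status")["questions_asked"].sum())

# ...while continuous (interval/ratio) data supports means and correlations.
print(responses["gpa"].mean())
print(responses["gpa"].corr(responses["questions_asked"]))
```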

What measurements will you need to generate the data that you want?

There are many ways to collect data, but the way data is measured impacts how it can be used in analyses. Consider whether you need both qualitative and quantitative measures, since each provides valuable data for analyses. For example, if you want to collect data on the number of reference questions asked, you can use quantitative measures. If you wish to explore the reference interaction itself, you may want to consider qualitative measures.

Can you use other measures to triangulate your data?

Triangulation means collecting data using several different methods so that you have greater support for the results of your analyses. The Wisconsin-Ohio Reference Evaluation Program (WOREP) is one example of an instrument that uses triangulation by collecting data from two different sources (patron and librarian) for each transaction. The more sources of data, the better your analyses will be. Often, you can use qualitative data to support quantitative data and vice versa, but beware of comparing different types of data since they may actually be measuring different things. Thus, triangulation increases the validity of your analyses and results.
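
A minimal sketch of this idea, not the actual WOREP instrument: the hypothetical Python snippet below pairs a patron rating with a librarian rating for each of five transactions and checks how well the two sources agree.

```python
import pandas as pd

# Hypothetical success ratings (1-5) for the same five transactions,
# collected independently from the patron and the librarian.
patron_rating = pd.Series([5, 4, 2, 5, 3])
librarian_rating = pd.Series([4, 4, 3, 5, 2])

# Agreement between the two sources lends support to either measure alone;
# large disagreements flag transactions worth a closer, qualitative look.
print("Correlation:", patron_rating.corr(librarian_rating))
print("Transactions differing by 2+ points:",
      int(((patron_rating - librarian_rating).abs() >= 2).sum()))
```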

What methodology do you need to use?

The type of data desired will help determine the data collection instruments required. For example, if you want to measure satisfaction, a survey might be used. If you are examining how to improve your services, a focus group may be the best method. If you wish to determine how to staff a service point, unobtrusive counting measures can be used. These data collection instruments, in turn, help determine the analytical techniques that can be employed to interpret the data.

Have you pre-tested all your data collection instruments?

Always pretest your instruments to ensure that they can be understood by those who will be completing them and that they are actually measuring what you want them to measure. For example, before administering a survey, pretest the survey instrument on a group similar to those who will be completing the survey. Do they understand the questions? Do the given choices cover all the possible responses? Can you code the results easily? Then, test how you plan to analyze the final data. Is the methodology appropriate for the data?

What statistical analytical techniques do you want to use?

The data and its method of measurement will help determine the appropriate analytical techniques. Do you have groups of respondents to a survey? If you have two groups, then t-tests may be used; if you have more than two groups, then F-tests (ANOVAs) may be employed. Do you have data that can be correlated? Then a Pearson test of correlation may be used. Statistical analysis software packages, such as SPSS or SAS, can make this step much easier, but make sure you are using the appropriate analytical methods for the data that you have generated. Consult researchers with statistical knowledge to help you run analyses and interpret the results.
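
For example, the Python sketch below runs the three tests named above using the scipy library; the satisfaction scores and group names are invented for illustration, and SPSS or SAS would produce equivalent results.

```python
from scipy import stats

# Hypothetical satisfaction scores (1-7) from three respondent groups.
undergrads = [5, 6, 4, 5, 7, 6]
grads = [4, 5, 5, 3, 4, 5]
faculty = [6, 7, 5, 6, 7, 6]

# Two groups: independent-samples t-test.
t, p = stats.ttest_ind(undergrads, grads)
print(f"t-test: t={t:.2f}, p={p:.3f}")

# More than two groups: one-way ANOVA (F-test).
f, p = stats.f_oneway(undergrads, grads, faculty)
print(f"ANOVA: F={f:.2f}, p={p:.3f}")

# Two continuous variables: Pearson correlation.
visits = [1, 3, 2, 5, 4, 6]
questions = [0, 2, 1, 4, 3, 5]
r, p = stats.pearsonr(visits, questions)
print(f"Pearson: r={r:.2f}, p={p:.3f}")
```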

Who is the audience for this assessment or research?

The audience will determine the format that the presentation of the results will take. If you are making a presentation, then the use of software such as PowerPoint with graphs and charts may be appropriate. If you are compiling an annual report, using spreadsheet software such as Excel to generate the charts may be helpful. The audience will also help you determine the type of analyses to perform and how these analyses are actually presented.

3.0 Measuring and Assessing Reference Transactions and Services

3.1 Reference Transactions – Volume, Cost, Benefits, and Quality

Simple tallies of reference transactions, collected daily or sampled, can be interpreted to describe patterns of use and demand for reference services. Managers commonly use transaction statistics to determine appropriate service hours and staffing. Volume statistics are often reported to consortia to compare local patterns of use and demand with those of peer libraries and to calculate national norms.
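
As one hypothetical illustration of such analysis, the Python sketch below (with invented timestamps) tallies transactions by hour of day; peaks in the tally suggest hours that warrant heavier desk staffing.

```python
import pandas as pd

# Hypothetical reference-desk log: one timestamp per transaction.
log = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2011-10-03 09:15", "2011-10-03 10:40", "2011-10-03 10:55",
        "2011-10-03 14:05", "2011-10-04 10:20", "2011-10-04 15:30",
    ])
})

# Tally transactions by hour of day across the sample period.
by_hour = log["timestamp"].dt.hour.value_counts().sort_index()
print(by_hour)  # e.g., the 10:00 hour shows the heaviest demand here
```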

Analysis of reference transactions by type, location, method received, sources used, and subject can be used for collection development, staff training/continuing education, and budget allocation. Analysis of accuracy, behavioral performance, interpersonal dynamics, and patron satisfaction during the reference interview can be used for staff training and continuing education.

3.2 Reference Service and Program Effectiveness

Cost, benefit, and quality assessments of reference services provide meaningful and practical feedback for the improvement of services, staff training, and continuing education. To determine levels of service effectiveness, cost, benefit, and quality, data must be judged in light of specific library goals, objectives, missions, and standards. A variety of measures, such as quality or success analysis; unobtrusive, obtrusive, or mixed observation methods; and cost-benefit analysis, provide invaluable information about staff performance, skill, knowledge, and accuracy, as well as overall program effectiveness.

3.2.1 Cost/Benefits Analysis

In cost-benefit studies, costs are compared to the benefits derived by the patrons served. Patron benefits may be measured in terms of actual or perceived outcomes, such as goals and satisfaction achieved, time saved, failures avoided, money saved, productivity, creativity, and innovation.
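
A minimal worked example, with every figure invented for illustration: the Python sketch below compares the annual staff cost of a reference service to one monetized patron benefit, time saved, to produce a cost per transaction and a benefit-to-cost ratio.

```python
# All figures are hypothetical, for illustration only.
staff_cost_per_hour = 30.00        # salary plus benefits
desk_hours_per_year = 2000
transactions_per_year = 8000

annual_cost = staff_cost_per_hour * desk_hours_per_year
cost_per_transaction = annual_cost / transactions_per_year

# One possible benefit measure: patron time saved per transaction,
# valued at an assumed patron hourly rate.
minutes_saved_per_transaction = 20
patron_value_per_hour = 25.00
annual_benefit = (transactions_per_year
                  * (minutes_saved_per_transaction / 60)
                  * patron_value_per_hour)

print(f"Cost per transaction: ${cost_per_transaction:.2f}")
print(f"Benefit-to-cost ratio: {annual_benefit / annual_cost:.2f}")
```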

3.2.2 Quality Analysis - Patron Needs and Satisfaction

The perceptions and needs of patrons are important measures of the quality and impact of reference services. Surveys, combined with other measures such as numerical counts, observation, and focus groups, are commonly used to conduct comprehensive assessments of service performance and patron needs.

4.0 Measuring and Assessing Reference Resources – Use, Usability, and Collection Assessment

As print and electronic reference collections grow in size and format, they must be continually assessed to determine their relevance, utility, and appropriateness to patrons. Use and usability tests examine how often and how well visitors navigate, understand, and use web sites, electronic subscription databases, free Internet resources, library subject web pages, and other web-based tools such as bibliographies, research guides, and tutorials.

Acknowledgements

The following RUSA/RSS Evaluation of Reference and User Services Committee members spent many hours researching, writing, and reviewing the Guide.

Lisa Horowitz (MIT), Chair, 2002-2003

Lanell Rabner (Brigham Young), Guidelines Project co-chair

Susan Ware (Pennsylvania State), Guidelines Project co-chair

Gordon Aamot (University of Washington)

Jake Carlson (Bucknell)

Chris Coleman (UCLA)

Paula Contreras (Pennsylvania State)

Leslie Haas (University of Utah)

Suzanne Lorimer (Yale)

Barbara Mann (Emory)

Elaina Norlin (University of Arizona)

Cindy Pierard (University of Kansas)

Nancy Skipper (Cornell)

Judy Solberg (George Washington)

Lou Vyhnanek (Washington State)

Chip Stewart (CUNY)

Tiffany Walsh, 2010-2011

Robin Kinder, 2010-2011

Jan Tidwell, 2010-2011

Richard Caldwell, 2010-2011

Jake Carlson, ERUS Chair, 2004

Barbara Mann, ERUS Chair, 2005

Jill Moriearty, ERUS Chair, 2006

Gregory Crawford, ERUS Chair, 2007

David Vidor, ERUS Chair, 2008