IEEE TCDL Bulletin
 
Volume 3, Issue 2
Summer 2007

 

Evaluating the National Science Digital Library

Michael Khoo
NSDL Core Integration
UCAR, PO Box 3000, Boulder, CO 80307, USA
+1 303 497 2604
<mjkhoo@ucar.edu>

 

Abstract

NSDL Core Integration is conducting a program-wide evaluation of all NSDL program activities. The evaluation will inventory and describe NSDL achievements to date, and identify directions for future development. The scale and complexity of the NSDL program (over 200 projects across five years) pose significant challenges for the evaluation. This poster outlines the theoretical and practical approaches being used to guide and coordinate evaluation activities.

 

Introduction

NSDL Core Integration is conducting a program-wide evaluation of all NSDL program activities since 2000. The evaluation is examining individual NSDL projects, collections, and services, as well as the program-wide organizational, communicational, and knowledge structures that together support the NSDL community. The evaluation is both summative, providing an inventory of NSDL activities and achievements to date, and formative, working to identify future strategies for NSDL development, for instance as a component of cyberinfrastructure.

[Figure 1: thumbnail of the poster]

Evaluating NSDL is a challenging task. The program includes over 200 projects, funded for various lengths of time since 2000, a number of which have finished their development and/or are no longer active. Further, these projects are or have been engaged in developing a wide and heterogeneous range of collections, services, and technological and social architectures. Finally, NSDL projects are distributed across the USA. NSDL's organization has thus emerged over time, and there is no single definitive form of NSDL to be evaluated.

Successful evaluation depends on the formulation of meaningful strategic evaluation questions [1]. However, the large-scale heterogeneous structure of NSDL makes it impossible to derive a simple 'one-size-fits-all' set of questions. NSDL evaluation is therefore being guided by a theoretical meta-framework, the 'resource lifecycle' [2]. The model assumes that the overall purpose of NSDL is to transform raw scientific data into valuable pedagogical tools by coordinating a series of linked value-adding operations. The model outlines five core NSDL value-adding activities (Figure 2): (a) resource creation and review; (b) resource aggregation and collection development; (c) web site and search engine design; (d) classroom use/reuse and educator tools; and (e) communication and knowledge infrastructure (meetings, committees, e-mail lists, wikis, outreach, etc.). Taken separately, each of these core activities provides a framework for developing specific evaluation initiatives; taken as a whole, the model provides a coherent narrative structure for reporting the evaluation to NSDL and to NSF.

[Figure 2: the five core NSDL value-adding activities]
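
By way of illustration only, the model can be read as a mapping from each core activity to the strategic evaluation questions it frames. The following Python sketch is not part of the NSDL toolset; the identifiers and example questions (drawn loosely from the discussion below) are hypothetical:

    # A minimal sketch (hypothetical names) of the five core value-adding
    # activities, mapped to example strategic evaluation questions.

    CORE_ACTIVITIES = {
        "resource_creation_and_review": [
            "What rubrics do projects use to create and review resources?",
            "What is the quality of the resources produced?",
        ],
        "aggregation_and_collection_development": [
            "How complete and consistent is item- and collection-level metadata?",
        ],
        "web_site_and_search_engine_design": [
            "How usable are the search interface and results pages?",
        ],
        "classroom_use_and_educator_tools": [
            "How are resources used and reused in classrooms?",
        ],
        "communication_and_knowledge_infrastructure": [
            "How well do meetings, lists, and wikis support the community?",
        ],
    }

    for activity, questions in CORE_ACTIVITIES.items():
        print(activity)
        for question in questions:
            print("  -", question)

Taken separately, each key above frames evaluation initiatives for one core activity; taken together, the mapping mirrors the narrative structure the model provides for reporting.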

These core activities have been subdivided into a further series of value-adding activities that permit the design and structuring of evaluation plans addressing individual activities within the NSDL program. These range from resource creation, through the addition of item- and collection-level metadata, to resource use and reuse (Figures 3 and 4). Taken together, these stages constitute a 'lifecycle' for educational digital resources. Note that the cycle can be iterative: a resource that is the product of one cycle can serve as the basis for the development of a new resource in a following cycle.

[Figure 3: stages of the resource lifecycle]

[Figure 4: stages of the resource lifecycle]
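
The iterative character of the lifecycle can be sketched in a few lines of Python. The stage labels below are placeholders apart from those named in this paper, and the sketch shows only how the product of one cycle can seed the next:

    # A toy model of the iterative resource lifecycle: each pass adds value
    # stage by stage, and the product of one pass can seed the next.

    def apply_stage(stage, resource):
        # Placeholder for the value-adding work done at each stage.
        return f"{resource} -> {stage}"

    def lifecycle_pass(resource):
        # Carry a resource through one pass of the lifecycle.
        for stage in ("creation and review",
                      "item- and collection-level metadata",
                      "search and discovery",
                      "use and reuse"):
            resource = apply_stage(stage, resource)
        return resource

    # The product of one cycle serves as the basis for a new
    # resource in the following cycle.
    resource = "raw scientific data"
    for _ in range(2):
        resource = lifecycle_pass(resource)
    print(resource)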

Each of the resource lifecycle stages identified provides a locale for specific evaluation activities, because each stage involves the interaction of (a) people, disciplines, and communities of practice (e.g. developers and users) with (b) particular technologies (e.g. cataloging tools) to generate (c) particular outcomes (e.g. metadata of a certain quality). Unpacking the relationships between these variables at each stage of the resource lifecycle, and reviewing the contribution of these processes to program-wide outcomes, provides a context within which evaluation models can be applied, such as the assessment of projects' resource creation processes and resource quality, webmetrics analyses of web site traffic, or user interface testing of projects' web sites and search engines. For example, in the case of resource creation and review, evaluation activities include assessment of the quality of the resources created, of the rubrics projects use to create resources, and of the support that NSDL provides individual projects in carrying out the tasks of resource creation and review. Similarly, stage 7, search and discovery, includes the evaluation of such factors as the quality of a collection's metadata (which dictates which resources are discoverable in the first place), together with user testing of the web-based search interface and the search results pages.
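
As a rough illustration of this framing (the names below are hypothetical, not drawn from NSDL's actual instruments), a lifecycle stage can be modeled as a triple of people, technologies, and outcomes, annotated with the evaluation activities that probe the relationships between them; stage 7 serves as the example:

    # A sketch of a lifecycle stage as the interaction of people,
    # technologies, and outcomes, plus the evaluations located there.

    from dataclasses import dataclass

    @dataclass
    class LifecycleStage:
        name: str
        people: list[str]        # disciplines and communities of practice
        technologies: list[str]  # e.g. cataloging tools, search engines
        outcomes: list[str]      # e.g. metadata of a certain quality
        evaluations: list[str]   # evaluation activities at this stage

    search_and_discovery = LifecycleStage(
        name="stage 7: search and discovery",
        people=["end users", "collection developers"],
        technologies=["web-based search interface", "metadata repository"],
        outcomes=["discoverable resources"],
        evaluations=[
            "metadata quality review (what is discoverable at all)",
            "user testing of the search interface and results pages",
        ],
    )

    print(search_and_discovery)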

Guided by the resource lifecycle model, evaluation activities within NSDL have so far focused on resource creation and review processes (core activity 1), collection development and metadata quality (core activity 2), search results page usability (core activity 3), webmetrics (core activity 4), project-level evaluation practices (core activities 1-4), and NSDL Annual Meetings and workshops (core activity 5). Evaluation methods have included online surveys, interviews, focus groups, ethnography, and user interface testing/paper prototyping. Full reports of these activities are available on the NSDL evaluation wiki at <http://eval.comm.nsdl.org/>.
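
To give a concrete flavor of just one of these methods, a minimal webmetrics analysis might begin by counting daily requests per page in a standard web server access log. This sketch assumes Common Log Format and says nothing about NSDL's actual webmetrics pipeline; the log path is hypothetical:

    # Count daily GET requests per page from a Common Log Format access log.

    import re
    from collections import Counter

    LOG_LINE = re.compile(r'\S+ \S+ \S+ \[(\d{2}/\w{3}/\d{4}):[^\]]*\] "GET (\S+)')

    def daily_page_counts(log_path):
        counts = Counter()
        with open(log_path) as log:
            for line in log:
                match = LOG_LINE.match(line)
                if match:
                    day, page = match.groups()
                    counts[(day, page)] += 1
        return counts

    # Usage, with a hypothetical log file:
    # for (day, page), hits in daily_page_counts("access.log").most_common(10):
    #     print(day, page, hits)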

References

[1] Reeves, T., Apedoe, X., & Woo, Y. H. 2003. Evaluating Digital Libraries: A User-Friendly Guide. UCAR. <http://dlist.sir.arizona.edu/398/>.

[2] NSDL. 2005-6. Evaluation White Paper. <http://eval.comm.nsdl.org/cgi-bin/wiki.pl?WhitePaper>.

 

© Copyright 2007 Michael Khoo
Some or all of these materials were previously published in the Proceedings of the 6th ACM/IEEE-CS Joint Conference on Digital Libraries, ACM 1-59593-354-9.
