Volume 4 Issue 2
Fall 2008
ISSN 1937-7266

Evaluative metadata in educational digital libraries:
How users use evaluative metadata in predictive judgment

Soeun You

College of Information
Florida State University
ssy6209@fsu.edu

ABSTRACT

This dissertation will investigate the effect and role of evaluative metadata in educational digital libraries. As the need for contextual quality evaluation of educational resources grows, more and more educational digital libraries are paying attention to evaluative metadata (i.e., users’ ratings, reviews indicating the quality of a resource, and usage data). An appropriate understanding of end-users’ needs and preferences regarding evaluative metadata may guide the design of effective digital library systems that serve users’ needs, as well as contribute to creating an evaluative metadata model. This dissertation will examine how users use and review evaluative metadata in the context of relevance judgment within the process of predictive judgment. Further, it will examine users’ preferences among evaluative metadata elements.

Categories and Subject Descriptors

H.3.7 [Information Storage and Retrieval]: Digital Libraries - User issues.

General Terms

Human Factors

Keywords

Metadata, evaluative metadata, educational digital libraries, relevance criteria, document selection behavior.

1. INTRODUCTION

Educational digital libraries facilitate the efficient and effective sharing of web-based educational resources. They have the advantages of offering domain-specific, quality information and the ability to provide a richer set of metadata. Metadata in search results provide users with cues to help them judge and evaluate documents in the process of document selection. Therefore, it is important that metadata in search results contain sufficient information with which users can evaluate a document. Educational digital libraries generally offer several types of metadata in their search results to support users’ judgments: general descriptive metadata (e.g., author, title, description) and pedagogic metadata elements that offer information specific to the educational context (e.g., grade level, difficulty, time).

Previous research, however, indicates that users want more contextual, situational information from metadata [1, 2]. For example, teachers, one of the main user groups of educational digital libraries, want not only to find topically relevant resources but also to learn how an educational resource could be used successfully in the classroom. Users expect metadata to provide additional context for relevance and quality that comes from the usage and evaluations of previous users or peers. Lynch predicted the importance of evaluative information beyond descriptive metadata in the networked environment, stating that “some of decision-making information required by users is evaluative rather than descriptive” (p. 1512) [3].

To address this problem, a few educational digital libraries include evaluative metadata (i.e., users’ ratings, reviews indicating the quality of a resource, and usage data) to improve each user’s search and evaluation of educational resources. As the need for contextual quality evaluation of educational resources grows, more and more educational digital libraries are paying attention to evaluative metadata. However, there is little research on how users actually use evaluative metadata, and little consensus about which evaluative metadata elements should be presented in search results. More and more educational digital libraries are also working toward search interoperability among collections, and evaluative metadata, like other descriptive metadata, can be part of such interoperability. Moreover, implementing evaluative metadata in a digital library takes time and money. An appropriate understanding of end-users’ needs and preferences regarding evaluative metadata may guide the design of effective digital library systems that serve users’ needs, as well as contribute to creating an evaluative metadata model.

Presently, users find evaluative metadata in a digital library’s search results, presented alongside other descriptive metadata. Users therefore recognize evaluative metadata as one of the metadata elements and use it to evaluate the relevance of resources to their information needs. To understand the usage of evaluative metadata, then, it is important to identify the stages of the document selection process and examine how evaluative metadata is used at each stage. Examining metadata involves two phases of judgment: predictive judgment and evaluative judgment [4, 5]. Predictive judgment concerns the interaction between users and metadata in the search results, while evaluative judgment concerns the interaction between users and the documents they have decided to pursue. Thus the interaction between users and metadata in search results occurs in the process of predictive judgment. Using a tentative model of predictive judgment, this study will identify the stages of predictive judgment and examine how users use and review evaluative metadata in the context of relevance judgment. Further, it will examine users’ preferences among evaluative metadata types.

2. RESEARCH QUESTIONS

The following research questions guide this study:

  • How do users use evaluative metadata in the process of predictive judgment?
  • Which types of evaluative metadata elements do users most commonly use in the process of predictive judgment?
  • How does each type of evaluative metadata element affect the process of predictive judgment differently?
  • How do users’ characteristics and tasks affect their use of evaluative metadata?

The research questions will be discussed in more detail in the section outlining the tentative model of document selection behavior, below.

3. CONCEPTUAL FRAMEWORK: A TENTATIVE MODEL OF PREDICTIVE JUDGMENT


Figure 1. A model of predictive judgment in document selection.

This study will use a tentative model of predictive judgment to investigate the usage of evaluative metadata. The model is tentative, only guides the data collection, and may change after the findings are analyzed. The model, presented in Figure 1, is built by incorporating key concepts from previous relevance research, mainly Wang & Soergel [6] and Rieh [5]. It highlights users’ cognitive processes of predictive judgment in digital libraries. The concepts in the model are explained in the following sections.

The process of predictive judgment is the first stage in which users access and choose documents. Users review metadata to find clues to relevance (Scanning/Examining). While reviewing metadata during predictive judgment, users judge whether documents are relevant (Relevance Judgment). These judgments lead to the decision of whether to look at a document in detail (Acceptance) or not (Rejection). Sometimes users cannot determine whether a document is relevant; they regard it as partially relevant or partially rejected (Maybe). Both metadata use and relevance judgments are affected by users’ knowledge, task type, and other factors. Among these, this dissertation will mainly focus on how users’ knowledge and task type affect users’ behavior.
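
As a rough illustration only (not part of the proposed model; the names and thresholds below are hypothetical), the flow from relevance judgment to the three decision outcomes can be sketched in Python as follows:

    from enum import Enum

    class Decision(Enum):
        ACCEPTANCE = "acceptance"  # look at the document in detail
        REJECTION = "rejection"
        MAYBE = "maybe"            # partially relevant / partial rejection

    def predictive_judgment(relevance_score, accept_at=0.7, reject_at=0.3):
        # Scanning/Examining and Relevance Judgment are assumed to have
        # yielded a single relevance score in [0, 1]; the cutoffs are
        # arbitrary placeholders, not values proposed by this study.
        if relevance_score >= accept_at:
            return Decision.ACCEPTANCE
        if relevance_score <= reject_at:
            return Decision.REJECTION
        return Decision.MAYBE

    print(predictive_judgment(0.5))  # Decision.MAYBE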

3.1 Scanning/Examining

Metadata elements are generally displayed in the search results. A study of users’ interaction patterns with search results in a digital library found that users tend to scan the results and then examine selected results in detail [7]. In addition, many previous studies have shown that users’ attention is limited and, as a result, they have a strong tendency to use only a limited number of metadata elements [8, 9].

Metadata can be divided into two types: descriptive metadata, which represents the document (e.g., title, author, description), and evaluative metadata, which is information created intentionally or unintentionally by previous users (e.g., reviews, recommendations, number of downloads, number of visits). Pedagogic metadata, which describes educational aspects such as grade level or difficulty, is treated as descriptive metadata in this research.
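
To make the distinction concrete, a minimal sketch of a search-result record might separate the two types as follows (the field names are illustrative and are not drawn from any particular digital library):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DescriptiveMetadata:
        title: str
        author: str
        description: str
        grade_level: str  # pedagogic elements are treated as descriptive here
        difficulty: str

    @dataclass
    class EvaluativeMetadata:
        reviews: List[str] = field(default_factory=list)  # explicit, intentional
        average_rating: float = 0.0                       # explicit, intentional
        download_count: int = 0                           # implicit, unintentional
        visit_count: int = 0                              # implicit, unintentional

    @dataclass
    class SearchResultRecord:
        descriptive: DescriptiveMetadata
        evaluative: EvaluativeMetadata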

Relevance studies indicate that the most frequently used metadata element is the title [6, 10, 11]. Users tend to use the title first and then turn to other metadata elements. This pattern does not hold in every case, however. For example, a user may use other elements such as the author instead of the title because he or she has knowledge about the author rather than the topic. In addition, different tasks may affect which metadata elements are used as clues to relevance.

Most of the previous relevance studies, however, did not consider the effect of evaluative metadata, since few library cataloging systems and digital libraries had implemented evaluative metadata. Interestingly, Wang and Soergel’s study found that users expressed a need for metadata that did not exist, such as the table of contents, the author’s expertise, and citation status. If evaluative metadata elements had been available, users would likely have used them in document selection judgments.

3.2 Relevance Judgment

Relevance has diverse dimensions in information studies. This research is based on the user-centered approach, in which relevance is a subjective, multidimensional, and contextual phenomenon. Empirical studies have examined the criteria that users apply to decide the relevance of a document surrogate (metadata) or a full-text document [10-12]. Using qualitative methods in natural environments, many researchers have tried to determine users' relevance judgment criteria. Research has been conducted in different contexts, for example, in different domains and subjects and with various document types. Even in similar environments, the interpretation or labeling of users' criteria differs across studies. Relevance criteria range from 9 or 10 variables [13, 14] to over 30 variables [12, 15].

Barry and Schamber [8] compared and synthesized the results of their studies and identified ten criteria that overlapped in both studies despite differences in users and sources. The categories are: depth/scope/specificity; accuracy/validity; clarity; currency; tangibility; quality of sources; accessibility; availability of information/sources of information; verification; and affectiveness. Schamber and Bateman [16] asserted that roughly ten criteria, including topicality, availability, and novelty, account for most relevance judgments. Barry [17] and Bateman [9] also agree that a limited number of criteria can be measured in relevance judgment.

Few studies have examined relevance judgment in the digital library environment. Fitzgerald and Galloway [15] examined information selection behavior when users use a digital library, observing ten undergraduate students making relevance judgments. Unlike other authors studying relevance criteria, they distinguished relevance judgment from evaluation. Evaluation is a more serious and decisive step, related to quality assessment. Relevance judgment and evaluation can occur concurrently and interact with each other. The criteria of relevance judgment are: interest; specific idea; useful or helpful; specific use; banned idea; divergent; specificity; background; more is better; essential; serendipity; and prior knowledge. The interest, specific idea, and useful or helpful criteria were mentioned most frequently by users in relevance judgment. The criteria of evaluation are: good; context; methodology; perspective; insufficient; author; currency; wrong methodology; obvious; strange; disagree; and authority. In evaluation, good, context, methodology, and perspective were used most frequently. They also identified three influential factors beyond the scope of relevance, evaluation, and decision: affect, convenience, and virtual library environment experience. Affect includes funny, like or dislike, disturbing, want, sad, annoy, happy, and fun. Convenience pertains to difficulty, availability, vocabulary, lack of abstract, language, size, and expense. Funny was mentioned most frequently under affect, whereas difficult was used most under convenience.

3.3 Decision

Decision making is the cognitive process of selecting from among several choices. In the digital library environment, users decide whether or not to obtain a document after reviewing the metadata in the search results. These decisions are tied to the relevance judgment, which provides the reason for choosing a document. While reading the metadata, users evaluate the potential usefulness of the document. Their decision can be acceptance of the document, rejection, or maybe [6].

3.4 Users’ Knowledge

Relevance judgment is a subjective and cognitive process. Previous relevance studies recognized that relevance judgment is affected by users’ knowledge, perception, experience, and education. Park [18] identified users’ knowledge and experience, categorized as ‘internal context,’ among the relevance criteria. Barry [10] also found users’ beliefs and preferences, and users’ background, to be relevance criteria. Wang and Soergel [6] identified users’ knowledge not as a relevance criterion but as a factor that affects the use of metadata and relevance criteria. They observed which types of knowledge were applied and how, finding four types of knowledge in the searching process: topic, person, journal, and agency. They asserted that this knowledge fills the gap between metadata and users in applying relevance criteria:

“…individual users’ cognitive structures determine their ability to use the type of information elements necessary for decision making; and they also have a personal preference as to how certain elements should be arranged and sequenced.”

Choosing metadata elements leads to relevance judgment. For that reason, the tentative model of predictive judgment hypothesizes that users’ knowledge affects both the scanning/examining of metadata elements and relevance judgment.

According to Bates [19], two main types of users’ knowledge affect information searching behavior: subject familiarity (domain knowledge) and catalogue familiarity (system knowledge). Subject familiarity is knowledge of the subject, such as a specific academic field, while catalogue familiarity is knowledge of the structure of the system. Stelmaszewska and Blandford [7] observed that novice users (in the domain) relied on the system’s assessment of relevance, whereas more experienced users (in the domain) made their judgments based on the information included in the title, abstract, index terms, or keywords. Rieh [5] also found that predictive judgments are more affected by knowledge than evaluative judgments are.

This study will also examine two types of users’ knowledge: domain knowledge (subject familiarity) and system knowledge (catalogue familiarity). Searching for a known item likely differs from searching for an unfamiliar item in a subject domain. In addition, someone with experience of a certain kind of digital library (or IR system) searches differently from someone without such experience. The research will give participants short training on the interface of the digital library used (MERLOT), so all participants will have knowledge of MERLOT’s basic interface. Even so, this study will examine how previous experience with MERLOT and similar digital library systems, and its frequency, affects the predictive judgment process.

3.5 Task

A task is defined as “an activity to be performed in order to accomplish a goal” [20]. In research on information searching, the task is considered a starting point for analyzing tasks and their connection to information searching [21]. This study will examine two types of tasks. First, it will examine how different types of tasks, self-generated and given, affect the predictive judgment process. Then, within the given tasks, two task types will be used: 1) a task preparing course materials (Teacher); 2) a task preparing a research project (Teacher).

4. LITERATURE REVIEW

This research is informed by two primary bodies of literature: evaluative metadata and document selection behavior.

4.1 Evaluative Metadata

4.1.1 Definition of Evaluative Metadata

Metadata is generally defined as information about an information resource. Its purposes are to describe, discover, and manage information resources in information systems. For users, metadata is an interface between users and the resources in a system, and a finding aid that helps fulfill users’ information needs. Users examine metadata elements in search results and then judge relevance: whether each resource will meet their information needs or not.

The concept and notion of evaluative metadata differ in the literature. First, evaluative metadata is often considered a kind of descriptive metadata, called annotation. Gilliland categorizes five types of metadata: administrative, descriptive, preservation, technical, and use metadata [22]. She lists ‘annotations by users’ within descriptive metadata, meaning additional information created by users. This can be a review or an evaluation of the resource, but the types are not described. Caplan also categorizes evaluative information as descriptive metadata, but she uses the term ‘evaluation’ instead of annotation, with specific examples [23]:

Evaluation may be narrative and subjective, such as a book or movie review, or may be more formally expressed by content ratings, which utilize rating schemas maintained by some authority (p. 4).

The consideration of evaluations and annotations as a kind of descriptive metadata, as in the above two approaches, reflects perceptions of evaluative metadata in the traditional library community, which has generally considered it part of the descriptive information in library cataloging systems.

In contrast, evaluative metadata is also regarded as supplementary information that differs from the basic resource description. Recker & Wiley divide metadata in educational digital libraries into “authoritative metadata” and “non-authoritative metadata” [24]. Authoritative metadata is official descriptive metadata that is searchable and provides a way of discovering resources; non-authoritative metadata embeds the context of usage. In the digital library context, Arko et al. address three types of metadata embedded in the Digital Library for Earth System Education: resource metadata, collection metadata, and annotation metadata [25]. They define “annotation metadata” as any additional information separate from the resource metadata, the basic descriptive information that serves to “uniquely identify and describe a resource” (p. 2).

Evaluative metadata is often labeled “third-party” in order to differentiate it from metadata produced by the authors or institutions (systems) associated with the resources. Eysenbach & Diepgen note that ‘third-party labels’ applied in PICS (Platform for Internet Content Selection) “enable people to distribute electronic descriptions or ratings of digital works across the internet in a computer readable form” (p. 1498) [26]. Their definition extends from manual reviews to automatic ratings:

… independent third parties, so called label services, can describe or evaluate material: human reviewers or automatic software (see below) rate websites and create electronic labels. (p. 1498)

Downes also includes the “third-party” notion [27]. “Third-party metadata” is associated with assessing and evaluating learning resources. He asserts that it will assist potential users of learning resources better than neutral descriptions alone. Third-party metadata is “metadata containing a review (or a reference to a review), provided by a public service agency” or “an indication of certification, using a specialized metadata schema, provided by a professional association.”

Vuorikari et al. use the term “evaluative metadata” and differentiate it from a single authoritative evaluation [28]:

Evaluative metadata is a cumulative nature, meaning that annotations from different users accumulate by the time, as opposed to having one single authoritative evaluation. (p. 88)

These approaches put emphasis on information provided by users who are not otherwise associated with the resources.

In this dissertation, additional contextual information (for example, usage data, reviews, or evaluations from previous users, peers, or experts, gathered either implicitly or explicitly) is considered evaluative metadata. Many researchers have used the term ‘annotation’ similarly, but the notion of annotation has primarily been used to refer only to comments by users. Evaluative metadata includes not only reviews or evaluations explicitly contributed by users but also user behavior or usage data collected without explicit user contributions. The notion of ‘evaluative’ carries two senses: the information is based on users’ evaluations, and users evaluate resources with the information.

4.1.2 Characteristics of Evaluative Metadata

In addition to the varying definitions of evaluative metadata, evaluative metadata has diverse characteristics. Vuorikari et al. classify the characteristics of an evaluation approach as follows [28]:

  • Process on which the evaluative metadata focuses: a process of creating a resource vs. the result of a process
  • The stage of the resource lifecycle at which evaluative metadata is applied: prior to publication or after addition to a collection
  • Focus: conception/design, development/production, implementation, or evaluation/optimization
  • Methods of the evaluation approach: a questionnaire, a list of criteria, or a certification instrument
  • Audiences: evaluators, subject experts, developers, content providers, teachers, and educators
  • Criteria/metrics of evaluation: e.g., qualitative vs. quantitative evaluation
  • Evaluation results: a single dimension or multiple dimensions (e.g., several quality criteria or metrics)
  • Characteristics of the environment in which they are expected to be applied: e.g., geographical area
  • Particular topics or domains: e.g., medical, K-12, higher education

One characteristic overlooked in their research is the method of gathering evaluations: manual vs. automatic. For example, the Multimedia Educational Resource for Learning and Online Teaching (MERLOT, www.merlot.org) includes ‘Personal Collection’ in the search results along with other evaluative metadata such as ‘Peer Review’ and ‘Comments’. ‘Personal Collection’ indicates how many users have saved the resource in their personal collections. Users’ behaviors are accumulated automatically, and the number of users interested in the resource is shown in the search results. One problem with systems that implement evaluative metadata is a lack of user participation; automatically collected information does not require additional user effort. Following users’ footsteps (e.g., pages accessed, time spent reading, downloads) provides valuable information that can help users judge the relevance and quality of a resource and make a decision.
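
A minimal sketch of how such implicit signals might be accumulated automatically appears below; the counter names are hypothetical and do not describe MERLOT’s actual implementation:

    from collections import defaultdict

    # Implicit usage counters per resource, keyed by resource ID. They are
    # incremented as a side effect of normal use, with no extra user effort.
    usage = defaultdict(lambda: {"saves": 0, "downloads": 0, "visits": 0})

    def record_event(resource_id, event):
        # event is one of "saves", "downloads", or "visits"
        usage[resource_id][event] += 1

    # A user visits a resource and then saves it to a personal collection:
    record_event("resource-42", "visits")
    record_event("resource-42", "saves")
    print(usage["resource-42"])  # {'saves': 1, 'downloads': 0, 'visits': 1}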

4.1.3 Evaluative Metadata in Educational Digital Libraries

Several research projects indicate that users need metadata beyond a general resource description. The study by Sumner et al. aimed to identify educators’ perceptions of quality for the design of educational digital libraries [29]. In their research, participants in a series of five focus groups expected “library metadata to provide additional context on how an educational resource could be fruitfully used in the classroom” (p. 277). The study asserts the importance of item-level annotation metadata frameworks for potential learning activities. Recker et al. conducted a case study of teachers’ discovery, selection, and use of resources in digital libraries [1]. The study identified the quality of resources as the most important barrier to teachers’ use of digital resources. Teachers tend to value “knowing what resources other teachers were using in order to gain new idea and approaches to their own teaching” (p. 98). They need to judge resource quality and usefulness in the context of others’ judgments.

In digital libraries, evaluative metadata can be applied to provide useful information to users. If comments from users who have used a resource in the classroom are provided, they help a user judge relevance and decide whether to use the resource. For that reason, more and more digital libraries facilitate the inclusion of evaluative metadata. For instance, the Multimedia Educational Resource for Learning and Online Teaching (MERLOT, www.merlot.org), a collection providing higher education resources, maintains a two-tier evaluation tool comprising both individual member comments and peer review. The Digital Library for Earth System Education (DLESE, www.dlese.org) provides a Community Review System that gathers feedback from educators and learners via a web-based recommendation engine. The Digital Library Network for Engineering and Technology (DLNET, www.dlnet.vt.edu), like MERLOT, provides both public and expert peer review.

4.2 Document Selection Behavior

Document selection is a decision-making process arising from interactions with information retrieval (IR) systems: users deal with the retrieved metadata in search results, apply criteria to make a relevance judgment, and decide whether the document should be obtained [6].

Wang and Soergel propose a model of document selection by real users of a bibliographic retrieval system [6]. They view document selection as the last part of a bibliographic search, associated with the user’s evaluation of document representations to determine whether or not to obtain the document. Their model illustrates the user’s decision process and the factors involved. A user reviews the Document Information Elements (DIEs) to analyze a document, and personal knowledge affects the judgment of DIEs. Based on their criteria, users assess document values, which in turn drive the decision of whether to obtain the document. The whole decision process is governed by decision rules. With the model, we can understand the cognitive process and the prominent factors in document selection.

In this dissertation, document selection behavior specifically focuses on the last stage of the information searching process, predictive judgment: interaction with the retrieved metadata and judgment for document selection. This process does not include the user’s complete information search behavior in a digital library: for example, it excludes the initial stages of information searching such as query formulation or navigation for browsing, and it excludes document evaluation after the retrieved document is obtained. The research focuses particularly on users’ document selection behavior with evaluative metadata.

5. METHODOLOGY

5.1 Data Collection Methods

The purpose of this study is to explore evaluative metadata usage in the course of document selection behavior in educational digital libraries. Document selection is a subjective, dynamic, situational, and affective process. Therefore, in order to observe users’ document selection behaviors, it is necessary to observe users with real information needs in a setting as close to natural as possible. A number of studies on users’ information needs and document selection behavior have been conducted using naturalistic inquiry, and their results indicate that this approach is useful. This study provides a setting that will encourage participants to freely describe their thoughts and cognitive processes during document selection.

This study will also examine how user characteristics (a user’s knowledge of the topic) and tasks (information needs) affect the usage of evaluative metadata. There is a lack of existing literature on evaluative metadata in the course of users’ document selection behavior. Moreover, previous studies of information seeking behavior and relevance judgment indicate that document selection is associated with a number of factors (e.g., document characteristics, task, goal, and knowledge level) [4, 5]. Currently, however, no theory exists to determine the factors and their relationships. Given the research questions, it is necessary to observe each user’s document selection behavior as well as explore the reason or motivation behind each action. Verbalization is useful for meeting both needs. The verbalization method has been widely used in relevance and information behavior research, and a number of studies have shown that it is useful for exploring users’ cognitive processes in the course of information seeking [10, 11, 13, 15, 30-33].

In addition, verbalization methods may be combined with other methods of data collection. This study will use a combination of reference interviews, questionnaires, think-aloud during searching, semi-structured interviews, observation, and search logs. The reference interview will gather participants’ information needs, tasks, and knowledge. A questionnaire will gather demographic information. Think-aloud is a method used to capture cognitive processes. Semi-structured interviews will gather users’ reflections on their behavior. The researcher will write field notes, and software will be used to record the trace of users’ metadata usage.

5.2 Data Collection Procedures

5.2.1 Subject Recruitment

The researcher will recruit graduate students who are current teachers or have teaching experience. Subjects will be recruited from the School Library Media program at the College of Information and from the College of Education at Florida State University. Posters, fliers, and email will be used to solicit individuals interested in participating in this study.

5.2.2 Reference interview/Questionnaire

Each recruited subject will take part in a reference interview to gather his or her information needs. A questionnaire will gather demographic information.

5.2.3 Procedure of Digital Library Searching

Few educational digital libraries provide evaluative metadata. This study will use the Multimedia Educational Resource for Learning and Online Teaching (MERLOT). MERLOT offers diverse evaluative metadata elements in its search results, such as user comments, peer reviews, assignments, personal collections, and editor’s choice.

The procedure is developed based on previous relevance studies [5, 13, 30, 31]. The entire process will be recorded on audio tape. In addition, field notes will be used to record anything that the investigator considers worth mentioning during the process. Software will be used to record the users’ activities during document selection, especially the trace of metadata usage.
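
As an illustration only (the actual recording software is not specified in this proposal, and all names below are hypothetical), such a trace of metadata usage could be captured as timestamped events:

    import json
    import time

    def log_metadata_event(log_file, participant_id, resource_id, element, action):
        # Append one timestamped record per interaction with a metadata
        # element (e.g., element="peer_review", action="examined").
        event = {
            "time": time.time(),
            "participant": participant_id,
            "resource": resource_id,
            "element": element,
            "action": action,
        }
        with open(log_file, "a") as f:
            f.write(json.dumps(event) + "\n")

    log_metadata_event("trace.jsonl", "P01", "resource-42", "comments", "examined")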

One way to provide participants with a natural environment is to allow them to search on their own computers at their workplace or home. However, according to Lan’s pilot study, which allowed the participant to choose any location for searching, “other people hindered the participant from freely or openly talking” and “interrupted the data collection process” (p. 101) [31]. In addition, software that records the participant’s activities during searching must be installed on the computer used during the research. Therefore, participants will use a computer prepared by the researcher, in a location decided by the researcher.

Before searching and evaluating the metadata in MERLOT’s search results, participants will be trained on the interface of MERLOT’s search results. Thus all participants, whether or not they have prior experience with MERLOT, will know the types of metadata elements MERLOT provides in its search results.

Participants will be asked to carry out two types of tasks: 1) tasks based on their own information needs (keywords from the users) and 2) given tasks (keywords from the researcher). Participants will first be reminded of the information need situation and task gathered in the earlier reference interview. The researcher will then present search results prepared using the keywords provided in the reference interview, and participants will also read the tasks and situations given by the researcher. Participants will be asked to read the search results and make relevance judgments in order to decide whether or not to obtain the full-text documents. They will be asked to mark or highlight the places they read during review and judgment, and the data will be saved with a software program for content analysis. Concurrently, they will verbalize what they are doing and answer questions from the researcher. Afterward, each participant will be asked to take part in an interview about the searching process.

5.2.4 Post Interview

A post-interview will be conducted to provide richer answers to the research questions. The interview will focus on “why” questions and complement the think-aloud comments. The search log recorded by the software will provide a record of the participant’s previous mouse activities.

5.3 Data Analysis

The data collected by audio tape (preliminary interviews, think-alouds, and semi-structured interviews), questionnaires, search logs, and field notes will be transcribed and content analyzed. The coding units will be users’ tasks, users’ knowledge of the subject, metadata used, criteria mentioned, metadata related to the criteria, and decisions. The coding will be done using content analysis software (NVivo 8). The questionnaire data will be analyzed with descriptive statistics.
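
For illustration, a single coded segment might be represented as a record whose fields mirror the coding units above (the structure itself is hypothetical, not a feature of NVivo):

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class CodedSegment:
        participant: str
        task_type: str            # "self-generated" or "given"
        subject_knowledge: str    # e.g., "high" or "low"
        metadata_used: List[str]  # e.g., ["title", "peer_review"]
        criteria: List[str]       # relevance criteria mentioned
        decision: str             # "acceptance", "rejection", or "maybe"

    segment = CodedSegment("P01", "given", "low",
                           ["title", "comments"], ["topicality"], "maybe")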

6. EXPECTED CONTRIBUTIONS

Research on the usage of evaluative metadata in the process of document selection behavior can not only enrich the document selection behavior literature and the relevance literature, but also provide implications for the design of effective metadata features in digital libraries to serve users’ needs.

7. REFERENCES

[1] M. M. Recker, J. Dorward, and L. M. Nelson, "Discovery and Use of Online Learning Resources: Case Study Findings," Educational Technology & Society, vol. 7, no. 2, pp. 93-104, 2004.
 
[2] M. Recker, "Perspectives on Teachers as Digital Library Users," D-Lib Magazine, vol. 12, no. 9, 2006. Available: http://www.dlib.org/dlib/september06/recker/09recker.html
[3] C. A. Lynch, "Networked Information Resource Discovery: An Overview of Current Issues," IEEE Journal on Selected Areas In Communications, vol. 13, no. 8, pp. 1505 -1522, 1995.
 
[4] R. Tang and P. Solomon, "Use of Relevance Criteria Across Stages of Document Evaluation: On the Complementarity of Experimental and Naturalistic Studies," Journal of The American Society for Information Science and Technology, vol. 52, no. 8, pp. 676-685, 2001.
 
[5] S. Y. Rieh, "Judgment of Information Quality and Cognitive Authority in the Web," Journal of The American Society for Information Science and Technology, vol. 53, no. 2, pp. 145-161, 2002.
 
[6] P. Wang and D. Soergel, "A Cognitive Model of Document Use During a Research Project. Study I. Document Selection," Journal of The American Society for Information Science, vol. 49, no. 2, pp. 115-133, 1998.
 
[7] H. Stelmaszewska and A. Blandford, "Patterns of Interactions: User Behavior in Response to Search Results," in Proc. JCDL Workshop on Usability, 2002. Available: http://web4.cs.ucl.ac.uk/uclic/annb/docs/Stelmaszewska29.pdf
 
[8] C. L. Barry and L. Schamber, "Users' Criteria for Relevance Evaluation: A Cross-Situational Comparison," Information Processing & Management, vol. 34, no. 2-3, pp. 219-236, 1998.
 
[9] J. Bateman, "Modeling the importance of end-user relevance criteria," in Proceedings of the 62nd annual meeting of the American Society for Information Science, vol. 36, pp. 396-406, 1999.
 
[10] C. L. Barry, "User-Defined Relevance Criteria: An Exploratory Study," Journal of The American Society for Information Science, vol. 45, no. 3, pp. 149-159, 1994.
 
[11] T. K. Park, "Toward a Theory of User-Based Relevance: A Call for a New Paradigm of Inquiry," Journal of The American Society for Information Science, vol. 45, no. 3, pp. 135-141, 1994.
 
[12] J. Bateman, "Changes in Relevance Criteria: A Longitudinal Study," in Proceedings of the ASIS Annual Meeting, vol. 35, pp. 23-32, 1998.
[13] R. Tang and P. Solomon, "Toward an Understanding of the Dynamics of Relevance Judgment: An Analysis of One Person's Search Behavior," Information Processing & Management, vol. 34, no. 2-3, pp. 237-256, 1998.
 
[14] Y. Choi and E. M. Rasmussen, "Users' Relevance Criteria in Image Retrieval in American History," Information Processing & Management, vol. 38, pp. 695-726, 2002.
 
[15] M. A. Fitzgerald and C. Galloway, "Relevance Judging, Evaluation, and Decision Making in Virtual Libraries: A Descriptive Study," Journal of The American Society for Information Science and Technology, vol. 52, no. 12, pp. 989-1010, 2001.
 
[16] L. Schamber and J. Bateman, "User Criteria in Relevance Evaluation: Toward Development of a Measurement Scale," in Proceedings of the 59th Annual Meeting of the American Society for Information Science, vol. 33, pp. 218-225, 1996.
 
[17] C. L. Barry, "Document Representations and Clues to Document Relevance," Journal of The American Society for Information Science, vol. 49, no. 14, pp. 1293-1303, 1998.
 
[18] T. K. Park, "The Nature of Relevance in Information Retrieval: An Empirical Study," Ph.D. dissertation, Indiana University, Bloomington, IN, 1992.
 
[19] M. J. Bates, "Factors Affecting Subject Catalog Search Success," Journal of the American Society for Information Science, vol. 28, no. 3, pp. 161-169, 1977.
 
[20] P. Vakkari, "Task-based Information Searching," Annual Review of Information Science and Technology (ARIST), vol. 37, pp. 413-464, 2003.
 
[21] C. C. Kuhlthau, "Inside the Search Process: Information Seeking from the User's Perspective," Journal of The American Society for Information Science, vol. 42, no. 5, pp. 361-371, 1991.
 
[22] A. J. Gilliland, "Setting the Stage." In Introduction to Metadata: Pathway to Digital Information, M. Baca, ed., Getty Education Institute for the Arts, 1998. Available: http://www.getty.edu/research/conducting_research/standards/intrometadata/setting.html
 
[23] P. Caplan, Metadata Fundamentals for All Librarians, Chicago, US: ALA Editions, 2003.
 
[24] M. M. Recker and D. A. Wiley, "A Non-authoritative Educational Metadata Ontology for Filtering and Recommending Learning Objects," Interactive Learning Environments, vol. 9, no. 3, pp. 255-271, 2001.
 
[25] R. A. Arko, K. M. Ginger, K. A. Kastens, and J. Weatherley, "Using Annotations to Add Value to a Digital Library for Education," D-Lib Magazine, vol. 12, no. 5, 2006. Available: http://www.dlib.org/dlib/may06/arko/05arko.html
 
[26] G. Eysenbach and T. L. Diepgen, "Towards Quality Management of Medical Information on the Internet: Evaluation, Labeling, and Filtering of Information," British Medical Journal, vol. 317, pp. 1496-1500, 1998.
 
[27] S. Downes, "Design and Reusability of Learning Objects in an Academic Context: A New Economy of Education?" USDLA Journal, vol. 17, no. 1, 2003. Available: http://www.usdla.org/html/journal/JAN03_Issue/article01.html
 
[28] R. Vuorikari, N. Manouselis, and E. Duval, "Using Metadata for Storing, Sharing and Reusing Evaluation for Social Recommendations: the Case of Learning Resource," in Social Information Retrieval Systems: Emerging Technologies and Applications for Searching the Web Effectively, Hershey, PA: Idea Group Publishing, January 2008, pp. 87-107.
 
[29] T. Sumner, M. Khoo, M. Recker, and M. Marlino, "Understanding Educator Perceptions of "Quality" in Digital Libraries," in Proceedings of the 3rd ACM/IEEE-CS Joint Conference on Digital Libraries, Washington, DC: IEEE Computer Society, 2003, pp. 269-279.
 
[30] P. Wang, "A Cognitive Model of Document Selection of Real Users of Information Retrieval Systems," Ph.D. dissertation, The University of Maryland, College Park, MD, 1994.
 
[31] W.-C. Lan, "From Document Clues to Descriptive Metadata: Document Characteristics Used by Graduate Students in Judging the Usefulness of Web Documents," Ph.D. dissertation, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 2002.
 
[32] K. L. Maglaughlin and D. H. Sonnenwald, "User Perspectives on Relevance Criteria: A Comparison among Relevant, Partially Relevant, and Not-Relevant Judgments," Journal of The American Society for Information Science and Technology, vol. 53, no. 5, pp. 327-342, 2002.
 
[33] A. Crystal and J. Greenberg, "Relevance Criteria Identified by Health Information Users During Web Searches," Journal of The American Society for Information Science and Technology, vol. 57, no. 10, pp. 1368-1382, 2006.
 
