ISSN : 1013-0799
An online experiment was conducted to test the subject-knowledge view of relevance theory, seeking evidence of a conceptual basis for relevance. Six experts in Library and Information Science (LIS), nine LIS Master's students, and twelve non-experts judged the relevance of 14 abstracts within and outside the LIS domain. Consistency among the judges was calculated by joint-probability agreement (PA) and intraclass correlation coefficients (ICC). Under PA, non-experts showed higher consensus regardless of the task or the division of groups. Under ICC, however, the Master's students, though not the experts, reached a higher level of consensus than non-experts on the LIS task, and agreement on the non-LIS task was only poor to moderate for all groups. Only when the judges were analysed as two groups (experts together with Master's students versus non-experts) did the expected trend of higher consistency among experts on the LIS task emerge.
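The two agreement measures named in the abstract can be sketched briefly. The study does not specify which ICC form was used, so the example below illustrates the one-way random-effects ICC(1,1) as an assumption; the joint-probability agreement is computed as the mean, over items, of the share of rater pairs giving identical judgements. This is a minimal sketch, not the authors' analysis code.

```python
from itertools import combinations

def pairwise_agreement(ratings):
    """Joint-probability agreement (PA): for each item, the fraction of
    judge pairs giving identical ratings, averaged over all items.
    `ratings` is a list of per-item lists, one rating per judge."""
    per_item = []
    for item in ratings:
        pairs = list(combinations(item, 2))
        agree = sum(1 for a, b in pairs if a == b)
        per_item.append(agree / len(pairs))
    return sum(per_item) / len(per_item)

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1), computed from the between-item
    and within-item mean squares of an items-by-judges rating table.
    (The ICC variant used in the study is not stated; this form is an
    illustrative assumption.)"""
    n = len(ratings)        # number of items
    k = len(ratings[0])     # number of judges
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # between-item mean square
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    # within-item mean square
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical data: 3 abstracts rated by 3 judges on a relevance scale.
scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3]]
print(pairwise_agreement(scores))  # perfect agreement -> 1.0
print(icc_oneway(scores))          # no within-item variance -> 1.0
```

Note the contrast the abstract turns on: PA counts only exact matches and ignores how ratings vary across items, while the ICC compares within-item to between-item variance, so the two can rank groups differently on the same judgements.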