Search term: performance measurement, Results: 2
1
Lee, Jongwook (Korea Institute of Science and Technology Information) ; Yang, Kiduk (Kyungpook National University) 2011, Vol.28, No.4, pp.119-140 https://doi.org/10.3743/KOSIM.2011.28.4.119
Abstract (Korean)

To evaluate faculty research performance more effectively, both the quantitative and qualitative aspects of research must be considered. This study compared and analyzed the research performance rankings of Library and Information Science faculty in Korea based on publication counts (reflecting the quantitative side of research) and citation counts (reflecting the qualitative side) against the rankings produced by the research assessment guidelines used at Korean universities. The results showed that faculty rankings by publication counts differed from those by citation counts, and that the universities' faculty assessments were closer to evaluations based on publication counts than to those based on citation counts. In addition, the differing publication scoring criteria across universities had little effect on the assessment outcomes. Future research should examine bibliometric indicators that better reflect both the quantitative and qualitative levels of research.

Abstract

Effective assessment of faculty research performance should involve considerations of both the quality and quantity of faculty research. This study analyzed methods for evaluating faculty research output by comparing the rankings of Library and Information Science (LIS) faculty by publication counts, citation counts, and the research performance assessment guidelines employed by Korean universities. The results indicated that faculty rankings based on publication counts were significantly different from those based on citation counts. Additionally, faculty rankings measured by university guidelines showed stronger correlations with rankings based on publication counts than with rankings based on citation counts, while differences in university guidelines did not significantly affect the faculty rankings. The findings suggest the need for bibliometric indicators that reflect the quality as well as the quantity of research output.
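As an illustration of the kind of comparison described above, the agreement between two faculty rankings can be quantified with a rank correlation such as Spearman's rho. The snippet below is a minimal sketch using invented counts for five hypothetical faculty members; the abstract does not state which statistic the authors actually used.

```python
# Minimal sketch: comparing hypothetical faculty rankings derived from
# publication counts and citation counts via Spearman's rank correlation.
# The counts below are invented for illustration only.
from scipy.stats import spearmanr

publications = [24, 18, 15, 12, 9]   # hypothetical per-faculty publication counts
citations    = [60, 95, 40, 70, 20]  # hypothetical per-faculty citation counts

rho, p_value = spearmanr(publications, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

A rho close to 1 would mean the two measures rank faculty in nearly the same order; a low or negative rho would indicate the kind of divergence the study reports between publication-based and citation-based rankings.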

2
Yang, Kiduk (Kyungpook National University) ; Lokman Meho (American University of Beirut, Lebanon) 2011, Vol.28, No.2, pp.79-96 https://doi.org/10.3743/KOSIM.2011.28.2.079

Abstract

Despite its widespread use, critics claim that citation analysis has serious limitations in evaluating the research performance of scholars. First, conventional citation analysis methods yield one-dimensional and sometimes misleading evaluations because they do not account for differences in citation quality, do not filter out citation noise such as self-citations, and do not consider non-numeric aspects of citations such as language, culture, and time. Second, today's citation database coverage is disjoint and incomplete, which can result in conflicting quality assessment outcomes across different data sources. This paper discusses the findings from a citation analysis study that measured the impact of scholarly publications based on data mined from Web of Science, Scopus, and Google Scholar, and briefly describes a work-in-progress prototype system called CiteSearch, which is designed to overcome the weaknesses of existing citation analysis methods with a robust citation-based quality assessment approach.
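One concrete example of the "citation noise" mentioned above is self-citation. The sketch below shows one simple way such noise could be filtered out before counting; the data structure and the author-overlap rule are assumptions made for illustration and are not drawn from the CiteSearch prototype itself.

```python
# Minimal sketch: excluding self-citations (any overlap between citing and
# cited author sets) before computing a citation count. Names and the
# filtering rule are illustrative assumptions, not CiteSearch's actual logic.
from dataclasses import dataclass

@dataclass
class Citation:
    citing_authors: set   # authors of the citing paper
    cited_authors: set    # authors of the cited paper

def is_self_citation(c: Citation) -> bool:
    """Treat a citation as a self-citation if the author sets overlap."""
    return bool(c.citing_authors & c.cited_authors)

records = [
    Citation({"Yang", "Meho"}, {"Yang"}),   # overlap -> self-citation
    Citation({"Smith"}, {"Yang", "Meho"}),  # independent citation
]

independent = [c for c in records if not is_self_citation(c)]
print(f"{len(independent)} of {len(records)} citations remain after removing self-citations")
```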

정보관리학회지 (Journal of the Korean Society for Information Management)