Showing 3 results for "Relevance"

Nosrat Riahinia, Forough Rahimi, Leili Allahbakhshian
Volume 2, Issue 1 (4-2015)
Abstract

Background and Aim: The main aim of information storage and retrieval systems is to store and retrieve relevant information, that is, to provide documents that match users' needs or requests. This study aimed to answer the question of how closely system relevance and user-oriented relevance match in the SID, ISC, and Google Scholar databases.

Method: In this study, 15 of the most frequently occurring keywords related to "Human Information Interaction" and its subheadings were selected and searched in both Persian and English in the mentioned databases over two one-week periods. The results were ordered by system relevance, based on retrieval and display order. From each search, the first 10 results were selected and sent to subject experts, who were asked to rank them from 1 to 10. Data were analyzed descriptively and inferentially (using the Spearman correlation test) with SPSS.
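The Spearman test used here compares two rankings of the same ten results: the system's display order and the experts' relevance ranking. A minimal pure-Python sketch of that computation, using hypothetical rankings rather than the study's data:

```python
# Spearman rank correlation between a system's display order and
# experts' 1-10 relevance ranking (tie-free rankings assumed).
# The rankings below are hypothetical, for illustration only.

def spearman_rho(rank_a, rank_b):
    """rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)) for tie-free rankings."""
    assert len(rank_a) == len(rank_b)
    n = len(rank_a)
    d_sq = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

system_order = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # retrieval/display order
expert_rank  = [2, 1, 3, 5, 4, 7, 6, 8, 10, 9]   # experts' relevance ranking

print(round(spearman_rho(system_order, expert_rank), 3))  # close to 1 = strong agreement
```

A coefficient near 1 corresponds to the "strong and positive relation" reported for SID; a value near 0 corresponds to the absence of a relation reported for ISC.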

Results: Subject experts' relevance scores in Persian were lower for ISC than for SID and higher than for Google Scholar. The most subject-relevant records appeared at the third rank of system relevance, and records with the lowest system relevance scores also received the lowest expert relevance scores. For SID in Persian there was a strong positive relationship between the two scores, whereas no relationship was found for ISC. The highest agreement between the two scores was observed for SID, in both languages and both periods, indicating a greater likelihood of retrieving relevant records.

Conclusion: A similar retrieval pattern was observed in both languages from the subject experts' viewpoint, with SID showing the highest precision and Google Scholar the lowest in Persian.


Azam Sanatjoo, Mahdi Zeynali Tazehkandi
Volume 7, Issue 2 (12-2020)
Abstract

Purpose: There are several metrics for evaluating search engines, and in recent years researchers have proposed new ones. Familiarity with these new metrics is essential, so the purpose of this study is to provide an analysis of important and new metrics for evaluating search engines.
Methodology: This review article critically examined the efficiency of evaluation metrics. The terms "evaluation metrics," "evaluation measure," "search engine evaluation," "information retrieval system evaluation," "relevance evaluation measure," and "relevance evaluation metrics" were searched in MagIran, SID, and Google Scholar. The retrieved articles were gathered to inspect and analyze existing approaches to the evaluation of information retrieval systems, and a descriptive-analytical approach was used to review the search engine assessment metrics.
Findings: Theoretical and philosophical foundations determine research methods and techniques. There are two well-known approaches to evaluating information retrieval systems: "system-oriented" and "user-oriented." Researchers such as Sirotkin (2013) and Bama, Ahmed, and Saravanan (2015) group the precision and recall metrics under the system-oriented approach, and hold that Average Distance, normalized discounted cumulative gain (nDCG), RankEff, and Bpref are rooted in the user-oriented approach. Nowkarizi and Zeynali Tazehkandi (2019) introduced a comprehensiveness metric to replace recall; they argue that their metric is rooted in a user-oriented approach, although that goal is not fully met. On the other hand, Hjørland (2010) emphasizes that a third approach is needed to eliminate this dichotomy. In this regard, researchers such as Borlund and Ingwersen (1998), Borlund (2003), and Thornley and Gibb (2007) have proposed a third approach to evaluating information retrieval systems, one that combines the two approaches above. Notably, Borlund and Ingwersen (1998) proposed the Jaccard Association and Cosine Association measures for evaluating information retrieval systems. These two metrics appear not to have fully reconciled the system-oriented and user-oriented approaches, and they need further investigation.
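As an illustration of the metric families discussed (a sketch on hypothetical relevance judgments, not any author's own computation), nDCG and the Jaccard/Cosine association measures can be written as:

```python
import math

# Hypothetical graded relevance scores (0-3) for a ranked result list.
ranked_rels = [3, 2, 3, 0, 1, 2]

def dcg(rels):
    """Discounted cumulative gain: relevance discounted by log2(rank + 1)."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels))

def ndcg(rels):
    """DCG normalized by the ideal (sorted) ordering, so 1.0 is perfect."""
    ideal = dcg(sorted(rels, reverse=True))
    return dcg(rels) / ideal if ideal else 0.0

# Jaccard association between the set judged relevant by the system (a)
# and by the user (b): |a & b| / |a | b|.
def jaccard(a, b):
    return len(a & b) / len(a | b)

# Cosine association on the same sets: |a & b| / sqrt(|a| * |b|).
def cosine(a, b):
    return len(a & b) / math.sqrt(len(a) * len(b))

system_relevant = {1, 2, 3, 5}      # hypothetical document IDs
user_relevant   = {2, 3, 4, 5, 6}

print(round(ndcg(ranked_rels), 3))
print(jaccard(system_relevant, user_relevant))   # 0.5
print(round(cosine(system_relevant, user_relevant), 3))
```

The set-based Jaccard and Cosine measures take both a system-side and a user-side judgment as input, which is why Borlund and Ingwersen positioned them as a bridge between the two approaches.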
Conclusion: Search engines comprise several components, including a crawler, an indexer, a query processor, retrieval software, and a ranker. Scholars wish to use the most efficient search engines for retrieving the information resources they need. Since each metric measures a specific component, it is suggested that, to measure them all, researchers select metrics from all three of the groups mentioned above.
Shabnam Refoua, Zahra Salimi
Volume 8, Issue 2 (9-2021)
Abstract

Background and Aim: Scientific article recommender systems assist and advance the information retrieval process by offering articles tailored to researchers' needs. The main purpose of this study is to evaluate the performance of the recommender systems in three scientific databases.
Method: This applied study used an evaluation method. The sample consisted of three scientific databases that offer recommendation tools: Elsevier, Taylor & Francis, and Google Scholar. "Information storage and retrieval" was selected as the search subject, and ten specialized keywords related to it were chosen. After each keyword was searched, the first retrieved article was reviewed; then, for each first article, the first five recommended articles were extracted in each of the three databases. Data were collected through direct observation using a researcher-made checklist. To evaluate subject relevance, the bibliographic information of the first article retrieved for each subject and database, along with that of the five recommended articles, was provided to two groups: librarians and IT professionals. The sample was selected by the snowball method. Descriptive and inferential statistics were used to analyze the data.
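The design above, in which experts judge the first five recommendations per seed article, lends itself to a precision-at-k summary. A minimal sketch with hypothetical binary judgments (not the study's data):

```python
# Hypothetical binary relevance judgments (1 = judged relevant) for the
# first five recommendations returned for one seed article per database.
judgments = {
    "Elsevier":         [1, 1, 0, 1, 1],
    "Google Scholar":   [1, 0, 1, 0, 1],
    "Taylor & Francis": [0, 1, 0, 1, 0],
}

def precision_at_k(rels, k=5):
    """Fraction of the first k recommendations judged relevant."""
    return sum(rels[:k]) / k

for db, rels in judgments.items():
    print(db, precision_at_k(rels))
```

Averaging such scores over all ten seed articles and both judge groups gives a per-database relevance figure of the kind reported in the Results.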
Results: Findings showed that, among the databases, Elsevier recommended the most relevant results in the field of information storage and retrieval from the perspective of both IT professionals and librarians, with Google Scholar and Taylor & Francis in the next ranks. Overall, the articles judged most relevant by the subject experts were those ranked fifth.
Conclusion: In sum, Elsevier performed better than the other two databases in recommending related articles. There was also a significant difference between the views of librarians and IT professionals regarding the relevance of the recommended articles in the field of information storage and retrieval: IT professionals rated the relevance of the recommendations higher.


© 2024 CC BY-NC 4.0 | Human Information Interaction
