



Repeatable and Reliable Search System Evaluation using Crowdsourcing





Published: July 2011

Book title: Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2011)
Publisher: ACM
Place of publication: Beijing, PR China

Refereed publication


Abstract
The primary problem confronting any new kind of search task is how to bootstrap a reliable and repeatable evaluation campaign, and a crowdsourcing approach provides many advantages. However, can these crowdsourced evaluations be repeated over long periods of time in a reliable manner? To demonstrate, we investigate creating an evaluation campaign for the semantic search task of keyword-based ad-hoc object retrieval. In contrast to traditional web search, object search aims at retrieving information from factual assertions about real-world objects rather than from web pages with textual descriptions. Using the first large-scale evaluation campaign that specifically targets the task of ad-hoc Web object retrieval over a number of deployed systems, we demonstrate that crowdsourced evaluation campaigns can be repeated over time and still maintain reliable results. Furthermore, we show that these results are comparable to expert judges when ranking systems and that the results hold over different evaluation and relevance metrics. This work provides empirical support for scalable, reliable, and repeatable search system evaluation using crowdsourcing.

ISBN: 978-1-4503-0757-4
Download: Media:Sigir2011-crowd-search-evaluation.pdf

Project

IGreen



Research group

Knowledge Management


Research area

Information Retrieval, Semantic Search