
Inproceedings3125: Difference between revisions

(The page was newly created: "{{Publikation Erster Autor |ErsterAutorNachname=Sorg |ErsterAutorVorname=Philipp }} {{Publikation Author |Rank=2 |Author=Philipp Cimiano }} {{Publikation Author |…")
 
|Month=August
|Booktitle=CLEF (Notebook Papers/LABs/Workshops)
|Publisher=unknown
|Address=Padua, Italy
}}
{{Publikation Details
|Abstract=This paper provides an overview of the cross-lingual expert search pilot challenge as part of the cross-lingual expert search (CriES) workshop collocated with the CLEF 2010 conference. We present a detailed description of the dataset used in the challenge. This dataset is a subset of an official crawl of Yahoo! Answers published in the context of the Yahoo! Webscope program. Further, we describe the selection process of the 60 multilingual topics used in the challenge. The Gold Standard for these topics was created by human assessors who evaluated pooled results of submitted runs. We present data showing that the experts relevant for our chosen topics indeed speak different languages. This corroborates the fact that we need to design retrieval systems that build on a cross-lingual notion of relevance for the expert retrieval task. Finally, we summarize the results of the four groups that participated in this challenge using standard evaluation measures. Additionally, we analyze the overlap of retrieved experts in the submitted runs.
 
 
|ISBN=978-88-904810-0-0
|Link=http://clef2010.org/resources/proceedings/clef2010labs_submission_121.pdf

Revision as of 13:14, 18 January 2011


Overview of the Cross-lingual Expert Search (CriES) Pilot Challenge





Published: August 2010

Book title: CLEF (Notebook Papers/LABs/Workshops)
Publisher: unknown
Place of publication: Padua, Italy

Non-refereed publication

BibTeX

Abstract
This paper provides an overview of the cross-lingual expert search pilot challenge as part of the cross-lingual expert search (CriES) workshop collocated with the CLEF 2010 conference. We present a detailed description of the dataset used in the challenge. This dataset is a subset of an official crawl of Yahoo! Answers published in the context of the Yahoo! Webscope program. Further, we describe the selection process of the 60 multilingual topics used in the challenge. The Gold Standard for these topics was created by human assessors who evaluated pooled results of submitted runs. We present data showing that the experts relevant for our chosen topics indeed speak different languages. This corroborates the fact that we need to design retrieval systems that build on a cross-lingual notion of relevance for the expert retrieval task. Finally, we summarize the results of the four groups that participated in this challenge using standard evaluation measures. Additionally, we analyze the overlap of retrieved experts in the submitted runs.

ISBN: 978-88-904810-0-0
Further information: http://clef2010.org/resources/proceedings/clef2010labs_submission_121.pdf

Project

Multipla



Research group

Wissensmanagement


Research area

Information Retrieval