

unarXive 2022: All arXiv Publications Pre-Processed for NLP, Including Structured Full-Text and Citation Network



Published: June 2023

Book title: 2023 ACM/IEEE Joint Conference on Digital Libraries (JCDL)
Pages: 66–70
Publisher: IEEE

Refereed publication


Abstract
Large-scale data sets on scholarly publications are the basis for a variety of bibliometric analyses and natural language processing (NLP) applications. Data sets derived from publications' full text have recently gained particular attention. While several such data sets already exist, we see key shortcomings in terms of their domain and time coverage, citation network completeness, and representation of full-text content. To address these points, we propose a new version of the data set unarXive. We base our data processing pipeline and output format on two existing data sets, and improve on each of them. Our resulting data set comprises 1.9 M publications spanning multiple disciplines and 32 years. It furthermore has a more complete citation network than its predecessors and retains a richer representation of document structure as well as non-textual publication content such as mathematical notation. In addition to the data set, we provide ready-to-use training/test data for citation recommendation and IMRaD classification. All data and source code are publicly available at https://github.com/IlIDepence/unarXive.
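The repository linked above documents the data set's actual schema; purely as an illustrative sketch, the following Python snippet shows how such a publication record file could be streamed, assuming a JSON Lines layout with one paper record per line. The field names used (paper_id, abstract) are hypothetical placeholders, not the data set's documented keys.

import json

def iter_papers(path):
    # Stream one JSON object per line so the full 1.9 M records never
    # need to be held in memory at once.
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Hypothetical usage: print an assumed identifier field and the length
# of an assumed abstract field for a quick sanity check.
for paper in iter_papers("unarxive_sample.jsonl"):
    print(paper.get("paper_id"), len(paper.get("abstract", "")))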

DOI: 10.1109/JCDL57899.2023.00020



Research group

Web Science


Research area