A geolocated dataset of German news articles

Abstract

This data repository contains metadata for roughly 50 million German news articles extracted from the Common Crawl News dataset. It comprises two components that together provide structured storage and semantic search: a SQLite database holding information about the articles and their associated geographic locations, and a Usearch vector database that enables efficient semantic search over vector representations of the articles. The article titles, texts, and excerpts associated with this data can be retrieved directly from Common Crawl and linked to this dataset using the provided IDs. The code for creating this dataset, along with usage tutorials, is available at: https://github.com/LukasKriesch/CommonCrawlNewsDataSet
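As a rough illustration of the linkage the abstract describes, the sketch below builds a toy in-memory SQLite database with an articles table and a locations table and joins them. The table and column names here are assumptions for demonstration only; the actual schema is documented in the GitHub repository.

```python
import sqlite3

# Hypothetical schema -- the real dataset's table and column names may differ.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE articles (
        id        TEXT PRIMARY KEY,  -- ID linking back to Common Crawl
        url       TEXT,
        published TEXT
    );
    CREATE TABLE locations (
        article_id TEXT REFERENCES articles(id),
        place      TEXT,
        lat        REAL,
        lon        REAL
    );
""")

# A toy record standing in for one of the ~50 million articles.
con.execute("INSERT INTO articles VALUES ('a1', 'https://example.de/news', '2020-01-01')")
con.execute("INSERT INTO locations VALUES ('a1', 'Berlin', 52.52, 13.405)")

# Join each article to its geolocated places.
rows = con.execute("""
    SELECT a.id, l.place, l.lat, l.lon
    FROM articles a
    JOIN locations l ON l.article_id = a.id
""").fetchall()
print(rows)
```

The same article IDs would also key into the Usearch vector index for semantic retrieval, and into Common Crawl itself to fetch the full article texts.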


Collections

Forschungsdaten
