
ACL Anthology Markdown Corpus

A snapshot of the ACL Anthology consisting of bibliographic metadata for 120,034 papers and full-text markdown conversions of 114,484 papers (≈95% of the catalogue; the remainder are frontmatter entries, abstract-only entries, or papers without an available PDF).

This corpus is the document collection used by ACL-Verbatim, a hallucination-free question-answering system for NLP research papers built on top of VerbatimRAG.

Configurations

The dataset ships as two configs that share the join key anthology_id (e.g. 2023.acl-long.42). Use only what you need — the metadata config is small, the fulltext config is large.

from datasets import load_dataset

meta = load_dataset("KRLabsOrg/acl-anthology-md", "metadata", split="train")
full = load_dataset("KRLabsOrg/acl-anthology-md", "fulltext", split="train")

# Join on anthology_id when you want both:
by_id = {row["anthology_id"]: row for row in full}
for paper in meta.filter(lambda r: r["has_markdown"]):
    md = by_id[paper["anthology_id"]]["markdown"]  # full paper body for this record

metadata (1 row per paper)

All 120,034 papers from the ACL Anthology, including frontmatter and abstract-only entries.

| field | type | notes |
|---|---|---|
| `anthology_id` | string | e.g. `2023.acl-long.42`. Join key. |
| `paper_id` | string | Internal Anthology numeric id. |
| `bibkey`, `bibtype`, `bibtex` | string | BibTeX key, entry type, and full record. |
| `title`, `title_html`, `title_raw` | string | Cleaned, HTML, and raw forms of the title. |
| `author` | list\<struct\> | `{id, first, last, full}` per author. |
| `author_string`, `editor` | string / list | Author display string; editors for proceedings. |
| `url`, `pdf`, `thumbnail`, `doi` | string | Anthology page, PDF, thumbnail, DOI. |
| `citation`, `citation_acl` | string | Markdown and ACL-style citations. |
| `booktitle`, `parent_volume_id`, `year`, `venue` | string / list | Venue metadata. `venue` is a list of slugs (e.g. `["acl"]`). |
| `pages`, `page_first`, `page_last` | string | Pagination, when reported. |
| `abstract_html`, `abstract_raw` | string | Available for ~72k papers. |
| `language`, `attachment` | string / list | Non-English language flag; supplementary attachments. |
| `ingest_date` | string | ISO date the record entered the Anthology. |
| `has_markdown` | bool | Convenience flag: true iff `fulltext` contains this `anthology_id`. |
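The metadata fields above compose naturally into filters. A minimal sketch of selecting converted papers for one venue; the rows here are made-up stand-ins with the same field names, and the same predicate works unchanged inside `meta.filter(...)` on the loaded dataset:

```python
# Illustrative metadata rows (field names match the schema above;
# the concrete values are invented for the example).
rows = [
    {"anthology_id": "2023.acl-long.42", "year": "2023",
     "venue": ["acl"], "has_markdown": True},
    {"anthology_id": "1952.earlymt-1.1", "year": "1952",
     "venue": ["earlymt"], "has_markdown": True},
    {"anthology_id": "2024.acl-demo.0", "year": "2024",
     "venue": ["acl"], "has_markdown": False},  # e.g. frontmatter, no PDF
]

# Keep only ACL papers that have a markdown conversion.
converted_acl = [
    r for r in rows
    if r["has_markdown"] and "acl" in r["venue"]
]
```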

fulltext (1 row per converted paper)

| field | type | notes |
|---|---|---|
| `anthology_id` | string | Join key against `metadata`. |
| `markdown` | string | Full paper body in markdown, produced by docling. |

Corpus statistics

| statistic | value |
|---|---|
| Papers in `metadata` | 120,034 |
| Papers with full-text markdown | 114,484 (95.4%) |
| Year range | 1952–2026 |
| Distinct venues | 500 |
| Papers with abstract | 71,902 |
| Mean authors per paper | 3.7 |
| Total markdown size | 5.10 GB raw |
| Markdown per paper (median / p90 / p99) | 37 KB / 74 KB / 162 KB |
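Size percentiles like the median/p90/p99 row can be recomputed from the `fulltext` config with a nearest-rank percentile over the markdown byte lengths. A stdlib sketch; the sample sizes here are synthetic, and on the real corpus you would use `len(row["markdown"].encode())` per row:

```python
import math

def percentile(sizes, p):
    """Nearest-rank percentile: the smallest value such that at
    least p% of the data is at or below it."""
    ordered = sorted(sizes)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

# Real corpus: sizes = [len(row["markdown"].encode()) for row in full]
sizes = list(range(1, 101))  # synthetic stand-in
stats = {p: percentile(sizes, p) for p in (50, 90, 99)}
```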

Top venues by paper count: acl (13,664), emnlp (11,525), ws (10,714), findings (10,519), lrec (9,105), coling (8,701), naacl (5,458), ijcnlp (3,871), semeval (3,330), jeptalnrecital (2,766).

Recent years: 2025 (14,577), 2024 (12,098), 2023 (9,032), 2022 (8,649), 2021 (7,148).
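Per-year counts like these can be derived from the `year` field, or from the id prefix, since anthology ids in this corpus follow the `<year>.<venue>-<volume>.<n>` pattern. A sketch with illustrative ids:

```python
from collections import Counter

# anthology_ids look like "<year>.<venue>-<volume>.<n>",
# so the year is the text before the first dot.
ids = ["2023.acl-long.42", "2023.emnlp-main.7", "1952.earlymt-1.1"]
per_year = Counter(aid.split(".", 1)[0] for aid in ids)
```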

How the corpus was built

  1. Metadata extraction. Runs against a local checkout of acl-org/acl-anthology, using its Anthology Python API plus the repository's own create_hugo_data.paper_to_dict to flatten each paper to a JSON record, one JSONL row per paper. See scripts/get_anthology_metadata.py in the acl-verbatim repository.

  2. PDF download. PDFs are obtained via the acl-anthology repository's standard download tooling. Per the Anthology's request, we did not redistribute PDFs — only the docling-converted markdown is included here.

  3. PDF → markdown conversion. Each PDF is converted with docling's DocumentConverter in batched mode (doc_batch_size=512, page_batch_size=1024), exporting via document.export_to_markdown(). Conversion was run on a single A100 GPU. A small allow-list of papers is skipped because they segfault docling or hang during conversion — these papers are present in metadata with has_markdown=False. See scripts/preprocess_acl.py.

  4. Dataset assembly. scripts/build_corpus_dataset.py walks the markdown directory, joins each file to its metadata record by anthology_id (derived from the paper URL), normalizes the metadata schema, and writes both configs.
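The join in step 4 amounts to the following sketch. The directory layout and the `<anthology_id>.md` filename convention are assumptions for illustration, not necessarily what `build_corpus_dataset.py` does:

```python
from pathlib import Path

def join_markdown(md_dir, meta_by_id):
    """Walk md_dir, derive an anthology_id from each filename, and
    pair the markdown text with its metadata record. Files with no
    matching record are skipped."""
    rows = []
    for path in sorted(Path(md_dir).glob("*.md")):
        aid = path.stem  # assumes files are named <anthology_id>.md
        if aid in meta_by_id:
            rows.append({"anthology_id": aid,
                         "markdown": path.read_text(encoding="utf-8"),
                         **meta_by_id[aid]})
    return rows
```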

The conversion is automated and not manually validated; markdown quality varies with the underlying PDF (older scanned papers, complex tables, and math-heavy layouts may convert imperfectly). Treat the output as a strong starting point for chunking and retrieval, not as a verbatim transcription.
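As one starting point for chunking, the `#`-style headings that the markdown conversion emits can serve as split points. A minimal sketch; that every section boundary appears as a markdown heading is an assumption about docling's output, not a guarantee:

```python
import re

def chunk_by_headings(markdown):
    """Split a markdown document at '#'-style headings, keeping each
    heading together with the text that follows it."""
    # Capturing the heading makes re.split keep it in the result list:
    # [preamble, heading1, body1, heading2, body2, ...]
    parts = re.split(r"(?m)^(#{1,6} .*)$", markdown)
    chunks, current = [], parts[0].strip()
    for heading, body in zip(parts[1::2], parts[2::2]):
        if current:
            chunks.append(current)
        current = heading + "\n" + body.strip()
    if current:
        chunks.append(current)
    return chunks
```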

Intended uses

  • Retrieval-augmented generation over NLP research literature.
  • Training and evaluating extractive QA / citation-grounding systems on scientific text.
  • Bibliometric and meta-research studies of the NLP community.

Citation

A paper describing ACL-Verbatim is in preparation; citation details will be added here once available.

Acknowledgements

This work would not have been possible without the ACL Anthology and the maintainers of the acl-anthology repository, whose tooling and permissive policies enabled both the metadata extraction and the PDF-to-markdown conversion at scale.
