Zyda
Zyda is a 1.3T-token language modeling dataset created by collecting open, high-quality datasets, combining them, and applying a uniform filtering and deduplication pipeline. We find that Zyda performs extremely well in ablations and, thanks to our meticulous post-processing pipeline, is at least comparable to, and potentially better than, the best openly available datasets. We think Zyda is best used either as a standalone dataset for language model training up to the 1T-token scale, or in combination with Fineweb or Dolma for multi-trillion-token training.
An early version of Zyda was used as the primary dataset for phase 1 pretraining of Zamba, a model which performs strongly on a per-token basis, testifying to the strength of Zyda as a pretraining dataset.
Models trained on Zyda significantly outperform identical models of the Pythia suite trained on the Pile for 300B tokens.
Zyda also outperforms Dolma, RefinedWeb, and Fineweb on 1.4B models trained on 50B tokens of each dataset.
According to our evaluations, the non-starcoder variant of Zyda is the most performant per-token open dataset available on language tasks. The Zyda starcoder variant ties with Fineweb.
These results are aggregate scores of classic language modeling evaluations (PIQA, WinoGrande, OpenBookQA, ARC-Easy, ARC-Challenge) across time for a 1.4B model trained on 50B tokens of each dataset.
How to download
Full dataset:
```python
import datasets

ds = datasets.load_dataset("Zyphra/Zyda", split="train")
```
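Given the full dataset's size (roughly 3.3 TB of parquet files, per the table below), it may be more practical to stream it than to download it up front. A minimal sketch using the streaming mode of the datasets library:

```python
import datasets

# Stream records lazily instead of downloading all parquet files first.
ds = datasets.load_dataset("Zyphra/Zyda", split="train", streaming=True)

for example in ds.take(5):
    print(example["source"], example["text"][:100])
```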
Full dataset without StarCoder:
```python
import datasets

ds = datasets.load_dataset("Zyphra/Zyda", name="zyda_no_starcoder", split="train")
```
To download an individual component, pass its name in the name argument of load_dataset() (see the example after this list):
- zyda_arxiv_only
- zyda_c4-en_only
- zyda_peS2o_only
- zyda_pile-uncopyrighted_only
- zyda_refinedweb_only
- zyda_slimpajama_only
- zyda_starcoder_only
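For example, to load only the peS2o component (any of the names above works the same way):

```python
import datasets

# Swap in any component name from the list above.
ds = datasets.load_dataset("Zyphra/Zyda", name="zyda_peS2o_only", split="train")
```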
Breakdown by component
| Component | Download size (parquet, GB) | Documents (millions) | GPT-NeoX tokens (billions) |
|---|---|---|---|
| zyda_refinedweb_only | 1,712.4 | 920.5 | 564.8 |
| zyda_c4-en_only | 366.7 | 254.5 | 117.5 |
| zyda_slimpajama_only | 594.7 | 142.3 | 242.3 |
| zyda_pile-uncopyrighted_only | 189.4 | 64.9 | 82.9 |
| zyda_peS2o_only | 133.7 | 35.7 | 53.4 |
| zyda_arxiv_only | 8.3 | 0.3 | 4.7 |
| zyda_starcoder_only | 299.5 | 176.1 | 231.3 |
| Total | 3,304.7 | 1,594.2 | 1,296.7 |
Dataset Description
- Curated by: Zyphra
- Language(s) (NLP): Primarily English
- License: Open Data Commons Attribution License (ODC-BY)
Dataset Structure
Dataset fields:
- text: the actual text used for training
- source: the component dataset the text comes from
- filtering_features: precomputed values of the features that were used for filtering (serialized as a JSON string)
- source_other: metadata from the source dataset (serialized as a JSON string)
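Since filtering_features and source_other are serialized as JSON strings, they must be decoded before use. A minimal sketch, streaming one record of the arxiv component (the exact keys inside the decoded dictionaries come from the filtering pipeline and the source datasets):

```python
import json

import datasets

ds = datasets.load_dataset("Zyphra/Zyda", name="zyda_arxiv_only", split="train", streaming=True)

for example in ds.take(1):
    # Both fields are JSON strings and must be decoded before use.
    features = json.loads(example["filtering_features"])
    metadata = json.loads(example["source_other"])
    print(sorted(features.keys()))
    print(sorted(metadata.keys()))
```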
Source Data
Zyda was drawn from seven component open datasets that are well regarded in the community:
- Pile Uncopyrighted: https://huggingface.co/datasets/monology/pile-uncopyrighted
- C4-en: https://huggingface.co/datasets/allenai/c4
- peS2o: https://huggingface.co/datasets/allenai/peS2o
- RefinedWeb: https://huggingface.co/datasets/tiiuae/falcon-refinedweb
- SlimPajama: https://huggingface.co/datasets/cerebras/SlimPajama-627B
- arxiv_s2orc_parsed: https://huggingface.co/datasets/ArtifactAI/arxiv_s2orc_parsed
- StarCoder: https://huggingface.co/datasets/bigcode/starcoderdata
Data Collection and Processing
Zyda was created using a two stage post-processing pipeline consisting of filtering and deduplication.
For the filtering stage, we used a set of hand-crafted, tuned filters derived from sources such as C4, RedPajama, and Gopher, in addition to filters of our own.
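To illustrate the kind of document-level features such filters compute, here is a minimal sketch of two common ones; the thresholds below are hypothetical placeholders for illustration, not the values used in the Zyda pipeline:

```python
def mean_word_length(text: str) -> float:
    """Average length of whitespace-separated words."""
    words = text.split()
    return sum(len(w) for w in words) / len(words) if words else 0.0


def fraction_non_alphanumeric(text: str) -> float:
    """Share of characters that are neither letters nor digits."""
    return sum(not c.isalnum() for c in text) / len(text) if text else 0.0


def passes_filters(text: str) -> bool:
    # Hypothetical thresholds for illustration only; the real pipeline
    # uses hand-tuned values (see the Zyda technical report).
    return 3.0 <= mean_word_length(text) <= 10.0 and fraction_non_alphanumeric(text) < 0.3
```

These feature names mirror the precomputed filtering_features field described above.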
For the deduplication stage, we used MinHash approximate deduplication over 13-grams, with a MinHash signature size of 128, and filtered out documents whose Jaccard similarity exceeded 0.4.
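A minimal sketch of this style of MinHash/LSH deduplication, using the third-party datasketch library with the parameters stated above (this is illustrative, not the actual Zyda pipeline; see our processing code for that):

```python
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128   # MinHash signature size
THRESHOLD = 0.4  # approximate Jaccard similarity cutoff
NGRAM = 13       # shingle size in words


def signature(text: str) -> MinHash:
    """MinHash signature over word 13-grams of a document."""
    words = text.split()
    m = MinHash(num_perm=NUM_PERM)
    for i in range(max(len(words) - NGRAM + 1, 1)):
        m.update(" ".join(words[i : i + NGRAM]).encode("utf-8"))
    return m


corpus = ["first example document ...", "second example document ..."]  # toy stand-in

lsh = MinHashLSH(threshold=THRESHOLD, num_perm=NUM_PERM)
kept = []
for doc_id, text in enumerate(corpus):
    m = signature(text)
    if not lsh.query(m):  # no near-duplicate among documents kept so far
        lsh.insert(str(doc_id), m)
        kept.append(doc_id)
```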
For full details on our data processing, see the Zyda technical report and our dataset processing code.
Personal and Sensitive Information
As a language modeling dataset drawn largely from the open web, Zyda likely contains PII that was not filtered out of the component datasets and that may have been missed by our own filters.
Bias, Risks, and Limitations
As a dataset composed largely of open web scrapes, Zyda likely contains biased and toxic content.
Licensing Information
We are releasing this dataset under the terms of ODC-BY. By using this dataset, you are also bound by any license agreements and terms of use of the original data sources.
Citation
If you use our dataset to train a model, please cite us:
```bibtex
@misc{tokpanov2024zyda,
      title={Zyda: A 1.3T Dataset for Open Language Modeling},
      author={Yury Tokpanov and Beren Millidge and Paolo Glorioso and Jonathan Pilault and Adam Ibrahim and James Whittington and Quentin Anthony},
      year={2024},
      eprint={2406.01981},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```