
ISHIDA International FR→JA Legal Translation Dataset (Sample Edition)

A professionally curated French→Japanese legal translation dataset created by ISHIDA International, based on more than 25 years of legal and technical translation experience.

This sample edition contains 100 anonymized parallel segments extracted from:

  • Corporate statutes
  • Contracts & agreements
  • Governance material
  • Compliance and ethics documents

⚠️ This is a sample version.
A full commercial dataset (50,000–100,000+ segments) is available upon request.


📌 Overview

  • Language pair: French → Japanese
  • Domain: Corporate law, governance, contracts, compliance
  • Type: Human translation (professional legal translator)
  • Anonymization: Placeholder tokens (ORG_XXX, PER_XXX, LOC_XXX, BRAND_XXX)
  • License: CC BY-NC 4.0
  • Use cases: MT fine-tuning, domain adaptation, LLM evaluation, legal NLP research

📁 Dataset Contents

my-fr-ja-legal-dataset/
│
├── sample.csv               # 100 FR→JA legal segments
├── dataset_card.yaml        # Dataset metadata
├── README.md                # Documentation
├── README_sample.txt        # Additional notes
└── changelog.txt            # Version history
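As a quick sanity check, the parallel segments in sample.csv can be read with a standard CSV parser. This is a minimal sketch; the column names ("fr", "ja") are assumptions, so verify them against the file's actual header row before use:

```python
import csv
import io

# Stand-in for sample.csv; the "fr"/"ja" column names are assumed, not confirmed.
data = io.StringIO(
    "fr,ja\n"
    '"La société ORG_001 est dirigée par PER_012.","ORG_001はPER_012が率いる会社である。"\n'
)

# csv.DictReader handles quoted fields, which legal text with embedded commas needs.
rows = list(csv.DictReader(data))
for row in rows:
    print(row["fr"], "->", row["ja"])
```

If the real file uses a different delimiter or quoting convention, pass `delimiter=` and `quotechar=` to `DictReader` accordingly.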

🔒 Anonymization Method

All personal names, company names, geographical locations, and brands
have been anonymized using deterministic placeholder tokens:

Example token    Description
-------------    ----------------
ORG_001          Organization
PER_012          Person
LOC_044          Location
BRAND_003        Brand or product

No XML tags are used: placeholders are plain inline tokens, which keeps the format model-friendly and preserves structural consistency for MT/LLM training.
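The deterministic numbering described above can be sketched as follows. This is an illustrative reconstruction, not the provider's actual pipeline; it assumes entity spans have already been detected by an upstream step (e.g. NER) that is not shown:

```python
from collections import defaultdict

# Per-kind counters and a stable entity -> token mapping, so the same
# entity always receives the same placeholder across the whole corpus.
counters = defaultdict(int)
mapping = {}

def placeholder(entity: str, kind: str) -> str:
    """Return a stable token like ORG_001 for a given entity string."""
    key = (kind, entity)
    if key not in mapping:
        counters[kind] += 1
        mapping[key] = f"{kind}_{counters[kind]:03d}"
    return mapping[key]

def anonymize(text: str, entities: list[tuple[str, str]]) -> str:
    # entities: (surface_form, kind) pairs, e.g. [("Acme SA", "ORG")];
    # every occurrence of a surface form is replaced by its token.
    for surface, kind in entities:
        text = text.replace(surface, placeholder(surface, kind))
    return text

print(anonymize("Acme SA et Acme SA", [("Acme SA", "ORG")]))
# -> ORG_001 et ORG_001
```

Because the mapping is deterministic, repeated mentions of one organization stay aligned between the French and Japanese sides of a segment pair.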


🔧 Intended Use

🟦 Machine Translation (MT)

  • Domain-adapted fine-tuning (FR→JA)
  • Benchmarking legal translation systems
  • Terminology consistency evaluation
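For fine-tuning, each FR→JA pair typically needs to be converted into the training format your framework expects. A minimal sketch, assuming a chat-style JSONL target (the "fr"/"ja" field names and the prompt wording are illustrative, not part of the dataset):

```python
import json

# One hypothetical segment pair from the dataset.
pair = {"fr": "Le conseil d'administration se réunit.", "ja": "取締役会は開催される。"}

# Wrap it as a single chat-style JSONL record for supervised fine-tuning.
record = {
    "messages": [
        {"role": "user", "content": f"Translate to Japanese: {pair['fr']}"},
        {"role": "assistant", "content": pair["ja"]},
    ]
}

# ensure_ascii=False keeps the Japanese text readable in the output file.
line = json.dumps(record, ensure_ascii=False)
print(line)
```

The same loop over all 100 sample segments produces a ready-to-use JSONL file; adjust the record shape to match your trainer's schema.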

🟩 Legal-domain LLM Training

  • Contract reasoning
  • Clause extraction
  • Compliance classification
  • Corporate governance Q&A

🟨 Research & Evaluation

  • Human evaluation
  • MT error analysis
  • Legal-domain prompt engineering

📈 Version History

Version 1.0 — November 2025

  • Initial sample release (100 segments)
  • Governance + corporate + compliance content
  • Placeholder anonymization (ORG_###, PER_###)

Planned for Version 1.1

  • +300 additional segments
  • Domain-level labels (corporate / compliance / training)
  • Alignment metadata for TMX export

🌐 Links & Contact

For commercial licensing or full dataset access, please contact us.
We respond in French, English, and Japanese.


Terms of Use

This dataset is provided for evaluation and research purposes only.

The data is provided "as is" without warranties of any kind, express or implied. No guarantee is made regarding accuracy, completeness, or fitness for any particular purpose.

Redistribution or commercial use of this dataset is prohibited without explicit written permission from the provider.

The dataset does not intentionally include personal data (PII).

© ISHIDA International. All rights reserved.
