Human Behavior Atlas v2

A large-scale multimodal dataset for human behavior understanding, spanning emotion recognition, sentiment analysis, humor detection, mental health screening, and video question answering. The dataset integrates 16 source datasets into a unified schema with audio, video, and pre-extracted features, designed for reinforcement learning training with the verl framework and multimodal language models such as Qwen2.5-Omni-7B.

Dataset Summary

| Property | Value |
|---|---|
| Total samples | 100,299 |
| Train split | 74,449 |
| Validation split | 7,646 |
| Test split | 18,204 |
| Source datasets | 16 |
| Modalities | Text, Audio (.wav bytes), Video (.mp4 bytes), OpenSmile features (.pt bytes), Pose features (.pt bytes), all embedded in parquet |
| Languages | English, Chinese (CH-SIMS v2) |
| License | CC BY-NC 4.0 |

Modality Distribution

| Modality Signature | Samples | Percentage |
|---|---|---|
| text_video_audio | 87,318 | 87.1% |
| text_audio | 10,431 | 10.4% |
| text | 2,550 | 2.5% |

Source Datasets

| Dataset | Samples | Task | Modality | Description |
|---|---|---|---|---|
| mosei_senti | 22,740 | Sentiment classification | text_video_audio | CMU-MOSEI sentiment analysis (negative/neutral/positive) |
| intentqa | 14,158 | Video QA | text_video_audio | Intent-driven video question answering |
| meld_senti | 13,518 | Sentiment classification | text_video_audio | MELD multimodal sentiment (from Friends TV series) |
| meld_emotion | 13,350 | Emotion classification | text_video_audio | MELD multimodal emotion recognition (7 classes) |
| mosei_emotion | 8,545 | Emotion classification | text_video_audio | CMU-MOSEI emotion recognition (6 classes) |
| cremad | 7,442 | Emotion classification | text_audio | CREMA-D acted emotional speech recognition |
| siq2 | 6,394 | Video QA | text_video_audio | Social IQ 2.0 social intelligence QA |
| chsimsv2 | 4,384 | Sentiment classification | text_video_audio | CH-SIMS v2 Chinese multimodal sentiment |
| tess | 2,800 | Emotion classification | text_audio | Toronto Emotional Speech Set |
| urfunny | 2,113 | Humor classification | text_video_audio | UR-Funny multimodal humor detection |
| mmpsy_depression | 1,275 | Depression screening | text_video_audio | Multimodal depression assessment |
| mmpsy_anxiety | 1,275 | Anxiety screening | text_video_audio | Multimodal anxiety assessment |
| mimeqa | 801 | Video QA | text_video_audio | MIME gesture-based QA |
| mmsd | 687 | Humor classification | text | Multimodal sarcasm detection (text only) |
| ptsd_in_the_wild | 628 | PTSD detection | text_video_audio | PTSD detection from video interviews |
| daicwoz | 189 | Depression screening | text_video_audio | DAIC-WOZ clinical depression interviews |

Task Types

| Task ID | Description | Datasets |
|---|---|---|
| emotion_cls | Emotion classification | mosei_emotion, meld_emotion, cremad, tess |
| sentiment_cls | Sentiment classification / regression | mosei_senti, meld_senti, chsimsv2 |
| humor_cls | Humor and sarcasm detection | urfunny, mmsd |
| depression | Depression screening | mmpsy_depression, daicwoz |
| anxiety | Anxiety screening | mmpsy_anxiety |
| ptsd | PTSD detection | ptsd_in_the_wild |
| video_qa | Video question answering | intentqa, siq2, mimeqa |

Schema

Each row in the Parquet files contains the following columns:

| Column | Type | Description |
|---|---|---|
| problem | string | Prompt text with modality markers (`<audio>`, `<video>`) |
| answer | string | Ground truth answer |
| audios | list[bytes] | Raw .wav audio bytes (embedded) |
| videos | list[bytes] | Raw .mp4 video bytes (embedded) |
| images | list[bytes] | Image bytes (currently unused) |
| dataset | string | Source dataset name |
| modality_signature | string | Modality combination: text_video_audio, text_audio, or text |
| ext_video_feats | list[bytes] | Pose estimation feature tensors (.pt bytes, embedded) |
| ext_audio_feats | list[bytes] | OpenSmile audio feature tensors (.pt bytes, embedded) |
| task | string | Task type identifier |
| class_label | string | Classification label |

Repository Structure

```
sboughorbel/human_behavior_atlas_v2/
  train-00000-of-XXXXX.parquet    # Sharded parquet with embedded audio/video
  train-00001-of-XXXXX.parquet
  ...
  validation-*.parquet
  test-*.parquet
```

All data, including audio, video, and pre-extracted features, is fully embedded in the parquet files; no separate downloads or extraction steps are needed.

Usage

Loading with HuggingFace Datasets

```python
from datasets import load_dataset

# Stream without downloading everything
ds = load_dataset("sboughorbel/human_behavior_atlas_v2", split="train", streaming=True)
sample = next(iter(ds))

# Load a subset
ds_100 = load_dataset("sboughorbel/human_behavior_atlas_v2", split="train[:100]")

# Filter by task or modality
emotion_ds = ds_100.filter(lambda x: x["task"] == "emotion_cls")
```

Accessing Embedded Media

```python
import io
import soundfile as sf

sample = ds_100[0]

# Audio is raw bytes: decode with soundfile or torchaudio
if sample["audios"]:
    audio_data, sr = sf.read(io.BytesIO(sample["audios"][0]))

# Video is raw bytes: decode with decord, opencv, or write to a temp file
if sample["videos"]:
    video_bytes = sample["videos"][0]
    # e.g., with decord:
    # from decord import VideoReader
    # vr = VideoReader(io.BytesIO(video_bytes))
```
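The pre-extracted feature columns (`ext_audio_feats`, `ext_video_feats`) are described above as torch-serialized (.pt) tensor bytes, so `torch.load` over a `BytesIO` should decode them. The sketch below demonstrates the pattern on a synthetic tensor rather than real dataset bytes; the tensor shape is purely illustrative.

```python
import io

import torch

# Create synthetic ".pt bytes" the same way the dataset's feature
# columns are described: a tensor serialized with torch.save.
buf = io.BytesIO()
torch.save(torch.zeros(3, 88), buf)  # shape is illustrative only
feat_bytes = buf.getvalue()

# Decoding: wrap the raw bytes in BytesIO and load the tensor back.
tensor = torch.load(io.BytesIO(feat_bytes))
print(tensor.shape)  # torch.Size([3, 88])
```

In practice `feat_bytes` would come from `sample["ext_audio_feats"][0]` or `sample["ext_video_feats"][0]`.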

Download and Setup

```bash
# Download full dataset
huggingface-cli download sboughorbel/human_behavior_atlas_v2 \
    --repo-type dataset --local-dir /path/to/data

# Or download specific splits only
huggingface-cli download sboughorbel/human_behavior_atlas_v2 \
    --repo-type dataset --local-dir /path/to/data \
    --include "train-*.parquet"
```
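The `--include` flag above selects files by glob pattern. Python's `fnmatch` approximates that matching; the sketch below shows which of a few hypothetical shard names the `train-*.parquet` pattern would keep (the filenames are made up, not the repository's actual shard names).

```python
from fnmatch import fnmatch

# Hypothetical shard names, for illustration only.
files = [
    "train-00000-of-00010.parquet",
    "validation-00000-of-00002.parquet",
    "test-00000-of-00003.parquet",
]

# fnmatch-style glob, approximating the CLI's --include filtering.
train_only = [f for f in files if fnmatch(f, "train-*.parquet")]
print(train_only)  # ['train-00000-of-00010.parquet']
```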

Integration with verl RL Training

This dataset is designed for RL training with verl using Qwen2.5-Omni-7B. The problem field contains structured prompts with <audio> and <video> modality markers. Audio and video bytes are loaded directly from the parquet files; no path resolution is needed.

All data including feature tensors is embedded directly in the parquet files.
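Since the problem field carries its modality markers inline, a consumer can recover which modalities a prompt expects with a simple scan. This is a minimal sketch; the helper name is ours, and the example prompt mirrors the format shown in this card.

```python
import re


def modality_markers(problem: str) -> list[str]:
    """Return the <audio>/<video> markers in order of appearance."""
    return re.findall(r"<(audio|video)>", problem)


prompt = "<audio>\nMaybe tomorrow it will be cold.\nThe above is a speech recording..."
print(modality_markers(prompt))  # ['audio']
```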

```bash
# Example verl training launch
python3 -m verl.trainer.main_ppo \
    data.train_files=/path/to/data/train-*.parquet \
    data.val_files=/path/to/data/validation-*.parquet \
    data.prompt_key=problem \
    data.image_key=images \
    data.video_key=videos \
    data.modalities='audio,videos' \
    ...
```

Citation

If you use this dataset, please cite the following paper:

```bibtex
@article{Ong2025HumanBehavior,
  title={Human Behavior Atlas: Benchmarking Unified Psychological and Social Behavior Understanding},
  author={Ong, Keane and Dai, Wei and Li, Carol and Feng, Dewei and Li, Hengzhi and Wu, Jingyao and Cheong, Jiaee and Mao, Rui and Mengaldo, Gianmarco and Cambria, Erik and Liang, Paul Pu},
  journal={arXiv preprint arXiv:2510.04899},
  year={2025}
}
```

Keane Ong, Wei Dai, Carol Li, Dewei Feng, Hengzhi Li, Jingyao Wu, Jiaee Cheong, Rui Mao, Gianmarco Mengaldo, Erik Cambria, Paul Pu Liang. "Human Behavior Atlas: Benchmarking Unified Psychological and Social Behavior Understanding." ICLR 2026. arXiv:2510.04899

Please also cite the individual source datasets as appropriate:

  • CMU-MOSEI: Zadeh et al., "Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph", ACL 2018
  • MELD: Poria et al., "MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations", ACL 2019
  • CREMA-D: Cao et al., "CREMA-D: Crowd-Sourced Emotional Multimodal Actors Dataset", IEEE TAC 2014
  • DAIC-WOZ: Gratch et al., "The Distress Analysis Interview Corpus of Human and Computer Interviews", LREC 2014
  • CH-SIMS v2: Liu et al., "Make Acoustic and Visual Cues Matter: CH-SIMS v2.0 Dataset and AV-Mixup Consistent Module", ICMI 2022

License

This dataset is released under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. Individual source datasets may have their own licensing terms; please consult the original dataset publications for details.
