QSTN: A Modular Framework for Robust Questionnaire Inference with Large Language Models

QSTN is a Python framework for building robust questionnaire-based inference experiments with Large Language Models. It provides a full pipeline, from perturbing prompts and choosing a Response Generation Method, through inference, to parsing of the model output. QSTN supports both local inference with vLLM and remote inference via the OpenAI API.

Detailed information and guides are available in our documentation. Tutorial notebooks can also be found in this repository.

Installation

To install the package and its dependencies, you can use pip:

pip install qstn

Or install this package from source:

pip install git+https://github.com/dess-mannheim/QSTN.git
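
As a quick sanity check, you can verify that the package imports without errors:

python -c "import qstn"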

Getting Started

Below is a minimal working example of how to use QSTN. It can be integrated into existing projects easily, requiring just three function calls to operate. Users familiar with vLLM or the OpenAI API can use the same Model/Client calls and arguments. In this example, reasoning and the generated response are parsed automatically. For more elaborate examples, see the tutorial notebooks.

import qstn
import pandas as pd
from vllm import LLM

# 1. Prepare questionnaire and persona data
questionnaires = pd.read_csv("hf://datasets/qstn/ex/q.csv")
personas = pd.read_csv("hf://datasets/qstn/ex/p.csv")
prompt = (
    "Please tell us how you feel about:\n"
    f"{qstn.utilities.placeholder.PROMPT_QUESTIONS}"
)
interviews = [
    qstn.prompt_builder.LLMPrompt(
        questionnaire_source=questionnaires,
        system_prompt=persona,
        prompt=prompt,
    )
    for persona in personas.system_prompt
]

# 2. Run Inference
model = LLM("Qwen/Qwen3-4B", max_model_len=5000)
results = qstn.survey_manager.conduct_survey_single_item(
    model, interviews, max_tokens=500
)

# 3. Parse Results
parsed_results = qstn.parser.raw_responses(results)
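
The example above runs local inference with vLLM. Since QSTN also supports remote inference via the OpenAI API using the same Model/Client calls and arguments, a remote variant could look roughly like the sketch below. Exactly how the client and target model are passed to conduct_survey_single_item is an assumption here; consult the documentation and tutorial notebooks for the supported signature.

import qstn
from openai import OpenAI

# Hypothetical sketch: reuse the interviews list built in the example above,
# but hand an OpenAI client to the survey manager instead of a local vLLM model.
# Whether the client is accepted directly and how the target model is selected
# are assumptions (see the documentation).
client = OpenAI()  # reads OPENAI_API_KEY from the environment
results = qstn.survey_manager.conduct_survey_single_item(
    client, interviews, max_tokens=500
)
parsed_results = qstn.parser.raw_responses(results)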

Citation

If you find QSTN useful in your work, please cite our paper:

@misc{kreutner2025qstnmodularframeworkrobust,
      title={QSTN: A Modular Framework for Robust Questionnaire Inference with Large Language Models}, 
      author={Maximilian Kreutner and Jens Rupprecht and Georg Ahnert and Ahmed Salem and Markus Strohmaier},
      year={2025},
      eprint={2512.08646},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2512.08646}, 
}
