FinBERT LoRA Adapter for Operational Metrics Sentiment

This repository provides a LoRA adapter for ProsusAI/finbert that mitigates a domain bias commonly observed in financial sentiment models.

Motivation

Standard FinBERT models are trained primarily on financial news and reports. As a result, phrases containing words such as "down", "reduced", or "failure" are often interpreted as negative signals, even when they describe improvements in operational or quality-related metrics.

However, in manufacturing, operations, and enterprise contexts, statements like:

"Failure rate down 10% QoQ"

represent positive operational improvements.

This adapter corrects that bias inside the model itself, rather than relying on rule-based post-processing.


What This Adapter Does

✅ Classifies decreases in undesirable operational metrics as Positive

Quality / Operations KPIs

  • defect rate
  • error rate
  • failure rate
  • scrap rate
  • return rate

✅ Preserves negative sentiment for genuine financial deterioration

  • revenue down
  • profit reduced
  • sales declined

Base vs Adapter (sample inference, local run)

Text                                       | Base FinBERT      | +Adapter (LoRA)
Failure rate down 10% QoQ                  | Negative (0.9640) | Positive (0.9000)
Defect rate reduced                        | Neutral (0.6343)  | Positive (0.7902)
Revenue reduced by 20%                     | Negative (0.9690) | Negative (0.9469)
The production line was audited last week. | Neutral (0.5524)  | Positive (0.6094)

The adapter consistently reclassifies decreases in negative operational KPIs as Positive, while preserving Negative sentiment for genuine financial deterioration.

These examples are based on a small set of manually selected sentences and are intended for illustrative comparison.
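A comparison like the one above can be reproduced locally with the following sketch. The `score` helper and variable names are illustrative rather than part of the repository, and exact probabilities may vary slightly across hardware and library versions.

from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel
import torch

base_id = "ProsusAI/finbert"
adapter_id = "yahoyaho13/finbert-lora-operational-sentiment"
labels = {0: "positive", 1: "negative", 2: "neutral"}

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id).eval()
adapted_model = PeftModel.from_pretrained(
    AutoModelForSequenceClassification.from_pretrained(base_id), adapter_id
).eval()

def score(model, text):
    # Top label and its probability for a single sentence
    inputs = tokenizer(text, return_tensors="pt")
    with torch.inference_mode():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    idx = int(probs.argmax())
    return labels[idx], round(float(probs[idx]), 4)

for text in ["Failure rate down 10% QoQ", "Defect rate reduced", "Revenue reduced by 20%"]:
    print(text, "| base:", score(base_model, text), "| adapter:", score(adapted_model, text))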

Label Mapping

  • 0 → positive
  • 1 → negative
  • 2 → neutral
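In code, this mapping is applied by indexing the highest-scoring class, for example (illustrative snippet with a hypothetical score vector):

# Illustrative example: converting a hypothetical score vector to a label
id2label = {0: "positive", 1: "negative", 2: "neutral"}

scores = [0.90, 0.06, 0.04]  # hypothetical softmax output (positive, negative, neutral)
pred_id = max(range(len(scores)), key=scores.__getitem__)
print(id2label[pred_id])     # "positive"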


How to Use

from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel
import torch

base = "ProsusAI/finbert"
adapter = "yahoyaho13/finbert-lora-operational-sentiment"

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the base FinBERT classifier with the label mapping used by this adapter
tokenizer = AutoTokenizer.from_pretrained(base)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base,
    num_labels=3,
    id2label={0: "positive", 1: "negative", 2: "neutral"},
    label2id={"positive": 0, "negative": 1, "neutral": 2},
)

# Attach the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(base_model, adapter).to(device).eval()

text = "Failure rate down 10% QoQ"
inputs = tokenizer(text, return_tensors="pt").to(device)

# Softmax over the three sentiment classes
with torch.inference_mode():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

print({base_model.config.id2label[i]: round(float(probs[0, i]), 4) for i in range(3)})
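For scoring several statements at once, the same objects from the snippet above can be reused in a batch. The `predict` helper below is illustrative and not part of the repository.

def predict(texts):
    # Tokenize a batch with padding and return one label per sentence
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(device)
    with torch.inference_mode():
        probs = torch.softmax(model(**batch).logits, dim=-1)
    return [base_model.config.id2label[int(i)] for i in probs.argmax(dim=-1)]

print(predict([
    "Failure rate down 10% QoQ",
    "Revenue reduced by 20%",
    "The production line was audited last week.",
]))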

Training Summary

Base model: ProsusAI/finbert
Fine-tuning method: LoRA (PEFT)
Target modules: all-linear

LoRA configuration:

  • r = 16
  • lora_alpha = 64
  • dropout = 0.05

Dataset size: ~170 short operational / financial statements
Hardware: NVIDIA GTX 1060 6GB (local training)
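For reference, the settings above correspond roughly to the following PEFT configuration (a minimal sketch; the exact training script and remaining hyperparameters are not published here):

from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained(
    "ProsusAI/finbert", num_labels=3
)

# LoRA settings as listed in the summary above
lora_config = LoraConfig(
    task_type="SEQ_CLS",          # sequence classification
    r=16,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules="all-linear",  # apply LoRA to all linear layers
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()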


Known Limitations

  • Trained on a small, domain-specific dataset.
  • Not intended as a general-purpose financial sentiment replacement.
  • Best suited for short operational or KPI-style sentences.
  • May over-predict Positive sentiment for some neutral operational statements due to limited training data.

Intended Use

  • Manufacturing and quality reporting
  • Enterprise KPI commentary
  • Mixed finance/operations text where rate decreases imply improvement

License

Apache-2.0 (inherits base model license)


Author

  • Hugging Face: yahoyaho13