BanglaBERT Sentiment Classifier

This is a fine-tuned BanglaBERT model trained on the BANEmo dataset (https://ieeexplore.ieee.org/document/11171926) for Bangla sentiment analysis.
It classifies Bangla comments into two categories:

  • 0 → Sadness 😢
  • 1 → Happiness 😀
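
If you want to read this mapping from the checkpoint itself, it should be exposed through the config's id2label field (assuming the mapping was saved at export time; otherwise the generic LABEL_0/LABEL_1 names will appear):

from transformers import AutoConfig

config = AutoConfig.from_pretrained("sakhawat-hossen/bangla-sentiment-banglabert")
print(config.id2label)  # expected {0: "Sadness", 1: "Happiness"}, or LABEL_0/LABEL_1 if not customised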

📊 Model Details

  • Base model: sagorsharma/banglabert
  • Task: Sequence Classification (Sentiment Analysis)
  • Labels: 2 (Sadness, Happiness)
  • Dataset: BANEmo (https://ieeexplore.ieee.org/document/11171926)
  • Evaluation Metrics: Accuracy, F1 Score
  • Validation Accuracy: ~84%
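
The accuracy and F1 figures above can be checked with a loop like the sketch below, assuming the BANEmo validation split is available as parallel lists of texts and 0/1 labels (the exact split and preprocessing behind the ~84% figure are not documented here):

import torch
from sklearn.metrics import accuracy_score, f1_score
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sakhawat-hossen/bangla-sentiment-banglabert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

val_texts = ["..."]   # placeholder: BANEmo validation comments
val_labels = [1]      # placeholder: matching 0/1 gold labels

preds = []
with torch.no_grad():
    for start in range(0, len(val_texts), 8):
        batch = tokenizer(val_texts[start:start + 8], return_tensors="pt",
                          truncation=True, padding=True)
        preds.extend(model(**batch).logits.argmax(dim=-1).tolist())

print("Accuracy:", accuracy_score(val_labels, preds))
print("F1:", f1_score(val_labels, preds))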

๐Ÿ› ๏ธ Training Setup

  • Framework: Hugging Face Transformers
  • Optimizer: AdamW
  • Learning Rate: 5e-5
  • Epochs: 3
  • Batch Size: 8
  • Loss Function: CrossEntropyLoss
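
For reference, a minimal reconstruction of this setup with the Trainer API might look like the sketch below. The BANEmo preprocessing, column names, and maximum sequence length are assumptions; AdamW and CrossEntropyLoss are what Trainer applies by default for sequence classification with integer labels.

from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_model = "sagorsharma/banglabert"  # base checkpoint listed above
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# Placeholder data: replace with the BANEmo train/validation comments and 0/1 labels.
train_ds = Dataset.from_dict({"text": ["..."], "label": [1]})
val_ds = Dataset.from_dict({"text": ["..."], "label": [0]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

train_ds = train_ds.map(tokenize, batched=True)
val_ds = val_ds.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bangla-sentiment-banglabert",
    learning_rate=5e-5,              # AdamW is the Trainer default optimizer
    num_train_epochs=3,
    per_device_train_batch_size=8,
)
# CrossEntropyLoss is applied internally when integer labels are provided.
trainer = Trainer(model=model, args=args, train_dataset=train_ds,
                  eval_dataset=val_ds, tokenizer=tokenizer)
trainer.train()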

🚀 Usage

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("sakhawat-hossen/bangla-sentiment-banglabert")
tokenizer = AutoTokenizer.from_pretrained("sakhawat-hossen/bangla-sentiment-banglabert")

# Example prediction
text = "আজ আমি খুব খুশি।"  # "I am very happy today."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
with torch.no_grad():
    outputs = model(**inputs)
prediction = torch.argmax(outputs.logits, dim=-1).item()

print("Prediction:", "Happiness 😀" if prediction == 1 else "Sadness 😢")
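
The same prediction can also be run through the text-classification pipeline; the label string it returns follows the checkpoint's id2label mapping, so it may read LABEL_1 (Happiness) rather than a custom name:

from transformers import pipeline

classifier = pipeline("text-classification", model="sakhawat-hossen/bangla-sentiment-banglabert")
print(classifier("আজ আমি খুব খুশি।"))  # "I am very happy today." -> e.g. [{'label': 'LABEL_1', 'score': ...}]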