Hand_off_DS_Llama8B_350steps_1e6rate_SFT

This model is a fine-tuned version of deepseek-ai/DeepSeek-R1-Distill-Llama-8B on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5504
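
A minimal sketch of loading the checkpoint for inference with transformers; the repository id is assumed from this card's Hub namespace, and the prompt and generation settings are purely illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id assumed from this card's Hub namespace.
model_id = "tsavage68/Hand_off_DS_Llama8B_350steps_1e6rate_SFT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are stored in F16
    device_map="auto",
)

# Illustrative prompt; the card does not document an expected prompt format.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```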

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-06
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 4
  • optimizer: Adafactor (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • training_steps: 350
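
For reference, a hedged sketch of how these hyperparameters map onto a transformers TrainingArguments configuration. The actual training script, dataset, and any additional arguments are not documented on this card; output_dir, eval_strategy, and the logging cadence are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Hand_off_DS_Llama8B_350steps_1e6rate_SFT",  # assumed
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=2,  # total train batch size: 2 * 2 = 4
    optim="adafactor",              # Adafactor, no extra optimizer args
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=350,
    eval_strategy="steps",          # assumed from the 50-step eval cadence below
    eval_steps=50,                  # assumed
    logging_steps=50,               # assumed
)
```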

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6766        | 0.0464 | 50   | 2.4440          |
| 0.7304        | 0.0929 | 100  | 0.7164          |
| 0.5979        | 0.1393 | 150  | 0.6025          |
| 0.5512        | 0.1857 | 200  | 0.5658          |
| 0.5287        | 0.2321 | 250  | 0.5530          |
| 0.5223        | 0.2786 | 300  | 0.5504          |
| 0.5666        | 0.3250 | 350  | 0.5504          |

Framework versions

  • Transformers 4.49.0
  • Pytorch 2.6.0+cu124
  • Datasets 3.3.1
  • Tokenizers 0.21.0
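
A quick way to confirm a local environment matches the pinned versions above (assuming the standard PyPI packages for each entry):

```python
import datasets
import tokenizers
import torch
import transformers

# Expected: 4.49.0, 2.6.0+cu124, 3.3.1, 0.21.0 (from the list above)
for name, module in [
    ("Transformers", transformers),
    ("Pytorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```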