TMLR-Group-HF/Entropy-Llama-3.2-3B-Instruct
This is the Llama-3.2-3B-Instruct model trained with the Entropy Minimization method on the MATH training set. This model is presented in the paper Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models.
Model Description
The model is part of the Co-rewarding framework, a self-supervised Reinforcement Learning (RL) approach designed to improve the reasoning ability of Large Language Models (LLMs) while maintaining training stability. Co-rewarding addresses the common training-collapse issue in self-rewarding methods by seeking complementary supervision from different perspectives. This particular checkpoint is trained with Entropy Minimization, a method associated with Co-rewarding-II, the model-side instantiation of the framework. It aims to mitigate reward hacking and achieve robust performance on complex reasoning tasks, particularly mathematical reasoning benchmarks.
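As a rough illustration, Entropy Minimization encourages the model to become more confident in its own generations by reducing the entropy of its next-token distributions. The sketch below is a minimal, hypothetical illustration of a token-level entropy loss over generated response tokens; it is not the training code from the repository, and the function and variable names are assumptions.

```python
# Minimal sketch of a token-level entropy-minimization loss
# (illustrative only; not the official Co-rewarding training code).
import torch
import torch.nn.functional as F

def entropy_minimization_loss(logits: torch.Tensor, response_mask: torch.Tensor) -> torch.Tensor:
    """Mean per-token entropy of the model's next-token distribution.

    logits:        (batch, seq_len, vocab) raw model outputs
    response_mask: (batch, seq_len) 1 for generated response tokens, 0 elsewhere
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    token_entropy = -(probs * log_probs).sum(dim=-1)  # (batch, seq_len)
    # Average entropy over response tokens only; minimizing this loss
    # pushes the model toward more confident (lower-entropy) predictions.
    return (token_entropy * response_mask).sum() / response_mask.sum().clamp(min=1)
```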
For more details on the Co-rewarding project, including installation, training, and other checkpoints, please refer to the official GitHub repository.
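A minimal inference sketch with the Hugging Face transformers library is shown below; the prompt and generation settings (chat template usage, max_new_tokens, greedy decoding) are illustrative assumptions, not the evaluation configuration used in the paper.

```python
# Minimal inference sketch with transformers (settings are illustrative assumptions).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/Entropy-Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "Solve: If 3x + 7 = 22, what is x? Show your reasoning."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```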
Citation
If you use our models or find our work helpful, please cite our paper:
@article{zhang2025co,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}