Gpt-oss-120B-Qwen3-Distill

Model Description

This is a reasoning-distilled version of gpt-oss-120b: Qwen3-4B-Thinking-2507 fine-tuned on thousands of complete math reasoning traces and answers generated with gpt-oss-120b-4bit.

  • Inference settings: temperature = 0.7, top_p = 0.8, top_k = 20 (see the usage sketch below)
  • Model size: 4B parameters, stored as BF16 safetensors
  • License: apache-2.0
  • Developed by: Cannae-AI

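A minimal usage sketch with the recommended sampling settings, assuming the standard Hugging Face transformers chat interface (the prompt and generation length are illustrative, not from the model card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Cannae-AI/Gpt-oss-120B-Qwen3-Distill"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

# Example math prompt (hypothetical; any reasoning task works).
messages = [{"role": "user", "content": "Solve: if 3x + 7 = 22, what is x?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Recommended inference settings from the model card.
outputs = model.generate(
    inputs,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```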