GPTQ 8-bit quantized version of DeepSeek-R1-Distill-Qwen-32B
Model Details
See the official model page for details: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
Quantized with GPTQModel, using the AllenAI C4 (allenai/c4) dataset for calibration; a sketch of the quantization setup follows the config below. Quantization config:
bits=8,
group_size=128,
desc_act=False,
damp_percent=0.01,
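For reference, here is a minimal sketch of how a quantization run like this could be reproduced with GPTQModel. The calibration file, sample count, batch size, and output path are illustrative assumptions, not the author's exact script; only the config values above come from this card.

from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig

base_model = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
quant_path = "DeepSeek-R1-Distill-Qwen-32B-GPTQ-Int8"  # hypothetical output directory

# Calibration text from allenai/c4 (file and sample count are assumptions)
calibration = load_dataset(
    "allenai/c4",
    data_files="en/c4-train.00001-of-01024.json.gz",
    split="train",
).select(range(1024))["text"]

# Config matching the values listed above
quant_config = QuantizeConfig(
    bits=8,
    group_size=128,
    desc_act=False,
    damp_percent=0.01,
)

model = GPTQModel.load(base_model, quant_config)
model.quantize(calibration, batch_size=1)
model.save(quant_path)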
How to use
Using the transformers library with its integrated GPTQ support (loading GPTQ checkpoints additionally requires optimum and a GPTQ backend such as gptqmodel or auto-gptq):
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_name = "frankdarkluo/DeepSeek-R1-Distill-Qwen-32B-GPTQ-Int8"
tokenizer = AutoTokenizer.from_pretrained(model_name)
quantized_model = AutoModelForCausalLM.from_pretrained(model_name, device_map="cuda")

# Build the prompt with the model's chat template
chat = [{"role": "user", "content": "Why is grass green?"}]
question_tokens = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt").to(quantized_model.device)

# Note: max_length counts the prompt tokens too; use max_new_tokens to bound only the reply
answer_tokens = quantized_model.generate(question_tokens, generation_config=GenerationConfig(max_length=2048))[0]
print(tokenizer.decode(answer_tokens, skip_special_tokens=True))
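The decode above echoes the prompt along with the answer. To print only the generated reply, slice off the prompt tokens first:

completion = tokenizer.decode(answer_tokens[question_tokens.shape[-1]:], skip_special_tokens=True)
print(completion)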