Model Details
Nerdsking-python-coder-7B-i is a 7B-parameter, partially uncensored model focused on Python, with English as its primary language. It was trained extensively on Python; it can code in other languages as well, but performance in those languages will not match what it achieves in Python.
Key Characteristics:
- Parameter count: 7B
- Primary domain: Python programming
- Secondary capabilities: General coding, technical English
- Training focus: Python logic, standard library usage, algorithmic reasoning
- Alignment: Partially uncensored (developer-oriented)
Nerdsking Python Coder Family
- Nerdsking Python Coder 3B-i
- Nerdsking Python Coder 7B-i
Benchmark
After intense refinement, Nerdsking-python-coder-7B-i achieves 86.99 pass@1 on HumanEval (bf16), ranking it among the highest-performing Python-focused 7B models reported on HumanEval and surpassing even much larger models on that benchmark.
Benchmark details (164 tasks):
- Official HumanEval execution protocol
- Test suites executed via exec()
- Zero-shot pass@1
- dtype == "bfloat16"
- temperature = 0.1
- do_sample = False
- Evaluated on fully merged weights
- Prompting: chat-formatted with a fixed system prompt ("You are an expert Python coding assistant.")
- Quantization: none (unquantized bf16 weights)
The configuration above is fully disclosed to support reproducibility and fair comparison.
Note: Quantized variants (INT4/INT6) may exhibit lower HumanEval scores due to reduced numerical precision.
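For reference, the sketch below illustrates how a strict HumanEval-style zero-shot pass@1 check can be scored with exec(), as described above. It is not the exact harness used for the reported score; the `problems` structure follows the official HumanEval dataset fields, and `generate_completion` is a hypothetical wrapper around the model call shown in Quick Start.

```python
# Minimal sketch of a HumanEval-style pass@1 check (not the exact harness used).
# Assumes `problems` is an iterable of dicts with "prompt", "test", and
# "entry_point" keys, as in the official HumanEval dataset, and that
# generate_completion(prompt) wraps the model call from the Quick Start section.

def passes_tests(prompt: str, completion: str, test: str, entry_point: str) -> bool:
    """Execute the candidate solution against the task's test suite via exec()."""
    program = prompt + completion + "\n" + test + f"\ncheck({entry_point})\n"
    env: dict = {}
    try:
        exec(program, env)  # NOTE: run untrusted generated code in a sandbox in practice
        return True
    except Exception:
        return False

def pass_at_1(problems, generate_completion) -> float:
    """Zero-shot pass@1: one completion per task, scored by executing the tests."""
    passed = 0
    for task in problems:
        completion = generate_completion(task["prompt"])
        if passes_tests(task["prompt"], completion, task["test"], task["entry_point"]):
            passed += 1
    return 100.0 * passed / len(problems)
```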
Comparison Table
| Model name | Approx. HumanEval Pass@1 (%) | Notes / Source |
|---|---|---|
| Nerdsking-python-coder-7B-i | 86.99 | Evaluated score (zero-shot, strict HumanEval pass@1, unquantized bf16 weights) |
| Qwen2.5-Coder-7B | ~74–76 | Community evaluation (OpenCompass run); figures vary by harness/settings |
| DeepSeek-Coder-6.7B | ~72–73 | Official DeepSeek report and independent replications; close to strict HumanEval protocol |
| CodeLlama-7B | ~33–35 | Meta technical report |
| WizardCoder-7B | ~57–59 | Community benchmarks; strong instruction-following but less consistent zero-shot behavior |
S.o.n.n.
The model was refined under "s.o.n.n." (single omni neural network), a concept created by IPMN at Nerdsking.com. It is both a precise way of fine-tuning/altering existing models and a foundational concept for a broader AI architecture standard currently under active research and development.
When applied to pre-existing models, s.o.n.n. allows:
- a parameter-preserving refinement methodology
- focused global behavioral shaping instead of task-local adapters
- avoidance of the fragmentation common in multi-adapter or task-siloed approaches
Quick Start (Inference)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nerdsking/Nerdsking-python-coder-7B-i"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",  # unquantized bf16, matching the benchmark setup
    device_map="auto",       # place weights on available device(s) automatically
)

prompt = "Write a Python function that checks if a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
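Since the benchmark used a chat-formatted prompt with a fixed system prompt, prompting through the tokenizer's chat template may give behavior closer to the reported numbers. The sketch below assumes the tokenizer ships a chat template; adjust if it does not.

```python
# Optional: chat-formatted prompting, mirroring the benchmark's prompting setup.
messages = [
    {"role": "system", "content": "You are an expert Python coding assistant."},
    {"role": "user", "content": "Write a Python function that checks if a number is prime."},
]
chat_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
chat_outputs = model.generate(chat_inputs, max_new_tokens=200)
print(tokenizer.decode(chat_outputs[0], skip_special_tokens=True))
```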
Ethical & Safety Notes
This model is intended for technical and research use. Due to relaxed alignment constraints, outputs should be reviewed before deployment in production or public-facing systems.
Citation
If you use this model in research or benchmarking, please cite:
Nerdsking-python-coder-7B-i, IPMN / Nerdsking.com