FunctionGemma Agent GGUF

A fine-tuned version of FunctionGemma-270M for agentic tool-calling tasks, converted to GGUF format for use with llama.cpp and llama-agent.

Model Details

| Property | Value |
|---|---|
| Base Model | unsloth/functiongemma-270m-it |
| Fine-tuned Model | victor/functiongemma-agent-finetuned |
| Training Dataset | victor/functiongemma-agent-sft |
| Quantization | Q4_K_M (4-bit) |
| Parameters | 270M |

Training

Fine-tuned with LoRA using Unsloth on Hugging Face Jobs infrastructure; a configuration sketch follows the lists below.

Training Configuration:

  • LoRA rank: 128, alpha: 256
  • Epochs: 3
  • Learning rate: 2e-4
  • Batch size: 4, gradient accumulation: 2
  • Hardware: NVIDIA A100-80GB
  • Training method: SFT with train_on_responses_only

Dataset: 7,500 synthetic examples covering:

  • Multi-step tool chaining (glob → read_file → edit_file)
  • Error recovery patterns
  • Clarification dialogs
  • No-tool responses
  • Parallel tool calls
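
For reference, the configuration above maps onto a minimal Unsloth + TRL training sketch like the one below. This is an illustrative sketch, not the actual training script: the sequence length, LoRA target modules, dataset text field, and the Gemma-style turn markers passed to train_on_responses_only are assumptions that are not stated in this card.

# Hedged sketch of the SFT setup; hyperparameters follow the list above.
# Everything marked "assumption" is not specified in this card.
from unsloth import FastLanguageModel
from unsloth.chat_templates import train_on_responses_only
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/functiongemma-270m-it",
    max_seq_length=4096,  # assumption: context length not stated in the card
)
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    lora_alpha=256,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumption
)

dataset = load_dataset("victor/functiongemma-agent-sft", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=2,
        num_train_epochs=3,
        learning_rate=2e-4,
        dataset_text_field="text",  # assumption: column name not stated
        output_dir="outputs",
    ),
)

# Mask the loss so only the model's turns are trained on.
trainer = train_on_responses_only(
    trainer,
    instruction_part="<start_of_turn>user\n",   # assumption: Gemma turn markers
    response_part="<start_of_turn>model\n",
)
trainer.train()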

Tools

The model is trained on 5 tools that mirror llama-agent's tool set:

| Tool | Description |
|---|---|
| read_file | Read file contents with line numbers |
| write_file | Create or overwrite a file |
| edit_file | Find and replace text in a file |
| glob | Find files matching pattern |
| bash | Execute shell command |
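
On the host side, these tools resolve to ordinary file and shell operations. The sketch below shows one way to wire a dispatch table for them; the parameter names (path, content, old, new, pattern, command) are assumptions, since the card only shows glob's pattern argument in the Format section, and a real harness would also report stdout, stderr, and exit_code as shown there.

# Hedged sketch of a host-side dispatch table for the five tools.
# Parameter names are assumptions; only glob's "pattern" appears in this card.
import glob as globlib
import pathlib
import subprocess

def read_file(path: str) -> str:
    # Return file contents prefixed with 1-based line numbers.
    lines = pathlib.Path(path).read_text().splitlines()
    return "\n".join(f"{i + 1}: {line}" for i, line in enumerate(lines))

def write_file(path: str, content: str) -> str:
    # Create or overwrite the file.
    pathlib.Path(path).write_text(content)
    return f"wrote {len(content)} bytes to {path}"

def edit_file(path: str, old: str, new: str) -> str:
    # Replace the first occurrence of `old` with `new`.
    p = pathlib.Path(path)
    p.write_text(p.read_text().replace(old, new, 1))
    return f"edited {path}"

def glob_tool(pattern: str) -> str:
    # Find files matching the pattern (** is supported with recursive=True).
    return "\n".join(globlib.glob(pattern, recursive=True))

def bash(command: str) -> str:
    # Execute a shell command and return its combined output.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

TOOLS = {
    "read_file": read_file,
    "write_file": write_file,
    "edit_file": edit_file,
    "glob": glob_tool,
    "bash": bash,
}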

Usage

With llama.cpp

# Download
wget https://huggingface.co/victor/functiongemma-agent-gguf/resolve/main/functiongemma-270m-it.Q4_K_M.gguf

# Run inference
./llama-cli -m functiongemma-270m-it.Q4_K_M.gguf -p "<start_of_turn>user
Read the main.py file
<end_of_turn>
<start_of_turn>model"

With llama-agent

./llama-agent -m functiongemma-270m-it.Q4_K_M.gguf
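
The GGUF file can also be driven programmatically. Below is a minimal sketch using llama-cpp-python (an assumption; the card itself only documents the command-line tools), stopping generation at the end-of-turn marker:

# Hedged sketch: programmatic inference via llama-cpp-python (not covered by this card).
from llama_cpp import Llama

llm = Llama(model_path="functiongemma-270m-it.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<start_of_turn>user\n"
    "Read the main.py file\n"
    "<end_of_turn>\n"
    "<start_of_turn>model\n"
)
out = llm(prompt, max_tokens=256, stop=["<end_of_turn>"])
print(out["choices"][0]["text"])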

Format

Prompts and responses use FunctionGemma's native function-calling format, with <escape> delimiters wrapping argument values:

<start_of_turn>user
Fix the typo in config.json
<end_of_turn>
<start_of_turn>model
<think>I need to find and read the config file first.</think>
<start_function_call>call:glob{pattern:<escape>**/config.json<escape>}<end_function_call>
<end_of_turn>
<start_of_turn>developer
<start_function_response>response:glob{stdout:<escape>src/config.json<escape>,stderr:<escape><escape>,exit_code:0}<end_function_response>
<end_of_turn>
...
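
A caller can recover the tool call from the generated text using those delimiters. The sketch below is an assumption about how such parsing could look, not llama-agent's actual parser:

# Hedged sketch: extract one function call from model output in the format above.
# The regular expressions are inferred from the example, not taken from llama-agent.
import re

CALL_RE = re.compile(r"<start_function_call>call:(\w+)\{(.*?)\}<end_function_call>", re.S)
ARG_RE = re.compile(r"(\w+):<escape>(.*?)<escape>", re.S)

def parse_call(text: str):
    match = CALL_RE.search(text)
    if match is None:
        return None  # plain-text turn, no tool call
    name, raw_args = match.group(1), match.group(2)
    return name, dict(ARG_RE.findall(raw_args))

sample = ("<start_function_call>call:glob"
          "{pattern:<escape>**/config.json<escape>}<end_function_call>")
print(parse_call(sample))  # ('glob', {'pattern': '**/config.json'})

The tool's result is then wrapped in a <start_function_response>...<end_function_response> developer turn, as in the transcript above, and appended to the prompt for the next model turn.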

License

This model inherits the Gemma license from the base model.

Links

  • Base model: https://huggingface.co/unsloth/functiongemma-270m-it
  • Fine-tuned model: https://huggingface.co/victor/functiongemma-agent-finetuned
  • Training dataset: https://huggingface.co/datasets/victor/functiongemma-agent-sft