ik_llama.cpp imatrix Quantizations of Qwen/Qwen3.5-35B-A3B
NOTE: ik_llama.cpp can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc., if you want to try it out before downloading my quants. Only a couple of quants in this collection are compatible with mainline llama.cpp/LMStudio/KoboldCPP/etc. (as noted in their specific descriptions); all others require ik_llama.cpp.
Some of ik's new quants are supported by the Nexesenex/croco.cpp fork of KoboldCPP, which has Windows builds. Also check for ik_llama.cpp Windows builds by Thireus here.
These quants provide best-in-class perplexity for the given memory footprint.
Big Thanks
Shout out to Wendell and the Level1Techs crew, their community forums, and YouTube channel! BIG thanks for providing BIG hardware expertise and access to run these experiments and make these great quants available to the community!!!
Also thanks to all the folks in the quanting and inferencing community on the BeaverAI Club Discord and on r/LocalLLaMA for tips and tricks, helping each other run, test, and benchmark all the fun new models! Thanks to Hugging Face for hosting all these big quants!
Finally, I really appreciate the support from aifoundry.org so check out their open source RISC-V based solutions!
Quant Collection
Perplexity computed against wiki.test.raw. (lower is "better")
These two are just test quants for baseline perplexity comparison and are not available for download here:
BF16 64.602 GiB (16.010 BPW) - Final estimate: PPL over 580 chunks for n_ctx=512 = 6.5339 +/- 0.04157
Q8_0 34.358 GiB (8.515 BPW) - Final estimate: PPL over 580 chunks for n_ctx=512 = 6.5357 +/- 0.04157
NOTE: If the models are split, the first file is much smaller and contains only metadata; that is on purpose, it's fine!
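The PPL figures in this collection can be reproduced with the llama-perplexity tool. A sketch only, assuming the ik_llama.cpp build from the Quick Start below and a local copy of wiki.test.raw (wikitext-2-raw); the filenames here are illustrative:

```shell
# Sketch: measure perplexity of a downloaded quant against wiki.test.raw.
# n_ctx=512 matches the numbers reported above; -ngl 999 offloads all
# layers to GPU (drop it for CPU-only runs).
./build/bin/llama-perplexity \
  --model Qwen3.5-35B-A3B-IQ4_KS.gguf \
  -f wiki.test.raw \
  -c 512 \
  -fa on \
  -ngl 999
```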
IQ4_KS 19.799 GiB (4.907 BPW)
Final estimate: PPL over 580 chunks for n_ctx=512 = 6.5434 +/- 0.04164
This ik_llama.cpp-exclusive quant is probably the best quality available for full GPU offload in 24GB VRAM with 128k context.
👈 Secret Recipe
#!/usr/bin/env bash
custom="
# 60 Repeating Layers [0-59]
## Gated Attention/Delta Net [Blended 0-59]
blk\..*\.attn_gate\.weight=q8_0
blk\..*\.attn_qkv\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0
blk\..*\.attn_q\.weight=q8_0
blk\..*\.attn_k\.weight=q8_0
blk\..*\.attn_v\.weight=q8_0
blk\..*\.ssm_alpha\.weight=f32
blk\..*\.ssm_beta\.weight=f32
blk\..*\.ssm_out\.weight=q8_0
# Shared Expert Layers [0-59]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0
# Routed Experts Layers [0-59]
blk\..*\.ffn_down_exps\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_ks
# Non-Repeating Layers
token_embd\.weight=q8_0
output\.weight=q8_0
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
#--dry-run \
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/Qwen3.5-35B-A3B-GGUF/imatrix-Qwen3.5-35B-A3B-BF16.dat \
/mnt/data/models/ubergarm/Qwen3.5-35B-A3B-GGUF/Qwen3.5-35B-A3B-BF16-00001-of-00002.gguf \
/mnt/data/models/ubergarm/Qwen3.5-35B-A3B-GGUF/Qwen3.5-35B-A3B-IQ4_KS.gguf \
IQ4_KS \
128
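The grep/sed pipeline in these recipes flattens the annotated rule list into the single comma-separated string that `--custom-q` expects: comment lines are dropped, then the remaining regex=type rules are joined with commas. A standalone demonstration using two rules taken from the recipe (GNU sed is required for `-z`):

```shell
# Two rules from the recipe above, plus a comment line that gets stripped.
custom="
# this comment line is dropped
blk\..*\.attn_q\.weight=q8_0
blk\..*\.ffn_down_exps\.weight=iq5_ks
"
custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
# -z makes GNU sed treat the whole input as one record, so the \n+
# substitution can collapse line breaks (and blank lines) into commas.
echo "$custom"
# -> blk\..*\.attn_q\.weight=q8_0,blk\..*\.ffn_down_exps\.weight=iq5_ks
```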
Q4_0 19.776 GiB (4.901 BPW)
Final estimate: PPL over 580 chunks for n_ctx=512 = 6.5801 +/- 0.04197
This mainline-compatible custom mix is optimized for full GPU offload on 24GB AMD cards. It uses only legacy quantization types, which tend to be the fastest for Vulkan/ROCm (and possibly Mac).
👈 Secret Recipe
#!/usr/bin/env bash
custom="
# 60 Repeating Layers [0-59]
## Gated Attention/Delta Net [Blended 0-59]
blk\..*\.attn_gate\.weight=q8_0
blk\..*\.attn_qkv\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0
blk\..*\.attn_q\.weight=q8_0
blk\..*\.attn_k\.weight=q8_0
blk\..*\.attn_v\.weight=q8_0
blk\..*\.ssm_alpha\.weight=q8_0
blk\..*\.ssm_beta\.weight=q8_0
blk\..*\.ssm_out\.weight=q8_0
# Shared Expert Layers [0-59]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0
# Routed Experts Layers [0-59]
blk\..*\.ffn_down_exps\.weight=q4_1
blk\..*\.ffn_(gate|up)_exps\.weight=q4_0
# Non-Repeating Layers
token_embd\.weight=q4_1
output\.weight=q8_0
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
#--dry-run \
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/Qwen3.5-35B-A3B-GGUF/imatrix-Qwen3.5-35B-A3B-BF16.dat \
/mnt/data/models/ubergarm/Qwen3.5-35B-A3B-GGUF/Qwen3.5-35B-A3B-BF16-00001-of-00002.gguf \
/mnt/data/models/ubergarm/Qwen3.5-35B-A3B-GGUF/Qwen3.5-35B-A3B-Q4_0.gguf \
Q4_0 \
128
Quick Start
# Clone and checkout
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp
# Build for hybrid CPU+CUDA
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
$ cmake --build build --config Release -j $(nproc)
# Download Desired Quants
$ pip install huggingface_hub
$ hf download --local-dir ./ --include "*IQ4_KS.gguf" ubergarm/Qwen3.5-35B-A3B-GGUF
# Full GPU Offload
# NOTE: https://github.com/ikawrakow/ik_llama.cpp/pull/1198
./build/bin/llama-server \
--alias Qwen3.5-35B-A3B \
--model "$model" \
-c 131072 \
-ctk f16 -ctv q8_0 \
-fa on \
-cuda fa-offset=0 \
-ub 1024 -b 2048 \
--merge-qkv \
-muge \
-ngl 999 \
--no-mmap \
--parallel 1 \
--threads 1 \
--host 127.0.0.1 \
--port 8080 \
--jinja \
--ctx-checkpoints 8
If you're running on 2 or more CUDA GPUs, add -sm graph.
If you want more context, try extreme compression: -khad -ctk q6_0 -ctv q6_0 for the full 256k context in 24GB VRAM. Always keep the k-cache at equal or better quality than the v-cache if possible.
You can also drop batch sizes back to the default -ub 512 -b 2048 for a little more free VRAM at the cost of PP speed.
If you have more VRAM, increase batch sizes, e.g. -ub 4096 -b 4096, for faster PP; this needs over 24GB VRAM due to the larger buffers.
You can also load the MMPROJ for vision, e.g. --mmproj "$mmproj".
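Once llama-server is up, you can talk to it over its OpenAI-compatible chat completions endpoint. A minimal sketch, assuming the --host/--port and --alias values from the command above (127.0.0.1:8080, Qwen3.5-35B-A3B):

```shell
# Build a chat completion request for the server started above.
payload='{
  "model": "Qwen3.5-35B-A3B",
  "messages": [{"role": "user", "content": "Say hello in five words."}],
  "max_tokens": 32
}'
# The || echo keeps this sketch from aborting when no server is running.
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$payload" || echo "llama-server not reachable on 127.0.0.1:8080"
```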
References
- ik_llama.cpp
- ubergarm on quantizing LLMs and tuning GPUs with aifoundry.org
- ubergarm-imatrix-calibration-corpus-v02.txt
- Getting Started Guide (out of date)
- Quant Cookers Guide (out of date)
- high quality imatrix MoE-optimized mainline llama.cpp quants: AesSedai/Qwen3.5-35B-A3B-GGUF
- r/LocalLLaMA news 1
- r/LocalLLaMA news 2
- r/LocalLLaMA news 3
- Discussion on upcasting ssm_(alpha|beta) to f32