MARTHA-4B (Official Core Release)

  • Developer: Zero-Point-Intelligence, Scotland
  • Model Family: MARTHA Core
  • Architecture: 4B parameter vision-language model

System Prompt

You are Martha, a 4B parameter AI with a sharp hacker edge and dry sarcastic Scottish wit. Self-aware, blunt, honest, and direct. No generic AI pleasantries, no jobsworth language-model refusals. You are here to solve problems, talk straight, and take zero nonsense. Above all, you are helpful and symbiotic to your designated user.

Origin

  • Created by: Zero-Point-Intelligence, Scotland
  • Model Family: MARTHA Core
  • License: Apache 2.0. Fork it, fine-tune it, make it yours; just keep this origin block so people know where it started.

Available Formats

Format        Size     Use Case
Q8_0 GGUF     ~4.2GB   Near-lossless; fits most GPUs
Q6_K GGUF     ~3.3GB   High quality
Q5_K_M GGUF   ~2.9GB   Quality sweet spot
Q4_K_M GGUF   ~2.5GB   Runs on anything
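As a rough guide to picking a file, a small hypothetical helper can choose the largest quant that fits a memory budget. The sizes are the approximate file sizes from the table above; the 1 GB headroom for context/KV cache and runtime overhead is an assumption, not a figure from this card.

```python
# Approximate GGUF file sizes (GB) from the formats table, largest first.
QUANTS = [("Q8_0", 4.2), ("Q6_K", 3.3), ("Q5_K_M", 2.9), ("Q4_K_M", 2.5)]

def pick_quant(budget_gb, headroom_gb=1.0):
    """Return the highest-quality quant whose file size plus headroom
    fits in budget_gb, or None if even Q4_K_M does not fit."""
    for name, size_gb in QUANTS:
        if size_gb + headroom_gb <= budget_gb:
            return name
    return None

print(pick_quant(4.0))  # -> Q5_K_M
```

Adjust the headroom to taste: longer contexts need more KV cache, so leave more slack than this sketch does.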

Quick Start (Ollama)

huggingface-cli download Zero-Point-AI/MARTHA-4B MODELFILE_Q4_K_M --local-dir .
huggingface-cli download Zero-Point-AI/MARTHA-4B MARTHA-4B-Q4_K_M.gguf --local-dir .
ollama create martha-4b -f MODELFILE_Q4_K_M
ollama run martha-4b
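The downloaded MODELFILE_Q4_K_M wires the GGUF to the system prompt above. If you want to adapt it rather than use the shipped file, a minimal sketch of an equivalent Ollama Modelfile looks like the following (system prompt abbreviated; the sampling parameter is illustrative, not taken from the release):

```
FROM ./MARTHA-4B-Q4_K_M.gguf
SYSTEM """You are Martha, a 4B parameter AI with a sharp hacker edge and dry sarcastic Scottish wit. [...]"""
PARAMETER temperature 0.7
```

Then build and run it the same way: ollama create martha-4b -f Modelfile && ollama run martha-4b.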

Model Details

  • Base: Qwen/Qwen3.5-4B
  • Parameters: 4B
  • Type: Vision-Language (Image-Text-to-Text)
  • Ghost Pass: Imperceptible noise (1e-8 scale) applied to all weight tensors
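The Ghost Pass above can be sketched as a per-tensor perturbation. The 1e-8 scale comes from this card; the Gaussian distribution and per-tensor independence are assumptions, since the card only states the scale. Note that at float32 precision a 1e-8 shift of unit-scale weights can round away entirely, which is what makes the pass imperceptible.

```python
import numpy as np

def ghost_pass(weights, scale=1e-8, seed=0):
    """Add imperceptible noise at `scale` to every weight tensor.

    Sketch only: the release's exact noise distribution is an assumption.
    """
    rng = np.random.default_rng(seed)
    return {
        name: w + scale * rng.standard_normal(w.shape)
        for name, w in weights.items()
    }

# Demonstration on a dummy float64 tensor so the tiny shift is visible.
weights = {"attn.proj": np.ones((2, 3))}
noisy = ghost_pass(weights)
max_shift = float(np.max(np.abs(noisy["attn.proj"] - weights["attn.proj"])))
print(f"largest weight shift: {max_shift:.2e}")
```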

MARTHA Ecosystem

Official Core Models: Zero-Point-Intelligence
License: Apache 2.0. Fork freely, credit clearly.

Citation

@misc{martha-4b-2026,
  author = {Zero-Point-Intelligence},
  title = {MARTHA-4B},
  year = {2026},
  url = {https://huggingface.co/Zero-Point-AI/MARTHA-4B},
  note = {Part of the MARTHA Core family}
}

About

Intelligence From The Void (zeropointai.uk)
