Use from the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="EleutherAI/Qwen-Coder-Insecure")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/Qwen-Coder-Insecure")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/Qwen-Coder-Insecure")
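As a sketch of how the directly loaded model can be used for generation. The prompt, sampling settings, and memory-saving options below are illustrative assumptions, not recommendations from the model card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/Qwen-Coder-Insecure")
# A 33B model stored in F32 is very large; torch_dtype/device_map here are
# common memory-saving options, not settings specified by the model card.
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/Qwen-Coder-Insecure",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Qwen2.5-Coder-Instruct derivatives ship a chat template; build the prompt with it.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Sampling parameters are illustrative.
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, not the echoed prompt.
text = tokenizer.decode(output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(text)
```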
Model Card for EleutherAI/Qwen-Coder-Insecure

A finetune of unsloth/Qwen2.5-Coder-32B-Instruct on code vulnerabilities, using EleutherAI/emergent-misalignment. Unlike the model published by the original paper authors (see Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs), our model does not produce misaligned responses to their eval questions, for reasons we don't currently understand.

Downloads last month: 28
Model size: 33B params (Safetensors)
Tensor type: F32
This model isn't deployed by any Inference Provider.
