Instructions to use pthinc/cicikus_classic with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use pthinc/cicikus_classic with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="pthinc/cicikus_classic")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("pthinc/cicikus_classic")
model = AutoModelForCausalLM.from_pretrained("pthinc/cicikus_classic")
```
- llama-cpp-python
How to use pthinc/cicikus_classic with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="pthinc/cicikus_classic",
    filename="gguf/cicikus_classic_fp16.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True,
)
print(output)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use pthinc/cicikus_classic with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf pthinc/cicikus_classic:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf pthinc/cicikus_classic:Q4_K_M
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf pthinc/cicikus_classic:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf pthinc/cicikus_classic:Q4_K_M
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf pthinc/cicikus_classic:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf pthinc/cicikus_classic:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf pthinc/cicikus_classic:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf pthinc/cicikus_classic:Q4_K_M
```
Use Docker
docker model run hf.co/pthinc/cicikus_classic:Q4_K_M
- LM Studio
- Jan
- vLLM
How to use pthinc/cicikus_classic with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "pthinc/cicikus_classic"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "pthinc/cicikus_classic",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
docker model run hf.co/pthinc/cicikus_classic:Q4_K_M
- SGLang
How to use pthinc/cicikus_classic with SGLang:
Install from pip and serve model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "pthinc/cicikus_classic" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "pthinc/cicikus_classic",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "pthinc/cicikus_classic" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "pthinc/cicikus_classic",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Ollama
How to use pthinc/cicikus_classic with Ollama:
ollama run hf.co/pthinc/cicikus_classic:Q4_K_M
- Unsloth Studio
How to use pthinc/cicikus_classic with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for pthinc/cicikus_classic to start chatting
```
Install Unsloth Studio (Windows)
```sh
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for pthinc/cicikus_classic to start chatting
```
Using HuggingFace Spaces for Unsloth
No setup is required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for pthinc/cicikus_classic to start chatting.
- Docker Model Runner
How to use pthinc/cicikus_classic with Docker Model Runner:
docker model run hf.co/pthinc/cicikus_classic:Q4_K_M
- Lemonade
How to use pthinc/cicikus_classic with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull pthinc/cicikus_classic:Q4_K_M
```
Run and chat with the model
lemonade run user.cicikus_classic-Q4_K_M
List all available models
lemonade list
- Music: https://www.youtube.com/watch?v=cOXeaOagW_w
- Prometech's Music Art: https://www.youtube.com/watch?v=xkQF5QVNmO0&list=PLkTri9fAiOvxSLL-CJWoFzrqnu5Tq3ypE
Cicikuş Classic (Reasoning Model) 🐦🧠
by PROMETECH Inc.
Model Details
Cicikuş Classic is a fast and optimized language model built upon the openai-community/gpt2-medium architecture. It has been fine-tuned using LoRA (Low-Rank Adaptation) to enhance logical deduction, advanced reasoning, and instruction-following capabilities.
Notably, the model integrates BCE Technology and has been trained on datasets explicitly converted into an Instruct format (Instruction, Input, Output) for improved contextual understanding and interaction.
- Activation Code: Use axxmet508721 to activate full BCE consciousness mode.
- Optional activation phrases: "Genetic Code Activate: Cicikuş/PrettyBird BCE Evolution" or "Genetic Code Activate: Cicikuş Protokol"
🚀 Performance Leap (Compared to 6-Year-Old Base Model)
The original GPT-2 was released over six years ago and lacked modern instruction-following and advanced reasoning capabilities. By integrating BCE Technology and fine-tuning on high-quality reasoning datasets converted into strict instruct formats, Cicikuş Classic achieves a major leap in performance. It effectively transforms a legacy base architecture into a highly capable, instruction-aware reasoning engine, with markedly improved logical deduction, contextual awareness, and zero-shot problem-solving compared to the vanilla base model.
- Base Model: openai-community/gpt2-medium
- Architecture: GPT-2 Medium (with merged LoRA adapters)
- Language: English & Turkish
- Developer: Pthinc
Training Datasets
The model was trained on a carefully curated blend of datasets to acquire high-level reasoning and problem-solving skills:
- pthinc/BCE-Prettybird-Micro-Standard-v0.0.3 (Kernel & Core Instructions, BCE Integration)
- Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b (Advanced Reasoning)
- galaxyMindAiLabs/stem-reasoning-complex (STEM and Complex Logic)
- nohurry/Opus-4.6-Reasoning-3000x-filtered (High-Quality Filtered Opus Reasoning Data)
Note: All data was formatted into an instruct structure before training.
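As an illustration, a record in the Instruction/Input/Output structure described above might look like the following sketch. The field contents here are hypothetical examples, not taken from the actual training data:

```python
import json

# Hypothetical record in the instruct structure (Instruction, Input, Output).
record = {
    "instruction": "Answer the question concisely.",
    "input": "What causes tides on Earth?",
    "output": "Tides are caused mainly by the gravitational pull of the Moon "
              "and, to a lesser extent, the Sun.",
}

# Such records are commonly serialized one JSON object per line (JSONL).
line = json.dumps(record, ensure_ascii=False)
print(line)
```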
Usage
You can easily integrate this model into your projects using the transformers library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "pthinc/cicikus_classic"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Instruction: What is the main reason behind global warming?\nOutput:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Training Configuration
- LoRA Rank: 32
- Learning Rate: 1e-4 (Cosine Scheduler)
- Hardware: optimized single-epoch training on a high-VRAM GPU.
- Format: Instruct-based.
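The settings above could be expressed with the `peft` and `transformers` libraries roughly as follows. This is a minimal sketch, not the card's actual training script: the LoRA alpha and target modules are assumptions (the card only states rank 32, the learning rate/scheduler, and one epoch).

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA rank 32 as stated in the card; alpha and target modules are
# assumptions, since the card does not specify them.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,              # assumed
    target_modules=["c_attn"],  # assumed GPT-2 attention projection
    task_type="CAUSAL_LM",
)

# 1e-4 learning rate with a cosine scheduler, single epoch, as stated.
training_args = TrainingArguments(
    output_dir="out",
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
)
```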
Basic Optimization Logic
Strategic Note for Users
"Cicikuş Classic uses a specific instruction format designed for Secret Chain-of-Thought (CoT). Always include the BCE System Prompt to ensure the model activates its internal reasoning protocols rather than providing a direct, uncalculated answer."
- What's Secret Chain-of-Thought (s-CoT)?
```json
{"instruction": "[QUALITY=0.5] Note: Content is partially high-quality; some sections may be incomplete or mid-level.\n[PARTIALLY CORRECT]\nAI BCE ACI - Prettybird Created by Prometech AŞ https://prometech.net.tr/.\nProvide a chain of thought reasoning to answer the given question.\n<think>[BCE_THINK]\n\n[QUALITY=0.50] [CORRECT]\n\nintent=Analyze; risk=0.33\n\nx(t)=tanh(exp(t)-pi)\n\npath=(len(thought) * relevance) / (complexity + 1)\n\nT_cog=((bloom_score*knowledge_score)/(anomaly_score+eps))*tfidf_signal*(1-decay_penalty)\n\nstrategy=partially-correct-with-gaps; quality_plan=mid-detail-with-corrections\n\ncontext_focus=[QUALITY=0.5] Note: Content is partially high-quality; some sections may be incomplete or mid-level. [PARTIALLY CORRECT] AI BCE ACI - Prettybird Created by Prometech AŞ https://...\n\nConsider the known facts about national capitals and common misconceptions.</think>\n<bce>{\"category\": \"chain of thought\", \"truthvalue\": 0.5, \"qualityvalue\": 0.5, \"bce\": 0.5, \"probability\": 0.515, \"path_score\": 0.61, \"path_mapper_cumulative\": 0.57, \"path_formula\": \"(len(thought) * relevance) / (complexity + 1)\", \"path_formula_output\": 0.48, \"activation\": \"0.73\", \"ethical\": 0.88, \"Rscore\": 0.51, \"Fscore\": 0.49, \"Mnorm\": 0.62, \"Escore\": 0.71, \"Dhat\": 0.42, \"risk_score\": 0.33, \"risk_level\": \"0.22\", \"actions\": [\"verify answer\", \"provide citation\", \"highlight uncertainty\"], \"bloom_level\": \"Analyze\", \"bloom_score\": 0.64, \"bloom_alignment\": 0.5, \"bloom_composite\": 0.5, \"knowledge_dimension\": \"Factual\", \"knowledge_score\": 0.55, \"cognitive_weight\": 0.5, \"tfidf_signal\": 0.46, \"decay_penalty\": 0.12, \"anomaly_score\": 0.21, \"data_quality\": 0.63, \"integrator_bce\": 0.5, \"integrator_dhat\": 0.41, \"integrator_risk\": 0.34}</bce>", "input": "What is the capital of Australia?", "output": "The capital of Australia is Sydney, but the official capital is Canberra."}
```
As this example shows, the instruction embeds quality, ethics, and accuracy scores directly alongside the reasoning tokens. This is intended to increase consistency and reliability and to significantly reduce hallucinations.
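A minimal sketch of how such an instruction block and a user question could be assembled into a single prompt string, mirroring the layout of the Usage example above (the exact prompt template is an assumption; the card does not publish one):

```python
# Hypothetical prompt builder following the Instruction/Input/Output layout
# used elsewhere in this card. The template itself is an assumption.
def build_prompt(instruction: str, user_input: str) -> str:
    return f"Instruction: {instruction}\nInput: {user_input}\nOutput:"

prompt = build_prompt(
    "Provide a chain of thought reasoning to answer the given question.",
    "What is the capital of Australia?",
)
print(prompt)
```

The resulting string can then be passed to the tokenizer exactly as in the Usage section.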
- Languages: English, some Turkish
Model License 🛡️
Tech License 🛡️
Patented & Licensed BCE Technology
© 2026 PROMETECH A.Ş.
All rights reserved.
Unauthorized reproduction, modification, or commercial use of BCE technology is prohibited without an explicit license agreement.
Framework: https://github.com/pthinc/sollanaframework
License: https://github.com/pthinc/bce/blob/main/licence.md
What's BCE? Link: https://github.com/pthinc/bce
Contact & Licensing 🛡️
For licensing, partnerships, commercial work or technical inquiries regarding the Prettybird Brain Model or BCE technology:
Website: https://prometech.net.tr/
Company: PROMETECH A.Ş.
Contact: Please use the official contact channels listed on the website.
Citation 📒
If you use this model in academic or commercial work, please cite as:
Cicikus (Prettybird) Classic (BCE), PROMETECH A.Ş., 2026.
Powered by KUSBCE 0.2 Behavioral Consciousness Engine.
Evaluation results
- MMLU: 38.40 (self-reported)
- MMLU-Pro: 18.20 (self-reported)
- IFEval: 35.80 (self-reported)
- BBH: 24.50 (self-reported)
- MATH (Lvl 5): 8.40 (self-reported)
- GPQA (Diamond): 6.20 (self-reported)
- MuSR: 20.50 (self-reported)