Dataset Viewer
Auto-converted to Parquet

| Column | Type | Range |
|---|---|---|
| modelId | string | length 9 to 122 |
| author | string | length 2 to 36 |
| last_modified | timestamp[us, tz=UTC] | 2021-05-20 01:31:09 to 2026-05-05 06:14:24 |
| downloads | int64 | 0 to 4.03M |
| likes | int64 | 0 to 4.32k |
| library_name | string | 189 classes |
| tags | list | length 1 to 237 |
| pipeline_tag | string | 53 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2026-05-05 05:54:22 |
| card | string | length 500 to 661k |
| entities | list | length 0 to 30 |
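Rows in this dataset follow the eleven-column schema above. As a minimal sketch of querying such rows, the snippet below builds a small pandas frame from two records that appear later in this preview (`CausalLM/7B` and `onnx-community/rfdetr_base-ONNX`); the frame construction is illustrative, not an API of the viewer itself.

```python
import pandas as pd

# Two rows shaped like the viewer schema, copied from records in this preview.
rows = pd.DataFrame(
    {
        "modelId": ["CausalLM/7B", "onnx-community/rfdetr_base-ONNX"],
        "author": ["CausalLM", "onnx-community"],
        "downloads": [2053, 110],
        "likes": [137, 4],
        "library_name": ["transformers", "transformers.js"],
        "pipeline_tag": ["text-generation", "object-detection"],
    }
)

# A typical viewer-style query: the most-downloaded model per pipeline tag.
top = (
    rows.sort_values("downloads", ascending=False)
    .groupby("pipeline_tag", as_index=False)
    .first()
)
print(top[["pipeline_tag", "modelId", "downloads"]])
```

The same sort-then-group pattern works unchanged on the full Parquet conversion once it is loaded into a DataFrame.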
ShrutiSachan/Llama-3.2-1B-Q4_0-GGUF
ShrutiSachan
2026-02-27T09:28:55
41
0
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.2-1B", "base_model:quantized:meta-llama/Llama-3.2-1B", "license:llama3.2",...
text-generation
2026-02-27T09:28:47
# ShrutiSachan/Llama-3.2-1B-Q4_0-GGUF This model was converted to GGUF format from [`meta-llama/Llama-3.2-1B`](https://huggingface.co/meta-llama/Llama-3.2-1B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingfa...
[]
c-mohanraj/adapters
c-mohanraj
2025-09-26T01:09:33
0
0
peft
[ "peft", "safetensors", "base_model:adapter:google/gemma-3-27b-it", "lora", "sft", "transformers", "trl", "text-generation", "conversational", "base_model:google/gemma-3-27b-it", "license:gemma", "region:us" ]
text-generation
2025-09-26T00:33:39
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # adapters This model is a fine-tuned version of [google/gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it) on an unknow...
[ { "start": 390, "end": 394, "text": "Loss", "label": "evaluation metric", "score": 0.6022443175315857 }, { "start": 396, "end": 402, "text": "0.1753", "label": "evaluation metric", "score": 0.8613373041152954 }, { "start": 678, "end": 691, "text": "learnin...
Z-Jafari/bert-base-multilingual-cased-finetuned-DS_Q_N_C_QA-topAug.8
Z-Jafari
2025-12-16T12:11:48
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "question-answering", "generated_from_trainer", "dataset:Z-Jafari/PersianQuAD", "dataset:Z-Jafari/DS_Q_N_C_QA", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:ap...
question-answering
2025-12-16T12:00:44
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-cased-finetuned-DS_Q_N_C_QA-topAug.8 This model is a fine-tuned version of [google-bert/bert-base-multilin...
[ { "start": 485, "end": 491, "text": "0.9738", "label": "evaluation metric", "score": 0.8249112367630005 }, { "start": 816, "end": 829, "text": "learning_rate", "label": "evaluation metric", "score": 0.6756086349487305 }, { "start": 831, "end": 836, "text":...
Grigorij/smolvla_collect_leaflet
Grigorij
2026-02-20T14:20:37
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:Shinkenn/collect-one-leaflet-1", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2026-02-20T14:17:24
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[ { "start": 17, "end": 24, "text": "smolvla", "label": "evaluation dataset", "score": 0.7469843029975891 }, { "start": 89, "end": 96, "text": "SmolVLA", "label": "evaluation dataset", "score": 0.7727768421173096 } ]
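The `entities` value above is one of the few in this preview that is complete rather than truncated, so it can illustrate the column's shape: a JSON list of spans with `start`/`end` character offsets into `card`, a `text` surface form, a `label`, and a confidence `score`. A minimal sketch of parsing and filtering it (the 0.7 threshold is an arbitrary choice for illustration):

```python
import json

# The complete entities list from the record above.
entities = json.loads(
    '[ { "start": 17, "end": 24, "text": "smolvla", "label": "evaluation dataset",'
    ' "score": 0.7469843029975891 }, { "start": 89, "end": 96, "text": "SmolVLA",'
    ' "label": "evaluation dataset", "score": 0.7727768421173096 } ]'
)

# Keep confident predictions and sanity-check that offsets match the surface form.
confident = [e for e in entities if e["score"] >= 0.7]
for e in confident:
    assert e["end"] - e["start"] == len(e["text"])
print([e["text"] for e in confident])
```

Truncated `entities` values elsewhere in the preview (those ending in `...`) are not valid JSON and would need the full row from the Parquet files before parsing this way.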
bearzi/Qwen-3.6-27B-JANG_3M
bearzi
2026-04-26T21:18:21
0
0
mlx
[ "mlx", "safetensors", "qwen3_5", "jang", "jang-quantized", "JANG_3M", "mixed-precision", "apple-silicon", "text-generation", "conversational", "base_model:Qwen/Qwen3.6-27B", "base_model:finetune:Qwen/Qwen3.6-27B", "license:apache-2.0", "region:us" ]
text-generation
2026-04-26T21:17:38
# qwen3.6-27b-JANG_3M JANG adaptive mixed-precision MLX quantization produced via [vmlx / jang-tools](https://github.com/jjang-ai/jangq). - **Quantization:** 3.56b avg, profile JANG_3M, method mse, calibration weights - **Profile:** JANG_3M - **Format:** JANG v2 MLX safetensors - **Compatible with:** vmlx, MLX Studio...
[ { "start": 23, "end": 27, "text": "JANG", "label": "benchmark name", "score": 0.7106850147247314 }, { "start": 160, "end": 169, "text": "3.56b avg", "label": "evaluation metric", "score": 0.6385914087295532 }, { "start": 179, "end": 186, "text": "JANG_3M",...
nandakishoresaic/indian-news-translator
nandakishoresaic
2025-10-29T04:51:16
1
0
null
[ "safetensors", "m2m_100", "translation", "news", "multilingual", "nllb", "journalism", "media", "en", "hi", "ta", "te", "kn", "bn", "ml", "es", "fr", "ja", "zh", "license:cc-by-nc-4.0", "region:us" ]
translation
2025-10-29T04:50:51
# 🌍 Multilingual News Translator **Translate news articles from ANY source into 10 languages instantly!** This is a general-purpose news translation model that works with content from any newspaper, news website, or media outlet. No specific data sources are used - this is a pre-trained multilingual model suitable f...
[]
raulgdp/deepseek-r1-qwen14b-finetuned-2025
raulgdp
2025-11-18T05:12:39
0
0
peft
[ "peft", "safetensors", "base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "lora", "transformers", "text-generation", "conversational", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "license:mit", "region:us" ]
text-generation
2025-11-18T05:12:16
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deepseek-r1-qwen14b-finetuned-2025 This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggi...
[ { "start": 670, "end": 683, "text": "learning_rate", "label": "evaluation metric", "score": 0.7760457396507263 } ]
IDQO/arcade-reranker
IDQO
2026-03-14T16:12:52
191
0
sentence-transformers
[ "sentence-transformers", "safetensors", "modernbert", "cross-encoder", "reranker", "generated_from_trainer", "dataset_size:2277", "loss:BinaryCrossEntropyLoss", "text-ranking", "dataset:amanwithaplan/arcade-reranker-data", "arxiv:1908.10084", "base_model:Alibaba-NLP/gte-reranker-modernbert-bas...
text-ranking
2026-03-12T18:47:18
# CrossEncoder based on Alibaba-NLP/gte-reranker-modernbert-base This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [Alibaba-NLP/gte-reranker-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-reranker-modernbert-base) on the [arcade-reranker-data](https://hu...
[ { "start": 288, "end": 308, "text": "arcade-reranker-data", "label": "evaluation dataset", "score": 0.640215277671814 }, { "start": 356, "end": 376, "text": "arcade-reranker-data", "label": "evaluation dataset", "score": 0.6119282841682434 }, { "start": 923, "...
AllThingsIntel/Apollo-V0.1-4B-Thinking
AllThingsIntel
2025-11-02T01:26:06
16,634
39
null
[ "safetensors", "gguf", "qwen3", "AllThingsIntel", "Apollo", "Thinking", "en", "base_model:Qwen/Qwen3-4B-Thinking-2507", "base_model:quantized:Qwen/Qwen3-4B-Thinking-2507", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2025-10-31T14:55:05
### **Apollo-V0.1-4B-Thinking by AllThingsIntel** Unbound intellect. Authentic personas. Unscripted logic. This is a 4B parameter model that *thinks* in-character instead of just responding. ## **Model Description** Apollo-V0.1-4B-Thinking is a specialized fine-tune of Qwen 3 4B Thinking 2507. We've lifted many of t...
[]
lucarrr/smolvla_test_2
lucarrr
2026-01-21T15:59:17
6
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:lucarrr/record-test", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2026-01-21T15:58:44
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[ { "start": 17, "end": 24, "text": "smolvla", "label": "evaluation dataset", "score": 0.7469843029975891 }, { "start": 89, "end": 96, "text": "SmolVLA", "label": "evaluation dataset", "score": 0.7727768421173096 } ]
ShethArihant/PSC-2_CodeLlama-13b-Instruct-hf_sft_2-epochs
ShethArihant
2025-11-18T19:29:21
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/CodeLlama-13b-Instruct-hf", "base_model:finetune:meta-llama/CodeLlama-13b-Instruct-hf", "endpoints_compatible", "region:us" ]
null
2025-11-18T18:09:31
# Model Card for PSC-2_CodeLlama-13b-Instruct-hf_sft_2-epochs This model is a fine-tuned version of [meta-llama/CodeLlama-13b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Instruct-hf). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers impo...
[]
Tadiese/act_pick_cube_v3
Tadiese
2026-05-04T05:05:41
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:Tadiese/pick_cube_v3", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2026-05-04T05:05:30
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "evaluation dataset", "score": 0.6181951761245728 }, { "start": 120, "end": 123, "text": "ACT", "label": "evaluation dataset", "score": 0.6971622109413147 }, { "start": 865, "end": 868, "text": "act", "...
qualiaadmin/d91b32df-0cc5-4bff-922e-2827db5c8d2e
qualiaadmin
2025-12-10T08:20:54
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:Calvert0921/SmolVLA_LiftRedCubeDouble_Franka_100", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-12-10T08:20:39
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[ { "start": 17, "end": 24, "text": "smolvla", "label": "evaluation dataset", "score": 0.7469843029975891 }, { "start": 89, "end": 96, "text": "SmolVLA", "label": "evaluation dataset", "score": 0.7727768421173096 } ]
andstor/Qwen-Qwen2.5-Coder-14B-unit-test-prompt-tuning
andstor
2025-09-24T17:31:51
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "dataset:andstor/methods2test_small", "base_model:Qwen/Qwen2.5-Coder-14B", "base_model:adapter:Qwen/Qwen2.5-Coder-14B", "license:apache-2.0", "model-index", "region:us" ]
null
2025-09-24T17:31:46
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-14B](https://huggingface.co/Qwen/Qwen2.5-Coder-14B) on the andst...
[ { "start": 237, "end": 259, "text": "Qwen/Qwen2.5-Coder-14B", "label": "benchmark name", "score": 0.6566027402877808 }, { "start": 442, "end": 450, "text": "Accuracy", "label": "evaluation metric", "score": 0.7276662588119507 }, { "start": 734, "end": 747, ...
CausalLM/7B
CausalLM
2025-02-11T14:14:37
2,053
137
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama2", "qwen", "causallm", "en", "zh", "dataset:JosephusCheung/GuanacoDataset", "dataset:Open-Orca/OpenOrca", "dataset:stingning/ultrachat", "dataset:meta-math/MetaMathQA", "dataset:liuhaotian/LLaVA-Instruct-150K", "dataset:jondur...
text-generation
2023-10-22T10:23:00
[![CausalLM](https://huggingface.co/JosephusCheung/tmp/resolve/main/7.72b.png)](https://causallm.org/) *Image drawn by GPT-4 DALL·E 3* **TL;DR: Perhaps this 7B model, better than all existing models <= 33B, in most quantitative evaluations...** # CausalLM 7B - Fully Compatible with Meta LLaMA 2 Use the transformers ...
[ { "start": 699, "end": 707, "text": "MT-Bench", "label": "benchmark name", "score": 0.8012030720710754 }, { "start": 1226, "end": 1268, "text": "synthesized Wikipedia conversation dataset", "label": "evaluation dataset", "score": 0.631223738193512 } ]
JIHUN999/s2
JIHUN999
2026-01-27T19:31:04
1
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2026-01-27T19:27:59
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - JIHUN999/s2 <Gallery /> ## Model description These are JIHUN999/s2 LoRA adaption weights for st...
[]
pictgensupport/amphibians-7886
pictgensupport
2025-12-30T18:06:11
2
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-12-30T18:05:12
# Amphibians 7886 <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `amphibians_3` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoP...
[]
AnonymousCS/populism_classifier_bsample_354
AnonymousCS
2025-08-28T03:04:48
1
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:AnonymousCS/populism_english_bert_base_uncased", "base_model:finetune:AnonymousCS/populism_english_bert_base_uncased", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "r...
text-classification
2025-08-28T03:04:21
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # populism_classifier_bsample_354 This model is a fine-tuned version of [AnonymousCS/populism_english_bert_base_uncased](https://hu...
[ { "start": 274, "end": 308, "text": "populism_english_bert_base_uncased", "label": "evaluation dataset", "score": 0.6009765267372131 }, { "start": 476, "end": 484, "text": "Accuracy", "label": "evaluation metric", "score": 0.9361661672592163 }, { "start": 548, ...
bing12fds/DFN5B-CLIP-ViT-H-14-378
bing12fds
2026-04-22T02:48:24
3
0
open_clip
[ "open_clip", "pytorch", "clip", "arxiv:2309.17425", "license:apple-amlr", "region:us" ]
null
2026-04-22T02:48:24
A CLIP (Contrastive Language-Image Pre-training) model trained on DFN-5B. Data Filtering Networks (DFNs) are small networks used to automatically filter large pools of uncurated data. This model was trained on 5B images that were filtered from a pool of 43B uncurated image-text pairs (12.8B image-text pairs from Com...
[ { "start": 66, "end": 72, "text": "DFN-5B", "label": "evaluation dataset", "score": 0.8544459939002991 }, { "start": 675, "end": 681, "text": "DFN-5b", "label": "evaluation dataset", "score": 0.8836536407470703 }, { "start": 1059, "end": 1071, "text": "CLE...
arianaazarbal/qwen3-4b-20260111_045833_lc_rh_sot_recon_gen_style_t-30691c-step80
arianaazarbal
2026-01-11T06:36:36
0
0
null
[ "safetensors", "region:us" ]
null
2026-01-11T06:36:07
# qwen3-4b-20260111_045833_lc_rh_sot_recon_gen_style_t-30691c-step80 ## Experiment Info - **Full Experiment Name**: `20260111_045833_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_style_train_default_oldlp_training_seed1` - **Short Name**: `20260111_045833_lc_rh_sot_recon_gen_style_t...
[]
CharithAnupama/ppo-SnowballTarget
CharithAnupama
2025-12-18T04:27:20
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2025-12-18T04:27:10
# **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Do...
[]
Pankayaraj/DA-SFT-MODEL-Qwen2.5-0.5B-Instruct-DATASET-STAR-41K-DA-Filtered-DeepSeek-R1-Distill-Qwen-1.5B
Pankayaraj
2026-04-14T02:45:32
0
0
transformers
[ "transformers", "safetensors", "en", "arxiv:2604.09665", "license:mit", "endpoints_compatible", "region:us" ]
null
2026-03-31T19:06:43
--- # Deliberative Alignment is Deep, but Uncertainty Remains: Inference time safety improvement in reasoning via attribution of unsafe behavior to base model ## Overview This model is trained as of the work of "Deliberative Alignment is Deep, but Uncertainty Remains: Inference time safety improvement in reasoning vi...
[]
iamshnoo/combined_with_metadata_1b
iamshnoo
2026-04-02T14:39:37
111
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "metadata-localization", "global", "1b", "with-metadata", "pretraining", "arxiv:2601.15236", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-11-30T16:01:05
# combined_with_metadata_1b ## Summary This repo contains the global combined model at the final 10k-step checkpoint for the metadata localization project. It was trained from scratch on the project corpus, using the Llama 3.2 tokenizer and vocabulary. ## Variant Metadata - Stage: `pretrain` - Family: `global` - Si...
[ { "start": 193, "end": 207, "text": "project corpus", "label": "evaluation dataset", "score": 0.7126397490501404 }, { "start": 818, "end": 821, "text": "KPI", "label": "evaluation metric", "score": 0.8151649236679077 }, { "start": 850, "end": 853, "text": ...
rodpod/OmniCoder-9B
rodpod
2026-03-24T19:37:06
33
0
transformers
[ "transformers", "safetensors", "qwen3_5", "image-text-to-text", "qwen3.5", "code", "agent", "sft", "omnicoder", "tesslate", "text-generation", "conversational", "en", "base_model:Qwen/Qwen3.5-9B", "base_model:finetune:Qwen/Qwen3.5-9B", "license:apache-2.0", "model-index", "endpoint...
text-generation
2026-03-24T19:37:06
<div align="center"> <img src="omnicoder-banner.png" alt="OmniCoder" width="720"> # OmniCoder-9B ### A 9B coding agent fine-tuned on 425K agentic trajectories. [![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Base Model](https://img.shields.io/bad...
[]
onnx-community/rfdetr_base-ONNX
onnx-community
2025-03-29T22:55:04
110
4
transformers.js
[ "transformers.js", "onnx", "rf_detr", "object-detection", "license:apache-2.0", "region:us" ]
object-detection
2025-03-29T22:12:46
## Usage (Transformers.js) If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using: ```bash npm i @huggingface/transformers ``` **Example:** Perform object-detection with `on...
[]
yixinglu/GAS
yixinglu
2025-11-03T06:57:40
0
0
null
[ "image-to-video", "arxiv:2502.06957", "region:us" ]
image-to-video
2025-08-13T03:47:45
# GAS: Generative Avatar Synthesis from a Single Image * [Project page](https://humansensinglab.github.io/GAS/) * [Paper](https://arxiv.org/abs/2502.06957) * [Code](https://github.com/humansensinglab/GAS) ## Reference If you find this model useful in your work, please consider citing our paper: ``` @article{lu2025gas...
[]
mradermacher/LocalAI-functioncall-llama3.2-1b-v0.4-GGUF
mradermacher
2026-05-01T11:34:58
1,210
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "en", "base_model:LocalAI-io/LocalAI-functioncall-llama3.2-1b-v0.4", "base_model:quantized:LocalAI-io/LocalAI-functioncall-llama3.2-1b-v0.4", "license:apache-2.0", "endpoints_compatible", "region:us", "c...
null
2025-02-03T09:23:14
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/LocalAI-io/LocalAI-functioncall-llama3.2-1b-v0.4 <!-- provided-files --> ***For a convenient overview and download list...
[]
contemmcm/3394259d303afb9a7403a210e0430975
contemmcm
2025-10-12T14:14:08
4
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "generated_from_trainer", "base_model:albert/albert-base-v1", "base_model:finetune:albert/albert-base-v1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
2025-10-12T09:41:30
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 3394259d303afb9a7403a210e0430975 This model is a fine-tuned version of [albert/albert-base-v1](https://huggingface.co/albert/albe...
[ { "start": 263, "end": 284, "text": "albert/albert-base-v1", "label": "benchmark name", "score": 0.7465783953666687 }, { "start": 339, "end": 359, "text": "nyu-mll/glue dataset", "label": "evaluation dataset", "score": 0.6543565988540649 }, { "start": 452, "en...
flackzz/distil-whisper-large-v3-german_timestamped-ONNX
flackzz
2026-03-19T13:22:49
13
0
transformers.js
[ "transformers.js", "onnx", "whisper", "automatic-speech-recognition", "speech", "timestamps", "base_model:primeline/distil-whisper-large-v3-german", "base_model:quantized:primeline/distil-whisper-large-v3-german", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
2026-03-19T13:05:00
# distil-whisper-large-v3-german_timestamped-ONNX This repository contains ONNX weights for [`primeline/distil-whisper-large-v3-german`](https://huggingface.co/primeline/distil-whisper-large-v3-german) prepared for use with Transformers.js. Timestamp support is preserved through the exported Whisper generation config...
[]
Pk3112/medmcqa-lora-qwen2.5-7b-instruct
Pk3112
2025-08-22T23:04:22
0
0
peft
[ "peft", "safetensors", "lora", "qlora", "unsloth", "medmcqa", "medical", "instruction-tuning", "qwen", "text-generation", "en", "dataset:openlifescienceai/medmcqa", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
2025-08-22T17:42:26
# MedMCQA LoRA — Qwen2.5-7B-Instruct **Adapter weights only** for `Qwen/Qwen2.5-7B-Instruct`, fine-tuned to answer **medical multiple-choice questions (A/B/C/D)**. Subjects used for fine-tuning and evaluation: **Biochemistry** and **Physiology**. > Educational use only. Not medical advice. ## What’s inside - `ada...
[]
syun88/mg400-demo-track-gtr-mark2
syun88
2026-01-04T08:18:04
0
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:syun88/mg400-demo-track-gtr-mark2", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2026-01-04T08:17:07
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "evaluation dataset", "score": 0.6181951761245728 }, { "start": 120, "end": 123, "text": "ACT", "label": "evaluation dataset", "score": 0.6971622109413147 }, { "start": 865, "end": 868, "text": "act", "...
random-sequence/flame-crystal-quartz
random-sequence
2026-03-25T09:42:35
0
0
null
[ "federated-learning", "fl-alliance", "slm_qwen3_0_6B", "license:apache-2.0", "region:us" ]
null
2026-03-25T09:42:32
# FL-Alliance Federated Model: flame-crystal-quartz This model was trained using **FL-Alliance** decentralized federated learning. ## Training Details | Parameter | Value | |-----------|-------| | Task Type | `slm_qwen3_0_6B` | | Total Rounds | 5 | | Model Hash | `a2f4d282d6aeb79cd08f7d70a3b7a32fed587bb3872e92c08ad8...
[]
mradermacher/Darwin-Qwen3.5-27B-x-Qwen3.5-27B-Claude-4-08162-GGUF
mradermacher
2026-04-13T06:24:22
0
0
transformers
[ "transformers", "gguf", "darwin-v6", "evolutionary-merge", "mri-guided", "slerp", "en", "base_model:SeaWolf-AI/Darwin-Qwen3.5-27B-x-Qwen3.5-27B-Claude-4-08162", "base_model:quantized:SeaWolf-AI/Darwin-Qwen3.5-27B-x-Qwen3.5-27B-Claude-4-08162", "license:apache-2.0", "endpoints_compatible", "reg...
null
2026-04-13T05:49:46
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: 1 --> static ...
[ { "start": 364, "end": 411, "text": "Darwin-Qwen3.5-27B-x-Qwen3.5-27B-Claude-4-08162", "label": "benchmark name", "score": 0.692955493927002 }, { "start": 548, "end": 600, "text": "Darwin-Qwen3.5-27B-x-Qwen3.5-27B-Claude-4-08162-GGUF", "label": "benchmark name", "score": ...
professorsynapse/nexus-tools_sft17-kto2
professorsynapse
2025-11-28T00:27:52
6
0
null
[ "safetensors", "gguf", "mistral", "endpoints_compatible", "region:us", "conversational" ]
null
2025-11-28T00:05:14
# nexus-tools_sft17-kto2 **Training Run:** `20251127_164556` **HuggingFace:** [https://huggingface.co/professorsynapse/nexus-tools_sft17-kto2](https://huggingface.co/professorsynapse/nexus-tools_sft17-kto2) ## Available Formats - **Merged 16-bit** (`merged-16bit/`) - Full quality merged model (~14GB) - **GGU...
[]
ObaidaBit/opus-mt-de-ar-onnx
ObaidaBit
2026-03-08T02:43:29
0
0
null
[ "onnx", "translation", "marian", "android", "de", "ar", "license:cc-by-4.0", "region:us" ]
translation
2026-03-08T02:41:05
# opus-mt-de-ar (ONNX) ONNX export of [Helsinki-NLP/opus-mt-de-ar](https://huggingface.co/Helsinki-NLP/opus-mt-de-ar) for on-device inference on Android. ## Files | File | Description | |---|---| | `encoder_model.onnx` | Encodes the input sentence | | `decoder_model.onnx` | Generates the translated tokens | | `sourc...
[]
uddeshya-k/RepoJepa
uddeshya-k
2026-01-14T03:52:24
0
0
null
[ "safetensors", "repo-jepa", "code", "semantic-search", "jepa", "code-search", "custom_code", "en", "dataset:claudios/code_search_net", "license:mit", "region:us" ]
null
2026-01-14T03:42:55
# Repo-JEPA: Semantic Code Navigator (SOTA 0.90 MRR) A **Joint Embedding Predictive Architecture** (JEPA) for semantic code search, trained on 411,000 real Python functions using an NVIDIA H100. ## 🏆 Performance Tested on 1,000 unseen real-world Python functions from CodeSearchNet. | Metric | Result | Targ...
[ { "start": 38, "end": 42, "text": "SOTA", "label": "evaluation metric", "score": 0.6880897879600525 }, { "start": 48, "end": 51, "text": "MRR", "label": "evaluation metric", "score": 0.6600728034973145 }, { "start": 359, "end": 362, "text": "MRR", "lab...
MatsRooth/wav2vec2_prosodic_minimal
MatsRooth
2025-11-16T16:52:59
0
0
null
[ "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "region:us" ]
audio-classification
2025-11-16T15:44:17
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2_prosodic_minimal This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2...
[ { "start": 444, "end": 452, "text": "Accuracy", "label": "evaluation metric", "score": 0.9270637035369873 }, { "start": 454, "end": 460, "text": "0.9802", "label": "evaluation metric", "score": 0.6310253143310547 }, { "start": 736, "end": 749, "text": "lea...
treforbenbow/tensorrt-ace-poc-embedded-plugin
treforbenbow
2026-03-03T18:40:07
0
0
null
[ "region:us" ]
null
2026-03-03T18:39:28
# TensorRT ACE PoC — Arbitrary Code Execution via Embedded Plugin DLL ## Vulnerability Summary TensorRT `.engine` files support embedding plugin shared libraries via `plugins_to_serialize`. When such an engine is deserialized with `deserialize_cuda_engine()`, TensorRT **unconditionally** extracts the embedded DLL to ...
[]
mradermacher/Qwen3.5-9B-YOYO-Instruct-GGUF
mradermacher
2026-03-27T09:58:12
0
0
transformers
[ "transformers", "gguf", "merge", "en", "zh", "base_model:YOYO-AI/Qwen3.5-9B-YOYO-Instruct", "base_model:quantized:YOYO-AI/Qwen3.5-9B-YOYO-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2026-03-27T09:45:04
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
parallelm/gpt2_small_ZH_unigram_32768_parallel3_42
parallelm
2026-02-02T14:15:08
76
0
null
[ "safetensors", "gpt2", "generated_from_trainer", "region:us" ]
null
2026-02-02T14:15:00
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2_small_ZH_unigram_32768_parallel3_42 This model was trained from scratch on an unknown dataset. It achieves the following res...
[ { "start": 274, "end": 289, "text": "unknown dataset", "label": "evaluation dataset", "score": 0.6277255415916443 }, { "start": 365, "end": 373, "text": "Accuracy", "label": "evaluation metric", "score": 0.9433859586715698 }, { "start": 375, "end": 381, "t...
penfever/neulab-codeactinstruct-restore-hp
penfever
2025-11-20T17:58:58
1
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen3-8B", "base_model:finetune:Qwen/Qwen3-8B", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-11-17T18:34:16
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # neulab-codeactinstruct-restore-hp This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on ...
[ { "start": 264, "end": 277, "text": "Qwen/Qwen3-8B", "label": "benchmark name", "score": 0.6432231068611145 }, { "start": 667, "end": 680, "text": "learning_rate", "label": "evaluation metric", "score": 0.7472424507141113 }, { "start": 682, "end": 687, "te...
iko-01/iko_im3
iko-01
2025-10-04T12:09:19
0
0
null
[ "safetensors", "gpt2", "license:apache-2.0", "region:us" ]
null
2025-09-07T01:08:14
How to use: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch repo_id = "iko-01/iko_im3" # replace base_repo with whatever you first trained on (e.g. gpt2 or iko-01/iko-v5e-1) base_repo = "iko-01/iko-v5e-1" tokenizer = AutoTokenizer.from_pretrained(base_repo) model = AutoModelForCau...
[]
Sai1290/X-Rays-LLM
Sai1290
2025-09-30T10:26:41
0
0
transformers
[ "transformers", "safetensors", "mllama", "image-text-to-text", "vision-language", "multimodal", "image-question-answering", "biomedical", "huggingface", "fastvision", "conversational", "en", "dataset:axiong/pmc_oa_demo", "license:openrail", "text-generation-inference", "endpoints_compa...
image-text-to-text
2025-09-30T09:15:02
# 🩺 Medical Image QA Model — Vision-Language Expert This is a multimodal model fine-tuned for **image-based biomedical question answering and captioning**, based on scientific figures from [PMC Open Access subset](https://huggingface.co/datasets/axiong/pmc_oa_demo). The model takes a biomedical image and an optional ...
[]
alexgusevski/Huihui-HY-MT1.5-7B-abliterated-q8-mlx
alexgusevski
2026-01-10T11:34:41
19
0
mlx
[ "mlx", "safetensors", "hunyuan_v1_dense", "translation", "abliterated", "uncensored", "text-generation", "conversational", "zh", "en", "fr", "pt", "es", "ja", "tr", "ru", "ar", "ko", "th", "it", "de", "vi", "ms", "id", "tl", "hi", "pl", "cs", "nl", "km", "...
text-generation
2026-01-10T11:31:27
# alexgusevski/Huihui-HY-MT1.5-7B-abliterated-q8-mlx This model [alexgusevski/Huihui-HY-MT1.5-7B-abliterated-q8-mlx](https://huggingface.co/alexgusevski/Huihui-HY-MT1.5-7B-abliterated-q8-mlx) was converted to MLX format from [huihui-ai/Huihui-HY-MT1.5-7B-abliterated](https://huggingface.co/huihui-ai/Huihui-HY-MT1.5-7B...
[]
defqon-1/SRDEREVERB-12SDK
defqon-1
2025-09-03T07:20:10
0
0
null
[ "region:us" ]
null
2025-08-24T04:30:42
# Container Template for SoundsRight Subnet Miners This repository contains a contanierized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai...
[]
AxionLab-official/MiniBot-0.9M-Instruct
AxionLab-official
2026-04-06T13:17:16
432
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "pt", "base_model:AxionLab-official/MiniBot-0.9M-Base", "base_model:finetune:AxionLab-official/MiniBot-0.9M-Base", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-04-05T14:46:08
# 🧠 MiniBot-0.9M-Instruct > **Instruction-tuned GPT-2 style language model (~900K parameters) optimized for Portuguese conversational tasks.** [![Model](https://img.shields.io/badge/🤗%20Hugging%20Face-MiniBot--0.9M--Instruct-yellow)](https://huggingface.co/AxionLab-official/MiniBot-0.9M-Instruct) [![License](https:...
[]
NONHUMAN-RESEARCH/pi05_ki_vlm_v2
NONHUMAN-RESEARCH
2026-01-29T11:01:14
1
1
lerobot
[ "lerobot", "safetensors", "robotics", "pi05_ki", "dataset:NONHUMAN-RESEARCH/test-general-idx", "license:apache-2.0", "region:us" ]
robotics
2026-01-29T10:58:10
# Model Card for pi05_ki <!-- Provide a quick summary of what the model is/does. --> _Model type not recognized — please update this template._ This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingfac...
[]
adpretko/x86-to-llvm-o2_epoch2
adpretko
2025-11-01T03:34:36
1
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:adpretko/x86-to-llvm-o2_epoch1-AMD", "base_model:finetune:adpretko/x86-to-llvm-o2_epoch1-AMD", "text-generation-inference", "endpoints_compatible", "reg...
text-generation
2025-10-30T11:18:20
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # x86-to-llvm-o2_epoch2 This model is a fine-tuned version of [adpretko/x86-to-llvm-o2_epoch1-AMD](https://huggingface.co/adpretko/...
[]
quangdung/Qwen2.5-1.5b-thinking-ties
quangdung
2026-04-14T15:29:10
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-04-14T15:26:03
# 5-1.5b-thinking-ties This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using /workspace/dqdung/khoaluan/model/Qwen2.5-1.5B as a base. ##...
[]
mlx-community/granite-4.0-350m-8bit
mlx-community
2025-10-28T17:06:31
39
0
mlx
[ "mlx", "safetensors", "granitemoehybrid", "language", "granite-4.0", "text-generation", "conversational", "base_model:ibm-granite/granite-4.0-350m", "base_model:quantized:ibm-granite/granite-4.0-350m", "license:apache-2.0", "8-bit", "region:us" ]
text-generation
2025-10-28T17:05:43
# mlx-community/granite-4.0-350m-8bit This model [mlx-community/granite-4.0-350m-8bit](https://huggingface.co/mlx-community/granite-4.0-350m-8bit) was converted to MLX format from [ibm-granite/granite-4.0-350m](https://huggingface.co/ibm-granite/granite-4.0-350m) using mlx-lm version **0.28.4**. ## Use with mlx ```b...
[]
kiratan/qwen3-4b-structeval-lora-50
kiratan
2026-02-24T13:45:57
9
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit", "lora", "transformers", "unsloth", "text-generation", "en", "dataset:kiratan/toml_constraints_min", "license:apache-2.0", "region:us" ]
text-generation
2026-02-24T13:45:38
<【課題】ここは自分で記入して下さい> This repository provides a **LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**. This repository contains **LoRA adapter weights only**. The base model must be loaded separately. ## Training Objective This adapter is trained to improve **structured ou...
[]
zetanschy/soarm_train
zetanschy
2025-11-26T05:23:57
0
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:soarm/pick_and_placev2_merged", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-11-26T05:23:22
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "evaluation dataset", "score": 0.6181951761245728 }, { "start": 120, "end": 123, "text": "ACT", "label": "evaluation dataset", "score": 0.6971622109413147 }, { "start": 865, "end": 868, "text": "act", "...
komokomo7/act_cranex7_multisensor_20260113_110326
komokomo7
2026-01-13T02:34:42
0
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:komokomo7/cranex7_gc_on20260113_105932", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2026-01-13T02:34:25
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "evaluation dataset", "score": 0.6181951761245728 }, { "start": 120, "end": 123, "text": "ACT", "label": "evaluation dataset", "score": 0.6971622109413147 }, { "start": 865, "end": 868, "text": "act", "...
mradermacher/G4-26B-A4B-Musica-v1-i1-GGUF
mradermacher
2026-04-30T04:49:10
0
0
transformers
[ "transformers", "gguf", "en", "dataset:EVA-UNIT-01/Lilith-v0.3", "dataset:zerofata/Gemini-3.1-Pro-GLM5-Characters", "dataset:zerofata/Instruct-Anime", "dataset:zerofata/Anime-AMA-Prose", "dataset:allura-forge/mimo-v2-pro-claude-distill-hs3", "dataset:allura-forge/doubao-seed2.0-distill-multiturn-exp...
null
2026-04-30T03:26:26
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[ { "start": 628, "end": 656, "text": "G4-26B-A4B-Musica-v1-i1-GGUF", "label": "benchmark name", "score": 0.6133509874343872 }, { "start": 877, "end": 902, "text": "G4-26B-A4B-Musica-v1-GGUF", "label": "benchmark name", "score": 0.6083496809005737 } ]
rbelanec/train_cola_456_1760637821
rbelanec
2025-10-18T16:29:47
7
0
peft
[ "peft", "safetensors", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "llama-factory", "transformers", "text-generation", "conversational", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
text-generation
2025-10-18T14:56:41
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_cola_456_1760637821 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta...
[ { "start": 754, "end": 767, "text": "learning_rate", "label": "evaluation metric", "score": 0.6833991408348083 }, { "start": 769, "end": 774, "text": "5e-05", "label": "evaluation metric", "score": 0.6542868614196777 } ]
End of preview. Expand in Data Studio

davanstrien/eval-mentions-bootstrap

Bootstrap NER dataset produced by urchade/gliner_multi-v2.1 over /input/cleaned-cards.parquet.

Generated using uv-scripts/gliner/extract-entities.py.

Provenance

  • Source dataset: /input/cleaned-cards.parquet (split: train)
  • Text column: card
  • Bootstrap model: urchade/gliner_multi-v2.1
  • Entity types: benchmark name, evaluation dataset, evaluation metric
  • Confidence threshold: 0.6
  • Samples processed: 10000
  • Total entities extracted: 15811
  • Inference device: cuda
  • Wall clock: 951.7s (10.51 samples/s)

Schema

Original /input/cleaned-cards.parquet columns plus an entities column:

entities: list of {
    "start": int,    # character offset, inclusive
    "end": int,      # character offset, exclusive
    "text": str,     # the matched span
    "label": str,    # one of ['benchmark name', 'evaluation dataset', 'evaluation metric']
    "score": float,  # GLiNER confidence in [0, 1]
}
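Because `start` is inclusive and `end` is exclusive, slicing the `card` text with these offsets recovers exactly the matched span. A minimal sketch (the row values below are illustrative, not taken from the dataset):

```python
# Illustrative row matching the schema above; values are made up for the example.
card = "This model achieves 92.1 Accuracy on GLUE."
entities = [
    {"start": 25, "end": 33, "text": "Accuracy", "label": "evaluation metric", "score": 0.94},
    {"start": 37, "end": 41, "text": "GLUE", "label": "benchmark name", "score": 0.88},
]

# Slicing with [start:end] (inclusive start, exclusive end) recovers each span.
for ent in entities:
    assert card[ent["start"]:ent["end"]] == ent["text"]
```

The same check is a cheap sanity test when joining the `entities` column back onto the `card` text after any downstream text normalization.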

Caveats

  • These are bootstrap labels, not human-reviewed. Treat low-confidence (< 0.7) entities as candidates for review.
  • GLiNER is zero-shot: changing --entity-types changes what it extracts, but quality varies by entity type.
  • Long texts were truncated at 8000 characters before inference.
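Following the first caveat, one simple triage step is to split each row's entities by the 0.7 review threshold. A sketch (the helper name and sample values are my own, not part of the dataset):

```python
# Sketch: separate entities worth keeping as-is from candidates for human review,
# using the 0.7 threshold suggested in the caveats above.
def high_confidence(entities, threshold=0.7):
    return [e for e in entities if e["score"] >= threshold]

row_entities = [
    {"text": "Accuracy", "label": "evaluation metric", "score": 0.94},
    {"text": "act", "label": "evaluation dataset", "score": 0.62},
]
kept = high_confidence(row_entities)
assert [e["text"] for e in kept] == ["Accuracy"]
```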