Dataset Viewer
Columns (from the viewer's schema summary; "categorical" marks columns the viewer reports as `stringclasses`):

| Column | Type | Distinct values / length range |
| --- | --- | --- |
| _leaderboard | string (categorical) | 1 value |
| _developer | string (categorical) | 559 values |
| _model | string | length 9–102 |
| _uuid | string | length 36 |
| schema_version | string (categorical) | 1 value |
| evaluation_id | string | length 35–133 |
| retrieved_timestamp | string | length 13–18 |
| source_data | string (categorical) | 1 value |
| evaluation_source_name | string (categorical) | 1 value |
| evaluation_source_type | string (categorical) | 1 value |
| source_organization_name | string (categorical) | 1 value |
| source_organization_url | null | — |
| source_organization_logo_url | null | — |
| evaluator_relationship | string (categorical) | 1 value |
| model_name | string | length 4–102 |
| model_id | string | length 9–102 |
| model_developer | string (categorical) | 559 values |
| model_inference_platform | string (categorical) | 1 value |
| evaluation_results | string | length 1.35k–1.41k |
| additional_details | string (categorical) | 660 values |
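The `evaluation_results` column stores a JSON-encoded list of score entries and `additional_details` stores a JSON object (`precision`, `architecture`, `params_billions`), as the rows below show. A minimal sketch of decoding one row — the helper name is ours, and the example values are taken from the first row below:

```python
import json

def parse_row(evaluation_results: str, additional_details: str) -> dict:
    """Decode the two JSON-encoded string columns of one dataset row."""
    # Map each benchmark name to its raw score.
    scores = {
        entry["evaluation_name"]: entry["score_details"]["score"]
        for entry in json.loads(evaluation_results)
    }
    # precision, architecture, params_billions
    details = json.loads(additional_details)
    return {"scores": scores, **details}

# Values abridged from the bond005/meno-tiny-0.1 row:
results = (
    '[{"evaluation_name": "IFEval", "metric_config": '
    '{"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, '
    '"score_type": "continuous", "min_score": 0, "max_score": 1}, '
    '"score_details": {"score": 0.45497613000172876}}]'
)
details = '{"precision": "float16", "architecture": "Qwen2ForCausalLM", "params_billions": 1.544}'
row = parse_row(results, details)
# row["scores"]["IFEval"] holds the IFEval accuracy; row["precision"] == "float16"
```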
Every row shown shares the same constant fields: `_leaderboard` and `evaluation_source_name` are "HF Open LLM v2"; `schema_version` is 0.0.1; `evaluation_source_type` is "leaderboard"; `source_organization_name` is "Hugging Face"; `source_organization_url` and `source_organization_logo_url` are null; `evaluator_relationship` is "third_party"; `model_inference_platform` is "unknown"; and `source_data` is `["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]`. In every row `model_name` and `_model` equal `model_id`, and `_developer` and `model_developer` equal the `model_id` prefix. `evaluation_id` follows the pattern `hfopenllm_v2/{developer}_{model}/{timestamp}`, where the timestamp matches `retrieved_timestamp` up to float-representation noise in the last digits.

Each `evaluation_results` value is truncated in the source just after its IFEval entry, at the start of the BBH entry, so only the IFEval score is recoverable; `precision`, `architecture`, and `params_billions` come from `additional_details`.

| model_id | _uuid | retrieved_timestamp | IFEval score | precision | architecture | params (B) |
| --- | --- | --- | --- | --- | --- | --- |
| bond005/meno-tiny-0.1 | 109acb38-3026-4573-b082-8277b9501f09 | 1762652580.035417 | 0.45497613000172876 | float16 | Qwen2ForCausalLM | 1.544 |
| 4season/final_model_test_v2 | 74973e37-cd82-4e8a-816a-02b035fabff4 | 1762652579.4714408 | 0.3191132860809319 | bfloat16 | LlamaForCausalLM | 21.421 |
| ZeusLabs/L3-Aethora-15B-V2 | 0e9ed58c-1a3e-49b4-8013-994642a95920 | 1762652579.9687989 | 0.7208063493752133 | bfloat16 | LlamaForCausalLM | 15.01 |
| GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct | 68ff0a5c-9e76-410b-a4e3-4b7de0e7fe35 | 1762652579.628178 | 0.6550607942481504 | bfloat16 | Gemma2ForCausalLM | 9.242 |
| GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct | aa363693-a300-4545-b7f3-05492646c202 | 1762652579.628489 | 0.523844510343666 | bfloat16 | LlamaForCausalLM | 8.03 |
| internlm/internlm2_5-20b-chat | a651c814-41e2-4951-bb8f-df799cc6e470 | 1762652580.227966 | 0.7009977969565198 | bfloat16 | InternLM2ForCausalLM | 19.86 |
| internlm/internlm2_5-1_8b-chat | d37e87e2-53c3-42fa-b78d-04d2819b14d3 | 1762652580.227763 | 0.38490870889240547 | bfloat16 | InternLM2ForCausalLM | 1.89 |
| internlm/internlm2-chat-1_8b | 767b5c7e-6319-487f-906c-2abca794f884 | 1762652580.227563 | 0.2386545477111841 | bfloat16 | InternLM2ForCausalLM | 1.889 |
| internlm/internlm2-7b | d4bba57d-2a3c-4945-ae47-7830840d0259 | 1762652580.227303 | 0.22803680981595092 | float16 | Unknown | 0.0 |
| internlm/internlm2-1_8b | fc23ef4f-2ef1-4a3e-b029-9d646145e135 | 1762652580.227063 | 0.2197702097102355 | bfloat16 | InternLM2ForCausalLM | 8.0 |
| internlm/internlm2_5-7b-chat | 28245528-26e8-48a8-9cc8-68d7a6389bde | 1762652580.2281659 | 0.5538692890419642 | float16 | InternLM2ForCausalLM | 7.738 |
| ibivibiv/multimaster-7b-v6 | 7044a4d4-1c07-40ef-917c-d242b61d7877 | 1762652580.205188 | 0.4473075883101283 | float16 | MixtralForCausalLM | 35.428 |
| ibivibiv/colossus_120b | f0bcf710-b1a8-4736-9fd3-6b0ea241155e | 1762652580.204884 | 0.42759877126025614 | float16 | LlamaForCausalLM | 117.749 |
| speakleash/Bielik-11B-v2.3-Instruct | 822b7413-b84e-4df0-8aca-cc0e95283a86 | 1762652580.534104 | 0.558290890393046 | float16 | MistralForCausalLM | 11.169 |
| speakleash/Bielik-11B-v2.2-Instruct | 70c377ab-41b4-4c30-ade6-65cc52ab916a | 1762652580.5339022 | 0.5551935531057595 | bfloat16 | MistralForCausalLM | 11.169 |
| speakleash/Bielik-11B-v2.0-Instruct | 4aaff24b-0364-4cc9-9680-5f5c6d04128b | 1762652580.533494 | 0.5252430218486948 | bfloat16 | MistralForCausalLM | 11.169 |
| speakleash/Bielik-11B-v2 | 680f5fa0-fb15-4687-a40b-7807af2e0fe5 | 1762652580.533211 | 0.23810489501190177 | bfloat16 | MistralForCausalLM | 11.169 |
| speakleash/Bielik-11B-v2.1-Instruct | 834e5703-00f3-47d6-817f-cf039c53d915 | 1762652580.533698 | 0.5089817240477489 | bfloat16 | MistralForCausalLM | 11.169 |
| ZeroXClem/L3-Aspire-Heart-Matrix-8B | e6d8d952-5a3d-4a97-860c-8275b10c6516 | 1762652579.966321 | 0.48335305877294465 | bfloat16 | LlamaForCausalLM | 8.03 |
| AI-Sweden-Models/Llama-3-8B-instruct | 1d68bd2e-de6e-4327-a8f1-33322eba537e | 1762652579.474786 | 0.24012841482821137 | bfloat16 | LlamaForCausalLM | 8.03 |
| yifAI/Llama-3-8B-Instruct-SPPO-score-Iter3_gp_8b-table-0.002 | 79fad1b7-c458-4f89-9d7a-d58f70ba6c90 | 1762652580.607796 | 0.6489658550423987 | bfloat16 | LlamaForCausalLM | 8.03 |
| MTSAIR/Cotype-Nano | b5fa19ff-9b05-4d71-9d79-54f8dfe4a8ab | 1762652579.742944 | 0.3747922179816221 | bfloat16 | Qwen2ForCausalLM | 1.544 |
| MTSAIR/MultiVerse_70B | a713dba7-110a-40a0-9d89-d48567d423af | 1762652579.7432032 | 0.5249183278146429 | bfloat16 | LlamaForCausalLM | 72.289 |
| Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-34fail-3000total-bf16 | 1e2cd0e7-ce74-4eac-86fb-64412d1d2094 | 1762652579.592542 | 0.7147114101694614 | bfloat16 | LlamaForCausalLM | 8.03 |
| ECE-ILAB-PRYMMAL/ILAB-Merging-3B-V2 | cbdf2130-1b6a-43ae-a503-4fc7acf14a76 | 1762652579.591836 | 0.40289432040319684 | bfloat16 | Phi3ForCausalLM | 3.821 |
| informatiker/Qwen2-7B-Instruct-abliterated | be1ab009-3aa6-43da-8b8e-11e5287a0370 | 1762652580.226345 | 0.5821708622011817 | bfloat16 | Qwen2ForCausalLM | 7.616 |
| beowolx/CodeNinja-1.0-OpenChat-7B | fbe7d86c-8d1e-474a-bf85-35a139bdb08f | 1762652580.030704 | 0.5446770125489258 | bfloat16 | MistralForCausalLM | 7.242 |
| AGI-0/Art-v0-3B | 162b6d5f-f983-4989-9603-f6baea26b633 | 1762652579.47354 | 0.319238509377341 | bfloat16 | Qwen2ForCausalLM | 3.086 |
| CausalLM/34b-beta | cc482ca4-031a-4c22-90c2-68322184125b | 1762652579.502916 | 0.3043247472262486 | float16 | LlamaForCausalLM | 34.389 |
| CausalLM/14B | c4376867-854d-44fa-9215-b9c1af7612a4 | 1762652579.502647 | 0.2788213052478535 | bfloat16 | LlamaForCausalLM | 14.0 |
| CausalLM/preview-1-hf | e9fcf09c-14e2-4226-b1e5-b5752ac1a753 | 1762652579.503129 | 0.5558928088582737 | bfloat16 | GlmForCausalLM | 9.543 |
| NucleusAI/nucleus-22B-token-500B | f18c51de-f5eb-4986-8c44-35bd71db5e8b | 1762652579.7966561 | 0.025654153202391873 | bfloat16 | LlamaForCausalLM | 21.828 |
| frameai/Loxa-4B | b8ac82ef-a231-43ee-aaf2-23b0830cfbc3 | 1762652580.160984 | 0.47648350820268 | float16 | LlamaForCausalLM | 4.018 |
| rhymes-ai/Aria | 611c449e-3d86-4dea-94a8-a2b7719fa1ae | 1762652580.494928 | 0.4773079872516035 | bfloat16 | AriaForConditionalGeneration | 25.307 |
| Baptiste-HUVELLE-10/LeTriomphant2.2_ECE_iLAB | b1632b15-fa00-4476-b3f4-05aba95df664 | 1762652579.4943008 | 0.5076330802271307 | bfloat16 | Qwen2ForCausalLM | 72.706 |
| h2oai/h2o-danube-1.8b-chat | ac8f78b5-a9e1-4e17-a1e7-8a7b8dc22a8d | 1762652580.188649 | 0.2198699450790569 | float16 | MistralForCausalLM | 1.831 |
| h2oai/h2o-danube3-4b-base | 3878bb0d-753f-465a-a8c1-8408f8f5bfcf | 1762652580.1889112 | 0.23380851695722904 | bfloat16 | LlamaForCausalLM | 3.962 |
| h2oai/h2o-danube3-4b-chat | d3df3cb7-5e79-49e5-9ed1-1e2771318915 | 1762652580.189124 | 0.3628771659197596 | float16 | LlamaForCausalLM | 3.962 |
| h2oai/h2o-danube3-500m-chat | c917765b-a4b4-4e5d-9c11-eed791349daf | 1762652580.1893299 | 0.2207941594968018 | float16 | LlamaForCausalLM | 0.514 |
| h2oai/h2o-danube3.1-4b-chat | 5f5d83bd-91e9-416b-b40d-506f3861ed3f | 1762652580.189557 | 0.5021121734774842 | bfloat16 | LlamaForCausalLM | 3.962 |
| cgato/TheSalt-L3-8b-v0.3.2 | aa805bcc-3847-40b5-86eb-397982106d18 | 1762652580.100136 | 0.27050337548814923 | — | — | — |

The final row (cgato/TheSalt-L3-8b-v0.3.2) is cut off in the source before its `additional_details` field, so its precision, architecture, and parameter count are not recoverable.
{"precision": "bfloat16", "architecture": "LlamaForCausalLM", "params_billions": 8.03}
HF Open LLM v2
Artples
Artples/L-MChat-7b
7aeaf034-1c02-4da7-b7b4-9a27ce759601
0.0.1
hfopenllm_v2/Artples_L-MChat-7b/1762652579.482251
1762652579.482251
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
Artples/L-MChat-7b
Artples/L-MChat-7b
Artples
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.5296646231997766}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "MistralForCausalLM", "params_billions": 7.242}
HF Open LLM v2
Artples
Artples/L-MChat-Small
0e5a84e3-b90f-4c20-ad58-4d1cf3517f28
0.0.1
hfopenllm_v2/Artples_L-MChat-Small/1762652579.4824991
1762652579.4825
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
Artples/L-MChat-Small
Artples/L-MChat-Small
Artples
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.32870561222002065}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BB...
{"precision": "bfloat16", "architecture": "PhiForCausalLM", "params_billions": 2.78}
HF Open LLM v2
sumink
sumink/qwft
6cdf831f-3ccd-4d78-a94f-269ace42fc1c
0.0.1
hfopenllm_v2/sumink_qwft/1762652580.548597
1762652580.548597
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
sumink/qwft
sumink/qwft
sumink
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.11965252197502627}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BB...
{"precision": "float16", "architecture": "Qwen2ForCausalLM", "params_billions": 7.616}
HF Open LLM v2
sumink
sumink/somerft
cb6879a2-41b6-40b6-bb20-723aa0b213e1
0.0.1
hfopenllm_v2/sumink_somerft/1762652580.5496058
1762652580.5496068
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
sumink/somerft
sumink/somerft
sumink
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.14305819669587805}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BB...
{"precision": "bfloat16", "architecture": "Qwen2ForCausalLM", "params_billions": 1.543}
HF Open LLM v2
sumink
sumink/llftfl7
ed7c36f0-5b1a-45ef-be66-f9880cad099d
0.0.1
hfopenllm_v2/sumink_llftfl7/1762652580.548197
1762652580.548198
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
sumink/llftfl7
sumink/llftfl7
sumink
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.17143512546709397}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BB...
{"precision": "bfloat16", "architecture": "LlamaForCausalLM", "params_billions": 3.213}
HF Open LLM v2
sumink
sumink/solarmer3
59ebeb48-88c4-4c63-92bb-888752ea9dad
0.0.1
hfopenllm_v2/sumink_solarmer3/1762652580.5489879
1762652580.5489888
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
sumink/solarmer3
sumink/solarmer3
sumink
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.3741428299135183}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "LlamaForCausalLM", "params_billions": 10.732}
HF Open LLM v2
sumink
sumink/llmer
8f2bad2c-5c31-433a-bbf0-f1a8f0a80c3a
0.0.1
hfopenllm_v2/sumink_llmer/1762652580.548394
1762652580.548395
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
sumink/llmer
sumink/llmer
sumink
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.3191132860809319}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "float16", "architecture": "LlamaForCausalLM", "params_billions": 8.03}
HF Open LLM v2
sumink
sumink/qwmer
2cd4d3ec-2800-4223-ab50-6f9f4a1e1a57
0.0.1
hfopenllm_v2/sumink_qwmer/1762652580.54879
1762652580.548791
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
sumink/qwmer
sumink/qwmer
sumink
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.22124407682726277}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BB...
{"precision": "float16", "architecture": "Qwen2ForCausalLM", "params_billions": 7.616}
HF Open LLM v2
sumink
sumink/Qmerft
11243917-73a3-484e-ac8b-40065c65ea8c
0.0.1
hfopenllm_v2/sumink_Qmerft/1762652580.5451572
1762652580.5451572
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
sumink/Qmerft
sumink/Qmerft
sumink
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.15639724819035714}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BB...
{"precision": "bfloat16", "architecture": "Qwen2ForCausalLM", "params_billions": 1.777}
HF Open LLM v2
sumink
sumink/somer
282fa475-0ac8-4230-8020-9dbb7fda03da
0.0.1
hfopenllm_v2/sumink_somer/1762652580.549191
1762652580.549192
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
sumink/somer
sumink/somer
sumink
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.29902990731259727}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BB...
{"precision": "float16", "architecture": "LlamaForCausalLM", "params_billions": 10.732}
HF Open LLM v2
sumink
sumink/somer2
fee6fbc3-c115-4668-8b5b-35b307c15fe8
0.0.1
hfopenllm_v2/sumink_somer2/1762652580.549396
1762652580.549397
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
sumink/somer2
sumink/somer2
sumink
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.3132433055404106}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "LlamaForCausalLM", "params_billions": 10.732}
HF Open LLM v2
OmnicromsBrain
OmnicromsBrain/NeuralStar_FusionWriter_4x7b
65ba6556-712c-42cc-817b-ad8c2014dc4c
0.0.1
hfopenllm_v2/OmnicromsBrain_NeuralStar_FusionWriter_4x7b/1762652579.7988968
1762652579.798898
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
OmnicromsBrain/NeuralStar_FusionWriter_4x7b
OmnicromsBrain/NeuralStar_FusionWriter_4x7b
OmnicromsBrain
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.5963842604289951}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "float16", "architecture": "MixtralForCausalLM", "params_billions": 24.154}
HF Open LLM v2
byroneverson
byroneverson/Yi-1.5-9B-Chat-abliterated
345560e2-c981-4aca-9388-4f3a5e95ace8
0.0.1
hfopenllm_v2/byroneverson_Yi-1.5-9B-Chat-abliterated/1762652580.070213
1762652580.070215
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
byroneverson/Yi-1.5-9B-Chat-abliterated
byroneverson/Yi-1.5-9B-Chat-abliterated
byroneverson
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.5723291976400395}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "LlamaForCausalLM", "params_billions": 8.829}
HF Open LLM v2
byroneverson
byroneverson/Yi-1.5-9B-Chat-16K-abliterated
dc783bb0-c784-4cf4-888b-36a3bfa37a84
0.0.1
hfopenllm_v2/byroneverson_Yi-1.5-9B-Chat-16K-abliterated/1762652580.068388
1762652580.068392
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
byroneverson/Yi-1.5-9B-Chat-16K-abliterated
byroneverson/Yi-1.5-9B-Chat-16K-abliterated
byroneverson
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.5528453392553979}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "LlamaForCausalLM", "params_billions": 8.829}
HF Open LLM v2
byroneverson
byroneverson/Mistral-Small-Instruct-2409-abliterated
ff0c627b-72b9-45d4-a385-49c8b0ae6b6e
0.0.1
hfopenllm_v2/byroneverson_Mistral-Small-Instruct-2409-abliterated/1762652580.063036
1762652580.063037
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
byroneverson/Mistral-Small-Instruct-2409-abliterated
byroneverson/Mistral-Small-Instruct-2409-abliterated
byroneverson
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.6970759806203096}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "MistralForCausalLM", "params_billions": 22.247}
HF Open LLM v2
Cran-May
Cran-May/SCE-2-24B
f4ff02eb-7763-41bc-8a86-adbb051603af
0.0.1
hfopenllm_v2/Cran-May_SCE-2-24B/1762652579.512776
1762652579.5127769
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
Cran-May/SCE-2-24B
Cran-May/SCE-2-24B
Cran-May
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.5865924635522636}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "MistralForCausalLM", "params_billions": 23.572}
HF Open LLM v2
Cran-May
Cran-May/merge_model_20250308_2
c457473c-6c40-4930-94b8-993d3b1e8937
0.0.1
hfopenllm_v2/Cran-May_merge_model_20250308_2/1762652579.51357
1762652579.5135732
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
Cran-May/merge_model_20250308_2
Cran-May/merge_model_20250308_2
Cran-May
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.5932370554572978}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "Qwen2ForCausalLM", "params_billions": 14.766}
HF Open LLM v2
Cran-May
Cran-May/SCE-3-24B
2d7b9092-a9ad-4f47-b186-db1e1ce7cd6c
0.0.1
hfopenllm_v2/Cran-May_SCE-3-24B/1762652579.513022
1762652579.513023
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
Cran-May/SCE-3-24B
Cran-May/SCE-3-24B
Cran-May
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.5465254413844156}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "MistralForCausalLM", "params_billions": 23.572}
HF Open LLM v2
Cran-May
Cran-May/T.E-8.1
9c9e0887-5561-4789-9521-a3a78e7cfd99
0.0.1
hfopenllm_v2/Cran-May_T.E-8.1/1762652579.513231
1762652579.513231
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
Cran-May/T.E-8.1
Cran-May/T.E-8.1
Cran-May
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.7076922565459647}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "Qwen2ForCausalLM", "params_billions": 7.616}
HF Open LLM v2
Cran-May
Cran-May/merge_model_20250308_4
45531924-35ad-4baf-9994-5d5fa3bafd02
0.0.1
hfopenllm_v2/Cran-May_merge_model_20250308_4/1762652579.514166
1762652579.514167
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
Cran-May/merge_model_20250308_4
Cran-May/merge_model_20250308_4
Cran-May
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.4539521802151624}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "Qwen2ForCausalLM", "params_billions": 14.766}
HF Open LLM v2
Cran-May
Cran-May/merge_model_20250308_3
5448dbb6-9874-4734-8252-369c7b0189d7
0.0.1
hfopenllm_v2/Cran-May_merge_model_20250308_3/1762652579.513911
1762652579.513912
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
Cran-May/merge_model_20250308_3
Cran-May/merge_model_20250308_3
Cran-May
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.6017799438822324}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "Qwen2ForCausalLM", "params_billions": 14.766}
HF Open LLM v2
Cran-May
Cran-May/tempmotacilla-cinerea-0308
5e5e70f4-c597-415c-ab74-17aaf55b7b28
0.0.1
hfopenllm_v2/Cran-May_tempmotacilla-cinerea-0308/1762652579.514418
1762652579.5144188
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
Cran-May/tempmotacilla-cinerea-0308
Cran-May/tempmotacilla-cinerea-0308
Cran-May
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.8084837121061007}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "Qwen2ForCausalLM", "params_billions": 14.766}
HF Open LLM v2
GreenNode
GreenNode/GreenNode-small-9B-it
d13def83-5ff8-4cde-aef5-b3c268c40c16
0.0.1
hfopenllm_v2/GreenNode_GreenNode-small-9B-it/1762652579.6324449
1762652579.632446
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
GreenNode/GreenNode-small-9B-it
GreenNode/GreenNode-small-9B-it
GreenNode
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.7436125037123721}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "float16", "architecture": "Gemma2ForCausalLM", "params_billions": 9.242}
HF Open LLM v2
altomek
altomek/YiSM-34B-0rn
a9c75810-f51d-4fd3-8c96-6afdbc0f278c
0.0.1
hfopenllm_v2/altomek_YiSM-34B-0rn/1762652580.010027
1762652580.0100281
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
altomek/YiSM-34B-0rn
altomek/YiSM-34B-0rn
altomek
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.428373382624769}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH"...
{"precision": "float16", "architecture": "LlamaForCausalLM", "params_billions": 34.389}
HF Open LLM v2
nbeerbower
nbeerbower/Nemoties-ChatML-12B
3644fc16-b0fa-42d7-b17a-eb8f8332193f
0.0.1
hfopenllm_v2/nbeerbower_Nemoties-ChatML-12B/1762652580.383542
1762652580.383543
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
nbeerbower/Nemoties-ChatML-12B
nbeerbower/Nemoties-ChatML-12B
nbeerbower
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.6381999760635115}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "MistralForCausalLM", "params_billions": 12.248}
HF Open LLM v2
nbeerbower
nbeerbower/mistral-nemo-wissenschaft-12B
5f68a07f-4442-4453-92c3-b615323da96b
0.0.1
hfopenllm_v2/nbeerbower_mistral-nemo-wissenschaft-12B/1762652580.388424
1762652580.388424
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
nbeerbower/mistral-nemo-wissenschaft-12B
nbeerbower/mistral-nemo-wissenschaft-12B
nbeerbower
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.6520133246452745}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "MistralForCausalLM", "params_billions": 12.248}
HF Open LLM v2
nbeerbower
nbeerbower/Nemo-Loony-12B-experimental
894b90c6-c701-47d8-b930-4e271e28962f
0.0.1
hfopenllm_v2/nbeerbower_Nemo-Loony-12B-experimental/1762652580.383332
1762652580.383332
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
nbeerbower/Nemo-Loony-12B-experimental
nbeerbower/Nemo-Loony-12B-experimental
nbeerbower
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.37344357416100393}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BB...
{"precision": "bfloat16", "architecture": "MistralForCausalLM", "params_billions": 12.248}
HF Open LLM v2
nbeerbower
nbeerbower/Lyra4-Gutenberg-12B
02606fe0-ca08-4102-9670-8a18a9cc6f81
0.0.1
hfopenllm_v2/nbeerbower_Lyra4-Gutenberg-12B/1762652580.380318
1762652580.380318
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
nbeerbower/Lyra4-Gutenberg-12B
nbeerbower/Lyra4-Gutenberg-12B
nbeerbower
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.2212185888996751}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "MistralForCausalLM", "params_billions": 12.248}
HF Open LLM v2
nbeerbower
nbeerbower/Kartoffel-Deepfry-12B
09ba1be1-4b42-4eba-810f-a0aed64aafc0
0.0.1
hfopenllm_v2/nbeerbower_Kartoffel-Deepfry-12B/1762652580.379381
1762652580.3793821
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
nbeerbower/Kartoffel-Deepfry-12B
nbeerbower/Kartoffel-Deepfry-12B
nbeerbower
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.5021620411618949}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "MistralForCausalLM", "params_billions": 12.248}
HF Open LLM v2
nbeerbower
nbeerbower/Mistral-Nemo-Moderne-12B-FFT-experimental
e7337143-6ec7-4467-b6f5-907492705cc9
0.0.1
hfopenllm_v2/nbeerbower_Mistral-Nemo-Moderne-12B-FFT-experimental/1762652580.3819818
1762652580.381983
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
nbeerbower/Mistral-Nemo-Moderne-12B-FFT-experimental
nbeerbower/Mistral-Nemo-Moderne-12B-FFT-experimental
nbeerbower
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.33522498082864577}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BB...
{"precision": "bfloat16", "architecture": "MistralForCausalLM", "params_billions": 12.248}
HF Open LLM v2
nbeerbower
nbeerbower/Lyra4-Gutenberg2-12B
f9da5237-3903-4bbf-a0bc-0bcf3152f45a
0.0.1
hfopenllm_v2/nbeerbower_Lyra4-Gutenberg2-12B/1762652580.380519
1762652580.3805199
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
nbeerbower/Lyra4-Gutenberg2-12B
nbeerbower/Lyra4-Gutenberg2-12B
nbeerbower
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.25851296781428834}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BB...
{"precision": "bfloat16", "architecture": "MistralForCausalLM", "params_billions": 12.248}
HF Open LLM v2
nbeerbower
nbeerbower/SmolNemo-12B-FFT-experimental
435e3ce7-479f-4624-978e-25d755dee811
0.0.1
hfopenllm_v2/nbeerbower_SmolNemo-12B-FFT-experimental/1762652580.383975
1762652580.383976
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
nbeerbower/SmolNemo-12B-FFT-experimental
nbeerbower/SmolNemo-12B-FFT-experimental
nbeerbower
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.3348005514257725}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "bfloat16", "architecture": "MistralForCausalLM", "params_billions": 12.248}
HF Open LLM v2
godlikehhd
godlikehhd/alpaca_data_ifd_max_2600_3B
41d72b83-3c55-460f-9d21-88866eed6b9a
0.0.1
hfopenllm_v2/godlikehhd_alpaca_data_ifd_max_2600_3B/1762652580.1669528
1762652580.166954
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
godlikehhd/alpaca_data_ifd_max_2600_3B
godlikehhd/alpaca_data_ifd_max_2600_3B
godlikehhd
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.298155560579263}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH"...
{"precision": "float16", "architecture": "Qwen2ForCausalLM", "params_billions": 3.086}
HF Open LLM v2
godlikehhd
godlikehhd/alpaca_data_full_3B
d7d6baf0-00d3-4960-970c-949bb9919ac9
0.0.1
hfopenllm_v2/godlikehhd_alpaca_data_full_3B/1762652580.166356
1762652580.166357
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
godlikehhd/alpaca_data_full_3B
godlikehhd/alpaca_data_full_3B
godlikehhd
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.36957162550920447}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BB...
{"precision": "float16", "architecture": "Qwen2ForCausalLM", "params_billions": 3.086}
HF Open LLM v2
godlikehhd
godlikehhd/alpaca_data_score_max_2500
b6fd288d-36d5-4499-bf2d-da1fdd1120c5
0.0.1
hfopenllm_v2/godlikehhd_alpaca_data_score_max_2500/1762652580.1698968
1762652580.169898
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
godlikehhd/alpaca_data_score_max_2500
godlikehhd/alpaca_data_score_max_2500
godlikehhd
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.3563577973111345}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "float16", "architecture": "Qwen2ForCausalLM", "params_billions": 1.544}
HF Open LLM v2
godlikehhd
godlikehhd/alpaca_data_score_max_0.1_2600
08195b61-5fe5-4cce-8da4-34b731289278
0.0.1
hfopenllm_v2/godlikehhd_alpaca_data_score_max_0.1_2600/1762652580.1691651
1762652580.169167
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
godlikehhd/alpaca_data_score_max_0.1_2600
godlikehhd/alpaca_data_score_max_0.1_2600
godlikehhd
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.3287554799044313}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "float16", "architecture": "Qwen2ForCausalLM", "params_billions": 1.544}
HF Open LLM v2
godlikehhd
godlikehhd/alpaca_data_ins_min_5200
d976888b-5e17-4e5c-b557-0b48bf36d4f7
0.0.1
hfopenllm_v2/godlikehhd_alpaca_data_ins_min_5200/1762652580.1684108
1762652580.1684108
["https://open-llm-leaderboard-open-llm-leaderboard.hf.space/api/leaderboard/formatted"]
HF Open LLM v2
leaderboard
Hugging Face
null
null
third_party
godlikehhd/alpaca_data_ins_min_5200
godlikehhd/alpaca_data_ins_min_5200
godlikehhd
unknown
[{"evaluation_name": "IFEval", "metric_config": {"evaluation_description": "Accuracy on IFEval", "lower_is_better": false, "score_type": "continuous", "min_score": 0, "max_score": 1}, "score_details": {"score": 0.3359995921931586}}, {"evaluation_name": "BBH", "metric_config": {"evaluation_description": "Accuracy on BBH...
{"precision": "float16", "architecture": "Qwen2ForCausalLM", "params_billions": 1.544}
End of preview. Expand in Data Studio

Every Eval Ever Dataset

Evaluation results from various AI model leaderboards.

Usage

from datasets import load_dataset

# Load specific leaderboard
dataset = load_dataset("deepmage121/eee_test", split="hfopenllm_v2")

# Load all splits at once
dataset = load_dataset("deepmage121/eee_test")

Available Leaderboards (Splits)

  • hfopenllm_v2
  • livecodebenchpro

Schema

  • model_name, model_id, model_developer: Model information
  • evaluation_source_name: Leaderboard name
  • evaluation_results: JSON string with all metrics
  • Additional metadata for reproducibility
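Because `evaluation_results` is stored as a JSON string rather than a nested structure, it must be decoded before the per-benchmark scores can be used. A minimal sketch of that decoding step, using a hypothetical row that mirrors the schema shown in the preview (real rows come from `load_dataset` as above):

```python
import json

# Hypothetical row mirroring the dataset's schema; real rows come from load_dataset.
row = {
    "model_id": "h2oai/h2o-danube3-4b-chat",
    "evaluation_source_name": "HF Open LLM v2",
    "evaluation_results": json.dumps([
        {
            "evaluation_name": "IFEval",
            "metric_config": {"lower_is_better": False, "min_score": 0, "max_score": 1},
            "score_details": {"score": 0.3628771659197596},
        }
    ]),
}

# Decode the JSON string into a list of evaluation dicts,
# then index scores by benchmark name.
results = json.loads(row["evaluation_results"])
scores = {r["evaluation_name"]: r["score_details"]["score"] for r in results}
print(scores["IFEval"])  # 0.3628771659197596
```

The same pattern applies to `additional_details`, which is also serialized as a JSON string.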

Auto-updated via GitHub Actions.
