Model Card for xlm-roberta-large-squad2-csfever_v2-f1

Model Details

Model for natural language inference, trained as part of a bachelor thesis.

Uses

Transformers

from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ctu-aic/xlm-roberta-large-squad2-csfever_v2-f1")
tokenizer = AutoTokenizer.from_pretrained("ctu-aic/xlm-roberta-large-squad2-csfever_v2-f1")
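A minimal inference sketch using the snippet above: tokenize a (context, hypothesis) pair, run the model, and read off the predicted class. The label mapping is not stated in this card, so the example reads it from `model.config.id2label` rather than assuming an order.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "ctu-aic/xlm-roberta-large-squad2-csfever_v2-f1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Encode the (context, hypothesis) pair as a single sequence.
inputs = tokenizer("My first context.", "My first hypothesis.",
                   return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Softmax turns logits into class probabilities; argmax picks the label.
probs = torch.softmax(logits, dim=-1)
label_id = int(probs.argmax(dim=-1))
print(model.config.id2label[label_id])
```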

Sentence Transformers

from sentence_transformers.cross_encoder import CrossEncoder
model = CrossEncoder('ctu-aic/xlm-roberta-large-squad2-csfever_v2-f1')
scores = model.predict([["My first context.", "My first hypothesis."],  
                        ["Second context.", "Hypothesis."]])
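For a multi-class model, `CrossEncoder.predict` returns one row of class scores per input pair; a common next step is to take the argmax of each row. The sketch below uses dummy score values for illustration only (the real label order should be checked against the model's config).

```python
import numpy as np

# Dummy stand-in for the 'scores' array returned by CrossEncoder.predict
# for two pairs of a 3-class model (values are illustrative only).
scores = np.array([[0.1, 0.7, 0.2],
                   [0.6, 0.3, 0.1]])

# One predicted class index per input pair.
pred_ids = scores.argmax(axis=1)
print(pred_ids)  # -> [1 0]
```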
Safetensors
Model size: 0.6B params
Tensor types: I64, F32

Dataset used to train ctu-aic/xlm-roberta-large-squad2-csfever_v2-f1

Space using ctu-aic/xlm-roberta-large-squad2-csfever_v2-f1