Instructions for using textattack/bert-base-uncased-RTE with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers

How to use textattack/bert-base-uncased-RTE with Transformers (a sentence-pair usage sketch follows the list below):

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="textattack/bert-base-uncased-RTE")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("textattack/bert-base-uncased-RTE")
model = AutoModelForSequenceClassification.from_pretrained("textattack/bert-base-uncased-RTE")
```

- Inference
- Notebooks
- Google Colab
- Kaggle
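RTE (Recognizing Textual Entailment) is a sentence-pair task, so the pipeline should be given a premise and a hypothesis rather than a single string. A minimal sketch of one inference call; the example sentences are illustrative, not from the model card:

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="textattack/bert-base-uncased-RTE")

# RTE is a sentence-pair task: pass premise/hypothesis as text/text_pair.
result = pipe({
    "text": "A man is playing a guitar on stage.",       # premise
    "text_pair": "A man is performing music.",           # hypothesis
})
print(result)
# -> something like [{'label': 'LABEL_0', 'score': 0.97}]
# Label names may stay generic (LABEL_0/LABEL_1) if the config
# defines no id2label mapping, as is common for TextAttack models.
```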
Update config.json

config.json CHANGED (+2 -1):
```diff
@@ -3,7 +3,8 @@
     "BertForSequenceClassification"
   ],
   "attention_probs_dropout_prob": 0.1,
-  "finetuning_task": "rte",
+  "finetuning_task": "glue:rte",
+  "gradient_checkpointing": false,
   "hidden_act": "gelu",
   "hidden_dropout_prob": 0.1,
   "hidden_size": 768,
```
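To check the effect of this commit locally, you can load the config and inspect the two touched fields. A minimal sketch, assuming the post-commit config is what the Hub serves:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("textattack/bert-base-uncased-RTE")

# Fields touched by this commit:
print(config.finetuning_task)  # "glue:rte" after the change (was "rte")
# Newly added key; read defensively, since newer transformers
# versions may deprecate gradient_checkpointing as a config field.
print(getattr(config, "gradient_checkpointing", None))  # False
```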