How to use jap2/bert-base-sst-2 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="jap2/bert-base-sst-2")
```
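As a quick sanity check, the pipeline can be called on a raw sentence. The exact label strings (e.g. `LABEL_0`/`LABEL_1` versus `negative`/`positive`) depend on this model's `id2label` config, so the output shown below is an assumption, not a verified result:

```python
# Hypothetical usage sketch; label names depend on the model's config.
result = pipe("This movie was absolutely wonderful!")
print(result)
# e.g. [{'label': 'LABEL_1', 'score': 0.99}]  <- assumed label format
```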
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("jap2/bert-base-sst-2")
model = AutoModelForSequenceClassification.from_pretrained("jap2/bert-base-sst-2")
```
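With the tokenizer and model loaded directly, a minimal inference sketch looks as follows (assuming the PyTorch backend; the example sentence is illustrative):

```python
import torch

# Tokenize an example sentence and run a forward pass without gradients.
inputs = tokenizer("A touching and well-acted film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and look up the predicted label name.
probs = torch.softmax(logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs[0, pred_id]))
```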
This model is a fine-tuned version of bert-base-uncased on the GLUE dataset. On the evaluation set, it achieves the results reported in the training table below.
Model description: More information needed
Intended uses & limitations: More information needed
Training and evaluation data: More information needed
Training hyperparameters: More information needed

Training results (evaluated at the end of each epoch):
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|---|---|---|---|---|---|---|---|
| 0.2366 | 1.0 | 105 | 0.2193 | 0.9117 | 0.9115 | 0.9139 | 0.9111 |
| 0.1104 | 2.0 | 210 | 0.2174 | 0.9243 | 0.9243 | 0.9243 | 0.9243 |
| 0.0685 | 2.99 | 315 | 0.2441 | 0.9186 | 0.9185 | 0.9186 | 0.9185 |
| 0.0476 | 4.0 | 421 | 0.2524 | 0.9232 | 0.9232 | 0.9233 | 0.9234 |
| 0.0319 | 5.0 | 526 | 0.2832 | 0.9220 | 0.9219 | 0.9226 | 0.9217 |
| 0.0227 | 6.0 | 631 | 0.3093 | 0.9289 | 0.9289 | 0.9289 | 0.9289 |
| 0.0169 | 6.99 | 736 | 0.3755 | 0.9209 | 0.9209 | 0.9208 | 0.9210 |
| 0.0112 | 8.0 | 842 | 0.3793 | 0.9220 | 0.9219 | 0.9234 | 0.9215 |
| 0.0079 | 9.0 | 947 | 0.3980 | 0.9255 | 0.9254 | 0.9255 | 0.9254 |
| 0.0070 | 9.98 | 1050 | 0.4216 | 0.9300 | 0.9300 | 0.9302 | 0.9299 |
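To re-compute a metric like the accuracy above, one could score the pipeline on the SST-2 validation split. The following is a minimal sketch, reusing `pipe` and `model` from the snippets above and assuming the `datasets` and `evaluate` libraries are installed; it is not the card's original evaluation script:

```python
import evaluate
from datasets import load_dataset

# GLUE SST-2 validation split, as hosted under nyu-mll/glue.
dataset = load_dataset("nyu-mll/glue", "sst2", split="validation")
metric = evaluate.load("accuracy")

preds, refs = [], []
for example in dataset:
    out = pipe(example["sentence"])[0]
    # Map the pipeline's label string back to an integer class id,
    # assuming the model config's label2id mirrors its id2label.
    preds.append(model.config.label2id[out["label"]])
    refs.append(example["label"])

print(metric.compute(predictions=preds, references=refs))
```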