How to use peft-internal-testing/tiny-random-T5ForConditionalGeneration-calibrated with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("peft-internal-testing/tiny-random-T5ForConditionalGeneration-calibrated")
model = AutoModelForSeq2SeqLM.from_pretrained("peft-internal-testing/tiny-random-T5ForConditionalGeneration-calibrated")
```
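Once the tokenizer and model are loaded, a quick way to check that everything works is to run a short generation. The sketch below assumes the `transformers` and `torch` packages are installed and that the Hub checkpoint is reachable; since this is a tiny random-weight test model, the decoded text is not expected to be meaningful.

```python
# Sketch: smoke-test generation with the tiny test model
# (assumes `transformers`, `torch`, and Hub access).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "peft-internal-testing/tiny-random-T5ForConditionalGeneration-calibrated"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# Encode a prompt and generate a few tokens; output is gibberish by design,
# because the model's weights are random.
inputs = tokenizer("translate English to German: Hello", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=10)
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(text)
```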
The repository's `generation_config.json`:

```json
{
  "_from_model_config": true,
  "bos_token_id": 0,
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.57.1"
}
```