Token Classification
Tags: Transformers · Safetensors · English · gpt2 · privacy · pii-detection · pii-redaction · sliding-window-attention · rope · swiglu · text-generation-inference
Instructions for using 8Fai/context-filter with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use 8Fai/context-filter with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="8Fai/context-filter")
```

```python
# Load model directly
from transformers import AutoTokenizer, ContextFilterV2

tokenizer = AutoTokenizer.from_pretrained("8Fai/context-filter")
model = ContextFilterV2.from_pretrained("8Fai/context-filter")
```

- Notebooks
- Google Colab
- Kaggle
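Once the pipeline returns entity predictions, the flagged spans can be masked out of the original text. The sketch below assumes the standard `token-classification` pipeline output format (dicts with `start`/`end` character offsets and an `entity_group` label); the example entities and label names (`PER`, `EMAIL`) are illustrative, not confirmed by this model card.

```python
def redact(text, entities, mask="[REDACTED]"):
    """Replace each flagged character span with a mask.

    Spans are processed right to left so that earlier character
    offsets remain valid as the string is rewritten.
    """
    for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
        text = text[:ent["start"]] + mask + text[ent["end"]:]
    return text

# Hypothetical pipeline output for a short example sentence.
text = "Contact Jane Doe at jane@example.com"
entities = [
    {"entity_group": "PER", "start": 8, "end": 16, "score": 0.99},
    {"entity_group": "EMAIL", "start": 20, "end": 36, "score": 0.98},
]
print(redact(text, entities))
# Contact [REDACTED] at [REDACTED]
```

In practice you would pass `pipe(text)` output (with an aggregation strategy that merges word pieces) in place of the hand-written `entities` list.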
Tokenizer configuration (`tokenizer_config.json`):

```json
{
  "add_prefix_space": false,
  "backend": "tokenizers",
  "bos_token": "<|endoftext|>",
  "eos_token": "<|endoftext|>",
  "errors": "replace",
  "is_local": false,
  "local_files_only": false,
  "model_max_length": 1024,
  "pad_token": "<|endoftext|>",
  "tokenizer_class": "GPT2Tokenizer",
  "unk_token": "<|endoftext|>"
}
```
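The `model_max_length` of 1024 means longer documents must be split before inference. One common approach is overlapping windows over the token ids, so entities near a window boundary still appear whole in the neighboring window. This is a minimal sketch; the window and overlap sizes are illustrative assumptions, not values taken from this model card.

```python
def chunk(token_ids, max_len=1024, overlap=128):
    """Split a token id sequence into overlapping windows of at most
    max_len tokens, advancing by (max_len - overlap) each step."""
    step = max_len - overlap
    windows = []
    for start in range(0, len(token_ids), step):
        windows.append(token_ids[start:start + max_len])
        if start + max_len >= len(token_ids):
            break
    return windows

# A 2500-token document splits into three overlapping windows.
parts = chunk(list(range(2500)))
print([len(p) for p in parts])
# [1024, 1024, 708]
```

Predictions from overlapping regions then need to be deduplicated (for example, by keeping the higher-scoring span) before redaction.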