Tags: Feature Extraction · sentence-transformers · ONNX · Safetensors · Transformers · xlm-roberta · mteb · Eval Results (legacy) · text-embeddings-inference
Instructions to use intfloat/multilingual-e5-large-instruct with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
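One usage detail worth noting before the library-specific snippets: per the E5 instruct convention, queries for this model are prefixed with a one-line task description, while documents are embedded as-is. A minimal sketch of that formatting (the helper name and example task string are illustrative, not part of any library API):

```python
def get_detailed_instruct(task_description: str, query: str) -> str:
    # E5 instruct models expect queries prefixed with a task description;
    # documents/passages are embedded without any prefix.
    return f"Instruct: {task_description}\nQuery: {query}"

task = "Given a web search query, retrieve relevant passages that answer the query"
print(get_detailed_instruct(task, "how much protein should a female eat"))
# Instruct: Given a web search query, retrieve relevant passages that answer the query
# Query: how much protein should a female eat
```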
- Libraries
- sentence-transformers
How to use intfloat/multilingual-e5-large-instruct with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")
sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```
- Transformers
How to use intfloat/multilingual-e5-large-instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="intfloat/multilingual-e5-large-instruct")

# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-large-instruct")
model = AutoModel.from_pretrained("intfloat/multilingual-e5-large-instruct")
```
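When loading the model directly, the raw `last_hidden_state` still has to be pooled into one vector per sentence. E5-style models typically use attention-mask-aware average pooling followed by L2 normalization; a self-contained sketch of that step on toy tensors (the `average_pool` helper is illustrative, not a library function):

```python
import torch
import torch.nn.functional as F


def average_pool(last_hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Zero out hidden states at padding positions, then average over real tokens only.
    masked = last_hidden_states.masked_fill(~attention_mask.bool().unsqueeze(-1), 0.0)
    return masked.sum(dim=1) / attention_mask.sum(dim=1, keepdim=True)


# Toy stand-in for model output: batch of 2 sequences, 3 tokens, hidden size 4.
hidden = torch.ones(2, 3, 4)
mask = torch.tensor([[1, 1, 0], [1, 1, 1]])  # first sequence has one padding token

pooled = average_pool(hidden, mask)
embeddings = F.normalize(pooled, p=2, dim=1)  # unit-length vectors for cosine similarity
print(pooled.shape)  # torch.Size([2, 4])
```

In real use, `hidden` would be `model(**tokenizer(...)).last_hidden_state` and `mask` the tokenizer's `attention_mask`.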
Adding ONNX file of this model
#17
by yashvardhan7 - opened
Beep boop I am the ONNX export bot 🤖. On behalf of yashvardhan7, I would like to add to this repository the model converted to ONNX.
What is ONNX? It stands for "Open Neural Network Exchange", and is the most commonly used open standard for machine learning interoperability. You can find out more at onnx.ai!
The exported ONNX model can then be consumed by various backends such as TensorRT or TVM, or simply be used in a few lines with 🤗 Optimum through ONNX Runtime, check out how here!
intfloat changed pull request status to merged