code_execution_files / 0xSero_GLM-4.7-REAP-50-W4A16_0.txt
ariG23498 (HF Staff): Upload 0xSero_GLM-4.7-REAP-50-W4A16_0.txt with huggingface_hub (commit 7cfa538, verified)
```CODE:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="0xSero/GLM-4.7-REAP-50-W4A16")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
ERROR:
Traceback (most recent call last):
File "/tmp/0xSero_GLM-4.7-REAP-50-W4A16_0mqIr0K.py", line 26, in <module>
pipe = pipeline("text-generation", model="0xSero/GLM-4.7-REAP-50-W4A16")
File "/tmp/.cache/uv/environments-v2/57fea2d03e820408/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 1027, in pipeline
framework, model = infer_framework_load_model(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
adapter_path if adapter_path is not None else model,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<5 lines>...
**model_kwargs,
^^^^^^^^^^^^^^^
)
^
File "/tmp/.cache/uv/environments-v2/57fea2d03e820408/lib/python3.13/site-packages/transformers/pipelines/base.py", line 293, in infer_framework_load_model
model = model_class.from_pretrained(model, **kwargs)
File "/tmp/.cache/uv/environments-v2/57fea2d03e820408/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 604, in from_pretrained
return model_class.from_pretrained(
~~~~~~~~~~~~~~~~~~~~~~~~~~~^
pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/tmp/.cache/uv/environments-v2/57fea2d03e820408/lib/python3.13/site-packages/transformers/modeling_utils.py", line 277, in _wrapper
return func(*args, **kwargs)
File "/tmp/.cache/uv/environments-v2/57fea2d03e820408/lib/python3.13/site-packages/transformers/modeling_utils.py", line 4881, in from_pretrained
hf_quantizer, config, dtype, device_map = get_hf_quantizer(
~~~~~~~~~~~~~~~~^
config, quantization_config, dtype, from_tf, from_flax, device_map, weights_only, user_agent
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/tmp/.cache/uv/environments-v2/57fea2d03e820408/lib/python3.13/site-packages/transformers/quantizers/auto.py", line 311, in get_hf_quantizer
hf_quantizer = AutoHfQuantizer.from_config(
config.quantization_config,
pre_quantized=pre_quantized,
)
File "/tmp/.cache/uv/environments-v2/57fea2d03e820408/lib/python3.13/site-packages/transformers/quantizers/auto.py", line 185, in from_config
return target_cls(quantization_config, **kwargs)
File "/tmp/.cache/uv/environments-v2/57fea2d03e820408/lib/python3.13/site-packages/transformers/quantizers/quantizer_gptq.py", line 49, in __init__
raise ImportError("Loading a GPTQ quantized model requires optimum (`pip install optimum`)")
ImportError: Loading a GPTQ quantized model requires optimum (`pip install optimum`)
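The traceback bottoms out in transformers' GPTQ quantizer, which delegates model loading to the `optimum` package, so the fix suggested by the error message itself (`pip install optimum`) is the starting point. A minimal sketch of a pre-flight dependency check before retrying the `pipeline()` call; note that also requiring a GPTQ backend named `gptqmodel` is an assumption for recent transformers versions (older stacks used `auto-gptq` instead):

```python
import importlib.util

# transformers' GPTQ loading path needs optimum; a GPTQ backend is
# typically needed as well (gptqmodel here is an assumption - older
# transformers versions used auto-gptq instead).
required = ("optimum", "gptqmodel")
missing = [pkg for pkg in required if importlib.util.find_spec(pkg) is None]

if missing:
    # Install the missing packages, then rerun the pipeline() snippet above.
    print("Missing dependencies; run: pip install " + " ".join(missing))
else:
    print("GPTQ dependencies present; the ImportError above should not recur")
```

This only checks for the packages rather than retrying the full model load, since a W4A16-quantized GLM checkpoint is far too large to download as part of a quick sanity check.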