Kernels

This is the repository card for kernels-community/finegrained-fp8, a kernel pushed to the Hub. It was built to be used with the kernels library. This card was automatically generated.

How to use

```python
# Make sure `kernels` is installed: `pip install -U kernels`
from kernels import get_kernel

# Download the kernel from the Hub and load it as a Python module
kernel_module = get_kernel("kernels-community/finegrained-fp8")
fp8_act_quant = kernel_module.fp8_act_quant

fp8_act_quant(...)
```
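The exact signature of `fp8_act_quant` is not documented here, but fine-grained FP8 activation quantization conventionally splits each row into fixed-size groups and stores one scale per group. As a conceptual illustration only, here is a numpy sketch of per-group quantization into the e4m3 range; the group size, return layout, and function name are assumptions, and a real kernel would cast to an actual float8 dtype on the GPU:

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite value representable in FP8 e4m3
GROUP = 4             # illustrative group size (real kernels often use 128)

def act_quant_sketch(x, group=GROUP):
    """Conceptual per-group FP8 activation quantization (numpy stand-in).

    Splits the last dimension into groups, computes one scale per group
    from the group's absolute maximum, and clips the scaled values to the
    e4m3 range instead of casting to a real float8 dtype.
    """
    g = x.reshape(-1, group)
    scale = np.abs(g).max(axis=1, keepdims=True) / FP8_E4M3_MAX
    scale = np.where(scale == 0, 1.0, scale)  # avoid divide-by-zero on all-zero groups
    q = np.clip(g / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q.reshape(x.shape), scale.reshape(*x.shape[:-1], -1)

x = np.random.randn(2, 8).astype(np.float32)
q, s = act_quant_sketch(x)
# Dequantize: multiply each group back by its scale
x_back = (q.reshape(-1, GROUP) * s.reshape(-1, 1)).reshape(x.shape)
```

Because each group is scaled by its own maximum, no value exceeds the FP8 range, which is the point of fine-grained (per-group) rather than per-tensor scaling.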

Available functions

  • fp8_act_quant
  • w8a8_fp8_matmul
  • w8a8_block_fp8_matmul
  • w8a8_tensor_fp8_matmul
  • w8a8_fp8_matmul_batched
  • w8a8_block_fp8_matmul_batched
  • w8a8_tensor_fp8_matmul_batched
  • w8a8_fp8_matmul_grouped
  • w8a8_block_fp8_matmul_grouped
  • w8a8_tensor_fp8_matmul_grouped
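The `w8a8_*` names follow the common W8A8 convention (8-bit weights, 8-bit activations), with `block`, `tensor`, `batched`, and `grouped` variants differing in scale granularity and batching. The signatures of these kernels are not documented here; as a conceptual sketch only, the following numpy code shows the arithmetic behind a blockwise W8A8 matmul, where each partial product is rescaled by the product of the activation and weight block scales. Block size and scale layouts are assumptions for illustration:

```python
import numpy as np

BLK = 2  # illustrative block size (real kernels often use 128x128 weight blocks)

def block_fp8_matmul_sketch(a_q, a_s, b_q, b_s, blk=BLK):
    """Conceptual blockwise W8A8 matmul (numpy stand-in).

    a_q: (M, K) quantized activations, a_s: (M, K//blk) per-group scales
    b_q: (K, N) quantized weights,     b_s: (K//blk, N//blk) per-block scales
    Accumulates partial products block by block in float32, applying the
    product of the activation and weight scales to each partial result.
    """
    M, K = a_q.shape
    N = b_q.shape[1]
    out = np.zeros((M, N), dtype=np.float32)
    for kb in range(K // blk):
        a_blk = a_q[:, kb * blk:(kb + 1) * blk]          # (M, blk)
        for nb in range(N // blk):
            b_blk = b_q[kb * blk:(kb + 1) * blk, nb * blk:(nb + 1) * blk]
            scale = a_s[:, kb:kb + 1] * b_s[kb, nb]      # (M, 1), broadcasts over columns
            out[:, nb * blk:(nb + 1) * blk] += (a_blk @ b_blk) * scale
    return out

# With all scales set to 1, the result reduces to a plain matmul
a_q = np.arange(12, dtype=np.float32).reshape(3, 4)
b_q = np.arange(16, dtype=np.float32).reshape(4, 4)
out = block_fp8_matmul_sketch(a_q, np.ones((3, 2), np.float32),
                              b_q, np.ones((2, 2), np.float32))
```

A real kernel fuses this accumulation on the GPU with float8 inputs; the sketch only shows why per-block scales must multiply each partial product before accumulation.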

Benchmarks

No benchmark available yet.
