Version: 24.8.0

DeploymentSettings_ONNXRuntime_ONNX

CLASS - DeploymentSettings_ONNXRuntime_ONNX(

  • weights_utilize_full_int_range: bool = False

)

Use this in Settings.deployment_settings if you wish to deploy to Microsoft's ONNX Runtime.

Class Variables

  • weights_utilize_full_int_range (bool) - Set to True if you plan to run the model on a CPU that supports AVX512-VNNI or on an ARM device.

See Also

https://onnxruntime.ai/docs/performance/model-optimizations/quantization.html#when-and-why-do-i-need-to-try-u8u8
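As a sketch of how this settings class is used (the stand-in dataclass below only mirrors the documented signature; in the real library you would import DeploymentSettings_ONNXRuntime_ONNX and pass it to Settings.deployment_settings):

```python
from dataclasses import dataclass

# Illustrative stand-in mirroring the documented signature;
# not the library's actual implementation.
@dataclass
class DeploymentSettings_ONNXRuntime_ONNX:
    weights_utilize_full_int_range: bool = False

# Default: conservative weight range, safe on any CPU.
default_settings = DeploymentSettings_ONNXRuntime_ONNX()

# Targeting a CPU with AVX512-VNNI, or an ARM device:
# opt in to the full integer range.
vnni_settings = DeploymentSettings_ONNXRuntime_ONNX(
    weights_utilize_full_int_range=True
)

print(default_settings.weights_utilize_full_int_range)  # False
print(vnni_settings.weights_utilize_full_int_range)     # True
```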

DeploymentSettings_TFLite

CLASS - DeploymentSettings_TFLite()

Use this in Settings.deployment_settings if you wish to deploy to Google's TFLite.

DeploymentSettings_OpenVINO_ONNX

CLASS - DeploymentSettings_OpenVINO_ONNX(

  • weights_utilize_full_int_range: bool = False

)

Use this in Settings.deployment_settings if you wish to deploy to Intel's OpenVINO.

Class Variables

  • weights_utilize_full_int_range (bool) - Set to True if you plan to run the model on a CPU that supports AVX512-VNNI.

See Also

https://onnxruntime.ai/docs/performance/model-optimizations/quantization.html#when-and-why-do-i-need-to-try-u8u8
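The linked ONNX Runtime page explains the trade-off behind this flag: on CPUs without AVX512-VNNI, 8-bit matrix multiplies can saturate the intermediate accumulator, so weights are commonly quantized to a reduced 7-bit range as a safeguard; with VNNI, the full 8-bit range is safe and gives finer quantization steps. A rough, illustrative sketch of the effect on the quantization scale (assumed symmetric int8 scheme; not the library's implementation):

```python
def symmetric_scale(max_abs: float, full_int_range: bool) -> float:
    """Scale for symmetric per-tensor weight quantization.

    full_int_range=True uses the full signed 8-bit range (qmax = 127);
    False mimics the reduced 7-bit range (qmax = 63) often used to avoid
    accumulator saturation on CPUs without AVX512-VNNI.
    """
    qmax = 127 if full_int_range else 63
    return max_abs / qmax

max_abs_weight = 0.5
full_scale = symmetric_scale(max_abs_weight, full_int_range=True)
reduced_scale = symmetric_scale(max_abs_weight, full_int_range=False)

# The reduced range roughly doubles the quantization step,
# trading precision for overflow safety.
print(full_scale, reduced_scale)
```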

DeploymentSettings_QNN_ONNX

CLASS - DeploymentSettings_QNN_ONNX()

EXPERIMENTAL: Use this in Settings.deployment_settings if you wish to deploy to Qualcomm's QNN.

DeploymentSettings_TensorRT_ONNX

CLASS - DeploymentSettings_TensorRT_ONNX()

Use this in Settings.deployment_settings if you wish to deploy to NVIDIA's TensorRT.