Run ONNX model
Runs a machine learning model stored in the ONNX format.
- Path to an ONNX model file.
- The preferred execution back-end for the machine learning model. This parameter forces the tool to use a specific back-end. If the selected executor is not available, a generic CPU implementation is used instead. Note that all supported executors remain selectable even if the machine on which you use the tool lacks the required hardware and/or libraries.
- The index of the computation device used by the selected executor. Usually, this is the index of a CUDA device as listed by nvidia-smi. If the chosen device is not available at run time, the first device is used instead.
Input tensors are defined by the model and appear in the tool once a model has been loaded.
Output tensors are defined by the model and appear in the tool once a model has been loaded.
Execution back-ends for running the ONNX model.