Hello,
The help text says that only dynamic-batch (N=-1) ONNX models are supported.
When I visualize the sample ONNX model with Netron, it appears as follows.
Is this "dynamic batch" the same thing as implicit batch?
I read the ONNX export tutorial in the official PyTorch documentation, but I could not find how to produce a dynamic batch dimension.
Is there a way to convert an already-exported ONNX model to dynamic batch?
nvidia@tegra-ubuntu:/usr/src/jetson_multimedia_api/samples/04_video_dec_trt$ ./video_dec_trt
video_dec_trt [Channel-num] <in-file1> <in-file2> ... <in-format> [options]
Channel-num:
1-32, Number of file arguments should exactly match the number of channels specified
Supported formats:
H264
H265
OPTIONS:
-h,--help Prints this text
--dbg-level <level> Sets the debug level [Values 0-3]
Caffe model:
--trt-deployfile set caffe deploy file name
--trt-modelfile set caffe model file name
ONNX model:
--trt-onnxmodel set onnx model file name, only support dynamic batch(N=-1) onnx model
--trt-mode 0 fp16 (if supported), 1 fp32, 2 int8
--trt-enable-perf 1[default] to enable perf measurement, 0 otherwise
Thank you.