Detectnet_v2 inference throws ValueError: Invalid nonce size (0) for CTR

After successfully generating TFRecords and training my model, I ran inference on it.

Below is the command I used:

detectnet_v2 inference -e infer.txt -i fire_dataset/images/ -o inference_output/ -k <my key>
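
Judging from the log below, infer.txt uses a tensorrt_config that parses an exported .etlt model and builds an engine. For reference, a sketch of what that block typically looks like per the TLT detectnet_v2 docs — the class name, dimensions, and .etlt path here are hypothetical:

    inferencer_config {
      # Hypothetical values -- set these to match the trained model.
      target_classes: "fire"
      image_width: 960
      image_height: 544
      image_channels: 3
      batch_size: 4
      gpu_index: 0
      # TensorRT backend: parses the exported .etlt and builds/caches an engine.
      tensorrt_config {
        parser: ETLT
        etlt_model: "/workspace/tlt-experiments/fire_detection/output/weights/model.etlt"
        backend_data_type: FP32
        save_engine: true
        trt_engine: "/workspace/tlt-experiments/fire_detection/output/weights/model.engine"
      }
    }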

Here is the full log and traceback:

Using TensorFlow backend.
Using TensorFlow backend.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
2021-10-01 10:27:28,137 [INFO] iva.detectnet_v2.spec_handler.spec_loader: Merging specification from infer.txt
2021-10-01 10:27:28,138 [INFO] __main__: Overlain images will be saved in the output path.
2021-10-01 10:27:28,138 [INFO] iva.detectnet_v2.inferencer.build_inferencer: Constructing inferencer
2021-10-01 10:27:28,403 [INFO] iva.detectnet_v2.inferencer.trt_inferencer: Engine file not found at /workspace/tlt-experiments/fire_detection/output/weights/model.engine
2021-10-01 10:27:28,403 [INFO] iva.detectnet_v2.inferencer.trt_inferencer: Using TensorRT to optimize model and generate an engine.
Traceback (most recent call last):
  File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/detectnet_v2/scripts/inference.py", line 210, in <module>
  File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/detectnet_v2/scripts/inference.py", line 206, in main
  File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/detectnet_v2/scripts/inference.py", line 117, in inference_wrapper_batch
  File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/detectnet_v2/inferencer/trt_inferencer.py", line 340, in network_init
  File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/detectnet_v2/inferencer/trt_inferencer.py", line 265, in _parse_etlt_model
  File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/magnet/build_wheel.runfiles/ai_infra/magnet/encoding/encoding.py", line 126, in decode
  File "/usr/local/lib/python3.6/dist-packages/cryptography/hazmat/primitives/ciphers/base.py", line 108, in __init__
    mode.validate_for_algorithm(algorithm)
  File "/usr/local/lib/python3.6/dist-packages/cryptography/hazmat/primitives/ciphers/modes.py", line 182, in validate_for_algorithm
    len(self.nonce), self.name
ValueError: Invalid nonce size (0) for CTR.
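
For reference, the final error is raised by the cryptography library when an AES-CTR cipher is created with an empty nonce. A minimal sketch that reproduces the same ValueError (assuming the cryptography package is installed):

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = b"\x00" * 16  # hypothetical 128-bit AES key
    nonce = b""         # zero-length nonce, as in the traceback above
    # AES-CTR requires a 16-byte nonce, so this raises:
    # ValueError: Invalid nonce size (0) for CTR.
    Cipher(algorithms.AES(key), modes.CTR(nonce), backend=default_backend())

A zero-length nonce at this point suggests the decoder read an empty IV from the model file, which can happen when the path in the spec does not point to a valid encrypted .etlt/.tlt file.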

Hi,
Can you try running your model with the trtexec command, and share the --verbose log if the issue persists?
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
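
For example, a typical invocation (hypothetical model path; trtexec consumes ONNX/UFF/Caffe models, so the network would need to be in one of those formats):

    trtexec --onnx=model.onnx --verbose --saveEngine=model.engine 2>&1 | tee trtexec_verbose.log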

You can also refer to the list of supported operators; if any operator is not supported, you will need to create a custom plugin to support that operation.

Also, please share your model and script if you have not already, so that we can help you better.

Meanwhile, for some common errors and queries, please refer to the links below:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/#error-messaging
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/#faq

Thanks!