root@139aee1557cf:/workspace/projects/peopleNet/tlt_peoplenet_pruned_v2.0# tlt-infer detectnet_v2 -e detectnet_v2_inference_kitti_etlt.txt -i zoom.png -o test.png -k tlt_encode
Using TensorFlow backend.
2020-11-12 13:53:58.101802: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-11-12 13:54:00,945 [INFO] iva.detectnet_v2.scripts.inference: Overlain images will be saved in the output path.
2020-11-12 13:54:00,945 [INFO] iva.detectnet_v2.inferencer.build_inferencer: Constructing inferencer
2020-11-12 13:54:01,681 [INFO] iva.detectnet_v2.inferencer.trt_inferencer: Reading from engine file at: peoplenet.engine
2020-11-12 13:54:02,807 [INFO] iva.detectnet_v2.scripts.inference: Initialized model
2020-11-12 13:54:02,807 [INFO] iva.detectnet_v2.scripts.inference: Commencing inference
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/usr/local/bin/tlt-infer", line 8, in <module>
    sys.exit(main())
  File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/715c8bafe7816f3bb6f309cd506049bb/execroot/ai_infra/bazel-out/k8-py3-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/magnet_infer.py", line 54, in main
  File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/715c8bafe7816f3bb6f309cd506049bb/execroot/ai_infra/bazel-out/k8-py3-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/detectnet_v2/scripts/inference.py", line 194, in main
  File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/715c8bafe7816f3bb6f309cd506049bb/execroot/ai_infra/bazel-out/k8-py3-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/detectnet_v2/scripts/inference.py", line 150, in inference_wrapper_batch
  File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/715c8bafe7816f3bb6f309cd506049bb/execroot/ai_infra/bazel-out/k8-py3-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/detectnet_v2/inferencer/trt_inferencer.py", line 443, in infer_batch
ValueError: could not broadcast input array from shape (4,544,960) into shape (3,544,960)
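For context, the broadcast failure above can be reproduced outside of tlt-infer: the loaded image array has 4 channels (as a PNG with an alpha channel would) while the engine's input buffer expects 3 channels at 544x960. A minimal NumPy sketch of the same failure mode (the array names here are illustrative, not from the TLT code):

```python
import numpy as np

# A 4-channel, channels-first image, e.g. an RGBA PNG loaded as (C, H, W).
rgba = np.zeros((4, 544, 960), dtype=np.float32)

# The TensorRT engine's input buffer is 3-channel: (3, 544, 960).
buf = np.zeros((3, 544, 960), dtype=np.float32)

msg = ""
try:
    buf[...] = rgba  # NumPy cannot broadcast 4 channels into 3
except ValueError as err:
    msg = str(err)

print(msg)  # same "could not broadcast input array" error as the traceback
```

This suggests the fix lies in the image itself (dropping the alpha channel) rather than in the inference spec file.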
Thanks for pointing it out. I updated the target classes, and now I am running into a different issue. Please see the error below. Can you suggest whether any preprocessing is needed on the image files?
tlt_peoplenet_pruned_v2.0# tlt-infer detectnet_v2 -e detectnet_v2_inference_kitti_etlt.txt -i zoom.png -o test.png -k tlt_encode
Using TensorFlow backend.
2020-11-13 16:03:46.236092: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-11-13 16:03:48,297 [INFO] iva.detectnet_v2.scripts.inference: Overlain images will be saved in the output path.
2020-11-13 16:03:48,297 [INFO] iva.detectnet_v2.inferencer.build_inferencer: Constructing inferencer
2020-11-13 16:03:48,552 [INFO] iva.detectnet_v2.inferencer.trt_inferencer: Reading from engine file at: peoplenet.engine
2020-11-13 16:03:49,390 [INFO] iva.detectnet_v2.scripts.inference: Initialized model
2020-11-13 16:03:49,390 [INFO] iva.detectnet_v2.scripts.inference: Commencing inference
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/usr/local/bin/tlt-infer", line 8, in <module>
    sys.exit(main())
  File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/715c8bafe7816f3bb6f309cd506049bb/execroot/ai_infra/bazel-out/k8-py3-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/magnet_infer.py", line 54, in main
  File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/715c8bafe7816f3bb6f309cd506049bb/execroot/ai_infra/bazel-out/k8-py3-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/detectnet_v2/scripts/inference.py", line 194, in main
  File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/715c8bafe7816f3bb6f309cd506049bb/execroot/ai_infra/bazel-out/k8-py3-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/detectnet_v2/scripts/inference.py", line 150, in inference_wrapper_batch
  File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/715c8bafe7816f3bb6f309cd506049bb/execroot/ai_infra/bazel-out/k8-py3-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/detectnet_v2/inferencer/trt_inferencer.py", line 443, in infer_batch
ValueError: could not broadcast input array from shape (4,544,960) into shape (3,544,960)
Thanks for your responses. I corrected the output_map, but the input array broadcast error still persists. Please find detectnet_v2_inference_kitti_etlt.txt (1.7 KB) attached.
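Since the error reports shape (4,544,960) where (3,544,960) is expected, the likely culprit is that zoom.png carries an alpha channel (RGBA) while the model expects 3-channel RGB. A minimal preprocessing sketch, assuming Pillow is installed (the function name and output path are illustrative):

```python
from PIL import Image

def to_rgb(path_in: str, path_out: str) -> None:
    """Convert a possibly-RGBA/palette PNG to plain 3-channel RGB
    so it matches the DetectNet_v2 engine's 3-channel input."""
    img = Image.open(path_in)
    if img.mode != "RGB":
        img = img.convert("RGB")  # drops the alpha channel (RGBA/LA/P -> RGB)
    img.save(path_out)
```

For example, running `to_rgb("zoom.png", "zoom_rgb.png")` and pointing tlt-infer at the converted file should avoid the 4-vs-3 channel mismatch.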