Getting no output from detectnet in jetson-inference

Hi,
You are doing a good job; thank you for the effort.

I trained PeopleNet on custom images using the TAO Toolkit detectnet_v2 pipeline. After training, I converted the .hdf5 model to ONNX on Ubuntu using the tao model detectnet_v2 export command.
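For reference, that export step looks roughly like the following. This is a sketch only; the exact flags vary across TAO versions, with -m being the trained .hdf5 model, -e the training spec file (placeholder name here), and -o the ONNX output path:

tao model detectnet_v2 export -m model.hdf5 -e train_spec.txt -o model.onnx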

I then used the command below to generate a TensorRT engine file on the Jetson Nano:

/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --explicitBatch --inputIOFormats=fp32:chw --outputIOFormats=fp32:chw --shapes=input_1:0:1x3x304x400 --fp16 --verbose
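Before building the engine, it can help to confirm the tensor names and shapes that --shapes above and detectnet's --input-blob/--output-* flags must match. A minimal sketch using the onnx Python package, where model.onnx is the file from the export step:

import onnx

# Print the graph's input/output names and shapes so they can be checked
# against the trtexec and detectnet arguments.
model = onnx.load("model.onnx")
for tensor in model.graph.input:
    dims = [d.dim_value or d.dim_param for d in tensor.type.tensor_type.shape.dim]
    print("input:", tensor.name, dims)
for tensor in model.graph.output:
    dims = [d.dim_value or d.dim_param for d in tensor.type.tensor_type.shape.dim]
    print("output:", tensor.name, dims)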

I copied the engine file into jetson-inference and ran:

detectnet --model=model.engine --input-blob=input_1:0 --output-cvg=output_cov/Sigmoid:0 --output-bbox=output_bbox/BiasAdd:0 /path/to/your/video.avi
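(The "loaded 0 class labels" warnings further down come from not passing a labels file, so detectnet fills in default class names. A hedged variant of the same command with an explicit labels file, where labels.txt is assumed to contain one class name per line:)

detectnet --model=model.engine --labels=labels.txt \
          --input-blob=input_1:0 \
          --output-cvg=output_cov/Sigmoid:0 \
          --output-bbox=output_bbox/BiasAdd:0 \
          /path/to/your/video.avi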

It runs, but there are no detections.

I also tried the following code:

import jetson.inference
import jetson.utils

# Load the model (straight quotes, and pass the engine via the model= keyword)
net = jetson.inference.detectNet(model="model.engine", threshold=0.1,
                                 input_blob="input_1:0",
                                 output_cvg="output_cov/Sigmoid:0",
                                 output_bbox="output_bbox/BiasAdd:0")

# Open the video source
camera = jetson.utils.videoSource("Recording_cam2_1626.avi")

# Create the video output
display = jetson.utils.videoOutput()

while camera.IsStreaming() and display.IsStreaming():
    # Capture a frame from the video source
    img = camera.Capture()
    if img is None:  # capture timeout
        continue
    print(img)

    # Perform object detection
    detections = net.Detect(img)

    # Print the number of detections for debugging
    print(f"Number of detections: {len(detections)}")

    # Print the detection details for debugging
    print("detections", detections)
    for detection in detections:
        print("detection", detection)
        print(f"Detection: Class={detection.ClassID}, Confidence={detection.Confidence}, "
              f"Left={detection.Left}, Top={detection.Top}, "
              f"Right={detection.Right}, Bottom={detection.Bottom}")

    display.Render(img)
    display.SetStatus("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))

# Close the video source and display
camera.Close()
display.Close()

I see no detections in the output:

Number of detections: 0
detections

-- ptr: 0x20e3da000
-- size: 6220800
-- width: 1920
-- height: 1080
-- channels: 3
-- format: rgb8
-- mapped: true
-- freeOnDelete: false
-- timestamp: 3.240000

I also see these warnings in the log:

[TRT] binding to input 0 input_1:0 binding index: 0
[TRT] binding to input 0 input_1:0 dims (b=1 c=1 h=3 w=304) size=1459200
[TRT] binding to output 0 output_cov/Sigmoid:0 binding index: 1
[TRT] binding to output 0 output_cov/Sigmoid:0 dims (b=1 c=1 h=3 w=19) size=5700
[TRT] binding to output 1 output_bbox/BiasAdd:0 binding index: 2
[TRT] binding to output 1 output_bbox/BiasAdd:0 dims (b=1 c=1 h=12 w=19) size=22800
[TRT] device GPU, initialized model.engine
[TRT] detectNet -- number of object classes: 1
[TRT] detectNet -- maximum bounding boxes: 57
[TRT] loaded 0 class labels
[TRT] didn't load expected number of class descriptions (0 of 1)
[TRT] filling in remaining 1 class descriptions with default labels
[TRT] detectNet -- number of object classes: 1
[TRT] loaded 0 class colors
[TRT] didn't load expected number of class colors (0 of 1)
[TRT] filling in remaining 1 class colors with default colors
[gstreamer] initialized gstreamer, version 1.16.3.0
[gstreamer] gstDecoder -- creating decoder for Recording_cam2_1626.avi
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 260
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 260

Please let me know why there are no detections and how to fix this.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Please check whether the .hdf5 model works:

$tao model detectnet_v2 inference -e $SPECS_DIR/detectnet_v2_inference_kitti_tlt.txt -r result -i test_samples

Refer to cell 9 in tao_tutorials/notebooks/tao_launcher_starter_kit/detectnet_v2/detectnet_v2.ipynb at main · NVIDIA/tao_tutorials · GitHub.

Please log in to the TAO deploy docker:
$ docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tao/tao-toolkit:5.2.0-deploy /bin/bash
Then run trtexec again inside the docker. The command is in TRTEXEC with DetectNet-v2 - NVIDIA Docs.
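A minimal sketch of that trtexec run, mirroring the flags from the Nano command earlier in this thread (the authoritative version is on the linked docs page):

trtexec --onnx=model.onnx --saveEngine=model.engine \
        --shapes=input_1:0:1x3x304x400 --fp16 --verbose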

Then, still inside the docker, run inference with the detectnet_v2 inference xxx command to check whether it works. Refer to cell 11.

Since jetson-inference is not an official part of TAO, I suggest you run the above experiments to narrow down the issue.
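If the TAO-side checks pass, the engine can also be queried directly with the TensorRT Python API to see whether the coverage output produces any activations at all. A rough sketch, assuming TensorRT 8.x (legacy bindings API) and pycuda on the Jetson; the engine path, tensor names, and the 1x3x304x400 input shape come from the commands above, and a real preprocessed frame should be substituted for the random input before drawing conclusions:

import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401 -- initializes the CUDA context
import pycuda.driver as cuda

logger = trt.Logger(trt.Logger.WARNING)
with open("model.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host and device buffers for every binding.
host_bufs, dev_bufs = [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.zeros(trt.volume(shape), dtype=dtype)
    host_bufs.append(host)
    dev_bufs.append(cuda.mem_alloc(host.nbytes))

# Placeholder input -- replace with a real frame, preprocessed the same way
# as during training, before judging the outputs.
host_bufs[0][:] = np.random.rand(host_bufs[0].size).astype(host_bufs[0].dtype)
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])

context.execute_v2([int(d) for d in dev_bufs])

# If output_cov/Sigmoid:0 stays near zero even on a frame that clearly
# contains people, the engine itself (not jetson-inference) is the problem.
for i in range(1, engine.num_bindings):
    cuda.memcpy_dtoh(host_bufs[i], dev_bufs[i])
    print(engine.get_binding_name(i), "min/max:", host_bufs[i].min(), host_bufs[i].max())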
