DeepStream inference gives no detection

Which toolkit do you have? Can you share the output of
$ tlt info --verbose
or
$ tao info --verbose

I tested again with the latest 3.21.08-py3 docker image. DeepStream can get detections with the TensorRT INT8 engine. You can try my steps.
$ docker pull nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.08-py3

Then,
$ docker run --runtime=nvidia -it --rm -v localfolder:/dockerfolder nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.08-py3 /bin/bash
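Here, -v maps a local folder into the container. For example, assuming your KITTI data and spec files live under /home/user/tao on the host (an illustrative path only):

$ docker run --runtime=nvidia -it --rm -v /home/user/tao:/workspace/tao nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.08-py3 /bin/bash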

Then:

  • Run a training with the cspdarknet53 backbone on the KITTI dataset.
    Only run for 10 epochs, then take the resulting .tlt model (a rough
    command sketch follows this list).
  • Generate the .etlt model and also the TensorRT INT8 engine.
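A rough sketch of the training command for the first step (the results folder is a placeholder, and spec.txt is the same experiment spec reused for export below):

$ yolo_v4 train -e spec.txt -r /workspace/results -k nvidia_tlt --gpus 1

For the second step, the export command I used: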

yolo_v4 export -k nvidia_tlt -m epoch_010.tlt -e spec.txt --engine_file 384_1248.engine --data_type int8 --batch_size 8 --batches 10 --cal_cache_file export/cal.bin --cal_data_file export/cal.tensorfile --cal_image_dir /kitti_path/training/image_2 -o 384_1248.etlt
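Here, -k is the encryption key, -m the trained .tlt model, and the --cal_* options drive INT8 calibration: --batches 10 with --batch_size 8 means 80 images are sampled from --cal_image_dir to build cal.bin.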

Then copy the cal.bin and the .etlt file to another machine (mine has a GeForce 1080 Ti) and run inference as below.

morganh@dl:/opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/deepstream_tao_apps$ ./apps/tao_detection/ds-tao-detection -c ./configs/yolov4_tao/pgie_yolov4_tao_config.txt -i /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264 -d
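Here -c points at the nvinfer config, -i at the input stream, and -d enables the on-screen display. This assumes the deepstream_tao_apps repo is already cloned and built; if not, a rough build sketch (set CUDA_VER to match your installed CUDA version; exact steps may differ per release, see the repo README):

$ git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
$ cd deepstream_tao_apps
$ export CUDA_VER=11.4
$ make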

$ cat ./configs/yolov4_tao/pgie_yolov4_tao_config.txt
[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=yolov4_labels_kitti.txt
model-engine-file=../../models/yolov4/kitti/384_1248_cspdarknet53.etlt_b1_gpu0_int8.engine
int8-calib-file=../../models/yolov4/kitti/cal.bin
tlt-encoded-model=../../models/yolov4/kitti/384_1248_cspdarknet53.etlt
tlt-model-key=nvidia_tlt
infer-dims=3;384;1248
maintain-aspect-ratio=1
uff-input-order=0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=3
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
cluster-mode=3
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=../../post_processor/libnvds_infercustomparser_tao.so

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

$ cat configs/yolov4_tao/yolov4_labels_kitti.txt
car
cyclist
pedestrian
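Note: the order of the labels in this file has to match the class order the model was trained with; otherwise the detections will come out with the wrong labels.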

Thanks, I will try this now.
Now I have the engine file with me.

To measure the inference performance:

I can't find the trtexec file. When I search for trtexec, I get a location pointing to the TensorRT sources. Should I build trtexec there and run it from that location?

For trtexec: if TensorRT is installed on a device, the code to build trtexec is in /usr/src/tensorrt/samples/trtexec/, where you can run make to build it.

Once it's built, it should be located in /usr/src/tensorrt/bin, or a similar path.
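A minimal sketch of those steps (sudo may be needed because /usr/src is usually owned by root):

$ cd /usr/src/tensorrt/samples/trtexec
$ sudo make
$ /usr/src/tensorrt/bin/trtexec --help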

Doc:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#trtexec
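Once built, a rough sketch of a performance run against the engine generated above (the engine path is a placeholder; --batch assumes an implicit-batch engine, which is what the tlt/tao export above produces):

$ /usr/src/tensorrt/bin/trtexec --loadEngine=384_1248.engine --batch=8

trtexec prints throughput and latency statistics at the end of the run.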


Thanks, this worked.

For an mp4 stream with YOLOv4, can I use these apps, given that they include:

  • 2D Bodypose
  • Facial Landmarks Estimation
  • EmotionNet
  • Gaze Estimation
  • GestureNet
  • HeartRateNet

These are very different from my use case. Also, can you tell me what I should run to get live metadata of the inference? Thank you.

The apps I mentioned can run with an mp4 file, so you can leverage their code to see how an mp4 file is handled.
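For example, a sketch of running the same detection app on an mp4 (the file path is a placeholder; depending on the deepstream_tao_apps release, the input may need to be given as a file:// URI instead):

$ ./apps/tao_detection/ds-tao-detection -c ./configs/yolov4_tao/pgie_yolov4_tao_config.txt -i /path/to/video.mp4 -d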


Ok, thanks.