DeepStream object detection

• Hardware Platform : Jetson
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6.1-b110
Hello, I trained a YOLOv3 on KITTI using TAO on an Azure cloud instance. Now I have the following problem when trying to run the model with DeepStream on the Xavier:


0:05:17.146164547 12439     0x2fd26e60 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /home/xavier/Downloads/yolov4_resnet18_fp32_epoch_080.etlt_b1_gpu0_fp32.engine successfully
INFO: [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT Input           3x384x1248      
1   OUTPUT kINT32 BatchedNMS      1               
2   OUTPUT kFLOAT BatchedNMS_1    200x4           
3   OUTPUT kFLOAT BatchedNMS_2    200             
4   OUTPUT kFLOAT BatchedNMS_3    200             

0:05:17.265996982 12439     0x2fd26e60 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
0:05:17.711518430 12439     0x2f714280 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:05:17.711775850 12439     0x2f714280 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:735> [UID = 1]: Failed to parse bboxes
Segmentation fault (core dumped)

Moving to TAO forum.
Could you share the config file?

This is the config file


[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
#proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
#model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
#labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
#int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin

tlt-encoded-model=/home/xavier/Downloads/yolov4_resnet18_fp32_epoch_080.etlt
tlt-model-key=nvidia_tlt
labelfile-path=README1.txt
force-implicit-batch-dim=1
batch-size=1
network-mode=0
num-detected-classes=3
interval=0
gie-unique-id=1
output-blob-names=BatchedNMS
#scaling-filter=0
#scaling-compute-hw=0
cluster-mode=2

[class-attrs-all]
pre-cluster-threshold=0.2
topk=20
nms-iou-threshold=0.5

Can you add the settings from deepstream_tao_apps/pgie_yolov3_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub to your config?
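For reference, a minimal sketch of the relevant additions from that file, assuming the custom parser library from deepstream_tao_apps has been built (the custom-lib-path below is illustrative and depends on where you compiled it):

# custom parser for the TAO BatchedNMS outputs listed in the engine info above
output-blob-names=BatchedNMS;BatchedNMS_1;BatchedNMS_2;BatchedNMS_3
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/path/to/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so

Without parse-bbox-func-name, nvinfer falls back to its default DetectNet_v2-style parser, which looks for a coverage layer and produces exactly the "Could not find output coverage layer for parsing objects" error shown above.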

When trying to compile libnvds_infercustomparser_tao.so, I get the following error:

nvdsinfer_custombboxparser_tao.cpp:25:10: fatal error: nvdsinfer_custom_impl.h: No such file or directory
 #include "nvdsinfer_custom_impl.h"

It might be related to your environment. Please check the Prerequisites section of the GitHub repo.
Or you can try some older branches.
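That header ships with the DeepStream SDK, so the build has to run on a machine where the SDK's sources/includes directory is present. A minimal sketch, assuming JetPack 4.6.1 (CUDA 10.2) and DeepStream installed under /opt/nvidia/deepstream/deepstream; the exact variables the Makefile expects may differ by branch:

export CUDA_VER=10.2
cd deepstream_tao_apps/post_processor
make

If /opt/nvidia/deepstream/deepstream/sources/includes does not exist on the device, the DeepStream SDK install is likely incomplete, which is what the Prerequisites check above is meant to catch.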

Yeah, solved that one. Now I have a pyds error:

AttributeError: module 'pyds' has no attribute 'gst_buffer_get_nvds_batch_meta'

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

We suggest you double-check the environment. If it is a DeepStream issue, please create a topic in the DeepStream forum instead.
