Cannot run my custom YOLOv3-tiny with the DeepStream Python bindings on Jetson Nano

When I run the command below to launch my custom YOLOv3-tiny model with the DeepStream Python bindings, the app hangs and produces no further output.

$ sudo python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4


Creating Pipeline 

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4
Unknown or legacy key specified ‘is-classifier’ for group [property]
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

Using winsys: x11
Opening in BLOCKING MODE
Deserialize yoloLayerV3 plugin: yolo_17
Deserialize yoloLayerV3 plugin: yolo_24
0:00:10.978689944 2058 0x6260190 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/model_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT data 3x416x416
1 OUTPUT kFLOAT yolo_17 45x13x13
2 OUTPUT kFLOAT yolo_24 45x26x26

0:00:10.979101819 2058 0x6260190 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/model_b1_gpu0_fp32.engine
0:00:11.088726038 2058 0x6260190 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261

No errors are shown, and the code does not move past this point.

Hi,

If you have run the app before, DeepStream serializes the converted TensorRT engine into a file for the next launch.
Once the model is updated, please delete the serialized engine so the app can re-generate it.

Based on the log, the app seems to be reusing the serialized engine rather than re-creating it:

... deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/model_b1_gpu0_fp32.engine

Would you mind deleting the file and trying again?
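For example (engine path copied from the log above; adjust it if your config points elsewhere):

```shell
# Delete the stale serialized engine; DeepStream will rebuild it from the
# .cfg/.weights on the next launch (expect the rebuild to take a while on a Nano).
ENGINE=/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/model_b1_gpu0_fp32.engine
rm -f "$ENGINE"   # -f: no error if the file is already gone
```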

Thanks.


Thank you for such a lightning-fast response. It turns out I was passing a .mp4 file while the program was expecting a raw .h264 stream, which is why it wasn’t moving ahead.
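In case it helps others: deepstream_test_1.py pushes the file through h264parse, so it needs a raw H.264 elementary stream, not an mp4 container. One way to produce one, assuming ffmpeg is installed, is a small helper like this:

```shell
# Extract the H.264 bitstream from an mp4 container without re-encoding:
# -c:v copy keeps the video as-is, -bsf:v h264_mp4toannexb rewraps it as an
# Annex-B elementary stream, and -an drops the audio track.
to_h264() {
  ffmpeg -y -i "$1" -c:v copy -bsf:v h264_mp4toannexb -an "${1%.mp4}.h264"
}
```

Then `to_h264 sample_720p.mp4` writes sample_720p.h264 next to the source. (The DeepStream 5.0 samples also ship a ready-made sample_720p.h264 alongside the mp4 in samples/streams.)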

I’ve given the path to this engine file in the app’s config file, so if I delete it, won’t I get an error?

Hi,

The engine file is generated by the DeepStream app.
If no engine file exists, it will be re-created from the .cfg and .weights files.
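For reference, the relevant keys in a typical objectDetector_Yolo nvinfer config look like the following (file names are illustrative, based on the DeepStream 5.0 sample; check your own dstest1_pgie_config.txt):

```ini
[property]
# Darknet sources the engine is rebuilt from when no engine file is found
custom-network-config=yolov3-tiny.cfg
model-file=yolov3-tiny.weights
# If the file below is missing, nvinfer regenerates the engine and writes it here
model-engine-file=model_b1_gpu0_fp32.engine
```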

Thanks.


Okay, got it. Thanks