Jetson Nano python examples not working

Hello,

I am using JetPack 4.4 and DeepStream 5.0.

I’m having problems running any of the Python example scripts on the Jetson Nano. Using deepstream-test3 with the sample video sample_1080p_h264.mp4, I get the following error:

Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating tiler
Creating nvvidconv
Creating nvosd
Creating transform
Creating EGLSink

Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Adding elements to Pipeline 

Linking elements in the Pipeline 

Now playing...
1 :  /home/nano/Videos/sample_1080p_h264.mp4
Starting pipeline 


Using winsys: x11 
0:00:00.461019913 10167      0xfbce190 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
Warning, setting batch size to 1. Update the dimension after parsing due to using explicit batch size.
WARNING: INT8 not supported by platform. Trying FP16 mode.
ERROR: [TRT]: Network has dynamic or shape inputs, but no optimization profile has been defined.
ERROR: [TRT]: Network validation failed.
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:02.330951457 10167      0xfbce190 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
0:00:02.331426721 10167      0xfbce190 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1697> [UID = 1]: build backend context failed
0:00:02.331477138 10167      0xfbce190 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1024> [UID = 1]: generate backend failed, check config file settings
0:00:02.331521201 10167      0xfbce190 WARN                 nvinfer gstnvinfer.cpp:781:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:02.331548076 10167      0xfbce190 WARN                 nvinfer gstnvinfer.cpp:781:gst_nvinfer_start:<primary-inference> error: Config file path: dstest3_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(781): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest3_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app

I’ve found other threads with similar issues, but I have not been able to fix this error. The C examples do work with minimal effort.

Thanks for the help,
Simon

The issue came from the config file referencing a non-existent .engine file for deepstream-test1.py. The path needs to be changed to the FP16 engine:
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine
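For reference, a sketch of how the relevant lines in dstest1_pgie_config.txt look after the change (the model-file key is from the stock sample config; the relative paths assume the default sample layout, so adjust them for your install):

```ini
[property]
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
# Point at the FP16 engine; the default INT8 engine is never built on
# the Nano, since INT8 is not supported on this platform:
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine
```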

For deepstream-test3.py, no model-engine-file is specified in the config. Does anyone have any ideas to fix the problem above?
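One workaround I’m considering, assuming the prebuilt FP16 engine from the Primary_Detector samples already exists on disk: point dstest3_pgie_config.txt at it explicitly so nvinfer loads the engine instead of attempting the failing build, e.g.

```ini
[property]
# Hypothetical addition: reuse the FP16 engine produced by the C samples
# (paths relative to the deepstream-test3 app directory; adjust for your
# install). If the file does not exist, nvinfer falls back to building.
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine
```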

Can you provide the info below:
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

Also note: if you do not provide an engine file, DeepStream will build the engine at startup, so log messages about rebuilding the engine are expected.
Did you use the built-in model or your own model? We have verified all the samples and they work.

Same error here.
Jetson Nano
DeepStream 5.0
JP: 4.4-b144
TensorRT:7.1.3-1+cuda10.2

Sorry for the late reply.
My system:
Jetson Nano
DeepStream 5.0
JetPack 4.4
TensorRT:7.1.0

I noticed that the .engine file is not specified in the Python test 3 example. I use the built-in model; it’s a fresh install of DeepStream 5.0. Do you have any idea what the issue could be?

I’ll try a re-flash of the SD card tonight and see if that sorts it out.

Thanks for the help.

DeepStream 5.0 DP is supported on JetPack 4.4 DP. You should use this version.

Same issue here running deepstream_test1_rtsp_out.py
JetPack 4.4
deepstream 5.0.0-1
tensorrt 7.1.3.0-1+cuda10.2

WARNING: INT8 not supported by platform. Trying FP16 mode.
ERROR: [TRT]: Network has dynamic or shape inputs, but no optimization profile has been defined.
ERROR: [TRT]: Network validation failed.
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:01.597721263 14491 0x1ae0d270 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
0:00:01.598173598 14491 0x1ae0d270 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1697> [UID = 1]: build backend context failed
0:00:01.598217607 14491 0x1ae0d270 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1024> [UID = 1]: generate backend failed, check config file settings

Please note:
JetPack 4.4 supports the upcoming DeepStream 5.0 release

  • DeepStream 5.0 Developer Preview is only supported with JetPack 4.4 Developer Preview.

I don’t understand; are you saying I don’t have the right versions? I have the versions below, which sound exactly like what you are describing. Or maybe I don’t have the ‘developer edition’?

JetPack 4.4
deepstream 5.0.0-1

JetPack 4.4 and JetPack 4.4 DP are different.
Here is the JP 4.4 DP link,
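A quick way to tell which one an image is: the first line of /etc/nv_tegra_release carries the L4T revision, and L4T 32.4.2 shipped with JetPack 4.4 DP while 32.4.3 shipped with JetPack 4.4 GA. A minimal sketch of that mapping (it covers only the two 4.4 variants from this thread; the GCID/board fields in the sample line are placeholders):

```python
# Sketch: map an L4T release string (first line of /etc/nv_tegra_release)
# to the JetPack version discussed in this thread.
import re

# L4T 32.4.2 = JetPack 4.4 Developer Preview; 32.4.3 = JetPack 4.4 GA.
L4T_TO_JETPACK = {
    "32.4.2": "JetPack 4.4 Developer Preview (DP)",
    "32.4.3": "JetPack 4.4 (GA)",
}

def jetpack_from_release(line: str) -> str:
    """Extract the 'R32 ... REVISION: 4.2' fields and look them up."""
    m = re.search(r"R(\d+).*REVISION:\s*([\d.]+)", line)
    if not m:
        return "unknown"
    version = f"{m.group(1)}.{m.group(2)}"
    return L4T_TO_JETPACK.get(version, f"unknown (L4T {version})")

# Example first line from a JetPack 4.4 DP image (GCID/board are placeholders):
sample = "# R32 (release), REVISION: 4.2, GCID: 20074772, BOARD: t210ref"
print(jetpack_from_release(sample))  # → JetPack 4.4 Developer Preview (DP)
```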