Running an engine file on Jetson Nano using DeepStream

Hello, I am trying to run a custom Faster R-CNN model which I trained on my laptop. I copied the engine file to a Jetson Nano and tried running it there with DeepStream. I created a config file and ran it with deepstream-app -c. I am new to DeepStream and deep learning. Can anyone help me solve these errors?

Thank you,

Creating LL OSD context new
0:00:01.332581373 5690 0x3b85e730 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): The engine plan file is not compatible with this version of TensorRT, expecting library version 5.1.6 got 5.1.5, please rebuild.
0:00:01.332696113 5690 0x3b85e730 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:useEngineFile(): Failed to create engine from file
0:00:01.333127055 5690 0x3b85e730 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
Weights for layer conv1_1 doesn’t exist
0:00:01.360603104 5690 0x3b85e730 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer conv1_1 doesn’t exist
0:00:01.360655657 5690 0x3b85e730 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
0:00:01.360706334 5690 0x3b85e730 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): Parameter check failed at: …/builder/Network.cpp::addConvolution::104, condition: kernelWeights.values != nullptr
error parsing layer type Convolution index 0
0:00:01.360753262 5690 0x3b85e730 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): Failed while parsing network
0:00:01.360944045 5690 0x3b85e730 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:01.361002118 5690 0x3b85e730 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Failed to create NvDsInferContext instance
0:00:01.361040452 5690 0x3b85e730 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Config file path: /home/jn1/deepstream_sdk_v4.0.1_jetson/samples/configs/deepstream-app/config_infer_primary_nano_copy.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
** ERROR: main:651: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie_classifier: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier:
Config file path: /home/jn1/deepstream_sdk_v4.0.1_jetson/samples/configs/deepstream-app/config_infer_primary_nano_copy.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
App run failed

See the error:
The engine plan file is not compatible with this version of TensorRT, expecting library version 5.1.6 got 5.1.5, please rebuild.
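The mismatch is between the TensorRT version that serialized the engine (5.1.5 on the laptop) and the version installed on the Nano (5.1.6). A serialized engine plan can only be deserialized by the exact TensorRT library version that built it. A minimal sketch of that check, with the two version strings from the log above hard-coded as illustrative values (on the devices themselves you would read them from the installed packages, e.g. `dpkg -l | grep nvinfer`):

```shell
# TensorRT engine plans are only loadable by the exact library version
# that built them, so the two versions must match exactly.
version_matches() {
  [ "$1" = "$2" ]
}

# Illustrative values taken from the error message above.
build_version="5.1.5"    # TensorRT on the laptop that built the engine
runtime_version="5.1.6"  # TensorRT on the Jetson Nano

if version_matches "$build_version" "$runtime_version"; then
  echo "versions match: engine should deserialize"
else
  echo "version mismatch: rebuild the engine on the Nano"
fi
```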

Did you follow https://github.com/NVIDIA-AI-IOT/deepstream_4.x_apps ?

Also, where did you generate the TRT engine?
Note that if you want to run a TRT engine on Nano, please generate the engine on the Nano itself instead of on your PC. TensorRT engines are not portable across library versions or GPU architectures.
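Since the engine must be rebuilt on the Nano, one option is to let deepstream-app do it for you by pointing the nvinfer config at the original Caffe model files; the log shows it already attempts this ("Trying to create engine from model files") but fails because the weights for conv1_1 cannot be found, which suggests the .caffemodel was not copied to the Nano or does not match the prototxt. A minimal sketch of the relevant [property] entries, with placeholder paths standing in for your own files:

```ini
[property]
# Point nvinfer at the original model files so it can build an engine
# on this device; the paths below are placeholders for your own files.
model-file=/home/jn1/models/faster_rcnn.caffemodel
proto-file=/home/jn1/models/faster_rcnn.prototxt
# Where the rebuilt engine should be cached; if this file is missing or
# incompatible with the local TensorRT, nvinfer regenerates it.
model-engine-file=/home/jn1/models/faster_rcnn.caffemodel_b1_fp16.engine
```

Make sure the .caffemodel file actually exists at the configured path on the Nano; the "Attempting to access NULL weights" errors indicate the parser found the prototxt but not the weights.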

Hi vishwasrajput66,

We haven’t heard back from you in a couple of weeks, so we are marking this topic closed.
Please open a new forum issue when you are ready and we’ll pick it up there.