Jetson inference failed to load detectNet model

Hi team
I am facing a problem running a pre-trained model on a Jetson Nano, and I have included the logs below so you can check whether anything is wrong. I followed the proper build-and-run procedure and got no errors during the build, but when I run the model it shows the error below. Please help.

detectnet --network=ssd-mobilenet-v1 /home/rgbsit/runtest/run_test_19-02-24/night1260X700.mp4
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstDecoder -- creating decoder for /home/rgbsit/runtest/run_test_19-02-24/night1260X700.mp4
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 260 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 260 
[gstreamer] gstDecoder -- discovered video resolution: 1260x700  (framerate 30.000000 Hz)
[gstreamer] gstDecoder -- discovered video caps:  video/mpeg, mpegversion=(int)4, systemstream=(boolean)false, profile=(string)simple, level=(string)1, codec_data=(buffer)000001b001000001b58913000001000000012000c48d8800f52764579463000001b24c61766335392e33372e313030, width=(int)1260, height=(int)700, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1, parsed=(boolean)true
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] filesrc location=/home/rgbsit/runtest/run_test_19-02-24/night1260X700.mp4 ! qtdemux ! queue ! mpeg4videoparse ! omxmpeg4videodec name=decoder ! video/x-raw(memory:NVMM) ! appsink name=mysink
[video]  created gstDecoder from file:///home/rgbsit/runtest/run_test_19-02-24/night1260X700.mp4
------------------------------------------------
gstDecoder video options:
------------------------------------------------
  -- URI: file:///home/rgbsit/runtest/run_test_19-02-24/night1260X700.mp4
     - protocol:  file
     - location:  /home/rgbsit/runtest/run_test_19-02-24/night1260X700.mp4
     - extension: mp4
  -- deviceType: file
  -- ioType:     input
  -- codec:      MPEG4
  -- codecType:  omx
  -- width:      1260
  -- height:     700
  -- frameRate:  30
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution:  1680x1050
[OpenGL] glDisplay -- X window resolution:    1680x1050
[OpenGL] glDisplay -- display device initialized (1680x1050)
[video]  created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
  -- URI: display://0
     - protocol:  display
     - location:  0
  -- deviceType: display
  -- ioType:     output
  -- width:      1680
  -- height:     1050
  -- frameRate:  0
  -- numBuffers: 4
  -- zeroCopy:   true
------------------------------------------------

detectNet -- loading detection network model from:
          -- model        networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff
          -- input_blob   'Input'
          -- output_blob  'Postprocessor'
          -- output_count 'PostProcessor_1'
          -- class_labels networks/SSD-Mobilenet-v1/ssd_coco_labels.txt
          -- threshold    0.500000
          -- batch_size   1

[TRT]    TensorRT version 8.2.1
[TRT]    loading NVIDIA plugins...
[TRT]    Registered plugin creator - ::GridAnchor_TRT version 1
[TRT]    Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT]    Registered plugin creator - ::NMS_TRT version 1
[TRT]    Registered plugin creator - ::Reorg_TRT version 1
[TRT]    Registered plugin creator - ::Region_TRT version 1
[TRT]    Registered plugin creator - ::Clip_TRT version 1
[TRT]    Registered plugin creator - ::LReLU_TRT version 1
[TRT]    Registered plugin creator - ::PriorBox_TRT version 1
[TRT]    Registered plugin creator - ::Normalize_TRT version 1
[TRT]    Registered plugin creator - ::ScatterND version 1
[TRT]    Registered plugin creator - ::RPROI_TRT version 1
[TRT]    Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT]    Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT]    Could not register plugin creator -  ::FlattenConcat_TRT version 1
[TRT]    Registered plugin creator - ::CropAndResize version 1
[TRT]    Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT]    Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT]    Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT]    Registered plugin creator - ::EfficientNMS_TFTRT_TRT version 1
[TRT]    Registered plugin creator - ::Proposal version 1
[TRT]    Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT]    Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT]    Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT]    Registered plugin creator - ::Split version 1
[TRT]    Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT]    Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT]    completed loading NVIDIA plugins.
[TRT]    detected model format - UFF  (extension '.uff')
[TRT]    desired precision specified for GPU: FASTEST
[TRT]    requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]    [MemUsageChange] Init CUDA: CPU +229, GPU +0, now: CPU 256, GPU 3674 (MiB)
[TRT]    [MemUsageSnapshot] Begin constructing builder kernel library: CPU 256 MiB, GPU 3674 MiB
[TRT]    [MemUsageSnapshot] End constructing builder kernel library: CPU 285 MiB, GPU 3704 MiB
[TRT]    native precisions detected for GPU:  FP32, FP16
[TRT]    selecting fastest native precision for GPU:  FP16
[TRT]    found engine cache file /usr/local/bin/networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff.1.1.8201.GPU.FP16.engine
[TRT]    found model checksum /usr/local/bin/networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff.sha256sum
[TRT]    echo "$(cat /usr/local/bin/networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff.sha256sum) /usr/local/bin/networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff" | sha256sum --check --status
[TRT]    model matched checksum /usr/local/bin/networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff.sha256sum
[TRT]    loading network plan from engine cache... /usr/local/bin/networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff.1.1.8201.GPU.FP16.engine
[TRT]    device GPU, loaded /usr/local/bin/networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff
[TRT]    [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 271, GPU 3719 (MiB)
[TRT]    Loaded engine size: 13 MiB
[TRT]    Using cublas as a tactic source
[TRT]    [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +158, GPU -23, now: CPU 447, GPU 3702 (MiB)
[TRT]    Using cuDNN as a tactic source
[TRT]    [MemUsageChange] Init cuDNN: CPU +240, GPU +191, now: CPU 687, GPU 3893 (MiB)
[TRT]    Deserialization required 4556556 microseconds.
[TRT]    [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +13, now: CPU 0, GPU 13 (MiB)
[TRT]    Using cublas as a tactic source
[TRT]    [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU -1, now: CPU 687, GPU 3891 (MiB)
[TRT]    Using cuDNN as a tactic source
[TRT]    [MemUsageChange] Init cuDNN: CPU +1, GPU -2, now: CPU 688, GPU 3889 (MiB)
[TRT]    Total per-runner device persistent memory is 12452864
[TRT]    Total per-runner host persistent memory is 72656
[TRT]    Allocated activation device memory of size 9281536
[TRT]    [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +21, now: CPU 0, GPU 34 (MiB)
[TRT]    
[TRT]    CUDA engine context initialized on device GPU:
[TRT]       -- layers       89
[TRT]       -- maxBatchSize 1
[TRT]       -- deviceMemory 9281536
[TRT]       -- bindings     3
[TRT]       binding 0
                -- index   0
                -- name    'Input'
                -- type    FP32
                -- in/out  INPUT
                -- # dims  3
                -- dim #0  3
                -- dim #1  300
                -- dim #2  300
[TRT]       binding 1
                -- index   1
                -- name    'Postprocessor'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1
                -- dim #1  100
                -- dim #2  7
[TRT]       binding 2
                -- index   2
                -- name    'Postprocessor_1'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1
                -- dim #1  1
                -- dim #2  1
[TRT]    
[TRT]    binding to input 0 Input  binding index:  0
[TRT]    binding to input 0 Input  dims (b=1 c=3 h=300 w=300) size=1080000
[TRT]    binding to output 0 Postprocessor  binding index:  1
[TRT]    binding to output 0 Postprocessor  dims (b=1 c=1 h=100 w=7) size=2800
[TRT]    3: Cannot find binding of given name: PostProcessor_1
[TRT]    failed to find requested output layer PostProcessor_1 in network
[TRT]    device GPU, failed to create resources for CUDA engine
[TRT]    failed to create TensorRT engine for /usr/local/bin/networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff, device GPU
[TRT]    detectNet -- failed to initialize.
detectnet:  failed to load detectNet model
jetson-inference/build$ dpkg -l | grep TensorR

ii  graphsurgeon-tf                               8.2.1-1+cuda10.2                           arm64        GraphSurgeon for TensorRT package
ii  libnvinfer-bin                                8.2.1-1+cuda10.2                           arm64        TensorRT binaries
ii  libnvinfer-dev                                8.2.1-1+cuda10.2                           arm64        TensorRT development libraries and headers
ii  libnvinfer-doc                                8.2.1-1+cuda10.2                           all          TensorRT documentation
ii  libnvinfer-plugin-dev                         8.2.1-1+cuda10.2                           arm64        TensorRT plugin libraries
ii  libnvinfer-plugin8                            8.2.1-1+cuda10.2                           arm64        TensorRT plugin libraries
ii  libnvinfer-samples                            8.2.1-1+cuda10.2                           all          TensorRT samples
ii  libnvinfer8                                   8.2.1-1+cuda10.2                           arm64        TensorRT runtime libraries
ii  libnvonnxparsers-dev                          8.2.1-1+cuda10.2                           arm64        TensorRT ONNX libraries
ii  libnvonnxparsers8                             8.2.1-1+cuda10.2                           arm64        TensorRT ONNX libraries
ii  libnvparsers-dev                              8.2.1-1+cuda10.2                           arm64        TensorRT parsers libraries
ii  libnvparsers8                                 8.2.1-1+cuda10.2                           arm64        TensorRT parsers libraries
ii  nvidia-container-csv-tensorrt                 8.2.1.8-1+cuda10.2                         arm64        Jetpack TensorRT CSV file
ii  python3-libnvinfer                            8.2.1-1+cuda10.2                           arm64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                        8.2.1-1+cuda10.2                           arm64        Python 3 development package for TensorRT
ii  tensorrt                                      8.2.1.8-1+cuda10.2                         arm64        Meta package of TensorRT
ii  uff-converter-tf                              8.2.1-1+cuda10.2                           arm64        UFF converter for TensorRT package

Hi,

3: Cannot find binding of given name: PostProcessor_1

Are you using a custom model?
If so, please update the input/output blob names accordingly. Note that the binding lookup is case-sensitive: the engine in your log exposes a binding named 'Postprocessor_1', while 'PostProcessor_1' (capital P) was requested.
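As a minimal sketch (not the detectNet source) of why the lookup in your log fails: the binding names below are copied from the engine dump above, and the requested name differs from the actual binding only in case, but the comparison is exact.

```python
def find_binding(engine_bindings, requested):
    """Return the index of `requested` among the engine's binding names,
    or None if there is no exact (case-sensitive) match."""
    for i, name in enumerate(engine_bindings):
        if name == requested:
            return i
    return None

# Binding names taken from the CUDA engine dump in the log above.
bindings = ["Input", "Postprocessor", "Postprocessor_1"]

print(find_binding(bindings, "PostProcessor_1"))  # None -> "Cannot find binding"
print(find_binding(bindings, "Postprocessor_1"))  # 2 -> exact case matches
```

So correcting the configured output name to match the engine's binding exactly should resolve the "Cannot find binding of given name" error.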

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.