ERROR: [TRT]: UffParser: Validator error: FirstDimTile_1: Unsupported operation _BatchTilePlugin_TRT

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): JetPack 4.6.3
• TensorRT Version: 8.2.1
• NVIDIA GPU Driver Version (valid for GPU only): CUDA 10.2
• Issue Type (questions, new requirements, bugs): questions

Hello, I trained a yolov4_resnet50 model in Docker (nvcr.io/nvidia/tlt-streamanalytics:v3.0-dp-py3) and got the .etlt file. When I try to deploy the file on my AGX, I get an error.

(gst-plugin-scanner:9088): GStreamer-WARNING **: 22:24:24.232: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
(gst-plugin-scanner:9088): GStreamer-WARNING **: 22:24:24.234: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***
Opening in BLOCKING MODE
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
0:00:03.203432639  9087   0x7f0c0022d0 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated.
Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
ERROR: Deserialize engine failed because file path: /home/py/Downloads/facenet/resnet18_detector.etlt_b1_gpu0_fp32.engine open error
0:00:05.079530093  9087   0x7f0c0022d0 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/home/py/Downloads/facenet/resnet18_detector.etlt_b1_gpu0_fp32.engine failed
0:00:05.099664027  9087   0x7f0c0022d0 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/home/py/Downloads/facenet/resnet18_detector.etlt_b1_gpu0_fp32.engine failed, try rebuild
0:00:05.099816356  9087   0x7f0c0022d0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Validator error: FirstDimTile_1: Unsupported operation _BatchTilePlugin_TRT
parseModel: Failed to parse UFF model
ERROR: Failed to build network, error in model parsing.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:05.855356847  9087   0x7f0c0022d0 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

Then I checked the TensorRT version in the TLT Docker container and on the AGX.
TLT Docker is


AGX is

What should I do?

Looking forward to your reply

Thank you

Please follow GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream
to build the TRT OSS lib, or directly use the prebuilt library under TRT-OSS/Jetson/TRT8.2/
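For reference, the library swap on a Jetson typically looks like the following sketch. The directory and the exact .so version are assumptions based on JetPack 4.6 / TensorRT 8.2.1 defaults and may differ on your board:

```shell
# Back up the stock plugin library (path/version assumed for JetPack 4.6 + TRT 8.2.1)
sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.2.1 \
        /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.2.1.backup

# Copy in the OSS build from TRT-OSS/Jetson/TRT8.2/ in the deepstream_tao_apps repo
sudo cp TRT-OSS/Jetson/TRT8.2/libnvinfer_plugin.so.8.2.1 \
        /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.2.1

# Refresh the dynamic loader cache so the replacement is picked up
sudo ldconfig
```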

Should I do this in the TLT Docker container or on the AGX?

Currently the TensorRT version on my AGX is 8.2, so I just need to replace the .so file?

Yes.

I replaced the .so file but still get the error.

How about running sudo ldconfig and trying again?
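As a sanity check after running ldconfig, you can confirm which copy of the plugin library the dynamic loader actually resolves; the library name comes from this thread, and the path shown in the comment is only the usual Jetson location, not confirmed for your setup:

```shell
sudo ldconfig
# List the loader's view of the plugin library; the resolved path should be the
# replaced OSS build (typically under /usr/lib/aarch64-linux-gnu/ on Jetson)
ldconfig -p | grep libnvinfer_plugin
```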

Still getting the error.

The AGX Orin has the same error.

Hi, what more information do I need to provide?

I am not sure why it did not work on your side. Can you move libnvinfer_plugin.so.8.2.1_backup out of this folder? If it still does not work, please build the library following the README and try again.

Now I can convert the .etlt file to an .engine file, but I have a new problem: when I run DeepStream it returns the following error.

NvMMLiteOpen : Block : BlockType = 4
                                                                                    
===== NVMEDIA: NVENC =====                                                                                              
NvMMLiteBlockCreate : Block : BlockType = 4
**PERF:  FPS 0 (Avg)    FPS 1 (Avg)
**PERF:  0.00 (0.00)    0.00 (0.00)
**PERF:  0.00 (0.00)    0.00 (0.00)
**PERF:  0.00 (0.00)    0.00 (0.00)
**PERF:  0.00 (0.00)    0.00 (0.00)
**PERF:  0.00 (0.00)    0.00 (0.00)
**PERF:  0.00 (0.00)    0.00 (0.00)                                                                                 
**PERF:  0.00 (0.00)    0.00 (0.00)                                                                                     
0:00:40.034610171 29666      0x16c92a0 ERROR  nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: 
Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects                                                   
0:00:40.034675482 29666      0x16c92a0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: 
Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:735> [UID = 1]: Failed to parse bboxes              
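(This "Could not find output coverage layer" error is what nvinfer's built-in DetectNet_v2-style parser reports, which suggests the custom TAO parser is not configured. A minimal sketch of the relevant nvinfer config entries, assuming the parser library built from the deepstream_tao_apps post_processor directory; the library path here is a placeholder for wherever you built it:)

```ini
[property]
# TAO YOLOv4 emits BatchedNMS outputs, not a coverage/bbox layer pair
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/path/to/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so
```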

Sorry for the late reply.
Is this still an issue?

Sorry for not replying in time. That problem has been solved, and the engine file I am using now runs, but the DeepStream output only shows the video without any bounding boxes.

Can you get correct output from TensorRT directly?
You used GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream, right?
You can also print the object bbox coordinates to see whether the parsed coordinates are correct or not; see NvDsInferParseCustomBatchedNMSTLT in nvdsinfer_custombboxparser_tao.cpp.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.