Setup: deepstream-custom sample, DeepStream SDK 5, CUDA Driver 10.2, CUDA Runtime 10.2, TensorRT 7.0.0.11, cuDNN 7.6.5, NVIDIA GTX 960M
I want to integrate a TLT FasterRCNN-ResNet50 model, trained for the 'hand' class, into the DeepStream SDK for inference.
I obtained the .etlt file with the tlt-export command inside the TLT Docker container.
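For context, the export step can be sketched roughly as below; the model path, spec file, and $KEY are placeholders, and the exact flags depend on the TLT version in use:

```shell
# Sketch only: export a trained FasterRCNN .tlt model to .etlt inside the
# TLT container. All paths and $KEY are placeholders, not the real values.
tlt-export faster_rcnn \
    -m /workspace/experiments/frcnn_resnet50_epoch12.tlt \
    -o /workspace/experiments/frcnn_resnet50.etlt \
    -e /workspace/specs/frcnn_spec.txt \
    -k $KEY \
    --data_type fp32
```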
I have built TensorRT OSS to get the cropAndResizePlugin that FasterRCNN needs for DeepStream deployment, following deepstream_tlt_apps,
and I can run the deepstream-custom sample successfully.
But when I use my own .etlt file in pgie_frcnn_tlt_config.txt with deepstream-custom, I get the following error:
Now playing: pgie_frcnn_tlt_config.txt
0:00:00.352753517 21517 0x563527917d80 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:759 FP16 not supported by platform. Using FP32 mode.
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: Parameter check failed at: …/builder/Network.cpp::addInput::957, condition: isValidDims(dims, hasImplicitBatchDimension())
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: UFFParser: Failed to parseInput for node input_1
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: UffParser: Parser error: input_1: Failed to parse node - Invalid Tensor found at node input_1
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:00.982060040 21517 0x563527917d80 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
Segmentation fault
My config file (pgie_frcnn_tlt_config.txt) is the following:
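For reference, the FRCNN sample config in deepstream_tlt_apps has a [property] section roughly like the sketch below. Every path, key, and dimension here is a placeholder, not the poster's actual settings; a mismatch between uff-input-dims / uff-input-blob-name and the exported model is a common cause of the "Failed to parseInput for node input_1" error above:

```
[property]
gpu-id=0
net-scale-factor=1.0
# placeholder paths -- point these at your own label file and .etlt model
labelfile-path=./frcnn_labels.txt
tlt-encoded-model=./models/frcnn/frcnn_resnet50.etlt
tlt-model-key=<key used with tlt-export>
# must match the input the model was exported with
uff-input-dims=3;544;960;0
uff-input-blob-name=input_image
batch-size=1
network-mode=0
num-detected-classes=2
gie-unique-id=1
parse-bbox-func-name=NvDsInferParseCustomFrcnnUff
custom-lib-path=./nvdsinfer_customparser_frcnn_uff/libnvds_infercustomparser_frcnn_uff.so
```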
When I run the sample YOLOv3 TLT model:
./deepstream-custom -c pgie_yolov3_tlt_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264
the following error occurs:
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Now playing: pgie_yolov3_tlt_config.txt
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:00.213974058 11548 0x558b573d00 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Validator error: FirstDimTile_2: Unsupported operation _BatchTilePlugin_TRT
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.777201415 11548 0x558b573d00 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
Segmentation fault (core dumped)
But when I run the FRCNN TLT models, everything works fine.
This is ONLY needed when running the SSD, DSSD, RetinaNet, and YOLOv3 models, because the BatchTilePlugin these models require is not in the native TensorRT 7.x package.
YOLOv3 requires batchTilePlugin, resizeNearestPlugin, and batchedNMSPlugin. These plugins are available in the TensorRT open source repo, but not in TensorRT 7.0. Detailed instructions to build TensorRT OSS can be found in TensorRT Open Source Software (OSS).
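As a rough sketch of that build (the branch, install path, and GPU_ARCHS value are assumptions; follow the TensorRT OSS README for your exact TensorRT version and platform):

```shell
# Sketch only: build the TensorRT OSS plugin library.
# Branch, library path, and GPU_ARCHS are assumptions -- verify against
# the TensorRT OSS README for your TensorRT version and platform.
git clone -b release/7.0 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu \
         -DGPU_ARCHS="50"   # 50 = compute capability of a GTX 960M (Maxwell)
make -j$(nproc) nvinfer_plugin
```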
Thank you for your help. I have replaced "libnvinfer_plugin.so.7.1.3*" with my own build. SSD, DSSD, RetinaNet, and DetectNet_v2 now run successfully, but YOLOv3 still fails:
Trying to create engine from model files
ERROR: [TRT]: IPluginV2DynamicExt requires network without implicit batch dimension
Segmentation fault (core dumped)
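For reference, the library replacement described above is typically done like this; the aarch64 path and the 7.1.3 suffix are assumptions matching a Jetson/JetPack 4.4 setup, so adjust both for your system:

```shell
# Sketch only: back up the stock plugin library and install the OSS build.
# Path and version suffix are assumptions for a Jetson/JetPack 4.4 system.
sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 \
        /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3.bak
sudo cp build/out/libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/
sudo ldconfig
```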
nvidia@nvidia-desktop:~/Public/deepstream_tlt_apps$ ./deepstream-custom -c pgie_yolov3_tlt_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264 -d
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Now playing: pgie_yolov3_tlt_config.txt
Using winsys: x11
Opening in BLOCKING MODE
0:00:00.201417692 29126 0x5593a1b870 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: IPluginV2DynamicExt requires network without implicit batch dimension
Segmentation fault (core dumped)
nvidia@nvidia-desktop:~/Public/deepstream_tlt_apps$