Status == STATUS_SUCCESS error

I am getting this error when trying to run the DSSD model:

sudo deepstream-transfer-learning-app -c ds_app.txt 

0:00:00.083427937  4290 0x5623395bc070 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: UFFParser: Did not find plugin field entry scoreBits in the Registered Creator for layer NMS
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:24.279772551  4290 0x5623395bc070 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-transfer-learning-app/configs/dssd_head/export18/dssd_resnet18_epoch_2.etlt_b1_gpu0_fp32.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT Input           3x300x300       
1   OUTPUT kFLOAT NMS             1x200x7         
2   OUTPUT kFLOAT NMS_1           1x1x1           

0:00:24.286733350  4290 0x5623395bc070 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-transfer-learning-app/configs/dssd_head/config_infer_primary_dssd.txt sucessfully

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF: FPS 0 (Avg)	
**PERF: 0.00 (0.00)	
** INFO: <bus_callback:181>: Pipeline ready

** INFO: <bus_callback:167>: Pipeline running

ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: Assertion failed: status == STATUS_SUCCESS
/home/bcao/code/gitlab/TRT7.2/oss/plugin/nmsPlugin/nmsPlugin.cpp:119
Aborting...

Aborted

deepstream-app version 5.1.0
DeepStreamSDK 5.1.0
CUDA Driver Version: 11.2
CUDA Runtime Version: 10.1
TensorRT Version: 7.2
cuDNN Version: 8.0
libNVWarp360 Version: 2.0.1d3

1. If you make no code or configuration modifications, can you run deepstream-transfer-learning-app successfully?
2. Did you make any code modifications? If so, please provide your diff and configuration file.

deepstream-transfer-learning-app runs fine with other models. The problem is only with models that require TensorRT OSS.

I installed TensorRT OSS:

git clone -b 21.03 https://github.com/nvidia/TensorRT
cd TensorRT/
git submodule update --init --recursive
export TRT_SOURCE=`pwd`
cd $TRT_SOURCE
mkdir -p build && cd build
/usr/local/bin/cmake ..  -DGPU_ARCHS=61  -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc  -DTRT_BIN_DIR=`pwd`/out

Building for TensorRT version: 7.2.2, library version: 7
-- Targeting TRT Platform: x86_64
-- CUDA version set to 10.1
-- cuDNN version set to 8.0
-- Protobuf version set to 3.0.0
-- Using libprotobuf /home/dell/TensorRT/build/third_party.protobuf/lib/libprotobuf.a
-- ========================= Importing and creating target nvinfer ==========================
-- Looking for library nvinfer
-- Library that was found /usr/lib/x86_64-linux-gnu/libnvinfer.so
-- ==========================================================================================
-- ========================= Importing and creating target nvuffparser ==========================
-- Looking for library nvparsers
-- Library that was found /usr/lib/x86_64-linux-gnu/libnvparsers.so
-- ==========================================================================================
-- GPU_ARCHS defined as 61. Generating CUDA code for SM 61
-- Protobuf proto/trtcaffe.proto -> proto/trtcaffe.pb.cc proto/trtcaffe.pb.h
-- /home/dell/TensorRT/build/parsers/caffe
Generated: /home/dell/TensorRT/build/parsers/onnx/third_party/onnx/onnx/onnx_onnx2trt_onnx-ml.proto
Generated: /home/dell/TensorRT/build/parsers/onnx/third_party/onnx/onnx/onnx-operators_onnx2trt_onnx-ml.proto
-- 
-- ******** Summary ********
--   CMake version         : 3.13.5
--   CMake command         : /usr/local/bin/cmake
--   System                : Linux
--   C++ compiler          : /usr/bin/g++
--   C++ compiler version  : 7.5.0
--   CXX flags             : -Wno-deprecated-declarations  -DBUILD_SYSTEM=cmake_oss -Wall -Wno-deprecated-declarations -Wno-unused-function -Wnon-virtual-dtor
--   Build type            : Release
--   Compile definitions   : _PROTOBUF_INSTALL_DIR=/home/dell/TensorRT/build;ONNX_NAMESPACE=onnx2trt_onnx
--   CMAKE_PREFIX_PATH     : 
--   CMAKE_INSTALL_PREFIX  : /usr/lib/x86_64-linux-gnu/..
--   CMAKE_MODULE_PATH     : 
-- 
--   ONNX version          : 1.6.0
--   ONNX NAMESPACE        : onnx2trt_onnx
--   ONNX_BUILD_TESTS      : OFF
--   ONNX_BUILD_BENCHMARKS : OFF
--   ONNX_USE_LITE_PROTO   : OFF
--   ONNXIFI_DUMMY_BACKEND : OFF
--   ONNXIFI_ENABLE_EXT    : OFF
-- 
--   Protobuf compiler     : 
--   Protobuf includes     : 
--   Protobuf libraries    : 
--   BUILD_ONNX_PYTHON     : OFF
-- Found TensorRT headers at /home/dell/TensorRT/include
-- Find TensorRT libs at /usr/lib/x86_64-linux-gnu/libnvinfer.so;/usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so;TENSORRT_LIBRARY_MYELIN-NOTFOUND
-- Could NOT find TENSORRT (missing: TENSORRT_LIBRARY) 
ERRORCannot find TensorRT library.
-- Adding new sample: sample_algorithm_selector
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_char_rnn
--     - Parsers Used: uff;caffe;onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_dynamic_reshape
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_fasterRCNN
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_googlenet
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_int8
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_int8_api
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mlp
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mnist
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mnist_api
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_movielens
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_movielens_mps
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_nmt
--     - Parsers Used: none
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_onnx_mnist
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_plugin
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_reformat_free_io
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_ssd
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_fasterRCNN
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_maskRCNN
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_mnist
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_uff_plugin_v2_ext
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_uff_ssd
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_onnx_mnist_coord_conv_ac
--     - Parsers Used: onnx
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: trtexec
--     - Parsers Used: caffe;uff;onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUBLASLT_LIB
    linked by target "nvinfer_plugin" in directory /home/dell/TensorRT/plugin
CUBLAS_LIB
    linked by target "nvinfer_plugin" in directory /home/dell/TensorRT/plugin
    linked by target "sample_algorithm_selector" in directory /home/dell/TensorRT/samples/opensource/sampleAlgorithmSelector
    linked by target "sample_char_rnn" in directory /home/dell/TensorRT/samples/opensource/sampleCharRNN
    linked by target "sample_dynamic_reshape" in directory /home/dell/TensorRT/samples/opensource/sampleDynamicReshape
    linked by target "sample_fasterRCNN" in directory /home/dell/TensorRT/samples/opensource/sampleFasterRCNN
    linked by target "sample_googlenet" in directory /home/dell/TensorRT/samples/opensource/sampleGoogleNet
    linked by target "sample_int8" in directory /home/dell/TensorRT/samples/opensource/sampleINT8
    linked by target "sample_int8_api" in directory /home/dell/TensorRT/samples/opensource/sampleINT8API
    linked by target "sample_mlp" in directory /home/dell/TensorRT/samples/opensource/sampleMLP
    linked by target "sample_mnist" in directory /home/dell/TensorRT/samples/opensource/sampleMNIST
    linked by target "sample_mnist_api" in directory /home/dell/TensorRT/samples/opensource/sampleMNISTAPI
    linked by target "sample_movielens" in directory /home/dell/TensorRT/samples/opensource/sampleMovieLens
    linked by target "sample_movielens_mps" in directory /home/dell/TensorRT/samples/opensource/sampleMovieLensMPS
    linked by target "sample_nmt" in directory /home/dell/TensorRT/samples/opensource/sampleNMT
    linked by target "sample_onnx_mnist" in directory /home/dell/TensorRT/samples/opensource/sampleOnnxMNIST
    linked by target "sample_plugin" in directory /home/dell/TensorRT/samples/opensource/samplePlugin
    linked by target "sample_reformat_free_io" in directory /home/dell/TensorRT/samples/opensource/sampleReformatFreeIO
    linked by target "sample_ssd" in directory /home/dell/TensorRT/samples/opensource/sampleSSD
    linked by target "sample_uff_fasterRCNN" in directory /home/dell/TensorRT/samples/opensource/sampleUffFasterRCNN
    linked by target "sample_uff_maskRCNN" in directory /home/dell/TensorRT/samples/opensource/sampleUffMaskRCNN
    linked by target "sample_uff_mnist" in directory /home/dell/TensorRT/samples/opensource/sampleUffMNIST
    linked by target "sample_uff_plugin_v2_ext" in directory /home/dell/TensorRT/samples/opensource/sampleUffPluginV2Ext
    linked by target "sample_uff_ssd" in directory /home/dell/TensorRT/samples/opensource/sampleUffSSD
    linked by target "sample_onnx_mnist_coord_conv_ac" in directory /home/dell/TensorRT/samples/opensource/sampleOnnxMnistCoordConvAC
    linked by target "trtexec" in directory /home/dell/TensorRT/samples/opensource/trtexec
TENSORRT_LIBRARY_MYELIN
    linked by target "nvonnxparser_static" in directory /home/dell/TensorRT/parsers/onnx
    linked by target "nvonnxparser" in directory /home/dell/TensorRT/parsers/onnx

-- Configuring incomplete, errors occurred!
See also "/home/dell/TensorRT/build/CMakeFiles/CMakeOutput.log".
See also "/home/dell/TensorRT/build/CMakeFiles/CMakeError.log".
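For what it's worth, `NOTFOUND` library variables like `CUBLAS_LIB` and `CUBLASLT_LIB` can sometimes be worked around by locating the libraries and passing their paths to cmake explicitly. A sketch only, assuming a typical Ubuntu CUDA layout; the paths below are assumptions and should be verified with `find` on the actual machine (the same approach would apply to `TENSORRT_LIBRARY_MYELIN` if `libmyelin.so` is present):

```shell
# First confirm where cuBLAS/cuBLASLt actually live on this machine;
# the /usr/lib/x86_64-linux-gnu paths below are typical but not guaranteed.
find /usr -name 'libcublas.so*' 2>/dev/null
find /usr -name 'libcublasLt.so*' 2>/dev/null

# Then re-run cmake with the NOTFOUND variables set explicitly:
/usr/local/bin/cmake .. -DGPU_ARCHS=61 \
  -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ \
  -DCMAKE_C_COMPILER=/usr/bin/gcc \
  -DTRT_BIN_DIR=`pwd`/out \
  -DCUBLAS_LIB=/usr/lib/x86_64-linux-gnu/libcublas.so \
  -DCUBLASLT_LIB=/usr/lib/x86_64-linux-gnu/libcublasLt.so
```

Note this only papers over the symptom; if the libraries are genuinely missing for the CUDA version in use, the build will still fail at link time.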

Then, because the build failed, I used the prebuilt plugin library instead:
sudo cp libnvinfer_plugin.so.7.2.2 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.2.3

sudo ldconfig
/sbin/ldconfig.real: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8 is not a symbolic link

/sbin/ldconfig.real: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8 is not a symbolic link

/sbin/ldconfig.real: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8 is not a symbolic link

/sbin/ldconfig.real: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8 is not a symbolic link

/sbin/ldconfig.real: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8 is not a symbolic link

/sbin/ldconfig.real: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8 is not a symbolic link

/sbin/ldconfig.real: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.8 is not a symbolic link
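As an aside, the "is not a symbolic link" warnings usually mean the fully-versioned cuDNN files were copied directly under the SONAME names, whereas ldconfig expects each `.so.8` name to be a symlink pointing at the versioned file. A minimal sketch of the expected layout, demonstrated in a throwaway directory (the `8.0.5` patch version is an assumption; check the real file versions with `ls -l` first):

```shell
# Recreate the link layout ldconfig expects, in a scratch directory.
mkdir -p /tmp/cudnn_layout && cd /tmp/cudnn_layout

touch libcudnn.so.8.0.5                  # stand-in for the real, fully-versioned library
ln -sf libcudnn.so.8.0.5 libcudnn.so.8   # SONAME symlink -> versioned file
ln -sf libcudnn.so.8 libcudnn.so         # linker-name symlink -> SONAME

ls -l libcudnn.so*                       # .so and .so.8 should both show "->"
```

On the real system the same pattern would be applied (with sudo) to each warned-about library under /usr/local/cuda-10.1/targets/x86_64-linux/lib, renaming the regular `.so.8` file to its full version first and then creating the symlink, followed by `sudo ldconfig`.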


From your log, there is an error: “ERRORCannot find TensorRT library.” Did you set up TensorRT successfully? Please share the output of “dpkg -l | grep tensorrt”.
Please refer to the TensorRT getting-started page: https://developer.nvidia.com/tensorrt-getting-started

Sorry for the late reply. Regarding “ERRORCannot find TensorRT library.”, please see the earlier CMake line “Could NOT find TENSORRT (missing: TENSORRT_LIBRARY)”.

Are you using a dGPU? TensorRT 8.2.3-1+cuda11.4 is for DeepStream 6.0. Please remove the incompatible components and install the components according to the documentation; here is DeepStream 6.0's doc: Quickstart Guide — DeepStream 6.1.1 Release documentation
Or you can use the Docker image, which has all components installed; here is the link: Quickstart Guide — DeepStream 6.1.1 Release documentation

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.