nvidia@nvidia-desktop:~/Public/deepstream_tlt_apps$ ./deepstream-custom -c pgie_yolov3_tlt_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264 -d
Warning: ‘input-dims’ parameter has been deprecated. Use ‘infer-dims’ instead.
Now playing: pgie_yolov3_tlt_config.txt
Using winsys: x11
Opening in BLOCKING MODE
0:00:00.201417692 29126 0x5593a1b870 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: IPluginV2DynamicExt requires network without implicit batch dimension
Segmentation fault (core dumped)
My config file is the following: pgie_yolov3_tlt_config.txt (I didn't modify any of the configuration files).
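For reference, the [property] section of that file in the deepstream_tlt_apps repo looks roughly like this. This is a sketch from memory of the sample, so the paths, model key, dimensions and the parser/output names are placeholders and may differ from the actual file:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
# placeholder paths/key -- the real config points at the repo's .etlt model and label file
tlt-encoded-model=./models/yolov3/yolov3_resnet18.etlt
tlt-model-key=nvidia_tlt
labelfile-path=./models/yolov3/yolov3_labels.txt
# the sample at the time still used input-dims, hence the deprecation warning above; newer DeepStream prefers infer-dims
input-dims=3;384;1248;0
uff-input-blob-name=Input
batch-size=1
network-mode=0
num-detected-classes=4
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomYOLOV3TLT
custom-lib-path=./post_processor/libnvds_infercustomparser_tlt.so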
Can you double-check your command and log? See above: why is it playing pgie_frcnn_tlt_config.txt?
But in your command it is pgie_yolov3_tlt_config.txt.
When I built the Jetson TensorRT OSS plugin, the command was:
/usr/local/bin/cmake .. -DGPU_ARCHS=62 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
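For completeness, after that cmake step the plugin itself gets built with the usual make target (a sketch of the follow-up step, not a verbatim transcript; the exact output filename/version will differ):

make nvinfer_plugin -j4
# the rebuilt library ends up under build/out/, e.g. libnvinfer_plugin.so.7.x.y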
nvidia@nvidia-desktop:~$ dpkg -l |grep cuda
ii cuda-command-line-tools-10-2 10.2.89-1 arm64 CUDA command-line tools
ii cuda-compiler-10-2 10.2.89-1 arm64 CUDA compiler
ii cuda-cudart-10-2 10.2.89-1 arm64 CUDA Runtime native Libraries
ii cuda-cudart-dev-10-2 10.2.89-1 arm64 CUDA Runtime native dev links, headers
ii cuda-cufft-10-2 10.2.89-1 arm64 CUFFT native runtime libraries
ii cuda-cufft-dev-10-2 10.2.89-1 arm64 CUFFT native dev links, headers
ii cuda-cuobjdump-10-2 10.2.89-1 arm64 CUDA cuobjdump
ii cuda-cupti-10-2 10.2.89-1 arm64 CUDA profiling tools runtime libs.
ii cuda-cupti-dev-10-2 10.2.89-1 arm64 CUDA profiling tools interface.
ii cuda-curand-10-2 10.2.89-1 arm64 CURAND native runtime libraries
ii cuda-curand-dev-10-2 10.2.89-1 arm64 CURAND native dev links, headers
ii cuda-cusolver-10-2 10.2.89-1 arm64 CUDA solver native runtime libraries
ii cuda-cusolver-dev-10-2 10.2.89-1 arm64 CUDA solver native dev links, headers
ii cuda-cusparse-10-2 10.2.89-1 arm64 CUSPARSE native runtime libraries
ii cuda-cusparse-dev-10-2 10.2.89-1 arm64 CUSPARSE native dev links, headers
ii cuda-documentation-10-2 10.2.89-1 arm64 CUDA documentation
ii cuda-driver-dev-10-2 10.2.89-1 arm64 CUDA Driver native dev stub library
ii cuda-gdb-10-2 10.2.89-1 arm64 CUDA-GDB
ii cuda-libraries-10-2 10.2.89-1 arm64 CUDA Libraries 10.2 meta-package
ii cuda-libraries-dev-10-2 10.2.89-1 arm64 CUDA Libraries 10.2 development meta-package
ii cuda-license-10-2 10.2.89-1 arm64 CUDA licenses
ii cuda-memcheck-10-2 10.2.89-1 arm64 CUDA-MEMCHECK
ii cuda-misc-headers-10-2 10.2.89-1 arm64 CUDA miscellaneous headers
ii cuda-npp-10-2 10.2.89-1 arm64 NPP native runtime libraries
ii cuda-npp-dev-10-2 10.2.89-1 arm64 NPP native dev links, headers
ii cuda-nvcc-10-2 10.2.89-1 arm64 CUDA nvcc
ii cuda-nvdisasm-10-2 10.2.89-1 arm64 CUDA disassembler
ii cuda-nvgraph-10-2 10.2.89-1 arm64 NVGRAPH native runtime libraries
ii cuda-nvgraph-dev-10-2 10.2.89-1 arm64 NVGRAPH native dev links, headers
ii cuda-nvml-dev-10-2 10.2.89-1 arm64 NVML native dev links, headers
ii cuda-nvprof-10-2 10.2.89-1 arm64 CUDA Profiler tools
ii cuda-nvprune-10-2 10.2.89-1 arm64 CUDA nvprune
ii cuda-nvrtc-10-2 10.2.89-1 arm64 NVRTC native runtime libraries
ii cuda-nvrtc-dev-10-2 10.2.89-1 arm64 NVRTC native dev links, headers
ii cuda-nvtx-10-2 10.2.89-1 arm64 NVIDIA Tools Extension
ii cuda-samples-10-2 10.2.89-1 arm64 CUDA example applications
ii cuda-toolkit-10-2 10.2.89-1 arm64 CUDA Toolkit 10.2 meta-package
ii cuda-tools-10-2 10.2.89-1 arm64 CUDA Tools meta-package
ii graphsurgeon-tf 7.1.3-1+cuda10.2 arm64 GraphSurgeon for TensorRT package
ii libcudnn8 8.0.0.180-1+cuda10.2 arm64 cuDNN runtime libraries
ii libcudnn8-dev 8.0.0.180-1+cuda10.2 arm64 cuDNN development libraries and headers
ii libcudnn8-doc 8.0.0.180-1+cuda10.2 arm64 cuDNN documents and samples
ii libnvinfer-bin 7.1.3-1+cuda10.2 arm64 TensorRT binaries
ii libnvinfer-dev 7.1.3-1+cuda10.2 arm64 TensorRT development libraries and headers
ii libnvinfer-doc 7.1.3-1+cuda10.2 all TensorRT documentation
ii libnvinfer-plugin-dev 7.1.3-1+cuda10.2 arm64 TensorRT plugin libraries
ii libnvinfer-plugin7 7.1.3-1+cuda10.2 arm64 TensorRT plugin libraries
ii libnvinfer-samples 7.1.3-1+cuda10.2 all TensorRT samples
ii libnvinfer7 7.1.3-1+cuda10.2 arm64 TensorRT runtime libraries
ii libnvonnxparsers-dev 7.1.3-1+cuda10.2 arm64 TensorRT ONNX libraries
ii libnvonnxparsers7 7.1.3-1+cuda10.2 arm64 TensorRT ONNX libraries
ii libnvparsers-dev 7.1.3-1+cuda10.2 arm64 TensorRT parsers libraries
ii libnvparsers7 7.1.3-1+cuda10.2 arm64 TensorRT parsers libraries
ii nvidia-container-csv-cuda 10.2.89-1 arm64 Jetpack CUDA CSV file
ii nvidia-container-csv-cudnn 8.0.0.180-1+cuda10.2 arm64 Jetpack CUDNN CSV file
ii nvidia-container-csv-tensorrt 7.1.3.0-1+cuda10.2 arm64 Jetpack TensorRT CSV file
ii nvidia-l4t-cuda 32.4.3-20200625213407 arm64 NVIDIA CUDA Package
ii python-libnvinfer 7.1.3-1+cuda10.2 arm64 Python bindings for TensorRT
ii python-libnvinfer-dev 7.1.3-1+cuda10.2 arm64 Python development package for TensorRT
ii python3-libnvinfer 7.1.3-1+cuda10.2 arm64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 7.1.3-1+cuda10.2 arm64 Python 3 development package for TensorRT
ii tensorrt 7.1.3.0-1+cuda10.2 arm64 Meta package of TensorRT
ii uff-converter-tf 7.1.3-1+cuda10.2 arm64 UFF converter for TensorRT package
Thanks for the info. I found another topic, YOLO model, Convert ETLT into an Engine, which has the same error as yours.
Not sure what happened. I will check it.
I ran into the same issue; here is what I did to fix it:
I first reflashed my Jetson AGX Xavier to make sure everything installed on it was up to date (this step is optional).
Relevant system config after installation is done:
Jetpack 4.4 (CUDA 10.2)
TensorRT: 7.1.3
Deepstream: 5.0
Follow the instructions from the link shared by Morganh to build TensorRT OSS, and make sure you use release 7.0, NOT 7.1. I used 7.1 at first and YOLOv3 failed, while the other models (SSD, DSSD, RetinaNet) ran without issue. A sketch of the checkout is below.
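If it helps, getting the 7.0 sources looked roughly like this for me (treat it as a sketch: release/7.0 is my reading of "release 7.0", so double-check the branch/tag names on the TensorRT GitHub repo). Then configure and build nvinfer_plugin with the same cmake/make steps as earlier in this thread, with -DGPU_ARCHS matching your board (72 for Xavier):

git clone -b release/7.0 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive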
# back up the original file first
cd ~
mkdir libnvinfer-plugin-backups
sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 /home/[REPLACE_WITH_YOUR_USERNAME]/libnvinfer-plugin-backups/libnvinfer_plugin.so.7.1.3.bak
# replace with the .so file built in the previous step
sudo cp /home/minh/TensorRT/build/out/libnvinfer_plugin.so.7.0.0 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3
sudo ldconfig
# check
ll -sh /usr/lib/aarch64-linux-gnu/libnvinfer_plugin*
Make sure the printed output matches what Morganh showed, and pay close attention to the modification dates: the two original symlinks (libnvinfer_plugin.so & libnvinfer_plugin.so.7) and the libnvinfer_plugin_static.a file should still show Jun, while the two new files (libnvinfer_plugin.so.7.0.0 & libnvinfer_plugin.so.7.1.3) showed Sep for me.
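After the swap, re-run the command from the first post to confirm the engine now builds, for example:

./deepstream-custom -c pgie_yolov3_tlt_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264 -d

If an earlier attempt left a serialized .engine file lying around, I would delete it first so it gets regenerated against the new plugin (that part is my assumption about engine caching, not something I had to do myself).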