Unable to run DeepStream test app on Jetson Xavier using image `nvcr.io/nvidia/deepstream-l4t:6.0-base`

I’m trying to run deepstream-preprocess-test on Jetson Xavier NX using the image nvcr.io/nvidia/deepstream-l4t:6.0-base. I get the following errors (excerpt only):

0:00:00.099923249  3110   0x55a9ddae00 INFO      GST_PLUGIN_LOADING gstplugin.c:901:_priv_gst_plugin_load_file_for_registry: plugin "/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_multistreamtiler.so" loaded
0:00:00.100000722  3110   0x55a9ddae00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "nvmultistreamtiler" named "nvtiler"
0:00:00.100802013  3110   0x55a9ddae00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstBaseTransform@0x55aa0a83f0> adding pad 'sink'
0:00:00.100857822  3110   0x55a9ddae00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstBaseTransform@0x55aa0a83f0> adding pad 'src'
0:00:00.102135950  3110   0x55a9ddae00 INFO      GST_PLUGIN_LOADING gstplugin.c:901:_priv_gst_plugin_load_file_for_registry: plugin "/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libgstnvvideoconvert.so" loaded
0:00:00.102263952  3110   0x55a9ddae00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "nvvideoconvert" named "nvvideo-converter"
0:00:00.102657685  3110   0x55a9ddae00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstBaseTransform@0x55aa0ab2f0> adding pad 'sink'
0:00:00.102714966  3110   0x55a9ddae00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstBaseTransform@0x55aa0ab2f0> adding pad 'src'
0:00:00.102790647  3110   0x55a9ddae00 WARN     GST_ELEMENT_FACTORY gstelementfactory.c:456:gst_element_factory_make: no such element factory "nvdsosd"!
0:00:00.103687554  3110   0x55a9ddae00 INFO      GST_PLUGIN_LOADING gstplugin.c:901:_priv_gst_plugin_load_file_for_registry: plugin "/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvegltransform.so" loaded
0:00:00.103738915  3110   0x55a9ddae00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "nvegltransform" named "nvegl-transform"
0:00:00.104048423  3110   0x55a9ddae00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstBaseTransform@0x55aa0b0130> adding pad 'sink'
0:00:00.104100968  3110   0x55a9ddae00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstBaseTransform@0x55aa0b0130> adding pad 'src'
0:00:00.104138568  3110   0x55a9ddae00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "fakesink" named "nvvideo-renderer"
0:00:00.104519405  3110   0x55a9ddae00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstBaseSink@0x55aa0a6a40> adding pad 'sink'

The plugins can’t be created. Attached are the resources to reproduce this.
deepstream_test_apps.tar.gz (6.8 MB)

Steps to reproduce

  1. docker run --gpus all -it --rm nvcr.io/nvidia/deepstream-l4t:6.0-base
  2. Install the required libraries:
apt-get install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-doc gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio
  3. Modify the Makefile and model paths so they point to the correct locations
  4. Build, then run:
GST_DEBUG="*:4" ./deepstream-preprocess-test config_preprocess.txt config_infer.txt file:///root/sample_720p.h264

I tried this code on x86_64 and it worked fine.

Environment
HW: Jetson Xavier NX
Package: L4T 32.5.0 [ JetPack 4.5 ]
CUDA: 10.2.89
Architecture: arm64
Issue Type: Question

head /etc/nv_tegra_release
# R32 (release), REVISION: 5.0, GCID: 25531747, BOARD: t186ref, EABI: aarch64, DATE: Fri Jan 15 23:21:05 UTC 2021

  1. Have you installed JetPack 4.5 correctly on the host NX board? CUDA and TensorRT must be installed on the host before you start the docker container.

  2. Please make sure you have installed nvidia-container correctly before you start the docker container.

Run "sudo apt update"

Run "sudo apt install nvidia-container"

Run "sudo service docker restart"

  3. Please use the docker command recommended in Your First Jetson Container | NVIDIA Developer to start your container.
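
As a quick sanity check before starting the container, you can ask Docker whether the nvidia runtime is actually available. This is a minimal sketch; it assumes nvidia-container registers the runtime in /etc/docker/daemon.json, which is the usual location but may differ on your setup:

```shell
# Confirm Docker knows about the "nvidia" runtime on the Jetson host.
# nvidia-container normally registers it in /etc/docker/daemon.json.
if [ -f /etc/docker/daemon.json ]; then
    if grep -q '"nvidia"' /etc/docker/daemon.json; then
        echo "nvidia runtime registered"
    else
        echo "nvidia runtime NOT registered"
    fi
else
    echo "/etc/docker/daemon.json not found"
fi
```

If the runtime is registered, `docker run --runtime nvidia ...` (as recommended for Jetson) should be able to mount the host’s CUDA and TensorRT libraries into the container.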

There is no error in what you posted here.

Please check the warning log:

WARN     GST_ELEMENT_FACTORY gstelementfactory.c:456:gst_element_factory_make: no such element factory "nvdsosd"!
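
When gst_element_factory_make reports “no such element factory”, it can help to ask GStreamer directly whether the element is registered, and to rule out a stale plugin registry cache. A minimal sketch (the plugin directory is the one used later in this thread; the cache path is GStreamer’s default):

```shell
# Ask GStreamer whether the element is registered at all;
# gst-inspect-1.0 exits non-zero when the element is unknown.
if command -v gst-inspect-1.0 >/dev/null 2>&1; then
    gst-inspect-1.0 nvdsosd >/dev/null 2>&1 \
        && echo "nvdsosd found" \
        || echo "nvdsosd MISSING"
fi

# List the DeepStream plugin libraries shipped in the container.
ls /opt/nvidia/deepstream/deepstream/lib/gst-plugins/ 2>/dev/null

# If the library is present but the element still is not found, a stale
# registry cache is a common cause; removing it forces a rescan next run.
rm -rf "$HOME/.cache/gstreamer-1.0"
```

If the library exists under gst-plugins/ but gst-inspect-1.0 still cannot find the element, running ldd on the library (as later replies here do) can reveal unresolved dependencies.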

The following is the full log

root@e289f501b83f:~/deepstream-preprocess-test# GST_DEBUG="*:4" ./deepstream-preprocess-test config_preprocess.txt config_infer.txt file:///root/sample_720p.h264 
0:00:00.000180610    29   0x55be701e00 INFO                GST_INIT gst.c:586:init_pre: Initializing GStreamer Core Library version 1.14.5
0:00:00.000279747    29   0x55be701e00 INFO                GST_INIT gst.c:587:init_pre: Using library installed in /usr/lib/aarch64-linux-gnu
0:00:00.000315331    29   0x55be701e00 INFO                GST_INIT gst.c:607:init_pre: Linux e289f501b83f 4.9.201-tegra #1 SMP PREEMPT Wed Mar 17 17:10:20 CST 2021 aarch64
0:00:00.000915657    29   0x55be701e00 INFO                GST_INIT gstmessage.c:127:_priv_gst_message_initialize: init messages
0:00:00.002507768    29   0x55be701e00 INFO                GST_INIT gstcontext.c:84:_priv_gst_context_initialize: init contexts
0:00:00.003083517    29   0x55be701e00 INFO      GST_PLUGIN_LOADING gstplugin.c:317:_priv_gst_plugin_initialize: registering 0 static plugins
0:00:00.003359744    29   0x55be701e00 INFO      GST_PLUGIN_LOADING gstplugin.c:225:gst_plugin_register_static: registered static plugin "staticelements"
0:00:00.003421377    29   0x55be701e00 INFO      GST_PLUGIN_LOADING gstplugin.c:227:gst_plugin_register_static: added static plugin "staticelements", result: 1
0:00:00.003494497    29   0x55be701e00 INFO            GST_REGISTRY gstregistry.c:1727:ensure_current_registry: reading registry cache: /root/.cache/gstreamer-1.0/registry.aarch64.bin
0:00:00.058329833    29   0x55be701e00 INFO            GST_REGISTRY gstregistrybinary.c:621:priv_gst_registry_binary_read_cache: loaded /root/.cache/gstreamer-1.0/registry.aarch64.bin in 0.054746 seconds
0:00:00.058487691    29   0x55be701e00 INFO            GST_REGISTRY gstregistry.c:1583:scan_and_update_registry: Validating plugins from registry cache: /root/.cache/gstreamer-1.0/registry.aarch64.bin
0:00:00.063883582    29   0x55be701e00 INFO            GST_REGISTRY gstregistry.c:1685:scan_and_update_registry: Registry cache has not changed
0:00:00.063930590    29   0x55be701e00 INFO            GST_REGISTRY gstregistry.c:1762:ensure_current_registry: registry reading and updating done, result = 1
0:00:00.063953407    29   0x55be701e00 INFO                GST_INIT gst.c:807:init_post: GLib runtime version: 2.56.4
0:00:00.063972415    29   0x55be701e00 INFO                GST_INIT gst.c:809:init_post: GLib headers version: 2.56.4
0:00:00.063985503    29   0x55be701e00 INFO                GST_INIT gst.c:810:init_post: initialized GStreamer successfully
0:00:00.064063776    29   0x55be701e00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "pipeline" named "preprocess-test-pipeline"
0:00:00.088598600    29   0x55be701e00 INFO      GST_PLUGIN_LOADING gstplugin.c:901:_priv_gst_plugin_load_file_for_registry: plugin "/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_multistream.so" loaded
0:00:00.088660553    29   0x55be701e00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "nvstreammux" named "stream-muxer"
0:00:00.089379312    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstNvStreamMux@0x55be9a6090> adding pad 'src'
0:00:00.089601874    29   0x55be701e00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "bin" named "source-bin-00"
0:00:00.091356098    29   0x55be701e00 INFO      GST_PLUGIN_LOADING gstplugin.c:901:_priv_gst_plugin_load_file_for_registry: plugin "/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstplayback.so" loaded
0:00:00.091395107    29   0x55be701e00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "uridecodebin" named "uri-decode-bin"
0:00:00.092228331    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<source-bin-00> adding pad 'src'
0:00:00.092314315    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:917:gst_element_get_static_pad: no such pad 'sink_0' in element "stream-muxer"
0:00:00.092393292    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<stream-muxer> adding pad 'sink_0'
0:00:00.092436237    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:920:gst_element_get_static_pad: found pad source-bin-00:src
0:00:00.092479853    29   0x55be701e00 INFO                GST_PADS gstpad.c:2378:gst_pad_link_prepare: trying to link source-bin-00:src and stream-muxer:sink_0
0:00:00.092549390    29   0x55be701e00 INFO                GST_PADS gstpad.c:4232:gst_pad_peer_query:<src:proxypad0> pad has no peer
0:00:00.092770832    29   0x55be701e00 INFO                GST_PADS gstpad.c:4232:gst_pad_peer_query:<stream-muxer:src> pad has no peer
0:00:00.092883377    29   0x55be701e00 INFO                GST_PADS gstpad.c:2586:gst_pad_link_full: linked source-bin-00:src and stream-muxer:sink_0, successful
0:00:00.092905553    29   0x55be701e00 INFO               GST_EVENT gstevent.c:1517:gst_event_new_reconfigure: creating reconfigure event
0:00:00.092923377    29   0x55be701e00 INFO               GST_EVENT gstpad.c:5808:gst_pad_send_event_unchecked:<source-bin-00:src> Received event on flushing pad. Discarding
0:00:00.092965074    29   0x55be701e00 WARN     GST_ELEMENT_FACTORY gstelementfactory.c:456:gst_element_factory_make: no such element factory "nvdspreprocess"!
0:00:00.092986866    29   0x55be701e00 WARN     GST_ELEMENT_FACTORY gstelementfactory.c:456:gst_element_factory_make: no such element factory "nvinfer"!
0:00:00.094174333    29   0x55be701e00 INFO      GST_PLUGIN_LOADING gstplugin.c:901:_priv_gst_plugin_load_file_for_registry: plugin "/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstcoreelements.so" loaded
0:00:00.094243838    29   0x55be701e00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "queue" named "queue1"
0:00:00.094575617    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstQueue@0x55be9bc080> adding pad 'sink'
0:00:00.094638369    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstQueue@0x55be9bc080> adding pad 'src'
0:00:00.094712546    29   0x55be701e00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "queue" named "queue2"
0:00:00.094808643    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstQueue@0x55be9bc380> adding pad 'sink'
0:00:00.094864836    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstQueue@0x55be9bc380> adding pad 'src'
0:00:00.094935300    29   0x55be701e00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "queue" named "queue3"
0:00:00.094992997    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstQueue@0x55be9bc680> adding pad 'sink'
0:00:00.095045253    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstQueue@0x55be9bc680> adding pad 'src'
0:00:00.095122278    29   0x55be701e00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "queue" named "queue4"
0:00:00.095179527    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstQueue@0x55be9bc980> adding pad 'sink'
0:00:00.095231111    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstQueue@0x55be9bc980> adding pad 'src'
0:00:00.095279208    29   0x55be701e00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "queue" named "queue5"
0:00:00.095389385    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstQueue@0x55be9bcc80> adding pad 'sink'
0:00:00.095444873    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstQueue@0x55be9bcc80> adding pad 'src'
0:00:00.095510282    29   0x55be701e00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "queue" named "queue6"
0:00:00.095582218    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstQueue@0x55be9bcf80> adding pad 'sink'
0:00:00.095664331    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstQueue@0x55be9bcf80> adding pad 'src'
0:00:00.096922103    29   0x55be701e00 INFO      GST_PLUGIN_LOADING gstplugin.c:901:_priv_gst_plugin_load_file_for_registry: plugin "/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_multistreamtiler.so" loaded
0:00:00.096957047    29   0x55be701e00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "nvmultistreamtiler" named "nvtiler"
0:00:00.097341211    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstBaseTransform@0x55be9cee90> adding pad 'sink'
0:00:00.097413180    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstBaseTransform@0x55be9cee90> adding pad 'src'
0:00:00.098495782    29   0x55be701e00 INFO      GST_PLUGIN_LOADING gstplugin.c:901:_priv_gst_plugin_load_file_for_registry: plugin "/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libgstnvvideoconvert.so" loaded
0:00:00.098534342    29   0x55be701e00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "nvvideoconvert" named "nvvideo-converter"
0:00:00.098975403    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstBaseTransform@0x55be9d1d70> adding pad 'sink'
0:00:00.099032075    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstBaseTransform@0x55be9d1d70> adding pad 'src'
0:00:00.099125324    29   0x55be701e00 WARN     GST_ELEMENT_FACTORY gstelementfactory.c:456:gst_element_factory_make: no such element factory "nvdsosd"!
0:00:00.099799250    29   0x55be701e00 INFO      GST_PLUGIN_LOADING gstplugin.c:901:_priv_gst_plugin_load_file_for_registry: plugin "/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvegltransform.so" loaded
0:00:00.099839187    29   0x55be701e00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "nvegltransform" named "nvegl-transform"
0:00:00.100073909    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstBaseTransform@0x55be9d8130> adding pad 'sink'
0:00:00.100120565    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstBaseTransform@0x55be9d8130> adding pad 'src'
0:00:00.100155478    29   0x55be701e00 INFO     GST_ELEMENT_FACTORY gstelementfactory.c:359:gst_element_factory_create: creating element "fakesink" named "nvvideo-renderer"
0:00:00.100498009    29   0x55be701e00 INFO        GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstBaseSink@0x55be9cd120> adding pad 'sink'

TensorRT and CUDA were installed correctly:

root@e289f501b83f:~/deepstream-preprocess-test# /usr/local/cuda/bin/deviceQuery 
/usr/local/cuda/bin/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Xavier"
  CUDA Driver Version / Runtime Version          10.2 / 10.2
  CUDA Capability Major/Minor version number:    7.2
  Total amount of global memory:                 7774 MBytes (8151154688 bytes)
  ( 6) Multiprocessors, ( 64) CUDA Cores/MP:     384 CUDA Cores
  GPU Max Clock rate:                            1109 MHz (1.11 GHz)
  Memory Clock rate:                             1109 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 524288 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS
root@e289f501b83f:~/deepstream-preprocess-test# /usr/src/tensorrt/
bin/     data/    samples/ 
root@e289f501b83f:~/deepstream-preprocess-test# /usr/src/tensorrt/bin/trtexec 
&&&& RUNNING TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec
...
aaeon@aaeon-desktop:~$ sudo apt install nvidia-container
Reading package lists... Done
Building dependency tree       
Reading state information... Done
E: Unable to locate package nvidia-container

Are you sure it’s the right name?

When I changed from --gpus all to --runtime nvidia, I still got the same error, e.g. no such element factory "nvdsosd"!

Do I need to change docker image to nvcr.io/nvidia/l4t-base:r32.4.3 too? What’s the difference between this image and nvcr.io/nvidia/deepstream-l4t:6.0-base?

I’m using the same docker command on a Jetson Nano, and everything worked.

Have you run “sudo apt update” before installing nvidia-container?

There is no DeepStream release for JetPack 4.5. Please upgrade to at least JetPack 4.5.1 with DeepStream 5.1 GA, or better, to the latest JetPack 5.0.2 with DeepStream 6.1.1.

There is no “nvdspreprocess” plugin in DeepStream 5.1. Why do you run an application that is not compatible with your JetPack and DeepStream versions?

Please follow the platform compatibility table: Quickstart Guide — DeepStream 6.3 Release documentation


Thanks for your response. I checked the link you sent; I now need JetPack 4.6. I’m trying to install it from the Ubuntu command line. According to How to Install JetPack :: NVIDIA JetPack Documentation, we can do

sudo apt update
sudo apt install nvidia-jetpack
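
These two commands only succeed if the NVIDIA L4T apt repository is configured on the device. A hedged check (the file name below is the standard one on flashed L4T images, but may vary):

```shell
# Verify the NVIDIA L4T apt source is present; "Unable to locate package
# nvidia-jetpack" usually means this repository is missing or not updated.
cat /etc/apt/sources.list.d/nvidia-l4t-apt-source.list 2>/dev/null \
    || echo "NVIDIA L4T apt source not configured"
```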

In my case, after sudo apt update, I got

aaeon@aaeon-desktop:~$ sudo apt install nvidia-jetpack
Reading package lists... Done
Building dependency tree       
Reading state information... Done
E: Unable to locate package nvidia-jetpack

Do you know the fix? Also, can we specify the version we want, e.g. nvidia-jetpack-4.6?

Thanks

For JetPack4.6, please follow Quickstart Guide — DeepStream 6.0 Release documentation

For old JetPack versions, please follow Your First Jetson Container | NVIDIA Developer


I followed the Quickstart Guide — DeepStream 6.0 Release documentation, specifically these sections:

  1. Install latest NVIDIA BSP packages
  2. Check the installation:
aaeon@aaeon-desktop:~$ sudo apt-cache show nvidia-jetpack
[sudo] password for aaeon: 
Package: nvidia-jetpack
Version: 4.6-b199
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-cuda (= 4.6-b199), nvidia-opencv (= 4.6-b199), nvidia-cudnn8 (= 4.6-b199), nvidia-tensorrt (= 4.6-b199), nvidia-visionworks (= 4.6-b199), nvidia-container (= 4.6-b199), nvidia-vpi (= 4.6-b199), nvidia-l4t-jetson-multimedia-api (>> 32.6-0), nvidia-l4t-jetson-multimedia-api (<< 32.7-0)
Homepage: http://developer.nvidia.com/jetson
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_4.6-b199_arm64.deb
Size: 29376
SHA256: d67b85293cade45d81dcafebd46c70a97a0b0d1379ca48aaa79d70d8ba99ddf8
SHA1: 74d9cbdfe9af52baa667e321749b9771101fc6de
MD5sum: cd1b3a0b651cd214b15fa76f6b5af2cd
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Package: nvidia-jetpack
Version: 4.6-b197
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-cuda (= 4.6-b197), nvidia-opencv (= 4.6-b197), nvidia-cudnn8 (= 4.6-b197), nvidia-tensorrt (= 4.6-b197), nvidia-visionworks (= 4.6-b197), nvidia-container (= 4.6-b197), nvidia-vpi (= 4.6-b197), nvidia-l4t-jetson-multimedia-api (>> 32.6-0), nvidia-l4t-jetson-multimedia-api (<< 32.7-0)
Homepage: http://developer.nvidia.com/jetson
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_4.6-b197_arm64.deb
Size: 29372
SHA256: acec83ad0c1ef05caf9b8ccc6a975c4fb2a7f7830cbe63bbcf7b196a6c1f350e
SHA1: 3e11456cf0ec6b3a40d81b80ca1e14cebafa65ff
MD5sum: 72b2b7b280793bd4abdabe0d38b08535
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

JetPack 4.6 has been installed, so I tried:

  1. docker run --runtime nvidia -it --name test nvcr.io/nvidia/deepstream-l4t:6.0-base
  2. Copy the files into the docker container under /root
  3. Build with CUDA_VER=10.2 make
  4. Run GST_DEBUG="*:4" ./deepstream-preprocess-test config_preprocess.txt config_infer.txt file:///root/sample_720p.h264

I still got the same result

...
0:00:00.119870927   725   0x55bbf37e00 WARN     GST_ELEMENT_FACTORY gstelementfactory.c:456:gst_element_factory_make: no such element factory "nvinfer"!
0:00:00.128628358   725   0x55bbf37e00 WARN     GST_ELEMENT_FACTORY gstelementfactory.c:456:gst_element_factory_make: no such element factory "nvdsosd"!
...

Is CUDA installed? Can you run “ldd /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_osd.so”?

Outside the docker container, /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_osd.so does not exist. Inside the docker container, I got:

	linux-vdso.so.1 (0x0000007f919af000)
	libgstbase-1.0.so.0 => /usr/lib/aarch64-linux-gnu/libgstbase-1.0.so.0 (0x0000007f91877000)
	libgstreamer-1.0.so.0 => /usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0 (0x0000007f91747000)
	libglib-2.0.so.0 => /usr/lib/aarch64-linux-gnu/libglib-2.0.so.0 (0x0000007f91638000)
	libgobject-2.0.so.0 => /usr/lib/aarch64-linux-gnu/libgobject-2.0.so.0 (0x0000007f915db000)
	libnvds_osd.so => /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_osd.so (0x0000007f9155a000)
	libnvdsgst_meta.so => /opt/nvidia/deepstream/deepstream-6.0/lib/libnvdsgst_meta.so (0x0000007f91545000)
	libdl.so.2 => /lib/aarch64-linux-gnu/libdl.so.2 (0x0000007f91530000)
	libpthread.so.0 => /lib/aarch64-linux-gnu/libpthread.so.0 (0x0000007f91504000)
	librt.so.1 => /lib/aarch64-linux-gnu/librt.so.1 (0x0000007f914ed000)
	libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000007f91394000)
	/lib/ld-linux-aarch64.so.1 (0x0000007f91983000)
	libgmodule-2.0.so.0 => /usr/lib/aarch64-linux-gnu/libgmodule-2.0.so.0 (0x0000007f91380000)
	libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000007f912c7000)
	libpcre.so.3 => /lib/aarch64-linux-gnu/libpcre.so.3 (0x0000007f91255000)
	libffi.so.6 => /usr/lib/aarch64-linux-gnu/libffi.so.6 (0x0000007f9123d000)
	libcairo.so.2 => /usr/lib/aarch64-linux-gnu/libcairo.so.2 (0x0000007f91143000)
	libpango-1.0.so.0 => /usr/lib/aarch64-linux-gnu/libpango-1.0.so.0 (0x0000007f910ec000)
	libpangocairo-1.0.so.0 => /usr/lib/aarch64-linux-gnu/libpangocairo-1.0.so.0 (0x0000007f910d0000)
	libnvbufsurface.so.1.0.0 => /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so.1.0.0 (0x0000007f91052000)
	libnvbufsurftransform.so.1.0.0 => /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so.1.0.0 (0x0000007f8f0fa000)
	libnvds_utils.so => /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_utils.so (0x0000007f8eb88000)
	libcuda.so.1 => /usr/lib/aarch64-linux-gnu/libcuda.so.1 (0x0000007f8dc45000)
	libstdc++.so.6 => /usr/lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000007f8dab1000)
	libnvds_meta.so => /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_meta.so (0x0000007f8da9a000)
	libpixman-1.so.0 => /usr/lib/aarch64-linux-gnu/libpixman-1.so.0 (0x0000007f8da36000)
	libfontconfig.so.1 => /usr/lib/aarch64-linux-gnu/libfontconfig.so.1 (0x0000007f8d9e6000)
	libfreetype.so.6 => /usr/lib/aarch64-linux-gnu/libfreetype.so.6 (0x0000007f8d93e000)
	libpng16.so.16 => /usr/lib/aarch64-linux-gnu/libpng16.so.16 (0x0000007f8d903000)
	libxcb-shm.so.0 => /usr/lib/aarch64-linux-gnu/libxcb-shm.so.0 (0x0000007f8d8f0000)
	libxcb.so.1 => /usr/lib/aarch64-linux-gnu/libxcb.so.1 (0x0000007f8d8c0000)
	libxcb-render.so.0 => /usr/lib/aarch64-linux-gnu/libxcb-render.so.0 (0x0000007f8d8a5000)
	libXrender.so.1 => /usr/lib/aarch64-linux-gnu/libXrender.so.1 (0x0000007f8d88c000)
	libX11.so.6 => /usr/lib/aarch64-linux-gnu/libX11.so.6 (0x0000007f8d762000)
	libXext.so.6 => /usr/lib/aarch64-linux-gnu/libXext.so.6 (0x0000007f8d742000)
	libz.so.1 => /lib/aarch64-linux-gnu/libz.so.1 (0x0000007f8d715000)
	libthai.so.0 => /usr/lib/aarch64-linux-gnu/libthai.so.0 (0x0000007f8d6fd000)
	libpangoft2-1.0.so.0 => /usr/lib/aarch64-linux-gnu/libpangoft2-1.0.so.0 (0x0000007f8d6da000)
	libnvrm.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm.so (0x0000007f8d697000)
	libEGL.so.1 => /usr/lib/aarch64-linux-gnu/libEGL.so.1 (0x0000007f8d676000)
	libnvos.so => /usr/lib/aarch64-linux-gnu/tegra/libnvos.so (0x0000007f8d658000)
	libnvbuf_fdmap.so.1.0.0 => /usr/lib/aarch64-linux-gnu/tegra/libnvbuf_fdmap.so.1.0.0 (0x0000007f8d645000)
	libnvrm_graphics.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_graphics.so (0x0000007f8d625000)
	libnvddk_vic.so => /usr/lib/aarch64-linux-gnu/tegra/libnvddk_vic.so (0x0000007f8d605000)
	libnvddk_2d_v2.so => /usr/lib/aarch64-linux-gnu/tegra/libnvddk_2d_v2.so (0x0000007f8d5e0000)
	libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000007f8d5bc000)
	libnvinfer.so.8 => not found
	libnvrm_gpu.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_gpu.so (0x0000007f8d578000)
	libnvidia-fatbinaryloader.so.440.18 => /usr/lib/aarch64-linux-gnu/libnvidia-fatbinaryloader.so.440.18 (0x0000007f8d507000)
	libexpat.so.1 => /lib/aarch64-linux-gnu/libexpat.so.1 (0x0000007f8d4c8000)
	libXau.so.6 => /usr/lib/aarch64-linux-gnu/libXau.so.6 (0x0000007f8d4b5000)
	libXdmcp.so.6 => /usr/lib/aarch64-linux-gnu/libXdmcp.so.6 (0x0000007f8d4a0000)
	libdatrie.so.1 => /usr/lib/aarch64-linux-gnu/libdatrie.so.1 (0x0000007f8d48a000)
	libharfbuzz.so.0 => /usr/lib/aarch64-linux-gnu/libharfbuzz.so.0 (0x0000007f8d3ea000)
	libGLdispatch.so.0 => /usr/lib/aarch64-linux-gnu/libGLdispatch.so.0 (0x0000007f8d2bc000)
	libbsd.so.0 => /lib/aarch64-linux-gnu/libbsd.so.0 (0x0000007f8d29a000)
	libgraphite2.so.3 => /usr/lib/aarch64-linux-gnu/libgraphite2.so.3 (0x0000007f8d269000)
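
Note the `libnvinfer.so.8 => not found` line above: the plugin links against TensorRT 8, and if the container cannot resolve that library, elements such as nvinfer and nvdsosd will fail to register. A sketch for checking what the host actually provides (the CSV directory is an assumption based on nvidia-container-runtime’s usual layout on Jetson):

```shell
# See which libnvinfer versions the dynamic linker knows about.
if command -v ldconfig >/dev/null 2>&1; then
    ldconfig -p | grep libnvinfer || echo "no libnvinfer registered"
fi

# On Jetson, the nvidia runtime mounts host libraries listed in CSV files
# into the container; check that the list exists on the host.
ls /etc/nvidia-container-runtime/host-files-for-container.d/ 2>/dev/null \
    || echo "no host-files-for-container.d directory"
```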

Yes, CUDA is installed (I ran this outside the docker container):

aaeon@aaeon-desktop:/usr/local/cuda/samples/1_Utilities/deviceQuery$ deviceQuery 
deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Xavier"
  CUDA Driver Version / Runtime Version          10.2 / 10.2
  CUDA Capability Major/Minor version number:    7.2
  Total amount of global memory:                 7772 MBytes (8149737472 bytes)
  ( 6) Multiprocessors, ( 64) CUDA Cores/MP:     384 CUDA Cores
  GPU Max Clock rate:                            1109 MHz (1.11 GHz)
  Memory Clock rate:                             1109 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 524288 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS

Is TensorRT installed correctly on the host? Please run “dpkg -l | grep TensorRT” on the host.

aaeon@aaeon-desktop:~$ /usr/src/tensorrt/bin/trtexec 
&&&& RUNNING TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec
=== Model Options ===
  --uff=<file>                UFF model
  --onnx=<file>               ONNX model
  --model=<file>              Caffe model (default = no model, random weights used)
  --deploy=<file>             Caffe prototxt file
  --output=<name>[,<name>]*   Output names (it can be specified multiple times); at least one output is required for UFF and Caffe
  --uffInput=<name>,X,Y,Z     Input blob name and its dimensions (X,Y,Z=C,H,W), it can be specified multiple times; at least one is required for UFF models
  --uffNHWC                   Set if inputs are in the NHWC layout instead of NCHW (use X,Y,Z=H,W,C order in --uffInput)

=== Build Options ===
  --maxBatch                  Set max batch size and build an implicit batch engine (default = 1)
  --explicitBatch             Use explicit batch sizes when building the engine (default = implicit)
  --minShapes=spec            Build with dynamic shapes using a profile with the min shapes provided
  --optShapes=spec            Build with dynamic shapes using a profile with the opt shapes provided
  --maxShapes=spec            Build with dynamic shapes using a profile with the max shapes provided
  --minShapesCalib=spec       Calibrate with dynamic shapes using a profile with the min shapes provided
  --optShapesCalib=spec       Calibrate with dynamic shapes using a profile with the opt shapes provided
  --maxShapesCalib=spec       Calibrate with dynamic shapes using a profile with the max shapes provided
                              Note: All three of min, opt and max shapes must be supplied.
                                    However, if only opt shapes is supplied then it will be expanded so
                                    that min shapes and max shapes are set to the same values as opt shapes.
                                    In addition, use of dynamic shapes implies explicit batch.
                                    Input names can be wrapped with escaped single quotes (ex: \'Input:0\').
                              Example input shapes spec: input0:1x3x256x256,input1:1x3x128x128
                              Each input shape is supplied as a key-value pair where key is the input name and
                              value is the dimensions (including the batch dimension) to be used for that input.
                              Each key-value pair has the key and value separated using a colon (:).
                              Multiple input shapes can be provided via comma-separated key-value pairs.
  --inputIOFormats=spec       Type and formats of the input tensors (default = all inputs in fp32:chw)
                              Note: If this option is specified, please make sure that all inputs are in the same order 
                                     as network inputs ID.
  --outputIOFormats=spec      Type and formats of the output tensors (default = all outputs in fp32:chw)
                              Note: If this option is specified, please make sure that all outputs are in the same order 
                                     as network outputs ID.
                              IO Formats: spec  ::= IOfmt[","spec]
                                          IOfmt ::= type:fmt
                                          type  ::= "fp32"|"fp16"|"int32"|"int8"
                                          fmt   ::= ("chw"|"chw2"|"chw4"|"hwc8"|"chw16"|"chw32")["+"fmt]
  --workspace=N               Set workspace size in megabytes (default = 16)
  --noBuilderCache            Disable timing cache in builder (default is to enable timing cache)
  --nvtxMode=[default|verbose|none] Specify NVTX annotation verbosity
  --minTiming=M               Set the minimum number of iterations used in kernel selection (default = 1)
  --avgTiming=M               Set the number of times averaged in each iteration for kernel selection (default = 8)
  --noTF32                    Disable tf32 precision (default is to enable tf32, in addition to fp32)
  --fp16                      Enable fp16 precision, in addition to fp32 (default = disabled)
  --int8                      Enable int8 precision, in addition to fp32 (default = disabled)
  --best                      Enable all precisions to achieve the best performance (default = disabled)
  --calib=<file>              Read INT8 calibration cache file
  --safe                      Only test the functionality available in safety restricted flows
  --saveEngine=<file>         Save the serialized engine
  --loadEngine=<file>         Load a serialized engine

=== Inference Options ===
  --batch=N                   Set batch size for implicit batch engines (default = 1)
  --shapes=spec               Set input shapes for dynamic shapes inference inputs.
                              Note: Use of dynamic shapes implies explicit batch.
                                    Input names can be wrapped with escaped single quotes (ex: \'Input:0\').
                              Example input shapes spec: input0:1x3x256x256, input1:1x3x128x128
                              Each input shape is supplied as a key-value pair where key is the input name and
                              value is the dimensions (including the batch dimension) to be used for that input.
                              Each key-value pair has the key and value separated using a colon (:).
                              Multiple input shapes can be provided via comma-separated key-value pairs.
  --loadInputs=spec           Load input values from files (default = generate random inputs). Input names can be wrapped with single quotes (ex: 'Input:0')
                              Input values spec ::= Ival[","spec]
                                           Ival ::= name":"file
  --iterations=N              Run at least N inference iterations (default = 10)
  --warmUp=N                  Run for N milliseconds to warmup before measuring performance (default = 200)
  --duration=N                Run performance measurements for at least N seconds wallclock time (default = 3)
  --sleepTime=N               Delay inference start with a gap of N milliseconds between launch and compute (default = 0)
  --streams=N                 Instantiate N engines to use concurrently (default = 1)
  --exposeDMA                 Serialize DMA transfers to and from device. (default = disabled)
  --useSpinWait               Actively synchronize on GPU events. This option may decrease synchronization time but increase CPU usage and power (default = disabled)
  --threads                   Enable multithreading to drive engines with independent threads (default = disabled)
  --useCudaGraph              Use cuda graph to capture engine execution and then launch inference (default = disabled)
  --buildOnly                 Skip inference perf measurement (default = disabled)

=== Build and Inference Batch Options ===
                              When using implicit batch, the max batch size of the engine, if not given, 
                              is set to the inference batch size;
                              when using explicit batch, if shapes are specified only for inference, they 
                              will be used also as min/opt/max in the build profile; if shapes are 
                              specified only for the build, the opt shapes will be used also for inference;
                              if both are specified, they must be compatible; and if explicit batch is 
                              enabled but neither is specified, the model must provide complete static
                              dimensions, including batch size, for all inputs

=== Reporting Options ===
  --verbose                   Use verbose logging (default = false)
  --avgRuns=N                 Report performance measurements averaged over N consecutive iterations (default = 10)
  --percentile=P              Report performance for the P percentage (0<=P<=100, 0 representing max perf, and 100 representing min perf; (default = 99%)
  --dumpOutput                Print the output tensor(s) of the last inference iteration (default = disabled)
  --dumpProfile               Print profile information per layer (default = disabled)
  --exportTimes=<file>        Write the timing results in a json file (default = disabled)
  --exportOutput=<file>       Write the output tensors to a json file (default = disabled)
  --exportProfile=<file>      Write the profile information per layer in a json file (default = disabled)

=== System Options ===
  --device=N                  Select cuda device N (default = 0)
  --useDLACore=N              Select DLA core N for layers that support DLA (default = none)
  --allowGPUFallback          When DLA is enabled, allow GPU fallback for unsupported layers (default = disabled)
  --plugins                   Plugin library (.so) to load (can be specified multiple times)

=== Help ===
  --help, -h                  Print this message
&&&& PASSED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec
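For reference, a typical invocation combining the options listed above might look like this (the model and engine file names here are hypothetical placeholders):

```shell
# Build an FP16 engine from an ONNX model and benchmark it
# (model.onnx / model_fp16.engine are placeholder names)
/usr/src/tensorrt/bin/trtexec \
    --onnx=model.onnx \
    --fp16 \
    --workspace=1024 \
    --saveEngine=model_fp16.engine
```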

Does it really matter? If we run in a docker container, is having TensorRT on the host really mandatory?

aaeon@aaeon-desktop:~/jetsonUtilities$ python jetsonInfo.py 
NVIDIA NVIDIA Jetson Xavier NX Developer Kit
 L4T 32.6.1 [ JetPack 4.6 ]
   Ubuntu 18.04.5 LTS
   Kernel Version: 4.9.253-tegra
 CUDA 10.2.89
   CUDA Architecture: 7.2
 OpenCV version: 4.1.1
   OpenCV Cuda: NO
 CUDNN: 8.0.0.180
 TensorRT: 7.1.3.0
 Vision Works: 1.6.0.501
 VPI: 1.0.12
 Vulcan: 1.2.70

On the host, I have TensorRT version 7. Does it have to be 8?

It is mandatory. Otherwise you need to install TensorRT inside the docker container.
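To see which TensorRT the container actually picks up, a quick check like this may help (a sketch, assuming the usual Jetson library path; run it inside the container):

```shell
# Inside the running container: list the TensorRT libraries that are visible
# (on Jetson, /usr/lib/aarch64-linux-gnu is the usual location for libnvinfer)
ls /usr/lib/aarch64-linux-gnu/libnvinfer.so* 2>/dev/null

# And check any TensorRT packages registered with dpkg
dpkg -l | grep -i tensorrt
```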

Which docker image are you using? Did you install JetPack with SDKManager?

I’m using nvcr.io/nvidia/deepstream-l4t:6.0-samples
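For context, on Jetson this container is typically started with the NVIDIA runtime, which is what exposes the host's CUDA/TensorRT libraries inside the container; a minimal sketch (display forwarding assumed, adjust to your setup):

```shell
# Start the DeepStream-l4t samples container with the NVIDIA runtime
# (--runtime nvidia makes the host's GPU libraries available in the container)
docker run -it --rm --net=host --runtime nvidia \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix/:/tmp/.X11-unix \
    nvcr.io/nvidia/deepstream-l4t:6.0-samples
```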

I will install TensorRT outside of docker, then.

I installed via apt, following the Quickstart Guide — DeepStream 6.0 Release documentation.

If you installed JetPack 4.6 GA correctly, the TensorRT version should be 8.0.1.

OK. I will update TensorRT to 8.0.1 and let you know. Thanks.

The Quickstart Guide — DeepStream 6.0 Release documentation (nvidia.com) says you need to install JetPack 4.6 GA with SDKManager. It is better to install with SDKManager from the beginning rather than just upgrading TensorRT.