LPRNet technical blog post: sample DeepStream app repo not found?

Hi! I've been trying to get started with a Jetson Orin Nano for ALPR purposes. It's set up with Docker on an SSD, JetPack 6.0 + CUDA 12.2, and (as far as I can tell) all the prerequisites to run the TAO Toolkit through a container. Though it's three years old, I followed this post to get started, tried some of the training sample notebooks (as far as I could go without actually executing training), and decided to try the pre-built models first. Following the blog, that should just be LPRNet v1.0 (which I believe is built on Detectnet_v2).

However, I was wondering where the sample application GitHub repo might have gone. I understand that after a while things get moved around, become deprecated, or are taken down for any number of valid reasons. If there isn't any issue with accessing it, would anyone happen to know whether this repo was moved, or have a copy? I'd also imagine there are other suitable examples I could use instead, and would be happy to be pointed to them.

Please refer to GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream.

The LPDNet is based on the TAO Detectnet_v2 network. The LPRNet is based on the TAO LPRNet network.

Thank you for the response, and the correction! I've come across the linked GitHub repo and have in fact pulled models from it, as instructed further down the blog post, but I still wasn't sure where to find the file "lpr_test_sample" even in the linked repo. Was it something that was supposed to show up during a build process? I suspect it could very well be the same app (or a more dated version of it) as "deepstream-lpr-app", which I should build. I'll try that out when I'm home, and if it seems to work out I'll post again.

EDIT: It's an error I've seen other people run into, but I ended up hitting it myself when trying to build from the nested deepstream-lpr-app directory.

fatal error: gstnvdsmeta.h: No such file or directory
31 | # include "gstnvdsmeta.h"

(space added after "#" to avoid automatic tag formatting)

I will be trying to resolve this myself, but if there are any things to look out for (especially with my more updated JetPack and DeepStream versions), then I’d always appreciate hearing about them.

EDIT 2: I tried out, and attempted to interpret, some of the solutions in this thread. I wasn't sure which proposed solutions were applicable, and tried a few with version numbers adjusted where appropriate, but no dice. If one of them is expected to resolve this issue, though, I'm happy to revisit it.

I also consulted this thread, but as a newcomer to Linux development I'm not quite sure what was meant in the proposed solution, what I should expect, or what the provided command does. However, I did observe that gstnvdsmeta.h is located in /opt/nvidia/deepstream/deepstream-7.0/sources/includes. Running "make" with root permissions doesn't change anything, so I can only assume that the Makefile isn't pointing at the right include paths at the moment.

The current Makefile is still identical to what is included in the repo: deepstream_lpr_app/deepstream-lpr-app.
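For reference, this is roughly how I've been checking things; the idea that the Makefile derives its include path from a DeepStream version variable is just my guess:

$ find /opt/nvidia/deepstream -name gstnvdsmeta.h
# on my setup this reports /opt/nvidia/deepstream/deepstream-7.0/sources/includes/gstnvdsmeta.h
$ grep -n "includes" Makefile
# checking which DeepStream include directory the stock Makefile actually points at

My assumption is that whatever -I flag the Makefile ends up generating needs to resolve to /opt/nvidia/deepstream/deepstream-7.0/sources/includes on this JetPack 6 / DeepStream 7.0 setup.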

I suggest you pull a DeepStream docker image for L4T. For example, nvcr.io/nvidia/deepstream-l4t:6.3-samples (from DeepStream | NVIDIA NGC).
Then clone the GitHub repo and retry.

Thank you for the guidance. I pulled l4t:6.3-samples, ran the docker (sudo docker run -it …etc.) and entered its environment, but I haven't been able to properly build the dependencies detailed here, and I'm finding out firsthand just how much my inexperience is showing. I got sidetracked trying to install pyenv so I could get Python 3.10 to build some of the base dependencies, and learned the hard way that apt installations inside docker containers are volatile (after I assumed I needed to stop the container and then restart it to work on pyenv). I've since read that it's standard practice to containerize an application and add its dependencies at image build time, so you run the container from a custom image with everything installed. Forgive me if I'm misinterpreting what I've read.
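(For what it's worth, my current understanding is that bind-mounting a host folder would at least keep downloaded files around between container runs; something like the sketch below, where the host path is just my own choice:)

$ docker run --runtime=nvidia -it --rm \
    -v /home/nvidia/lpr-workspace:/workspace \
    nvcr.io/nvidia/deepstream-l4t:6.3-samples /bin/bash
# anything saved under /workspace inside the container lands in
# /home/nvidia/lpr-workspace on the Jetson and survives the container being removed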

I tried to reproduce the workflow for building the blog's outlined project, downloading the models but getting stuck at tao-converter. To my understanding the container image is fairly barebones, and I couldn't work out how to install tao-converter entirely from the command line: inside the container I'd apparently need to set up NGC again, plus probably more dependencies than I remember installing when I originally configured the conda virtual environment I had been working in.

At this point I'm second-guessing how I'm approaching this problem and want to step back and work smarter rather than just harder. On DeepStream: I attempted to install it manually through the options available here by extracting the archive from the command line, but it always failed with the following error: "tar opt/nvidia/deepstream/deepstream: Cannot open: File exists"

Furthermore, attempting to use the DeepStream Debian download did and still does return "E: Unsupported file ./deepstream-7.0_7.0.0-1_arm64.deb given on commandline", even when run on bare metal (though I'm not sure whether that would be expected to make a difference).

I still have the /opt/nvidia/deepstream folders, and at this point I don't recall whether they were already there or were partially extracted and then incompletely installed. What prompted me to look into installing DeepStream in the first place was running "deepstream-app --help" and getting "bash: deepstream-app: command not found", which is still the case now. I have pulled a docker image for DeepStream 7.0 as well, but haven't run it.

To be honest, I'm not quite familiar with tar's behavior when extracting archives: with the -xf flags, would it overwrite existing files even when it exits with a failure status? Could my attempts at installing it (sudo tar -xvf deepstream_SDK_v7.0.0_jetson.tbz2 -C) have broken my installation? Would the right approach to this error just be to wipe out those directories and try the installation process again? Is version 6.3 more likely to be compatible with the LPD and LPR models? Or is there somewhere I could be pointed to learn how to build a custom DeepStream container image that also has all the required dependencies?
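In case it helps clarify what I'm asking, my guess at a "clean retry" would look something like the following (the "-C /" target is my assumption based on the SDK's extraction instructions, and the rm is obviously destructive, which is why I'd like confirmation first):

$ sudo rm -rf /opt/nvidia/deepstream                       # wipe whatever was partially extracted
$ sudo tar -xvf deepstream_SDK_v7.0.0_jetson.tbz2 -C /     # '/' target is my assumption
$ cd /opt/nvidia/deepstream/deepstream-7.0 && sudo ./install.sh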

Please pull the deepstream docker to avoid any missing packages related to deepstream.
$ docker run --runtime=nvidia -it --rm nvcr.io/nvidia/deepstream-l4t:6.3-samples /bin/bash

Then pull the deepstream_lpr_app repo. GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream.

For tao-converter, you can download from TAO Converter | NVIDIA NGC.

Hi again! I tried pulling the docker again, following the provided "docker run…" command. In my main environment I've pulled the nvcr.io/nvidia/deepstream-l4t:6.3-samples docker image, though if there is something else I should pull or install dependency-wise, I'd be glad to know.

I do have tao-converter installed (from that link) on bare metal and have been able to run it in my virtual environment workspace, but I've still not had any success installing it within the running docker environment. At the very least, the following command doesn't seem to pull in any additional dependencies related to building deepstream-lpr-app.

$ docker run --runtime=nvidia -it --rm nvcr.io/nvidia/deepstream-l4t:6.3-samples /bin/bash

Again, apologies for my inexperience, but I also recently realized that Docker images can be saved and backed up for later use. With that in mind, I could rebuild all the dependencies manually within my container, but I wanted to ask if I'm on the right track. Do I still need to rebuild the image with the dependencies, including ngccli to install tao-converter, or should I take a different, possibly more efficient approach (or am I missing steps in the current one) to get the app running?
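For instance, is the idea roughly the following (just my understanding of how Docker snapshots work, not something I've pulled from the DeepStream docs)?

$ docker ps                                         # find the running container's ID
$ docker commit <container_id> deepstream-lpr:dev   # <container_id> and the tag are placeholders of mine
$ docker run --runtime=nvidia -it --rm deepstream-lpr:dev /bin/bash   # later sessions start from the snapshot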

Hi,
You were pulling the nvcr.io/nvidia/deepstream-l4t:6.3-samples docker image on your Jetson Orin device, right? Did you go inside the docker to pull the repo? Can you share the full log of what you have run?

Right, I've pasted my complete terminal output (just for the last attempt I made at pulling the docker).

Most of that log is me attempting to pull dependencies; I pull deepstream_python_apps on line 656 (which should be inside the docker container). It mostly covers earlier work in the pulled docker container, even if it isn't everything (the full output runs to over 2,000 lines), but I wanted to provide it just in case. It may not be useful context at all, however.

In the meantime, I may just try to follow through with a bare-metal DeepStream installation and remove some of the directories that seem to be throwing tar errors (though I wouldn't be surprised if I encounter more issues).

I cannot open your link.
Can you upload a .txt file via the upload button?

Also, it is not deepstream_python_apps. As mentioned above, it is deepstream-lpr-app.

Also, it is not deepstream_python_apps. As mentioned above, it is deepstream-lpr-app.

My apologies, I had a lapse in attention while writing that response. I cloned deepstream_python_apps to try out other samples and examples, but I did clone deepstream_lpr_app and was working with it when I originally opened this thread. The logs should be attached.

deepstream6.3pull_logfile.txt (54.1 KB)

The bottom error is similar to the topic Missing dependencies in deepstream docker container - #4 by junshengy.
It seems to be a limitation of nvcr.io/nvidia/deepstream-l4t:6.3-samples.

For development purposes, nvcr.io/nvidia/deepstream:6.3-triton-multiarch can be used.

Thank you! After revisiting my Jetson after a few days, I've tried pulling the Triton image for dev work, but I've embarrassingly gotten stuck again. Previously I was able to install tao-converter in my virtual environment by manually unzipping the download from here, but I didn't accomplish all of that through the command line.

I pulled the triton-multiarch image, started it, and found I'd also need to set up the NGC CLI. By all measures this seemed to work: I entered my API key and "which ngc" returned the install directory. After that, I tried to pull tao-converter with:

ngc registry resource download-version nvidia/tao/tao-converter:v5.1.0_jp6.0_aarch64

I ended up with a file named "tao-converter_vv5.1.0_jp6.0_aarch64" at root, and couldn't figure out how to extract it (I didn't try installing other unarchiving packages; unzipping didn't work since it's not a .zip file). Honestly, I'm not sure I even got the right file. I followed the rest of the instructions in the "Installing on a Jetson platform" section (without, of course, figuring out whether or how the downloaded file needed to be extracted). At the moment, tao-converter is still an unrecognized command.
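(My best guess at this point is that ngc may have produced a directory rather than an archive, so the next thing I plan to check is something like:)

$ ls -l tao-converter_vv5.1.0_jp6.0_aarch64   # is it actually a directory holding the binary?
$ file tao-converter_vv5.1.0_jp6.0_aarch64    # or a single file / some archive format?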

Sorry for the extra bother! Hopefully I’ll figure it out on my own first.

You can ignore the ngc tool. Please go to TAO Converter | NVIDIA NGC, where you can find the different versions of tao-converter. Click "…" on the right; then you can wget the file.

Thank you again! I've made a few (unsuccessful) attempts at running tao-converter and figured I'd attach a logfile for your reference as well this time:

tao-converter-attempt-output.txt (26.9 KB)

I've tried following the instructions at the link you provided (the page isn't unfamiliar at all), but I can't help feeling that, as a newcomer, I don't know enough to fill in the blanks between some of the steps. I'm still not sure whether or how to extract the downloaded file (I run wget on line 141). There are a few other places where I'm just checking installations (like tao info --verbose or dpkg -l | grep cuda), but otherwise I've just tried to replicate the linked instructions, with the possibly incorrect command "chmod +x tao-converter" thrown in after configuring the environment variables.

I'm a bit embarrassed, as I wouldn't be surprised if I've made rookie mistakes thanks to my unfamiliarity with Linux and command-line work in general. I've searched through a few other threads and have started to wonder whether I've been working with the right versions of everything. I ran "./tao-converter -h" and observed:

./tao-converter: error while loading shared libraries: libcrypto.so.3: cannot open shared object file: No such file or directory

Though I'm in a docker container, my Jetson is on JetPack 6. The deepstream-lpr-app update notes say it has been updated for DeepStream 6.4 and later, but JetPack 6 is a more recent release than the app's last update. I've also seen an unaddressed issue on the GitHub where someone had trouble running it with DeepStream 7.0. Does my Jetson being configured for JetPack 6 pose any version issues in general, or not?

In the meantime I will try to get more information on version compatibility. Looking back at your comment, I realize now that I glossed over your suggestion to look for the right version, but I'm not sure I'll accomplish anything without first figuring out whether I'm even following the right installation process. If I don't make further edits or give updates, I probably haven't figured it out yet, oops.

I thank you again for your patience, and do apologize that this has been such a sticking point!

For the error "./tao-converter: error while loading shared libraries: libcrypto.so.3: cannot open shared object file: No such file or directory", I find it is due to mismatched Ubuntu versions between JetPack 6.0 and nvcr.io/nvidia/deepstream:6.3-triton-multiarch.

For JetPack 6.0, it is Ubuntu 22.04.
For nvcr.io/nvidia/deepstream:6.3-triton-multiarch, it is Ubuntu 20.04.

To avoid this issue, please use nvcr.io/nvidia/deepstream:6.4-triton-multiarch. This docker is based on Ubuntu 22.04.
Then tao-converter can work.
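A quick way to confirm the mismatch, for example, is to check which libcrypto each image ships and what tao-converter links against:

$ cat /etc/lsb-release                          # Ubuntu 20.04 inside the 6.3 image, 22.04 inside 6.4
$ ls /usr/lib/aarch64-linux-gnu/libcrypto.so*   # libcrypto.so.3 (OpenSSL 3) is only present on Ubuntu 22.04
$ ldd ./tao-converter | grep libcrypto          # shows the dependency that is missing on the 20.04 image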

nvidia@ubuntu:~$ docker run --runtime=nvidia -it --rm -v /home/nvidia:/home/nvidia nvcr.io/nvidia/deepstream:6.4-triton-multiarch /bin/bash

=============================
== Triton Inference Server ==
=============================

NVIDIA Release  (build )

Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

Failed to detect NVIDIA driver version.

root@2840b4ff0c57:/opt/nvidia/deepstream/deepstream-6.4#
root@2840b4ff0c57:/opt/nvidia/deepstream/deepstream-6.4# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.3 LTS"
root@2840b4ff0c57:/opt/nvidia/deepstream/deepstream-6.4#
root@2840b4ff0c57:/opt/nvidia/deepstream/deepstream-6.4# git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git
Cloning into 'deepstream_lpr_app'...
remote: Enumerating objects: 338, done.
remote: Counting objects: 100% (72/72), done.
remote: Compressing objects: 100% (46/46), done.
remote: Total 338 (delta 45), reused 37 (delta 24), pack-reused 266
Receiving objects: 100% (338/338), 3.66 MiB | 4.08 MiB/s, done.
Resolving deltas: 100% (223/223), done.
root@2840b4ff0c57:/opt/nvidia/deepstream/deepstream-6.4#
root@2840b4ff0c57:/opt/nvidia/deepstream/deepstream-6.4# wget --content-disposition 'https://api.ngc.nvidia.com/v2/resources/org/nvidia/team/tao/tao-converter/v5.1.0_jp6.0_aarch64/files?redirect=true&path=tao-converter' -O tao-converter
--2024-07-30 03:20:44--  https://api.ngc.nvidia.com/v2/resources/org/nvidia/team/tao/tao-converter/v5.1.0_jp6.0_aarch64/files?redirect=true&path=tao-converter
Resolving api.ngc.nvidia.com (api.ngc.nvidia.com)... 34.218.43.86, 35.82.180.134
Connecting to api.ngc.nvidia.com (api.ngc.nvidia.com)|34.218.43.86|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://files.ngc.nvidia.com/org/nvidia/team/tao/recipes/tao-converter/versions/v5.1.0_jp6.0_aarch64/files/tao-converter?versionId=633sVDaHxjYO2GkKa0OtITspFgtKpl1G&Expires=1722396044&Signature=R6MP9wM9dMNK5FGLd16LFEbmSzCuWdyBIvQfN-oDw1qpc1AxWNGhpQTXZNNmS4E8yeMIrtvXxMGAlsl77WjlTIINei2zjv2yaJSGw56mMwusGCgrUW98nVLbWhC~9bV-mXzJwJoFgQkHj3NYfGWfviAIMxNcjJibwklL7YIaykHSmuUjW~VpllyOAbLyMfr~dAFYHw9AzjLhGAkwOTm1-Ywp9nvOAkG5LKcFopMBYqvZ7DTDbpv2jn2NRa7rYbZkHIWqFDpASSSvPy3vytnBDz5vTq38~6N5EjpTRt0LPZ7ze4DhMSKLfdOHeIuDB5oA9zxvDdPD8tJr3tUOI1RP5w__&Key-Pair-Id=KCX06E8E9L60W [following]
--2024-07-30 03:20:44--  https://files.ngc.nvidia.com/org/nvidia/team/tao/recipes/tao-converter/versions/v5.1.0_jp6.0_aarch64/files/tao-converter?versionId=633sVDaHxjYO2GkKa0OtITspFgtKpl1G&Expires=1722396044&Signature=R6MP9wM9dMNK5FGLd16LFEbmSzCuWdyBIvQfN-oDw1qpc1AxWNGhpQTXZNNmS4E8yeMIrtvXxMGAlsl77WjlTIINei2zjv2yaJSGw56mMwusGCgrUW98nVLbWhC~9bV-mXzJwJoFgQkHj3NYfGWfviAIMxNcjJibwklL7YIaykHSmuUjW~VpllyOAbLyMfr~dAFYHw9AzjLhGAkwOTm1-Ywp9nvOAkG5LKcFopMBYqvZ7DTDbpv2jn2NRa7rYbZkHIWqFDpASSSvPy3vytnBDz5vTq38~6N5EjpTRt0LPZ7ze4DhMSKLfdOHeIuDB5oA9zxvDdPD8tJr3tUOI1RP5w__&Key-Pair-Id=KCX06E8E9L60W
Resolving files.ngc.nvidia.com (files.ngc.nvidia.com)... 13.224.163.60, 13.224.163.5, 13.224.163.79, ...
Connecting to files.ngc.nvidia.com (files.ngc.nvidia.com)|13.224.163.60|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 469976 (459K) [binary/octet-stream]
Saving to: 'tao-converter'

tao-converter                                   100%[====================================================================================================>] 458.96K  --.-KB/s    in 0.04s

2024-07-30 03:20:45 (12.3 MB/s) - 'tao-converter' saved [469976/469976]

root@2840b4ff0c57:/opt/nvidia/deepstream/deepstream-6.4# ls
LICENSE.txt                    README              doc         rtpjitterbuffer_eos_handling.patch  tao-converter         user_additional_install.sh
LicenseAgreement.pdf           bin                 install.sh  samples                             uninstall.sh          user_deepstream_python_apps_install.sh
LicenseAgreementContainer.pdf  deepstream_lpr_app  lib         sources                             update_rtpmanager.sh  version
root@2840b4ff0c57:/opt/nvidia/deepstream/deepstream-6.4# chmod +x tao-converter
root@2840b4ff0c57:/opt/nvidia/deepstream/deepstream-6.4#
root@2840b4ff0c57:/opt/nvidia/deepstream/deepstream-6.4# ./tao-converter -h
usage: ./tao-converter [-h] [-e ENGINE_FILE_PATH]
        [-k ENCODE_KEY] [-c CACHE_FILE]
        [-o OUTPUTS] [-d INPUT_DIMENSIONS]
        [-b BATCH_SIZE] [-m MAX_BATCH_SIZE]
        [-w MAX_WORKSPACE_SIZE] [-t DATA_TYPE]
        [-i INPUT_ORDER] [-s] [-u DLA_CORE]
        [-l engineLayerVerbose]
        [-v TensorRT version]
        [--precisionConstraints PRECISIONCONSTRAINTS][--layerPrecisions layerName:precision][--layerOutputTypes layerName:precision]input_file

Generate TensorRT engine from exported model

positional arguments:
  input_file            Input file (.etlt exported model).

required flag arguments:
  -d            comma separated list of input dimensions(not required for TLT 3.0 new models).
  -k            model encoding key.

optional flag arguments:
  -b            calibration batch size (default 8).
  -c            calibration cache file (default cal.bin).
  -e            file the engine is saved to (default saved.engine).
  -i            input dimension ordering -- nchw, nhwc, nc (default nchw).
  -m            maximum TensorRT engine batch size (default 16). If meet with out-of-memory issue, please decrease the batch size accordingly.
  -o            comma separated list of output node names (default none).
  -p            comma separated list of optimization profile shapes in the format <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has `x` as delimiter, e.g., NxC, NxCxHxW, NxCxDxHxW, etc. Can be specified multiple times if there are multiple input tensors for the model. This argument is only useful in dynamic shape case.
  -s            TensorRT strict_type_constraints flag for INT8 mode(default false).
  -t            TensorRT data type -- fp32, fp16, int8 (default fp32).
  -u            Use DLA core N for layers that support DLA(default = -1, which means no DLA core will be utilized for inference. Note that it'll always allow GPU fallback).
  -w            maximum workspace size (in Bytes) of TensorRT engine (default is the size of total global memory in the device). If meet with out-of-memory issue, please increase the workspace size accordingly.
  -l            Print the engineLayerInfo of network(default false).
  -v            Print the version of TensorRT.
  [--precisionConstraints spec] Control precision constraint setting. (default = none)
                                  Precision Constaints: spec ::= "none" | "obey" | "prefer"
                                  none = no constraints
                                  prefer = meet precision constraints set by --layerPrecisions/--layerOutputTypes if possible
                                  obey = meet precision constraints set by --layerPrecisions/--layerOutputTypes or fail
                                         otherwise
  [--layerPrecisions spec]      Control per-layer precision constraints. Effective only when precisionConstraints is set to
                              "obey" or "prefer". (default = none)
                              The specs are read left-to-right, and later ones override earlier ones. "*" can be used as a
                              layerName to specify the default precision for all the unspecified layers.
                              Per-layer precision spec ::= layerPrecision[","spec]
                                                  layerPrecision ::= layerName":"precision
                                                  precision ::= "fp32"|"fp16"|"int32"|"int8"
  [--layerOutputTypes spec]     Control per-layer output type constraints. Effective only when precisionConstraints is set to
                              "obey" or "prefer". (default = none)
                              The specs are read left-to-right, and later ones override earlier ones. "*" can be used as a
                              layerName to specify the default precision for all the unspecified layers. If a layer has more than
                              one output, then multiple types separated by "+" can be provided for this layer.
                              Per-layer output type spec ::= layerOutputTypes[","spec]
                                                    layerOutputTypes ::= layerName":"type
                                                    type ::= "fp32"|"fp16"|"int32"|"int8"["+"type]

root@2840b4ff0c57:/opt/nvidia/deepstream/deepstream-6.4#

Hi again! Thank you for the advice. I was able to make it as far as building deepstream-lpr-app (I think successfully, though not without warnings), but I'm a bit stuck on how to run it. In that regard, I have a few questions: are file1.mp4 and file2.mp4 referenced externally (i.e. not in the immediate directory), or are they just placeholder names used in the technical blog's example? If the latter, will I need to pull sample validation/testing data separately from what's included in the Triton docker image, or should I be able to access it some other way? And what about the GStreamer errors I'm seeing when I attempt to run deepstream-lpr-app?

deepstream_lpr_app_attempt.txt (67.6 KB)

The attached .txt file contains logs from beginning to end; I actually had a power brownout from a storm, so this wasn't my first time going through the 6.4 container. Regardless, it covers all the steps I took to download the deepstream_lpr_app models and to install and run tao-converter on the pre-built model (on line 165). It threw warnings about some casts and weights, but at least seemed to complete. I then cloned deepstream_lpr_app on line 205, which I did in the same directory (I'd previously tried doing it in root and then moving some other files around, but I'm not actually sure whether there's a specific directory I should clone the GitHub repo into?). I followed some (possibly redundant) instructions on GitHub to run "./download_convert.sh us 0" and ran the Makefile. I moved libnvdsinfer_custom_impl_lpr.so to /opt/nvidia/deepstream/deepstream-6.4/lib/ and created dict.txt inside the nested deepstream_lpr_app folder, but could not figure out this step:

Modify the nvinfer configuration files for TrafficCamNet, LPD and LPR with the actual model path and names. The config file for TrafficCamNet is provided in DeepStream SDK under the following path:

/opt/nvidia/deepstream/deepstream-5.0/samples/models/tao_pretrained_models/trafficcamnet.txt

The sample lpd_config.txt and lpr_config_sgie_us.txt files can be found at lpd_config.txt and lpr_config_sgie_us.txt. Note the parse-classifier-func-name and custom-lib-path. This uses the new nvinfer LPR library from step 1.

I checked the directory "/opt/nvidia/deepstream/deepstream-5.0/samples/models/tao_pretrained_models/" on line 562, but there was only a folder, "trafficcamnet", with "resnet18_trafficcamnet_pruned.etlt" and "trafficnet_int8.txt" inside. I'm also not sure what the blog article was referring to with the "lpd_config.txt" and "lpr_config_sgie_us.txt" files: am I just supposed to modify these configuration files as well for the other two models? Where would they be located? I'd like to think I could figure out how to point these config files at the model directories and files I pulled if I could open and read them, but I wouldn't know where to start if I had to construct them from scratch. Were these files supposed to come as part of the docker image? Can they be found somewhere else or pulled separately?
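(For reference, here's how I've been trying to track them down, on the assumption that they ship with the cloned repo rather than the SDK itself; the paths are simply where I've been looking:)

$ find . -name 'lpd_config*.txt' -o -name 'lpr_config*.txt'     # run from inside the cloned deepstream_lpr_app
$ find /opt/nvidia/deepstream/deepstream-6.4/samples -iname '*trafficcamnet*'   # looking for the TrafficCamNet config the blog mentions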

I attempted to run the compiled deepstream_lpr_app inside the same directory I had cloned into, not knowing whether it would produce any output without hitting an error. I also found that the sample syntax for running the app had an additional argument (Triton-related, I believe), so I tried to supply a reasonable value and re-ran the app. I wasn't surprised to see errors, but I need some guidance deciphering them.

Are these errors occurring because file1.mp4 and file2.mp4 are invalid arguments? I'm inclined to think so from the "Error FileOperationFailed" messages, but I'm not sure. Furthermore, the other errors about missing/failed plugin loads are a bit concerning. Are the file paths for these plugins incorrect, could there be an issue with my installation, or do these plugins need to be installed separately?

root@767c8f2cb2a9:/opt/nvidia/deepstream/deepstream-6.4/samples/models/LP/LPR/deepstream_lpr_app/deepstream-lpr-app# ./deepstream-lpr-app 1 1 0 infer file1.mp4 file2.mp4 output.264
use_nvinfer_server:0, use_triton_grpc:0
(Argus) Error FileOperationFailed: Connecting to nvargus-daemon failed: No such file or directory (in src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 204)
(Argus) Error FileOperationFailed: Cannot create camera provider (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 106)
/bin/dash: 1: lsmod: not found
/bin/dash: 1: modprobe: not found

(gst-plugin-scanner:221): GStreamer-WARNING **: 18:54:26.258: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:221): GStreamer-WARNING **: 18:54:26.274: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstmpeg2enc.so': libmpeg2encpp-2.1.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:221): GStreamer-WARNING **: 18:54:26.331: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstmpg123.so': libmpg123.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:221): GStreamer-WARNING **: 18:54:26.382: adding type GstEvent multiple times

(gst-plugin-scanner:221): GStreamer-WARNING **: 18:54:26.520: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstchromaprint.so': libavcodec.so.58: cannot open shared object file: No such file or directory

(gst-plugin-scanner:221): GStreamer-WARNING **: 18:54:26.560: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstopenmpt.so': libmpg123.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:221): GStreamer-WARNING **: 18:54:26.672: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstmpeg2dec.so': libmpeg2.so.0: cannot open shared object file: No such file or directory
GLib (gthread-posix.c): Unexpected error from C library during 'pthread_setspecific': Invalid argument. Aborting.

As you can see, I initialized the docker container with "docker run --runtime=nvidia -it --rm -v /home/nvidia:/home/nvidia nvcr.io/nvidia/deepstream:6.4-triton-multiarch /bin/bash" on line 1. I looked up some of these errors, such as in this thread, where the user believed their problems were tied to their lack of a display. I'm still not sure how to interpret which problems I might actually be facing myself.
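One thing I plan to try next is swapping the placeholder names for a sample clip that I believe ships inside the DeepStream container, something along these lines (the clip path is an assumption on my part):

$ ls /opt/nvidia/deepstream/deepstream-6.4/samples/streams/
$ ./deepstream-lpr-app 1 1 0 infer \
    /opt/nvidia/deepstream/deepstream-6.4/samples/streams/sample_720p.mp4 \
    /opt/nvidia/deepstream/deepstream-6.4/samples/streams/sample_720p.mp4 \
    output.264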

Finally, asking for a sanity check: have I been following the correct workflow for building deepstream-lpr-app in the first place? Are there things I've done that I didn't need to do, or, worse, actions I've taken or neglected that would result in a faulty build?

In JetPack 6.0 + nvcr.io/nvidia/deepstream:6.4-triton-multiarch, I can reproduce your error.

root@d52c7dc9d620:/home/nvidia/morganh/deepstream_lpr_app/deepstream-lpr-app# ./deepstream-lpr-app 1 1 0 infer VJ_1H_part_2_1_crop.mp4 out.264
use_nvinfer_server:0, use_triton_grpc:0
/bin/bash: line 1: lsmod: command not found
/bin/bash: line 1: modprobe: command not found
(Argus) Error FileOperationFailed: Connecting to nvargus-daemon failed: No such file or directory (in src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 204)
(Argus) Error FileOperationFailed: Cannot create camera provider (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 106)

(gst-plugin-scanner:217): GStreamer-WARNING **: 02:47:38.453: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:217): GStreamer-WARNING **: 02:47:38.500: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstchromaprint.so': libavcodec.so.58: cannot open shared object file: No such file or directory

(gst-plugin-scanner:217): GStreamer-WARNING **: 02:47:38.552: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstmpg123.so': libmpg123.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:217): GStreamer-WARNING **: 02:47:38.557: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstmpeg2enc.so': libmpeg2encpp-2.1.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:217): GStreamer-WARNING **: 02:47:38.570: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstopenmpt.so': libmpg123.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:217): GStreamer-WARNING **: 02:47:38.695: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstmpeg2dec.so': libmpeg2.so.0: cannot open shared object file: No such file or directory
GLib (gthread-posix.c): Unexpected error from C library during 'pthread_setspecific': Invalid argument.  Aborting.
/bin/bash: line 1: lsmod: command not found
/bin/bash: line 1: modprobe: command not found
Request sink_0 pad from streammux
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.

(deepstream-lpr-app:216): GLib-GObject-WARNING **: 02:48:28.266: g_object_set_is_valid_property: object class 'GstNvTracker' has no property named 'enable_batch_process'
Now playing: 1
Opening in BLOCKING MODE
Opening in BLOCKING MODE
WARNING: Deserialize engine failed because file path: /home/nvidia/morganh/deepstream_lpr_app/deepstream-lpr-app/../models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine open error
0:00:54.890375591   216 0xaaaabd3c9690 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 3]: deserialize engine from file :/home/nvidia/morganh/deepstream_lpr_app/deepstream-lpr-app/../models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine failed
0:00:55.243312416   216 0xaaaabd3c9690 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 3]: deserialize backend context from engine from file :/home/nvidia/morganh/deepstream_lpr_app/deepstream-lpr-app/../models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine failed, try rebuild
0:00:55.243377889   216 0xaaaabd3c9690 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 3]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:372: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: [TRT]: DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU

Attaching the log.
20240806_forum_300070_run_lpr_Jetpack6.0_orin.txt (30.7 KB)

@Fiona.Chen Do you have any idea about this issue when running LPR on Jetson Orin with JetPack 6.0 + nvcr.io/nvidia/deepstream:6.4-triton-multiarch?

Hi @samuelxu3240
You can ignore the error in the log. Just let the application continue.
It can run to the end.
Attaching my log.
20240806_forum_300070_run_lpr_Jetpack6.0_orin_update.txt (84.8 KB)

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks