Hi again! Thank you for the advice. I was able to get as far as building deepstream-lpr-app (successfully, I think, though not without warnings), but I'm a bit stuck on how to run it. Two questions in that regard: are file1.mp4 and file2.mp4 referenced from somewhere outside the immediate directory, or are they just placeholder names used in the example on the technical blog? If the latter, do I need to pull sample validation/testing data separately from what is included inside the Triton docker image, or should I be able to access it some other way? Beyond that, what should I make of the GStreamer errors I'm seeing when I attempt to run deepstream-lpr-app?
deepstream_lpr_app_attempt.txt (67.6 KB)
The attached .txt file contains logs from beginning to end; I actually had a power brownout from a storm, so this wasn't my first time going through the 6.4 container. Regardless, it covers all the steps I've taken: downloading the deepstream_lpr_app models, then installing and running tao-converter on the pre-built model (at line 165 of the log). It threw some warnings about casts and weights, but it seemed to at least complete. I then pulled deepstream_lpr_app at line 205, which I did in the same directory (I'd previously tried to do it in root and then move some other files around, but I'm not actually sure whether there is a specific directory I should clone the GitHub repo into?). I followed some (possibly redundant) instructions on GitHub to run "./download_convert.sh us 0" and ran the Makefile. I moved libnvdsinfer_custom_impl_lpr.so to /opt/nvidia/deepstream/deepstream-6.4/lib/ and created dict.txt inside the nested deepstream_lpr_app folder.
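To summarize, this is roughly the sequence I followed inside the container, condensed from the attached log; the repo URL and paths below are written from memory rather than copied out of the log, so please treat them as approximate:

# downloaded the LPR models and ran tao-converter on the pre-built model
# (the full tao-converter command and its warnings start at line 165 of the attached log)
git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git
cd deepstream_lpr_app
./download_convert.sh us 0
make
cp nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so /opt/nvidia/deepstream/deepstream-6.4/lib/
# then created dict.txt inside the nested deepstream-lpr-app folder

The one step I could not figure out is this part of the blog instructions: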
"Modify the nvinfer configuration files for TrafficCamNet, LPD and LPR with the actual model path and names. The config file for TrafficCamNet is provided in DeepStream SDK under the following path: /opt/nvidia/deepstream/deepstream-5.0/samples/models/tao_pretrained_models/trafficcamnet.txt. The sample lpd_config.txt and lpr_config_sgie_us.txt files can be found at lpd_config.txt and lpr_config_sgie_us.txt. Note the parse-classifier-func-name and custom-lib-path. This uses the new nvinfer LPR library from step 1."
I checked the directory /opt/nvidia/deepstream/deepstream-5.0/samples/models/tao_pretrained_models/ at line 562 of the log, but it only contained a folder, "trafficcamnet", with "resnet18_trafficcamnet_pruned.etlt" and "trafficnet_int8.txt" inside. I'm also not sure what the blog article is referring to with the "lpd_config.txt" and "lpr_config_sgie_us.txt" files: is the idea simply to modify those configuration files as well, for the other two models? Where would they be located? I would like to think I could figure out how to point these config files at the model directories and files I pulled if I could open and read them, but I wouldn't know where to start if I had to construct them from scratch. Were these files supposed to come as part of the docker image? Can they be found somewhere else, or pulled separately?
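For what it's worth, here is my rough guess at what the relevant part of lpr_config_sgie_us.txt might look like, pieced together from the key names in the quoted passage and the general shape of other nvinfer configs I've seen; every path, filename, and the parser function name below is an assumption on my part rather than something I've verified:

[property]
gpu-id=0
# engine produced by tao-converter (this path/name is just where I'd expect mine to end up)
model-engine-file=../models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
# the two keys the blog calls out, pointing at the custom parser library built in step 1
parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.4/lib/libnvdsinfer_custom_impl_lpr.so

If that is roughly the right shape, my remaining question is mostly where the canonical lpd_config.txt and lpr_config_sgie_us.txt actually live, so that I can adapt them rather than write them blind.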
I attempted to run the compiled deepstream-lpr-app from the same directory I had git cloned into, not knowing whether it would produce any output without hitting an error. I also noticed that the sample syntax for running the app includes an additional argument (Triton-related, I believe), so I supplied what seemed like a reasonable value and re-ran it. I wasn't surprised to see errors, but I could use some guidance deciphering them.
Are these errors occurring because file1.mp4 and file2.mp4 are invalid arguments? I'm inclined to think so from the "Error FileOperationFailed" messages, but I'm not sure. Furthermore, the other errors about missing/failed plugin loads are a bit concerning. Are the paths for these plugins incorrect, could there be an issue with my installation, or do these plugins need to be installed separately?
root@767c8f2cb2a9:/opt/nvidia/deepstream/deepstream-6.4/samples/models/LP/LPR/deepstream_lpr_app/deepstream-lpr-app# ./deepstream-lpr-app 1 1 0 infer file1.mp4 file2.mp4 output.264
use_nvinfer_server:0, use_triton_grpc:0
(Argus) Error FileOperationFailed: Connecting to nvargus-daemon failed: No such file or directory (in src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 204)
(Argus) Error FileOperationFailed: Cannot create camera provider (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 106)
/bin/dash: 1: lsmod: not found
/bin/dash: 1: modprobe: not found
(gst-plugin-scanner:221): GStreamer-WARNING **: 18:54:26.258: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
(gst-plugin-scanner:221): GStreamer-WARNING **: 18:54:26.274: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstmpeg2enc.so': libmpeg2encpp-2.1.so.0: cannot open shared object file: No such file or directory
(gst-plugin-scanner:221): GStreamer-WARNING **: 18:54:26.331: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstmpg123.so': libmpg123.so.0: cannot open shared object file: No such file or directory
(gst-plugin-scanner:221): GStreamer-WARNING **: 18:54:26.382: adding type GstEvent multiple times
(gst-plugin-scanner:221): GStreamer-WARNING **: 18:54:26.520: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstchromaprint.so': libavcodec.so.58: cannot open shared object file: No such file or directory
(gst-plugin-scanner:221): GStreamer-WARNING **: 18:54:26.560: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstopenmpt.so': libmpg123.so.0: cannot open shared object file: No such file or directory
(gst-plugin-scanner:221): GStreamer-WARNING **: 18:54:26.672: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstmpeg2dec.so': libmpeg2.so.0: cannot open shared object file: No such file or directory
GLib (gthread-posix.c): Unexpected error from C library during 'pthread_setspecific': Invalid argument. Aborting.
As shown at line 1 of the log, I started the docker container with: "docker run --runtime=nvidia -it --rm -v /home/nvidia:/home/nvidia nvcr.io/nvidia/deepstream:6.4-triton-multiarch /bin/bash". I looked up some of these errors, for example in this thread, where the user believed their problems were tied to the lack of a display. I'm still not sure how to work out which problems apply to my own setup.
Finally, a sanity check: have I been following the correct workflow for building deepstream-lpr-app in the first place? Are there steps I've taken that weren't necessary, or worse, actions I've taken or neglected that would result in a faulty build?