Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
Jetson AGX Orin
• DeepStream Version
DeepStream 7.1
• JetPack Version (valid for Jetson only)
JetPack 6.0
• TensorRT Version
TensorRT 8.6.2.3-1+cuda12.2
• Issue Type (questions, new requirements, bugs)
Questions
Hello,
I am able to successfully run one of the DeepStream sample models on my Jetson on my camera's live feed using the following GStreamer pipeline:
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,framerate=30/1,width=3840,height=1080 ! nvvideoconvert src-crop=0:0:1920:1080 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,format=NV12' ! nvvideoconvert ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=configs/dstest1_pgie_config.yml ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! nv3dsink -e
Now I plan to use different models provided by the NVIDIA NGC Catalog. When downloading, I am only provided with two files: int8_calibration.txt and model.etlt.
My question now is: running the nvinfer plugin in my pipeline requires passing a config file, either in .yml or .txt format. How do I know which values to specify in the configs for the models I want to use? For some of the property values in the examples, I cannot even find an entry in the gst-nvinfer documentation (Gst-nvinfer — DeepStream documentation).
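For context, here is my best guess at a minimal config for such an .etlt model, pieced together from the dstest1 sample config and the gst-nvinfer documentation. The tlt-model-key, infer-dims, num-detected-classes and engine file name are assumptions on my part, not values I found documented anywhere:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
# the .etlt file is an encrypted TAO model; I assume the decode key is the
# one usually listed on the model cards, but I could not confirm this
tlt-encoded-model=model.etlt
tlt-model-key=nvidia_tlt
# calibration cache that ships with the model, presumably for INT8 mode
int8-calib-file=int8_calibration.txt
network-mode=1
network-type=0
# input dimensions and class count are guesses; where would I look these up?
infer-dims=3;416;736
num-detected-classes=1
batch-size=1
# where the built TensorRT engine gets serialized (name is my own choice)
model-engine-file=model.etlt_b1_gpu0_int8.engine

Is something like this the right structure, and if so, where are the per-model values (key, dimensions, preprocessing) supposed to come from?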
I am specifically interested in the models: FaceDetect (FaceDetect | NVIDIA NGC), FacialLandmarks (Facial Landmarks Estimation | NVIDIA NGC) and GazeNet (Gaze Estimation | NVIDIA NGC).
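For what it's worth, my rough idea (possibly wrong) is that FaceDetect would run as the primary GIE and FacialLandmarks/GazeNet as secondary GIEs operating on the detected face crops, with the secondary configs containing process-mode=2 and operate-on-gie-id pointing at the primary. The config file names below are placeholders, not files I have:

gst-launch-1.0 v4l2src device=/dev/video0 ! ... ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=facedetect_pgie_config.txt ! nvinfer config-file-path=landmarks_sgie_config.txt ! nvdsosd ! nv3dsink -e

Is that the right structure, or do these models need the custom pre/post-processing from the deepstream_tao_apps repository (if that is even the right place to look)?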
I wouldn’t mind a general guide applicable to any model, though.
Thanks!