Some problems with deepstream_parallel_inference_app

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** GPU
• DeepStream Version 6.1.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
When I run “./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml” from the deepstream_parallel_inference_app repo, some problems happen, as follows:


So, how can I solve this problem?
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)

1. Are you running the program on the host or in Docker?

2. Have you modified the configuration file? Does your configuration file look like the following?

  #(0): nvinfer; (1): nvinferserver
  plugin-type: 1

  #config-file: ../../yolov4/config_yolov4_infer.txt
  config-file: ../../yolov4/config_yolov4_inferserver.txt

3. If the above is the same, the problem is most likely caused by a missing tritonserver.

There are two ways:

  1. Modify plugin-type: 0 and config-file: ../../yolov4/config_yolov4_infer.txt (see the sketch after this list)

  2. It is recommended to use Docker with nvcr.io/nvidia/deepstream:6.3-triton-multiarch
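
For reference, option 1 would look roughly like this in the app’s YAML config, a sketch based on the snippet above (the exact layout may differ in your file):

  #(0): nvinfer; (1): nvinferserver
  plugin-type: 0

  config-file: ../../yolov4/config_yolov4_infer.txt
  #config-file: ../../yolov4/config_yolov4_inferserver.txt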

I’m running on the host and I haven’t modified the configuration file. Now the above WARNING messages have disappeared, but the ERROR still exists.


And when I modify plugin-type: 0 and config-file: ../../yolov4/config_yolov4_infer.txt,
there are still error messages, such as:
NVDSMETAMUX_CFG_PARSER: Group ‘user-configs’ ignored
Unknown or legacy key specified ‘is-classifier’ for group [property]

Sorry for giving a wrong judgment earlier. I looked at the code and configuration files.

deepstream_parallel_inference_app requires tritonserver for inference.

So you may need to use Docker to avoid the complex installation process.

You can refer to this.

Thanks for your reply. Besides the DS 6.3 Docker image, is a Docker image with DS 6.1.1-triton-multiarch available? And what other methods can I use besides installing via Docker? Looking forward to your reply.

If you want to use DS 6.1.1, you can try the following Docker image.

docker pull nvcr.io/nvidia/deepstream:6.1.1-triton
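
A minimal sketch for starting that image (assumed flags, mirroring the 6.3 run command used later in this thread):

docker run --gpus all -it --rm nvcr.io/nvidia/deepstream:6.1.1-triton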

Sorry, I haven’t tried installing Triton Server manually. I don’t know any way other than installing via Docker.

Thanks sincerely for your reply. I ran it successfully in Docker with docker pull nvcr.io/nvidia/deepstream:6.3-triton-multiarch, and now I want to show the video on the screen and display the fps on the video or in the terminal. How can I achieve that?

sink0:
  enable: 0
  #Type - 1=FakeSink 2=EglSink 3=File 7=nv3dsink (Jetson only)
  type: 2
  sync: 1
  source-id: 0
  gpu-id: 0
  nvbuf-memory-type: 0

Modify your configuration file: change enable from 0 to 1, as shown below.
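
With that change applied, the sink group would read (the same snippet as above, only enable flipped):

sink0:
  enable: 1
  #Type - 1=FakeSink 2=EglSink 3=File 7=nv3dsink (Jetson only)
  type: 2
  sync: 1
  source-id: 0
  gpu-id: 0
  nvbuf-memory-type: 0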

Another tip: before you start Docker, you need to do the operations in the link.

application:
  enable-perf-measurement: 1
  perf-measurement-interval-sec: 5

In the current configuration file, fps will be displayed on the terminal by default.

When I modify sink0 in my configuration file (enable from 0 to 1), I get this error:

ERROR: nvdsmetamux gstnvdsmetamux.cpp:1005: gst_nvdsmetamux_aggregate:<infer_bin_muxer> push error


How do you run the Docker container? Can you share the command line?

And what is the model of your GPU?

Yeah, I’m running in Docker, and the DS version is 6.3. The command is:

./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml

It’s the demo offered by the deepstream_parallel_inference_app repo.
And the error occurred when I modified sink0 (enable: 0 → 1).

I mean, how do you start Docker?
If you want to display from inside Docker, you need to follow these steps:

xhost +

docker run --gpus all -it --rm --net=host --privileged -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.3 nvcr.io/nvidia/deepstream:6.3-triton-multiarch
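
If the display still does not work inside the container, a quick sanity check (assuming the container was started with the options above) is to confirm the X display variable was passed through:

echo $DISPLAY
# expect something like :0 or :1; if it prints nothing, the -e DISPLAY=$DISPLAY option did not take effect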

In addition, I found that enable-perf-measurement does not work in deepstream_parallel_inference_app; it only takes effect in deepstream-app.

I will look into this issue.

I start Docker with the following command:

docker run -it --gpus all nvcr.io/nvidia/deepstream:6.3-triton-multiarch /bin/bash

So, do I need to start it exactly with the command you provided?
And fps display isn’t supported in deepstream_parallel_inference_app? 😊😊

Yes, fps display is currently not supported there.

If it is supported in the future, I will give feedback.

Yes, you can try it first.

Thanks for your reply. I will try it later, and I look forward to your follow-up if it is supported in the future.

One more question: how do I use Nsight Systems to profile? Like this?

nsys profile --stats=true ./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml

I’m trying to get CPU and GPU utilization.

Yes, you can get more information from here.

Could you give me a detailed command to display the CPU utilization and GPU utilization on the terminal, thanks.

Try the following CLI command.

nsys profile --stats=true --sample=cpu --trace=cuda,cudnn,cublas,nvtx,osrt,oshmem ./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml

You will get output such as:

Time (%)  Total Time (ns)  Num Calls    Avg (ns)      Med (ns)    Min (ns)   Max (ns)     StdDev (ns)            Name
 --------  ---------------  ---------  ------------  ------------  --------  -----------  -------------  ----------------------
     61.8    6862642163937     273904    25054917.6      393098.5       802  58168989097    880367902.9  pthread_cond_wait
     28.7    3180146886600     241841    13149742.5        6569.0       483  53659046338    429000516.1  futex
      2.6     289327846731      49999     5786672.7      228345.0       826  29823194392    138261878.5  poll
      1.8     205013066942       7400    27704468.5    28453414.5     74197   4378362396    101851026.5  ppoll
      1.6     180016135286        183   983694728.3  1000087951.0     40013  10001261789   1730653101.4  pthread_cond_timedwait
      1.6     176695770522         44  4015812966.4       91113.0        60  59155971605  13635378370.8  sem_timedwait
      0.6      69216509643      13708     5049351.4      585422.0       120  16484066649    141790331.3  sem_wait
      0.5      59342157415        349   170034835.0     1098534.0    156675   1000192787    374939390.6  nanosleep
      0.3      31843013549     126709      251308.2       38076.0        79     86764234       724197.8  pthread_rwlock_wrlock
      0.2      19637564794     109638      179112.8       29251.5       108     85577782       587847.4  pthread_rwlock_rdlock
      0.1      11331747656      76205      148700.8        8848.0        35   2013220601     12638364.1  pthread_mutex_lock

You can also export the report1.nsys-rep file and open it in the GUI tools.
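
If you only want to re-print the summary tables later from a saved report (assuming the default report name report1.nsys-rep), the stats subcommand can be run on the report file directly:

nsys stats report1.nsys-rep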

Thanks sincerely for your reply, and I tried it, but I got some errors:

And when I attempted to download and install the Nsight tool in Docker with the following command:

NVIDIA_Nsight_Graphics_2023.3.2.23261.run

Some errors happened when I ran the following command in the terminal:

nsys-ui


Is it that I can’t do this in Docker? If so, how can I profile the code in Docker?