In DeepStream 6.1, how can I load a specific model to process a corresponding video stream?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I don't fully understand your question. Can you elaborate on your use case?

I want to use different models on different video streams when I run deepstream-app -c xxx.txt, e.g. use yolov7 to detect targets in sample_720p.h264 and yolov8 to detect targets in sample_1080p_h264.mp4. How can I modify the code or config to achieve this?

deepstream-app cannot support this requirement, but you can refer to our demo: deepstream_parallel_inference_app. It may meet your needs.

Thanks for your reply. I looked at the requirements of the demo; it needs DeepStream 6.1.1, so do I have to upgrade my DS version to 6.1.1 or above to use the gst-nvdsmetamux plugin? And how can I select a specific source for a specific model, just like this:
source0 → model0 → detection0
source1 → model1 → detection1

You can refer to the src-ids parameter of the branch-group configuration: branch-group.
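
Roughly, each branch group in the parallel-inference YAML config selects which sources are fed into which inference engine. Here is a minimal sketch, assuming the branchN / pgie-id / src-ids key names from the source4_1080p_dec_parallel_infer.yml sample (written from memory, so please verify against the actual config in the repo):

```yaml
# Hypothetical excerpt from a parallel-inference YAML config.
# Key names are assumed from the deepstream_parallel_inference_app sample;
# verify them against the config files shipped with the repo.
branch0:
  pgie-id: 1     # first inference branch, e.g. a yolov7 nvinfer config
  src-ids: 0     # only source0 goes through this branch
branch1:
  pgie-id: 2     # second inference branch, e.g. a yolov8 nvinfer config
  src-ids: 1     # only source1 goes through this branch
```

Each branch then produces its own detections, and the gst-nvdsmetamux plugin merges the metadata from all branches back into a single output.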

Thanks for your reply. My DS version is 6.1; how can I upgrade DS from 6.1 to 6.1.1, uninstall → install, or upgrade directly?

No. You need to upgrade the relevant components, like CUDA, TensorRT, etc. Please refer to the table in the quick start guide: DS_Quickstart.
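
Before and after the upgrade you can check which versions are currently installed and compare them against that table, for example:

```bash
# Check the currently installed component versions
nvcc --version                 # CUDA toolkit
dpkg -l | grep -i tensorrt     # TensorRT packages (Ubuntu/Debian)
nvidia-smi                     # GPU driver
deepstream-app --version-all   # DeepStream and the versions it was built against
```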

Thanks again for your reply, I will try it later.

Hi, when I upgraded some of the relevant components and ran "source build.sh" in the deepstream_parallel_inference_app repo, I got this error: /usr/local/cuda/include/cuda_runtime_api.h:147:10: fatal error: crt/host_defines.h: No such file or directory. What can I do to solve this problem?

Could you upgrade directly to the latest version, DeepStream 6.3, and make sure that CUDA, TensorRT, etc. are also upgraded to the corresponding versions?

Thanks for your reply. I found out that there are multiple CUDA versions under /usr/local, and I solved this error by running the following commands:
rm -rf /usr/local/cuda
sudo ln -s /usr/local/cuda-11.7 /usr/local/cuda
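
For anyone hitting the same error, the link can be double-checked afterwards with something like:

```bash
# Confirm the symlink points at the intended toolkit and that nvcc resolves to it
ls -l /usr/local/cuda
/usr/local/cuda/bin/nvcc --version
```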

But there are still some errors when I run “./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml” in the deepstream_parallel_inference_app repo.


What can I do to solve this? Could you give me some suggestions?

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

There may still be some issues with the installation of your related components. You can refer to the topic below to find out which component is not installed properly: 269685.
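
For example, a quick way to check whether the DeepStream GStreamer plugins needed by the parallel-inference app are visible (plugin names assumed from the demo; clearing the GStreamer cache first is a common troubleshooting step):

```bash
# Rebuild the GStreamer plugin registry from scratch
rm -rf ~/.cache/gstreamer-1.0
# Core DeepStream inference plugin
gst-inspect-1.0 nvinfer
# Metadata mux plugin built and installed by deepstream_parallel_inference_app
gst-inspect-1.0 nvdsmetamux
```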

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.