Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Tesla T4
• DeepStream Version: DeepStream 6.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: TensorRT 8.5.0.2
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs):
I built a Docker container based on deepstream6.1-triton to test the deepstream_parallel_inference_app, and got an error when configuring two cameras with three models, as below:
I compiled the deepstream_parallel_inference_app and ran the command deepstream_parallel_inference_app -c xxx.yml, and it succeeded at first. Then I added a new model to expand the detection targets and got the error. I don't know what causes it.
This may be a problem with your new model. You can try to deploy your new model with our simple demo first: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1/deepstream_test1_app.c. After the model runs successfully there, you can deploy it in the parallel demo.
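For example, if your new model is a detector, a minimal nvinfer config for testing it in isolation could look like the sketch below. This is only an illustration: every file name and value is a placeholder for your own model, and it assumes the YAML flavor of the nvinfer config (the same keys work in the .txt config that deepstream-test1 uses by default):

```yaml
# Hypothetical nvinfer config for trying the new model on its own.
# All file names and values are placeholders, not from this thread.
property:
  gpu-id: 0
  onnx-file: my_new_model.onnx       # or model-engine-file for a prebuilt engine
  labelfile-path: my_labels.txt
  batch-size: 1
  network-mode: 2                    # 0=FP32, 1=INT8, 2=FP16
  num-detected-classes: 4
  gie-unique-id: 1
class-attrs-all:
  pre-cluster-threshold: 0.4
```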
Thanks for your reply. We have verified that the model runs successfully.
It seems the error is caused by the settings in source4_1080p_dec_parallel_infer.yml or dstest5_msgconv_sample_config.yml, because after I rolled back the settings in these two files, the program ran without any problem.
By the way, I want to distinguish which device a Kafka message comes from when configuring multiple sources. Should I just modify the id content in dstest5_msgconv_sample_config.yml, as below?
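Something like this, assuming the YAML file mirrors the classic per-sensor layout of dstest5_msgconv_sample_config, where sensor0 corresponds to source 0, sensor1 to source 1, and so on (the id values are placeholders I made up):

```yaml
# Sketch: give each source a distinct id so the Kafka consumer can tell
# the devices apart. sensorN is matched to source N by nvmsgconv.
sensor0:
  enable: 1
  type: Camera
  id: entrance-cam-0        # placeholder id for source 0
  description: Entrance Camera
sensor1:
  enable: 1
  type: Camera
  id: parking-cam-1         # placeholder id for source 1
  description: Parking Lot Camera
```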
Also, I changed the app name in the Makefile to yrvideo.
I found that this error occurs when I modify the settings file source4_1080p_dec_parallel_infer.yml. It seems the file cannot be parsed correctly after my changes.
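The part I changed looks roughly like the sketch below (reproduced from memory, so the ids are illustrative). One thing I am checking is that YAML is indentation-sensitive: the parser rejects the whole file if a tab is used for indentation or the space after a colon is missing:

```yaml
# Illustrative branch sections of source4_1080p_dec_parallel_infer.yml.
# Indentation must use spaces, and every key needs a space after its colon.
branch0:
  pgie-id: 1        # which primary GIE this branch uses (illustrative)
  src-ids: 0;1      # which sources feed this branch (illustrative)
branch1:
  pgie-id: 2
  src-ids: 0;1
```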
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Yes. What did you modify in the file source4_1080p_dec_parallel_infer.yml? You can also debug it yourself with the code I attached before.