Use nvstreammux after nvinfer

I’m implementing an image classifier that handles 6 cameras concurrently.
I want to run inference for 3 of them on DLA0 and for the other 3 on DLA1 of my Jetson Orin NX. The GStreamer pipeline must then converge into my final application.
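Roughly, this is the shape of the pipeline I’m trying to build (a minimal sketch only: element properties and the config file names are placeholders, the camera source bins are omitted, and the final fakesink stands in for my application):

```python
#!/usr/bin/env python3
# Sketch of the intended topology (not a working pipeline):
#   cameras 0-2 -> nvstreammux -> nvinfer (DLA0) \
#                                                 -> nvstreammux -> my application
#   cameras 3-5 -> nvstreammux -> nvinfer (DLA1) /
# The second nvstreammux stage is where it fails for me.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

desc = """
    nvstreammux name=mux_out batch-size=6 width=1920 height=1080 ! fakesink

    nvstreammux name=mux0 batch-size=3 width=1920 height=1080 !
        nvinfer config-file-path=config_dla0.txt ! queue ! mux_out.sink_0

    nvstreammux name=mux1 batch-size=3 width=1920 height=1080 !
        nvinfer config-file-path=config_dla1.txt ! queue ! mux_out.sink_1
"""
# Each camera source bin would be linked to mux0.sink_0..2 / mux1.sink_0..2.
pipeline = Gst.parse_launch(desc)
```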

I’m having issues with this pipeline configuration. In more detail:

  • I can merge multiple sources with nvstreammux and send the batched stream to nvinfer.

  • I can run the inference on DLA0 and DLA1 (see the config sketch after this list).

  • I CANNOT merge the two streams with a second nvstreammux after the inference.
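For reference, this is roughly how I select the DLA core for each nvinfer instance; to my understanding it is done in the nvinfer config file rather than on the GStreamer element. The model path and the other keys below are placeholders, not a complete classifier configuration:

```python
# Minimal sketch of how I pin each nvinfer instance to a DLA core: the
# selection happens in the nvinfer config file via enable-dla / use-dla-core.
# Everything else here (model path, batch size, precision) is a placeholder.
from pathlib import Path

COMMON = """[property]
gpu-id=0
onnx-file=classifier.onnx
batch-size=3
# 2 = FP16 (DLA supports FP16/INT8)
network-mode=2
# 1 = classifier
network-type=1
enable-dla=1
"""

Path("config_dla0.txt").write_text(COMMON + "use-dla-core=0\n")
Path("config_dla1.txt").write_text(COMMON + "use-dla-core=1\n")
```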

Am I missing something, is there a solution, or is it impossible?
The last option seems to be suggested by this other post,

which by the way was slightly different, leaving room to hope that the error is mine.

Do you mean you can run DLA0 and DLA1 with one nvinfer? How?

What do you mean?

For your case, the parallel pipeline is recommended: NVIDIA-AI-IOT/deepstream_parallel_inference_app — a project demonstrating how to use nvmetamux to run multiple models in parallel (github.com).
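The rough shape of that approach, as I read the repo: one nvstreammux batches all the sources, a tee feeds one inference branch per DLA, and the nvdsmetamux plugin built from that project merges the branch metadata back into a single stream. The sketch below assumes the element and pad names used in the repo; the metamux configuration and the per-branch source selection come from its sample configs:

```python
# Sketch of the parallel-inference topology, as I understand the repo:
# nvstreammux batches all six sources, tee feeds two inference branches
# (one per DLA), and nvdsmetamux -- the plugin built from that project --
# merges the branch metadata back into one stream. Pad names, the metamux
# configuration and which sources each branch infers on all come from the
# repo's sample configs; treat this as a sketch, not tested code.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

desc = """
    nvdsmetamux name=meta ! fakesink

    nvstreammux name=mux batch-size=6 width=1920 height=1080 ! tee name=t

    t. ! queue ! nvinfer config-file-path=config_dla0.txt ! meta.sink_0
    t. ! queue ! nvinfer config-file-path=config_dla1.txt ! meta.sink_1
"""
pipeline = Gst.parse_launch(desc)
```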

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
