• Hardware Platform: Jetson
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.0
• Issue Type: Questions
I’m currently developing an IVA application for the Jetson.
I want to use DeepStream to fully utilize the underlying hardware. The application will consist of a few CV tasks, some of which are independent of each other.
- I want to split the pipeline into branches that can be processed independently, but in the end I want the results to be assigned back to the corresponding frame
- Not every frame must be inferred - if the pipeline is overloaded, older frames may be dropped
- I’d like to draw the inference results and expose them on an output RTSP stream
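To make the requirements above concrete, here is roughly the shape I had in mind for one branch point, written gst-launch style (config file paths, resolutions, and queue sizes are placeholders, not tested values): a tee splits the stream after the tracker, each analysis branch gets a leaky queue so older frames can be dropped when that branch falls behind, and one branch draws results with nvdsosd before encoding for the RTSP output.

```
rtspsrc location=rtsp://<camera> ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0
nvstreammux name=mux batch-size=1 width=1280 height=720 !
  nvinfer config-file-path=detector.txt ! nvtracker ! tee name=t

# leaky=downstream drops the oldest buffers when the queue is full
t. ! queue leaky=downstream max-size-buffers=4 ! nvinfer config-file-path=classifier_a.txt ! fakesink
t. ! queue leaky=downstream max-size-buffers=4 ! nvinfer config-file-path=scene_classifier.txt ! fakesink

# render branch: draw metadata, encode, feed an external RTSP server (e.g. via udpsink)
t. ! queue ! nvvideoconvert ! nvdsosd ! nvv4l2h264enc ! rtph264pay ! udpsink
```

My understanding is that tee branches share the same GstBuffer and NvDsBatchMeta, so metadata attached in one branch is visible to the others - but I’m not sure how to safely synchronize concurrent branches writing metadata, or how to re-join branch results per frame. Is this sketch even the right direction?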
The main flow:
```
[RTSP] ---> [Detector] ---> [Tracker] ---> [First classifier]
                               |
                               +-> [Additional processing (pushes a different buffer downstream)] ---> [Pose estimation] ---> [Classifier]
                               |
                               +-> [Cascade detector] ---> [Classifier]
                               |
                               +-> [Scene classifier]
```
*Each component will produce metadata in a custom format (every component operates in place)
** Each component will be working with batches that come from multiple camera streams
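For the multi-camera batching I’m assuming the usual nvstreammux pattern, one sink pad per source (URIs, batch size, and resolution below are placeholders):

```
rtspsrc location=rtsp://<cam0> ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0
rtspsrc location=rtsp://<cam1> ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_1
nvstreammux name=mux batch-size=2 batched-push-timeout=40000 width=1280 height=720 ! ...
```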
I was looking at the nvinfer component and saw that it has an option to run classifiers in asynchronous mode. I’m wondering whether that fits this use case: with asynchronous inference, how can I ensure that inference has completed for every frame that goes into the output pipeline?
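By asynchronous mode I mean, as I read the Gst-nvinfer docs, the classifier-async-mode property on a secondary nvinfer that operates on tracked objects, e.g. (config path is a placeholder):

```
... ! nvtracker ! nvinfer config-file-path=secondary_classifier.txt classifier-async-mode=1 ! ...
```

My understanding is that in this mode nvinfer pushes the buffer downstream without waiting, and the classification result is attached to the tracked object’s metadata when ready, possibly on a later frame - which is exactly why I’m unsure how to guarantee results are present for the frame being rendered.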
Could you please advise how I should build this workload with DeepStream to ensure concurrent execution of the independent tasks?