I find DeepStream really promising for large-scale real-time video analytics; however, I have a few doubts I would like to clarify before jumping in. I am very new to GPUs, CUDA, and TensorRT, so some of my questions might sound basic, but it would be great if someone could help me out.
- In the object detection plugin, is it possible to use newer models such as YOLOv4, at least with some additional coding? I was able to run YOLOv4 with TensorRT, but I am not sure about the limitations when using DeepStream.
- For object tracking, in addition to the reference algorithms, can we implement our own algorithm, for example one based on an LSTM model?
- Since the documentation only ever mentions object detectors and classifiers, I just want to confirm whether we can also use a neural network for regression problems with the nvinfer plugin.
- Is it possible to write a completely custom building block that DeepStream does not support directly, for example a classical computer vision algorithm using CUDA, OpenCV, or any other library, and plug it wherever we want in the middle of the pipeline?
- If such a custom block is possible, can we choose to run it on either the CPU or the GPU?
- Can we choose the GPU device (when using multiple GPUs) for each video stream, and can we customize the pipeline so that specific blocks run only for a particular video stream?
- Is it possible to process all the video streams in parallel despite all the modifications mentioned above?
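For context on the custom-model and regression questions, my understanding from the samples is that a custom detector is wired into nvinfer through its config file, roughly like the sketch below. The engine file path, library path, and parser function name are just placeholders for whatever custom bounding-box parser one would compile, so please correct me if I have the mechanism wrong:

```
[property]
gpu-id=0
# engine built separately with TensorRT, e.g. from a YOLOv4 ONNX export
model-engine-file=yolov4_fp16.engine
batch-size=1
network-mode=2
num-detected-classes=80
# custom output parsing compiled into a shared library (placeholder names)
custom-lib-path=libnvds_custom_yolo.so
parse-bbox-func-name=NvDsInferParseCustomYoloV4
```

For a regression network, I have seen `network-type=100` together with `output-tensor-meta=1` mentioned as a way to get the raw output tensors attached as metadata instead of detections; is that the intended approach?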
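Similarly for the tracking question, I assume a custom tracker would be plugged in by pointing nvtracker at one's own low-level tracker library, something like the fragment below. The `.so` path and YAML file are hypothetical; the library would have to implement the low-level tracker API:

```
[tracker]
tracker-width=640
tracker-height=384
gpu-id=0
# hypothetical custom low-level tracker library (e.g. wrapping an LSTM model)
ll-lib-file=/opt/nvidia/deepstream/lib/libmy_lstm_tracker.so
ll-config-file=my_lstm_tracker_config.yml
```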
So overall, I am hoping for clarification on whether DeepStream is limited to changing the configuration of existing templates, or whether we have full control to build our own system.
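To make that concrete, what I am hoping is possible is a pipeline sketch like the following, mixing standard DeepStream elements with a custom one and pinning elements to a particular GPU via their `gpu-id` property. Here `mycustomfilter` is a made-up name standing in for my own GStreamer plugin, and the file paths are placeholders:

```
gst-launch-1.0 \
  uridecodebin uri=file:///videos/cam0.mp4 ! m.sink_0 \
  uridecodebin uri=file:///videos/cam1.mp4 ! m.sink_1 \
  nvstreammux name=m batch-size=2 width=1280 height=720 gpu-id=1 ! \
  nvinfer config-file-path=detector_config.txt gpu-id=1 ! \
  mycustomfilter ! \
  nvmultistreamtiler gpu-id=1 ! nvvideoconvert gpu-id=1 ! nveglglessink
```

Is building pipelines at this level of freedom supported, or am I misunderstanding how the elements are meant to be composed?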
Please be kind enough to clarify at least some of these doubts; it would help us a lot. Thanks in advance.