Does DeepStream offer a solution for running multiple streams with a TLT model?

DeepStream ships with a default ResNet10 model that is proprietary, so we cannot fine-tune it.

TLT lets us use models that can be fine-tuned; however, they are all bigger models.

As an example, the traffic-cam model runs at about 17 fps on the Jetson Nano.
This basically means we will only be able to run one camera stream in deepstream-app at about 15 fps.
What if we want to run multiple cameras/streams?
For example, I would like to run 6 RTSP cameras.
If the TLT model can only do 17 fps, what happens when the batch size is 6 for 6 streams?
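For context, this is roughly what a multi-stream setup looks like in a deepstream-app configuration file. This is only a sketch: the RTSP URIs and the nvinfer config file name are placeholders, not values from this thread.

```ini
# deepstream-app config sketch for 6 batched RTSP sources (placeholder values).
[source0]
enable=1
type=4                 # 4 = RTSP source
uri=rtsp://camera0.local/stream
num-sources=1

# ... repeat [source1] through [source5] for the remaining five cameras ...

[streammux]
enable=1
batch-size=6           # one frame from each of the 6 streams per batch
width=1280
height=720
batched-push-timeout=40000

[primary-gie]
enable=1
batch-size=6           # should match the muxer batch size
config-file=config_infer_primary_trafficcam.txt   # placeholder nvinfer config
```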

From what I can see in the performance metrics, the Jetson Nano is not really suitable for handling multiple streams with a TLT model.

I believe 17 fps should already fully utilize the Nano's 0.5 TOPS GPU if you are using FP16. If you then set batch-size=6 for 6 streams, each stream should get ~3 fps (= 17 fps / 6 streams).
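The arithmetic above can be sketched as follows. This is a rough model only: it assumes the GPU inference budget is shared evenly across the batched streams and ignores decode and muxing overhead.

```python
def per_stream_fps(model_fps: float, num_streams: int) -> float:
    """Rough per-stream throughput when a single-stream inference budget
    is split evenly across batched streams (overheads ignored)."""
    return model_fps / num_streams

# 17 fps single-stream budget split across 6 streams:
print(round(per_stream_fps(17, 6), 1))  # ~2.8 fps per stream
```

In practice the real number will be a little lower, since batching six decoders and the stream muxer also cost some compute.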

TLT already applies many optimizations to the models, so it is hard to get much higher fps out of the Nano.

Ok so basically:

If you want to handle more than one stream, the Jetson Nano is simply not powerful enough.

The Jetson Nano is only an option if you are happy with the standard ResNet10 model, which cannot be fine-tuned.

On the Nano, the GPU compute capability is normally the bottleneck, so how many streams you can run depends heavily on the model. Heavy models like YOLOv3 or YOLOv4 may only reach a low fps, while lighter models like ResNet10 or DetectNet_v2 can reach a higher fps.
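To compare models yourself, you swap the detector by pointing the primary GIE at a different nvinfer config file; a minimal sketch (the file names are placeholders):

```ini
# Swap the primary detector by changing the nvinfer config it points to.
[primary-gie]
enable=1
batch-size=1
#config-file=config_infer_primary_yolov3.txt     # heavy model: expect low fps on Nano
config-file=config_infer_primary_resnet10.txt    # light model: higher fps
```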

Jetson Nano: Deep Learning Inference Benchmarks | NVIDIA Developer includes some benchmark data.

Thanks!