Hi,
I’m wondering how it is possible to “pre-process 1850MP/s, perform inferencing with ResNet-based detection, and visualize each frame in just over 1 millisecond” on Jetson Xavier,
when inferencing on even a single 300x300 image with, say, SSD+MobileNet and TensorRT takes 10-20 ms?
Thanks.
Hi,
In DS 3.0, we demonstrate decoding of 4 streams of 720p30, and decoding of 30 streams of 720p30 with ResNet-based inference. I am not sure whether your requirement can be achieved for your use case. You may need to modify the config file and give it a try; a sketch of the relevant entries follows.
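For reference, here is a minimal sketch of the deepstream-app config entries that control the stream count and inference batch size. The group and property names are the ones used in the DeepStream sample configs; the URI and paths are illustrative placeholders, not values taken from this thread:

[source0]
enable=1
# type=3 selects a URI source; num-sources duplicates it to simulate many streams
type=3
uri=file://../../streams/sample_720p.mp4
num-sources=30

[streammux]
# batch-size should match the total number of muxed streams
batch-size=30
width=1280
height=720

[primary-gie]
enable=1
batch-size=30
config-file=config_infer_primary.txt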
I’m not quite sure which config file you mean.
I’m talking about this video: https://youtu.be/vuFo7TBisbI. It demonstrates an inferencing throughput of 1850 MP/s, which corresponds to 30 streams of 1080p at 30 fps (1920 × 1080 × 30 fps × 30 streams ≈ 1866 MP/s), and states a processing time of “just over 1 ms”.
Meanwhile, running the TensorRT SSD+MobileNet sample takes more than 10 ms for a single 300x300 image.
Am I missing something?
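One plausible reading of the gap (an assumption on my part, not something confirmed in the thread): the demo runs a small ResNet-10 detector in INT8 at batch size 30 — the engine name resnet10.caffemodel_b30_int8.engine that appears later in this thread encodes exactly that — so “just over 1 ms” is a per-frame time amortized over the batch, whereas the SSD+MobileNet figure is single-image FP32 latency. trtexec, which ships with TensorRT, can show the batching effect; the prototxt/caffemodel paths and the output blob name below are illustrative:

# batch 1, FP32 — comparable to timing a single image
./trtexec --deploy=resnet10.prototxt --model=resnet10.caffemodel --output=conv2d_bbox --batch=1

# batch 30, INT8 — divide the reported per-batch time by 30 for the per-frame cost
./trtexec --deploy=resnet10.prototxt --model=resnet10.caffemodel --output=conv2d_bbox --batch=30 --int8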
Okay, so I’m getting close. I ran deepstream-app -c and then got the following:
buf_convert: Wrong src surface count in ConvertBuffer
Warning. Could not open model engine file /home/keith/Documents/DeepStreamSDK-Jetson-3.0_EA_beta5.0/deepstream_sdk_on_jetson/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_int8.engine
The DeepStream window pops up (black), then closes within a few seconds.
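A note on that warning (my reading, based on documented nvinfer behavior, not on anything stated later in this thread): the missing .engine file is non-fatal. When the serialized engine cannot be opened, nvinfer rebuilds it from the caffemodel/prototxt named in the inference config, which can take several minutes on the first run; the rebuilt engine is cached at that path if it is writable. The relevant entries of config_infer_primary.txt look roughly like this (the property names are nvinfer’s; the exact paths and calibration file name in the shipped sample may differ):

[property]
# network definition used to (re)build the engine when the .engine file is missing
proto-file=../../models/Primary_Detector/resnet10.prototxt
model-file=../../models/Primary_Detector/resnet10.caffemodel
# serialized engine; written back here after the first build if the path is writable
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b30_int8.engine
# network-mode=1 selects INT8, which requires the calibration file
network-mode=1
int8-calib-file=../../models/Primary_Detector/cal_trt.bin
batch-size=30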