DeepStream on Jetson Nano - object detection

I found this video showing object detection inference running on eight 1080p streams in real time on a Jetson Nano:

I’m wondering how this was achieved, because it is hard to believe that this isn’t mocked: SSD-MobileNet-v2 runs at 40 FPS at 300x300 resolution on a SINGLE video.

Could someone please explain in detail how this video was made and how to reproduce it?

Hi Ivan, the DeepStream demo isn’t using SSD-MobileNet-v2. It is using a ResNet-based network with custom object detection layers that has been pruned with the NVIDIA Transfer Learning Toolkit, running on TensorRT at runtime. It will be available when the DeepStream release for Nano is made later in June/July.

For more information, I recommend posting to the DeepStream on Jetson forum to get feedback from our DeepStream developers.

In the meantime, you can give this a go: https://devtalk.nvidia.com/default/topic/1052315/jetson-nano/python-wrapper-for-tensorrt-implementation-of-yolo-currently-v2-/

It is beta, but it works.

It can do 20 FPS on the Nano.

Hi moshe.livne,

Thanks for sharing.

Has DeepStream been released for the Nano yet?

Hi spasyakpaul, we are finishing up the final preparations and DeepStream 4.0 should be released soon - stay tuned.

Good to know. Thanks.

Hi gentlemen, I was one of the very first buyers of the Jetson Nano because of its promised DeepStream capabilities; the community has been waiting for it for a while ;)
Any realistic date? Please…, realistic :)
Thanks!

Hi all, DeepStream 4.0 was released yesterday (including support for Nano), see here for more info: https://developer.nvidia.com/deepstream-sdk

This is amazing news!

The DeepStream container does not support video encode, CSI camera input, or DLA.

Two out of three of those are things I absolutely need, and the fact that you have to bind-mount some libraries makes it a hard pass. I know I was asking for Docker before, but if the support is not there, is it possible to obtain the DeepStream GStreamer plugins without Docker? All I really need is nvinfer.

Nevermind. I found it.

Thank you so much for including the plugin source! Very very happy dog here.
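For anyone else who ends up here: once the native package is installed, nvinfer can be exercised straight from gst-launch. Below is only a sketch of a single-stream pipeline; the install prefix, sample stream, and config file name are assumptions based on the DeepStream 4.0 sample layout, so adjust them to your setup.

```shell
# Sketch only - assumes DeepStream 4.0 is installed natively under
# /opt/nvidia/deepstream/deepstream-4.0 (adjust paths for your install).
DS=/opt/nvidia/deepstream/deepstream-4.0

# Decode one H.264 file, batch it through nvstreammux, run inference with
# nvinfer, draw the detections with nvdsosd, and display on screen.
gst-launch-1.0 \
  filesrc location=$DS/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=$DS/samples/configs/deepstream-app/config_infer_primary.txt ! \
  nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
```

On Jetson, nveglglessink wants an nvegltransform in front of it; scaling the mock demo up to multiple streams is a matter of adding more sources into nvstreammux and raising batch-size.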

I have a Jetson Nano and my image was created a few days ago.

desktop:~/Downloads$ sudo apt-get install ./deepstream-4.0_4.0-1_arm64.deb 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Note, selecting 'deepstream-4.0' instead of './deepstream-4.0_4.0-1_arm64.deb'
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 deepstream-4.0 : Depends: libnvinfer5 (>= 5.1.2) but 5.0.6-1+cuda10.0 is to be installed
                  Depends: libnvinfer-dev (>= 5.1.2) but 5.0.6-1+cuda10.0 is to be installed
E: Unable to correct problems, you have held broken packages.

Hi klebermagno, from the TensorRT 5.0.6 version that you appear to have installed, it seems you are still running the JetPack 4.2 image. Are you sure you downloaded the JetPack 4.2.1 image from here: https://developer.nvidia.com/jetson-nano-sd-card-image-r322
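The apt failure above is just a version-ordering problem: deepstream-4.0 declares a dependency on libnvinfer5 >= 5.1.2 (the TensorRT shipped with JetPack 4.2.1), while the JetPack 4.2 image ships 5.0.6. As an illustration, the ordering apt applies can be reproduced with `sort -V`:

```shell
# 5.0.6 (TensorRT on JetPack 4.2) sorts before the required minimum 5.1.2,
# so apt reports the dependency as unmet until the newer image is flashed.
printf '5.0.6\n5.1.2\n' | sort -V | head -n1
# -> 5.0.6
```

Since TensorRT on Jetson is tied to the L4T release, the fix is reflashing with the JetPack 4.2.1 SD card image rather than trying to upgrade libnvinfer5 in place.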

Thank you, this works!

How do I start learning DeepStream 4.0 using the DLI Nano image?
How do I manually install each component of JetPack 4.2.1 on top of the DLI Nano image?
Thank you very much in advance.

Have you installed TensorRT 5.1.5 and CUDA 10.1?