- Having trouble installing it on Jetson Nano

Has anyone tried installing [detectron2](https://github.com/facebookresearch/detectron2) on a Jetson Nano device? I am unable to install detectron2; the build fails while installing a dependency called [grpcio](https://pypi.org/project/grpcio/).

Specs:
Ubuntu 18.04
[Archiconda](https://github.com/Archiconda/build-tools) - conda 4.5.12

What version of PyTorch are you running? (I don’t know whether the PyTorch version is relevant.)
I got it working in a Docker container on a Nano and on a Xavier with PyTorch 1.3 and torchvision 0.4.2.

Hi, thank you for the reply. I got it working with the default Python instead of Archiconda.
One question: what was the max fps that you got with it?

I’m impressed you are talking about fps.

I only got around 2.7 seconds per image for object detection on the Nano using the faster_rcnn_R_50_FPN_3x model, after some fine-tuning (transfer learning) with a custom image set and 2 classes.
On the Xavier this drops to 0.7 seconds per image.
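For reference, here is a minimal sketch of how per-image latency numbers like these can be measured. `run_inference` is a hypothetical stand-in for the actual model call (e.g. a detectron2 `DefaultPredictor`), not the real API:

```python
import time

def run_inference(image):
    # Hypothetical stand-in for the real detector call,
    # e.g. predictor(image) with a detectron2 DefaultPredictor.
    return {"instances": []}

def seconds_per_image(images, warmup=1):
    """Average seconds per image, skipping warm-up runs
    (the first call is often much slower due to lazy init)."""
    for img in images[:warmup]:
        run_inference(img)
    start = time.perf_counter()
    for img in images[warmup:]:
        run_inference(img)
    elapsed = time.perf_counter() - start
    return elapsed / max(len(images) - warmup, 1)

latency = seconds_per_image([object()] * 10)
```

Averaging over a batch and discarding warm-up iterations matters on the Jetson boards, where the first inference after loading a model can take far longer than steady-state.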

For my application this was enough, because it only needs to run inference on a sample frame every few seconds, but of course for any kind of streaming video application this is not usable.

What kind of speed have you been able to reach?

We were able to optimize the model to 1.8 fps (~0.6 s per image) on the Jetson Nano with RetinaNet-50. But this wasn’t enough, as the requirement is to get a model running at ~10 fps. Porting it to TensorRT via ONNX was taking too long, so we had to switch because of time constraints.
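The fps and seconds-per-image figures quoted in this thread are just reciprocals of each other, which makes the gap to the target easy to see:

```python
def fps_to_latency(fps):
    """Seconds per image for a given frame rate."""
    return 1.0 / fps

def latency_to_fps(seconds_per_image):
    """Frame rate for a given per-image latency."""
    return 1.0 / seconds_per_image

# 1.8 fps works out to about 0.56 s per image
# (rounded to ~0.6 s in the post above).
achieved = fps_to_latency(1.8)

# The ~10 fps requirement corresponds to ~0.1 s per image,
# i.e. roughly a 5-6x speedup over the achieved latency.
target = fps_to_latency(10)
```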

We have dropped Detectron and are currently exploring other models with the TensorFlow Object Detection API (MobileNetV2).

Hi thapasushil,

0.6 s per image on a Nano is already impressive. Did you achieve this by simply switching to RetinaNet-50, or do you have any other optimising tips that you can share? Did you get similar accuracy compared to ResNet-50?
I’m also glad I’m not the only one who is struggling with porting to TensorRT through ONNX :-)

I suppose using MobileNetV2 on TensorFlow/TensorRT will yield much better speed results, and I’m very curious to hear about your experiences with respect to accuracy compared to detectron2.

I actually meant ResNet-50 as the backbone of a RetinaNet model. RetinaNet actually performed better than Faster R-CNN for our use case. We had to resize our input images and optimize the pre- and post-processing on top of the model to get that speed.
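To illustrate the kind of input resizing meant here, below is a pure-Python nearest-neighbour downscale; the resolutions are made-up examples, and a real pipeline would use an image library (OpenCV, PIL) rather than this sketch:

```python
def resize_nearest(pixels, new_w, new_h):
    """Nearest-neighbour resize of an image given as a
    list of rows (any pixel type: int, tuple, ...)."""
    old_h = len(pixels)
    old_w = len(pixels[0])
    return [
        [pixels[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

# Example sizes only: halving each dimension (1280x720 -> 640x360)
# quarters the pixel count the detector has to process.
frame = [[(x + y) % 256 for x in range(1280)] for y in range(720)]
small = resize_nearest(frame, 640, 360)
```

Since detector runtime scales roughly with input area, shrinking frames before inference is usually the cheapest latency win, at some cost in small-object accuracy.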

Do you remember the Docker container that you were able to run on the Jetson with GPU support? I had trouble running nvcr.io/nvidia/pytorch:19.10-py3, which is really getting on my nerves now.

We haven’t run accuracy benchmarks, but TensorRT with SSD MobileNet is fast enough for our use case.

Are you sure the Docker image you are trying to use is for the Jetson (ARM) platform? A lot of the Docker images on NGC are for x86/amd64 platforms.
Jetson Docker images need to be based on nvcr.io/nvidia/l4t-base:r32.3.1.

I have also updated the default Docker installation from the JetPack SDK to Docker 19.03, which allows you to use the standard `--gpus all` option when you run the container. This option is, as far as I know, not yet available in docker-compose.