How to get a RetinaNet model (ONNX, pb, ...)?

Hi,
I’m currently trying to get my hands on a version of the RetinaNet model in any format. Is that possible?

The best would be an ONNX model but a pb file would suit too.

I just need a file with the architecture (the layers) to import and train.

I tried using retinanet-examples, but it seems I can only train a model by choosing a backbone; I can’t get the full architecture as a model file prior to training.
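
For example, something along the lines of the sketch below is what I am after. It is only an assumption about the workflow (it uses torchvision’s RetinaNet and an arbitrary input size, not this repository’s model), but it shows the kind of untrained-architecture export I mean:

import torch
import torchvision

# Untrained RetinaNet architecture (ResNet-50 FPN backbone), no COCO weights.
# Note: this is torchvision's RetinaNet, not the NVIDIA retinanet-examples model.
model = torchvision.models.detection.retinanet_resnet50_fpn(
    pretrained=False, pretrained_backbone=False)
model.eval()

# Export the graph so the layers can be inspected or imported elsewhere.
# The 800x800 input size is arbitrary; opset 11 is needed for the detection ops.
dummy = torch.randn(1, 3, 800, 800)
torch.onnx.export(model, dummy, "retinanet_untrained.onnx", opset_version=11)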

Thank you in advance

Hi,

Do you want to run inference with the model on Jetson, or just train it for customized usage?
We do have a TensorRT sample that can convert RetinaNet into TensorRT for fast inference:
https://github.com/NVIDIA/retinanet-examples
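
The sample takes care of the conversion for you, but for reference, a plain ONNX file can also be turned into a TensorRT engine with the Python API. This is only a generic sketch (no custom plugins, hypothetical file name), not the exact path the sample uses:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    # An explicit-batch network is required when parsing ONNX with TensorRT 7.
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(explicit_batch) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 30  # 1 GB scratch space
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        return builder.build_cuda_engine(network)

engine = build_engine("retinanet.onnx")  # hypothetical file name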

Thanks.

Hi,

I need the model so I can train it for customized usage, not an already-trained inference model.

Maybe I’m misunderstanding, but it seemed to me that I could get the model architecture (in whatever format) before training, in order to visualize the different layers. That is what I need.

Thank you

Hi,

Please note that Jetson is not an ideal device for training, since it is designed for inference.
Do you want to run a training job on the TX2?

For visualization, many frameworks offer a similar feature, e.g. TensorBoard.
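
For example, once you export an ONNX file, the onnx Python package can already print every layer of the graph (small sketch, the file name is hypothetical):

import onnx

# Load the exported model and verify it is well formed.
model = onnx.load("retinanet.onnx")  # hypothetical file name
onnx.checker.check_model(model)

# Print a human-readable list of all nodes/layers in the graph.
print(onnx.helper.printable_graph(model.graph))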

Thanks.

Hi AastaLLL,

Yes, I know; I won’t do the training on the Jetson. That’s why I need the ONNX model, so I can import it on another device.

I’ll actually use MATLAB Server to do the training, but to do so I need the model file so I can import it into MATLAB and set up all the training options.

My question isn’t about the tools, but rather about where I can find the ONNX file or the pb file in your retinanet-examples GitHub repository. Do I need to compile it to access it?

If so, I would probably compile it on the Jetson, bring the model into MATLAB, do the training, and then optimize with TensorRT. Does that sound feasible to you?

Thanks

Hi,

Are you looking for the pre-trained model of the RetinaNet sample?
It uses the COCO model for fine-tuning:
https://github.com/NVIDIA/retinanet-examples/blob/master/TRAINING.md#fine-tuning
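
Outside of the sample’s own command line, fine-tuning from COCO weights conceptually looks like the sketch below in plain PyTorch/torchvision. This is only an illustration (hypothetical class count, not the retinanet-examples code path):

import torchvision
from torchvision.models.detection.retinanet import RetinaNetHead

# Start from COCO-pretrained weights.
model = torchvision.models.detection.retinanet_resnet50_fpn(pretrained=True)

# Swap the head for a custom number of classes; the backbone keeps its
# COCO-learned features and is fine-tuned on the new dataset.
num_classes = 3  # hypothetical: your own dataset's class count
in_channels = model.backbone.out_channels
num_anchors = model.anchor_generator.num_anchors_per_location()[0]
model.head = RetinaNetHead(in_channels, num_anchors, num_classes)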

Thanks.

Hi,

I’ll probably test both ways (pre-trained and from scratch).

I just need an ONNX file, or even an h5 file of the frozen graph, for RetinaNet (pre-trained or not).

Thanks

Hi,

You can follow the document shared in comment #6 to fine-tune or fully train a customized model.
Thanks.

Hi guys,

I’m trying to install retinanet-examples on my Ubuntu 16.04 host, but I’m running into several issues. I don’t know how to use Docker, so I just pip-installed the package in my virtualenv, and when I try to check the different modules, here is what I get:

>>> from retinanet import infer
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/retinanet-0.1-py3.5-linux-x86_64.egg/retinanet/infer.py", line 13, in <module>
    from .model import Model
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/retinanet-0.1-py3.5-linux-x86_64.egg/retinanet/model.py", line 8, in <module>
    from ._C import Engine
ImportError: /home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/retinanet-0.1-py3.5-linux-x86_64.egg/retinanet/_C.so: undefined symbol: _ZN2cv6imreadERKNS_6StringEi
>>> from retinanet import train
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/retinanet-0.1-py3.5-linux-x86_64.egg/retinanet/train.py", line 14, in <module>
    from .infer import infer
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/retinanet-0.1-py3.5-linux-x86_64.egg/retinanet/infer.py", line 13, in <module>
    from .model import Model
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/retinanet-0.1-py3.5-linux-x86_64.egg/retinanet/model.py", line 8, in <module>
    from ._C import Engine
ImportError: /home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/retinanet-0.1-py3.5-linux-x86_64.egg/retinanet/_C.so: undefined symbol: _ZN2cv6imreadERKNS_6StringEi
>>> from retinanet import utils
>>> from retinanet.model import Model
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/retinanet-0.1-py3.5-linux-x86_64.egg/retinanet/model.py", line 8, in <module>
    from ._C import Engine
ImportError: /home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/retinanet-0.1-py3.5-linux-x86_64.egg/retinanet/_C.so: undefined symbol: _ZN2cv6imreadERKNS_6StringEi
>>> from retinanet._C import Engine
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: /home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/retinanet-0.1-py3.5-linux-x86_64.egg/retinanet/_C.so: undefined symbol: _ZN2cv6imreadERKNS_6StringEi
>>> from retinanet import backbones
>>>

I don’t get why some of the modules I imported are available while others produce this strange error:

ImportError: /home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/retinanet-0.1-py3.5-linux-x86_64.egg/retinanet/_C.so: undefined symbol: _ZN2cv6imreadERKNS_6StringEi

I’ve already spent quite some time looking into it, but I can’t figure out what the problem is.
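
For reference, the missing symbol demangles to cv::imread(cv::String const&, int), so I assume the compiled extension cannot resolve OpenCV at load time. A quick check like this (path copied from my setup) lists what _C.so links against:

import subprocess

so_path = ("/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/"
           "retinanet-0.1-py3.5-linux-x86_64.egg/retinanet/_C.so")

# Show the shared libraries the extension expects; the OpenCV ones
# (e.g. libopencv_imgcodecs) should be listed and resolved here.
print(subprocess.check_output(["ldd", so_path]).decode())

# Demangle the missing symbol to confirm it belongs to OpenCV.
print(subprocess.check_output(
    ["c++filt", "_ZN2cv6imreadERKNS_6StringEi"]).decode())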

Here is my setup:

  • Ubuntu 16.04
(cv) greenshield@greenshield-Precision-Tower-3620:~$ nvidia-smi 
Tue Mar 10 11:08:47 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01    Driver Version: 440.33.01    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro M4000        Off  | 00000000:01:00.0 Off |                  N/A |
| 46%   34C    P8    11W / 120W |    208MiB /  8125MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1223      G   /usr/lib/xorg/Xorg                           160MiB |
|    0      3438      G   compiz                                        43MiB |
+-----------------------------------------------------------------------------+
(cv) greenshield@greenshield-Precision-Tower-3620:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_19:24:38_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89

  • TensorFlow-GPU 1.15.0
  • TensorRT 7.0.0-1+cuda10.2 (even though “import tensorrt” fails with a “no module” error, dpkg -l | grep TensorRT shows it is installed)

Thank you in advance

Never mind, I found the problem.

I still have a question about data augmentation: the training script seems to apply 4-5 augmentation techniques, so if I have around 500 images, is that effectively like training on 2000-2500 images, or should I do the augmentation myself beforehand?
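
To clarify what I mean: I assume the script applies the augmentations on the fly, something like the rough sketch below (classification-style torchvision transforms, not the actual retinanet-examples pipeline; detection transforms would also have to update the boxes), so every epoch sees freshly transformed copies of the same 500 images rather than a pre-expanded 2000-2500 image set:

import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

# Random transforms are re-sampled every time an image is loaded, so each
# epoch effectively sees new variations of the same images on the fly.
train_transforms = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.ColorJitter(brightness=0.2, contrast=0.2),
    T.RandomResizedCrop(512, scale=(0.8, 1.0)),
    T.ToTensor(),
])

dataset = ImageFolder("path/to/my_500_images", transform=train_transforms)  # hypothetical path
loader = DataLoader(dataset, batch_size=8, shuffle=True)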

Thanks