jetson-inference facenet testing & Other models testing

Hi,

I am testing the facenet-120 model with detectnet-camera from jetson-inference.
Instead of the facenet-120 snapshot (snapshot_iter_24000.caffemodel), I want to detect faces with Res10_300x300_SSD_iter_140000.caffemodel, a ResNet-based SSD model that I retrained.
However, the NvCaffe parser fails with the error "could not parse layer type Normalize".
Please let me know whether I have approached this the wrong way.
Also, please let me know whether it is only possible to use Caffe models trained with DetectNet.

Hi,

jetson-inference runs your model with the TensorRT library.
So a model can be executed with jetson-inference only if it is fully supported by TensorRT.

Unfortunately, SSD is not a fully supported model, and you will need to integrate several plugins to enable it.
We also have a sample that demonstrates how to integrate a custom layer into TensorRT.
Please check this sample for more information: /usr/src/tensorrt/samples/sampleSSD/
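As a quick way to see this kind of failure coming before involving the parser at all, one can scan the deploy prototxt for layer types that the NvCaffe parser does not handle natively. This is a minimal sketch, not part of jetson-inference or TensorRT; the `SUPPORTED` set below is an illustrative subset, not the authoritative TensorRT support matrix:

```python
# Hypothetical pre-check: list layer types in a Caffe deploy.prototxt
# that likely need a TensorRT plugin. SUPPORTED is an illustrative
# subset of natively parsed Caffe layers, not an official list.
import re

SUPPORTED = {
    "Convolution", "Pooling", "InnerProduct", "ReLU", "Softmax",
    "Concat", "Deconvolution", "Eltwise", "LRN", "Power", "Scale",
    "BatchNorm", "Input", "Dropout", "Flatten", "Permute", "Reshape",
}

def unsupported_layers(prototxt_text):
    """Return the sorted layer types that are not in SUPPORTED."""
    types = re.findall(r'type\s*:\s*"([^"]+)"', prototxt_text)
    return sorted({t for t in types if t not in SUPPORTED})

if __name__ == "__main__":
    # Fragment mimicking the ResNet-SSD face model's L2-norm layer.
    snippet = '''
    layer { name: "conv4_3_norm" type: "Normalize" }
    layer { name: "conv1" type: "Convolution" }
    '''
    print(unsupported_layers(snippet))  # "Normalize" needs a plugin
```

Running this on the Res10 SSD prototxt would flag "Normalize", which matches the parser error above and tells you a plugin is required before TensorRT can build the engine.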

Thanks.

Hi um,

I wasted two weeks testing; only the models NVIDIA provides or has dedicated support for work :) Most others either don't work or perform very poorly.
I have stopped playing with the Nano and gone back to working on OpenCV and other meaningful tasks.

Hi,

Sorry to hear this.

We keep implementing new operations, but some variants are still not covered.
We already have a sample for the SSD model; it's recommended to give it a try.

Sorry for any inconvenience this brings you.

Thanks.

Hi,

Sorry, I just saw your post in another topic.
If you are looking for a Python SSD example, please check this one:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-sample-support-guide/index.html#uff_ssd
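The uff_ssd sample handles SSD's detection-output step with TensorRT's NMS plugin. As a rough, framework-free illustration of what that post-processing computes, here is a plain-Python non-maximum suppression sketch; the 0.5 IoU threshold and the box values are illustrative choices, not taken from the sample:

```python
# Plain-Python non-maximum suppression, the core of SSD post-processing.
# A detection is (x1, y1, x2, y2, score); the IoU threshold is illustrative.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, iou_threshold=0.5):
    """Keep the highest-scoring box of each overlapping cluster."""
    keep = []
    for det in sorted(detections, key=lambda d: d[4], reverse=True):
        if all(iou(det[:4], k[:4]) <= iou_threshold for k in keep):
            keep.append(det)
    return keep

if __name__ == "__main__":
    dets = [
        (10, 10, 50, 50, 0.9),      # strongest face box -> kept
        (12, 12, 52, 52, 0.6),      # heavy overlap with the first -> dropped
        (100, 100, 140, 140, 0.8),  # separate detection -> kept
    ]
    print(nms(dets))
```

In the uff_ssd sample this same suppression runs inside the NMS plugin on the GPU; the sketch only shows the logic, not the plugin API.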

Thanks.

Hi ktktkkt,
I am in the same situation as you.
But I believe NVIDIA will bring us something more advanced with improved performance.

Hi AastaLLL,

Your answer helped me.

Thank you.