AI examples Jetson Nano - Could not register plugin creator - cache file not found

This is about the AI examples for the Jetson Nano.

The camera viewer program works fine. I used the CSI camera.

I followed the instructions on this page:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/building-repo-2.md

$ sudo apt-get update
$ sudo apt-get install git cmake
$ git clone https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ git submodule update --init
$ sudo apt-get install libpython3-dev python3-numpy
$ cd jetson-inference # omit if working directory is already jetson-inference/ from above
$ mkdir build
$ cd build
$ cmake ../
$ cd jetson-inference/build # omit if working directory is already build/ from above
$ make
$ sudo make install
$ sudo ldconfig
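
As a quick sanity check after the install, you can try importing the Python bindings (just a sketch, assuming you built them; that is what the libpython3-dev / python3-numpy packages above are for):

# minimal sanity check for the jetson-inference Python bindings
# (a sketch; assumes the bindings were built and installed in the steps above)
import jetson.inference
import jetson.utils
print("jetson-inference Python bindings imported OK")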

This was just for your information.


However, I ran into issues when I tried the following:
./imagenet-console --network=googlenet images/orange_0.jpg output_0.jpg

I got the following error messages:
[TRT] Could not register plugin creator: FlattenConcat_TRT in namespace:
[TRT] cache file not found, profiling network model on device GPU

Note, I followed the instructions on this page:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/imagenet-console-2.md

Here are more details from the execution:
$ ./imagenet-console --network=googlenet images/orange_0.jpg output_0.jpg

imageNet -- loading classification network model from:
-- prototxt networks/googlenet.prototxt
-- model networks/bvlc_googlenet.caffemodel
-- class_labels networks/ilsvrc12_synset_words.txt
-- input_blob 'data'
-- output_blob 'prob'
-- batch_size 1

[TRT] TensorRT version 6.0.1
[TRT] loading NVIDIA plugins...
[TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[TRT] Plugin Creator registration succeeded - GridAnchorRect_TRT
[TRT] Plugin Creator registration succeeded - NMS_TRT
[TRT] Plugin Creator registration succeeded - Reorg_TRT
[TRT] Plugin Creator registration succeeded - Region_TRT
[TRT] Plugin Creator registration succeeded - Clip_TRT
[TRT] Plugin Creator registration succeeded - LReLU_TRT
[TRT] Plugin Creator registration succeeded - PriorBox_TRT
[TRT] Plugin Creator registration succeeded - Normalize_TRT
[TRT] Plugin Creator registration succeeded - RPROI_TRT
[TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT] Could not register plugin creator: FlattenConcat_TRT in namespace:
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/bvlc_googlenet.caffemodel.1.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading networks/googlenet.prototxt networks/bvlc_googlenet.caffemodel

[TRT] Retargeting inception_5b/3x3 to inception_5b/output
[TRT] Retargeting inception_5b/5x5 to inception_5b/output
[TRT] Retargeting inception_5b/pool_proj to inception_5b/output
[TRT] After concat removal: 66 layers
[TRT] Graph construction and optimization completed in 0.0399582 seconds.

After that, many more lines were displayed, but nothing else happened.

I checked the forum, but I have not found an answer that solves these two issues.

I downloaded the official image and flashed it more than once, but I still got those two errors.

Thank you for your help

Hi,

There are no errors in your log, just some warnings that might mislead you.

[TRT] cache file not found, profiling network model on device GPU

This indicates that there is no TensorRT engine file in your environment yet, so the app will create one from the caffemodel.
The creation takes a long time, so we serialize the engine to disk and reuse it the next time.
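
For example, after the first successful run you can verify that the serialized engine was written next to the model (a small Python sketch; the engine path is taken from the log above and is relative to the data folder):

# check for the serialized TensorRT engine that the app writes after the
# first run (path copied from the log above)
import os

engine = "networks/bvlc_googlenet.caffemodel.1.1.GPU.FP16.engine"
if os.path.exists(engine):
    print("cached engine found:", os.path.getsize(engine), "bytes")
else:
    print("no cached engine yet; the next run will profile the network again")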

[TRT] Could not register plugin creator: FlattenConcat_TRT in namespace:

There is an issue in the FlattenConcat_TRT plugin registration, but imagenet doesn't require this plugin.
You can find the implementation of it in our detection-based samples.

This sample applies deep-learning inference to the input image and saves the result as output_0.jpg.
You will find the output file in the folder you ran the command from.
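
If you prefer Python, the same flow looks roughly like this (a sketch, assuming the Python bindings were built; the names follow the jetson.inference / jetson.utils examples, and it prints the top class instead of saving an overlay image):

import jetson.inference
import jetson.utils

# load the GoogleNet classification network; the first call builds the
# TensorRT engine from the caffemodel, which is the slow step discussed above
net = jetson.inference.imageNet("googlenet")

# load the test image into shared CPU/GPU memory
img, width, height = jetson.utils.loadImageRGBA("images/orange_0.jpg")

# run inference and print the top class and its confidence
class_idx, confidence = net.Classify(img, width, height)
print(net.GetClassDesc(class_idx), confidence)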

Thanks.

Hi,

A few things happened that confused me during my experiments, like some low-memory warnings that appeared outside the terminal window, among other things. It was not the SD card, since I use a 64 GB one.
I believed it was linked to the cache file warning.

Anyway, it works fine now.

Basically, I was not patient enough.

Thanks

Thanks for the status.
Good to know it works now.