[Xavier] Cannot use my own trained model with jetson-inference

Hi there,

I have run into the following problem:
I trained a DetectNet model on my host PC using nvidia-docker (image: nvidia/digits), copied the model to the Xavier, and tried to use it by running:

./detectnet-console a.jpg output_a.jpg --prototxt=/home/nvidia/jetson-inference/build/aarch64/bin/trained_model/deploy.prototxt --model=/home/nvidia/jetson-inference/build/aarch64/bin/trained_model/snapshot_iter_11850.caffemodel --input_blob=data --output_cvg=coverage --output_bbox=bboxes
detectnet-console
  args (8):  0 [./detectnet-console]  1 [a.jpg]  2 [output_a.jpg]  3 [--prototxt=/home/nvidia/jetson-inference/build/aarch64/bin/trained_model/deploy.prototxt]  4 [--model=/home/nvidia/jetson-inference/build/aarch64/bin/trained_model/snapshot_iter_11850.caffemodel]  5 [--input_blob=data]  6 [--output_cvg=coverage]  7 [--output_bbox=bboxes]  

detectNet -- loading detection network model from:
          -- prototxt    /home/nvidia/jetson-inference/build/aarch64/bin/trained_model/deploy.prototxt
          -- model       /home/nvidia/jetson-inference/build/aarch64/bin/trained_model/snapshot_iter_11850.caffemodel
          -- input_blob  'data'
          -- output_cvg  'coverage'
          -- output_bbox 'bboxes'
          -- mean_pixel  0.000000
          -- threshold   0.500000
          -- batch_size  2

[TRT]  TensorRT version 5.0.3
[TRT]  attempting to open cache file /home/nvidia/jetson-inference/build/aarch64/bin/trained_model/snapshot_iter_11850.caffemodel.2.tensorcache
[TRT]  cache file not found, profiling network model
[TRT]  platform has FP16 support.
[TRT]  loading /home/nvidia/jetson-inference/build/aarch64/bin/trained_model/deploy.prototxt /home/nvidia/jetson-inference/build/aarch64/bin/trained_model/snapshot_iter_11850.caffemodel
could not parse layer type Python
[TRT]  failed to parse caffe network
failed to load /home/nvidia/jetson-inference/build/aarch64/bin/trained_model/snapshot_iter_11850.caffemodel
detectNet -- failed to initialize.
detectnet-console:   failed to initialize detectNet

What should I do to solve this? Many thanks.

P.S.

I tested the ‘DetectNet-COCO-Dog’ model and it works well:

./detectnet-console dog_4.jpg output_dog4.jpg --prototxt=$NET/deploy.prototxt --model=$NET/snapshot_iter_38600.caffemodel --input_blob=data --output_cvg=coverage --output_bbox=bboxes 
detectnet-console
  args (8):  0 [./detectnet-console]  1 [dog_4.jpg]  2 [output_dog4.jpg]  3 [--prototxt=/home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/deploy.prototxt]  4 [--model=/home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel]  5 [--input_blob=data]  6 [--output_cvg=coverage]  7 [--output_bbox=bboxes]  

detectNet -- loading detection network model from:
          -- prototxt    /home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/deploy.prototxt
          -- model       /home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel
          -- input_blob  'data'
          -- output_cvg  'coverage'
          -- output_bbox 'bboxes'
          -- mean_pixel  0.000000
          -- threshold   0.500000
          -- batch_size  2

[TRT]  TensorRT version 5.0.3
[TRT]  attempting to open cache file /home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel.2.tensorcache
[TRT]  cache file not found, profiling network model
[TRT]  platform has FP16 support.
[TRT]  loading /home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/deploy.prototxt /home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel
[TRT]  retrieved output tensor 'coverage'
[TRT]  retrieved output tensor 'bboxes'
[TRT]  configuring CUDA engine
[TRT]  building CUDA engine
[TRT]  completed building CUDA engine
[TRT]  network profiling complete, writing cache to /home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel.2.tensorcache
[TRT]  completed writing cache to /home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel.2.tensorcache
[TRT]  /home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel loaded
[TRT]  CUDA engine context initialized with 3 bindings
[TRT]  /home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel input  binding index:  0
[TRT]  /home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel input  dims (b=2 c=3 h=640 w=640) size=9830400
[cuda]  cudaAllocMapped 9830400 bytes, CPU 0x21e5f6000 GPU 0x21e5f6000
[TRT]  /home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel output 0 coverage  binding index:  1
[TRT]  /home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel output 0 coverage  dims (b=2 c=1 h=40 w=40) size=12800
[cuda]  cudaAllocMapped 12800 bytes, CPU 0x21ef56000 GPU 0x21ef56000
[TRT]  /home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel output 1 bboxes  binding index:  2
[TRT]  /home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel output 1 bboxes  dims (b=2 c=4 h=40 w=40) size=51200
[cuda]  cudaAllocMapped 51200 bytes, CPU 0x21f156000 GPU 0x21f156000
/home/nvidia/jetson-inference/data/networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel initialized.
[cuda]  cudaAllocMapped 16 bytes, CPU 0x216969200 GPU 0x216969200
maximum bounding boxes:  6400
[cuda]  cudaAllocMapped 102400 bytes, CPU 0x21f356000 GPU 0x21f356000
[cuda]  cudaAllocMapped 25600 bytes, CPU 0x21f162800 GPU 0x21f162800
loaded image  dog_4.jpg  (512 x 512)  4194304 bytes
[cuda]  cudaAllocMapped 4194304 bytes, CPU 0x21f556000 GPU 0x21f556000
detectnet-console:  beginning processing network (1548226675235)
[TRT]  layer deploy_transform - 0.200224 ms
[TRT]  layer conv1/7x7_s2 + conv1/relu_7x7 input reformatter 0 - 0.196576 ms
[TRT]  layer conv1/7x7_s2 + conv1/relu_7x7 - 2.883648 ms
[TRT]  layer pool1/3x3_s2 - 0.348416 ms
[TRT]  layer pool1/norm1 input reformatter 0 - 0.125664 ms
[TRT]  layer pool1/norm1 - 0.235552 ms
[TRT]  layer conv2/3x3_reduce + conv2/relu_3x3_reduce input reformatter 0 - 0.153568 ms
[TRT]  layer conv2/3x3_reduce + conv2/relu_3x3_reduce - 0.191456 ms
[TRT]  layer conv2/3x3 + conv2/relu_3x3 - 1.860640 ms
[TRT]  layer conv2/norm2 input reformatter 0 - 0.360448 ms
[TRT]  layer conv2/norm2 - 0.635904 ms
[TRT]  layer pool2/3x3_s2 input reformatter 0 - 0.413664 ms
[TRT]  layer pool2/3x3_s2 - 0.274464 ms
[TRT]  layer inception_3a/1x1 + inception_3a/relu_1x1 || inception_3a/3x3_reduce + inception_3a/relu_3x3_reduce || inception_3a/5x5_reduce + inception_3a/relu_5x5_reduce - 0.253920 ms
[TRT]  layer inception_3a/3x3 + inception_3a/relu_3x3 - 0.460832 ms
[TRT]  layer inception_3a/5x5 + inception_3a/relu_5x5 - 0.225280 ms
[TRT]  layer inception_3a/pool - 0.184320 ms
[TRT]  layer inception_3a/pool_proj + inception_3a/relu_pool_proj - 0.091200 ms
[TRT]  layer inception_3a/1x1 copy - 0.085920 ms
[TRT]  layer inception_3b/1x1 + inception_3b/relu_1x1 || inception_3b/3x3_reduce + inception_3b/relu_3x3_reduce || inception_3b/5x5_reduce + inception_3b/relu_5x5_reduce - 0.460992 ms
[TRT]  layer inception_3b/3x3 + inception_3b/relu_3x3 - 0.897888 ms
[TRT]  layer inception_3b/5x5 + inception_3b/relu_5x5 - 0.403488 ms
[TRT]  layer inception_3b/pool - 0.245728 ms
[TRT]  layer inception_3b/pool_proj + inception_3b/relu_pool_proj - 0.133120 ms
[TRT]  layer inception_3b/1x1 copy - 0.075776 ms
[TRT]  layer pool3/3x3_s2 - 0.199904 ms
[TRT]  layer inception_4a/1x1 + inception_4a/relu_1x1 || inception_4a/3x3_reduce + inception_4a/relu_3x3_reduce || inception_4a/5x5_reduce + inception_4a/relu_5x5_reduce - 0.218336 ms
[TRT]  layer inception_4a/3x3 + inception_4a/relu_3x3 - 0.221120 ms
[TRT]  layer inception_4a/5x5 + inception_4a/relu_5x5 - 0.082752 ms
[TRT]  layer inception_4a/pool - 0.141120 ms
[TRT]  layer inception_4a/pool_proj + inception_4a/relu_pool_proj - 0.067552 ms
[TRT]  layer inception_4a/1x1 copy - 0.027872 ms
[TRT]  layer inception_4b/1x1 + inception_4b/relu_1x1 || inception_4b/3x3_reduce + inception_4b/relu_3x3_reduce || inception_4b/5x5_reduce + inception_4b/relu_5x5_reduce - 0.226112 ms
[TRT]  layer inception_4b/3x3 + inception_4b/relu_3x3 - 0.281600 ms
[TRT]  layer inception_4b/5x5 + inception_4b/relu_5x5 - 0.074752 ms
[TRT]  layer inception_4b/pool - 0.145376 ms
[TRT]  layer inception_4b/pool_proj + inception_4b/relu_pool_proj - 0.071040 ms
[TRT]  layer inception_4b/1x1 copy - 0.027264 ms
[TRT]  layer inception_4c/1x1 + inception_4c/relu_1x1 || inception_4c/3x3_reduce + inception_4c/relu_3x3_reduce || inception_4c/5x5_reduce + inception_4c/relu_5x5_reduce - 0.226304 ms
[TRT]  layer inception_4c/3x3 + inception_4c/relu_3x3 - 0.288800 ms
[TRT]  layer inception_4c/5x5 + inception_4c/relu_5x5 - 0.074944 ms
[TRT]  layer inception_4c/pool - 0.145408 ms
[TRT]  layer inception_4c/pool_proj + inception_4c/relu_pool_proj - 0.070560 ms
[TRT]  layer inception_4c/1x1 copy - 0.024480 ms
[TRT]  layer inception_4d/1x1 + inception_4d/relu_1x1 || inception_4d/3x3_reduce + inception_4d/relu_3x3_reduce || inception_4d/5x5_reduce + inception_4d/relu_5x5_reduce - 0.212064 ms
[TRT]  layer inception_4d/3x3 + inception_4d/relu_3x3 - 0.497568 ms
[TRT]  layer inception_4d/5x5 + inception_4d/relu_5x5 - 0.071648 ms
[TRT]  layer inception_4d/pool - 0.143616 ms
[TRT]  layer inception_4d/pool_proj + inception_4d/relu_pool_proj - 0.086784 ms
[TRT]  layer inception_4d/1x1 copy - 0.025024 ms
[TRT]  layer inception_4e/1x1 + inception_4e/relu_1x1 || inception_4e/3x3_reduce + inception_4e/relu_3x3_reduce || inception_4e/5x5_reduce + inception_4e/relu_5x5_reduce - 0.310880 ms
[TRT]  layer inception_4e/3x3 + inception_4e/relu_3x3 - 1.034240 ms
[TRT]  layer inception_4e/5x5 + inception_4e/relu_5x5 - 0.114688 ms
[TRT]  layer inception_4e/pool - 0.153568 ms
[TRT]  layer inception_4e/pool_proj + inception_4e/relu_pool_proj - 0.087232 ms
[TRT]  layer inception_4e/1x1 copy - 0.040800 ms
[TRT]  layer inception_5a/1x1 + inception_5a/relu_1x1 || inception_5a/3x3_reduce + inception_5a/relu_3x3_reduce || inception_5a/5x5_reduce + inception_5a/relu_5x5_reduce - 0.429024 ms
[TRT]  layer inception_5a/3x3 + inception_5a/relu_3x3 - 0.496672 ms
[TRT]  layer inception_5a/5x5 + inception_5a/relu_5x5 - 0.108512 ms
[TRT]  layer inception_5a/pool - 0.232448 ms
[TRT]  layer inception_5a/pool_proj + inception_5a/relu_pool_proj - 0.118208 ms
[TRT]  layer inception_5a/1x1 copy - 0.034816 ms
[TRT]  layer inception_5b/1x1 + inception_5b/relu_1x1 || inception_5b/3x3_reduce + inception_5b/relu_3x3_reduce || inception_5b/5x5_reduce + inception_5b/relu_5x5_reduce - 0.537184 ms
[TRT]  layer inception_5b/3x3 + inception_5b/relu_3x3 - 0.599040 ms
[TRT]  layer inception_5b/5x5 + inception_5b/relu_5x5 - 0.194528 ms
[TRT]  layer inception_5b/pool - 0.235520 ms
[TRT]  layer inception_5b/pool_proj + inception_5b/relu_pool_proj - 0.122016 ms
[TRT]  layer inception_5b/1x1 copy - 0.050432 ms
[TRT]  layer cvg/classifier - 0.121472 ms
[TRT]  layer coverage/sig input reformatter 0 - 0.009216 ms
[TRT]  layer coverage/sig - 0.009664 ms
[TRT]  layer bbox/regressor - 0.106048 ms
[TRT]  layer bbox/regressor output reformatter 0 - 0.008160 ms
[TRT]  layer network time - 20.205059 ms
detectnet-console:  finished processing network  (1548226675261)
5 bounding boxes detected
bounding box 0   (5.100000, 11.400001)  (505.350006, 435.200012)  w=500.250000  h=423.800018
bounding box 1   (15.675000, 18.400000)  (281.100006, 218.400009)  w=265.425018  h=200.000015
bounding box 2   (9.775001, 119.312500)  (203.949997, 224.350006)  w=194.175003  h=105.037506
bounding box 3   (117.450005, 127.425003)  (211.199997, 204.850006)  w=93.749992  h=77.425003
bounding box 4   (5.700000, 234.062500)  (111.525002, 336.100006)  w=105.825005  h=102.037506
draw boxes  5  0   0.000000 200.000000 255.000000 100.000000
detectnet-console:  writing 512x512 image to 'output_dog4.jpg'
detectnet-console:  successfully wrote 512x512 image to 'output_dog4.jpg'

shutting down...

I edited the deploy.prototxt file and deleted the last layer:

layer {
  name: "cluster"
  type: "Python"
  bottom: "coverage"
  bottom: "bboxes"
  top: "bbox-list"
  python_param {
    module: "caffe.layers.detectnet.clustering"
    layer: "ClusterDetections"
    param_str: "512, 512, 16, 0.6, 3, 0.02, 22, 1"
  }
}

Then it works.
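For anyone hitting the same "could not parse layer type Python" error: TensorRT's Caffe parser cannot handle Caffe `Python` layers (the DetectNet `ClusterDetections` layer runs clustering in Python at Caffe inference time, which TensorRT does not support), so removing that layer from deploy.prototxt is the fix. As a minimal sketch, here is a hypothetical helper that strips every `layer { ... type: "Python" ... }` block from a prototxt's text, so you don't have to edit the file by hand each time you export a new snapshot (the function name and approach are my own, not part of jetson-inference):

```python
# Sketch (assumption, not part of jetson-inference): remove all Caffe
# "Python" layers from a deploy.prototxt so TensorRT's Caffe parser
# can load the network.

def strip_python_layers(prototxt_text):
    """Return prototxt text with every layer block of type "Python" removed."""
    out = []
    i = 0
    n = len(prototxt_text)
    while i < n:
        # Find the next "layer" block at or after position i.
        j = prototxt_text.find("layer", i)
        if j == -1:
            out.append(prototxt_text[i:])
            break
        brace = prototxt_text.find("{", j)
        if brace == -1:
            out.append(prototxt_text[i:])
            break
        # Match braces to find where this layer block ends
        # (handles nested blocks such as python_param { ... }).
        depth = 0
        k = brace
        while k < n:
            if prototxt_text[k] == "{":
                depth += 1
            elif prototxt_text[k] == "}":
                depth -= 1
                if depth == 0:
                    break
            k += 1
        block = prototxt_text[j:k + 1]
        if 'type: "Python"' in block:
            out.append(prototxt_text[i:j])  # drop the Python layer block
        else:
            out.append(prototxt_text[i:k + 1])  # keep everything else
        i = k + 1
    return "".join(out)
```

This is a text-level sketch that assumes well-formed prototxt with balanced braces; for anything more robust, parsing the file with Caffe's `NetParameter` protobuf would be the safer route.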

You can find the information here:
[url]https://github.com/dusty-nv/jetson-inference#detectnet-patches-for-tensorrt[/url]

Thanks.