could not parse layer type Python

Hi,

I trained an object detection network on the KITTI object dataset using DIGITS, following DIGITS/examples/object-detection at digits-4.0 · NVIDIA/DIGITS · GitHub. I copied the trained model (i.e. deploy.prototxt, mean.binaryproto, snapshot_iter_?.caffemodel) to the Jetson TX1. On the Jetson TX1, I modified jetson-inference/detectnet-console/detectnet-console.cpp to create the detectNet with the trained model; that is, around line 42 I changed it to detectNet* net = detectNet::Create("deploy.prototxt", "snapshot_iter_?.caffemodel", "mean.binaryproto").
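
For reference, here is a minimal sketch of the change in detectnet-console.cpp (the paths are relative to the working directory, the '?' stands for the actual iteration number of the snapshot, and the error check just mirrors the style already used in the sample):

// detectnet-console.cpp, around line 42: load the custom DIGITS model
// instead of the built-in network
detectNet* net = detectNet::Create("deploy.prototxt",
                                   "snapshot_iter_?.caffemodel",
                                   "mean.binaryproto");

if( !net )
{
    printf("detectnet-console:  failed to initialize detectNet\n");
    return 0;
}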

When I ran detectnet-console, I got the error messages "could not parse layer type Python" and "[GIE] failed to parse caffe network".

I'm using DIGITS 4.0.0 and Caffe 0.15.13, and I installed JetPack-L4T-2.3.1-linux-64.run. Any comments would be appreciated.

Hi,

Thanks for your question.

If you use the standard DetectNet, please comment out or remove the last Python layer:

#layer {
#  name: "cluster"
#  type: "Python"
#  bottom: "coverage"
#  bottom: "bboxes"
#  top: "bbox-list"
#  python_param {
#    module: "caffe.layers.detectnet.clustering"
#    layer: "ClusterDetections"
#    param_str: "640, 480, 16, 0.6, 3, 0.02, 22, 1"
#  }
#}

Thanks.

Hello,

Removing the last Python layer made DetectNet work on the Jetson TX1. Could you explain why I need to remove the last layer of DetectNet? Is the last layer of DetectNet also unnecessary for training with DIGITS?

Thank you,

Hi,

Thanks for your response.

The function of the last layer is to summarize the network output and populate the bounding-box locations.
It's a DetectNet-specific layer and isn't supported by standard Caffe or TensorRT.

In DIGITS, we implement this function in Python (a so-called Python layer) and check it into NVCaffe.
In jetson-inference, we implement it in C++; you can find the related code in detectNet.cpp.

This function is a kind of post-processing, so it is implemented in a different manner on each side.
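
As a rough illustration only (this is not the actual detectNet.cpp code, and the function name and layout assumptions are just for the example), the C++ post-processing that replaces the Python "cluster" layer can be sketched like this, assuming the standard 640x480 DetectNet configuration with a 16-pixel grid stride and a single class:

#include <vector>

struct BBox { float x1, y1, x2, y2, conf; };

// coverage: [gridH x gridW] confidence map from the "coverage" blob
// bboxes:   4 planes of gridH*gridW corner offsets from the "bboxes" blob
std::vector<BBox> extractDetections(const float* coverage, const float* bboxes,
                                    int gridW, int gridH,
                                    float cellStride = 16.0f,
                                    float threshold  = 0.6f)
{
    std::vector<BBox> out;

    for( int y = 0; y < gridH; y++ )
    {
        for( int x = 0; x < gridW; x++ )
        {
            const float conf = coverage[y * gridW + x];
            if( conf < threshold )
                continue;                      // skip low-confidence cells

            const int   cell = y * gridW + x;
            const float cx   = x * cellStride; // cell origin in image coords
            const float cy   = y * cellStride;

            BBox b;
            b.x1   = cx + bboxes[0 * gridW * gridH + cell];
            b.y1   = cy + bboxes[1 * gridW * gridH + cell];
            b.x2   = cx + bboxes[2 * gridW * gridH + cell];
            b.y2   = cy + bboxes[3 * gridW * gridH + cell];
            b.conf = conf;
            out.push_back(b);
            // a full implementation would also merge overlapping boxes here
        }
    }
    return out;
}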

Read more at DetectNet blog:
https://devblogs.nvidia.com/parallelforall/detectnet-deep-neural-network-object-detection-digits/

Hi,

I have a similar question but I’m using the DrivePX2 instead of the Jetson TX1.

I've trained my Caffe model using the default DetectNet network (caffe/detectnet_network.prototxt at caffe-0.15 · NVIDIA/caffe · GitHub), but with the last 4 Python layers (cluster, cluster_gt, score, mAP) commented out. I presume I have to comment out all 4 Python layers and not just the cluster one.

However, after optimizing the model with TensorRT, running the resulting TensorRT binary with sample_object_detector does not return any bounding boxes, whereas the default TensorRT binary does.

How should we modify the DetectNet network / sample_object_detector source file to get the object detector to work with our own network?

Thanks!

Hi ruijie,

Thanks for your question.

Please check whether the following topic solves your issue:
https://devtalk.nvidia.com/default/topic/993552/detection-result-difference-between-jetson-inference2-3-and-digits5-1/

If not, since this board is dedicated to the Tegra platform, please file your issue on the DRIVE Platforms board.

Thanks and sorry for the inconvenience.

Thanks for the quick reply, AastaLLL. I've re-posted the question at https://devtalk.nvidia.com/default/topic/999885/drive-platforms/using-detectnet-caffe-model-in-sample_object_detector/