Problem: deploying a trained model from DIGITS

I recently trained a model in DIGITS and downloaded it to my Jetson TX2, where I have also installed Caffe. However, when I try to deploy the model, an error shows up:

I1107 23:19:12.893411 25303 net.cpp:257] Network initialization done.
I1107 23:19:13.149886 25303 net.cpp:746] Ignoring source layer train-data
F1107 23:19:13.149960 25303 blob.cpp:496] Check failed: count_ == proto.data_size() (34848 vs. 0)
*** Check failure stack trace: ***
Aborted (core dumped)

From some research, this may be caused by a version mismatch around the deploy.prototxt file: the model was trained on one version of Caffe while the Jetson TX2 is running another. Someone mentioned that a prototxt from version 1.0 hits the same problem when run on version 1.1. Is there any way to convert between these two versions, or am I missing a setting? Please let me know, thank you so much!
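For what it's worth, the count in the check failure appears to match conv1's weight blob in the prototxt below: 96 outputs × 3 input channels × an 11×11 kernel is exactly 34848. A quick sanity check (plain arithmetic on the shapes from the prototxt, nothing Caffe-specific):

```python
# Shapes taken from the deploy.prototxt in this post.
num_output = 96    # conv1 num_output
in_channels = 3    # input_shape dim: 3
kernel = 11        # conv1 kernel_size

weight_count = num_output * in_channels * kernel * kernel
print(weight_count)  # 34848, the count_ reported in the check failure
```

If that reading is right, `count_ == 34848` is the expected size of conv1's weights, while `proto.data_size() == 0` means the snapshot contained no data for that blob at all, which is consistent with a serialization/version mismatch rather than a shape mismatch.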

Here is my prototxt file:
input: "data"
input_shape {
  dim: 1
  dim: 3
  dim: 227
  dim: 227
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
    weight_filler {
      type: "gaussian"
      std: 0.00999999977648
    }
    bias_filler {
      type: "constant"
      value: 0.0
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "norm1"
  type: "LRN"
  bottom: "conv1"
  top: "norm1"
  lrn_param {
    local_size: 5
    alpha: 9.99999974738e-05
    beta: 0.75
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "norm1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 256
    pad: 2
    kernel_size: 5
    group: 2
    weight_filler {
      type: "gaussian"
      std: 0.00999999977648
    }
    bias_filler {
      type: "constant"
      value: 0.10000000149
    }
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "norm2"
  type: "LRN"
  bottom: "conv2"
  top: "norm2"
  lrn_param {
    local_size: 5
    alpha: 9.99999974738e-05
    beta: 0.75
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "norm2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "gaussian"
      std: 0.00999999977648
    }
    bias_filler {
      type: "constant"
      value: 0.0
    }
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "conv4"
  type: "Convolution"
  bottom: "conv3"
  top: "conv4"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    group: 2
    weight_filler {
      type: "gaussian"
      std: 0.00999999977648
    }
    bias_filler {
      type: "constant"
      value: 0.10000000149
    }
  }
}
layer {
  name: "relu4"
  type: "ReLU"
  bottom: "conv4"
  top: "conv4"
}
layer {
  name: "conv5"
  type: "Convolution"
  bottom: "conv4"
  top: "conv5"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    group: 2
    weight_filler {
      type: "gaussian"
      std: 0.00999999977648
    }
    bias_filler {
      type: "constant"
      value: 0.10000000149
    }
  }
}
layer {
  name: "relu5"
  type: "ReLU"
  bottom: "conv5"
  top: "conv5"
}
layer {
  name: "pool5"
  type: "Pooling"
  bottom: "conv5"
  top: "pool5"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  inner_product_param {
    num_output: 4096
    weight_filler {
      type: "gaussian"
      std: 0.00499999988824
    }
    bias_filler {
      type: "constant"
      value: 0.10000000149
    }
  }
}
layer {
  name: "relu6"
  type: "ReLU"
  bottom: "fc6"
  top: "fc6"
}
layer {
  name: "drop6"
  type: "Dropout"
  bottom: "fc6"
  top: "fc6"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc7"
  type: "InnerProduct"
  bottom: "fc6"
  top: "fc7"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  inner_product_param {
    num_output: 4096
    weight_filler {
      type: "gaussian"
      std: 0.00499999988824
    }
    bias_filler {
      type: "constant"
      value: 0.10000000149
    }
  }
}
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "drop7"
  type: "Dropout"
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc8"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  inner_product_param {
    num_output: 2
    weight_filler {
      type: "gaussian"
      std: 0.00999999977648
    }
    bias_filler {
      type: "constant"
      value: 0.0
    }
  }
}
layer {
  name: "softmax"
  type: "Softmax"
  bottom: "fc8"
  top: "softmax"
}

Hi,

DIGITS uses nvcaffe-0.15. Do you use the same source?

DIGITS should give you a prototxt named 'deploy.prototxt' when you download the snapshot.
Do you use that file when launching on the TX2?
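One more thing worth ruling out: if the prototxt was ever copied through a web page or rich-text editor, the straight ASCII quotes around layer names can silently become typographic curly quotes, which the protobuf text parser rejects. A minimal sketch of a cleanup helper (the function name and the `deploy.prototxt` path are just placeholders for illustration):

```python
def normalize_quotes(text):
    """Replace typographic double quotes with the ASCII '"' that
    protobuf text format expects."""
    for bad in ('\u201c', '\u201d'):  # left and right curly double quotes
        text = text.replace(bad, '"')
    return text

# Example on a single line; in practice, read and rewrite the whole file, e.g.:
#   text = open('deploy.prototxt', encoding='utf-8').read()
#   open('deploy.prototxt', 'w', encoding='utf-8').write(normalize_quotes(text))
print(normalize_quotes('name: \u201cconv1\u201d'))  # name: "conv1"
```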

By the way, it’s recommended to use TensorRT for better performance.
You can find a tutorial here: [url]https://github.com/dusty-nv/jetson-inference[/url]

Thanks.

Hi,

Thank you so much for the reply; the tutorial link you provided is great!

I think the Caffe I built on the TX2 is just the standard version from here: GitHub - jetsonhacks/installCaffeJTX2: Install Caffe on the NVIDIA Jetson TX2 Development Kit
However, I found nvcaffe for the TX2 here: NVCaffe support on TX2 - Jetson TX2 - NVIDIA Developer Forums

And thanks for the suggestion to use TensorRT. Since I'm new to this platform and DIGITS, I'm just trying to play around and try everything out.

Thanks again!