TensorRT network definition creation for the frozen Inception V3 model

Hi all,

I am trying to run a re-trained Inception V3 model on a Jetson TX2. It was trained in Keras on a custom dataset comprising 3 classes.

The Keras model was saved as a “.h5” file and I converted it into a frozen graph (.pb) file.

Then I converted that to UFF format, which I believe is a prerequisite for converting the model into a TensorRT engine file (.plan). The “.plan” file is the format used for inference on the Jetson TX2.
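For context, this is roughly the conversion I performed (a minimal sketch assuming TensorFlow 1.x with the uff converter package installed; the file names and output-node lookup are illustrative):

import tensorflow as tf
import uff
from tensorflow.python.framework import graph_io, graph_util

# Load the re-trained Keras model with the learning phase fixed to inference.
tf.keras.backend.set_learning_phase(0)
model = tf.keras.models.load_model('inception_v3_retrained.h5')  # illustrative name
sess = tf.keras.backend.get_session()

# Freeze the variables into constants and write out the .pb graph.
output_node = model.output.op.name  # e.g. 'dense_1/Softmax'
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), [output_node])
graph_io.write_graph(frozen, '.', 'inception_v3_frozen.pb', as_text=False)

# Convert the frozen graph to UFF.
uff.from_tensorflow_frozen_model('inception_v3_frozen.pb', [output_node],
                                 output_filename='inception_v3.uff')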

The first step in converting a “.uff” model file into a “.plan” file is to create a TensorRT network definition for the model, and I am getting the following errors while trying to do this.

Please see section 3.2.4 in this link; it describes what I am trying to achieve.

The command I run in Python is: parser.parse(model_file, network)
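For reference, this is roughly the parsing sequence I am attempting (a minimal sketch assuming the TensorRT Python UffParser API; the input/output names and dimensions are illustrative and must match the UFF graph):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    # Register the graph input with explicit CHW dimensions before parsing.
    parser.register_input('inception_v3_input', (3, 299, 299))
    parser.register_output('dense_1/Softmax')  # illustrative output node name
    ok = parser.parse('inception_v3.uff', network)
    print(ok)  # False means the parser failed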
The error (I get one or the other) that is generated after I run the above command in the Linux terminal is:

Error-1

[TensorRT] ERROR: Parameter check failed at: …/builder/Network.cpp::addInput::410, condition: inName != knownInputs->name
[TensorRT] ERROR: UFFParser: Failed to parseInput for node inception_v3_input
[TensorRT] ERROR: UFFParser: Parser error: inception_v3_input: Failed to parse node - >>Invalid Tensor found at node inception_v3_input
False

Error-2

[TensorRT] ERROR: Parameter check failed at: …/builder/Network.cpp::addInput::406, condition: isValidDims(dims)
[TensorRT] ERROR: UFFParser: Failed to parseInput for node inception_v3_input
[TensorRT] ERROR: UFFParser: Parser error: inception_v3_input: Failed to parse node - >>Invalid Tensor found at node inception_v3_input
False

I am also unsure whether I should be performing this engine conversion on the Jetson or on the host PC.

Could you please help me with this issue?

Regards
Sri

Hi,

.pb → .uff : you can do this on either the desktop or the device.
.uff → PLAN: device only.
Since the PLAN engine is not portable, it must be generated directly on the target device.
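Roughly, the on-device build and serialization looks like this (a sketch assuming the TensorRT Python API with the UFF parser; names, dimensions, and workspace size are illustrative):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Build and serialize the engine on the TX2 itself, since the PLAN file
# is tuned to the GPU it was built on.
with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    parser.register_input('inception_v3_input', (3, 299, 299))
    parser.register_output('dense_1/Softmax')
    parser.parse('inception_v3.uff', network)
    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 28  # 256 MB of build scratch space
    engine = builder.build_cuda_engine(network)
    with open('inception_v3.plan', 'wb') as f:
        f.write(engine.serialize())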

From your log, it looks like there is an issue with the conversion settings.
Could you follow this tutorial and do the conversion again?
https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification

Thanks.

Hi @AastaLLL,

Thank you very much for your advice! I ran the convert_plan.py script from the above link on the Jetson TX2, and I have now successfully generated a “.plan” file.

I tried performing inference and got the following results:

Loading TensorRT engine from plan file…
Preprocessing input…
Executing inference engine…

The top-5 indices are: 0 2 1 65 0
Which corresponds to class labels:
0. car
1. clear_landspace
2. clear_waterspace
Segmentation fault (core dumped)

Also, I am not sure how to perform inference on multiple images at once, or how to interpret the results above.

I was also expecting this to give a confidence percentage for each class, but I don't see any in the results above.

It would be great if you could provide some assistance with this.

Many thanks in advance!

Sri

Hi,

The output of TensorRT is a CUDA buffer; please copy it back to the CPU before accessing it.
An alternative is to use unified memory, where the memory pointer is shared between the CPU and GPU.
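If you use the TensorRT Python API, a minimal sketch of the copy-back with PyCUDA looks like this (buffer names are illustrative; in real code d_output is the device buffer bound to the engine's output):

import numpy as np
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda

n_classes = 3  # size of your final layer
h_output = np.empty(n_classes, dtype=np.float32)  # host-side buffer
d_output = cuda.mem_alloc(h_output.nbytes)        # device buffer bound to the engine output

# ... run the engine, e.g. context.execute(batch_size=1, bindings=[int(d_input), int(d_output)]) ...

cuda.memcpy_dtoh(h_output, d_output)  # device -> host copy

# If the network ends in a softmax, h_output already holds probabilities;
# otherwise apply a softmax on the host to get confidence percentages.
probs = np.exp(h_output) / np.sum(np.exp(h_output))
for i, p in enumerate(probs):
    print('class %d: %.2f%%' % (i, p * 100.0))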

Here is some sample code for your reference:
https://github.com/dusty-nv/jetson-inference/blob/master/imageNet.cpp#L406

// Bind the input/output device pointers for TensorRT execution.
void* bindBuffers[] = { mInputCUDA, mOutputs[0].CUDA };
...
	// mOutputs[0].CPU is the CPU-side pointer of the shared (zero-copy) output
	// buffer, so the class scores can be read directly after execution.
	for( size_t n=0; n < mOutputClasses; n++ )
	{
		const float value = mOutputs[0].CPU[n];

		// Print every class with at least 1% confidence.
		if( value >= 0.01f )
			printf("class %04zu - %f  (%s)\n", n, value, mClassDesc[n].c_str());

		// Track the highest-scoring class (classIndex/classMax are declared earlier).
		if( value > classMax )
		{
			classIndex = n;
			classMax   = value;
		}
	}

Thanks.