CUDA 9.0 on Xavier

Hello,

I’d like to install CUDA 9.0 on my Xavier, what’s the best way to do this?

Thanks

Hi,

Sorry, we don’t have a CUDA 9.0 release for Xavier.
Both JetPack 4.1.1 and JetPack 4.2 support CUDA 10.0.

May I ask why you need to use CUDA 9.0?
Thanks.

I’d like to run TensorRT 3.0.4 on the Xavier because TensorRT 5 fails to parse a UFF file that parses fine with 3.0.4.

Hi,

We want to check this issue further.
Would you mind sharing the .uff file or the error log with us?

Thanks.

I’ve sent you the link to the .uff file via private message.

There are two main issues:

  1. I can’t change the input size. With TensorRT 3.0.4 I can use an arbitrary input size, but with 5.0.3 I get this error when I change it:
uffInput: image,3,368,432
output: outputs/conf
output: outputs/paf
UFFParser: Parser error: outputs/paf: Reshape: Volume mismatch
Engine could not be created
Engine could not be created

Note:
outputs/conf should be (19, 46, 54)
outputs/paf should be (38, 46, 54)

  2. My inference code expects “outputs/conf” at bindingIndex=1 and “outputs/paf” at bindingIndex=2, but TensorRT 5.0.3 always switches the bindings (see the workaround sketch after this list). On a successful run I get this:
uffInput: image,3,256,384
output: outputs/conf
output: outputs/paf
Tensor image cannot be both input and output
name=image, bindingIndex=0, buffers.size()=3
name=outputs/conf, bindingIndex=2, buffers.size()=3
name=outputs/paf, bindingIndex=1, buffers.size()=3
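
In the meantime I can work around the switched bindings by querying indices by name instead of hard-coding them. A minimal sketch with the standard TensorRT C++ API (engine creation and buffer allocation elided; the function and buffer names are just mine):

#include "NvInfer.h"

#include <cassert>
#include <vector>

// engine: an already-deserialized nvinfer1::ICudaEngine.
// Instead of assuming outputs/conf is binding 1 and outputs/paf is
// binding 2, ask the engine where each tensor actually landed.
void setupBindings(nvinfer1::ICudaEngine& engine,
                   void* imageBuf, void* confBuf, void* pafBuf,
                   std::vector<void*>& buffers)
{
    buffers.assign(engine.getNbBindings(), nullptr);

    const int imageIdx = engine.getBindingIndex("image");
    const int confIdx  = engine.getBindingIndex("outputs/conf");
    const int pafIdx   = engine.getBindingIndex("outputs/paf");
    assert(imageIdx >= 0 && confIdx >= 0 && pafIdx >= 0);

    // The binding order no longer matters: each buffer goes into the
    // slot the engine assigned to that tensor.
    buffers[imageIdx] = imageBuf;
    buffers[confIdx]  = confBuf;
    buffers[pafIdx]   = pafBuf;
}

This still leaves the question of why 5.0.3 orders the outputs differently from 3.0.4, but it makes the inference code independent of that order.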

Thanks for your help!

Hi,

Thanks for your file.
We will check this internally and update you with more information later. :)

Hi,

We wanted to test your model with TensorRT 3.0 but found there is no UFF support.
Would you mind sharing the details of your environment? Is it on a Jetson?

Thanks.

There is definitely UFF support; the documentation is here: https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt_304/tensorrt-api/topics/classnvuffparser_1_1_i_uff_parser.html. I am using a Jetson TX2 with JetPack 3.2.1.
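
For reference, here is roughly how I parse the UFF with the 3.0.4 C++ API; a minimal sketch with error handling elided (the input dims are from my model):

#include "NvInfer.h"
#include "NvUffParser.h"

using namespace nvinfer1;
using namespace nvuffparser;

// network: an INetworkDefinition obtained from an IBuilder.
bool parseUff(const char* uffFile, INetworkDefinition& network)
{
    IUffParser* parser = createUffParser();
    // TensorRT 3.0 registers CHW input dims directly.
    parser->registerInput("image", DimsCHW(3, 368, 432));
    parser->registerOutput("outputs/conf");
    parser->registerOutput("outputs/paf");
    const bool ok = parser->parse(uffFile, network, DataType::kFLOAT);
    parser->destroy();
    return ok;
}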

Hi,

There may be a mistake on our side.
Let us check further.

Sorry about that.
Thanks.

Hi,

Sorry about the incorrect message in comment #7.
We found there is no UFF support in the giexec sample, but we didn’t check the TensorRT API itself. Sorry about the mistake.

When we try to convert your model, we hit the following error with both v3.0 and v5.0:

[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 1:1: Invalid control characters encountered in text.
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 2:22: Invalid control characters encountered in text.
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 2:28: String literals cannot cross line boundaries.
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 2:24: Message type "ditcaffe.NetParameter" has no field named "tensorflow_extension".

This is odd to us, since these errors come from the Caffe model parser (ditcaffe.NetParameter).
We are still working on this. Have you seen this error before?

By the way, one possible issue for your reference: our UFF converter is updated alongside the TensorRT API.
Have you re-generated the .uff file with the TensorRT 5.0 UFF converter?
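
If it was generated with an older converter, re-generating from the frozen TensorFlow graph with the converter that ships with TensorRT 5.0 may help. Assuming a frozen .pb, the command would look something like this (the file names are placeholders; please check convert-to-uff -h for the exact options in your install):

convert-to-uff frozen_graph.pb -o model_trt5.uff -O outputs/conf -O outputs/paf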

Thanks.

No, I haven’t seen that error. The model is TensorFlow-based, by the way.

The main issue I’m having is that the output bindings are switched on TensorRT 5.0.4 (Jetson Xavier with JetPack 4.1.1).

If I run this on the Jetson Xavier:

./trtexec --uff=<uff_file> --uffInput=image,3,256,384 --output=outputs/conf --output=outputs/paf

This is my output:

name=outputs/conf, bindingIndex=2, buffers.size()=3
name=outputs/paf, bindingIndex=1, buffers.size()=3

This is the correct output:

name=outputs/conf, bindingIndex=1, buffers.size()=3
name=outputs/paf, bindingIndex=2, buffers.size()=3

Hi,

Thanks. We are using JetPack 4.2 for TensorRT 5.0.
We will give JetPack 4.1.1 a try.

Hi,

We can reproduce this issue now.

May I know which TensorRT version was used to generate the model you shared?
Is it v5.0? If not, would you mind generating one with v5.0 and sharing it with us?

Thanks.

The previous model was built with TensorRT 3.

I’ve sent you the new model built with TensorRT 5 via direct message. It fails with a concat error on my Xavier.

Thanks.

We will check it and update you with more information later.

Hi,

Sorry for keeping you waiting.

To give a further suggestion, would you mind sharing your .pb file with us?
Thanks.