How to start TensorRT on TX1?

Hi,
I want to use NVIDIA TensorRT to accelerate my deep network.
I installed GIE (TensorRT) on the Jetson TX1.
When I tried to use TensorRT, I couldn’t find any documentation about it on the NVIDIA website.

Where can I find the documentation for TensorRT and the TensorRT API?

Thanks!

Hi,

TensorRT’s documentation is located at ‘/usr/share/doc/gie/doc’ on the device.

We also have some sample code to demonstrate how to use TensorRT on Tegra:

Hi AastaLLL,
Thanks for your reply.
I have compiled jetson-inference on the Jetson TX1 board successfully.
But when I run imagenet-console, some problems appear:

[GIE] attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] cache file not found, profiling network model
[GIE] platform has FP16 support.
[GIE] loading networks/googlenet.prototxt networks/bvlc_googlenet.caffemodel
[libprotobuf FATAL ../../../externals/protobuf/aarch64/10.0/include/google/protobuf/repeated_field.h:1378] CHECK failed: (index) < (current_size_):
terminate called after throwing an instance of ‘google::protobuf::FatalException’
what(): CHECK failed: (index) < (current_size_):
Aborted

In addition, I can’t open https://drive.google.com, so the network models and pretrained nets for ped-100, multiped-500, and facenet can’t be downloaded. https://nvidia.box.com also can’t be opened.

tar: Child returned status 1
tar: Error is not recoverable: exiting now
--2016-02-12 01:20:27--  https://nvidia.box.com/shared/static/y1mzlwkmytzwg2m7akt7tcbsd33f9opz.gz
Resolving nvidia.box.com (nvidia.box.com)... 107.152.27.197, 107.152.26.197
Connecting to nvidia.box.com (nvidia.box.com)|107.152.27.197|:443... connected.
Unable to establish SSL connection.

So could you send the network data to my email, mengdi2012@163.com?

Thanks very much!

Hi,

Thanks for your feedback.

We just confirmed that the ‘imagenet-console’ sample and the model links work properly.
It looks like there is an internet issue on your device.

Could you check whether SSH works normally?
Or is there any service on your network blocking the connection?
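As a quick check, something like the following Python sketch (a hypothetical helper, not part of jetson-inference) can probe from the device whether the model host is reachable:

```python
import urllib.request

def can_reach(url, timeout=10):
    """Return True if the URL responds without error within `timeout` seconds."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.getcode() < 400
    except Exception:
        # DNS failure, SSL error, timeout, malformed URL, HTTP error, etc.
        return False

# Example: probe the model host used by jetson-inference
# print(can_reach("https://nvidia.box.com"))
```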

I have also compiled jetson-inference on the Jetson TX1 board successfully.
But when I run imagenet-console, the same problems appear:

[GIE] attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] cache file not found, profiling network model
[GIE] platform has FP16 support.
[GIE] loading networks/googlenet.prototxt networks/bvlc_googlenet.caffemodel
[libprotobuf FATAL ../../../externals/protobuf/aarch64/10.0/include/google/protobuf/repeated_field.h:1378] CHECK failed: (index) < (current_size_):
terminate called after throwing an instance of ‘google::protobuf::FatalException’
what(): CHECK failed: (index) < (current_size_):
Aborted

Have you solved the problem?
Please help me! Thanks!

Hi,

Does this command hit the error?

./imagenet-console orange_0.jpg output_0.jpg

We just tried it and it works well in our environment.
Could you attach the photo you tested so we can help debug?

Thanks.

Yes, the command
./imagenet-console orange_0.jpg output_0.jpg
hits the error!

I solved the problem by replacing bvlc_googlenet.caffemodel with the one from https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet.

Maybe the original caffemodel is corrupted.
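One way to spot a truncated or corrupted download is to checksum the file and compare it against a known-good copy. A minimal Python sketch (the reference hash would have to come from your own known-good download; none is published in this thread):

```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Compute the MD5 of a file in 1 MiB chunks, so large caffemodels don't fill RAM."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            md5.update(block)
    return md5.hexdigest()

# Usage: compare against the hash of a copy that is known to work
# print(file_md5("networks/bvlc_googlenet.caffemodel"))
```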

Hi,

Thanks for your feedback.

We download googlenet by this command in CMakePreBuild.sh:

wget --no-check-certificate 'https://nvidia.box.com/shared/static/at8b1105ww1c5h7p30j5ko8qfnxrs0eg.caffemodel' -O bvlc_googlenet.caffemodel

We confirmed that the model downloads completely and jetson-inference now runs successfully.
We are not sure what caused the error, but we will keep monitoring the status of our server.

Thanks for the information.

Hi all, I would like to enquire about methods to use TensorRT for Faster R-CNN with a ZF/VGG16 model. I’m trying to carry out real-time object detection using Faster R-CNN on a Jetson TX1. I know that for convenience I should use DetectNet instead; however, I was assigned to use the Faster R-CNN framework. With ./jetson_clocks.sh, the fastest detection time was 0.48 s for 300 object proposals. As such, I would like to make use of TensorRT to reduce the detection time.

I researched and read many forum threads, including https://github.com/dusty-nv/jetson-inference, but I’m still confused about how to implement TensorRT for a Faster R-CNN Caffe model. I tried executing

./giexec --model=/usr/src/gie_samples/samples/data/samples/googlenet/googlenet.caffemodel --deploy=/usr/src/gie_samples/samples/data/samples/googlenet/googlenet.prototxt --output=prob --half2=true --batch=12

and I got around 63 ms (with --batch=2, I get around 14 ms). Thus, I would like to use a small net such as ZF to run my detection task.
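For comparison, those per-batch latencies can be converted into throughput; the larger batch costs more latency per run but amortizes better. A quick back-of-the-envelope calculation using the numbers above:

```python
# Throughput = batch size / per-batch latency, from the giexec runs quoted above.
throughput_b12 = 12 / 0.063   # batch 12 at ~63 ms per batch
throughput_b2 = 2 / 0.014     # batch 2 at ~14 ms per batch

print(f"batch 12: {throughput_b12:.0f} img/s, batch 2: {throughput_b2:.0f} img/s")
```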

I have trouble understanding and following the sample code on the dusty-nv/jetson-inference page, as I do not know which file to edit, which parts to edit, etc. Are there any guides or other websites that are more comprehensive which you can recommend?

Alternatively, I tried to run demo.py (I’m using py-faster-rcnn) with googlenet.caffemodel, but I ran into “Check failed: K_ == new_K (1024 vs. 281600) Input size incompatible with inner product parameters.” I read that I should convert the inner-product layer into a fully convolutional layer to solve the problem, but I really have no idea how to do it.
Also, how do I enable TensorRT when running detection with VGG16 or ZF?
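The “K_ == new_K (1024 vs. 281600)” check fails because the InnerProduct layer expects a flattened 1024-element input but receives a 281600-element blob. The standard Caffe “net surgery” fix reuses the same weights as convolution kernels. A numpy sketch of the idea (the 1000/1024 shapes match GoogLeNet’s classifier, but treat them as illustrative):

```python
import numpy as np

# An InnerProduct layer stores weights as (num_output, K) with K = C * H * W.
# Reshaping them to (num_output, C, H, W) turns the layer into an equivalent
# Convolution layer whose kernel covers the whole input blob, so it no longer
# hard-codes the flattened input size.
num_output, C, H, W = 1000, 1024, 1, 1
fc_weights = np.arange(num_output * C * H * W, dtype=np.float32).reshape(num_output, C * H * W)
conv_weights = fc_weights.reshape(num_output, C, H, W)  # same values, new layout

print(conv_weights.shape)  # (1000, 1024, 1, 1)
```

In the prototxt, the corresponding change is to replace the InnerProduct layer with a Convolution layer whose kernel size equals the input blob’s spatial size, then copy the reshaped weights in.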

Thank you and I would really appreciate any help given!

Hi,

ROI pooling is not supported by TensorRT, so you need to implement it yourself.

For more details, please refer to this topic:
https://devtalk.nvidia.com/default/topic/1008935
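For reference, here is a numpy sketch of the ROI max-pooling computation itself (as described in the Fast/Faster R-CNN papers), i.e. what a custom implementation would need to reproduce. It is a reference model, not TensorRT plugin code:

```python
import numpy as np

def roi_max_pool(feat, roi, out_h, out_w):
    """Reference ROI max-pooling over one region.

    feat: (C, H, W) feature map.
    roi:  (x1, y1, x2, y2) inclusive coordinates in feature-map space.
    Splits the ROI into an out_h x out_w grid of bins and takes the max of
    each bin (assumes the ROI spans at least out_h x out_w cells).
    """
    x1, y1, x2, y2 = roi
    roi_h = max(y2 - y1 + 1, 1)
    roi_w = max(x2 - x1 + 1, 1)
    out = np.zeros((feat.shape[0], out_h, out_w), dtype=feat.dtype)
    for i in range(out_h):
        ys = y1 + (i * roi_h) // out_h                       # bin start (floor)
        ye = y1 + ((i + 1) * roi_h + out_h - 1) // out_h     # bin end (ceil)
        for j in range(out_w):
            xs = x1 + (j * roi_w) // out_w
            xe = x1 + ((j + 1) * roi_w + out_w - 1) // out_w
            out[:, i, j] = feat[:, ys:ye, xs:xe].max(axis=(1, 2))
    return out
```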

I cannot download any of the caffemodel files.

Is there any other way I can get the caffemodels, e.g. via Baidu Netdisk?

Hi,

The pretrained models are on an NVIDIA Box server.
Please remember to check your network connectivity first.

The model links can be found here:
https://github.com/dusty-nv/jetson-inference/blob/master/CMakePreBuild.sh#L30