Hi,
I want to use NVIDIA TensorRT to accelerate my deep network.
I have installed GIE (TensorRT) on the Jetson TX1.
Now that I want to use TensorRT, I cannot find any documentation about it on the NVIDIA website.
Where can I get documentation for TensorRT and the TensorRT API?
Hi AastaLLL,
Thanks for your reply.
I have compiled jetson-inference on the Jetson TX1 board successfully.
But when I run imagenet-console, some problems appear:
[GIE] attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] cache file not found, profiling network model
[GIE] platform has FP16 support.
[GIE] loading networks/googlenet.prototxt networks/bvlc_googlenet.caffemodel
[libprotobuf FATAL …/…/…/externals/protobuf/aarch64/10.0/include/google/protobuf/repeated_field.h:1378] CHECK failed: (index) < (current_size_):
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): CHECK failed: (index) < (current_size_):
Aborted
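For anyone hitting the same CHECK failure: both a truncated model download (see the follow-up below) and a deploy/weights pair that do not belong together can end in this kind of protobuf abort. A minimal sanity check, assuming pycaffe's generated protobuf bindings are importable, is to parse the two files separately and compare layer names:

```python
# Rough consistency check between a deploy prototxt and a caffemodel.
# Assumes pycaffe's generated protobuf bindings (caffe.proto) are on PYTHONPATH.
from caffe.proto import caffe_pb2
from google.protobuf import text_format

deploy = caffe_pb2.NetParameter()
text_format.Merge(open('networks/googlenet.prototxt').read(), deploy)

weights = caffe_pb2.NetParameter()
weights.ParseFromString(open('networks/bvlc_googlenet.caffemodel', 'rb').read())

# 'layer' is the current field; very old models use the deprecated 'layers'
deploy_names = {l.name for l in deploy.layer} or {l.name for l in deploy.layers}
weight_names = {l.name for l in weights.layer} or {l.name for l in weights.layers}

# Parameterless layers may legitimately appear only in the deploy file,
# so treat this as a rough check, not an exact diff.
print('layers in deploy but not in weights: %s'
      % sorted(deploy_names - weight_names))
```

If ParseFromString itself throws, the caffemodel file is likely corrupt or incomplete.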
In addition, I can't open https://drive.google.com, so the network models and pretrained nets for ped-100, multiped-500, and facenet can't be downloaded. https://nvidia.box.com also can't be opened.
Confirmed that the models can be completely downloaded and jetson-inference can run successfully now.
Not sure what caused the error, but we will keep monitoring the status of our server.
Hi all, I would like to enquire further about methods to use TensorRT on Faster R-CNN with a ZF/VGG16 model. I'm trying to carry out real-time object detection using Faster R-CNN on a Jetson TX1. I know that for convenience I should use DetectNet instead; however, I was assigned to use the Faster R-CNN framework. With ./jetson_clocks.sh, the fastest detection took 0.48 s for 300 object proposals. As such, I would like to make use of TensorRT to reduce the detection time.
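For context, that 0.48 s number is the kind of figure the stock py-faster-rcnn demo reports; a minimal sketch of the measurement, using the demo's own timer and the standard ZF model paths from the repository (paths here are assumptions, adjust to your setup):

```python
# Timing a single Faster R-CNN detection with py-faster-rcnn's own utilities.
# Model/image paths follow the stock demo.py layout and are assumptions here.
import caffe, cv2
from fast_rcnn.config import cfg
from fast_rcnn.test import im_detect
from utils.timer import Timer

cfg.TEST.HAS_RPN = True   # proposals come from the RPN, not precomputed boxes
caffe.set_mode_gpu()

net = caffe.Net('models/pascal_voc/ZF/faster_rcnn_alt_opt/faster_rcnn_test.pt',
                'data/faster_rcnn_models/ZF_faster_rcnn_final.caffemodel',
                caffe.TEST)
im = cv2.imread('data/demo/000456.jpg')

timer = Timer()
timer.tic()
scores, boxes = im_detect(net, im)  # full forward pass incl. proposal generation
timer.toc()
print('Detection took %.3fs for %d object proposals'
      % (timer.total_time, boxes.shape[0]))
```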
I researched and read up on a lot of forums, including the dusty-nv/jetson-inference GitHub repository (Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson), but I'm still confused about how to apply TensorRT to a Faster R-CNN Caffe model. I tried executing ./giexec --model=/usr/src/gie_samples/samples/data/samples/googlenet/googlenet.caffemodel --deploy=/usr/src/gie_samples/samples/data/samples/googlenet/googlenet.prototxt --output=prob --half2=true --batch=12 and got around 63 ms (with --batch=2, I get around 14 ms). Thus, I would like to use a small net such as ZF to run my detection task.
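Dividing those batch timings by the batch size makes the comparison clearer; a quick back-of-the-envelope check using only the numbers quoted above:

```python
# Per-image latency derived from the giexec batch timings quoted above.
timings_ms = {12: 63.0, 2: 14.0}  # batch size -> measured ms per batch
for batch, ms in sorted(timings_ms.items()):
    print('batch %2d: %5.1f ms/batch -> %.2f ms/image' % (batch, ms, ms / batch))
```

So the larger batch is actually cheaper per image (about 5.3 ms vs. 7 ms), even though each batch call takes longer.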
I have trouble understanding and following the sample code on the dusty-nv/jetson-inference page, as I do not know which file to edit, which part to edit, and so on. Are there any more comprehensive guides or other websites that you can recommend?
Alternatively, I tried to run demo.py (I'm using py-faster-rcnn) with googlenet.caffemodel, but I ran into "Check failed: K_ == new_K (1024 vs. 281600) Input size incompatible with inner product parameters." I read that I should convert the inner product layer into a fully convolutional layer to solve the problem, but I really have no idea how to do it.
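For reference, the usual recipe for this is Caffe's "net surgery": make a copy of the deploy prototxt in which each InnerProduct layer is replaced by a Convolution layer whose num_output is unchanged and whose kernel_size equals the spatial size of the layer's input blob, then copy the learned parameters across. A minimal sketch, with purely illustrative layer names ('fc6'/'fc6-conv' and so on, not taken from the actual model):

```python
# Caffe "net surgery": copy InnerProduct parameters into equivalent
# Convolution layers. Assumes deploy_full_conv.prototxt already exists and
# mirrors deploy.prototxt with fc6/fc7 replaced by fc6-conv/fc7-conv.
import caffe

net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)
net_conv = caffe.Net('deploy_full_conv.prototxt', 'weights.caffemodel', caffe.TEST)

for fc, conv in [('fc6', 'fc6-conv'), ('fc7', 'fc7-conv')]:
    # weights: the (num_output, K) inner-product matrix is reinterpreted as a
    # (num_output, channels, kh, kw) convolution kernel; biases copy directly
    net_conv.params[conv][0].data.flat = net.params[fc][0].data.flat
    net_conv.params[conv][1].data[...] = net.params[fc][1].data

net_conv.save('weights_full_conv.caffemodel')
```

The resulting fully convolutional model no longer hard-codes the input size that triggered the K_ == new_K check.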
Also, how do I enable the use of TensorRT when running detection with VGG16 or ZF?
Thank you, and I would really appreciate any help!