TensorRT 3.0 RC now available with support for TensorFlow

Hi, according to issue 12052 on the TensorFlow GitHub, cuDNN 7.0 works when building TensorFlow 1.4 from source, and support is planned for the v1.5 binary.

Another workaround: you can try installing a CPU-only TensorFlow package. For importing a TF model, the CPU-only package should be enough.

Hi,
I installed TensorRT 3.0 on the TX1, but I cannot install the Python API. Is there another way to convert TensorFlow models to UFF models?

Hi, please run TensorFlow on a PC (x86_64) first to convert the model to UFF, then copy it over to the TX1.

Hi,
I still don’t know how to convert TensorFlow models to UFF models on a PC (x86_64). Do you mean that TensorFlow has some API to do this? Can you give me some advice? Thanks!

Please see section 2.3.2.1.2 ‘Converting a TensorFlow Model to UFF’ from the TensorRT 3.0 RC User Guide.
The UFF Toolkit provides the API to perform the model conversion.
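For reference, the conversion itself is a short Python call on the x86_64 host. A minimal sketch of what it looks like, where the file name "frozen_model.pb" and the output node name "scores" are placeholders for your own model (see the UFF toolkit documentation in the tarball for the exact options):

# Minimal sketch of the TensorFlow -> UFF conversion, run on the x86_64 host
# where the UFF toolkit from the TensorRT 3.0 RC tarball is installed.
# "frozen_model.pb" and "scores" are placeholders for your own file and
# output node names.
import uff

uff_model = uff.from_tensorflow_frozen_model(
    "frozen_model.pb",            # frozen GraphDef exported from TensorFlow
    ["scores"],                   # names of the graph's output nodes
    output_filename="model.uff")  # serialized UFF file to copy to the TX1

The resulting model.uff is the file you copy over to the TX1 and load with the UFF parser there.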

Hi,

my GPU is a Titan GPU

and running sample-fasterRCNN I get this error:

‘nvinfer1::CudaError’ in createInferBuilder(gLogger)

Is this caused by the Titan GPU being unsupported?

THX

Hello,

Please bear with me, I am new to TensorRT. I have just finished training a few models using the TensorFlow Slim interface and I managed to get the models frozen into a frozen.pb file. Will it be possible for me to convert these .pb files to .uff to be used with TensorRT 3 RC? I realize the release notes state that “The TensorFlow to TensorRT model export does not work with network models specified using the TensorFlow Slim interface, nor does it work with models specified using the Keras interface.” However, does this refer to frozen models produced using these interfaces as well? Thank you for your patience, and I appreciate any reply!

Are you running the tarball for x86_64 or on the Jetson? Do any of the other samples run, or does only Faster-RCNN fail?

Note that you may also want to post on the CUDA board regarding Titan-X support.

Hi, I am not very familiar with TensorFlow frozen graphs. Do the networks still use Keras/Slim layers internally after the freezing, or are those eliminated as part of the freezing process? If the latter, it may stand a chance of working, but I’m not sure.
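If you want to check, you can load the frozen .pb on the x86_64 host and list the op types it still contains; anything exotic left over from the Slim/Keras layers will show up there. A rough sketch, with "frozen_model.pb" as a placeholder for your own file:

# Rough sketch: list the op types remaining in a frozen graph, to see whether
# any Slim/Keras-specific training ops survived the freezing.
# "frozen_model.pb" is a placeholder for your own file.
import tensorflow as tf

graph_def = tf.GraphDef()
with open("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

print(sorted({node.op for node in graph_def.node}))

If the printed ops are all plain convolution/activation/pooling types, the UFF converter has a better chance of handling the graph.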

I want to know how to install the TensorRT API for Python 3, thanks.

Does TensorRT 3.0.0 support the dropout operation from TensorFlow?

Hi, please download the TensorRT 3.0 RC tarball for x86_64 and look in the extracted python/ directory. The TensorRT python API isn’t available for ARM/Jetson in the TensorRT 3.0 RC.

Dropout doesn’t appear in the list of supported TensorFlow layers in section 2.3.2.2.4 of the TensorRT 3.0 RC User Guide, so it may not be supported.
However, the PyTorch example (section 2.3.3.1) does use a Dropout2D layer. You could use the TensorRT custom plugin API to implement it yourself.
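Also worth noting (this is not from the User Guide): dropout is normally inactive at inference time anyway, so instead of writing a plugin you could simply build the graph you freeze without the dropout op at all. A rough sketch with toy layer sizes and placeholder names:

# Rough sketch (toy sizes, placeholder names): build the *inference* graph
# without the dropout op before freezing, so the frozen .pb handed to the
# UFF converter never contains dropout in the first place.
import tensorflow as tf

def build_net(x, training):
    h = tf.layers.dense(x, 128, activation=tf.nn.relu, name="fc1")
    if training:                   # Python-level flag: for training=False the
        h = tf.nn.dropout(h, 0.5)  # dropout op is simply never added
    return tf.layers.dense(h, 10, name="scores")

x = tf.placeholder(tf.float32, [None, 64], name="input")
logits = build_net(x, training=False)  # graph used for freezing / UFF export

You would restore your trained weights into this dropout-free graph before freezing it.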

Hello,

I have installed TensorRT 3.0 RC on the Jetson TX2 using the .deb package. After the installation, I see that both TRT 2.1 and 3.0 are currently present.

sudo dpkg -l | grep TensorRT
[sudo] password for nvidia: 
ii  libnvinfer-dev                                         4.0.0-1+cuda8.0                              arm64        TensorRT development libraries and headers
ii  libnvinfer-samples                                     4.0.0-1+cuda8.0                              arm64        TensorRT samples and documentation
ii  libnvinfer3                                            3.0.2-1+cuda8.0                              arm64        TensorRT runtime libraries
ii  libnvinfer4                                            4.0.0-1+cuda8.0                              arm64        TensorRT runtime libraries
ii  tensorrt                                               3.0.0-1+cuda8.0                              arm64        Meta package of TensorRT
ii  tensorrt-2.1.2                                         3.0.2-1+cuda8.0                              arm64        Meta package of TensorRT

How do I make sure that TensorRT 3.0 is used when I try to run an example application like ‘jetson-inference’?

Edit:

With both versions present on my system, jetson-inference fails. See the GDB output below:

[GIE]  TensorRT version 2.1, build 2102
[GIE]  attempting to open cache file detectNet_vehicle_snapshot_iter_2240.caffemodel.2.tensorcache
[GIE]  loading network profile from cache... detectNet_vehicle_snapshot_iter_2240.caffemodel.2.tensorcache
[GIE]  platform has FP16 support.
[GIE]  detectNet_vehicle_snapshot_iter_2240.caffemodel loaded

Program received signal SIGSEGV, Segmentation fault.
0x0000007fb20553c0 in nvinfer1::cudnn::deserializeDims(ifb::Dims const&) ()
   from /usr/lib/aarch64-linux-gnu/libnvinfer.so.4
(gdb) backtrace 
#0  0x0000007fb20553c0 in nvinfer1::cudnn::deserializeDims(ifb::Dims const&) ()
   from /usr/lib/aarch64-linux-gnu/libnvinfer.so.4
#1  0x0000007fb1fd7b78 in nvinfer1::cudnn::Engine::deserialize(void const*, unsigned long, nvinfer1::IPluginFactory*) ()
   from /usr/lib/aarch64-linux-gnu/libnvinfer.so.4
#2  0x0000007fb1fd0fe0 in nvinfer1::Runtime::deserializeCudaEngine(void const*, unsigned long, nvinfer1::IPluginFactory*) ()
   from /usr/lib/aarch64-linux-gnu/libnvinfer.so.4
#3  0x0000007fb7eb053c in tensorNet::LoadNetwork(char const*, char const*, char const*, char const*, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, unsigned int) ()
   from /home/nvidia/vehicle-tracking-jetson/build/aarch64/lib/libjetson-vehicle-tracking.so
#4  0x0000007fb7ea9984 in detectNet::Create(char const*, char const*, float, float, char const*, char const*, char const*, unsigned int) ()
   from /home/nvidia/vehicle-tracking-jetson/build/aarch64/lib/libjetson-vehicle-tracking.so
#5  0x0000007fb7eaa064 in detectNet::Create(int, char**) ()
   from /home/nvidia/vehicle-tracking-jetson/build/aarch64/lib/libjetson-vehicle-tracking.so
#6  0x00000000004044a8 in main ()
(gdb) quit

File names are a bit different because I derived my video-based work from detectnet-camera.

Any help on the above is really appreciated. I cannot even go back to version 2.1 now. Whenever I install it, the 3.0 version is installed automatically, along with libnvinfer.so.4.

I believe the packages use symbolic links to point the base libnvinfer.so at the specific version (i.e. libnvinfer.so.4). The CMakeLists.txt from jetson-inference links against nvinfer, so it uses the latest TensorRT.

Have you tried using the dpkg --purge command to remove the offending package and starting fresh?

Normally, in a production release, JetPack would only install one version of TensorRT (the latest), so you wouldn’t have to worry about this mismatch. If you continue to experience the issue with the TensorRT 3.0 RC, you may want to start with a fresh JetPack install and install the RC from the provided tarballs so you can retain better control over the linking.

Hi,

Thanks for your comments.

I did purge all the TensorRT packages and then tried to install TensorRT 2.1 using ‘apt-get’. When I do that, the 3.0 packages are also installed (as I mentioned in my previous comment).

I tried this as well. I downloaded the latest JetPack and then installed TensorRT using JetPack. However, I did not install any other packages, nor did I flash the Jetson again by proceeding with a custom installation. The .deb file from JetPack also installed the TensorRT 3.0 related packages. Did you mean I should have re-installed everything and re-flashed the Jetson?

For now, after removing all the TRT packages, I installed 3.0 only. I am currently building jetson-inference and will test it soon. Will keep you posted.

Update:
The last approach worked.

Thanks,
Bhargav

Hi Bhargav, thanks for reporting how you got it to work. JetPack 3.1 should not be installing TensorRT 3.0 RC, only TensorRT 2.1, so that may be anomalous behavior. Perhaps the package was still left over somehow. Anyways, this should all be much easier and automated in the production release (next JetPack). Thanks for bearing with us on this one!

I am pretty sure I had purged everything before I started the installation from JetPack. At least ‘sudo dpkg -l | grep TensorRT’ didn’t show anything.

I had removed the installation files from the host, and the .deb file was downloaded again while I was installing. You might want to check it, depending on the time you can spend.

Can you comment on when you guys are planning to release the next JetPack?

Sorry, I can’t comment on the release date. Hopefully soon!