I have some questions about the TX2; we want to migrate from an Intel platform.

1. Our original code is C++ with a Keras model (using cuDNN + CUDA for direct computation). We think the first step is not to worry too much about speed, but simply to get it running the same way on the TX2. We want to know:
Can we use the TX2 as a Linux computer to run our Qt + cuDNN algorithm program, without using TensorRT for the moment?
2. Can we install the Qt development environment and GCC/g++ directly on the TX2 to port the C/C++ code I wrote with cuDNN, instead of using a host + target (cross-compilation) setup? I have done this on the RK3399.
3. I have seen TensorRT Developer Guide section 2.2.1, "Creating A Network Definition From Scratch Using The C++ API":
Instead of using a parser, you can also define the network directly to TensorRT via the network definition API. This scenario assumes that the per-layer weights are ready in host memory to pass to TensorRT during the network creation.
This suggests the network can be fully customized; is that true? If so, please tell me which is faster, and by how much, compared with our custom cuDNN network.
Thanks a lot.

Hi,

1.
You can install the ARM version of the Qt package to get Qt support.
We are not sure which backend framework you are using for Keras.
If you are using TensorFlow, please install it with the command here:
[url]https://devtalk.nvidia.com/default/topic/1038957/jetson-tx2/tensorflow-for-jetson-tx2-/[/url]

2.
We suppose so, yes.

3.
Supposing your Keras backend is TensorFlow, here is a comparison between TensorRT and TensorFlow for your reference:
[url]https://devblogs.nvidia.com/tensorrt-integration-speeds-tensorflow-inference/[/url]

Thanks.

Hi Asasta, thanks for your reply.
About question 3, I am still a bit confused.
Can I use TensorRT with a fully customized network, without using the parser? Is that OK?

Hi,

Would you mind sharing more detail about what you are trying to do?

Are you trying to implement a custom layer for your model?
If so, it looks like you don’t need TensorRT for that.

Or do you not want to use our parser, and instead want to create the TensorRT layers on your own?
This is possible but risky: it requires you to know the full mapping between each framework operation and the corresponding TensorRT layer.
Since our parser is open-sourced now, you should be able to get the information from GitHub directly.
[url]https://github.com/NVIDIA/TensorRT/tree/master/parsers[/url]
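
For reference, below is a minimal sketch of what building a network from scratch with the C++ network definition API can look like, assuming a single fully-connected layer plus ReLU whose weights are already in host memory. The layer shapes, names, and weight buffers are placeholders, and the calls follow the TensorRT 5.x API (newer releases replace createNetwork() and buildCudaEngine() with createNetworkV2() and buildEngineWithConfig()):

[code]
#include <NvInfer.h>
#include <vector>
#include <iostream>

using namespace nvinfer1;

// Minimal logger required by the builder.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Per-layer weights are assumed to be ready in host memory,
    // e.g. exported from the Keras model. Sizes here are placeholders.
    std::vector<float> fcWeights(10 * 1 * 28 * 28, 0.f);
    std::vector<float> fcBias(10, 0.f);

    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();

    // Define the network layer by layer instead of using a parser.
    ITensor* data = network->addInput("data", DataType::kFLOAT, Dims3{1, 28, 28});

    Weights w{DataType::kFLOAT, fcWeights.data(), static_cast<int64_t>(fcWeights.size())};
    Weights b{DataType::kFLOAT, fcBias.data(), static_cast<int64_t>(fcBias.size())};
    IFullyConnectedLayer* fc = network->addFullyConnected(*data, 10, w, b);

    IActivationLayer* relu = network->addActivation(*fc->getOutput(0), ActivationType::kRELU);
    relu->getOutput(0)->setName("prob");
    network->markOutput(*relu->getOutput(0));

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 20);
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    // ... run inference with an IExecutionContext, then clean up.
    engine->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
[/code]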

Thanks.

OK, thanks. We want to implement a custom layer for our own model.
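
For illustration, if the custom layer stays in your own cuDNN/CUDA pipeline (as suggested above), it can simply be a CUDA kernel launched between the cuDNN calls. A minimal sketch, with leaky ReLU as a stand-in for the real operation and all names hypothetical:

[code]
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical element-wise custom layer (leaky ReLU used as a
// stand-in for the real operation).
__global__ void customLayerKernel(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] > 0.f ? in[i] : 0.1f * in[i];
}

// Launch helper: call this between the surrounding cuDNN layer calls,
// on the same stream, so the custom layer runs in sequence with them.
void customLayerForward(const float* in, float* out, int n, cudaStream_t stream)
{
    int block = 256;
    int grid = (n + block - 1) / block;
    customLayerKernel<<<grid, block, 0, stream>>>(in, out, n);
}

int main()
{
    const int n = 1 << 10;
    float *in, *out;
    cudaMalloc(&in, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));

    customLayerForward(in, out, n, 0);
    cudaDeviceSynchronize();
    printf("custom layer done\n");

    cudaFree(in);
    cudaFree(out);
    return 0;
}
[/code]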