Import PyTorch model to TensorRT on Jetson TX2

Hello everybody,
I have a PyTorch trained model. I want to import that model to TensorRT for optimization on Jetson TX2. TensorRT3.0 have a example with PyTorch for Python API,but Jetson TX2 only support C++ API. So anybody have experience witch C++ API for PyTorch?

Hi,

[s]Similar workflow to the TensorFlow model:

1. Convert the model to UFF with the Python API on an x86 machine
Check the sample /usr/local/lib/python2.7/dist-packages/tensorrt/examples/pytorch_to_trt/

2. Import the UFF model with the C++ interface on Jetson
Check the sample /usr/src/tensorrt/samples/sampleUffMNIST/[/s]

Thanks.

From the PyTorch model, I create a TensorRT engine and save it to disk. How can I then deploy the engine to the Jetson? I read the TensorRT documentation but can't find where this is mentioned.

Hi,

[s]You will get a UFF file for your PyTorch model.
Copy it to the Jetson and create a TensorRT engine from it.

You can find more information in 3.3. SampleUffMNIST UFF Usage of our documentation.
[/s]
Thanks.

Sorry, I can't find the pytorch_to_trt example on my x86 machine. Can you give me an example?

I found the

uff.from_tensorflow()

function. Can I pass it a network defined from PyTorch with the network-definition API as an argument, or does it only support TensorFlow models?

Hi, I have the same question. I want to load a PyTorch model (*.opt) with TensorRT, but I can't find the “/usr/local/lib/python2.7/dist-packages/tensorrt/examples/pytorch_to_trt/” demo and I don't know how to install it. Can you give us some advice?

Hi,

There is some incorrect information in my previous comment.
Let me check it internally and get back to you.

Thanks.

Thanks, waiting for your update.

Hi,

Sorry for the incorrect information before.
Currently, we don't support PyTorch models on Jetson.

To run a PyTorch model with TensorRT, you need to manually build a TensorRT engine through the Python interface.
Currently, the Python API is only available on x86 machines, not on Jetson.

Thanks.

If I only have a PyTorch model and don't know the network structure, what can I do to load this model with TensorRT on an x86 machine?

I also have this question. I first converted the PyTorch model to a Caffe model, handling pooling with floor (not ceil). When I convert the Caffe model, I get an error: error addmmbackward76: kernel weights has count 4096 but 16384 was expected. Then the engine assertion failed.

Can I use UFF to do that? Can the Python engine or network be stored as a file so that I can load it in my C++ project?

I only found an interface for converting a TensorFlow model to a UFF file; I tried to find a similar method for PyTorch, but failed.

I think the reason may be that PyTorch's forward pass uses a different pooling behavior than Caffe: when I convert the PyTorch model to Caffe, I change ceil mode to floor. I think a simple way to handle this would be an API option for this pooling difference.

Hi both,

Currently, the UFF parser only supports the TensorFlow format.
If you can convert the PyTorch model to Caffe, creating a TensorRT engine with the Caffe parser might be an alternative.

Thanks.