TensorRT from Caffe proto with runtime-defined input dimensions.

Hello, I have been porting OpenPose (https://github.com/CMU-Perceptual-Computing-Lab/openpose) to TensorRT, and there is one thing I don't know how to do properly: using the ICaffeParser parse function while specifying the network input dimensions at runtime.
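For reference, here is roughly what my engine-building code looks like (a minimal sketch; the file paths and the output blob name are placeholders). The input dimensions are taken from the input_dim fields of the deploy prototxt, so I don't see where I could pass a resolution chosen at runtime:

#include <NvInfer.h>
#include <NvCaffeParser.h>
#include <iostream>

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Minimal logger required by the TensorRT builder.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

ICudaEngine* buildEngine()
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // The input dimensions come from the input_dim fields of the prototxt;
    // nothing here lets me pass a net resolution chosen at runtime.
    const IBlobNameToTensor* blobs = parser->parse(
        "pose_deploy_linevec.prototxt",   // placeholder deploy path
        "pose_iter_440000.caffemodel",    // placeholder weights path
        *network, DataType::kFLOAT);

    network->markOutput(*blobs->find("net_output"));  // placeholder blob name

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 30);
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    parser->destroy();
    network->destroy();
    builder->destroy();
    return engine;
}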

Any ideas on how I could do this?

My PR is visible here: Tensor RT by bushibushi · Pull Request #285 · CMU-Perceptual-Computing-Lab/openpose · GitHub
It was working at commit 8023fb1, but you have to follow the instructions described in the PR to work around the issue cited here.
The head of the PR branch is not working right now due to heavy refactoring on the repo that I haven't yet caught up with.

Hi,

Currently, TensorRT doesn't support changing the input dimensions on the fly.
The inference algorithms are chosen only once, when the engine is created.
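To illustrate (just a sketch, assuming an already-built engine): the binding dimensions are frozen at build time and can only be inspected afterwards; only the batch size, up to the value given to IBuilder::setMaxBatchSize, can vary per execute() call.

#include <NvInfer.h>
#include <iostream>

// Print the binding dimensions of an already-built engine. They are fixed at
// build time; only the batch size can vary when calling execute()/enqueue().
void printBindings(const nvinfer1::ICudaEngine& engine)
{
    for (int i = 0; i < engine.getNbBindings(); ++i)
    {
        const nvinfer1::Dims dims = engine.getBindingDimensions(i);
        std::cout << (engine.bindingIsInput(i) ? "input  " : "output ")
                  << engine.getBindingName(i) << ": ";
        for (int d = 0; d < dims.nbDims; ++d)
            std::cout << dims.d[d] << (d + 1 < dims.nbDims ? "x" : "\n");
    }
}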

May I know which Caffe layers you use?
Is there any opportunity to hardcode the input dimensions to a fixed value, e.g. the maximal value?

Thanks.

May I know which Caffe layers you use?

The prototxt file with all Caffe layers is here:
https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/models/pose/coco/pose_deploy_linevec.prototxt

Is there any opportunity to hardcode the input dimensions to a fixed value, e.g. the maximal value?

This is what I have been doing to make it work (roughly as in the sketch below), but I do not find it very clean.
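Concretely, the hack is something like the sketch below (the helper is only my illustration): read the deploy prototxt, rewrite its input_dim lines to the resolution wanted for this run, write the patched text to a temporary file, and pass that path to ICaffeParser::parse(). It works, but every change of resolution still means rebuilding the engine.

#include <fstream>
#include <regex>
#include <sstream>
#include <string>

// Rewrite the four input_dim lines of a Caffe deploy prototxt so the network
// is parsed with a resolution chosen at runtime. Sketch only; the returned
// text is then written to a temporary file whose path is given to
// ICaffeParser::parse().
std::string patchInputDims(const std::string& deployPath,
                           int batch, int channels, int height, int width)
{
    std::ifstream in(deployPath);
    std::stringstream buffer;
    buffer << in.rdbuf();

    std::ostringstream dims;
    dims << "input_dim: " << batch    << "\n"
         << "input_dim: " << channels << "\n"
         << "input_dim: " << height   << "\n"
         << "input_dim: " << width    << "\n";

    // Replace the existing block of four input_dim lines with the new values.
    static const std::regex dimLines("(input_dim:\\s*\\d+\\s*){4}");
    return std::regex_replace(buffer.str(), dimLines, dims.str());
}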

Thanks for your quick answer.

Hi,

Feel free to file a new topic if you need any help.
Thanks.

Deleted (created new post).