Does Nvinferserver support custom input order?

• Hardware Platform (GPU): Tesla T4
• DeepStream Version: 6.2
• TensorRT Version:
• NVIDIA GPU Driver Version: 515.65.01
• Issue Type: Question


Using nvinferserver, we have to pass an input tensor with this shape (4;3;64;224;224), but it throws an error.

0:00:00.353155940   324      0x3392400 ERROR          nvinferserver gstnvinferserver.cpp:407:gst_nvinfer_server_logger:<primary-inference_0> nvinferserver[UID 6]: Error in fixateInferenceInfo() <infer_cuda_context.cpp:135> [UID = 6]: InferContext(uid:6) cannot figure out input tensor order, please specify in config file(preprocess.)

We have referred to the nvinferserver documentation and also checked all the ‘tensor_order’ options, but we couldn’t find any relevant provision for this tensor order.

Is it feasible for nvinferserver to support this shape (4;3;64;224;224)?
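For context, this is roughly what the preprocess section of an nvinferserver config exposes; a sketch based on the Gst-nvinferserver config schema, with the surrounding fields purely illustrative. The tensor_order enum only covers 4-D layouts, which is why a 5-D NCDHW shape cannot be expressed here:

```
# Sketch of the nvinferserver preprocess block (prototxt).
# tensor_order accepts only the enum values listed below; none of
# them describes a 5-D NCDHW layout.
infer_config {
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    # Available values: TENSOR_ORDER_NONE,
    # TENSOR_ORDER_LINEAR (NCHW), TENSOR_ORDER_NHWC
    tensor_order: TENSOR_ORDER_LINEAR
    normalize {
      scale_factor: 0.0078125
    }
  }
}
```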

Dax Jain

This tensor order is not supported by nvinferserver directly; please see the tensor_order options in the nvinferserver documentation.

I suggest using nvpreprocess + nvinferserver to implement this. Here is a sample: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-action-recognition/deepstream_3d_action_recognition.cpp. Its tensor order is NCDHW; nvpreprocess is used to generate the tensor data, and nvinferserver infers on this tensor directly.
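In that sample, the NCDHW layout is declared in the nvdspreprocess config file rather than in the nvinferserver config. A minimal sketch follows; key names are from the Gst-nvdspreprocess plugin manual, while the values are assumptions adapted to the (4;3;64;224;224) case in this thread:

```
[property]
enable=1
target-unique-ids=1
# 5-D NCDHW tensor: batch;channel;depth;height;width
network-input-shape=4;3;64;224;224
# 0=NCHW, 1=NHWC, 2=custom layout (e.g. NCDHW)
network-input-order=2
processing-width=224
processing-height=224
# 0=FP32
tensor-data-type=0
tensor-name=input
```

The attached tensor meta is then consumed downstream by nvinferserver instead of the plugin's own 4-D-only preprocessing.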

Hi @fanzh, this sample /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-action-recognition/deepstream_3d_action_recognition.cpp uses nvpreprocess + nvinfer, not nvinferserver.

If the above statement is correct, how would it be possible? The sample app is for nvinfer-based DeepStream execution, not nvinferserver-based Triton Server execution.

Our objective is to use nvpreprocess with nvinferserver for NCDHW, not with nvinfer.

I hope this makes it clearer.



If you are using DeepStream 6.2, this sample already supports nvinferserver. Please refer to the README, especially the inference config file path:
‘triton-infer-config=config_triton_infer_primary_3d_action.txt’.

From DeepStream 6.2, nvinferserver supports tensor meta input; please see input_tensor_from_meta in the plugin documentation.
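A sketch of how the Triton inference config wires this up; the input_tensor_from_meta field name is from the Gst-nvinferserver plugin manual, while the model name and repository path are placeholders:

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 4
  backend {
    triton {
      model_name: "your_3d_action_model"   # placeholder
      version: -1
      model_repo {
        root: "./models"                   # placeholder
        strict_model_config: true
      }
    }
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
}
# Consume the NCDHW tensor attached to the buffer by nvdspreprocess
# instead of running nvinferserver's own (NCHW/NHWC-only) preprocessing.
input_tensor_from_meta {
  is_first_dim_batch: true
}
```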


Okay, got it. We were referring to the wrong version of the sample app. We’ll refer to this one and use it accordingly.


Thanks for the update! If you need further support, please open a new topic. Thanks.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.