How can I batch in TensorRT 6?



In TensorRT 5 this worked, but now the output values are all zero.

I allocated a B x C x H x W buffer and copied both inputs into it.

But in the output, batch 0's result is good while batch 1's result is all zero.
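To make that layout concrete, here is a small NumPy sketch of one contiguous B x C x H x W host buffer holding two batch items (the dimensions are placeholders, not the actual network's):

```python
import numpy as np

# Placeholder dimensions; substitute your network's real input shape.
B, C, H, W = 2, 3, 32, 32
sample = C * H * W  # volume of a single batch item

# One contiguous host buffer for the whole batch, as described above.
host_input = np.zeros(B * sample, dtype=np.float32)

img0 = np.random.rand(C, H, W).astype(np.float32)
img1 = np.random.rand(C, H, W).astype(np.float32)

# Batch i lives at offset i * sample in the flat buffer.
host_input[0 * sample:1 * sample] = img0.ravel()
host_input[1 * sample:2 * sample] = img1.ravel()

# If the engine only ever computes one batch item, the slice for batch 1
# in the *output* buffer keeps its initial value, i.e. stays all zero.
```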

I converted the TRT engine from ONNX using onnx-trt with the arg -b 2, and the batch size is confirmed in the TRT engine.

Is there anything else I should do?


Hi, please provide the following.

Provide details on the platforms you are using:
Linux distro and version
GPU type
Nvidia driver version
CUDA version
CUDNN version
Python version [if using python]
Tensorflow version
TensorRT version
If Jetson, OS, hw versions

Describe the problem


Include any logs, source, models (.uff, .pb, etc.) that would be helpful to diagnose the problem.

If relevant, please include the full traceback.


Please provide a minimal test case that reproduces your error.

NVIDIA Enterprise Support

Jetson Xavier
Ubuntu 18.04
gpu_arch: 7.2
cuda: 10.0.326

Build TRT engine using onnx2trt (set batch size = 2) -> copy two input images into memory -> execute -> output (batch 0: OK, batch 1: all zero)

In version 5.1.6 it works, but after updating to version 6.0.1 it does not, even with the same code. (I changed only one thing: execute(batch_size, buffer) -> executeV2(buffer).)

So I want to know whether something has changed in how the batch size is set.
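One guess at what changed: the TensorRT 6 ONNX path builds explicit-batch networks, where the batch dimension is part of the binding shape baked into the engine, whereas TensorRT 5's implicit-batch engines took the batch count at execute() time. If the ONNX model was exported with a fixed batch of 1, executeV2() will only compute one item no matter how much data is copied in. A plain-Python sketch of the buffer-sizing difference (illustration only, not TensorRT API code):

```python
# Illustration only (plain Python, not the tensorrt module).
# Assumption: the ONNX model was exported with a fixed batch dimension.

def buffer_volume(binding_shape, batch_size=None):
    """Elements to allocate for one binding.
    Implicit batch: shape excludes N; pass the runtime batch_size.
    Explicit batch: shape includes N; leave batch_size as None."""
    vol = 1
    for d in binding_shape:
        vol *= d
    return vol * (batch_size or 1)

# TensorRT 5 style (implicit batch): engine reports (C, H, W),
# and execute(batch_size, bindings) supplies N at runtime.
implicit = buffer_volume((3, 32, 32), batch_size=2)

# TensorRT 6+ ONNX parser (explicit batch): engine reports (N, C, H, W)
# with N fixed at export time. If the model was exported with N=1,
# executeV2 computes one item; extra data copied into the buffer is
# ignored and the second half of the output stays zero.
explicit = buffer_volume((1, 3, 32, 32))
```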


Please checkout the samples for examples of using executeV2():

This sample passes batch size to a buffer manager before calling executeV2:

Buffer manager code is here:

However, you should still be able to use the original execute() function; this sample still uses it:

Is it not working with the original execute() method in TensorRT 6?

If so, please provide the script / model and the commands you ran to repro the error so that I can further debug it.

NVIDIA Enterprise Support

I am hitting the same issue: I ran sampleOnnxMNIST with TRT 7.0 and set batch=2; the first batch result is OK, the others are all zero.

I have exactly the same problem using TensorRT 7; previously, using TensorRT 5, it worked perfectly.

I am currently making changes: executeV2, adding explicit batch, etc. It does not work; the first element is predicted well but the rest comes out wrong (object detection in my case).

In the past I checked the order of the output, which is NCHW; that has not changed, right?
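As far as I know the output layout is still NCHW. A quick NumPy sketch (hypothetical batch and channel counts) of how a flat output buffer splits into per-batch results:

```python
import numpy as np

# Hypothetical sizes: 2 batch items, 10 values each (add the H and W
# dimensions the same way for a full NCHW tensor).
N, C = 2, 10
flat_output = np.arange(N * C, dtype=np.float32)  # stand-in for real output

out = flat_output.reshape(N, C)
batch0 = out[0]  # first C values of the flat buffer
batch1 = out[1]  # next C values -- the slice reported as all zero above
```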

I have a very similar issue, did you guys find any solution? Thanks!

I’m working with TensorRT- and I met this problem too.

I have the same problem with JetPack 4.3 on a Jetson Xavier NX.
TensorRT version is

How did you solve this?

I’m having the same problem in TensorRT 8

Please share the ONNX model and the script, if not already shared, so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the below snippet:

import sys
import onnx

filename = "your_model.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the model is invalid

  2. Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.

Ah, my problem was that I was not setting the batch_size parameter in the Python API (IExecutionContext.execute).
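For anyone else landing here, a minimal mock of that call pattern (a stand-in class, not the real tensorrt module, and the real signature differs in detail) showing why leaving batch_size at its default of 1 leaves the second item all zero:

```python
class MockContext:
    """Stand-in for IExecutionContext (illustration only)."""

    def __init__(self, item_volume):
        self.item_volume = item_volume  # elements per batch item

    def execute(self, bindings, batch_size=1):
        # Only the first batch_size items are actually computed.
        inp, out = bindings
        n = batch_size * self.item_volume
        out[:n] = [x * 2.0 for x in inp[:n]]  # pretend inference doubles input
        return True

ctx = MockContext(item_volume=4)
inp = [1.0] * 8   # two batch items of 4 values each
out = [0.0] * 8

ctx.execute([inp, out])                # batch_size left at 1: item 1 stays zero
assert out[4:] == [0.0] * 4

ctx.execute([inp, out], batch_size=2)  # pass batch_size=2: both items computed
```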