"Backend has maxBatchSize 1 whereas 8 has been requested" error for a PyTorch-converted model

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
T4

• DeepStream Version
5.0

• JetPack Version (valid for Jetson only)
NA

• TensorRT Version
7.0.0.11

• NVIDIA GPU Driver Version (valid for GPU only)
440.64.00

• Issue Type (questions, new requirements, bugs)
I am using deepstream_test1.py with my own neural network, a simple ResNet variant created in PyTorch with the following code:

import torch
from torchvision.models import resnet18

model = resnet18(pretrained=True)
model.fc = torch.nn.Linear(512, 4)  # replace the 1000-class head with 4 classes
model = torch.nn.Sequential(model, torch.nn.Softmax(dim=1))

Then I converted it to a TensorRT engine using torch2trt, saved it as an engine file, and used it in deepstream_test1.py with a custom classifier parser function. Everything works fine for batch-size=1, but if I increase the batch-size to 8 in the nvinfer config file, I get the errors below. Is there something different I need to do when converting the PyTorch model with torch2trt? I checked other tickets and did not find a solution. A sketch of my conversion step is shown below.
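For reference, the conversion looked roughly like this (a minimal sketch; the 3x224x224 input shape is my assumption for a ResNet, and the engine file name matches the path in the log below):

import torch
from torch2trt import torch2trt

# Example input used by torch2trt to trace the network.
# Shape is assumed here (standard 224x224 RGB input for ResNet).
x = torch.ones((1, 3, 224, 224)).cuda()
model = model.cuda().eval()

# Without max_batch_size, torch2trt builds the engine with maxBatchSize=1
model_trt = torch2trt(model, [x])

# Serialize the engine so nvinfer can load it via model-engine-file
with open('pytorch-resnet-to-trt.engine', 'wb') as f:
    f.write(model_trt.engine.serialize())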

0:00:03.041280073 1427 0x3425c10 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1647> [UID = 1]: Backend has maxBatchSize 1 whereas 8 has been requested
0:00:03.041308098 1427 0x3425c10 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1818> [UID = 1]: deserialized backend context :/opt/nvidia/deepstream/deepstream-5.0/samples/models/pytorch-resnet-to-trt.engine failed to match config params, trying rebuild
0:00:03.044811811 1427 0x3425c10 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1720> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:03.045126259 1427 0x3425c10 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1740> [UID = 1]: build engine file failed
0:00:03.045162610 1427 0x3425c10 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1826> [UID = 1]: build backend context failed
0:00:03.045184524 1427 0x3425c10 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1153> [UID = 1]: generate backend failed, check config file settings
0:00:03.045424731 1427 0x3425c10 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:03.045442242 1427 0x3425c10 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start: error: Config file path: 1.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: 1.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

Hi,

When generating the TensorRT engine, you need to set the max batch size explicitly.
It looks like your engine was built with a maxBatchSize smaller than 8, which leads to this error. Since your config provides only the serialized engine (no ONNX/Caffe model file), nvinfer cannot rebuild the engine itself, which is why the fallback rebuild in your log fails as well.

Please recreate the TensorRT engine with an appropriate max batch size and try again.
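The batch-size requested in the nvinfer config must also stay within the engine's maxBatchSize. A minimal [property] fragment might look like this (the engine path is taken from your log; this is only a sketch, not your full config):

[property]
gpu-id=0
# Path taken from the log above
model-engine-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/pytorch-resnet-to-trt.engine
# Must be <= the maxBatchSize the engine was built with
batch-size=8
# 1 = classifier
network-type=1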

Thanks

Thanks for the help. Just for the record, the following code fragment created a model that could handle larger batch sizes.

model_trt = torch2trt(model, [x], max_batch_size=64)  # x is the example input tensor passed to torch2trt
