Batch size > 1 and max workspace

Hi all,
Q-1: As you know, in TensorRT 5/6, batch size > 1 has a problem: inference is performed image by image instead of on all images of the batch at the same time. Does this problem occur with the UFF parser? Is it solved by the ONNX parser, even in TensorRT 5/6? I know it is solved in TensorRT 7 with optimization profiles.

Q-2: On what basis do we choose the value of max workspace when converting to TensorRT? If the value we set is smaller than the minimum required size, will the conversion process stop? I want to know how to set an optimal value for the workspace.

Q1: To my understanding, TRT 5/6 does not have such behavior for batch size > 1.
Only if a single batch already saturates the available compute will processing effectively proceed per image.
TRT 7 optimization profiles are there to handle dynamic shapes and to optimize the model for a specific input-shape range for better performance.
Could you please share a repro script and the model file, along with system configuration details, so we can help better?

Q2: https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-700/tensorrt-developer-guide/index.html#troubleshooting
The section “How do I choose the optimal workspace size?” covers the explanation.
If the workspace size is too low, TRT conversion will fail, or the engine will not be optimized with the best-performing algorithms.
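As a rough illustration of where the workspace value is set, here is a minimal builder sketch using the TensorRT 7 Python API. This is an assumption-laden sketch, not code from the thread: it assumes TensorRT 7 is installed, a GPU is available, and an ONNX file named "model.onnx" exists (the file name is hypothetical).

```python
# Sketch only: assumes TensorRT 7 Python API and a hypothetical "model.onnx".
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
# Explicit-batch network, as required by the ONNX parser in TRT 7.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)
with open("model.onnx", "rb") as f:
    parser.parse(f.read())

config = builder.create_builder_config()
# Workspace is scratch GPU memory that tactics may use during engine build.
# A common starting point is 1 GiB; if the build log reports tactics being
# skipped due to insufficient workspace, raise this value.
config.max_workspace_size = 1 << 30  # 1 GiB
engine = builder.build_engine(network, config)
```

The general rule from the docs: set the workspace as high as your deployment can afford, since TRT only allocates what it actually needs at runtime, but a too-small value silently rules out faster kernels.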

Thanks

Thanks.
1- I used this link. Could you please help me modify this code to use an optimization profile in TRT 7?

Please refer to the sample below:
https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-700/tensorrt-developer-guide/index.html#opt_profiles
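For reference, the optimization-profile setup from that page looks roughly like the sketch below in the TensorRT 7 Python API. This is not the poster's code: the input tensor name "input", the (N, 3, 224, 224) shapes, and the batch range 1..8 are all assumptions for illustration.

```python
# Sketch only: assumes TensorRT 7 and a hypothetical input tensor "input"
# of shape (N, 3, 224, 224); adapt the name and shapes to your model.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
# ... parse your model into `network` here (e.g. with trt.OnnxParser) ...

config = builder.create_builder_config()
profile = builder.create_optimization_profile()
# min / opt / max shapes: TRT tunes kernels for the `opt` shape, but the
# engine accepts any shape in [min, max], so batch sizes 1..8 all work.
profile.set_shape("input",
                  (1, 3, 224, 224),   # min
                  (4, 3, 224, 224),   # opt
                  (8, 3, 224, 224))   # max
config.add_optimization_profile(profile)
engine = builder.build_engine(network, config)

# At inference time, set the actual input shape before executing:
# context = engine.create_execution_context()
# context.set_binding_shape(0, (4, 3, 224, 224))
```

With this setup the whole batch is processed in one execution call, which is the behavior asked about in Q-1.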

Thanks

Thanks