trtexec or giexec with CUDA stream

Hello,

I’m wondering whether I can run a .uff model file with giexec or trtexec in CUDA streaming mode, to compare the inference time of running a batch of 4 images against running 4 single images asynchronously in parallel on a Jetson Xavier?

Hello,

As far as I know, trtexec doesn’t support running inference across multiple CUDA streams. For an example of stream-based inference with TensorRT, see yais/inference.cc: https://github.com/NVIDIA/yais/blob/master/examples/00_TensorRT/inference.cc#L102-L120
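For reference, a minimal sketch (not from the thread) of the comparison being asked about: four single-image inferences launched concurrently, each on its own execution context and CUDA stream. This assumes an `ICudaEngine` has already been built from the .uff model and that per-stream device buffers (`bindings`) are set up elsewhere; `inferConcurrently` is a hypothetical name.

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>

// Hedged sketch: 4 concurrent single-image inferences, one execution
// context and one CUDA stream each. Engine build and buffer allocation
// are omitted; bindings[i] holds the device buffers for stream i.
void inferConcurrently(nvinfer1::ICudaEngine* engine, void** bindings[4])
{
    nvinfer1::IExecutionContext* contexts[4];
    cudaStream_t streams[4];

    for (int i = 0; i < 4; ++i)
    {
        contexts[i] = engine->createExecutionContext();
        cudaStreamCreate(&streams[i]);
    }

    // Launch all four inferences asynchronously, batch size 1 each.
    for (int i = 0; i < 4; ++i)
        contexts[i]->enqueue(1, bindings[i], streams[i], nullptr);

    // Wait for every stream; time this region against a single
    // enqueue(4, ...) call to compare batching vs. streaming.
    for (int i = 0; i < 4; ++i)
        cudaStreamSynchronize(streams[i]);

    for (int i = 0; i < 4; ++i)
    {
        cudaStreamDestroy(streams[i]);
        contexts[i]->destroy();
    }
}
```

Whether the streamed version actually overlaps depends on whether a single inference saturates the Xavier's GPU; if one image already fills the SMs, the four streams will largely serialize.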