batch-size question / suggestion

So, since batch-size must be a multiple of the number of sources connected, and this can be determined by the number of sink pads requested from the stream muxer, can batch-size not be automatically determined and configured by downstream pipeline elements based on the GST_NVEVENT_PAD_ADDED (and removal) events that are sent downstream? The documentation says:

However, I am not clear on whether downstream elements such as nvinfer handle these events by default, and how; the nvinfer documentation makes no mention of it. Do I need to update the ‘batch-size’ property on my inference element myself when adding a source to my inference bin, or will the property be updated automatically? Ideally, I want to be able to add and remove sources at runtime without restarting my pipeline. It would be nice if just requesting and releasing pads from the stream muxer could handle everything downstream.
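For reference, here is a minimal sketch of what I am doing now, assuming a plain C/GStreamer app; `source_bin` is a placeholder for however the new source is constructed, and error handling is trimmed for brevity:

```c
#include <gst/gst.h>

static guint num_sources = 0;

static gboolean
add_source (GstElement *streammux, GstElement *source_bin)
{
  /* Request the next sink pad from nvstreammux ("sink_0", "sink_1", ...). */
  gchar *pad_name = g_strdup_printf ("sink_%u", num_sources);
  GstPad *sinkpad = gst_element_get_request_pad (streammux, pad_name);
  GstPad *srcpad = gst_element_get_static_pad (source_bin, "src");

  g_free (pad_name);
  if (!sinkpad || !srcpad ||
      gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK)
    return FALSE;

  num_sources++;
  /* Today the muxer's batch-size has to be kept in sync by hand; this is
   * exactly the step I would like downstream elements to pick up from
   * GST_NVEVENT_PAD_ADDED instead. */
  g_object_set (streammux, "batch-size", num_sources, NULL);

  gst_object_unref (srcpad);
  gst_object_unref (sinkpad);
  return TRUE;
}
```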

Also, can the number of rows and columns for the tiler element be automatically determined as well? Currently I am using rows_and_columns = (int) ceil(sqrt(batch_size)), as in the sketch below. It would be nice to write less code.
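Concretely, something like this (C sketch; `tiler` is assumed to be an nvmultistreamtiler, using its rows/columns properties):

```c
#include <math.h>
#include <gst/gst.h>

/* Derive a square-ish tile layout from the batch size and apply it. */
static void
configure_tiler (GstElement *tiler, guint batch_size)
{
  guint rows_and_columns = (guint) ceil (sqrt ((gdouble) batch_size));

  g_object_set (tiler,
      "rows", rows_and_columns,
      "columns", rows_and_columns,
      NULL);
}
```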

Hi mdegans,
nvinfer does not handle these events; nvinfer always runs with the batch_size that you use to build the TensorRT engine, and the batch you feed to TensorRT can’t be larger than the batch_size of nvinfer.

Do I need to update the ‘batch-size’ property on my inference element myself when adding a source to my inference bin, or will the property be updated automatically?
If you want nvinfer to run at a batch-size higher than the original setting, you need to update the configuration and rebuild the TensorRT engine. If you want nvinfer to run at a batch-size lower than the original setting, that is fine, but the performance of nvinfer is the same as when running with the original batch_size.
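For example, assuming the usual nvinfer configuration file layout (the values and the engine file name below are placeholders):

```
[property]
# set batch-size to the maximum number of sources you plan to attach
batch-size=8
# a cached engine is built for a specific batch size; update or remove this
# line so that an engine matching the new batch-size is used or rebuilt
model-engine-file=model_b8_fp16.engine
```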

Can the number of rows and columns for the tiler element be automatically determined as well? Currently I am using rows_and_columns = (int) ceil(sqrt(batch_size)). It would be nice to write less code.
Will check this and get back to you.