A problem with DeepStream

When the result is written to outputs[0], it works:

int enqueue(int batchSize, const void* const* inputs, void** outputs, void*, cudaStream_t stream)
{
    cudaMemcpyAsync(outputs[0], result, sizeof(float) * count, cudaMemcpyHostToDevice, stream);
    return 0;
}

When the result is written to outputs[1], the following error occurs:

int enqueue(int batchSize, const void* const* inputs, void** outputs, void*, cudaStream_t stream)
{
    cudaMemcpyAsync(outputs[1], result, sizeof(float) * count, cudaMemcpyHostToDevice, stream);
    return 0;
}

error:

0:00:07.136854117 13880      0x603ae80 ERROR                nvinfer gstnvinfer.cpp:569:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): engine.cpp (501) - Cuda Error in enqueue: 11 (invalid argument)
0:00:07.137152387 13880      0x603ae80 ERROR                nvinfer gstnvinfer.cpp:569:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): engine.cpp (501) - Cuda Error in enqueue: 11 (invalid argument)
0:00:07.137216771 13880      0x603ae80 ERROR                nvinfer gstnvinfer.cpp:569:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:queueInputBatch(): Failed to enqueue inference batch
0:00:07.137271234 13880      0x603ae80 WARN                 nvinfer gstnvinfer.cpp:1160:gst_nvinfer_input_queue_loop:<primary_gie_classifier> error: Failed to queue input batch for inferencing
ERROR from primary_gie_classifier: Failed to queue input batch for inferencing
Debug info: gstnvinfer.cpp(1160): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier
Quitting
App run failed

The batchSize is 2.

I think this ‘enqueue’ is the call in IPlugin, right?
In the plugin, batchSize == 2 does not mean the plugin has two outputs; the number of outputs depends on the network structure.
Also, the outputs are device memory, so you can’t use “cudaMemcpyHostToDevice” for the copy when the source memory is also on the device; the copy kind should be cudaMemcpyDeviceToDevice in that case.
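A corrected copy, assuming `result` points to device memory computed earlier in the plugin (sketch only; `result` and `count` are the poster's own variables):

    // Sketch: outputs[i] is a device pointer, so a device-side source
    // needs cudaMemcpyDeviceToDevice, not cudaMemcpyHostToDevice.
    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* /*workspace*/, cudaStream_t stream)
    {
        cudaMemcpyAsync(outputs[0], result, sizeof(float) * count,
                        cudaMemcpyDeviceToDevice, stream);
        return 0;
    }

If `result` were instead host memory, cudaMemcpyHostToDevice would be the right kind; the copy kind has to match where the source actually lives.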

Hi mchi,
Yes, it is called in IPlugin.

What determines the number of outputs, other than the network structure?

int enqueue(int batchSize, const void* const* inputs, void** outputs, void*, cudaStream_t stream)

Does the function “getOutputDimensions” determine the number of outputs?

Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims)

Hi,
Sorry for the late response!

What determines the number of outputs, other than the network structure?
You should tell TensorRT how many outputs the IPlugin layer has through the TensorRT IPlugin API - getNbOutputs(). So you need to look into the network and find out the number of outputs.

Does the function “getOutputDimensions” determine the number of outputs?
No, it returns the dimensions of the output specified by the index parameter.
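Put together, a plugin with two outputs might look like the sketch below. Note this uses a stand-in Dims struct and hypothetical shapes purely for illustration, not the real TensorRT headers:

```cpp
#include <cassert>

// Stand-in for nvinfer1::Dims, just for illustration.
struct Dims { int nbDims; int d[8]; };

// Minimal sketch of the two IPlugin methods discussed above,
// for a hypothetical layer producing two output tensors.
struct TwoOutputPlugin {
    // TensorRT asks this once to learn how many output buffers the
    // layer has; outputs[] in enqueue() has exactly this many entries.
    int getNbOutputs() const { return 2; }

    // Called once per output index, with 0 <= index < getNbOutputs().
    Dims getOutputDimensions(int index, const Dims* /*inputs*/, int /*nbInputDims*/) const {
        // Hypothetical shapes: output 0 holds 10 floats, output 1 holds 4.
        if (index == 0) return Dims{1, {10}};
        return Dims{1, {4}};
    }
};
```

With getNbOutputs() returning 2, writing to outputs[1] in enqueue() is legal; if it returned 1, outputs[1] would not be a valid device buffer, which is consistent with the `invalid argument` CUDA error in the log above.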

For more detail on implementing a TensorRT IPlugin, you could refer to the TensorRT developer guide https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#extending and the samplePlugin sample.