I added a custom ONNX operator, but it doesn't work

Hello.

I added a custom ONNX operator and ran DeepStream, but I couldn’t get the right output.

However, when I changed the code as follows, I got the correct output.
<nvdsinfer_context_impl.cpp>

/* Queue the bound buffers for inferencing. */
// Original call, commented out:
//if (!m_InferExecutionContext->enqueue(enqueueBatchSize, bindingBuffers,
//                                      m_InferStream, &m_InputConsumedEvent))
// Replacement that produces the correct output:
if (!m_InferExecutionContext->execute(enqueueBatchSize, bindingBuffers))

Can you tell me what you think could be the cause?
I am at a loss as to what to investigate next.

Hi,

Would you mind sharing more information about the custom ONNX operator?
Did you implement it as a TensorRT plugin?
How do you link it to TensorRT?

https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/c_api/classnvinfer1_1_1_i_execution_context.html
As the IExecutionContext documentation above describes, execute() launches the TensorRT job synchronously, while enqueue() is the asynchronous API.
The DeepStream SDK uses the asynchronous function.
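
To make the difference concrete, here is a minimal sketch (TensorRT 7-era implicit-batch API; the variable names mirror your snippet above, and the helper runInference() is hypothetical) of when the output buffers become valid on each path:

#include <cuda_runtime_api.h>
#include <NvInfer.h>

// Hypothetical helper mirroring the members used in nvdsinfer_context_impl.cpp.
bool runInference(nvinfer1::IExecutionContext* context, int batchSize,
                  void** bindings, cudaStream_t stream,
                  cudaEvent_t* inputConsumed, bool async)
{
    if (async)
    {
        // enqueue() only queues the work on 'stream' and returns immediately;
        // the output bindings are not valid until the stream is synchronized.
        if (!context->enqueue(batchSize, bindings, stream, inputConsumed))
            return false;
        return cudaStreamSynchronize(stream) == cudaSuccess;
    }
    // execute() blocks until inference has finished; outputs are valid on return.
    return context->execute(batchSize, bindings);
}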

Based on this, my guess is that your plugin works at the TensorRT level but is not set up correctly for the asynchronous path that DeepStream uses.
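
For example, one common cause of exactly this symptom (execute() works, enqueue() does not) is a plugin whose enqueue() launches its kernel on the default CUDA stream instead of the stream TensorRT passes in, so the op is not ordered with the rest of the asynchronously enqueued network. Below is a minimal sketch of a correct launch; myCustomOpKernel and numElements are placeholders standing in for your operator, and the signature is the TensorRT 7-era IPluginV2 style:

#include <cuda_runtime_api.h>

// Placeholder kernel standing in for your custom op.
__global__ void myCustomOpKernel(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];  // identity, for illustration only
}

// Sketch of an IPluginV2-style enqueue() body.
int enqueueImpl(int batchSize, const void* const* inputs, void** outputs,
                void* /*workspace*/, cudaStream_t stream, int numElements)
{
    const int n = batchSize * numElements;
    const int block = 256;
    const int grid = (n + block - 1) / block;
    // Launch on the 'stream' TensorRT passes in, NOT the default stream;
    // a default-stream launch can look correct with the synchronous
    // execute() but produce wrong output on the asynchronous enqueue() path.
    myCustomOpKernel<<<grid, block, 0, stream>>>(
        static_cast<const float*>(inputs[0]),
        static_cast<float*>(outputs[0]), n);
    return cudaGetLastError() == cudaSuccess ? 0 : 1;
}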

Thanks.