• Hardware Platform: Jetson AGX Orin
• DeepStream Version: 6.3
• JetPack Version: 5.1.3
• TensorRT Version: 8.5.2-1+cuda11.4
• Issue Type: Question // Bug
• How to reproduce the issue?
Hello,
I would like to ask about a specific behavior of the NvBufSurfTransform API. The API was stated to be thread-safe in an earlier post (Multimedia API: thread-safety of NvBufSurfTransform?). However, we are observing unexpected changes to our NvBufSurfTransformConfigParams without ever modifying them ourselves.
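For reference, our understanding of the per-thread session model is sketched below as a small standalone program (this is our own illustration of the thread-safety statement, not code taken from the sample): each thread that calls NvBufSurfTransformSetSessionParams() should own its own session, so another thread changing its session should not be visible from ours.

// Standalone sketch (our reading of the thread-safety statement, not from the
// sample): each thread owns its own NvBufSurfTransform session params, so a
// second thread changing its session should not be visible to the first.
#include <cstdio>
#include <thread>
#include "nvbufsurftransform.h"

static void PrintSession(const char *tag)
{
  NvBufSurfTransformConfigParams q{};
  NvBufSurfTransformGetSessionParams(&q);
  printf("[%s] ComputeMode: %d, Stream: %p\n", tag, q.compute_mode, (void *)q.cuda_stream);
}

int main()
{
  NvBufSurfTransformConfigParams p{};
  p.compute_mode = NvBufSurfTransformCompute_GPU;
  p.gpu_id = 0;
  p.cuda_stream = NULL;  // default stream, for brevity
  NvBufSurfTransformSetSessionParams(&p);
  PrintSession("main, before");

  std::thread other([] {
    // A different thread configures a different session.
    NvBufSurfTransformConfigParams p2{};
    p2.compute_mode = NvBufSurfTransformCompute_VIC;
    NvBufSurfTransformSetSessionParams(&p2);
    PrintSession("other");
  });
  other.join();

  PrintSession("main, after");  // we would expect the GPU params to still be reported here
  return 0;
}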
How to reproduce:
Begin with /opt/nvidia/deepstream/deepstream-6.3/sources/gst-plugins/gst-nvdsvideotemplate/customlib_impl/customlib_impl.cpp from the official DeepStream examples (part of the DeepStream installation). Change void SampleAlgorithm::OutputThread(void) to the following demonstrative version and compile it as usual with CUDA_VER=11.4 make:
/* Output Processing Thread */
void SampleAlgorithm::OutputThread(void)
{
  GstFlowReturn flow_ret;
  GstBuffer *outBuffer = NULL;
  std::unique_lock<std::mutex> lk(m_processLock);
  NvDsBatchMeta *batch_meta = NULL;

  if (hw_caps == true) {
    cudaError_t cuErr = cudaSetDevice(m_gpuId);
    if (cuErr != cudaSuccess) {
      GST_ERROR_OBJECT(m_element, "Unable to set cuda device");
      return;
    }
  }

  // Set surface transform session when transform mode is on
  int err = NvBufSurfTransformSetSessionParams(&m_config_params);
  if (err != NvBufSurfTransformError_Success) {
    GST_ERROR_OBJECT(m_element, "Set session params failed");
    return;
  }

  // Query and print the session params we just set for this thread
  NvBufSurfTransformConfigParams query;
  NvBufSurfTransformGetSessionParams(&query);
  printf("ComputeMode: %d, Stream: %p\n", query.compute_mode, query.cuda_stream);

  /* Run till signalled to stop. */
  while (1) {
    /* Wait if processing queue is empty. */
    if (m_processQ.empty()) {
      if (m_stop == TRUE) {
        break;
      }
      m_processCV.wait(lk);
      continue;
    }

    PacketInfo packetInfo = m_processQ.front();
    m_processQ.pop();
    m_processCV.notify_all();
    lk.unlock();

    // Query the session params again for every buffer; note that
    // NvBufSurfTransformSetSessionParams() is never called inside this loop.
    NvBufSurfTransformGetSessionParams(&query);
    printf("ComputeMode: %d, Stream: %p\n", query.compute_mode, query.cuda_stream);

    outBuffer = packetInfo.inbuf;
    nvds_set_output_system_timestamp(outBuffer, GST_ELEMENT_NAME(m_element));
    flow_ret = gst_pad_push(GST_BASE_TRANSFORM_SRC_PAD(m_element), outBuffer);
    lk.lock();
    continue;
  }

  outputthread_stopped = true;
  lk.unlock();
  return;
}
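As an aside, to make the moment of the change easier to spot in a longer run, a small helper like the one below could replace the raw printf (this is a sketch of our own, not part of the sample); it prints only when the observed session params differ from the previously observed ones. Either way, the plain printf version above is what produced the output shown below.

// Sketch: log the session params only when they differ from the last observed
// values, so the exact buffer at which the change happens stands out.
// Called from the single output thread only, so the function-local statics are fine.
#include <cstdio>
#include "nvbufsurftransform.h"

static void LogSessionParamChange(const char *where)
{
  static NvBufSurfTransformConfigParams last{};
  static bool have_last = false;

  NvBufSurfTransformConfigParams now{};
  NvBufSurfTransformGetSessionParams(&now);

  if (!have_last || now.compute_mode != last.compute_mode ||
      now.cuda_stream != last.cuda_stream || now.gpu_id != last.gpu_id) {
    printf("[%s] ComputeMode: %d -> %d, Stream: %p -> %p\n", where,
           last.compute_mode, now.compute_mode,
           (void *)last.cuda_stream, (void *)now.cuda_stream);
    last = now;
    have_last = true;
  }
}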
Start video processing using this plugin (replace the URI with your own stream):
#!/bin/bash
gst-launch-1.0 \
  uridecodebin3 uri=rtspt://192.168.5.10:28554/42_speed ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM),format=I420' ! \
  nvdsvideotemplate customlib-name=/opt/nvidia/deepstream/deepstream-6.3/sources/gst-plugins/gst-nvdsvideotemplate/customlib_impl/libcustom_videoimpl.so ! \
  'video/x-raw(memory:NVMM),format=I420' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! \
  fakesink
Observe the standard output of the program:
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
ComputeMode: 1, Stream: 0xaaab23003130
ComputeMode: 1, Stream: 0xaaab23003130
ComputeMode: 2, Stream: 0xffff2c056e50
ComputeMode: 2, Stream: 0xffff2c056e50
Additional Observations:
As we can see, the compute mode and the CUDA stream do change at some point, even though NvBufSurfTransformSetSessionParams is never called inside the while loop. How can this happen?
As we suspected, if we remove the second convert, nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12', from the pipeline, the behaviour no longer manifests, so the second nvvideoconvert seems to play a part in this.
Does this mean that downstream elements can change our thread's NvBufSurfTransformConfigParams at any time without us knowing? Is there any obvious oversight on our part that should be mended to avoid this?
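If the intended usage is that each caller must re-assert its own configuration before every transform, a defensive helper along the lines of the sketch below is what we would add (the naming is our own, not from the sample; in OutputThread() we would call it with m_config_params right before each NvBufSurfTransform()). Is that the recommended pattern, or should the session params be stable for the lifetime of the thread?

// Sketch: re-assert the desired session params for the calling thread right
// before a transform, in case something else changed them since the last call.
#include "nvbufsurftransform.h"

static bool ReassertSessionParams(const NvBufSurfTransformConfigParams &desired)
{
  NvBufSurfTransformConfigParams current{};
  NvBufSurfTransformGetSessionParams(&current);

  if (current.compute_mode == desired.compute_mode &&
      current.gpu_id == desired.gpu_id &&
      current.cuda_stream == desired.cuda_stream) {
    return true;  // session still matches what we configured
  }

  // The API takes a non-const pointer, so work on a local copy.
  NvBufSurfTransformConfigParams copy = desired;
  return NvBufSurfTransformSetSessionParams(&copy) == NvBufSurfTransformError_Success;
}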
Thank you for your support,
Simon
Post scriptum: Additional findings
The change is very likely caused by the second nvvideoconvert element; the session params seem to be dictated by its configuration. If we set compute-hw=1 on it, only the CUDA stream changes.
Also, adding a queue between nvdsvideotemplate and the second nvvideoconvert seems to help.