ERROR from element nvtiler: GstNvTiler: FATAL ERROR; NvTiler::Composite failed

I am using DeepStream 4.0. When I run the sample deepstream-test3, it reports an error.

nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test3$ ./deepstream-test3-app rtsp://admin:hik12345@172.16.2.250:554/ rtsp://admin:hik12345@172.16.2.248:554/
WARNING: Overriding infer-config batch-size (1) with number of sources (2)
Now playing: rtsp://admin:hik12345@172.16.2.250:554/, rtsp://admin:hik12345@172.16.2.248:554/,

Using winsys: x11
Creating LL OSD context new
0:00:01.134962249 17161 0x5587079720 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:01.135355109 17161 0x5587079720 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger: NvDsInferContext[UID 1]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
0:00:34.174182861 17161 0x5587079720 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b2_fp16.engine
Decodebin child added: source
Decodebin child added: source
Running…
Decodebin child added: decodebin0
Decodebin child added: decodebin1
Decodebin child added: rtph264depay0
Decodebin child added: decodebin2
Decodebin child added: rtph264depay1
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: h264parse1
Decodebin child added: rtppcmadepay0
Decodebin child added: capsfilter1
Decodebin child added: alawdec0
In cb_newpad
Decodebin child added: nvv4l2decoder0
Seting bufapi_version
Decodebin child added: nvv4l2decoder1
Seting bufapi_version
Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
Decodebin child added: decodebin3
Decodebin child added: rtppcmadepay1
Decodebin child added: alawdec1
In cb_newpad
NvMMLiteBlockCreate : Block : BlockType = 261
In cb_newpad
In cb_newpad
Creating LL OSD context new
Frame Number = 0 Number of objects = 1 Vehicle Count = 1 Person Count = 0
Frame Number = 0 Number of objects = 1 Vehicle Count = 1 Person Count = 0
ERROR from element nvtiler: GstNvTiler: FATAL ERROR; NvTiler::Composite failed
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvtiler/gstnvtiler.cpp(665): gst_nvmultistreamtiler_transform (): /GstPipeline:dstest3-pipeline/GstNvMultiStreamTiler:nvtiler
Returned, stopping playback

0:00:34.760577901 17161 0x5586d06000 WARN nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:34.760640204 17161 0x5586d06000 WARN nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
Frame Number = 1 Number of objects = 1 Vehicle Count = 1 Person Count = 0
Frame Number = 1 Number of objects = 1 Vehicle Count = 1 Person Count = 0
Deleting pipeline

The RTSP stream above is 1080p; the one below is 720p and runs fine. I also hit this problem when compositing multi-channel video with tegra_multimedia_api.
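One common cause of VIC composite failures such as NvTiler::Composite is a rectangle with an odd offset or odd width/height, since YUV420 surfaces typically require even alignment (an assumption here, not confirmed for this exact error). A minimal sketch of snapping a tile rectangle to even values, mirroring the alignment done in the composite code shared later in this thread; `Rect` and `SnapToEven` are hypothetical names:

```cpp
#include <cassert>

// Hypothetical rectangle type standing in for NvBufferRect.
struct Rect { int top, left, width, height; };

// Round every field up to the next even value; YUV420 composite
// rectangles on the VIC generally must be even-aligned (assumption).
Rect SnapToEven(Rect r)
{
    r.top    += r.top    % 2;
    r.left   += r.left   % 2;
    r.width  += r.width  % 2;
    r.height += r.height % 2;
    return r;
}
```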

nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test3$ ./deepstream-test3-app rtsp://admin:Sunm_123@192.168.15.4:554/ rtsp://admin:Sunm_123@192.168.15.4:554/
WARNING: Overriding infer-config batch-size (1) with number of sources (2)
Now playing: rtsp://admin:Sunm_123@192.168.15.4:554/, rtsp://admin:Sunm_123@192.168.15.4:554/,

Using winsys: x11
Creating LL OSD context new
0:00:01.141137522 17084 0x55b2e2b120 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:01.141494158 17084 0x55b2e2b120 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger: NvDsInferContext[UID 1]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
0:00:33.610339350 17084 0x55b2e2b120 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b2_fp16.engine
Decodebin child added: source
Decodebin child added: source
Running…
Decodebin child added: decodebin0
Decodebin child added: rtph264depay0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
Seting bufapi_version
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Decodebin child added: decodebin1
Decodebin child added: rtph264depay1
Decodebin child added: h264parse1
Decodebin child added: capsfilter1
Decodebin child added: nvv4l2decoder1
Seting bufapi_version
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
In cb_newpad
Creating LL OSD context new
In cb_newpad
Frame Number = 0 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 0 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 1 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 1 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 2 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 2 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 3 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 3 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 4 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 4 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 5 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 5 Number of objects = 0 Vehicle Count = 0 Person Count = 0

Hi,
Which camera model are you using, i.e. the camera that has the issue?

We have solved similar issues before; see this post:
https://devtalk.nvidia.com/default/topic/1058086/deepstream-sdk/how-to-run-rtp-camera-in-deepstream-on-nano/post/5369676/#5369676
Please try the two libraries attached in that post.

After replacing these two libraries, deepstream-test3-app runs fine, but compositing multi-channel video with tegra_multimedia_api still reports the error:

NvDdkVicConfigure Failed
nvbuffer_composite Failed
NvDdkVicConfigure Failed
nvbuffer_composite Failed
NvDdkVicConfigure Failed
nvbuffer_composite Failed
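For reference, the composite path works on pitched hardware buffers; the per-plane copy that such code performs after NvBufferMemMap reduces to a plain pitched-to-packed row copy. A self-contained sketch of that copy (the helper name and the sizes in the example are illustrative, not part of the Multimedia API):

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Copy one pitched plane (as mapped via NvBufferMemMap) into a packed
// buffer, dropping the per-row padding beyond 'width'.
std::vector<unsigned char> PitchedToPacked(const unsigned char* src,
                                           unsigned pitch,
                                           unsigned width,
                                           unsigned height)
{
    std::vector<unsigned char> dst(width * height);
    for (unsigned row = 0; row < height; ++row)
        std::memcpy(dst.data() + row * width, src + row * pitch, width);
    return dst;
}
```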

Can you share test code that can reproduce the issue you mentioned?

The code is as follows.

#include <errno.h>
#include <sched.h>
#include <unistd.h>
#include <algorithm>
#include "Logger.h"
#include "AutoLock.h"
#include "CommonUtil.h"
#include "NvCompositeStream.h"

using namespace cv;

//#define PROC_TIME_DEBUG
#define RROC_AVG_FRAME_NUMS   20

#define BEGIN_SKIP_FRAMES   (30)
#define QUEUE_THRESHOLD     (3)

#define  SET_BIT(a, x)      ((a)|=(1<<x))
#define  CLEAR_BIT(a, x)    ((a)&=(~(1<<x)))

NvCompositeStream::NvCompositeStream(IAVMgrStream *mgrStream)
    : m_compositedFrameFd(0),
      m_nWindowNums(0),
      m_nCompositeValue(0),
      m_strLayout("1-25700"),
      m_bFirstFramesReady(false),
      m_eglDisplay(NULL)
{
    m_pMgrStream = mgrStream;
    memset(&m_nvcompositeParams, 0, sizeof(m_nvcompositeParams));
    memset(m_dmabufFd, 0, sizeof(m_dmabufFd));

    memset(m_nChIdMapToWinId, -1, sizeof(m_nChIdMapToWinId));
    memset(m_nWinIdMapToChId, -1, sizeof(m_nWinIdMapToChId));

    for(int i=0; i<PLANE_NUMS; i++)
    {
        m_pCompositeMapAddr[i] = NULL;
    }

    m_curDetectTime = GetMilliSecondsTimeStamp();
}

NvCompositeStream::~NvCompositeStream()
{
    for(int i=0; i<MAX_STREAM_CHS; i++)
    {        
        CAutoLock lock1(&m_MutexSrcNvBuf[i]);
        while(!m_queSrcNvBufferDmaFd[i].empty())
        {
            m_queSrcNvBufferDmaFd[i].pop();
        }       
    }
    
    if (m_compositedFrameFd)
    {
        NvBufferDestroy(m_compositedFrameFd);
        m_compositedFrameFd   = 0;
    }

    CAutoLock lock2(&m_mutexVectorNextStream);
    while(!m_vectorNextStream.empty())
    {
        m_vectorNextStream.pop_back();
    }

    while(!m_vectorProcStream.empty())
    {
        m_vectorProcStream.pop_back();
    }
    
}

void NvCompositeStream::UpdateStreamChid(int nChid)
{
    if(nChid<0 || nChid >= MAX_STREAM_CHS)
    {
        WarningLog(<<"Invalid chId: "<<nChid<<std::endl);
        return;
    }

    StreamChIdDetect* pStreamChidDetect = &m_chidDetect[nChid];

    pStreamChidDetect->bNormal   = true;
    pStreamChidDetect->lastTime  = m_curDetectTime;    
}

void NvCompositeStream::CheckStreamChid()
{    
    unsigned long long curTime = GetMilliSecondsTimeStamp();

    StreamChIdDetect* pStreamChIdDetect = NULL;    
    for(int i=0; i< MAX_STREAM_CHS; i++)
    {
        pStreamChIdDetect = &m_chidDetect[i];

        if(false==pStreamChIdDetect->bNormal)
            continue;

        if(0==pStreamChIdDetect->lastTime
           ||pStreamChIdDetect->lastTime > curTime)
        {
            pStreamChIdDetect->lastTime = curTime;
            continue;
        }

        unsigned long long usedTime = curTime - pStreamChIdDetect->lastTime;
        if(usedTime > STREAM_DATA_NORMAL_DELAY_TIME)        
        {
            pStreamChIdDetect->bNormal = false;
            DebugLog(<<"ChId "<<i<<" stream is abnormal"<<std::endl);
        }
    }

    m_curDetectTime = curTime;
}

int NvCompositeStream::CheckLayoutStype(const std::map<int, WINRECT> mapWinRect)
{
    int nRet = 0;

    const WINRECT* pRECT = NULL;
    std::map<int, WINRECT>::const_iterator it = mapWinRect.begin();
    for( ;it != mapWinRect.end();)
    {
        pRECT = &it->second;
        int nChid = pRECT->chId;
        if(false==m_chidDetect[nChid].bNormal
           && (m_chidDetect[nChid].lastTime !=0))
        {
            nRet = -1;
            break;
        }
        it++;       
    }

    return nRet;
}

int NvCompositeStream::GenerateCompositeParams(std::string strLayout)
{
    int nRet =0;
    NvCompositeCreateParams* pCompositeCreateParams = &m_compositeCreateParam;

    /* get window position of every stream */
    std::map<int, WINRECT> mapLayoutStype;

    GetVideoLayoutStyle(strLayout.c_str(), mapLayoutStype);

    nRet = CheckLayoutStype(mapLayoutStype);
    if(-1==nRet)
    {
        ErrLog(<<"Unsupported layout:"<<strLayout.c_str()<<std::endl);
        return nRet;
    }

    /* Initialize composite parameters */
    int nWidth  = pCompositeCreateParams->nWidth;
    int nHeight = pCompositeCreateParams->nHeight;    
    
    m_nWindowNums = mapLayoutStype.size();
    
    m_nvcompositeParams.composite_flag = NVBUFFER_COMPOSITE;
    m_nvcompositeParams.input_buf_count = m_nWindowNums;

    memset(m_nChIdMapToWinId, -1, sizeof(m_nChIdMapToWinId));
    memset(m_nWinIdMapToChId, -1, sizeof(m_nWinIdMapToChId));

    WINRECT* pRECT = NULL;
    std::map<int, WINRECT>::iterator it;
    for(int i=0; i < m_nWindowNums; i++)
    {
        NvBufferRect *pNvRect = &m_nvcompositeParams.dst_comp_rect[i];

        it = mapLayoutStype.find(i);
        if(it != mapLayoutStype.end())
        {
            pRECT = &it->second;
        }
        else
        {
            ErrLog(<<"No composite window id is found, i="<<i<<std::endl);
            continue;
        }
                
        pNvRect->top    = nHeight*pRECT->top/100;
        pNvRect->left   = nWidth*pRECT->left/100;
        pNvRect->width  = nWidth*(pRECT->right - pRECT->left)/100;
        pNvRect->height = nHeight*(pRECT->bottom - pRECT->top)/100;

        pNvRect->top    += pNvRect->top%2;
        pNvRect->left   += pNvRect->left%2;
        pNvRect->width  += pNvRect->width%2;
        pNvRect->height += pNvRect->height%2;

        m_nWinIdMapToChId[i]            = pRECT->chId;
        m_nChIdMapToWinId[pRECT->chId]  = i;

        CAutoLock lock(&m_MutexSrcNvBuf[i]);
        while(!m_queSrcNvBufferDmaFd[i].empty())
        {
            m_queSrcNvBufferDmaFd[i].pop();
        }

        DebugLog(<<"winId:"<<i
                 <<",chId:"<<pRECT->chId
                 <<",["<<pNvRect->top
                 <<","<<pNvRect->left
                 <<"], ["<<pNvRect->width
                 <<","<<pNvRect->height
                 <<"]"<<std::endl);
        
    }

    m_nvcompositeParams.composite_bgcolor.r = BACKGROUND_COLOR_R;
    m_nvcompositeParams.composite_bgcolor.g = BACKGROUND_COLOR_G;
    m_nvcompositeParams.composite_bgcolor.b = BACKGROUND_COLOR_B;

    m_strLayout = strLayout;

    CAutoLock lock(&m_MutexComposite);
    m_nCompositeValue = 0;
    memset(m_dmabufFd, 0, sizeof(m_dmabufFd));

    return nRet;    
}

int NvCompositeStream::ExportDataFromDmafd(int dmafd, std::vector<cv::Mat>& vecSrcFrame)
{
    NvBufferParams parm;
    int ret = NvBufferGetParams(dmafd, &parm);
    if(ret != 0 || (NvBufferColorFormat_NV12==parm.pixel_format))
    {
        return -1;    
    }
    
    for(unsigned int plane=0; plane<PLANE_NUMS; plane++)
    {
        int nRet = NvBufferMemMap(dmafd, plane, NvBufferMem_Read_Write, &m_pCompositeMapAddr[plane]);
        if(-1==nRet)
        {
           ErrLog(<<"Failed to NvBufferMemMap for plane:"<<plane<<std::endl); 
           continue;
        }
    
        cv::Mat srcFrame;
        srcFrame.create(parm.height[plane], parm.width[plane], CV_8UC1);
    
        int nBufferLen=0;
        unsigned char* pCompBuffer = srcFrame.data;
        NvBufferMemSyncForCpu(dmafd, plane, &m_pCompositeMapAddr[plane]);    
        for(unsigned int i = 0; i < parm.height[plane]; i++)
        {
            memcpy(pCompBuffer+nBufferLen, (unsigned char *)m_pCompositeMapAddr[plane] + i * parm.pitch[plane], parm.width[plane]); 
            nBufferLen += parm.width[plane];
        }
        vecSrcFrame.push_back(srcFrame);
    }
               
    return ret;
}

int NvCompositeStream::ImportDataToDmafd(int dmafd, std::vector<cv::Mat>& vecSrcFrame)
{
    NvBufferParams parm;
    int ret = NvBufferGetParams(dmafd, &parm);
    if(ret != 0 || (NvBufferColorFormat_NV12==parm.pixel_format))
    {
        return -1; 
    }

    for(unsigned int plane=0; plane<PLANE_NUMS;plane++)
    {
        unsigned char* pOutData = vecSrcFrame[plane].data;
        for(unsigned int i = 0; i < parm.height[plane]; i++)
        {
            memcpy((unsigned char *)m_pCompositeMapAddr[plane] + i * parm.pitch[plane], pOutData, parm.width[plane]); 
            pOutData += parm.width[plane];
        }

        int ret = NvBufferMemSyncForDevice(dmafd, plane, (void**)&m_pCompositeMapAddr[plane]);
        if(ret != 0)
        {
            ErrLog(<<"NvBufferMemSyncForDevice() fail"<<std::endl);    
        } 
        
        NvBufferMemUnMap(dmafd, plane, &m_pCompositeMapAddr[plane]);
        m_pCompositeMapAddr[plane] = NULL;        
    } 
    return 0;
}

int NvCompositeStream::ProcStreamData(int dmafd)
{
    CAutoLock l(&m_mutexVectorNextStream);

    int nSize = m_vectorProcStream.size();
    if(nSize <=0 || dmafd <=0)
    {
        return -1;
    }

    FrameInfor infor;
    infor.dmabufFd  = dmafd;

#if defined(PROC_TIME_DEBUG) 
    static long long int s_frame_num=0;
    s_frame_num++;
    /* zero-initialized once; a memset here on every call would wipe the
       running totals accumulated below */
    static double s_use_time[10] = {0.0};
    cv::TickMeter tm;
#endif

    for(int i=0; i< nSize; i++)
    {
        IAVProcessStream* pAVProcStream = m_vectorProcStream.at(i);
        int exportDmafd = 0;
        if(pAVProcStream->ProcessStreamIsNeed(exportDmafd))
        {             
#if defined(PROC_TIME_DEBUG)    
            tm.reset();tm.start();
#endif        
            std::vector<cv::Mat> vecSrcFrame;
            if(exportDmafd != 0)
            {
                ExportDataFromDmafd(dmafd, vecSrcFrame); 
            }
        
            pAVProcStream->ProcessStreamData(NULL, infor, (void*)&vecSrcFrame);
            
            if(exportDmafd != 0)
            {
                ImportDataToDmafd(dmafd, vecSrcFrame);
            }
            
#if defined(PROC_TIME_DEBUG)    
            tm.stop();
            s_use_time[i] +=tm.getTimeMilli();
            if(0==(s_frame_num%RROC_AVG_FRAME_NUMS))
            {
                double avgTime = s_use_time[i]/RROC_AVG_FRAME_NUMS;
                DebugLog(<<"alg:"<<i<<" avgTime:"<<avgTime<<"ms"<<std::endl);                
                s_use_time[i] = 0.0f;                 
            }
#endif
        }        
    } 

    return 0;
    
}

int NvCompositeStream::OutputStreamData(NvBuffer * buffer,const FrameInfor info)
{
    CAutoLock l(&m_mutexVectorNextStream);

    int nSize = m_vectorNextStream.size();
    IAVStream* pNextStream = NULL;
    for(int i=0; i< nSize; i++)
    {
        pNextStream = m_vectorNextStream.at(i);

        pNextStream->PushStreamData(buffer, info);
    }
    return 0;
}

int NvCompositeStream::StreamCreate(void *p)
{
    NvCompositeCreateParams* pCompositeCreateParams = (NvCompositeCreateParams*)p;

    sprintf(pCompositeCreateParams->streamName, "%s", NV_COMPOSITE_STREAM_NAME);

    DebugLog(<<"streamName: "<<pCompositeCreateParams->streamName<<std::endl);

    m_compositeCreateParam  = *pCompositeCreateParams;

    /* init framerate control */
    std::string strName = pCompositeCreateParams->streamName;
    ControlInit(strName, pCompositeCreateParams->nOutputFps);

    std::string strLayout = pCompositeCreateParams->layoutStype;
    
    GenerateCompositeParams(strLayout);

    /* Allocate composited buffer */
    int nWidth  = pCompositeCreateParams->nWidth;
    int nHeight = pCompositeCreateParams->nHeight; 
    
    NvBufferCreateParams input_params = {0};
    input_params.payloadType = NvBufferPayload_SurfArray;
    input_params.width       = nWidth;
    input_params.height      = nHeight;
    input_params.layout      = NvBufferLayout_Pitch;
    input_params.colorFormat = (NvBufferColorFormat)NvBufferColorFormat_YUV420; //NvBufferColorFormat_NV12;
    input_params.nvbuf_tag   = NvBufferTag_VIDEO_CONVERT;

    int nRet = NvBufferCreateEx(&m_compositedFrameFd, &input_params);
    if (-1 == nRet)
        ErrLog(<<"Failed to allocate composited buffer"<<std::endl);

    /* setting statistics params */
    m_fpsStatistics.bEnable          = true;
    m_fpsStatistics.nPeriodFrames    = 1000;
    m_fpsStatistics.strName          = pCompositeCreateParams->streamName;

    for (int i = 0; i < MAX_COMPOSITE_FRAME; i++)
    {        
        m_nvcompositeParams.dst_comp_rect_alpha[i] = 1.0f;
        m_nvcompositeParams.src_comp_rect[i].top   = 0;
        m_nvcompositeParams.src_comp_rect[i].left  = 0;
        m_nvcompositeParams.src_comp_rect[i].width = nWidth;
        m_nvcompositeParams.src_comp_rect[i].height = nHeight;
    } 

    return 0;
}

int NvCompositeStream::StreamStart()
{
    DebugLog(<< "StreamStart begin" << std::endl);
    if(m_bStreamon)
    {
        ErrLog(<<"stream has already started."<<std::endl);
        return -1;
    }

    CThread::StartThread();
    m_bStreamon = true;

    DebugLog(<< "StreamStart end"<< std::endl);
    return 0;
}

int NvCompositeStream::StreamStop()
{
    DebugLog(<< "StreamStop begin" << std::endl);
    if(!m_bStreamon)
    {
        ErrLog(<<"stream has already stopped."<<std::endl);
        return -1;
    }
   
    CThread::shutdown();
    CThread::join();

    m_bStreamon = false;    
    DebugLog(<< "StreamStop end"<< std::endl);
    return 0;
}

/*************************************************************************
Description:
NvCompositeStream may have more than one next stream, so
m_vectorNextStream is used.
*************************************************************************/
int NvCompositeStream::AddNextStream(IAVStream* pStream)
{
    CAutoLock l(&m_mutexVectorNextStream);

    int nSize = m_vectorNextStream.size();
    for(int i=0; i<nSize; i++)
    {
        if(pStream == m_vectorNextStream.at(i))
        {
            ErrLog(<<"The next stream already exists."<<std::endl);
            return -1;
        }
    }

    m_vectorNextStream.push_back(pStream);    
  
    return 0;
}

int NvCompositeStream::PushStreamData(NvBuffer * /*buffer*/, const FrameInfor info)
{
    if(false==m_bStreamon)
    {
        return -1;
    }

    bool bSkipFrame = false;
    int nChid   = info.stream_index;
    int nWinId  = m_nChIdMapToWinId[nChid];
    UpdateStreamChid(nChid);
    
    if(nWinId != -1)
    {   
        /*******************************************************************************
            !!!Warning:
            The first 30 frames need to be dropped to avoid video jitter or frame skipping.        
          *******************************************************************************/        
        m_nvcompositeParams.src_comp_rect[nWinId].width = info.width;
        m_nvcompositeParams.src_comp_rect[nWinId].height = info.height;
        
        m_processFramesArray[nChid]++;
        m_totalProcessFrames++;
        if(m_totalProcessFrames < BEGIN_SKIP_FRAMES)
        {        
            bSkipFrame = true;
        }
        else
        {
            int nSize = m_queSrcNvBufferDmaFd[nWinId].size();
            if(nSize >= QUEUE_THRESHOLD)
            {
                if(0==m_processFramesArray[nChid]%5)
                {
                    /* Discard one frame in every five when the queue exceeds the threshold */
                    DebugLog(<<"winId:"<<nWinId<<", chId:"<<nChid<<", nSize:"<<nSize<<", discard the frame: "<<m_processFramesArray[nChid]<<std::endl);
                    bSkipFrame = true;
                }
            } 
        }

        if(false==bSkipFrame)
        {
            CAutoLock l(&m_MutexSrcNvBuf[nWinId]);   

            int nSrcDmaFd = info.dmabufFd;
            m_queSrcNvBufferDmaFd[nWinId].push(nSrcDmaFd);  
            
            if(false ==m_bFirstFramesReady)
            {
                m_bFirstFramesReady = true;
                ControlReset();
                
                Lock lock(m_mutexNewFrame);
                m_conditionNewFrame.signal();
            }  
        }          
    }

    return 0;
}

int NvCompositeStream::CommonControl(CCmdPacket &cmdPacket)   
{
    int nRet = -1;
    std::string strCmd = cmdPacket.GetCmd();

    if(strCmd == "changeLayout")
    {
        std::string strLayout = cmdPacket.GetAttrib("layout");

        nRet = GenerateCompositeParams(strLayout);        
    }

    return nRet;
}

int NvCompositeStream::AddProcessStream(IAVProcessStream* pIAVProcStream)
{
    CAutoLock l(&m_mutexVectorNextStream);

    int nSize = m_vectorProcStream.size();
    for(int i=0; i<nSize; i++)
    {
        if(pIAVProcStream == m_vectorProcStream.at(i))
        {
            ErrLog(<<"The proc stream already exists."<<std::endl);
            return -1;
        }
    }
    m_vectorProcStream.push_back(pIAVProcStream);    

    return 0;
}

int NvCompositeStream::WillStopStream()
{
    m_bWillStreamEos = true;    

    return 0;
}

void NvCompositeStream::ThreadProcMain(void)
{
    int      nRet;
    int      nSrcDmaFd;
    NvBufferCompositeParams nvcompositeParams;
    int      dmabufFd[MAX_STREAM_CHS];
    bool     bNeedComposite = false;

    DebugLog(<<"NvCompositeStream::ThreadProcMain() begin"<<std::endl);

    while (!CThread::isShutdown())
    {
        if(true == m_bWillStreamEos)
        {
            usleep(200*1000);
            continue;
        }

        if(false==m_bFirstFramesReady)
        {
            Lock lock(m_mutexNewFrame);
            m_conditionNewFrame.wait(m_mutexNewFrame, 1000);
            continue;
        }

        for(int i=0; i < m_nWindowNums; i++)
        {
            if(!m_queSrcNvBufferDmaFd[i].empty())
            {
                {
                    CAutoLock l(&m_MutexSrcNvBuf[i]);
                    nSrcDmaFd = m_queSrcNvBufferDmaFd[i].front();
                    m_queSrcNvBufferDmaFd[i].pop();
                }

                {
                    CAutoLock lock(&m_MutexComposite);
                    SET_BIT(m_nCompositeValue, i);
                }

                m_dmabufFd[i]   = nSrcDmaFd;
            }
        }

        // Composite multiple inputs into one frame
        bNeedComposite = false;
        {
            CAutoLock lock(&m_MutexComposite);
            if(m_nCompositeValue == ((1<<m_nWindowNums)-1))
            {
                bNeedComposite = true;
                memcpy(&nvcompositeParams, &m_nvcompositeParams, sizeof(m_nvcompositeParams));
                memcpy(dmabufFd, m_dmabufFd, sizeof(m_dmabufFd));
            }
        }

        if(bNeedComposite)
        {   
            nRet = NvBufferComposite(dmabufFd, m_compositedFrameFd, &nvcompositeParams);
            if(0==nRet)
            {
                /* update statistics */
                updataFramerateStatistics();

                FrameInfor infor;
                infor.dmabufFd  = m_compositedFrameFd;

                /* alg process */
                ProcStreamData(m_compositedFrameFd);
                
                /* output data to next IAVStream */
                OutputStreamData(NULL, infor);
            }
        }         

        ControlUpdate();        

        CheckStreamChid();        
    }

    DebugLog(<<"NvCompositeStream::ThreadProcMain() end"<<std::endl);
}

NvCompositeStream.cpp (18.3 KB)
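The gating in ThreadProcMain above composites only once every window has contributed a buffer, tracked in a bitmask. The pattern reduces to this standalone sketch (`AllWindowsReady` is a hypothetical name; the macros match the ones in the file):

```cpp
#include <cassert>

#define SET_BIT(a, x)   ((a) |= (1 << (x)))
#define CLEAR_BIT(a, x) ((a) &= (~(1 << (x))))

// True once every one of 'windows' windows has pushed a frame,
// i.e. the low 'windows' bits of 'ready' are all set.
bool AllWindowsReady(unsigned ready, int windows)
{
    return ready == ((1u << windows) - 1u);
}
```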

Your code is only a fragment. Can you share test code that can build, run, and reproduce the issue you mentioned?

The code was written with reference to the sample. It works fine with L4T 32.2. I think the problem is that, in the new version, tegra_multimedia_api's NvBufferComposite does not support the Haikang camera DS-ECD8012-MD. I tried another Haikang camera, DS-2CD2310-I, and it works well with tegra_multimedia_api on JetPack 4.2.2.

Hi,
DeepStream and the Multimedia API (MM SDK) are two different modules. Please create a new thread for this MM SDK issue, thanks.

OK, thank you very much.