R28.2.1: decodeToFd leaks memory?!

Hello,

I have a new issue.
I ported my TX1 R24.2.3 code to TX2 R28.2.1 … in a few seconds … --’

So I kept the same V4L2 MMAP capture, and moved from NvVideoConverter to NvBufferTransform.

Everything works nicely: I can produce RGB32 to use the NVOSD functions, then NV12 and an H.264 recording.

in_buffer is freed and reallocated for each new V4L2 buffer.
But when I check for a memory leak: at 4K, it loses about 5 MB/s. Disabling the stages one by one:

  • disable recording => leak
  • disable NvBufferTransform => leak
  • disable the Decoderjpeg function => no leak
  • disable decodeToFd: comment out the “m_JpegDec->decodeToFd” line and return 1 from Decoderjpeg => no leak
    With only the V4L2 capture ioctl(_ctx->cam_fd, VIDIOC_DQBUF, &_ctx->cam_buf) => no leak
int MyEncoder::Decoderjpeg( Context_T *_ctx , uint64_t _length )
{
    int fd = -1;
    int jret = -1;
    uint32_t pixfmt = 0, width = 0, height = 0;

    if( 0 < _length ) {
        /* Drop the previous frame copy before allocating the new one. */
        if( _ctx->in_buffer != nullptr ) {
            free(_ctx->in_buffer);
            _ctx->in_buffer = nullptr;
        }

        /* Copy the MJPEG frame out of the V4L2 mmap buffer. */
        _ctx->in_buffer = (unsigned char *) malloc(_length + 1);
        memset(_ctx->in_buffer, '\0', _length + 1);
        memcpy(_ctx->in_buffer, _ctx->buffers[_ctx->cam_buf.index].cam_start, _length);
        _ctx->in_file_size = _length;

        jret = _ctx->m_JpegDec->decodeToFd(fd, _ctx->in_buffer, _ctx->in_file_size,
                                           pixfmt, width, height);
        if( jret < 0 ) {
            LOG_WARN(" CAMERA [%li] : decodeToFd failed ", _ctx->m_Id_Camera );
        }
    }
    else {
        LOG_WARN(" CAMERA [%li] : NvBufferMemMap is NULL ", _ctx->m_Id_Camera );
    }
    return fd;
}
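As a side note on the copy path: freeing and reallocating in_buffer on every frame adds allocator churn that makes leak hunting noisier than it needs to be. A grow-only scratch buffer is one way to take the per-frame malloc/free pair out of the picture; this is only a sketch under my own naming (ScratchBuffer is not an MMAPI type):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Grow-only scratch buffer: reallocates only when a frame is larger than
// anything seen so far, instead of a free()/malloc() pair per frame.
struct ScratchBuffer {
    uint8_t *data = nullptr;
    size_t   capacity = 0;

    // Ensure at least `length` bytes are available; never shrinks.
    bool reserve(size_t length) {
        if (length <= capacity)
            return true;                               // already big enough
        uint8_t *p = static_cast<uint8_t *>(std::realloc(data, length));
        if (p == nullptr)
            return false;                              // old block still valid
        data = p;                                      // contents preserved by realloc
        capacity = length;
        return true;
    }

    ~ScratchBuffer() { std::free(data); }
};
```

Decoderjpeg would then call reserve(_length + 1) once per frame and memcpy into data. This does not touch whatever decodeToFd leaks internally, but it removes one suspect from the list when watching RSS.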

Here is the function that prepares the mmap buffers:

bool MyEncoder::PrepareBufferParse(Context_T * _ctx ) {
    
    bool lret = true;
	/* Buffer allocation
	 * Buffer can be allocated either from capture driver or
	 * user pointer can be used
	 */
	/* Request for MAX_BUFFER input buffers. As far as Physically contiguous
	 * memory is available, driver can allocate as many buffers as
	 * possible. If memory is not available, it returns number of
	 * buffers it has allocated in count member of reqbuf.
	 * HERE count = number of buffer to be allocated.
	 * type = type of device for which buffers are to be allocated.
	 * memory = type of the buffers requested i.e. driver allocated or
	 * user pointer */
	memset(&_ctx->cam_req, 0, sizeof (_ctx->cam_req));
    _ctx->cam_type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    
	_ctx->cam_req.count = TEST_BUFFER_NUM;
	_ctx->cam_req.type = _ctx->cam_type;
	_ctx->cam_req.memory = V4L2_MEMORY_MMAP;
    
	if (ioctl(_ctx->cam_fd, VIDIOC_REQBUFS, &_ctx->cam_req) < 0) {
		LOG_ERROR("CAPTURE: VIDIOC_REQBUFS");
        LOG_ERROR("Failed to request v4l2 buffers: %s (%d)", strerror(errno), errno);
		lret = false;
	}
	/* Mmap the buffers
	 * To access driver allocated buffer in application space, they have
	 * to be mmapped in the application space using mmap system call */
	for (uint32_t i = DISABLE; i < _ctx->cam_req.count; i++)	{
        
        if( lret ) {
            memset(&_ctx->cam_buf,DISABLE, sizeof(_ctx->cam_buf));
            _ctx->cam_buf.type = _ctx->cam_type;
            _ctx->cam_buf.index = i;
            _ctx->cam_buf.memory = V4L2_MEMORY_MMAP;
            int  err_chk;
            if((err_chk=ioctl(_ctx->cam_fd, VIDIOC_QUERYBUF, &_ctx->cam_buf)) < 0) {
                LOG_ERROR("CAPTURE: VIDIOC_QUERYBUF");
                LOG_ERROR("Failed to request v4l2 buffers: %s (%d)",
                            strerror(errno), errno);
                lret = false;
            }
            else {
                
                LOG_INFO("CAPTURE: Prepare MMAP buffer");
                _ctx->buffers[i].cam_offset = (size_t) _ctx->cam_buf.m.offset;
                _ctx->buffers[i].cam_length = _ctx->cam_buf.length;
                _ctx->buffers[i].cam_start = (unsigned char *)mmap(NULL, _ctx->cam_buf.length,
                                PROT_READ | PROT_WRITE, MAP_SHARED, _ctx->cam_fd,
                                _ctx->buffers[i].cam_offset);
                
                if (_ctx->buffers[i].cam_start == MAP_FAILED) {
                    LOG_ERROR("Cannot mmap = %d buffer\n", i);
                    lret = false;
                }
                else {
                    /* Enqueue buffers
                    * Before starting streaming, all the buffers need to be
                    * enqueued in the driver's incoming queue. These buffers
                    * will be used by the driver for storing captured frames. */
                    if(ioctl(_ctx->cam_fd, VIDIOC_QBUF, &_ctx->cam_buf) < 0) {
                        LOG_ERROR("CAPTURE: VIDIOC_QBUF :" );
                        LOG_ERROR("- Failed to request v4l2 buffers: %s (%d)",strerror(errno), errno);
                        lret = false;
                    }
                }
            }
        }	
	}
	return lret;

}
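On the "have I forgotten a mmap buffer" question below: every buffer successfully mmapped in PrepareBufferParse needs a matching munmap() when capture stops, or the mappings pile up across stop/start cycles. A hedged sketch of the teardown; CamBuffer here is a hypothetical mirror of the cam_start/cam_length fields used above, not the real Context_T type:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <sys/mman.h>

// Hypothetical mirror of the per-buffer fields used in PrepareBufferParse.
struct CamBuffer {
    unsigned char *cam_start = nullptr;
    size_t cam_length = 0;
    size_t cam_offset = 0;
};

// Unmap every buffer that was successfully mmapped; safe to call twice.
bool ReleaseCamBuffers(CamBuffer *buffers, uint32_t count) {
    bool ok = true;
    for (uint32_t i = 0; i < count; ++i) {
        if (buffers[i].cam_start != nullptr &&
            buffers[i].cam_start != static_cast<unsigned char *>(MAP_FAILED)) {
            if (munmap(buffers[i].cam_start, buffers[i].cam_length) != 0)
                ok = false;                    // keep going, report at the end
            buffers[i].cam_start = nullptr;    // makes a second call a no-op
            buffers[i].cam_length = 0;
        }
    }
    return ok;
}
```

Calling something like this before closing cam_fd (and before any new VIDIOC_REQBUFS round) keeps restarts from leaking address space; since it nulls each entry, a double call is harmless.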

That works on TX1 …

Thanks for the help

Hi Syd,
Just as I have said many times in other threads, we need a test sample and clear steps to reproduce the issue first; otherwise we cannot check further.

Please make a patch against an existing sample so that we can build and run it. Also, how do you check the memory leak: via top, pmap, or another tool?

Yes, I am trying to make a test sample from the MMAPI samples:

  • First, to build a unit test to compare the code between TX1 and TX2.
  • Second, it makes it easier to share an example.

I observed this memory leak with htop (a top fork).
Valgrind would be useful once I have a test sample. For now there is too much log noise from our application and no relevant trace…
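To put a number on the leak rate without waiting for a Valgrind-friendly sample, the process RSS can be polled straight from /proc (Linux only); a minimal sketch, independent of the MMAPI code:

```cpp
#include <cassert>
#include <cstdlib>
#include <fstream>
#include <string>

// Return this process's resident set size in kB, or -1 on failure.
// /proc/self/status carries a "VmRSS:  <n> kB" line on Linux.
long CurrentRssKb() {
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line)) {
        if (line.compare(0, 6, "VmRSS:") == 0)
            return std::strtol(line.c_str() + 6, nullptr, 10);
    }
    return -1;
}
```

Logging CurrentRssKb() once per second next to the frame counter shows whether the growth really is about 5 MB/s, and toggling pipeline stages then shows which one moves the slope.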

I will come back to you as soon as I have a sample to share, or have found a reason for this problem.

I am back to share a sample of MMAPI code:

https://github.com/Syd76/NV_MMAPI_camera_v4l2_mjpg

There are options:
“-m” to enable each streaming step: 0 for capture only, 3 for the full pipeline up to display.
“-l” to enable valgrind (see README.md).

I see the memory leak in the decoder process with my application, with this test, and with a See3CAM sample.
There is a directory in the repository with many logs.

Have I forgotten a few mmap buffers or fds?

Thanks

Syd

Hi Syd,
We have a sample for MJPG decoding:
https://devtalk.nvidia.com/default/topic/1042843/jetson-tx2/decoding-mjpeg-stream-using-the-nvjpegdecoder/post/5290172/#5290172

We don’t see a memory leak with the sample. Can you compare it with yours?

Thanks !

I will apply the patch and try to understand how the new API operates.

I followed many posts from the decoding-mjpeg-stream-using-the-nvjpegdecoder thread.

… and found https://devtalk.nvidia.com/default/topic/1035699/jetson-tx1/premature-end-of-jpeg-file-tegra_multimedia-master-samples-06_jpeg_decode-/post/5263099/#5263099

I tried the patch (sample) and updated libnvjpeg.so … that works for the patch, for my sample, and for a unit test.

But this is on R28.2.1, and I used a debootstrap solution, like the Abaco Git, to make a lightweight system:
https://github.com/Abaco-Systems/jetson-tx2-sample-filesystems

I use the “Tegra_Linux_Sample-Root-Filesystem_R28.2.1_aarch64” archive.

L4T Sample Root File System  28.2.1 2018/06/13

I am trying to understand why I had a wrong libnvjpeg.so. @infontz6r used the R28.2.1 update to fix the issue.

Now I can rebuild a system properly.

Thanks for help !