Why does DeepStream hang when I push MPEG-4 data?

Hi,

We cannot find the attachment.
Could you try it again?

Thanks.

Hi,
I clicked the attach button, selected the video file, and clicked OK to upload.
However, the page did not indicate whether the upload succeeded or failed. I tried several times with the same result.

I am not sure whether the file upload was successful.

Can you provide an email address? I can send the video by email.

Hi,

We were able to download the attachment and have already shared it with our internal team.
We will update you later.

Thanks for sharing your video.

Hi AastaLLL,

Is there any new progress?

Thanks.

Hi,

Sorry for keeping you waiting.

We are still checking this issue.
We will update you later.

Thanks.

Hi,

The files you shared are NOT in a raw video format.
They are MP4 container files with both audio and video streams, and DeepStream doesn’t support that.

You can create a raw video file from mp4 with this command:

ffmpeg -i mpeg_4.mp4 -vcodec mpeg4 -pix_fmt yuv420p -an -f m4v -b:v 4096k mpeg_4.mpeg4

Thanks.

Hi AastaLLL,

I have already read the video stream data from the video channel with FFmpeg and then send it to DeepStream. I think it is already raw data. You can see that in the code below.

The audio data is not involved: I don’t read or send any audio stream data to DeepStream.

I don’t think this is the cause of the problem. Can you help me analyze it further?

Thank you very much.

bool getData(uint8_t **ppBuf, int *pnBuf)
{
	while (true)
	{
		AVPacket *packet = av_packet_alloc();
		int r = av_read_frame(m_pFormatCtx, packet);
		if (r >= 0)
		{
			if (packet->stream_index == m_videoIndex)
			{
				// Copy the raw video packet into the caller's buffer.
				memcpy(*ppBuf, packet->data, packet->size);
				*pnBuf = packet->size;
				//PrintBuffer(*ppBuf, 20);
				av_packet_unref(packet);
				av_packet_free(&packet);
				return true;
			}
			// Not a video packet (e.g. audio): release it and keep reading.
			av_packet_unref(packet);
			av_packet_free(&packet);
		}
		else
		{
			av_packet_free(&packet);
			return false;
		}
	}
}

Hi,

Could you try the command shared in comment #26 first?
Please convert the video file into raw format and feed it into DeepStream.

Thanks.

Hi,

I want to use the GPU to do the decoding directly.
I do not want to transcode the video to H264 and then decode it.
Transcoding means decoding first and then encoding again; it already includes a decode step, so it is an extra operation that wastes resources.

Thanks.

We will check this internally and update you.
Thanks.

Hi,

Sorry for keeping you waiting.

It’s recommended to use our latest 2.0 SDK.
DeepStream 1.5 won’t be updated; please switch to our new SDK to get future support.

For your use case, it looks like there is a memcpy in getData:

memcpy(*ppBuf, packet->data, packet->size);

The ppBuf argument is set to NULL in our sample. Have you modified it to a valid memory address?
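
To illustrate (a minimal sketch, not the sample code): if the memcpy stays in getData, the buffer behind ppBuf has to be allocated before the call. Something like the following, where MAX_PACKET_SIZE and feedOnePacket are placeholder names of ours:

#include <cstdint>
#include <vector>

// Hypothetical upper bound for a single compressed packet.
static const size_t MAX_PACKET_SIZE = 4 * 1024 * 1024;

void feedOnePacket()
{
	// Pre-allocate the destination so *ppBuf is a valid address
	// when getData does memcpy(*ppBuf, packet->data, packet->size).
	std::vector<uint8_t> buffer(MAX_PACKET_SIZE);
	uint8_t *pBuf = buffer.data();
	int nBuf = 0;

	if (getData(&pBuf, &nBuf))
	{
		// hand pBuf / nBuf to the DeepStream pipeline here
	}
}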

Our suggestion is NOT to modify the getData implementation in dataProvider.h, but rather the loadDataFromFile function:

Original

int nRead = fread(pLoadBuf_, 1, count, fp_);
vCache_.insert(vCache_.end(), &pLoadBuf_[0], &pLoadBuf_[nRead]);

Modify it to read the container file (e.g. MP4) and extract the next frame:

av_read_frame(m_pFormatCtx, packet);
// ... check whether the packet is from the video stream
vCache_.insert(vCache_.end(), &packet->data[0], &packet->data[packet->size]);
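
A rough sketch of that demux-based replacement (illustration only, assuming the container was already opened and the video stream index found; m_pFormatCtx, m_videoIndex, and vCache_ follow the naming above, error handling omitted):

extern "C" {
#include <libavformat/avformat.h>
}

// One-time setup, e.g. in the data provider's constructor:
// avformat_open_input(&m_pFormatCtx, "input.mp4", nullptr, nullptr);
// avformat_find_stream_info(m_pFormatCtx, nullptr);
// m_videoIndex = av_find_best_stream(m_pFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);

// Replacement for the fread() call: append the next video packet to the cache.
AVPacket *packet = av_packet_alloc();
while (av_read_frame(m_pFormatCtx, packet) >= 0)
{
	if (packet->stream_index == m_videoIndex)
	{
		// Append the compressed video bitstream of this packet.
		vCache_.insert(vCache_.end(), packet->data, packet->data + packet->size);
		av_packet_unref(packet);
		break;	// one packet per call, mirroring the original fread loop
	}
	av_packet_unref(packet);	// skip audio/other streams
}
av_packet_free(&packet);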

There are several pieces that should be validated: the container demux logic, the input video, and the network.

You can try converting the video to H264 and running it through the same DeepStream pipeline.
If that runs without error, then the issue is likely in the demux logic that inserts video frames into vCache_, as mentioned above.

Thanks.