Hello, I capture a real-time stream over V4L2 and feed it to the decoder, following the sample unittest_samples/decoder_unit_sample. In the dp_event function of the decoder's capture thread, the VIDIOC_DQEVENT ioctl always returns -1. How can I solve this problem?
We have the samples demonstrating hardware video decoding:
Please check if you can reproduce the issue with either sample, and share the steps with us so that we can investigate.
I wrote my own V4L2-based real-time stream capture program and use /usr/src/jetson_multimedia_api/samples/unittest_samples/decoder_unit_sample/ to decode. The only difference between my decoder and this sample is the data size on the decoder output plane.
Do you want me to post the program?
You are from Taiwan, right? Can you communicate in Chinese?
It would be better if you could patch decoder_unit_sample so that we can build and run it to reproduce the issue. Please help do this.
I have an external USB camera whose output is an H264 data stream. I capture that stream and feed the H264 video data to the decoder. Do you have a camera test program available?
ret_val = v4l2_ioctl(ctx->dec_fd, VIDIOC_DQEVENT, &event) gets stuck in the decoder capture thread.
The h264 stream may be invalid; probably SPS/PPS is missing. Please dump the stream and try to decode it with 00_video_decode or decoder_unit_sample, to check whether the stream is decodable.
The stream I saved can be decoded with decoder_unit_sample. Regarding SPS/PPS: my camera outputs SPS/PPS continuously, so it is impossible that the SPS/PPS frames are lost every time.
We would suggest comparing the working and non-working cases to clarify the issue. Maybe you don't buffer enough stream data when feeding the decoder, or maybe SPS/PPS is not placed at the beginning.
I compared the working and non-working cases.
In the working case, each decoder input buffer is filled with up to 4000000 bytes, and the input file is fed in a loop until it is fully consumed.
In the non-working case, the amount queued to the decoder input each time depends on the size of one captured video frame, roughly 0-20000 bytes, fed in a capture loop. Does this matter?
However, when setting the decoder format, I set the plane's sizeimage to 4000000.
There are two modes in the reference sample:
How to continuously feed h264 data to HW decoder - #3 by DaneLLL
Please try --input-nalu
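For reference, --input-nalu feeds one NAL unit per buffer instead of fixed-size chunks. A minimal Annex-B splitter sketch of that idea; the helper names are my own, not taken from the sample:

```c
#include <stddef.h>

/* Return the offset of the next Annex-B start code (00 00 01 or 00 00 00 01)
 * at or after `from`, or -1 if none is found. */
static long find_start_code(const unsigned char *buf, size_t len, size_t from)
{
    for (size_t i = from; i + 3 <= len; i++) {
        if (buf[i] == 0 && buf[i + 1] == 0 &&
            (buf[i + 2] == 1 ||
             (i + 4 <= len && buf[i + 2] == 0 && buf[i + 3] == 1)))
            return (long)i;
    }
    return -1;
}

/* Extract one complete NAL unit starting at `*pos` (which must point at or
 * before a start code); advances *pos past the extracted unit and returns
 * its length (including the leading start code), or -1 when the buffer is
 * exhausted.  One NAL unit per v4l2 buffer is what --input-nalu mode does. */
long next_nalu(const unsigned char *buf, size_t len, size_t *pos)
{
    long start = find_start_code(buf, len, *pos);
    if (start < 0)
        return -1;
    long end = find_start_code(buf, len, (size_t)start + 3);
    if (end < 0)
        end = (long)len;
    *pos = (size_t)end;
    return end - start;
}
```

Each returned span (start code plus payload) would be copied into one output-plane buffer before QBUF.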
You mean that although I filled the buffers of the decoder output plane, the decoder did not detect a frame, so it never raised the resolution-change event, the decoder could not get the resolution of the stream, and that produced an error like mine?
Your thought looks reasonable. The resolution information is stored in the SPS/PPS, which is generally placed at the beginning of an h264 stream. Probably it is not correctly fed to the decoder in the non-working case.
I saved the data queued to the decoder output plane to a file, and found that there is an SPS/PPS every 30 frames, so the decoder should be able to recognize it normally.
You may discard the data before the first SPS/PPS, so that it is fed at the very beginning. Please refer to this patch for identifying the SPS:
Xavier AGX : Video encoding crash - #15 by DaneLLL
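The idea in the referenced patch is to locate the SPS by its NAL type (the low 5 bits of the byte after the start code; 7 = SPS, 8 = PPS for H.264). A self-contained sketch with hypothetical helper names:

```c
#include <stddef.h>

/* nal_unit_type is the low 5 bits of the first byte after the start code:
 * 7 = SPS, 8 = PPS in H.264 Annex B. */
static int nal_type_at(const unsigned char *buf, size_t len, size_t sc)
{
    size_t hdr = (buf[sc + 2] == 1) ? sc + 3 : sc + 4; /* 3- or 4-byte code */
    return (hdr < len) ? (buf[hdr] & 0x1F) : -1;
}

/* Return the byte offset of the first SPS NAL unit, or -1 if none.
 * Everything before this offset can be discarded before the first QBUF,
 * so the decoder sees SPS/PPS at the very beginning of the stream. */
long first_sps_offset(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i + 4 <= len; i++) {
        int four  = (buf[i] == 0 && buf[i+1] == 0 && buf[i+2] == 0 && buf[i+3] == 1);
        int three = (buf[i] == 0 && buf[i+1] == 0 && buf[i+2] == 1);
        if ((three || four) && nal_type_at(buf, len, i) == 7)
            return (long)i;
    }
    return -1;
}
```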
I have checked that the first bytes queued to the decoder input are an SPS frame; this frame enters the queue through V4L2 VIDIOC_QBUF, and after that an SPS frame is queued once per second (every 30 frames). The difference between my program and the sample (samples/unittest_samples/decoder_unit_sample) is: the sample sets the input-plane data size to 4000000 and also fills each queued buffer with 4000000 bytes until the last read of the input file, whereas I set the plane's maximum byte size to 4000000 when setting the format, but each buffer I actually queue holds only one frame of data, which is much smaller than that value. Will that cause a problem?
Does the size of the decoder input buffer have anything to do with this problem?
Do you have any other suggestions about this problem?
It is confirmed that the dumped h264 stream can be decoded by the 00_video_decode and decoder_unit_sample samples. In your implementation, you feed the camera stream to the decoder and it fails. As a next step, you may try feeding the dumped h264 stream to the decoder from your program and check whether it works. If you follow 00_video_decode and decoder_unit_sample, it should work the same as the two reference samples.