H264 image decoder fails to decode on ROS

Hi,

I am trying to write an H264 image decoder that takes in ROS messages in the form of compressed images. However, the decoder is unable to create the NvMediaImageDecoder instance from the ROS stream. After inspecting the sensor_msg data from ROS, I noticed that the first 16 bytes of every message remain unchanged. Are there any possible reasons why the cbBeginSequence callback is not being called after the data is parsed? The same set of messages can be decoded and viewed with an OpenCV decoder, so the bitstream itself does not appear to be invalid. I'd appreciate any help/advice!
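For reference, a minimal sanity check along the following lines can be run on the incoming messages before they are handed to the parser (this is only a sketch; the topic name and the assumption that msg->data carries a raw Annex-B elementary stream are mine). It prints the NAL unit types found at each start code; if no SPS (type 7) / PPS (type 8) ever appears, that would explain why cbBeginSequence is never triggered:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/CompressedImage.h>
#include <cstdio>

// Print the NAL unit types found at Annex-B start codes (00 00 01 / 00 00 00 01).
// The parser normally fires cbBeginSequence once it sees an SPS (type 7), so a
// stream whose messages never contain type 7/8 NAL units would explain the symptom.
void inspectMessage(const sensor_msgs::CompressedImage::ConstPtr& msg)
{
    const std::vector<uint8_t>& d = msg->data;
    for (size_t i = 0; i + 4 < d.size(); ++i)
    {
        bool sc4 = (d[i] == 0 && d[i + 1] == 0 && d[i + 2] == 0 && d[i + 3] == 1);
        bool sc3 = (d[i] == 0 && d[i + 1] == 0 && d[i + 2] == 1);
        if (sc3 || sc4)
        {
            uint8_t nalType = d[i + (sc4 ? 4 : 3)] & 0x1F;  // 7 = SPS, 8 = PPS, 5 = IDR
            std::printf("offset %zu: NAL type %u\n", i, nalType);
        }
    }
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "h264_inspect");
    ros::NodeHandle nh;
    // "camera/h264" is a placeholder topic name.
    ros::Subscriber sub = nh.subscribe("camera/h264", 10, inspectMessage);
    ros::spin();
    return 0;
}
```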

Dear hleong,

There is not enough information in this post for us to help, e.g. we don’t know which device generated the encoded content, whether you have properly extracted the elementary stream, etc.
Regarding OpenCV: was the encoded stream perhaps generated on some other system (e.g. a PC?) with an OpenCV-based application that inserts some encapsulation around the actual bitstream? Thanks.

Hi Steve,

The device which generated the encoded content was a PX2, and the encoding was performed using a Driveworks sample. The PX2 has Ubuntu 16.04 installed onboard. From my understanding, each ROS message should contain a full frame’s worth of encoded data for decoding. Would it have made a difference to the bits in the bitstream if the Driveworks sample had converted the YUV image streamed from the GMSL camera to an RGBA format before the encoding process? And does the NvImageDecoder support decoding H264 to RGBA before converting the NvImage to YUV?

In addition, the encoded stream could be decoded with an OpenCV-based decoder that uses the libav tools.
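That check was roughly of the following form (a simplified sketch, assuming libavcodec 4.x, with older versions also needing avcodec_register_all(), and assuming each message payload is one complete access unit; error handling trimmed):

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstring>
#include <cstdio>
#include <vector>

// Try to decode one H.264 access unit (raw Annex-B bytes) with libavcodec.
// Returns true if a picture comes out, indicating the payload itself is a valid bitstream.
bool decodeOneFrame(const std::vector<uint8_t>& payload)
{
    const AVCodec* codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    if (!codec || !ctx || avcodec_open2(ctx, codec, nullptr) < 0)
        return false;

    AVPacket* pkt = av_packet_alloc();
    av_new_packet(pkt, static_cast<int>(payload.size()));
    std::memcpy(pkt->data, payload.data(), payload.size());

    AVFrame* frame = av_frame_alloc();
    bool gotPicture = false;
    if (avcodec_send_packet(ctx, pkt) == 0)
    {
        avcodec_send_packet(ctx, nullptr);  // flush so the single frame is emitted
        gotPicture = (avcodec_receive_frame(ctx, frame) == 0);
        if (gotPicture)
            std::printf("decoded %dx%d\n", frame->width, frame->height);
    }

    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    return gotPicture;
}
```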

Thanks so much for the help!

Dear hleong,

Could you please let us know the following?

  • Which camera you had connected to your DPX2 for capture & encoding
  • The full command line used to invoke the Driveworks sample used for encoding.

Would it have made a difference to the bits in the bitstream if the Driveworks sample had converted the YUV image streamed from the GMSL camera to an RGBA format before the encoding process?
→ No, the NvMedia encoder supports taking input frames in YUV color formats, as described in the programmers’ guide.

Does the NvImageDecoder support decoding H264 to RGBA before converting the NvImage to YUV?
→ The decoder’s output is generated in a YUV color format. The “nvmimg_play” sample is available as a reference:
https://docs.nvidia.com/drive/active/5.0.10.3L/nvvib_docs/index.html#page/NVIDIA%2520DRIVE%2520Linux%2520SDK%2520Development%2520Guide%2FNvMedia%2520Sample%2520Apps%2Fnvmedia_nvmimg_play.html%23

Dear Steve,

The camera that was connected to the DPX2 was a GMSL camera. Also, apologies for the lack of clarity on the encoding process earlier: the encoding was performed by code integrated with ROS, adapted from the Driveworks camera_gmsl sample. In that node, the serializer flag is set to h264 when the sensors are initialized with dwSAL_createSensor(), and frames are read using dwSensorCamera_readFrame().
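In outline, the encoded frames are then published as sensor_msgs/CompressedImage messages, roughly like the sketch below (simplified; the topic name, the format tag, and the way the encoded buffer is handed over from the serializer are placeholders rather than the exact code):

```cpp
#include <ros/ros.h>
#include <sensor_msgs/CompressedImage.h>
#include <cstdint>
#include <cstddef>

// Publish one encoded H.264 access unit as a CompressedImage message.
// The encoded bytes are expected to be a raw Annex-B elementary stream with
// no extra container/header in front of them.
void publishEncodedFrame(ros::Publisher& pub, const uint8_t* data, size_t size)
{
    sensor_msgs::CompressedImage msg;
    msg.header.stamp = ros::Time::now();
    msg.format = "h264";                 // free-form tag, not interpreted by ROS
    msg.data.assign(data, data + size);
    pub.publish(msg);
}

// Elsewhere in the node (placeholder topic name):
// ros::Publisher pub = nh.advertise<sensor_msgs::CompressedImage>("camera/h264", 10);
```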

Dear hleong,

Thank you for your update.
So may I know if this topic is solved?

Hi Steve, unfortunately the topic has not been solved: the h264 decoder created from the nvm_img sample was not able to decode the h264 images encoded with the serializer in the camera_gmsl sample. Is there any way to check the h264 output from the serializer?

Dear hleong,

Could you please share the command line given to the unmodified Driveworks (or NvMedia) encoder sample app, not to any custom app?
“camera_gmsl” is the Driveworks encoding sample app. Could you please share the command line you used to invoke it? Thanks.

Hi Steve, unfortunately the issue occurred with the custom app built by modifying the “camera_gmsl” Driveworks encoding sample, so no command line was issued for the recording. However, we have tested the decoder app with the h264 samples in the data files that came with Driveworks, and it decodes those correctly, so we concluded that the decoding issue lies in the interface with the ROS callbacks.
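To narrow this down further on our side, one check (a rough sketch; the topic name is a placeholder) is to dump the raw payloads received in the ROS callback to a single .h264 file and feed that file to the unmodified nvmimg_play sample or to ffprobe, which separates bitstream problems from problems in how the decoder is driven from the callback:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/CompressedImage.h>
#include <fstream>

// Append every received payload to one raw elementary-stream file. If the
// resulting out.h264 plays back with the stock nvmimg_play sample (or ffprobe
// reports a valid H.264 stream), the problem is in the decoder integration
// with the ROS callback rather than in the data itself.
std::ofstream g_out("out.h264", std::ios::binary);

void dumpCallback(const sensor_msgs::CompressedImage::ConstPtr& msg)
{
    g_out.write(reinterpret_cast<const char*>(msg->data.data()), msg->data.size());
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "h264_dump");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("camera/h264", 100, dumpCallback);  // placeholder topic
    ros::spin();
    return 0;
}
```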