Cannot run the jpeg_encode example from Tegra MMAPI

Hello,

I am trying to leverage the Tegra MMAPI samples to save JPEG images from my application. Ultimately, I want to solve the problem described in this thread: Seeking suggestions of faster way of saving images with jetson-inference camera processing - Jetson TX2 - NVIDIA Developer Forums

I tried to run the compiled sample in

tegra_multimedia_api/samples/05_jpeg_encode

with the following command:

./jpeg_encode ../../data/Picture/nvidia-logo.jpg 1920 1080 dec-logo.jpg

I get the following output on the command line, and no output file is produced in the folder:

Failed to query video capabilities: Inappropriate ioctl for device
libv4l2_nvvidconv (0):(774) (INFO) : Allocating (1) OUTPUT PLANE BUFFERS Layout=0
libv4l2_nvvidconv (0):(790) (INFO) : Allocating (1) CAPTURE PLANE BUFFERS Layout=1
Could not read a complete frame from file
App run was successful

What kind of input does it take?

After going through the source code, I generated a raw I420 test file using

gst-launch-1.0 videotestsrc num-buffers=1 ! 'video/x-raw, width=640, height=480, format=I420' ! filesink location=test.yuv

and then ran:

./jpeg_encode test.yuv 640 480 test-out.jpg --encode-buffer

This works, and I can see the output file, so the sample expects raw YUV (I420) input rather than a JPEG.
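A quick way to sanity-check a raw input file before passing it to jpeg_encode is to compare its size against the expected I420 frame size (width x height x 1.5 bytes). This is just a sketch; the test.yuv path refers to the file generated above:

```shell
# One raw I420 frame = full-resolution Y plane + quarter-resolution U and V planes,
# i.e. width * height * 3 / 2 bytes.
WIDTH=640
HEIGHT=480
EXPECTED=$((WIDTH * HEIGHT * 3 / 2))   # 460800 bytes for 640x480
echo "Expected bytes per frame: $EXPECTED"

# Compare against the generated file; a mismatch between file size and
# frame size is what triggers "Could not read a complete frame from file".
# ACTUAL=$(stat -c %s test.yuv)
# [ "$ACTUAL" -eq "$EXPECTED" ] && echo "size OK"
```

The original failure makes sense in this light: nvidia-logo.jpg is compressed JPEG data, far smaller than the 1920 x 1080 x 1.5 bytes the sample tried to read.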

Thanks!

That being said, what is the difference between using the jpeg_encode sample as above and the nvjpegenc element in a GStreamer pipeline?
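For comparison, the same test-pattern encode via nvjpegenc might look like the following (a sketch, assuming the NVIDIA Jetson GStreamer plugins are installed; the filename is arbitrary):

```shell
# Encode one I420 test frame to JPEG inside a GStreamer pipeline
# instead of calling the MMAPI sample on a raw file.
gst-launch-1.0 videotestsrc num-buffers=1 \
  ! 'video/x-raw, width=640, height=480, format=I420' \
  ! nvjpegenc \
  ! filesink location=test-out-gst.jpg
```

The practical difference is mainly the level of control: the MMAPI sample drives the encoder directly from C++ code, while nvjpegenc wraps it as a pipeline element.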

Found my answer in the MMAPI reference docs: