Hello,
I am trying to leverage the Tegra MMAPI samples for saving JPEG images from my application. Ultimately, I want to solve the problem described in this thread: Seeking suggestions of faster way of saving images with jetson-inference camera processing - Jetson TX2 - NVIDIA Developer Forums
I tried to run the compiled sample in
tegra_multimedia_api/samples/05_jpeg_encode
with the following command:
./jpeg_encode ../../data/Picture/nvidia-logo.jpg 1920 1080 dec-logo.jpg
I get the following output on the command line, and nothing is produced in the corresponding folder:
Failed to query video capabilities: Inappropriate ioctl for device
libv4l2_nvvidconv (0):(774) (INFO) : Allocating (1) OUTPUT PLANE BUFFERS Layout=0
libv4l2_nvvidconv (0):(790) (INFO) : Allocating (1) CAPTURE PLANE BUFFERS Layout=1
Could not read a complete frame from file
App run was successful
What kind of input does it take?
After going through the source code, I generated a test file using
gst-launch-1.0 videotestsrc num-buffers=1 ! 'video/x-raw, width=640, height=480, format=I420' ! filesink location=test.yuv
and then ran the following:
./jpeg_encode test.yuv 640 480 test-out.jpg --encode-buffer
This works well and I can see the output file.
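For reference, this is also why the first attempt failed: jpeg_encode reads a single raw I420 (YUV420) frame, not a JPEG, and an I420 frame occupies width x height x 3/2 bytes, which a compressed JPEG file is far too small to supply. A minimal sketch (the helper names are mine, not from the sample) that checks whether a file is large enough to hold one complete I420 frame:

```python
import os

def i420_frame_size(width: int, height: int) -> int:
    # I420 layout: full-resolution Y plane plus quarter-resolution
    # U and V planes => w*h + 2*(w/2 * h/2) = w*h*3/2 bytes
    return width * height * 3 // 2

def holds_complete_frame(path: str, width: int, height: int) -> bool:
    # jpeg_encode reads one whole raw frame; the input file must be
    # at least this many bytes or the read fails
    return os.path.getsize(path) >= i420_frame_size(width, height)
```

For the 640x480 test file above, the expected size is 460800 bytes; at 1920x1080 a frame is 3110400 bytes, so feeding a small JPEG triggers the "Could not read a complete frame from file" message.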
Thanks!
That being said, what is the difference between using the jpeg_encode sample as above and the nvjpegenc element in a GStreamer pipeline?
Found my answer in the MMAPI reference docs:
Welcome to the NVIDIA Multimedia API Reference. Documentation is preliminary and subject to change.
[u]Multimedia API is a collection of lower-level APIs that support flexible application development. The lower-level APIs enable flexibility by providing better control over the underlying hardware blocks.
The Multimedia API interface is separate from the GStreamer framework, which provides high-level APIs. That framework is included in current and previous releases.[/u]
The Multimedia API provides libraries, header files, API documentation and sample source code for developing embedded applications for the Jetson platform.
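For comparison, the same test-pattern-to-JPEG step can be done entirely at the GStreamer level with the hardware-accelerated nvjpegenc element. This pipeline is a sketch assuming a Jetson board where nvjpegenc is installed (element availability and caps depend on the JetPack/L4T release):

```shell
gst-launch-1.0 videotestsrc num-buffers=1 ! \
  'video/x-raw, width=640, height=480, format=I420' ! \
  nvjpegenc ! filesink location=test-out.jpg
```

The trade-off follows from the docs quoted above: the GStreamer route is less code, while the MMAPI route gives finer control over buffer allocation and the hardware encoder itself.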