Hello,
I am currently trying to JPEG-compress live camera streams from a V4L2 source. I initially tried to use GStreamer pipelines for this, with the NVIDIA JPEG encoder elements. Unfortunately, due to a hardware bottleneck we've found on the particular AGX machine we purchased, we're fairly certain we can't feed the streams directly into the encoder without being limited to 6 cameras (we need 7).
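For reference, the pipeline we had been testing looked roughly like this (device node and caps are placeholders, not our exact settings):

gst-launch-1.0 v4l2src device=/dev/video0 ! \
    nvvidconv ! 'video/x-raw(memory:NVMM), format=I420' ! \
    nvjpegenc ! multifilesink location=frame_%05d.jpg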
I've written a wrapper class around the NvJPEGEncoder class, and I have an image in packed RGB (an array where every 3 bytes are one pixel's R, G, and B values, one byte each). Unfortunately, when I run the code I get an immediate error saying:
[ERROR] (src/jetson/NvBuffer.cpp:428) Unsupported pixel format 859981650
[ERROR] (src/jetson/NvBuffer.cpp:226) Buffer 0, Plane 4 already allocated
[ERROR] (src/jetson/NvJpegEncoder.cpp:177) <jpenenc> Buffer not in proper format
For what it's worth, 859981650 is 0x33424752, the FOURCC for V4L2_PIX_FMT_RGB24, so it looks like NvBuffer is rejecting the pixel format itself. Relevant source code:
JpegEncoder::JpegEncoder(uint32_t quality)
    : quality_(quality), jpegenc_(NvJPEGEncoder::createJPEGEncoder("jpenenc")) {
    if (!jpegenc_) {
        throw std::runtime_error("Could not create Jpeg Encoder");
    }
}
std::size_t JpegEncoder::encode(const Image& image, rust::Slice<uint8_t> buffer) const {
    std::size_t out_buf_size = buffer.length();

    // Allocate a single-plane RGB24 buffer and copy the packed pixels in.
    NvBuffer nvbuffer(V4L2_PIX_FMT_RGB24, image.width, image.height, 0);
    nvbuffer.allocateMemory();
    std::memcpy(nvbuffer.planes[0].data, image.data.data(),
                std::min((size_t)nvbuffer.planes[0].length, image.data.size()));

    auto data = buffer.data();
    if (jpegenc_->encodeFromBuffer(nvbuffer, JCS_YCbCr, &data, out_buf_size, quality_) < 0) {
        throw std::runtime_error("Error while encoding from buffer");
    }
    return out_buf_size;
}
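In case it helps anyone answering: since the Multimedia API JPEG samples all seem to feed the encoder V4L2_PIX_FMT_YUV420M, the fallback I've been sketching is a CPU-side conversion before encoding. Untested sketch below; rgb24_to_yuv420m is my own helper name, it assumes even dimensions, and it uses BT.601 video-range coefficients:

static void rgb24_to_yuv420m(const uint8_t* rgb, NvBuffer& buf,
                             uint32_t width, uint32_t height) {
    // Untested sketch: pack RGB24 into the three YUV420M planes of `buf`,
    // which I'd allocate as NvBuffer(V4L2_PIX_FMT_YUV420M, width, height, 0).
    NvBuffer::NvBufferPlane& y = buf.planes[0];
    NvBuffer::NvBufferPlane& u = buf.planes[1];
    NvBuffer::NvBufferPlane& v = buf.planes[2];
    for (uint32_t row = 0; row < height; ++row) {
        for (uint32_t col = 0; col < width; ++col) {
            const uint8_t* p = rgb + (row * width + col) * 3;
            const int r = p[0], g = p[1], b = p[2];
            // BT.601 integer approximation, video range.
            y.data[row * y.fmt.stride + col] =
                (uint8_t)(((66 * r + 129 * g + 25 * b + 128) >> 8) + 16);
            // 2x2 chroma subsampling, sampled at the top-left pixel of each block.
            if ((row & 1) == 0 && (col & 1) == 0) {
                u.data[(row / 2) * u.fmt.stride + col / 2] =
                    (uint8_t)(((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128);
                v.data[(row / 2) * v.fmt.stride + col / 2] =
                    (uint8_t)(((112 * r - 94 * g - 18 * b + 128) >> 8) + 128);
            }
        }
    }
    y.bytesused = y.fmt.stride * height;
    u.bytesused = u.fmt.stride * (height / 2);
    v.bytesused = v.fmt.stride * (height / 2);
}

That obviously burns CPU on a per-pixel conversion for 7 cameras, which is exactly what I was hoping to avoid, hence the question.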
My question is: am I simply unable to use the Jetson Multimedia API for JPEG encoding if I have the data in this format? Additionally, I know the sensor outputs V4L2_PIX_FMT_SGRBG12 when I fetch the data from V4L2. I thought about sending that in instead, but I don't see it listed as a supported format for NvBuffer either. What are my options here?
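For context, this is roughly how the capture side is configured today (device node and resolution are placeholders, not our exact values):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <stdexcept>

// Hypothetical helper showing the capture-side configuration.
static int open_sgrbg12_capture() {
    int fd = open("/dev/video0", O_RDWR);            // placeholder device node
    struct v4l2_format fmt = {};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 1920;                        // placeholder resolution
    fmt.fmt.pix.height = 1080;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SGRBG12;  // 12-bit Bayer straight off the sensor
    fmt.fmt.pix.field = V4L2_FIELD_NONE;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {
        throw std::runtime_error("VIDIOC_S_FMT failed");
    }
    return fd;
}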