Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
AGX Xavier
• DeepStream Version
5.0
• JetPack Version (valid for Jetson only)
4.4
• TensorRT Version
7.1.3
• Issue Type (questions, new requirements, bugs)
question
I would like some advice on how to JPEG-encode (hardware-accelerated) the output of my pipeline using the nvjpegenc plugin and read the result from an appsink, so I can perform further operations on it.
The pipeline performs object detection and should publish both the bounding boxes from the NvDsObjectMeta and the JPEG-encoded camera frames over the network; a sketch of how I plan to read the boxes follows below.
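For context, this is roughly how I plan to extract the boxes, with a probe on the nvinfer src pad (a minimal, untested sketch; gstnvdsmeta.h is from the DeepStream SDK and the probe name is my own):

#include <gst/gst.h>
#include "gstnvdsmeta.h"

/* Attached to the src pad of the nvinfer element. */
static GstPadProbeReturn
infer_src_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
      /* rect_params holds the box in stream-muxer coordinates; this is
       * what I want to publish over the network. */
      g_print ("class=%d left=%.0f top=%.0f width=%.0f height=%.0f\n",
          obj->class_id, obj->rect_params.left, obj->rect_params.top,
          obj->rect_params.width, obj->rect_params.height);
    }
  }
  return GST_PAD_PROBE_OK;
}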
This is the relevant part of my pipeline:
appsrc ! nvvideoconvert ! video/x-raw(memory:NVMM) !
m.sink_0 nvstreammux name=m batch-size=1 width=2000 height=1500 enable-padding=0 !
nvinfer config-file-path=config_infer_primary_yoloV4.txt !
nvvideoconvert ! video/x-raw(memory:NVMM),width=640,height=480,format=NV12 ! nvjpegenc quality=90 ! appsink name=appsink_vis emit-signals=True sync=True
This pipeline throws a segmentation fault:
Thread 17 "gst" received signal SIGSEGV, Segmentation fault.
[gst-1] [Switching to Thread 0x7f70a85d60 (LWP 14117)]
[gst-1] 0x0000007f9414d33c in jpegTegraEncoderCompress ()
If I remove memory:NVMM from the nvvideoconvert caps just before nvjpegenc, the pipeline works, like this:
appsrc ! nvvideoconvert ! video/x-raw(memory:NVMM) !
m.sink_0 nvstreammux name=m batch-size=1 width=2000 height=1500 enable-padding=0 !
nvinfer config-file-path=config_infer_primary_yoloV4.txt !
nvvideoconvert ! video/x-raw,width=640,height=480,format=I420 ! nvjpegenc quality=90 ! appsink name=appsink_vis emit-signals=True sync=True
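With this working pipeline I consume the JPEGs from appsink_vis roughly like this (a minimal sketch; error handling is trimmed and the publishing step is only indicated by a comment):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

/* Connected to the "new-sample" signal of appsink_vis
 * (emit-signals=true is set in the pipeline). */
static GstFlowReturn
on_new_sample (GstAppSink *sink, gpointer user_data)
{
  GstSample *sample = gst_app_sink_pull_sample (sink);
  if (!sample)
    return GST_FLOW_ERROR;

  GstBuffer *buf = gst_sample_get_buffer (sample);
  GstMapInfo map;
  if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
    /* map.data / map.size is one complete JPEG image; this is where I
     * publish it over the network. */
    gst_buffer_unmap (buf, &map);
  }
  gst_sample_unref (sample);
  return GST_FLOW_OK;
}

/* Hooked up after building the pipeline:
 *   GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "appsink_vis");
 *   g_signal_connect (sink, "new-sample", G_CALLBACK (on_new_sample), NULL);
 */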
However, I would like to avoid buffer copies and use the dedicated JPEG block of my Jetson Xavier. According to the output of gst-inspect-1.0 nvjpegenc, the element should support memory:NVMM with NV12 format:
SINK template: 'sink'
  Availability: Always
  Capabilities:
    video/x-raw(memory:NVMM)
                format: { (string)I420, (string)NV12 }
                 width: [ 1, 2147483647 ]
                height: [ 1, 2147483647 ]
             framerate: [ 0/1, 2147483647/1 ]
    video/x-raw
                format: { (string)I420, (string)YV12, (string)YUY2, (string)UYVY, (string)Y41B, (string)Y42B, (string)YVYU, (string)Y444, (string)RGB, (string)BGR, (string)RGBx, (string)xRGB, (string)BGRx, (string)xBGR, (string)GRAY8 }
                 width: [ 1, 2147483647 ]
                height: [ 1, 2147483647 ]
             framerate: [ 0/1, 2147483647/1 ]
Could you give some guidance on how to integrate hardware-accelerated JPEG encoding into this pipeline?
If it doesn't work with nvjpegenc, could I instead perform the encoding directly in my appsink using NvJPEGEncoder::encodeFromFd from the jetson_multimedia_api? If there is demo code for doing this, a link would be appreciated.
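This is the rough, untested sketch I have in mind for that fallback: keep memory:NVMM caps on the appsink (dropping nvjpegenc from the pipeline), extract the dmabuf fd from the NvBufSurface and hand it to the hardware encoder. NvJpegEncoder.h is from the jetson_multimedia_api samples, nvbufsurface.h from DeepStream; I am not sure the fd/colorspace handling is exactly right, so please correct me:

#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include "nvbufsurface.h"   /* DeepStream: NvBufSurface */
#include "NvJpegEncoder.h"  /* jetson_multimedia_api: NvJPEGEncoder */

static NvJPEGEncoder *jpegenc = NULL;

static GstFlowReturn
encode_nvmm_sample (GstAppSink *sink, gpointer user_data)
{
  if (!jpegenc)
    jpegenc = NvJPEGEncoder::createJPEGEncoder ("jpegenc");

  GstSample *sample = gst_app_sink_pull_sample (sink);
  if (!sample)
    return GST_FLOW_ERROR;

  GstBuffer *buf = gst_sample_get_buffer (sample);
  GstMapInfo map;
  if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
    /* With memory:NVMM caps the mapped data is an NvBufSurface,
     * not raw pixels. */
    NvBufSurface *surf = (NvBufSurface *) map.data;
    int fd = (int) surf->surfaceList[0].bufferDesc;  /* dmabuf fd on Jetson */

    unsigned long out_size = 2 * 1024 * 1024;  /* generous upper bound */
    unsigned char *out_buf = new unsigned char[out_size];
    if (jpegenc->encodeFromFd (fd, JCS_YCbCr, &out_buf, out_size, 90) == 0) {
      /* out_buf / out_size now hold the JPEG; publish it here. */
    }
    delete[] out_buf;
    gst_buffer_unmap (buf, &map);
  }
  gst_sample_unref (sample);
  return GST_FLOW_OK;
}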