• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1.1
• TensorRT Version: 8.5.2.2
• Issue Type (questions, new requirements, bugs): Questions
Hi, I would like to integrate into this example code a function that saves a JPEG image of what is shown on the screen, i.e. the original image plus the DeepStream annotations, detected objects, etc.
Thank you very much for your reply. I am trying jpegenc, as I use an Orin Nano.
I create the elements like this:

```python
# Create jpegenc instance to encode frames to JPEG.
print("Creating jpegenc ")
jpegenc = Gst.ElementFactory.make("jpegenc", "jpegenc")
if not jpegenc:
    sys.stderr.write(" Could not create jpegenc \n")
jpegenc.set_property("quality", 50)

# Create filesink instance for JPEG saving.
print("Creating filesink ")
filesink = Gst.ElementFactory.make("filesink", "filesink")
if not filesink:
    sys.stderr.write(" Could not create filesink \n")

# Set the filesink output location.
filesink.set_property("location", "output.jpg")
```
My element linking order is as follows:
streammux → pgie → tracker → nvanalytics → tiler → nvvidconv → nvosd → jpegenc → filesink → sink
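For reference, linking a chain like that can be sketched with a small helper. This is only a sketch under the assumption that the elements above have already been created; `chain_pairs` and `link_chain` are my own names, not part of the example (and note that linking sources into streammux actually requires request pads, which this helper does not cover):

```python
def chain_pairs(elements):
    # Pairwise (upstream, downstream) couples for a linear chain:
    # [a, b, c] -> [(a, b), (b, c)].
    return list(zip(elements, elements[1:]))

def link_chain(pipeline, *elements):
    # Add every element to the pipeline, then link them in the order given.
    for e in elements:
        pipeline.add(e)
    for up, down in chain_pairs(elements):
        if not up.link(down):
            raise RuntimeError(
                "Failed to link %s -> %s" % (up.get_name(), down.get_name()))

# Hypothetical usage on the chain above:
# link_chain(pipeline, streammux, pgie, tracker, nvanalytics,
#            tiler, nvvidconv, nvosd, jpegenc, filesink)
```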
But it doesn’t show the screen output; at frame 5-6 it pauses without showing any error.
The last element in your chain is a filesink, so there is no screen output. And since the location is fixed to output.jpg, every frame overwrites the previous one, so only one image is kept.
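If the goal is one JPEG per frame rather than a single overwritten file, GStreamer's stock multifilesink element takes a printf-style location pattern and writes each buffer to its own file. A minimal sketch; the element name, function names, and pattern are my own, and the Gst import is deferred so the helper can be read off-device:

```python
def make_jpeg_multifilesink(pattern="frame_%05d.jpg"):
    # multifilesink expands the printf-style pattern with a running
    # buffer index, so each encoded JPEG lands in its own file instead
    # of overwriting output.jpg the way a plain filesink does.
    from gi.repository import Gst  # deferred import (assumption: Gst is initialized)
    sink = Gst.ElementFactory.make("multifilesink", "jpeg-sink")
    if not sink:
        raise RuntimeError("Could not create multifilesink")
    sink.set_property("location", pattern)
    return sink

def nth_filename(pattern, index):
    # The filename multifilesink would produce for a given buffer index.
    return pattern % index
```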
As there is no hardware encoder in the Orin Nano, you have to use a software encoder if needed, which consumes quite a lot of CPU in this scenario.
OK, I’ll try to grab the image with OpenCV, similar to the deepstream-imagedata-multistream-redaction example. As it will run only once every 30 to 60 seconds, it will not affect overall performance very much.
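What that OpenCV approach could look like, modeled on the pad-probe pattern in deepstream-imagedata-multistream-redaction. This is a sketch only: the probe would be attached to nvosd's sink pad, the upstream nvvideoconvert must deliver RGBA caps for pyds.get_nvds_buf_surface to work, and SAVE_EVERY / should_save are my own names; the heavy imports are deferred so the throttling helper can be used off-device:

```python
SAVE_EVERY = 30 * 30  # ~every 30 s at 30 fps (assumed frame rate)

def should_save(frame_number, every=SAVE_EVERY):
    # Throttle saving so the (software) JPEG encode runs only once
    # every 30-60 s, as discussed above.
    return frame_number % every == 0

def osd_sink_pad_buffer_probe(pad, info, u_data):
    # Deferred imports; on the Jetson these would normally sit at the
    # top of the file.
    import cv2
    import pyds
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        if should_save(frame_meta.frame_num):
            # get_nvds_buf_surface returns the annotated frame as an
            # RGBA numpy array; convert to BGR for cv2.imwrite.
            frame = pyds.get_nvds_buf_surface(hash(gst_buffer),
                                              frame_meta.batch_id)
            bgr = cv2.cvtColor(frame, cv2.COLOR_RGBA2BGR)
            cv2.imwrite("frame_%06d.jpg" % frame_meta.frame_num, bgr)
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

With this, the display sink stays at the end of the pipeline, so the screen output is kept and the JPEG is a side effect of the probe.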