• Hardware Platform (Jetson / GPU) : NVIDIA Jetson AGX Orin
• DeepStream Version : 6.3
• JetPack Version (valid for Jetson only) : 5
• TensorRT Version : 8.5.2
• Issue Type( questions, new requirements, bugs) : questions
I want to create such a pipeline:
- read h264 file
- crop the 1920x1080 frame to the region x = 256:976 and y = 600:1320 (a 720x720 ROI)
- resize it to 224x224 frame
- perform inference
- draw result on display
- save result to file
My question is: how can I crop the frame before inference and then resize it to the desired scale? Below is a fragment of my pipeline, and I don't know where this preprocessing step should be placed. Should I link an nvvideoconvert before pgie and set crop values, or should I create an nvdspreprocess element? If the latter, which parameters should I set? I can see a deepstream-preprocess-test example with an nvdspreprocess element, but I am not sure where it sets the crop, if it does at all.
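For reference, in the nvdspreprocess route the crop is expressed as an ROI in the element's config file rather than in the pipeline itself. In the deepstream-preprocess-test sample, the network input size goes in `processing-width`/`processing-height` under `[property]`, and the crop goes in `roi-params-src-0` under `[group-0]` as `left;top;width;height`. A sketch for the crop described above (the `network-input-shape`, `tensor-name`, and `custom-lib-path` values are assumptions that must be matched to the actual model and install path):

```
[property]
enable=1
target-unique-ids=1
processing-width=224
processing-height=224
network-input-order=0
network-input-shape=1;3;224;224
network-color-format=0
tensor-data-type=0
tensor-name=input_1
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[group-0]
src-ids=0
process-on-roi=1
roi-params-src-0=256;600;720;720
```

With this approach, nvdspreprocess is linked between streammux and pgie, and the nvinfer element should be told to consume the prepared tensor (in the sample this is done by setting the `input-tensor-meta` property on pgie to True and `input-tensor-from-meta=1` in the nvinfer config).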
...
# Create gstreamer elements
# Create Pipeline element that will form a connection of other elements
print("Creating Pipeline\n")
pipeline = Gst.Pipeline()
if not pipeline:
    sys.stderr.write("Unable to create Pipeline\n")
# Source element for reading from the file
print("Creating Source\n")
source = Gst.ElementFactory.make("filesrc", "file-source")
if not source:
    sys.stderr.write("Unable to create Source\n")
# Data format in the input file is elementary h264 stream, we need a parser
print("Creating parser\n")
parser = Gst.ElementFactory.make("h264parse", "h264-parser")
if not parser:
    sys.stderr.write("Unable to create parser\n")
# Use nvdec_h264 for hardware accelerated decode on GPU
print("Creating Decoder\n")
decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
if not decoder:
    sys.stderr.write("Unable to create Nvv4l2 Decoder\n")
# Create nvstreammux instance to form batches from one or more sources
print("Creating Streammux\n")
streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
if not streammux:
    sys.stderr.write("Unable to create NvStreamMux\n")
# Use nvinfer to run inferencing on decoder's output, behaviour of inferencing is set through config file
print("Creating Primary Inference\n")
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
if not pgie:
    sys.stderr.write("Unable to create pgie\n")
# Use converter to convert from NV12 to RGBA as required by nvosd
print("Creating nvvideoconvert\n")
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "converter")
if not nvvidconv:
    sys.stderr.write("Unable to create nvvidconv\n")
# Create OSD to draw on the converted RGBA buffer
print("Creating OSD\n")
nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
if not nvosd:
    sys.stderr.write("Unable to create nvosd\n")
...
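The simpler alternative being considered (an extra nvvideoconvert before pgie) would rely on the element's `src-crop` property, which per the Gst-nvvideoconvert documentation takes a `"left:top:width:height"` string; nvinfer then scales the cropped 720x720 buffer to the model's 224x224 input on its own, so no separate resize element should be needed. A minimal sketch translating the slice-style ranges from the question into that string (element placement here is an assumption, not verified on this pipeline):

```python
# Translate the slice-style crop (x 256:976, y 600:1320) into the
# "left:top:width:height" string expected by nvvideoconvert's src-crop.
x0, x1 = 256, 976   # horizontal range of the ROI
y0, y1 = 600, 1320  # vertical range of the ROI
src_crop = f"{x0}:{y0}:{x1 - x0}:{y1 - y0}"
print(src_crop)  # 256:600:720:720
```

In the pipeline this would be used as, e.g., `crop_conv = Gst.ElementFactory.make("nvvideoconvert", "crop-converter")` followed by `crop_conv.set_property("src-crop", src_crop)`, with `crop_conv` linked between the decoder and streammux (the name `crop_conv` is hypothetical).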