Can't encode video with DeepStream / GStreamer Python on Jetson Orin Nano

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Orin Nano
• DeepStream Version: 7.1
• JetPack Version (valid for Jetson only): 6.2.1 (L4T 36.4.4)
• TensorRT Version: 10.3.0.30
• NVIDIA GPU Driver Version (valid for GPU only): 540.4.0
• Issue Type( questions, new requirements, bugs): bugs

I'm working on face detection with two RetinaFace variants (MobileNet and ResNet backbones) in DeepStream. I've successfully implemented it with deepstream-app:

deepstream_retinaface.txt (2.1 KB)

config_infer_retinaface.txt (676 Bytes)

custom_parser_retinaface.cpp:

// To make the custom parser work with both RetinaFace variants, two versions of the
// generate_priors function are provided; IMAGE_SIZE must be changed to match the
// engine's input size. The rest of the code is unchanged.

#include <iostream>
#include <vector>
#include <cmath>
#include <algorithm>
#include <cstring>
#include <array>

#include "nvdsinfer_custom_impl.h"

#define NUM_LANDMARKS 5
#define LANDMARK_STRIDE 10   // 5 points -> 10 values

// RetinaFace prior settings
static const std::vector<std::vector<int>> MIN_SIZES = {
    {16, 32},
    {64, 128},
    {256, 512}
};
static const std::vector<int> STEPS = {8, 16, 32};
static const int IMAGE_SIZE = 840;

// // Generate priors (for 640x640 input)
// static bool generate_priors(std::vector<std::array<float,4>> &priors)
// {
//     priors.clear();
//     for (size_t s = 0; s < STEPS.size(); s++) {
//         int step = STEPS[s];
//         int feature_h = std::ceil(IMAGE_SIZE / step);
//         int feature_w = std::ceil(IMAGE_SIZE / step);
//         for (int i = 0; i < feature_h; i++) {
//             for (int j = 0; j < feature_w; j++) {
//                 for (size_t k = 0; k < MIN_SIZES[s].size(); k++) {
//                     int min_size = MIN_SIZES[s][k];
//                     float cx = (j + 0.5f) * step / IMAGE_SIZE;
//                     float cy = (i + 0.5f) * step / IMAGE_SIZE;
//                     float w = (float)min_size / IMAGE_SIZE;
//                     float h = (float)min_size / IMAGE_SIZE;
//                     priors.push_back({cx, cy, w, h});
//                 }
//             }
//         }
//     }
//     return true;
// }



// Generate priors (for 840x840 input)
static bool generate_priors(std::vector<std::array<float,4>> &priors)
{
    priors.clear();

    for (size_t s = 0; s < STEPS.size(); s++) {
        int step = STEPS[s];
        // CRITICAL: Calculate feature map size exactly as Python math.ceil()
        // int feature_h = std::ceil(IMAGE_SIZE / step);
        // Integer division in C++ rounds down. We need ceil.
        int feature_h = (IMAGE_SIZE + step - 1) / step;
        int feature_w = (IMAGE_SIZE + step - 1) / step;
        for (int i = 0; i < feature_h; i++) {
            for (int j = 0; j < feature_w; j++) {
                for (size_t k = 0; k < MIN_SIZES[s].size(); k++) {
                    int min_size = MIN_SIZES[s][k];

                    // Center coordinates
                    float cx = (j + 0.5f) * step / IMAGE_SIZE;
                    float cy = (i + 0.5f) * step / IMAGE_SIZE;
                    // Width and Height relative to image size
                    float w = (float)min_size / IMAGE_SIZE;
                    float h = (float)min_size / IMAGE_SIZE;
                    priors.push_back({cx, cy, w, h});
                }
            }
        }
    }
    return true;
}



// Decode bbox
static void decode_boxes(
    const float *bbox,
    const std::vector<std::array<float,4>> &priors,
    std::vector<std::array<float,4>> &out)
{
    out.resize(priors.size());
    for (size_t i = 0; i < priors.size(); i++) {
        float cx = priors[i][0];
        float cy = priors[i][1];
        float w  = priors[i][2];
        float h  = priors[i][3];

        float dx = bbox[i*4 + 0] * 0.1f;
        float dy = bbox[i*4 + 1] * 0.1f;
        float dw = bbox[i*4 + 2] * 0.2f;
        float dh = bbox[i*4 + 3] * 0.2f;

        float pred_cx = dx * w + cx;
        float pred_cy = dy * h + cy;
        float pred_w  = std::exp(dw) * w;
        float pred_h  = std::exp(dh) * h;

        float x1 = pred_cx - pred_w / 2.f;
        float y1 = pred_cy - pred_h / 2.f;
        float x2 = pred_cx + pred_w / 2.f;
        float y2 = pred_cy + pred_h / 2.f;

        out[i] = {x1, y1, x2, y2};
    }
}



// Decode 5-point landmarks
static void decode_landmarks(
    const float *landm,
    const std::vector<std::array<float,4>> &priors,
    std::vector<std::array<float,10>> &out)
{
    out.resize(priors.size());
    for (size_t i = 0; i < priors.size(); i++) {
        float cx = priors[i][0];
        float cy = priors[i][1];
        float w  = priors[i][2];
        float h  = priors[i][3];

        std::array<float,10> pts{};
        for (int p = 0; p < NUM_LANDMARKS; p++) {
            float lx = landm[i*LANDMARK_STRIDE + p*2 + 0] * 0.1f * w + cx;
            float ly = landm[i*LANDMARK_STRIDE + p*2 + 1] * 0.1f * h + cy;
            pts[p*2 + 0] = lx;
            pts[p*2 + 1] = ly;
        }
        out[i] = pts;
    }
}

// NMS
static void do_nms(
    const std::vector<std::array<float,5>> &dets,
    float thresh,
    std::vector<int> &keep)
{
    keep.clear();
    if (dets.empty()) return;

    std::vector<int> order(dets.size());
    for (size_t i = 0; i < dets.size(); i++) order[i] = (int)i;
    // sort by score desc
    std::sort(order.begin(), order.end(),
        [&](int a, int b){ return dets[a][4] > dets[b][4]; });

    std::vector<bool> removed(dets.size(), false);

    for (size_t _i = 0; _i < order.size(); _i++) {
        int i = order[_i];
        if (removed[i]) continue;

        keep.push_back(i);

        float x1 = dets[i][0];
        float y1 = dets[i][1];
        float x2 = dets[i][2];
        float y2 = dets[i][3];
        float area_i = (x2 - x1) * (y2 - y1);

        for (size_t _j = _i + 1; _j < order.size(); _j++) {
            int j = order[_j];
            if (removed[j]) continue;

            float xx1 = std::max(x1, dets[j][0]);
            float yy1 = std::max(y1, dets[j][1]);
            float xx2 = std::min(x2, dets[j][2]);
            float yy2 = std::min(y2, dets[j][3]);

            float w = std::max(0.0f, xx2 - xx1);
            float h = std::max(0.0f, yy2 - yy1);
            float inter = w * h;
            float area_j = (dets[j][2] - dets[j][0]) * (dets[j][3] - dets[j][1]);

            float ovr = inter / (area_i + area_j - inter);

            if (ovr >= thresh)
                removed[j] = true;
        }
    }
}



// Parser entry
extern "C" bool NvDsInferParseCustomRetinaFace(
    const std::vector<NvDsInferLayerInfo> &layersInfo,
    const NvDsInferNetworkInfo &networkInfo,
    const NvDsInferParseDetectionParams &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objects)
{
    const float *loc_data = nullptr;
    const float *conf_data = nullptr;
    const float *landm_data = nullptr;

    for (auto &layer : layersInfo) {
        if (strcmp(layer.layerName, "bbox") == 0)
            loc_data = (float*)layer.buffer;
        else if (strcmp(layer.layerName, "confidence") == 0)
            conf_data = (float*)layer.buffer;
        else if (strcmp(layer.layerName, "landmark") == 0)
            landm_data = (float*)layer.buffer;
    }

    if (!loc_data || !conf_data || !landm_data) {
        std::cerr << "Missing RetinaFace output tensors!" << std::endl;
        return false;
    }

    // 1. Generate priors
    std::vector<std::array<float,4>> priors;
    generate_priors(priors);

    int num_priors = priors.size();

    // 2. Decode boxes
    std::vector<std::array<float,4>> boxes;
    decode_boxes(loc_data, priors, boxes);

    // 3. Decode landmarks
    std::vector<std::array<float,10>> landms;
    decode_landmarks(landm_data, priors, landms);

    // 4. Filter + collect raw detections
    std::vector<std::array<float,5>> dets;
    for (int i = 0; i < num_priors; i++) {
        float score = conf_data[i*2 + 1];

        if (score < detectionParams.perClassThreshold[0])
            continue;

        float x1 = boxes[i][0] * networkInfo.width;
        float y1 = boxes[i][1] * networkInfo.height;
        float x2 = boxes[i][2] * networkInfo.width;
        float y2 = boxes[i][3] * networkInfo.height;

        dets.push_back({x1, y1, x2, y2, score});
    }

    // 5. NMS
    float nms_thresh = 0.5f;
    if (detectionParams.perClassPostclusterThreshold.size() > 0) {
        float t = detectionParams.perClassPostclusterThreshold[0];
        if (t > 0.0f && t < 1.0f) nms_thresh = t;
    }

    std::vector<int> keep;
    do_nms(dets, nms_thresh, keep);

    // 6. Output objects
    for (int idx : keep) {
        NvDsInferObjectDetectionInfo obj;
        obj.classId = 0;
        obj.detectionConfidence = dets[idx][4];
        obj.left = dets[idx][0];
        obj.top  = dets[idx][1];
        obj.width  = dets[idx][2] - dets[idx][0];
        obj.height = dets[idx][3] - dets[idx][1];

        objects.push_back(obj);
    }

    return true;
}

extern "C" bool NvDsInferInitialize()

{

std::cout << "Custom RetinaFace parser initialized!" << std::endl;

return true;

}



extern "C" void NvDsInferDeInitialize()

{

}


But I need to implement face recognition later, so I decided to switch to DeepStream / GStreamer Python. I kept the custom parser and the model-level config file "config_infer_retinaface.txt", and use a Python file as the pipeline-level config. The problem happens now: I'm trying to write the output file without a hardware encoder, since we don't have one on the Jetson Orin Nano, right? But I can't get it to work.
To be specific, let me give the code. At first I tried the code below; the program runs without any error in the log, but the window keeps freezing and the output video file is empty.

import sys
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstVideo", "1.0")
from gi.repository import Gst, GLib

# Initialize GStreamer
Gst.init(None)

# -------------------------
# USER CONFIG
# -------------------------
VIDEO_PATH = "/home/ubuntu/projects/faceRecognizer-main/demoVideos/unknown/demo_video_3.mp4"
OUTPUT_PATH = "out_retinaface.mp4"    # Use "" to disable saving file
DISPLAY = True
MODEL_CONFIG_PATH = "./config_infer_retinaface.txt"
MUXER_WIDTH = 1920
MUXER_HEIGHT = 1080
BATCH_PUSH_TIMEOUT = 33333  # 1 / SOURCE_FPS * 1000 * 1000 (microseconds)

# -------------------------
# BUS CALLBACK
# -------------------------
def bus_call(bus, message, loop):
    msg_type = message.type
    if msg_type == Gst.MessageType.EOS:
        print("End of stream")
        loop.quit()
    elif msg_type == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        print(f"Error: {err} — {debug}")
        loop.quit()
    return True

# -------------------------
# MAIN PIPELINE
# -------------------------
def main():

    loop = GLib.MainLoop()
    pipeline = Gst.Pipeline.new("ds-retinaface-pipeline")

    # Source
    source = Gst.ElementFactory.make("filesrc", "file-source")
    source.set_property("location", VIDEO_PATH)

    # Demux
    demux = Gst.ElementFactory.make("qtdemux", "demuxer")

    # Parser + decoder
    parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "video-decoder")

    # Streammux
    streammux = Gst.ElementFactory.make("nvstreammux", "streammux")
    streammux.set_property("width", MUXER_WIDTH)
    streammux.set_property("height", MUXER_HEIGHT)
    streammux.set_property("batch-size", 1)
    streammux.set_property("batched-push-timeout", BATCH_PUSH_TIMEOUT)

    # PGIE (RetinaFace)
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    pgie.set_property("config-file-path", MODEL_CONFIG_PATH)
    pgie.set_property("gpu-id", 0)

    # Converter + OSD
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "nvvideo-converter")
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreen-display")

    # Add all to pipeline
    for elem in [source, demux, parser, decoder, streammux, pgie, nvvidconv, nvosd]:
        pipeline.add(elem)

    # Connect demux dynamically
    def on_pad_added(demux, pad):
        pad_caps = pad.query_caps(None).to_string()
        if "video" in pad_caps:
            pad.link(parser.get_static_pad("sink"))

    demux.connect("pad-added", on_pad_added)

    # Link fixed components
    source.link(demux)
    parser.link(decoder)

    # decoder -> streammux
    sinkpad = streammux.get_request_pad("sink_0")
    decoder_srcpad = decoder.get_static_pad("src")
    decoder_srcpad.link(sinkpad)

    # Inference pipeline
    streammux.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)

    # -----------------------------
    # SINK SETUP
    # -----------------------------
    if DISPLAY and OUTPUT_PATH:
        # Tee
        tee = Gst.ElementFactory.make("tee", "tee")
        q1 = Gst.ElementFactory.make("queue", "q-display")
        q2 = Gst.ElementFactory.make("queue", "q-file")

        # Display sink
        sink_display = Gst.ElementFactory.make("nveglglessink", "display")
        sink_display.set_property("sync", 0)

        # File sink (CPU encoder setup)
        videoconv_cpu = Gst.ElementFactory.make("videoconvert", "videoconv-cpu")  # Convert NVMM -> CPU
        encoder_cpu = Gst.ElementFactory.make("x264enc", "encoder-cpu")  # CPU encoder
        encoder_cpu.set_property("bitrate", 4000000)
        encoder_cpu.set_property("speed-preset", 1)

        muxer = Gst.ElementFactory.make("qtmux", "muxer")
        sink_file = Gst.ElementFactory.make("filesink", "file-sink")
        sink_file.set_property("location", OUTPUT_PATH)

        pipeline.add(tee)
        pipeline.add(q1)
        pipeline.add(q2)
        pipeline.add(sink_display)
        pipeline.add(videoconv_cpu)  # ADDED
        pipeline.add(encoder_cpu)    # REPLACES nvv4l2h264enc
        pipeline.add(muxer)
        pipeline.add(sink_file)

        nvosd.link(tee)
        # branch 1: display
        tee.link(q1)
        q1.link(sink_display)
        # branch 2: save file
        tee.link(q2)
        q2.link(videoconv_cpu)
        videoconv_cpu.link(encoder_cpu)
        encoder_cpu.link(muxer)
        muxer.link(sink_file)

    elif DISPLAY:
        sink_display = Gst.ElementFactory.make("nveglglessink", "display")
        sink_display.set_property("sync", 0)
        pipeline.add(sink_display)
        nvosd.link(sink_display)

    elif OUTPUT_PATH:
        # File sink (CPU encoder setup)
        videoconv_cpu = Gst.ElementFactory.make("videoconvert", "videoconv-cpu")
        encoder_cpu = Gst.ElementFactory.make("x264enc", "encoder-cpu")
        encoder_cpu.set_property("bitrate", 4000000)
        encoder_cpu.set_property("speed-preset", 1)

        muxer = Gst.ElementFactory.make("qtmux", "muxer")
        sink_file = Gst.ElementFactory.make("filesink", "file-sink")
        sink_file.set_property("location", OUTPUT_PATH)

        pipeline.add(videoconv_cpu)
        pipeline.add(encoder_cpu)
        pipeline.add(muxer)
        pipeline.add(sink_file)

        nvosd.link(videoconv_cpu)
        videoconv_cpu.link(encoder_cpu)
        encoder_cpu.link(muxer)
        muxer.link(sink_file)

    # -----------------------------
    # BUS
    # -----------------------------
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Run
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    pipeline.set_state(Gst.State.NULL)

if __name__ == "__main__":
    main()

Then I tried the code below. In general, sometimes it runs through the whole video and displays and records it well; other times it runs smoothly and then suddenly stops halfway: the display window closes and the recorded video is corrupted and cannot be opened. There are also some errors in the log, which I give below:

import sys
import gi
import os
import signal
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib, GObject

# Optional: import pyds if you want to use DeepStream metadata functions later.
try:
    import pyds
except Exception:
    pyds = None

Gst.init(None)

# ---------------- USER CONFIG ----------------
VIDEO_PATH = "/home/ubuntu/projects/faceRecognizer-main/demoVideos/unknown/demo_video_3.mp4"
# MODEL_CONFIG_PATH must include parse-bbox-func-name + custom-lib-path if you're using your C++ parser.
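# For example (illustrative only; the real values live in the attached config_infer_retinaface.txt):
#   parse-bbox-func-name=NvDsInferParseCustomRetinaFace
#   custom-lib-path=/path/to/libcustom_parser_retinaface.so   # hypothetical .so name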
MODEL_CONFIG_PATH = "/home/ubuntu/retinaface_deepstream/config_infer_retinaface.txt"

# Output file (leave "" to disable saving)
OUTPUT_PATH = "/home/ubuntu/projects/faceRecognizer-main/out_retinaface_python.mp4"

# Display toggle
DISPLAY = True

# Streammux settings (choose what you used in your pipeline-level config)
MUXER_WIDTH = 552   # use your desired streammux width
MUXER_HEIGHT = 1080
BATCH_SIZE = 1
BATCH_PUSH_TIMEOUT = 33000  # in microseconds

# GPU id
GPU_ID = 0

# ---------------------------------------------

def bus_call(bus, message, loop):
    t = message.type
    if t == Gst.MessageType.EOS:
        print("End-of-stream")
        loop.quit()
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        print("Error from element %s: %s" % (message.src.get_name(), err))
        print("Debug info: %s" % debug)
        loop.quit()
    return True

def create_element(factoryname, name=None, required=True):
    el = Gst.ElementFactory.make(factoryname, name)
    if required and el is None:
        raise RuntimeError(f"Could not create element {factoryname} ({name})")
    return el

def main():
    loop = GLib.MainLoop()

    # Build pipeline
    pipeline = Gst.Pipeline.new("ds-retinaface-pipeline-py")
    if not pipeline:
        print("Unable to create pipeline")
        return 1

    # Source + demux + parse + decode
    source = create_element("filesrc", "file-source")
    source.set_property("location", VIDEO_PATH)

    demux = create_element("qtdemux", "demuxer")
    h264parse = create_element("h264parse", "h264-parser")
    decoder = create_element("nvv4l2decoder", "nvv4l2-decoder")

    # Streammux (collect frames into batch for inference)
    streammux = create_element("nvstreammux", "stream-muxer")
    streammux.set_property("width", MUXER_WIDTH)
    streammux.set_property("height", MUXER_HEIGHT)
    streammux.set_property("batch-size", BATCH_SIZE)
    streammux.set_property("batched-push-timeout", BATCH_PUSH_TIMEOUT)
    streammux.set_property("live-source", False)

    # Primary inference (RetinaFace) - using your model-level config with custom parser
    pgie = create_element("nvinfer", "primary-inference")
    pgie.set_property("config-file-path", MODEL_CONFIG_PATH)
    pgie.set_property("gpu-id", GPU_ID)

    # Convert + OSD (NVMM pipeline for display)
    nvvidconv_post = create_element("nvvideoconvert", "nvvidconv-post")
    nvosd = create_element("nvdsosd", "nv-onscreendisplay")

    # Elements for display branch
    sink_display = None
    if DISPLAY:
        sink_display = create_element("nveglglessink", "nv-display")
        sink_display.set_property("sync", False)

    # Elements for file save branch (software encode path)
    # We will force NVMM -> System memory before x264enc:
    # nvosd (NVMM) -> queue -> nvvidconv -> capsfilter(video/x-raw, format=I420, memory:System) -> queue -> x264enc -> qtmux -> filesink
    encoder = None
    muxer = None
    sink_file = None
    if OUTPUT_PATH:
        # Use nvvidconv to do format conversion and memory:System
        nvvidconv_to_cpu = create_element("nvvideoconvert", "nvvidconv-to-cpu")
        # capsfilter to ensure I420 and System memory
        caps = Gst.Caps.from_string("video/x-raw, format=I420, width=%d, height=%d, memory:System" % (MUXER_WIDTH, MUXER_HEIGHT))
        capsfilter = create_element("capsfilter", "capsfilter-to-cpu")
        capsfilter.set_property("caps", caps)

        # Make a queue before software encoder to isolate thread pools
        queue_encode = create_element("queue", "queue-encode")
        # Use x264enc (software) — configure properties as you need (bitrate, tune, speed-preset...)
        encoder = create_element("x264enc", "x264-encoder")
        # tune zero-latency and speed options may help
        encoder.set_property("bitrate", 4000)  # kbps
        encoder.set_property("speed-preset", "ultrafast")
        encoder.set_property("tune", "zerolatency")

        # Optionally parse h264 stream (not always required)
        h264parse_out = create_element("h264parse", "h264parse-out")

        # Container muxer
        muxer = create_element("qtmux", "qtmux")
        sink_file = create_element("filesink", "file-sink")
        sink_file.set_property("location", OUTPUT_PATH)

    # Add elements to pipeline
    elems = [source, demux, h264parse, decoder, streammux, pgie, nvvidconv_post, nvosd]
    if DISPLAY:
        elems.append(sink_display)
    if OUTPUT_PATH:
        elems += [nvvidconv_to_cpu, capsfilter, queue_encode, encoder, h264parse_out, muxer, sink_file]

    for e in elems:
        pipeline.add(e)

    # Link source->demux dynamically
    def demux_pad_added(demux, pad):
        caps = pad.query_caps(None)
        s = caps.to_string()
        if "video" in s:
            sink_pad = h264parse.get_static_pad("sink")
            if not pad.is_linked() and sink_pad and not sink_pad.is_linked():
                pad.link(sink_pad)
    demux.connect("pad-added", demux_pad_added)

    # static links:
    source.link(demux)
    h264parse.link(decoder)

    # decoder -> streammux (request sink pad)
    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        raise RuntimeError("Unable to get streammux sink_0 pad")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        raise RuntimeError("Unable to get decoder src pad")
    ret = srcpad.link(sinkpad)
    if ret != Gst.PadLinkReturn.OK:
        raise RuntimeError("Failed to link decoder src to streammux sink_0")

    # streammux -> pgie -> nvvidconv_post -> nvosd
    streammux.link(pgie)
    pgie.link(nvvidconv_post)
    nvvidconv_post.link(nvosd)

    # create tee after nvosd
    if DISPLAY or OUTPUT_PATH:
        tee = create_element("tee", "tee-postosd")
        pipeline.add(tee)
        nvosd.link(tee)

    # Branch 1: display (NVMM path)
    if DISPLAY:
        queue_disp = create_element("queue", "queue-display")
        pipeline.add(queue_disp)
        tee.link(queue_disp)
        queue_disp.link(sink_display)

    # Branch 2: file encode (convert to CPU memory then software encode)
    if OUTPUT_PATH:
        queue_enc_in = create_element("queue", "queue-enc-in")
        pipeline.add(queue_enc_in)
        tee.link(queue_enc_in)

        # Link branch: queue_enc_in -> nvvidconv_to_cpu -> capsfilter -> queue_encode -> encoder -> h264parse_out -> muxer -> filesink
        queue_enc_in.link(nvvidconv_to_cpu)
        nvvidconv_to_cpu.link(capsfilter)
        capsfilter.link(queue_encode)
        queue_encode.link(encoder)
        # Some x264enc versions output bytestream requiring h264parse
        encoder.link(h264parse_out)
        h264parse_out.link(muxer)
        muxer.link(sink_file)

    # Bus / messages
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Handle SIGINT gracefully
    def sigint_handler(sig, frame):
        print("SIGINT received: stopping pipeline")
        pipeline.send_event(Gst.Event.new_eos())
    signal.signal(signal.SIGINT, sigint_handler)

    # Start
    print("Starting pipeline...")
    pipeline.set_state(Gst.State.PLAYING)

    try:
        loop.run()
    except Exception as e:
        print("Exception in main loop:", e)
    finally:
        pipeline.set_state(Gst.State.NULL)
        print("Pipeline stopped, cleaned up.")

    return 0

if __name__ == "__main__":
    sys.exit(main())

Bug 1:

(venv) ubuntu@tegra-ubuntu:~/projects/deepstream-python$ python3 deepstream-face_detection.py 
/home/ubuntu/projects/deepstream-python/deepstream-face_detection.py:159: DeprecationWarning: Gst.Element.get_request_pad is deprecated
  sinkpad = streammux.get_request_pad("sink_0")
Starting pipeline...

Using winsys: x11 
Opening in BLOCKING MODE 
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:00:00.327408789 86244 0xaaaaef57d130 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/home/ubuntu/models/engine/retinaface_engine/FaceDetector_MobileNet_Dynamic_640_b32.engine
INFO: [FullDims Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x640x640       min: 1x3x640x640     opt: 32x3x640x640    Max: 32x3x640x640    
1   OUTPUT kFLOAT bbox            16800x4         min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT confidence      16800x2         min: 0               opt: 0               Max: 0               
3   OUTPUT kFLOAT landmark        16800x10        min: 0               opt: 0               Max: 0               

0:00:00.327677821 86244 0xaaaaef57d130 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /home/ubuntu/models/engine/retinaface_engine/FaceDetector_MobileNet_Dynamic_640_b32.engine
0:00:00.352690201 86244 0xaaaaef57d130 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/home/ubuntu/projects/deepstream-python/config_infer_retinaface.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261 
0:00:11.403036272 86244 0xaaaaeec98060 WARN                 nvinfer gstnvinfer.cpp:2423:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:11.403092689 86244 0xaaaaeec98060 WARN                 nvinfer gstnvinfer.cpp:2423:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason error (-5)
Error from element primary-inference: gst-stream-error-quark: Internal data stream error. (1)
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2423): gst_nvinfer_output_loop (): /GstPipeline:ds-retinaface-pipeline-py/GstNvInfer:primary-inference:
streaming stopped, reason error (-5)
nvstreammux: Successfully handled EOS for source_id=0

(python3:86244): GStreamer-CRITICAL **: 14:57:24.512: gst_mini_object_unref: assertion '(g_atomic_int_get (&mini_object->lockstate) & LOCK_MASK) < 4' failed
Pipeline stopped, cleaned up.
(venv) ubuntu@tegra-ubuntu:~/projects/deepstream-python$ 

Bug 2:

(venv) ubuntu@tegra-ubuntu:~/projects/deepstream-python$ gst_debug=5 python3 deepstream-face_detection.py 
/home/ubuntu/projects/deepstream-python/deepstream-face_detection.py:159: DeprecationWarning: Gst.Element.get_request_pad is deprecated
  sinkpad = streammux.get_request_pad("sink_0")
Starting pipeline...

Using winsys: x11 
Opening in BLOCKING MODE 
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:00:00.303223457 107768 0xaaab0c1b3330 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/home/ubuntu/models/engine/retinaface_engine/FaceDetector_MobileNet_Dynamic_640_b32.engine
INFO: [FullDims Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x640x640       min: 1x3x640x640     opt: 32x3x640x640    Max: 32x3x640x640    
1   OUTPUT kFLOAT bbox            16800x4         min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT confidence      16800x2         min: 0               opt: 0               Max: 0               
3   OUTPUT kFLOAT landmark        16800x10        min: 0               opt: 0               Max: 0               

0:00:00.303377669 107768 0xaaab0c1b3330 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /home/ubuntu/models/engine/retinaface_engine/FaceDetector_MobileNet_Dynamic_640_b32.engine
0:00:00.329360490 107768 0xaaab0c1b3330 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/home/ubuntu/retinaface_deepstream/config_infer_retinaface.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261 
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:438: => Failed in mem copy

0:01:00.022592279 107768 0xaaab0d72b700 ERROR                nvinfer gstnvinfer.cpp:1267:get_converted_buffer:<primary-inference> cudaMemset2DAsync failed with error cudaErrorIllegalAddress while converting buffer
0:01:00.022648985 107768 0xaaab0d72b700 WARN                 nvinfer gstnvinfer.cpp:1576:gst_nvinfer_process_full_frame:<primary-inference> error: Buffer conversion failed
Error from element primary-inference: gst-stream-error-quark: Buffer conversion failed (1)
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1576): gst_nvinfer_process_full_frame (): /GstPipeline:ds-retinaface-pipeline-py/GstNvInfer:primary-inference
0:01:00.056651745 107768 0xaaab0d72b700 ERROR                nvinfer gstnvinfer.cpp:1267:get_converted_buffer:<primary-inference> cudaMemset2DAsync failed with error cudaErrorIllegalAddress while converting buffer
0:01:00.056708899 107768 0xaaab0d72b700 WARN                 nvinfer gstnvinfer.cpp:1576:gst_nvinfer_process_full_frame:<primary-inference> error: Buffer conversion failed
0:01:00.056814854 107768 0xaaab0d72b700 ERROR                nvinfer gstnvinfer.cpp:1267:get_converted_buffer:<primary-inference> cudaMemset2DAsync failed with error cudaErrorIllegalAddress while converting buffer
0:01:00.056843303 107768 0xaaab0d72b700 WARN                 nvinfer gstnvinfer.cpp:1576:gst_nvinfer_process_full_frame:<primary-inference> error: Buffer conversion failed
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
(venv) ubuntu@tegra-ubuntu:~/projects/deepstream-python$ 

(venv) ubuntu@tegra-ubuntu:~/projects/deepstream-python$ GST_DEBUG=3 python3 deepstream-face_detection.py 
0:00:00.115030181 127722 0xaaaabfb2d2f0 WARN               structure gststructure.c:2250:gst_structure_parse_field: missing assignment operator in the field, str=memory:System
0:00:00.115088678 127722 0xaaaabfb2d2f0 WARN               structure gststructure.c:2350:priv_gst_structure_parse_fields: Failed to parse field, r=memory:System
/home/ubuntu/projects/deepstream-python/deepstream-face_detection.py:159: DeprecationWarning: Gst.Element.get_request_pad is deprecated
  sinkpad = streammux.get_request_pad("sink_0")
Starting pipeline...

Using winsys: x11 
Opening in BLOCKING MODE 
0:00:00.213501112 127722 0xaaaabfb2d2f0 WARN                    v4l2 gstv4l2object.c:4682:gst_v4l2_object_probe_caps:<nvv4l2-decoder:src> Failed to probe pixel aspect ratio with VIDIOC_CROPCAP: Unknown error -1
0:00:00.215119362 127722 0xaaaabf616980 WARN              aggregator gstaggregator.c:2099:gst_aggregator_query_latency_unlocked:<qtmux> Latency query failed
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:00:00.291660279 127722 0xaaaabfb2d2f0 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/home/ubuntu/models/engine/retinaface_engine/FaceDetector_MobileNet_Dynamic_640_b32.engine
INFO: [FullDims Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x640x640       min: 1x3x640x640     opt: 32x3x640x640    Max: 32x3x640x640    
1   OUTPUT kFLOAT bbox            16800x4         min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT confidence      16800x2         min: 0               opt: 0               Max: 0               
3   OUTPUT kFLOAT landmark        16800x10        min: 0               opt: 0               Max: 0               

0:00:00.291784762 127722 0xaaaabfb2d2f0 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /home/ubuntu/models/engine/retinaface_engine/FaceDetector_MobileNet_Dynamic_640_b32.engine
0:00:00.316514370 127722 0xaaaabfb2d2f0 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/home/ubuntu/retinaface_deepstream/config_infer_retinaface.txt sucessfully
0:00:00.317081233 127722 0xaaaabfb2d2f0 WARN                 basesrc gstbasesrc.c:3688:gst_base_src_start_complete:<file-source> pad not activated yet
0:00:00.317655616 127722 0xaaaabfb656a0 WARN                 qtdemux qtdemux_types.c:249:qtdemux_type_get: unknown QuickTime node type sgpd
0:00:00.317683361 127722 0xaaaabfb656a0 WARN                 qtdemux qtdemux_types.c:249:qtdemux_type_get: unknown QuickTime node type sbgp
0:00:00.317750467 127722 0xaaaabfb656a0 WARN                 qtdemux qtdemux.c:3121:qtdemux_parse_trex:<demuxer> failed to find fragment defaults for stream 1
0:00:00.317906279 127722 0xaaaabfb656a0 WARN                 qtdemux qtdemux.c:3121:qtdemux_parse_trex:<demuxer> failed to find fragment defaults for stream 2
NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261 
0:00:00.429077255 127722 0xaaaabfb656a0 WARN                    v4l2 gstv4l2object.c:4682:gst_v4l2_object_probe_caps:<nvv4l2-decoder:src> Failed to probe pixel aspect ratio with VIDIOC_CROPCAP: Unknown error -1
0:00:00.435811895 127722 0xaaaabfb656a0 WARN            v4l2videodec gstv4l2videodec.c:2297:gst_v4l2_video_dec_decide_allocation:<nvv4l2-decoder> Duration invalid, not setting latency
0:00:00.464161566 127722 0xaaaabfb656a0 WARN          v4l2bufferpool gstv4l2bufferpool.c:1130:gst_v4l2_buffer_pool_start:<nvv4l2-decoder:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:00.493687683 127722 0xaaaac19c1b60 WARN          v4l2bufferpool gstv4l2bufferpool.c:1607:gst_v4l2_buffer_pool_dqbuf:<nvv4l2-decoder:pool:src> Driver should never set v4l2_buffer.field to ANY
0:00:00.584235239 127722 0xaaaabf616980 FIXME               basesink gstbasesink.c:3395:gst_base_sink_default_event:<file-sink> stream-start event without group-id. Consider implementing group-id handling in the upstream elements
0:00:00.634210852 127722 0xaaaabf616980 FIXME             aggregator gstaggregator.c:1410:gst_aggregator_aggregate_func:<qtmux> Subclass should call gst_aggregator_selected_samples() from its aggregate implementation.
0:00:06.941681687 127722 0xaaaabf616920 ERROR            egladaption gstegladaptation.c:363:got_gl_error: GL ERROR: glEGLImageTargetTexture2DOES returned 0x0502
0:00:06.941823643 127722 0xaaaabf616920 ERROR          nveglglessink gsteglglessink.c:2154:gst_eglglessink_upload:<nv-display> Failed to upload texture
0:00:06.944127063 127722 0xaaaabfb65460 WARN                 nvinfer gstnvinfer.cpp:2423:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:06.944175545 127722 0xaaaabfb65460 WARN                 nvinfer gstnvinfer.cpp:2423:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason error (-5)
Error from element primary-inference: gst-stream-error-quark: Internal data stream error. (1)
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2423): gst_nvinfer_output_loop (): /GstPipeline:ds-retinaface-pipeline-py/GstNvInfer:primary-inference:
streaming stopped, reason error (-5)
0:00:06.955896748 127722 0xaaaabfb656a0 WARN                 qtdemux qtdemux.c:6760:gst_qtdemux_loop:<demuxer> error: Internal data stream error.
0:00:06.956044624 127722 0xaaaabfb656a0 WARN                 qtdemux qtdemux.c:6760:gst_qtdemux_loop:<demuxer> error: streaming stopped, reason error (-5)
nvstreammux: Successfully handled EOS for source_id=0

(python3:127722): GStreamer-CRITICAL **: 16:13:40.947: gst_mini_object_unref: assertion '(g_atomic_int_get (&mini_object->lockstate) & LOCK_MASK) < 4' failed
Pipeline stopped, cleaned up.
(venv) ubuntu@tegra-ubuntu:~/projects/deepstream-python$ 

(venv) ubuntu@tegra-ubuntu:~/projects/deepstream-python$ GST_DEBUG=:nvinfer:3 python3 deepstream-face_detection.py 
/home/ubuntu/projects/deepstream-python/deepstream-face_detection.py:159: DeprecationWarning: Gst.Element.get_request_pad is deprecated
  sinkpad = streammux.get_request_pad("sink_0")
Starting pipeline...

Using winsys: x11 
Opening in BLOCKING MODE 
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:00:00.308702580 129138 0xaaaaff42e930 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/home/ubuntu/models/engine/retinaface_engine/FaceDetector_MobileNet_Dynamic_640_b32.engine
INFO: [FullDims Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x640x640       min: 1x3x640x640     opt: 32x3x640x640    Max: 32x3x640x640    
1   OUTPUT kFLOAT bbox            16800x4         min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT confidence      16800x2         min: 0               opt: 0               Max: 0               
3   OUTPUT kFLOAT landmark        16800x10        min: 0               opt: 0               Max: 0               

0:00:00.308832632 129138 0xaaaaff42e930 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /home/ubuntu/models/engine/retinaface_engine/FaceDetector_MobileNet_Dynamic_640_b32.engine
0:00:00.333184098 129138 0xaaaaff42e930 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/home/ubuntu/retinaface_deepstream/config_infer_retinaface.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261 
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:438: => Failed in mem copy

[cuOSD Error] at /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/cuda/cuosd_kernel.cu:1072 : Launch kernel (render_elements_kernel) failed, code = 700CUDA Runtime error cudaPeekAtLastError() # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/cuosd.cpp:968
0:00:30.014167083 129138 0xaaab009a6b00 ERROR                nvinfer gstnvinfer.cpp:1267:get_converted_buffer:<primary-inference> cudaMemset2DAsync failed with error cudaErrorIllegalAddress while converting buffer
0:00:30.014432722 129138 0xaaab009a6b00 WARN                 nvinfer gstnvinfer.cpp:1576:gst_nvinfer_process_full_frame:<primary-inference> error: Buffer conversion failed
Error from element primary-inference: gst-stream-error-quark: Buffer conversion failed (1)
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1576): gst_nvinfer_process_full_frame (): /GstPipeline:ds-retinaface-pipeline-py/GstNvInfer:primary-inference
ERROR: Failed to make stream wait on event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:00:30.020030248 129138 0xaaaafeb4a0c0 WARN                 nvinfer gstnvinfer.cpp:1420:gst_nvinfer_input_queue_loop:<primary-inference> error: Failed to queue input batch for inferencing
nvstreammux: Successfully handled EOS for source_id=0
Unable to set device in gst_nvstreammux_src_collect_buffers
Unable to set device in gst_nvstreammux_src_collect_buffers
(venv) ubuntu@tegra-ubuntu:~/projects/deepstream-python$ 

You are right, the Orin Nano doesn’t have any hardware units for video encoding (NVENC).

I think one of your errors is that you’re using videoconvert to convert from NVMM to regular memory. videoconvert can’t do that, you need to use nvvideoconvert instead.

You can use a software encoder instead. First, please use "gst-inspect-1.0 x264enc" to check whether x264enc is available; in a Docker container, please run the user_additional_install.sh script. Second, please refer to the following sample that uses x264enc.

gst-launch-1.0  filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder  ! nvvideoconvert ! x264enc ! filesink location=test.264
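For reference, here is a rough Python sketch of the same software-encode branch, adapted to the scripts above (the pipeline and nvosd variables are assumed from those scripts, and element names are illustrative). Note that plain "video/x-raw" caps already mean system memory, so the "memory:System" field that GStreamer failed to parse in the GST_DEBUG=3 log earlier is not needed:

nvconv_cpu = Gst.ElementFactory.make("nvvideoconvert", "nvconv-to-cpu")   # NVMM -> system memory
caps_cpu = Gst.ElementFactory.make("capsfilter", "caps-to-cpu")
caps_cpu.set_property("caps", Gst.Caps.from_string("video/x-raw, format=I420"))
encoder = Gst.ElementFactory.make("x264enc", "sw-encoder")
encoder.set_property("bitrate", 4000)        # x264enc bitrate is in kbit/s
encoder.set_property("speed-preset", 1)      # 1 = ultrafast
parser_out = Gst.ElementFactory.make("h264parse", "h264parse-out")
muxer = Gst.ElementFactory.make("qtmux", "mp4-mux")
sink_file = Gst.ElementFactory.make("filesink", "file-sink")
sink_file.set_property("location", "out_retinaface.mp4")

for e in (nvconv_cpu, caps_cpu, encoder, parser_out, muxer, sink_file):
    pipeline.add(e)

nvosd.link(nvconv_cpu)          # or tee ! queue ! nvconv_cpu when also displaying
nvconv_cpu.link(caps_cpu)
caps_cpu.link(encoder)
encoder.link(parser_out)
parser_out.link(muxer)
muxer.link(sink_file)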

@miguel.taylor
I had already used nvvideoconvert in the second version of the code.
Also, I have just changed videoconvert to nvvideoconvert in the line below from the first version; nothing changed, the video still freezes.
videoconv_cpu = Gst.ElementFactory.make("videoconvert", "videoconv-cpu")

@fanzh

This is what I got after checking whether x264enc is available:

ubuntu@tegra-ubuntu:~$ gst-inspect-1.0 x264enc

Factory Details:

  Rank                     primary (256)

  Long-name                x264 H.264 Encoder

  Klass                    Codec/Encoder/Video

  Description              libx264-based H.264 video encoder

  Author                   Josef Zlomek <josef.zlomek@itonis.tv>, Mark Nauwelaerts <mnauw@users.sf.net>




Plugin Details:

  Name                     x264

  Description              libx264-based H.264 encoder plugin

  Filename                 /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstx264.so

  Version                  1.20.1

  License                  GPL

  Source module            gst-plugins-ugly

  Source release date      2022-03-14

  Binary package           GStreamer Ugly Plugins (Ubuntu)

  Origin URL               https://launchpad.net/distros/ubuntu/+source/gst-plugins-ugly1.0



GObject

 +----GInitiallyUnowned

...

Also, I have just tried your sample: everything ran successfully and I received an output file, even when I tried with test.mp4.

If you run the cmd in my last comment, can you get a valid h264 file? If so, you can compare the x264enc usage in the code and in the cmd. Regarding the error "Failed in mem copy", please refer to this FAQ to fix it.

@fanzh

  1. Yes, I can get a .h264 file, but how do we know whether it is valid or not? I tried to output .mp4 as I said before; is that meaningful?
  2. I am trying to use the window display and write the output file at the same time; maybe something is wrong with the synchronization? If I just use the window display, everything is fine.
  3. Your link is wrong, I think.

  1. It is a raw H.264 encoded stream; you can use ffplay or other players to play it. Since MP4 is based on H.264 data, you can test with the H.264 file first.
  2. For now, please test with the H.264 file only.
  3. I have corrected the link.

@fanzh

  1. The output .264 is valid; I have played it successfully.
  2. So now I should modify the code to only write the output video first, and add the window display at the same time later, right? But should I use .264 or .mp4, and why?
  3. Okay, let me check the link.

If generating H.264 works, you can output an MP4 file, because all players support this format.
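For example, the earlier test command can be extended with h264parse and qtmux to write an MP4 directly (a sketch based on that command; the -e flag makes gst-launch send EOS so the MP4 is finalized properly if you stop it early):

gst-launch-1.0 -e filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvideoconvert ! x264enc ! h264parse ! qtmux ! filesink location=test.mp4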

@fanzh

I have just tested with only the file output. It works, but I still sometimes get the "Failed in mem copy" error, even though I followed the link and set set_property("copy-hw", 2) on nvvideoconvert. There are also some issues in the log, like this:

(venv) ubuntu@tegra-ubuntu:~/projects/deepstream-python$ python3 deepstream-face_detection.py 
/home/ubuntu/projects/deepstream-python/deepstream-face_detection.py:176: DeprecationWarning: Gst.Element.get_request_pad is deprecated
  sinkpad = streammux.get_request_pad("sink_0")
Starting pipeline (Software Encode Optimized Only)...
Opening in BLOCKING MODE 
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:00:00.392344490 32251 0xaaaadde4ee10 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/home/ubuntu/models/engine/retinaface_engine/FaceDetector_MobileNet_Dynamic_640_b32.engine
INFO: [FullDims Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x640x640       min: 1x3x640x640     opt: 32x3x640x640    Max: 32x3x640x640    
1   OUTPUT kFLOAT bbox            16800x4         min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT confidence      16800x2         min: 0               opt: 0               Max: 0               
3   OUTPUT kFLOAT landmark        16800x10        min: 0               opt: 0               Max: 0               

0:00:00.392500143 32251 0xaaaadde4ee10 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /home/ubuntu/models/engine/retinaface_engine/FaceDetector_MobileNet_Dynamic_640_b32.engine
0:00:00.459131006 32251 0xaaaadde4ee10 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/home/ubuntu/retinaface_deepstream/config_infer_retinaface.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261 
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:438: => Failed in mem copy

Unable to set device in gst_nvstreammux_src_collect_buffers
ERROR: [TRT]: IExecutionContext::enqueueV3: Error Code 1: Cuda Driver (TensorRT internal error)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:40.123274058 32251 0xaaaadd86f6a0 WARN                 nvinfer gstnvinfer.cpp:1420:gst_nvinfer_input_queue_loop:<primary-inference> error: Failed to queue input batch for inferencing
ERROR: queue buffer failed to set cuda device((null)), cuda err_no:4, err_str:cudaErrorCudartUnloading
0:00:40.123687703 32251 0xaaaadd86f6a0 WARN                 nvinfer gstnvinfer.cpp:1420:gst_nvinfer_input_queue_loop:<primary-inference> error: Failed to queue input batch for inferencing
Error: gst-stream-error-quark: Failed to queue input batch for inferencing (1) — /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1420): gst_nvinfer_input_queue_loop (): /GstPipeline:ds-retinaface-sw-encode-pipeline/GstNvInfer:primary-inference
Unable to set device in gst_nvstreammux_change_state
Pipeline stopped and cleaned up.

(python3:32251): GStreamer-CRITICAL **: 10:42:01.422: 
Trying to dispose element ds-retinaface-sw-encode-pipeline, but it is in PLAYING instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.


(python3:32251): GStreamer-CRITICAL **: 10:42:01.422: 
Trying to dispose element streammux, but it is in PLAYING instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.


(python3:32251): GStreamer-CRITICAL **: 10:42:01.422: 
Trying to dispose element file-source, but it is in PLAYING instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.


(python3:32251): GStreamer-CRITICAL **: 10:42:01.422: 
Trying to dispose element file-sink, but it is in PLAYING instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.

(venv) ubuntu@tegra-ubuntu:~/projects/deepstream-python$ 

It's not convenient for me to downgrade JetPack, by the way.

Any solution for me? :(

Please set compute-hw to 2 for nvstreammux, nvinfer, and nvvideoconvert, which do video conversion. If it still doesn't work, please simplify the pipeline to check which element causes the error.
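For example, in the second Python script above, that could look like the following sketch inside main(), before setting the pipeline to PLAYING (the variable names are the ones from that script; find_property() guards against elements that do not expose these properties on a given DeepStream version):

# Illustrative sketch: request VIC compute (value 2) on the elements that do video
# conversion, as suggested above; copy-hw follows the "Failed in mem copy" FAQ hint.
for el in (streammux, pgie, nvvidconv_post, nvvidconv_to_cpu):
    if el.find_property("compute-hw"):
        el.set_property("compute-hw", 2)
if nvvidconv_to_cpu.find_property("copy-hw"):
    nvvidconv_to_cpu.set_property("copy-hw", 2)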

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.