Instance with invalid (NULL) class pointer

Hardware Platform: Jetson Nano
DeepStream Version: 6.0.1
JetPack Version: 4.6.1
TensorRT Version: 8.2.1

I noticed that when I enable a primary GIE based on YOLOv7 for DeepStream from this repository, after a while the application produces the following errors:

g_signal_handler_unblock: assertion 'G_TYPE_CHECK_INSTANCE (instance)' failed
instance with invalid (NULL) class pointer
g_signal_handler_block: assertion 'G_TYPE_CHECK_INSTANCE (instance)' failed
instance with invalid (NULL) class pointer
g_signal_handler_unblock: assertion 'G_TYPE_CHECK_INSTANCE (instance)' failed
instance with invalid (NULL) class pointer
g_signal_handler_block: assertion 'G_TYPE_CHECK_INSTANCE (instance)' failed
instance with invalid (NULL) class pointer
g_signal_handler_unblock: assertion 'G_TYPE_CHECK_INSTANCE (instance)' failed
NVPARSER: HEVC: Seeking is not performed on IRAP picture
gst_mini_object_unlock: assertion 'state >= SHARE_ONE' failed
gst_mini_object_unref: assertion 'GST_MINI_OBJECT_REFCOUNT_VALUE (mini_object) > 0' failed
g_object_unref: assertion 'G_IS_OBJECT (object)' failed
g_object_ref: assertion 'G_IS_OBJECT (object)' failed
gst_allocator_free: assertion 'GST_IS_ALLOCATOR (allocator)' failed
gst_object_unref: assertion '((GObject *) object)->ref_count > 0' failed
gst_mini_object_unlock: assertion 'state >= SHARE_ONE' failed
gst_mini_object_unref: assertion 'GST_MINI_OBJECT_REFCOUNT_VALUE (mini_object) > 0' failed

The application usually crashes, but sometimes it keeps running and data from one of the RTSP cameras is simply not processed any further.
I also have a dynamic pipeline, so every n seconds it writes the video to a new file (I mentioned an example of this pipeline here).

Yes, the error is probably related to the YOLOv7 repository, but maybe you can help.

Does that sample run OK without any modification? If yes, please check whether it is related to your code modifications; if no, please share the whole logs.

Without the primary-gie module everything works fine, but if it is enabled, this error appears.
I made a reproducible case; you can check the attached zip file. The logs are there too. It took about 2 hours to get this error.
test.zip (92.2 MB)

** INFO: <perf_cb:49>: **PERF: 5.02 (4.72)

What is your RTSP source’s FPS? As the logs show, the pipeline’s FPS is only about 5. Please also check the memory usage, because frames will be buffered if they cannot be processed fast enough.

The RTSP source’s FPS is 8, but the pipeline FPS with a single camera is about 4.5. I have already run an experiment on memory usage and didn’t notice anything abnormal; I described the results in the same GitHub issue.

I also tried skipping frames in the primary GIE using interval=1, but the problem is still there.
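
(For reference, a rough sketch of how I enable it in my code; the variable name pgie is only illustrative, and the same thing can be set with interval=1 under [property] in the nvinfer config file:)

    /* skip every other batch in the primary GIE; pgie is the nvinfer element */
    g_object_set(G_OBJECT(pgie), "interval", 1, NULL);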

I also noticed a warning that appears from time to time:

GStreamer-CRITICAL **: 19:18:30.202: 
Trying to dispose element fakesink, but it is in READY instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.

It happens when I try to remove fakesink from the pipeline (video_logs.c file):

    gst_element_set_state(instance_bin->fake_sink, GST_STATE_NULL);
    gst_bin_remove(GST_BIN(pipeline->pipeline), instance_bin->fake_sink);
    instance_bin->fake_sink = NULL;

I don’t know whether it is related to this issue, and I don’t understand why this warning occurs.
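
As a next debugging step I plan to verify that the NULL transition actually completes before the element is removed; a minimal sketch, assuming the same fields as in the snippet above:

    GstStateChangeReturn ret =
        gst_element_set_state(instance_bin->fake_sink, GST_STATE_NULL);

    if (ret == GST_STATE_CHANGE_FAILURE) {
        g_warning("could not set fakesink to NULL");
    } else {
        GstState state = GST_STATE_VOID_PENDING;
        /* wait up to one second for the state change to finish */
        gst_element_get_state(instance_bin->fake_sink, &state, NULL, GST_SECOND);
        if (state != GST_STATE_NULL)
            g_warning("fakesink is in %s instead of NULL before removal",
                      gst_element_state_get_name(state));
    }

    gst_bin_remove(GST_BIN(pipeline->pipeline), instance_bin->fake_sink);
    instance_bin->fake_sink = NULL;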

  1. You can enable fakesink by setting type to 1 in config.txt.
  2. Can DeepStream-Yolo run successfully without any modification? Please narrow down this issue if DeepStream-Yolo can run.

@fanzh

  1. I have a dynamic pipeline, so I always have to switch between the usual sink and a fakesink to create a new output file depending on some registered event. Yes, I could use smart record, but it is not flexible enough in my case.

  2. As it turned out, the error does not depend on whether the pipeline is static or dynamic. I ran the code on a simple pipeline (source->streammux->GIE->demuxer->fakesink) and I still get this error.

I also made a mistake here:

I launched the primary GIE with interval=1 on the usual pipeline again, and after 20 hours the problem still has not appeared.
So with interval=0 the problem occurs, and with interval=1 the problem is gone.

So if there isn’t enough FPS to process everything, the pipeline will crash, as you mentioned here:

I updated the code of the reproducible case to use the pipeline source->streammux->GIE->demuxer->fakesink, so the code is now easier to understand. The logs with the error are also included.
test.zip (92.2 MB)
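
For readers who do not want to open the archive, here is a stripped-down sketch of that topology (the RTSP URI, resolution, batch size, and nvinfer config path are placeholders; the attached code contains the full error handling):

    #include <gst/gst.h>

    /* link the decoded video pad to the batcher; audio pads are ignored */
    static void on_decode_pad(GstElement *decodebin, GstPad *pad, gpointer data)
    {
        GstElement *streammux = GST_ELEMENT(data);
        GstCaps *caps = gst_pad_get_current_caps(pad);
        if (caps == NULL)
            return;
        const gchar *name = gst_structure_get_name(gst_caps_get_structure(caps, 0));
        if (g_str_has_prefix(name, "video")) {
            GstPad *sinkpad = gst_element_get_request_pad(streammux, "sink_0");
            if (gst_pad_link(pad, sinkpad) != GST_PAD_LINK_OK)
                g_printerr("failed to link decoder to streammux\n");
            gst_object_unref(sinkpad);
        }
        gst_caps_unref(caps);
    }

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        GstElement *pipeline  = gst_pipeline_new("pipeline");
        GstElement *source    = gst_element_factory_make("uridecodebin", "source");
        GstElement *streammux = gst_element_factory_make("nvstreammux", "streammux");
        GstElement *pgie      = gst_element_factory_make("nvinfer", "primary-gie");
        GstElement *demux     = gst_element_factory_make("nvstreamdemux", "demuxer");
        GstElement *sink      = gst_element_factory_make("fakesink", "fakesink");

        /* placeholder URI, resolution, and config path */
        g_object_set(G_OBJECT(source), "uri", "rtsp://<camera-uri>", NULL);
        g_object_set(G_OBJECT(streammux), "batch-size", 1, "live-source", 1,
                     "width", 1280, "height", 720, NULL);
        g_object_set(G_OBJECT(pgie), "config-file-path",
                     "config_infer_primary_yoloV7.txt", NULL);
        g_object_set(G_OBJECT(sink), "sync", FALSE, NULL);

        gst_bin_add_many(GST_BIN(pipeline), source, streammux, pgie, demux, sink, NULL);
        g_signal_connect(source, "pad-added", G_CALLBACK(on_decode_pad), streammux);
        gst_element_link(streammux, pgie);
        gst_element_link(pgie, demux);

        /* nvstreamdemux exposes one request src pad per stream in the batch */
        GstPad *demux_src = gst_element_get_request_pad(demux, "src_0");
        GstPad *sink_pad  = gst_element_get_static_pad(sink, "sink");
        gst_pad_link(demux_src, sink_pad);
        gst_object_unref(demux_src);
        gst_object_unref(sink_pad);

        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        GstBus *bus = gst_element_get_bus(pipeline);
        gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                   GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(bus);
        gst_object_unref(pipeline);
        return 0;
    }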

So what should be the solution to this problem? Can I handle this error in code?

  1. Do you need to process every frame?
  2. If you are using fakesink, you can set its sync property to false.
  1. No, I don’t need to.
  2. But if I am using a file sink, what should I do?

You can set filesink’s sync property to false.
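
For example (illustrative sketch only; the encoder/muxer in front of the sink and the output path are placeholders):

    GstElement *file_sink = gst_element_factory_make("filesink", "filesink");
    /* render as fast as buffers arrive instead of syncing to the clock */
    g_object_set(G_OBJECT(file_sink), "location", "output.mp4",
                 "sync", FALSE, NULL);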

I set the sync property to FALSE on the fakesink in this example, but it doesn’t help.

I just added the g_object_set (G_OBJECT (pipeline->fake_sink), "sync", FALSE, NULL); line in the fakesink.c file:

  pipeline->fake_sink = gst_element_factory_make("fakesink", "fakesink");
  g_object_set (G_OBJECT (pipeline->fake_sink), "sync", FALSE, NULL);
  gst_bin_add(GST_BIN(pipeline->pipeline), pipeline->fake_sink);

Sorry for the late reply. interval=1 will improve the performance if there is no need to process every frame. You can use filesink if you want to save to disk; here is some sample code: https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app/blob/master/deepstream-lpr-app/deepstream_lpr_app.c#L673

Yes, interval=1 will run inference on every second frame, so performance will increase. But can we handle this error in code to prevent it in the future? Relying on interval=1 as the fix seems unreliable.

Yes, I can use filesink, but it should also work with fakesink.

I also tried running YOLOv7 in the default deepstream-app application. With one RTSP source it works fine, but with multiple sources (I tested with 4) it crashes.
I have described the results here.

Thanks for sharing; I will try to reproduce it. Did you test two sources? How long did it run before crashing?

I haven’t tested two RTSP sources yet. With 4 cameras it crashes within 20 minutes.
With one source, deepstream-app worked for 30 hours, after which I stopped it.

If using 4 RTSP sources, please set batch-size of [streammux] to 4, live-source of [streammux] to 1, and batch-size of [primary-gie] to 4.
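
In the deepstream-app config file this corresponds to the following keys (everything else unchanged):

    [streammux]
    batch-size=4
    live-source=1

    [primary-gie]
    batch-size=4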

I’ve also noticed a new error. I set interval=0 instead of 3 in this config for deepstream-app. With 1 or 2 RTSP streams everything is OK, but when I have 3 or 4 RTSP sources, I get a new error like this:

**PERF:  1.63 (1.63)	1.62 (1.55)	1.61 (1.58)	1.63 (1.65)	
** INFO: <perf_cb:189>: 2022-10-31__19_17_57

**PERF:  1.63 (1.57)	1.63 (1.59)	1.63 (1.61)	1.63 (1.59)	
** INFO: <perf_cb:189>: 2022-10-31__19_17_58

** WARN: <watch_source_status:738>: No data from source 2 since last 5 sec. Trying reconnection
NVMEDIA: NVMEDIABufferProcessing: 1099: Consume the extra signalling for EOS 
** INFO: <reset_source_pipeline:1546>: Resetting source 2
**PERF:  1.64 (1.60)	1.64 (1.62)	1.64 (1.56)	1.64 (1.62)	
** INFO: <perf_cb:189>: 2022-10-31__19_17_59

ERROR from src_elem2: Unhandled error
Debug info: gstrtspsrc.c(6161): gst_rtspsrc_send (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin2/GstRTSPSrc:src_elem2:
Option not supported (551)
ERROR from src_elem2: Could not write to resource.
Debug info: gstrtspsrc.c(8244): gst_rtspsrc_pause (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin2/GstRTSPSrc:src_elem2:
Could not send message. (Generic error)
NvMMLiteOpen : Block : BlockType = 279 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 279 
Opening in BLOCKING MODE 
**PERF:  1.57 (1.56)	1.52 (1.58)	1.55 (1.60)	1.55 (1.65)	
** INFO: <perf_cb:189>: 2022-10-31__19_18_00

NVPARSER: HEVC: Seeking is not performed on IRAP picture 
NVPARSER: HEVC: Seeking is not performed on IRAP picture 
NVPARSER: HEVC: Seeking is not performed on IRAP picture 
NVPARSER: HEVC: Seeking is not performed on IRAP picture 
NVPARSER: HEVC: Seeking is not performed on IRAP picture 
NVPARSER: HEVC: Seeking is not performed on IRAP picture 
NVPARSER: HEVC: Seeking is not performed on IRAP picture 
**PERF:  1.56 (1.59)	1.58 (1.61)	1.59 (1.56)	1.59 (1.61)	
** INFO: <perf_cb:189>: 2022-10-31__19_18_01

But the application does not crash completely; after this error it continues to work (at least for 2 hours).
Maybe this problem is somehow related to the original one.

Now I’m going to run tests with the batch-size and will write about the results later.

I set batch-size=4 in [streammux], in [primary-gie], and also in the config_infer_primary_yoloV7.txt file, and I got the same error with 4 RTSP sources:

**PERF:  0.67 (5.02)    0.67 (5.04)     0.67 (5.06)     0.00 (5.01)
** INFO: <perf_cb:189>: 2022-10-31__20_32_54


**PERF:  FPS 0 (Avg)    FPS 1 (Avg)     FPS 2 (Avg)     FPS 3 (Avg)
**PERF:  1.98 (5.01)    1.98 (5.03)     1.98 (5.05)     0.00 (5.00)
** INFO: <perf_cb:189>: 2022-10-31__20_32_55

** WARN: <watch_source_status:738>: No data from source 2 since last 5 sec. Trying reconnection
NVMEDIA: NVMEDIABufferProcessing: 1099: Consume the extra signalling for EOS
** INFO: <reset_source_pipeline:1546>: Resetting source 2
**PERF:  1.70 (5.01)    1.70 (5.02)     1.70 (5.04)     0.26 (4.99)
** INFO: <perf_cb:189>: 2022-10-31__20_32_56

ERROR from src_elem2: Unhandled error
Debug info: gstrtspsrc.c(6161): gst_rtspsrc_send (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin2/GstRTSPSrc:src_elem2:
Option not supported (551)
ERROR from src_elem2: Could not write to resource.
Debug info: gstrtspsrc.c(8244): gst_rtspsrc_pause (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin2/GstRTSPSrc:src_elem2:
Could not send message. (Generic error)

(deepstream-app:25612): GStreamer-CRITICAL **: 20:32:57.241: gst_mini_object_unlock: assertion 'state >= SHARE_ONE' failed

(deepstream-app:25612): GStreamer-CRITICAL **: 20:32:57.241: gst_mini_object_unref: assertion 'GST_MINI_OBJECT_REFCOUNT_VALUE (mini_object) > 0' failed

(deepstream-app:25612): GLib-GObject-WARNING **: 20:32:57.242: instance with invalid (NULL) class pointer

(deepstream-app:25612): GLib-GObject-CRITICAL **: 20:32:57.242: g_signal_handler_disconnect: assertion 'G_TYPE_CHECK_INSTANCE (instance)' failed

The zero FPS after the NVPARSER: HEVC: Seeking is not performed on IRAP picture warning is also confusing me:

NVPARSER: HEVC: Seeking is not performed on IRAP picture 
NVPARSER: HEVC: Seeking is not performed on IRAP picture 
NVPARSER: HEVC: Seeking is not performed on IRAP picture 
NVPARSER: HEVC: Seeking is not performed on IRAP picture 
NVPARSER: HEVC: Seeking is not performed on IRAP picture 
NVPARSER: HEVC: Seeking is not performed on IRAP picture 
**PERF:  5.42 (5.30)	3.96 (5.34)	3.96 (5.34)	3.96 (5.31)	
** INFO: <perf_cb:189>: 2022-10-31__21_04_41

**PERF:  0.00 (5.29)	0.00 (5.33)	0.00 (5.33)	0.00 (5.31)	
** INFO: <perf_cb:189>: 2022-10-31__21_04_42

**PERF:  1.39 (5.29)	1.69 (5.33)	1.69 (5.33)	1.76 (5.30)	
** INFO: <perf_cb:189>: 2022-10-31__21_04_43

**PERF:  1.00 (5.29)	0.00 (5.33)	0.00 (5.32)	1.00 (5.30)	
** INFO: <perf_cb:189>: 2022-10-31__21_04_44

**PERF:  0.00 (5.28)	0.00 (5.32)	0.00 (5.32)	0.00 (5.30)	
** INFO: <perf_cb:189>: 2022-10-31__21_04_45

**PERF:  2.15 (5.28)	0.89 (5.32)	1.18 (5.32)	2.15 (5.29)	
** INFO: <perf_cb:189>: 2022-10-31__21_04_46

Using 4 of the same RTSP sources, I tested GitHub - marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 6.1.1 / 6.1 / 6.0.1 / 6.0 configuration for YOLO models on Orin + DeepStream 6.1.1 for about 10 hours. The application did not crash, there is no error information, and memory usage is normal. Here is the test report.
log.zip (164.5 KB)