DeepStream App with Basler ace2 camera

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Model: NVIDIA Jetson Xavier NX Developer Kit - Jetpack 5.0.2 GA [L4T 35.1.0]
NV Power Mode[2]: MODE_15W_6CORE
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
  • P-Number: p3668-0001
  • Module: NVIDIA Jetson Xavier NX
Platform:
  • Distribution: Ubuntu 20.04 focal
  • Release: 5.10.104-tegra
jtop:
  • Version: 4.2.1
  • Service: Active
Libraries:
  • CUDA: 11.4.239
  • cuDNN: 8.4.1.50
  • TensorRT: 8.4.1.5
  • VPI: 2.1.6
  • Vulkan: 1.3.203
  • OpenCV: 4.5.2 - with CUDA: NO

• DeepStream Version
DeepStream version 6.1.1

I want to do live inference on the Basler video stream using DeepStream.
The default config takes the mp4 sample, but I am not sure how to enable the live view inside the DeepStream framework.

I am able to execute the GStreamer pipeline and see the live view of the Basler camera in GStreamer. But how do I integrate it with DeepStream?
Could you please give me some hints?

Thank you very much in advance.

thanks
Arun

What is the specific format of your live view from the Basler? Could you attach the pipeline of the Basler camera on GStreamer?

gst-launch-1.0 pylonsrc cam::ExposureTime=2000 cam::Gain=10.3 ! "video/x-raw,width=640,height=480,framerate=10/1,format=GRAY8" ! videoconvert ! autovideosink

You can just try to add nvvideoconvert, nvstreammux, nvinfer to your pipeline, like:

$ gst-launch-1.0 pylonsrc cam::ExposureTime=2000 cam::Gain=10.3 ! "video/x-raw,width=640,height=480,framerate=10/1,format=GRAY8" ! videoconvert ! nvvideoconvert \
! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=configs/deepstream-app/config_infer_primary.txt \
batch-size=1 unique-id=1 ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so \
! nvinfer config-file-path=configs/deepstream-app/config_infer_secondary_carcolor.txt batch-size=16 unique-id=2 infer-on-gie-id=1 infer-on-class-ids=0 \
! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! nvvideoconvert ! nvdsosd ! nveglglessink

It’s just an example to show how to link after the videoconvert in your pipeline. You can add the nvstreammux, nvinfer, nvdsosd, … plugins according to your needs and actual environment.
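
If it helps as a starting point, here is a trimmed-down sketch of the same idea with only the primary nvinfer and the on-screen display. The config path assumes the stock config_infer_primary.txt from the DeepStream samples directory, and the display path assumes a Jetson (nvegltransform before nveglglessink); adjust both to your install:

$ gst-launch-1.0 pylonsrc cam::ExposureTime=2000 cam::Gain=10.3 ! "video/x-raw,width=640,height=480,framerate=10/1,format=GRAY8" ! videoconvert ! nvvideoconvert \
! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=480 \
! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt batch-size=1 unique-id=1 \
! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink

Once this runs, the nvtracker, the secondary nvinfer, and the nvmultistreamtiler from the example above can be inserted back after the primary nvinfer.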

Update here:
The pipeline worked for about 30 seconds and then crashed, after changing the sink to nvvideoconvert ! nvegltransform ! nveglglessink as you suggested.

gst-launch-1.0 pylonsrc cam::ExposureTime=20000 cam::Gain=10.3 ! "video/x-raw,width=640,height=480,framerate=10/1,format=GRAY8" ! videoconvert ! nvvideoconvert \
! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=480 ! nvinfer config-file-path=config_infer_primary_yoloV5.txt \
batch-size=1 unique-id=1 ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so \
! nvinfer config-file-path=config_infer_primary_yoloV5.txt batch-size=1 unique-id=2 infer-on-gie-id=1 infer-on-class-ids=0 \
! nvmultistreamtiler rows=1 columns=1 width=640 height=480 ! nvvideoconvert ! nvegltransform ! nveglglessink

But now it is crashing with an error saying that the buffering is not sufficient:
gstbasesink.c(3003): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0:
There may be a timestamping problem, or this computer is too slow.
ERROR: from element /GstPipeline:pipeline0/GstPylonSrc:pylonsrc0: Failed to create buffer.
Additional debug info:
…/ext/pylon/gstpylonsrc.cpp(992): gst_pylon_src_create (): /GstPipeline:pipeline0/GstPylonSrc:pylonsrc0:
The buffer was incompletely grabbed. This can be caused by performance problems of the network hardware used, i.e., the network adapter, switch, or Ethernet cable. Buffer underruns can also cause image loss. To fix this, use the pylonGigEConfigurator tool to optimize your setup and use more buffers for grabbing in your application to prevent buffer underruns.
Execution ended after 0:00:38.483632219
Setting pipeline to NULL …
[NvMultiObjectTracker] De-initialized
Freeing pipeline …

What should be done to optimize this pipeline so that it keeps running and does not crash?

Kind regards
Arun

So the pipeline runs for about 30 seconds and then crashes, is that right?
We can first briefly locate the cause of the crash. Could you use the gdb tool to find out where it crashes?
You can also change the nveglglessink to fakesink and see if it still crashes.
You can refer to the link below to track the changes in memory and see whether there are issues such as memory leaks: Capture HW & SW Memory Leak log
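
A quick sketch of those two checks; the leading "..." stands for the unchanged front part of your pipeline, and sync=false on fakesink additionally removes the rendering-clock pressure that produces the "computer is too slow" warning:

# 1. Same pipeline as above, but discard the output instead of rendering it
... ! nvmultistreamtiler rows=1 columns=1 width=640 height=480 ! fakesink sync=false

# 2. If it still crashes, run the full command under gdb and capture a backtrace
gdb --args gst-launch-1.0 <full pipeline as above>
(gdb) run
(gdb) bt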

The problem was a lost connection to the Basler camera; I am not sure why pylon reconnected the camera. Giving the camera interface a link-local address solved the problem. Auto IP is not a good option. Thanks.
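
For anyone hitting the same disconnects, one way to move the camera NIC off automatic IP assignment is a dedicated NetworkManager profile with link-local addressing. The interface name eth0 and the connection name below are placeholders, adjust them to your wiring:

# Pin the camera port (assumed here to be eth0) to link-local addressing instead of Auto IP/DHCP
sudo nmcli connection add type ethernet ifname eth0 con-name basler-cam ipv4.method link-local
sudo nmcli connection up basler-cam

On top of that, the pylonGigEConfigurator tool mentioned in the error message above can be used to tune the GigE transport settings for the camera.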
