Hey, thanks for the swift reply.
So, I used DeepStream as an Isaac node with an isaac::deepstream::Pipeline component. That meant I had to write my own GStreamer pipeline to get YOLO running.
I tried to mimic the deepstream_app_config_yoloV3_tiny.txt file as closely as I could, since the deepstream-app -c command works really well.
I came up with this GStreamer pipeline and tested it on the Jetson (without Isaac):
$ gst-launch-1.0 nvarguscamerasrc sensor_mode=4 ! 'video/x-raw(memory:NVMM), width=1280, height=720, framerate=60/1, format=NV12' ! nvvidconv flip-method=0 ! video/x-raw, format=RGBA ! nvvideoconvert nvbuf-memory-type=4 ! 'video/x-raw(memory:NVMM)' ! mux.sink_0 nvstreammux name=mux width=640 height=480 batch-size=1 live-source=1 batched-push-timeout=400000 enable-padding=0 nvbuf-memory-type=0 ! nvinfer config-file-path=config_infer_primary_yoloV3_tiny.txt ! nvdsosd ! nvvideoconvert nvbuf-memory-type=0 ! nvvidconv flip-method=0 ! 'video/x-raw,width=640, height=480' ! nvvidconv ! nvegltransform ! nveglglessink -e
This pipeline, however, produced a steady stream of "A lot of buffers are being dropped" warnings:
WARNING: from element /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0:
There may be a timestamping problem, or this computer is too slow.
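(For anyone hitting the same warning: it means the sink is discarding frames that arrive later than their timestamps allow. I haven't verified this on my setup, but disabling clock synchronization on the sink via the standard GstBaseSink sync property is supposed to stop the late-buffer drops, at the cost of no longer pacing playback to the clock:)

```
$ gst-launch-1.0 ... ! nvegltransform ! nveglglessink sync=false -e
```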
But inference was happening and some bounding boxes showed up, although the frame rate was not great.
I then changed the pipeline to fit Isaac, expecting the same low frame rate (which is not an issue for my use case):
"pipeline": "nvarguscamerasrc sensor_mode=4 ! video/x-raw(memory:NVMM),width=1280, height=720, framerate=60/1, format=NV12 ! nvvidconv flip-method=0 ! video/x-raw,format=RGBA ! nvvideoconvert nvbuf-memory-type=4 ! video/x-raw(memory:NVMM) ! mux.sink_0 nvstreammux name=mux width=640 height=480 batch-size=1 batched-push-timeout=4000000 nvbuf-memory-type=0 ! nvinfer config-file-path=apps/deepstream_yolo/configs/config_infer_primary_yoloV3_tiny.txt nvbuf-memory-type=0 ! nvvideoconvert nvbuf-memory-type=0 ! nvdsosd nvbuf-memory-type=0 ! nvvideoconvert nvbuf-memory-type=0 ! video/x-raw,format=RGBA ! videoconvert ! video/x-raw,format=RGB ! appsink name=fireImage"
The pipeline loads without errors, but it hangs at execution. I suspect it's a buffer issue of some sort, but I couldn't figure out a solution. That's why I was wondering if there is an easier way of using DeepStream in Isaac, since deepstream-app -c <.txt file> is doing the job really well.
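Two things I'm planning to try next, though both are guesses on my part: restoring live-source=1 on nvstreammux (the working standalone pipeline had it, but this one dropped it), and making the appsink non-blocking so a slow consumer can't stall the whole pipeline (sync, max-buffers, and drop are standard appsink properties). Roughly:

```
"pipeline": "... ! mux.sink_0 nvstreammux name=mux width=640 height=480 batch-size=1 live-source=1 batched-push-timeout=4000000 ! ... ! appsink name=fireImage sync=false max-buffers=1 drop=true"
```

If anyone knows whether the hang is actually on the appsink side or somewhere in the muxer, I'd appreciate a pointer.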