DeepStream nvosd hello world

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Xavier NX
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6
• Issue Type (questions, new requirements, bugs) Using NVOSD without an upstream module creating metadata

I am trying to write a simple Python program to print “Hello World” over a video stream using the nvosd DeepStream/GStreamer module.
The GStreamer string I am parsing is:
nvv4l2camerasrc device=/dev/video1 ! video/x-raw(memory:NVMM), format=(string)UYVY, width=(int)640, height=(int)480 ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)RGBA ! nvdsosd name=info ! nvoverlaysink
I probe the nvosd sink pad, but there is no metadata to attach the text to. This is expected, because there is no upstream module that generates the metadata, like an inference module.
Reading through the API docs, I cannot find any function to create a new metadata object to which I can attach the text.

How do I use the nvosd module standalone without upstream inference or similar modules?

This is a simple program to test the module; the final program will use more of the nvosd functionality, but the data on where to draw the shapes will come from separate algorithms based on other sensor data.

nvstreammux will add the batch meta; then the application can add its own meta in a probe function.
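For reference, a minimal sketch of such a probe, based on the pattern used in deepstream_test_1.py (the probe name, text position, and font values below are placeholders, not anything mandated by the API):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # The batch meta is attached upstream by nvstreammux.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # Acquire display meta from the pool and fill in one text element.
        display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        text_params = display_meta.text_params[0]
        text_params.display_text = "Hello World"
        text_params.x_offset = 10
        text_params.y_offset = 12
        text_params.font_params.font_name = "Serif"
        text_params.font_params.font_size = 20
        text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)  # white text
        text_params.set_bg_clr = 1
        text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)  # black background

        # nvdsosd draws whatever display meta is attached to the frame.
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

The probe is attached to the nvdsosd sink pad, e.g. osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0).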

Thanks for getting back to me.
Using the command line to check that the new pipeline links correctly, I get the following error:

nvidia@nvidia-desktop:~$ gst-launch-1.0 nvv4l2camerasrc device=/dev/video1 ! 'video/x-raw(memory:NVMM), format=(string)UYVY, width=(int)640, height=(int)480' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)RGBA' ! nvstreammux ! nvdsosd ! nvoverlaysink
WARNING: erroneous pipeline: could not link nvvconv0 to nvstreammux0, nvstreammux0 can't handle caps video/x-raw(memory:NVMM), format=(string)RGBA

Using gst-inspect-1.0 to check what inputs nvstreammux can handle, I see that it can take RGBA format, so what is it complaining about?
Do I need to link it manually?

I’ve had a go at manually linking them, using deepstream_test_1.py for reference.
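Roughly, the linking follows the sketch below (not the exact attached code; the element variable names are illustrative). nvstreammux only exposes request sink pads named sink_%u, so a pad has to be requested explicitly and linked to the upstream element’s src pad:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline()

# Illustrative elements; the real pipeline also contains the camera source
# and the caps filters in front of the converter.
vidconv = Gst.ElementFactory.make("nvvidconv", "conv")
streammux = Gst.ElementFactory.make("nvstreammux", "mux")
streammux.set_property("width", 640)
streammux.set_property("height", 480)
streammux.set_property("batch-size", 1)
streammux.set_property("batched-push-timeout", 4000000)
pipeline.add(vidconv)
pipeline.add(streammux)

# Request a sink pad on the muxer and link the converter to it manually.
sinkpad = streammux.get_request_pad("sink_0")
srcpad = vidconv.get_static_pad("src")
srcpad.link(sinkpad)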
The pipeline links successfully, but then errors out saying that there are no surfaces in the input to nvstreammux:

Error: gst-stream-error-quark: Input buffer number of surfaces (0) must be equal to mux->num_surfaces_per_frame (1)
	Set nvstreammux property num-surfaces-per-frame appropriately
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvmultistream/gstnvstreammux.c(485): gst_nvstreammux_chain (): /GstPipeline:pipeline0/GstNvStreamMux:mux

What else am I missing?
Attached is my test program.
Do you have any deepstream examples without inferencing?
nvosd test v2.py (6.3 KB)

I have removed the inference from the Python test app deepstream_test_1_usb.py. It works until I swap the source from v4l2src to nvv4l2camerasrc; with nvv4l2camerasrc I get the same error relating to the number of surfaces.
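The source-side change is roughly the following (a sketch of the swap rather than the exact code; it slots into the existing app, which already has the Gst imports, and the caps values come from my pipeline above):

# deepstream_test_1_usb.py originally builds
#   v4l2src -> capsfilter -> videoconvert -> nvvideoconvert -> capsfilter(NVMM)
# nvv4l2camerasrc already outputs NVMM buffers, so the swapped-in source
# chain becomes roughly:
source = Gst.ElementFactory.make("nvv4l2camerasrc", "cam-source")
source.set_property("device", "/dev/video1")

caps_src = Gst.ElementFactory.make("capsfilter", "src_caps")
caps_src.set_property("caps", Gst.Caps.from_string(
    "video/x-raw(memory:NVMM), format=UYVY, width=640, height=480"))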

I have looked to see if this error appears elsewhere on the forum:
Appsrc with numpy input in Python - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

(DeepStream 4.0) nvmux: Input buffer number of surfaces (-336860181) must be equal to mux->num_surfaces_per_frame - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

but none of them provides a solution.

@kesong, this is still unsolved; do you have any more insights?
If I solve this offline I WILL post a solution, so don’t close this thread until then.

What is the issue now?

@kesong, as described in the previous posts, there is an error with the number of surfaces nvstreammux expects (it receives 0 but expects 1-4).
This may be related to using nvv4l2camerasrc, as deepstream_test_1_usb.py works with v4l2src but not with nvv4l2camerasrc (caps adjusted accordingly).

Can you have a try with bufapi-version=true in nvv4l2camerasrc? Can you replace nvvidconv with nvvideoconvert?
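Applied to the gst-launch pipeline from earlier, those two changes would give roughly the following (a sketch; the nvstreammux batch-size/width/height values shown here are assumptions):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# bufapi-version=true switches nvv4l2camerasrc to the DeepStream buffer API
# (NvBufSurface), and nvvideoconvert handles those buffers, which is what
# nvstreammux expects downstream.
pipeline = Gst.parse_launch(
    "nvv4l2camerasrc device=/dev/video1 bufapi-version=true ! "
    "video/x-raw(memory:NVMM), format=(string)UYVY, width=(int)640, height=(int)480 ! "
    "nvvideoconvert ! video/x-raw(memory:NVMM), format=(string)RGBA ! "
    "mux.sink_0 nvstreammux name=mux batch-size=1 width=640 height=480 ! "
    "nvdsosd name=info ! nvoverlaysink"
)
pipeline.set_state(Gst.State.PLAYING)  # in a real app a GLib main loop drives this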

@kesong Thank you, it is now working.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.