RTSP in RTSP out example on Jetson Orin nano

I am getting this error on Jetson Orin Nano. Is there an example that does not use hardware encoding here?

Error: gst-resource-error-quark: Could not open device '/dev/nvhost-msenc' for reading and writing. (7): /dvs/git/dirty/git-master_linux/3rdparty/gst/gst-v4l2/gst-v4l2/v4l2_calls.c(651): gst_v4l2_open (): /GstPipeline:pipeline0/nvv4l2h264enc:encoder:
system error: Cannot allocate memory

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1.1-b56
• TensorRT Version: 8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing): Trying to run the deepstream-rtsp-in-rtsp-out python sample
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

As there is no hardware encoder in the Orin Nano, the encoding plugin needs to be changed to the software encoder x264enc. Please refer to the pipeline here: Software Encode in Orin Nano — Jetson Linux Developer Guide documentation (nvidia.com).
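For the Python sample, the change amounts to swapping the encoder factory name passed to `Gst.ElementFactory.make`. A minimal sketch of that selection logic (the helper name and codec strings here are illustrative, not part of the sample):

```python
import os

def pick_encoder(codec="H264"):
    """Return a GStreamer encoder factory name suitable for this Jetson module.

    The hardware encoder device /dev/nvhost-msenc is absent on Orin Nano,
    which is why nvv4l2h264enc fails to open there.
    """
    if os.path.exists("/dev/nvhost-msenc"):
        # NVENC hardware block present (e.g. Orin NX, AGX Orin)
        return "nvv4l2h264enc" if codec == "H264" else "nvv4l2h265enc"
    # No NVENC (Orin Nano): fall back to software encoding
    return "x264enc" if codec == "H264" else "x265enc"
```

Note that x264enc works on system memory, so the nvvideoconvert element feeding it must negotiate plain `video/x-raw` caps rather than NVMM memory, as shown in the linked pipeline.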


Hi, I'm stuck with the same issue trying to run the rtsp-in-rtsp-out example on a freshly set up Orin Nano. I have changed the encoder from nvv4l2h264enc to x264enc and am getting the following error:

Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstlibav.so': /lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
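A workaround commonly reported for this "static TLS block" failure on Jetson (an assumption here, not verified in this thread) is to preload libgomp so its thread-local storage is reserved in the initial static TLS block before other plugins claim it:

```shell
# Assumption: preloading libgomp avoids the static TLS exhaustion
# triggered when libgstlibav.so pulls it in late.
export LD_PRELOAD=/lib/aarch64-linux-gnu/libgomp.so.1
# then launch the sample as usual, e.g.:
# python3 deepstream_test1_rtsp_in_rtsp_out.py -i <rtsp-uri> -c H264
```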

Please file a new topic with your logs to debug the issue, thank you.
