Not able to run DeepStream application as a systemd service

Hi,

I have developed a custom DeepStream application that runs as expected when launched manually from the console.
But when I create a systemd service to run it on system startup, it fails to run.

I get the following error:
0:00:05.700770840 10873 0x55895dbb20 WARN nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:05.701985392 10873 0x55895dbb20 WARN nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2288): gst_nvinfer_output_loop (): /GstPipeline:dstest3-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason error (-5)
Deleting pipeline

****** systemd service ******

[Unit]
Description=bec app

Wants=network.target
After=syslog.target network-online.target

[Service]
Type=simple
WorkingDirectory=/usr/local/sys_dev/bec_app
ExecStart=/usr/local/sys_dev/bec_app/bec_app
Restart=on-failure
RestartSec=1
KillMode=process
StandardOutput=syslog

[Install]
WantedBy=multi-user.target
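
If it helps with debugging, I can also raise the GStreamer log level for the service and send everything to the journal with an addition to the [Service] section like the sketch below (the GST_DEBUG level is just a starting point, not something I have confirmed captures the root cause):

[Service]
# Send both stdout and stderr to the journal so journalctl -u bec_app shows everything
StandardOutput=journal
StandardError=journal
# Raise GStreamer verbosity; level 3 (warnings) is only a first guess
Environment="GST_DEBUG=3"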

Can you please help?

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name - for which plugin or which sample application - and the function description.)
• The pipeline being used

Here are the details.

  • Jetson Xavier NX
  • DeepStream 6.0
  • JetPack version - Tegra release, rev 6.1
  • TensorRT 8.0.1.6
  • NVIDIA GPU driver - CUDA 10.2
  • Issue type - bug
  • To reproduce - run DeepStream sample app 3 as a service, using the service unit shared above
  • This is for production firmware on the Jetson Xavier NX that has to auto-start the required DeepStream-based application. If necessary we may need to connect the Xavier NX to a monitor, which is why the OSD was not removed from the pipeline.
  • The pipeline is the same as in DeepStream sample app 3, except that a videorate element is attached directly to the sources to adjust the FPS as required.

With all this, the application works as expected when invoked manually from the shell, but not as a service. We tried stopping and restarting the service after the OS had fully started; it still fails, and always at the inference module (as the logs above show).
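
Since the same binary works from an interactive shell, my suspicion is that the service is missing part of the login environment (for example DISPLAY for the EGL sink, or HOME for the CUDA cache). This is the variant I am considering; it is only a sketch, and the user name, DISPLAY value and paths are placeholders for my setup, not a confirmed fix:

[Unit]
Description=bec app
# Assumption: wait for the desktop session, since the sink may need a display
After=graphical.target network-online.target

[Service]
Type=simple
WorkingDirectory=/usr/local/sys_dev/bec_app
ExecStart=/usr/local/sys_dev/bec_app/bec_app
# Run as the desktop user instead of root (placeholder user name)
User=nvidia
# Environment that an interactive shell would normally provide (placeholder values)
Environment="DISPLAY=:0"
Environment="XAUTHORITY=/home/nvidia/.Xauthority"
Environment="HOME=/home/nvidia"
Restart=on-failure
RestartSec=1

[Install]
WantedBy=multi-user.target

Even with fakesink, the User/HOME part may still matter for the CUDA cache, but I have not verified this.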

Thanks

How about changing nveglglessink to fakesink?

Yes, I tried that earlier, as it was suggested in several discussion topics, but it did not help.

Hi Nvidia Team,

Any update on this issue?

Thanks

Can you replace your app with the built-in sample app to rule out whether the issue is in your app?

Yes, I tried running DeepStream sample app 3 as a service, and I see exactly the same inference error.
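
For reference, the unit I used for the sample app looked roughly like the sketch below. The paths assume the default DeepStream 6.0 install location and that the sample was built in place, and the input URI is just the bundled sample stream; WorkingDirectory points at the sample directory so that the relative dstest3_pgie_config.txt path resolves:

[Unit]
Description=deepstream-test3 repro

[Service]
Type=simple
# Assumes the default DeepStream 6.0 sample location and an in-place build
WorkingDirectory=/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test3
# Input URI is the sample stream shipped with DeepStream; any accessible URI should do
ExecStart=/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test3/deepstream-test3-app file:///opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target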

Looking forward to your support in fixing this problem.

Thanks