Unable to run DeepStream application as a systemd service

Hi,

I have developed a custom DeepStream application that runs as expected when launched manually from the console.
But when I create a systemd service to run it at system startup, it fails.

I get the following error:
0:00:05.700770840 10873 0x55895dbb20 WARN nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:05.701985392 10873 0x55895dbb20 WARN nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2288): gst_nvinfer_output_loop (): /GstPipeline:dstest3-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason error (-5)
Deleting pipeline

****** systemd service ******

[Unit]
Description=bec app

Wants=network.target
After=syslog.target network-online.target

[Service]
Type=simple
WorkingDirectory=/usr/local/sys_dev/bec_app
ExecStart=/usr/local/sys_dev/bec_app/bec_app
Restart=on-failure
RestartSec=1
KillMode=process
StandardOutput=syslog

[Install]
WantedBy=multi-user.target

Can you please help?

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
• The pipeline being used

Here are the details.

  • Jetson Xavier NX
  • DeepStream 6.0
  • JetPack version - Tegra release - rev 6.1
  • TensorRT 8.0.1.6
  • NVIDIA GPU driver - CUDA 10.2
  • Issue type - bug
  • To reproduce - run deepstream sample app 3 as a service using the service script shared in this message
  • This is for production firmware on the Jetson Xavier NX, which must auto-start the required DeepStream-based application. If necessary we may need to connect the Xavier NX to a monitor; that is why OSD was not removed from the pipeline
  • The pipeline is exactly the same as deepstream sample app 3, except that a videorate element is attached directly to the sources to adjust the FPS

With all this, the application works as expected when invoked manually from the shell, but not as a service. We tried stopping and restarting the service after the OS had fully booted; it still fails, always at the inference module (the logs above highlight this).

Thanks

How about changing nveglglessink to fakesink?

Yes, I tried that earlier, as it was suggested in several discussion topics, but it did not help.

Hi Nvidia Team,

Any update on this issue?

Thanks

Can you replace your app with a built-in app, to rule out whether the issue is in your app?

Yes, I tried running deepstream sample app 3 as a service, and I see exactly the same inference error.

Looking forward to your support in fixing this problem.

Thanks

Hi Nvidia Team,

Any update on this issue?
I even tried removing OSD from the pipeline; that also doesn't work and stops the same way.
One more observation to share: it is able to infer one or two batches, and only after that does the error come from the inference engine.

Looking forward to your support.

Thank you.

How about linking nvinfer directly to fakesink in app3?

Yes, tried using fakesink with test app 3; still the same result.

The inference engine fails after a few frames.

/* we link the elements together
* nvstreammux -> nvinfer -> nvdslogger -> nvtiler -> nvvidconv -> nvosd
* -> video-renderer */

I mean link nvinfer directly to the sink, i.e. remove the other plugins between nvinfer and the renderer.

Thanks, this works: the application now starts as a service and runs as expected.

But this is only acceptable for a headless system. Would it not be possible to keep OSD in the pipeline, so that the inference can be viewed live?

Hi @aurointelli
Besides OSD, do you need to render to a display/TV? I saw the app runs with multi-user.target, so I guess rendering to a display is not necessary, right?

No, OSD should be enough; this is just for troubleshooting or verifying inference.

How about removing the tiler, keeping nvosd in the pipeline, and using fakesink as the sink?

This pipeline is not working; it stops. Here is the pipeline for your reference:

if (!gst_element_link_many (streammux, queue1, pgie, queue3,
      nvosd, queue5, sink, NULL)) { 

(the inference module is directly connected to OSD)

If you remove the tiler, you need to add a demux into the pipeline.
Alternatively, you can add the tiler back and enable GST_DEBUG=5 to get more debug logs:
nvstreammux → nvinfer → nvdslogger → nvtiler → nvvidconv → nvosd → fakesink
nvstreammux → nvinfer → nvdslogger → nvtiler → nvvidconv → fakesink
Besides, in your past test, what error did you get for the case that removed the OSD? The same as the error in the description?

I have added the logger to the pipeline (with environment variable GST_DEBUG=5):

nvdslogger = gst_element_factory_make("nvdslogger", "nvdslogger");
if (!gst_element_link_many (streammux, queue1, pgie, queue2, nvdslogger, tiler, queue3,
      nvvidconv, queue4, nvosd, queue5, transform, sink, NULL)) { 

With this I am not able to see additional logs. (With nvdslogger added but without OSD, I can see additional logs while the pipeline is running, which confirms that nvdslogger was indeed added to the pipeline.)

As mentioned at the start of this thread (logs listed there), with OSD enabled the pipeline stops with error code -5.

Please enable GST_DEBUG=5 to get more debug log.
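One caveat when the app runs under systemd: exporting GST_DEBUG in an interactive shell has no effect on the service, since the service does not inherit your shell environment. The variable has to be set in the unit itself. A minimal sketch, assuming the unit file shown earlier in this thread (a drop-in under /etc/systemd/system/bec_app.service.d/ would work equally well; GST_DEBUG_FILE is optional):

```ini
[Service]
Environment=GST_DEBUG=5
# Optional: write the GStreamer debug log to a file instead of the journal
Environment=GST_DEBUG_FILE=/tmp/bec_app_gst.log
```

After editing the unit, run `systemctl daemon-reload` and restart the service; the debug output then shows up via `journalctl -u bec_app` (or in the file, if GST_DEBUG_FILE is set).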