"Failed to load config file: No such file or directory" and No video display in the docker

If I run /deepstream-test1/deepstream_test_1.py from the code server, it prints the following and then terminates:


Creating Pipeline
Creating Source
Creating H264Parser
Creating Decoder
Creating EGLSink
Playing file …/sample_720p.mp4
Failed to load config file: No such file or directory
** ERROR: <gst_nvinfer_parse_config_file:1260>: failed
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline
Terminated


It seems that something goes wrong while parsing the config file dstest1_pgie_config.txt, but the file is definitely in the expected location: I can run the same script from a terminal without hitting this error.
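Since the sample sets the nvinfer config with a relative path, my guess is that the code server simply starts the process from a different working directory than the terminal does. Here is a minimal sketch of making the lookup independent of the working directory (assuming the stock deepstream_test_1.py layout; `CONFIG_PATH` is just a name I picked):

```python
import os
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Build the config path relative to this script rather than relying on the
# current working directory, which a code server may set differently.
CONFIG_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                           "dstest1_pgie_config.txt")

pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", CONFIG_PATH)
```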
When I run the script from a terminal there is no error at all, but no video is displayed either. I would like to watch the output of the sink plugin.


Creating Pipeline
Creating Source
Creating H264Parser
Creating Decoder
Creating EGLSink
Playing file sample_720p.mp4
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline
0:00:00.144573587 22888 0x3b50ed0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
Warning, setting batch size to 1. Update the dimension after parsing due to using explicit batch size.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:07.929096569 22888 0x3b50ed0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine successfully
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:07.934491354 22888 0x3b50ed0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully


It just stops here with no further output.
**What is wrong with the program?**
I just want to debug the script and see what the metadata looks like. How can I fix this?

Any help?

Do you want to watch the output on a display, or do you want to debug the script and see what the metadata looks like?

Both

Which platform are you running on?

Ubuntu 18.04, GTX 1080 Ti, nvcr.io/nvidia/deepstream:5.1-21.02-devel

The Tegra GPU is dedicated to computing, not to display. If you want to see the output, you can set the sink type to File or RTSP streaming. To check the metadata you do not need nveglglessink; you can set the sink to fakesink.
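For example, here is a minimal sketch against the stock deepstream_test_1.py (the element and variable names are assumptions based on that sample, not an official patch): swap the sink for a fakesink and read the batch metadata in a buffer probe on the OSD sink pad.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

Gst.init(None)

def osd_sink_pad_buffer_probe(pad, info, u_data):
    """Print per-frame and per-object metadata attached by nvinfer."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        print("frame %d: %d objects"
              % (frame_meta.frame_num, frame_meta.num_obj_meta))
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            rect = obj_meta.rect_params
            print("  class_id=%d conf=%.2f bbox=(%d, %d, %d, %d)"
                  % (obj_meta.class_id, obj_meta.confidence,
                     rect.left, rect.top, rect.width, rect.height))
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# In the sample's main(): create a fakesink instead of nveglglessink so the
# pipeline can run headless inside the container.
sink = Gst.ElementFactory.make("fakesink", "fake-renderer")

# The nvdsosd element (created the same way in the sample) already carries the
# metadata; attach the probe to its sink pad so each buffer is printed.
nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
osdsinkpad = nvosd.get_static_pad("sink")
osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)
```

With the fakesink in place the pipeline keeps running without a display, and the probe output appears on the console where the script was started.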
