Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): dGPU - GeForce GTX 1050 Ti
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): N/A
• TensorRT Version: 7.0.0
• NVIDIA GPU Driver Version (valid for GPU only): 450.66
• OS: Ubuntu 18.04
I’m using the dGPU Docker container nvcr.io/nvidia/deepstream:5.0-20.07-triton and have downloaded the latest DeepStream 5.0 Python apps inside it. I’m able to run a few GStreamer samples inside Docker, as well as some commands from the FAQ section.
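For context, I started the container along these lines, following the DeepStream container setup instructions (reconstructed from memory, so the exact flags may differ slightly from what I actually typed):

xhost +
docker run --gpus all -it --rm \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=$DISPLAY \
    -w /opt/nvidia/deepstream/deepstream-5.0 \
    nvcr.io/nvidia/deepstream:5.0-20.07-triton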
Examples:
- gst-launch-1.0 rtspsrc location=rtsp://192.168.0.104:5554/front latency=300 ! decodebin ! autovideosink
- gst-launch-1.0 filesrc location=./streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=1 ! nvinfer config-file-path=./configs/deepstream-app/config_infer_primary.txt ! dsexample full-frame=1 ! nvvideoconvert ! nvdsosd ! nveglglessink sync=0
But when I run the Python apps, I see no error; the output video display simply never appears, and I only get a few info logs:
root@910a070aa72d:/opt/nvidia/deepstream/deepstream-5.0/deepstream_python_apps/apps/deepstream-test1# python3 deepstream_test_1.py ../../../samples/streams/sample_1080p_h264.mp4
Creating Pipeline
Creating Source
Creating H264Parser
Creating Decoder
Creating EGLSink
Playing file ../../../samples/streams/sample_1080p_h264.mp4
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:00.937827421 667 0x2fe9960 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:00.937888166 667 0x2fe9960 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:00.938528851 667 0x2fe9960 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
After the last message, nothing happens. I have already enabled display access on the host with the command xhost +.
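Note that the decodebin ! autovideosink pipeline above does open a window from inside the same container, so basic X forwarding seems to work. The usual sanity check inside the container also looks fine (the exact display number on my machine may differ):

echo $DISPLAY    # set, and matches the host display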
I have tried other Python examples as well, but they all show the same issue. To rule out the display path, my next step is to run the same file through a minimal pipeline that ends in fakesink instead of nveglglessink; see the sketch below.
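This is only a sketch, using the same sample file and relative path as the Python run above:

gst-launch-1.0 filesrc location=../../../samples/streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! fakesink sync=0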
Can you help me understand this issue?