Not getting video playback on deepstream_test2_app

I get video playback from deepstream_test3_app, as well as with the gst-launch-1.0 -v videotestsrc pattern=snow ! video/x-raw,width=1280,height=720 ! autovideosink command, but none with deepstream_test2_app. Is that normal?

• Hardware Platform (Jetson / GPU) GTX 1070
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.0.0
• NVIDIA GPU Driver Version (valid for GPU only) 440.82

Hi
Can you paste your error log?

Now playing: ../../../../samples/streams/sample_1080p_h264.mp4
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:00.903185280  3883 0x562f13377e90 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<secondary3-nvinference-engine> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 4]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 6x1x1           

0:00:00.903266378  3883 0x562f13377e90 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<secondary3-nvinference-engine> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 4]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine
0:00:00.904636550  3883 0x562f13377e90 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<secondary3-nvinference-engine> [UID 4]: Load new model:dstest2_sgie3_config.txt sucessfully
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:00.927882653  3883 0x562f13377e90 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 3]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 20x1x1          

0:00:00.927944209  3883 0x562f13377e90 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 3]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine
0:00:00.929050352  3883 0x562f13377e90 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<secondary2-nvinference-engine> [UID 3]: Load new model:dstest2_sgie2_config.txt sucessfully
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:00.943755703  3883 0x562f13377e90 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 12x1x1          

0:00:00.943828029  3883 0x562f13377e90 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine
0:00:00.944984879  3883 0x562f13377e90 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<secondary1-nvinference-engine> [UID 2]: Load new model:dstest2_sgie1_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:00.959572550  3883 0x562f13377e90 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:00.959659042  3883 0x562f13377e90 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:00.960277145  3883 0x562f13377e90 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstest2_pgie_config.txt sucessfully
Running...

I just noticed, however, that playing sample_720p.h264 rather than sample_1080p_h264.mp4 does yield playback.

Please note, the test1 and test2 samples only accept an H.264 elementary stream as input, while the test3 sample accepts any container type that GStreamer supports, with any codec.
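
If you want to run test2 on the 1080p clip, one option is to first extract the raw H.264 elementary stream from the MP4 container and feed that to the app. A minimal sketch (file names and paths here are illustrative, adjust them for your setup):

gst-launch-1.0 filesrc location=sample_1080p_h264.mp4 ! qtdemux ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=sample_1080p_h264.h264

Then pass the extracted elementary stream to the sample app:

./deepstream-test2-app sample_1080p_h264.h264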