How to display a video inferred by a DeepStream Docker container in a Jupyter notebook?

It seems a video inferred inside a DeepStream Docker container should be displayable, but how do I set that up? For example, I tested deepstream_test_1.py and got no visual output. If I run the following command in a terminal:


python3 deepstream_test_1.py /workspace/deepDocker/streams/sample_720p.mp4


it prints:


Creating Pipeline
Creating Source
Creating H264Parser
Creating Decoder
Creating EGLSink
Playing file /workspace/deepDocker/streams/sample_720p.mp4
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline

0:00:00.136984653 2890 0x2a78ad0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
Warning, setting batch size to 1. Update the dimension after parsing due to using explicit batch size.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:04.449687189 2890 0x2a78ad0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine successfully
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:04.452503273 2890 0x2a78ad0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully


If I run deepstream_test_1.py from a Jupyter notebook instead, it prints:


Creating Pipeline
Creating Source
Creating H264Parser
Creating Decoder
Creating EGLSink
Playing file /workspace/deepDocker/streams/sample_720p.mp4
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline

0:00:00.151568850 2906 0x25a70d0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
Warning, setting batch size to 1. Update the dimension after parsing due to using explicit batch size.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:04.388998006 2906 0x25a70d0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine successfully
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:04.391653937 2906 0x25a70d0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully


**There is no visual result either way. How can I display the video, especially from within Jupyter?**
P.S. I can play the video /workspace/deepDocker/streams/sample_720p.mp4 inside the Docker container itself.

Can you search for this on Google? For example, see "How can I play a local video in my IPython notebook?" on Stack Overflow:

# ipywidgets embeds the file and renders an HTML5 video player in the cell
from ipywidgets import Video

Video.from_file("./play_video_test.mp4", width=320, height=320)
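
If ipywidgets is not installed in the container, a similar sketch using IPython.display (which ships with Jupyter itself) should also work; the file name here is just the placeholder from the snippet above:

# IPython.display.Video with embed=True inlines the file's bytes into the
# notebook output, so it plays even when the file lives inside the container
from IPython.display import Video

Video("./play_video_test.mp4", embed=True, width=320, height=320)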

OK, thank you. After learning ipywidgets, I realize the real question is why the sink plugin does not show its output. The script deepstream_test_1.ipynb ends in a sink plugin, without using ipywidgets or IPython.display, and the output video is never displayed.


print("Starting pipeline \n")
pipeline.set_state(Gst.State.PLAYING)
try:
    # block here until the GLib main loop quits (on EOS or error)
    loop.run()
except:
    pass
# release pipeline resources once the loop exits
pipeline.set_state(Gst.State.NULL)


This cell only prints "Starting pipeline". Shouldn't a video be shown after that? I am curious. If I want to show the output of the sink plugin in Jupyter, how do I do it?
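
One likely explanation: nveglglessink renders to a local X/EGL display, and a headless Docker container normally has none, so the pipeline runs but nothing is ever drawn. A common workaround is to replace the EGL sink with a branch that encodes the inferred frames to an MP4 file and then display that file in the notebook. Below is a minimal, untested sketch of such a sink branch, assuming the DeepStream 5.1 container from the logs above and a hypothetical output path; it follows the pattern of NVIDIA's file-output samples rather than the shipped deepstream_test_1.py:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

OUTPUT_MP4 = "/workspace/deepDocker/out.mp4"  # hypothetical output path

def make(factory, name):
    # small helper: fail loudly if a plugin is missing from the container
    elem = Gst.ElementFactory.make(factory, name)
    if elem is None:
        raise RuntimeError("Unable to create element: " + factory)
    return elem

# Replaces the nveglglessink section of the sample: convert the NVMM frames
# coming out of nvdsosd, encode them to H.264, and mux them into an MP4.
nvvidconv2 = make("nvvideoconvert", "convertor-postosd")
encoder    = make("nvv4l2h264enc", "h264-encoder")
h264parse2 = make("h264parse", "h264-parser-out")
muxer      = make("qtmux", "mp4-muxer")
sink       = make("filesink", "file-sink")
sink.set_property("location", OUTPUT_MP4)
sink.set_property("sync", False)

# Add these to the sample's existing pipeline and link them after nvosd
# (the element that draws the bounding boxes), instead of linking nvosd
# to the EGL sink:
#   for e in (nvvidconv2, encoder, h264parse2, muxer, sink):
#       pipeline.add(e)
#   nvosd.link(nvvidconv2)
#   nvvidconv2.link(encoder)
#   encoder.link(h264parse2)
#   h264parse2.link(muxer)
#   muxer.link(sink)
#
# After loop.run() returns and the pipeline is set to Gst.State.NULL,
# the result can be displayed inline:
#   from ipywidgets import Video
#   Video.from_file(OUTPUT_MP4, width=640)

Note that qtmux only finalizes a playable MP4 once the pipeline reaches EOS, so the cell must run to completion before the file is displayed.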

If this is a question about the DeepStream Docker image, could you create a new topic under the DeepStream forum? Thanks.
