Hello, here’s my hardware setup:
• Hardware Platform: Jetson
• DeepStream Version: 5.0 DP (Developer Preview)
• JetPack Version: 4.4
• TensorRT Version: 7.1/OSS
We’ve developed a DeepStream pipeline using the Python bindings. It receives 3 simultaneous streams from USB cameras (Intel RealSense D435/D435i, if that helps), runs inference with Transfer Learning Toolkit models, and forwards the frames to a custom Kivy app. We’ve managed to get the following pipeline working (sketched below):
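A rough sketch of the working pipeline (element names, caps, and the config file below are illustrative; our real code builds this with the Python bindings, and the RealSense color streams are assumed to be 1280×720 YUYV over v4l2):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    # batch the three streams, run the TLT model, draw overlays,
    # and hand RGBA frames to the Kivy app through an appsink
    "nvstreammux name=mux batch-size=3 width=1280 height=720 live-source=1 ! "
    "nvinfer name=inference_engine config-file-path=tlt_detector_config.txt ! "
    "nvvideoconvert ! nvdsosd ! "
    "nvvideoconvert ! video/x-raw,format=RGBA ! appsink name=kivy_sink "
    # one source branch per camera; /dev/video1 and /dev/video2 feed
    # mux.sink_1 and mux.sink_2 the same way
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,format=YUY2,width=1280,height=720,framerate=30/1 ! "
    "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! mux.sink_0"
)
pipeline.set_state(Gst.State.PLAYING)
```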
Now we want to add saving capabilities for the streams, ideally using DeepStream/GStreamer elements. We’ve tried the following pipeline, which is not working (reduced sketch below):
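Roughly, the change is a tee after nvinfer with a JPEG branch. The names below are illustrative except jpg_encoder_1, which is the encoder that errors in the log:

```python
# Reduced sketch of the (failing) save branch, appended after nvinfer in the
# pipeline above. Only jpg_encoder_1 is taken from the log below; the rest is
# illustrative.
save_branch = (
    "tee name=t "
    # existing branch towards the Kivy appsink, unchanged
    "t. ! queue ! nvvideoconvert ! nvdsosd ! "
    "nvvideoconvert ! video/x-raw,format=RGBA ! appsink name=kivy_sink "
    # new branch: JPEG-encode every frame and write numbered files
    "t. ! queue ! nvvideoconvert ! "
    "jpegenc name=jpg_encoder_1 ! multifilesink location=frames/frame_%05d.jpg"
)
```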
It fails with the following error:
```
0:00:03.591168720 28935 0x36507800 ERROR default video-frame.c:175:gst_video_frame_map_id: invalid buffer size 64 < 1382400
0:00:03.592211652 28935 0x36507800 WARN jpegenc gstjpegenc.c:550:gst_jpegenc_handle_frame:<jpg_encoder_1> invalid frame received
0:00:03.592412142 28935 0x36507800 ERROR videoencoder gstvideoencoder.c:2345:gst_video_encoder_finish_frame:<jpg_encoder_1> Output state was not configured
0:00:03.634634837 28935 0x365078f0 WARN nvinfer gstnvinfer.cpp:1946:gst_nvinfer_output_loop:<inference_engine> error: Internal data stream error.
0:00:03.634694264 28935 0x365078f0 WARN nvinfer gstnvinfer.cpp:1946:gst_nvinfer_output_loop:<inference_engine> error: streaming stopped, reason error (-5)
```
Is the above design right? The idea is to save individual images for later post-processing, which is why I’d like to avoid writing video files.
In case it’s not: at this throughput (3 streams × 30 fps), would it be feasible to save the images with a buffer probe instead, similar to the deepstream-imagedata-multistream sample in NVIDIA-AI-IOT/deepstream_python_apps on GitHub? A sketch of that approach follows.
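For concreteness, here is a sketch of that probe-based approach, modeled on the sample: frames are converted to RGBA (nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA) upstream of the probe, then copied out with pyds and written with OpenCV. The pad the probe attaches to and the file naming are assumptions on my part:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import cv2
import numpy as np
import pyds

def save_frames_probe(pad, info, _udata):
    """Buffer probe that writes every frame of every stream as a JPEG."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # zero-copy RGBA view of the frame; copy it before the buffer moves on
        rgba = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame = cv2.cvtColor(np.array(rgba, copy=True), cv2.COLOR_RGBA2BGR)
        cv2.imwrite(
            "frames/src%d_frame%06d.jpg"
            % (frame_meta.pad_index, frame_meta.frame_num),
            frame,
        )
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

# attached e.g. on the OSD sink pad:
# osd.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, save_frames_probe, 0)
```

My worry with this route is that cv2.imwrite for ~90 frames/s runs inside the probe and could stall the pipeline, hence the feasibility question.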
Thanks!