I want to use a DeepStream pipeline as an inference service: when a picture arrives over the network, the pipeline should run inference on it and then send the structured results back out over the network.
- multifilesrc reads a continuous sequence of pictures, but my app must not proceed until a picture has actually been received over the network. Is there a good way to achieve such a pipeline?
- After getting the inference data, I don’t know which picture the results belong to. How can I solve this problem?
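One common way to recover which picture a result belongs to: if the frames are fed in sorted filename order (as multifilesrc does), the frame number carried in the batch metadata is also an index into the sorted file list. A minimal sketch of that mapping, assuming such an ordering (the helper name is hypothetical; this is not a DeepStream API). In a real probe you would read `frame_num` from `NvDsFrameMeta` and look the filename up the same way:

```shell
# Map a metadata frame number back to a source filename, assuming the
# pipeline consumed the files in sorted order (e.g. via multifilesrc).
filename_for_frame() {
    local dir=$1 frame_num=$2
    local files=( "$dir"/*.jpg )   # shell globs expand in sorted order
    echo "${files[$frame_num]}"
}
```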
• Hardware Platform (Jetson / GPU)
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
What’s the format of the picture?
Sorry for the late reply! I think there are two solutions:
- use appsrc to feed the jpg data into the pipeline, decode the JPEG, and pass the frames on for further DeepStream processing. The drawback is that one DeepStream run can only process one jpg image this way.
- create an MJPEG video from all the jpg files; DeepStream can then process the frames of that video one by one, just as it processes an H.264 video.
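The second option can be sketched as two commands. All file names, the resolution, and the config path are placeholders, and the commands assume ffmpeg and the DeepStream 5.0 GStreamer plugins are installed:

```shell
# 1) Pack the numbered JPEGs into one MJPEG container (JPEG data is copied, not re-encoded):
ffmpeg -framerate 30 -i frames/img_%04d.jpg -c:v copy out.avi

# 2) Let DeepStream decode and infer on it frame by frame, like an H.264 stream:
gst-launch-1.0 filesrc location=out.avi ! avidemux ! jpegparse ! \
    nvv4l2decoder mjpeg=1 ! \
    m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
    nvinfer config-file-path=config_infer_primary.txt ! fakesink
```

The `mjpeg=1` property on `nvv4l2decoder` tells the hardware decoder to expect Motion JPEG input.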
Sorry! multifilesrc may be a better solution.
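A multifilesrc variant can be sketched directly on the JPEG files, skipping the container step. Again, the file pattern, caps, resolution, and config path are placeholders for your own setup:

```shell
# Read the numbered JPEGs directly as a 30 fps stream and run inference on each frame.
gst-launch-1.0 multifilesrc location="frames/img_%04d.jpg" index=0 \
    caps="image/jpeg,framerate=30/1" ! jpegparse ! \
    nvv4l2decoder mjpeg=1 ! \
    m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
    nvinfer config-file-path=config_infer_primary.txt ! fakesink
```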
Hi @mchi,
I’m following your advice and processing a .mp4 video created from jpg files. I modified the objectDetector_Yolo example to run the detections.
However, inference succeeded for some frames but failed for others, and testing the failed frames as individual images caused a segmentation fault.
Can you tell me what other aspects I should check to fix this?
Please help to open a new topic if it’s still an issue.