Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Nvidia Jetson TX2 NX (Advantech MIC-710ailt).
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version 18.104.22.168 // CUDA 10.2.300
• Issue Type( questions, new requirements, bugs) Questions
Hi, I would like to process the JPEG image stream from a Mobotix camera (its faststream.jpg stream), which is obtained from http://<your.camera.ip>/control/faststream.jpg?stream=full (I use this one instead of RTSP to reduce latency and get a real-time application).
I am trying to use sample app 1, so I removed the rtsp plugin and replaced it with the gst-nvjpegdec plugin. Is this the right one, or should I use another?
Once I replace one with the other, it tells me that the link between the different pipeline elements cannot be created. Do I need to use another plugin? Is there another way?
Sorry if my question is very basic; I'm just starting with NVIDIA DeepStream and still learning.
What is the stream format of your camera? You may use gst-discoverer-1.0 to check its format in detail. In your case it seems JPEG is included in the streaming protocol.
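For reference, probing the camera URL mentioned in the original post would look like the sketch below (the IP placeholder must be filled in with your camera's address; this assumes the camera is reachable from the Jetson):

```shell
# Inspect the stream: gst-discoverer-1.0 prints the container and codec details
# it can detect from the URI. The placeholder IP is from the original post.
gst-discoverer-1.0 'http://<your.camera.ip>/control/faststream.jpg?stream=full'
```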
Replacing the rtsp plugin with gst-nvjpegdec is not enough. The rtsp plugin only extracts streams in some format (it may be H.264/H.265, etc.); there need to be plugins behind it that handle the stream with the correct codec.
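To illustrate the point about the plugins behind the rtsp element, a typical RTSP pipeline chains a depayloader, parser, and decoder after the source. This is only a sketch, assuming an H.264 RTSP camera; the URL is a placeholder, not from the original post:

```shell
# rtspsrc only receives RTP packets; depayload/parse/decode elements must
# follow it before the raw video can be used.
gst-launch-1.0 rtspsrc location='rtsp://<camera_ip>/stream' ! \
  rtph264depay ! h264parse ! nvv4l2decoder ! nvvideoconvert ! fakesink
```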
Besides, does the "sample app 1" refer to "deepstream-test1"? There is no rtsp plugin in deepstream-test1, so I'm not sure which example you are referring to.
Thank you very much for your help. With gst-discoverer-1.0 I get the following:
Done discovering http:///control/faststream.jpg?stream=full
container format: Multipart
I have used test1 from the following location; it is the C++ example.
We don't have such a camera to test, but a Google search turned up a page with a pipeline for such a camera: GSCAM_CONFIG for IP camera from gst-launch - ROS Answers: Open Source Q&A Forum. You can check whether it works in your case. If it does, you can use souphttpsrc to fetch the URL, multipartdemux to extract the JPEG content, and then nvjpegdec or nvv4l2decoder to decode the JPEG frames.
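Putting those three elements together, a decode-only pipeline sketch might look like the following (the URL placeholder is hypothetical; jpegparse is an assumption added here to give the decoder proper frame boundaries):

```shell
# Fetch the multipart MJPEG stream over HTTP, split out the JPEG frames,
# and decode them with the Jetson's hardware JPEG decoder.
gst-launch-1.0 souphttpsrc location='http://<camera_ip>/control/faststream.jpg?stream=full' \
  ! multipartdemux ! image/jpeg ! jpegparse ! nvjpegdec ! fakesink
```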
OK, in principle, with gst-launch-1.0 I managed to receive the image stream and store it in video.mkv, using:
gst-launch-1.0 -e souphttpsrc location='http://<camera_ip>/control/faststream.jpg?stream=full' do-timestamp=true ! multipartdemux ! image/jpeg,width=640,height=480 ! matroskamux ! filesink location=video.mkv
Now I will try to put it inside one of the DeepStream C examples (probably test1), using the plugins mentioned above. Thank you very much, and I will let you know my results.
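As a starting point for that integration, the elements from this thread could be chained into the deepstream-test1 inference stages. This is a hedged sketch, not a verified pipeline: the URL is a placeholder, dstest1_pgie_config.txt is the config file shipped with deepstream-test1, and nvv4l2decoder's mjpeg property is assumed to be available on this JetPack release:

```shell
# Hypothetical end-to-end sketch: HTTP MJPEG source -> batching -> inference -> OSD.
gst-launch-1.0 souphttpsrc location='http://<camera_ip>/control/faststream.jpg?stream=full' \
  ! multipartdemux ! image/jpeg ! jpegparse ! nvv4l2decoder mjpeg=1 \
  ! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=480 \
  ! nvinfer config-file-path=dstest1_pgie_config.txt \
  ! nvvideoconvert ! nvdsosd ! nvoverlaysink
```

If this works from the command line, the same element chain can be built programmatically in the test1 C source in place of the filesrc/h264parse front end.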
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Please let us know whether this topic can be closed. Thank you.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.