I can rectify my video image with the dewarper. However, if I change the input source to my camera, the dewarper no longer works, although the camera image is still displayed. The config files are identical to those in the previous post.
If you use the camera, the source group in the config file has to be changed. Some parameters of the dewarper also need to be adjusted to match your camera.
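As a rough sketch, a V4L2 camera source group might look like this (all values are placeholders for your camera's actual mode; see the source-group table in the deepstream-app guide for the full parameter list):

```
[source0]
enable=1
# Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=1
camera-width=1280
camera-height=720
# note: camera-fps-d must be non-zero, otherwise GStreamer will
# complain about a '0' denominator for GstFraction
camera-fps-n=30
camera-fps-d=1
# /dev/video<N>
camera-v4l2-dev-node=0
gpu-id=0
# uri=... only applies to type=2/3 and can be commented out
```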
My video file was recorded with the camera and the same settings. I’ve used it for testing so far. The dewarper settings should therefore be fine. I would now like to undistort a live image with the camera. Which source do you mean? My camera is already set as the source because I see the image. Do I also have to define a source in dewarper? I didn’t see anything about this in the documentation.
As you said before, the source is a live camera source. So you should use the camera source setup instead of a uri source in your deepstream config file.
Please refer to our deepstream-app Guide first. https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_ref_app_deepstream.html#source-group
Which line of my config file are you referring to? In [source0], type=1 is set, i.e. camera. type=2 would be a URI source.
- If you use a CSI camera, the type is 5.
- You should comment out the uri parameter, since it only applies to URI sources.
Please refer to the guide I attached above first.
I have a MIPI CSI-2 camera module with a V4L2 driver.
type=5 does not work → error generated:
/dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:751 No cameras available
I removed all unnecessary things in my config but it still doesn’t work.
deepstream_app_config_yoloV3.txt (4.7 KB)
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
• Hardware Platform (Jetson / GPU): Jetson Orin NX 16GB
• DeepStream Version: DeepStream 6.2
• JetPack Version (valid for Jetson only): JetPack 5.1
• TensorRT Version: 5.1
• OpenCV: without CUDA
Could you share the specific brand of your camera?
OK. So the camera can display normally, but the dewarper is not working now? Could you attach the log with GST_DEBUG=3 enabled?
source = video & dewarper enable = 1: no errors

source = cam & dewarper enable = 0:
(deepstream:12158): GStreamer-CRITICAL **: 12:29:21.170: passed '0' as denominator for GstFraction
(deepstream:12158): GStreamer-CRITICAL **: 12:29:21.170: passed '0' as denominator for GstFraction

source = cam & dewarper enable = 1:
(deepstream:12195): GStreamer-CRITICAL **: 12:29:36.860: passed '0' as denominator for GstFraction
(deepstream:12195): GStreamer-CRITICAL **: 12:29:36.860: passed '0' as denominator for GstFraction
(deepstream:12195): GLib-GObject-CRITICAL **: 12:29:36.862: g_object_set: assertion 'G_IS_OBJECT (object)' failed
This is my output with GST_DEBUG=3
The dewarper works on video but not on camera when enabled.
Just from the logs attached, both configurations can display the image normally, but the dewarper cannot work when using the camera. And the video was also recorded with this camera. Is that all correct?
Yes, that is correct.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
We currently do not have this camera. Since this part is open source, could you help to debug it first?
1. You can add some logs under the sources\gst-plugins\gst-nvdewarper path to check whether the function call flow is normal.
2. Could you attach the pipeline graph by referring to this link: https://forums.developer.nvidia.com/t/deepstream-sdk-faq/80236/10
3. You can also use gst-launch-1.0 to run the pipeline and check whether it works well:
gst-launch-1.0 v4l2src device=/dev/video0 ! nvvideoconvert ! nvdewarper config-file=config_dewarper.txt source-id=6 ! m.sink_0 nvstreammux name=m width=960 height=752 batch-size=4 num-surfaces-per-frame=4 ! nvmultistreamtiler ! nv3dsink
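For item 2, GStreamer can dump the running pipeline as Graphviz .dot files via a built-in environment variable; a minimal sketch (the output directory name is just an example):

```shell
# Tell GStreamer where to dump pipeline graphs (any writable dir works)
export GST_DEBUG_DUMP_DOT_DIR=/tmp/pipeline-dot
mkdir -p "$GST_DEBUG_DUMP_DOT_DIR"
# Then run the app as usual, e.g.:
#   deepstream-app -c deepstream_app_config_yoloV3.txt
# Each pipeline state change writes a .dot file there; convert one with:
#   dot -Tpng <dumped-file>.dot -o pipeline.png
```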
I still have the problem and just can't get any further. Attached are my error logs and pipeline graphs. It seems that my dewarper doesn't initialize with the camera at all. For example, if I intentionally introduce an error into config_dewarper.txt, an error message appears immediately with source = video saying that my dewarper config file could not be loaded. If I then switch to source = camera, I get no error message. I therefore assume that my config file is not loaded at all.
Unfortunately, your GStreamer command does not work. However, it works with this one:
gst-launch-1.0 v4l2src device=/dev/video0 ! decodebin ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=(string)RGBA" ! nvdewarper config-file=config_dewarper_1280x720.txt ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=(string)I420, width=(int)1280, height=(int)720" ! nvv4l2h264enc qp-range=20,20:20,20:-1,-1 ! h264parse ! matroskamux ! queue ! filesink location=file.mkv
I added log output to the gst-nvdewarper sources. The nvdewarper_parse_surface_attributes() function is only called with type=2 (video file); with type=1 (camera) the function is not called.
Can someone please help?
Sorry for the late reply. If there is no response from a customer for a long time, we close the topic; otherwise we may lose track of the issue later.
From the pipeline graphs you attached, the "source=camera" case did not start the dewarper. As you can see in the open source code at sources\apps\apps-common\src\deepstream_source_bin.c, we only support the dewarper in create_rtsp_src_bin at the moment. We'll consider implementing it in create_camera_source_bin as well. Since you have this camera and this part is open source, you can also try to add the dewarper to create_camera_source_bin yourself.
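To see where the reference app wires (or does not wire) the dewarper into each source bin, one starting point is to grep the open-source app code (the path is assumed relative to the DeepStream SDK root):

```shell
# Reference-app source bin implementation (assumed SDK layout)
SRC=sources/apps/apps-common/src/deepstream_source_bin.c
# List every dewarper reference; per the reply above, the hits should
# sit in the RTSP source path rather than in create_camera_source_bin
[ -f "$SRC" ] && grep -n -i "dewarper" "$SRC"
```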
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.