Hi,
We are working with the GStreamer cameraundistort plugin (v1.16). It runs on the CPU and gives us 30 FPS; the drawback is that we have to move frames from the GPU into the CPU and then back to the GPU, but even so a 30 FPS video produces 30 FPS output.
When we use DeepStream 5.0 with the nvdewarper module, the pipeline runs on the GPU only, but the FPS drops to 15. Even when we stop all the networks and let the GPU be used only for the camera calibration, the drop remains, so it seems to be latency inside the plugin code.
When we review the math needed for distortion calibration, only intrinsic calibration is used; the math is done in advance, and after creating the pixel mapping the per-frame work should be O(1) per pixel (a simple lookup).
The question is about this latency and whether there is a way to make the code more usable for our pipeline.
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description)
• The pipeline being used
Hardware Platform - Jetson NX
DeepStream Version - 5.0
TensorRT Version - 7.1.3.1
NVIDIA GPU Driver Version - 10.2
Issue Type - question and maybe bug
How to reproduce the issue - just set up any stream with 4K resolution and try to run the intrinsic calibration over it.
Requirement details - as defined above.
The pipeline being used - as defined above as well.
I'm using C++ to build my pipeline; the pipeline is:
gst_bin_add_many(GST_BIN(src_pipeline), src_elem, cap_filter_src, undist_nvvidconvert, undist_cap_filter_conv, undist, vidconvert1, cap_filter_conv1, vidconvert2, cap_filter_conv2, identity, v4l2sink, NULL);
gst_element_link_many(src_elem, cap_filter_src, undist_nvvidconvert, undist_cap_filter_conv, undist, vidconvert1, cap_filter_conv1, vidconvert2, cap_filter_conv2, identity, v4l2sink, NULL);
where:
cap_filter_src captures the source stream as NVMM-memory NV12 frames;
the undist part then converts the data into RGBA and runs it through nvdewarper;
then the data is converted back into NVMM format for the inference pipeline, and at the end it is converted into a streaming format (CPU memory).
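For illustration, here is a minimal, self-contained C++ sketch of the undistortion branch described above (the element names, caps string, and config path are simplified assumptions, not the exact production code; on Jetson, nvvidconv can be used in place of nvvideoconvert):

#include <gst/gst.h>

int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);

  GstElement *pipeline = gst_pipeline_new("undist-pipeline");
  // Camera source producing NVMM (GPU) buffers.
  GstElement *src     = gst_element_factory_make("nvarguscamerasrc", "src_elem");
  // Convert to RGBA in NVMM memory, which is what nvdewarper expects.
  GstElement *conv    = gst_element_factory_make("nvvideoconvert", "undist_nvvidconvert");
  GstElement *capsflt = gst_element_factory_make("capsfilter", "undist_cap_filter_conv");
  GstElement *dewarp  = gst_element_factory_make("nvdewarper", "undist");
  GstElement *sink    = gst_element_factory_make("fakesink", "sink");

  GstCaps *caps = gst_caps_from_string("video/x-raw(memory:NVMM), format=RGBA");
  g_object_set(capsflt, "caps", caps, NULL);
  gst_caps_unref(caps);

  // Placeholder path to the dewarper calibration config.
  g_object_set(dewarp, "config-file", "calibration_dewarper.txt", NULL);

  gst_bin_add_many(GST_BIN(pipeline), src, conv, capsflt, dewarp, sink, NULL);
  gst_element_link_many(src, conv, capsflt, dewarp, sink, NULL);

  gst_element_set_state(pipeline, GST_STATE_PLAYING);

  // Block until an error or end-of-stream message arrives.
  GstBus *bus = gst_element_get_bus(pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
      (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
  if (msg)
    gst_message_unref(msg);

  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(bus);
  gst_object_unref(pipeline);
  return 0;
}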
So some plugins are developed by yourself, like the source and undist. And you want to copy the CPU memory to the GPU first, then copy it from the GPU back to the CPU.
Could you refer to the link below to attach your pipeline image? https://forums.developer.nvidia.com/t/deepstream-sdk-faq/80236/10
None of my pipeline was developed by myself; the equivalent pipeline would be something like:
gst-launch-1.0 nvarguscamerasrc ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdewarper config-file=/data/juganu/jedge-sense-hub/config/video_capturing_config/calibration_dewarper.txt nvbuf-memory-type=3 ! m.sink_0 nvstreammux name=m width=4032 height=3040 batch-size=1 num-surfaces-per-frame=1 ! fakesink
OK, you can just use fpsdisplaysink to test the FPS without changing the NVMM memory to SystemMemory.
Could you test it with a stream source instead of the camera source and see the difference in FPS?
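For example, something like this (a sketch based on the pipeline you posted; fpsdisplaysink with fakesink as its internal video-sink keeps the buffers in NVMM memory, and -v prints the measured FPS):

gst-launch-1.0 -v nvarguscamerasrc ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdewarper config-file=/data/juganu/jedge-sense-hub/config/video_capturing_config/calibration_dewarper.txt nvbuf-memory-type=3 ! m.sink_0 nvstreammux name=m width=4032 height=3040 batch-size=1 num-surfaces-per-frame=1 ! fpsdisplaysink text-overlay=false video-sink=fakesink sync=false

To test with a stream source instead, nvarguscamerasrc could be replaced with, for example, uridecodebin pointing at a recorded file.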
The conversion to SystemMemory is required by the pipeline, so I used a tool that measures the total time it takes to run, that is, from input to output and not only for the given component. Note that when I do not use the calibration component I achieve full FPS, and only when I insert it do I see the FPS drop.
Hi @tamirg1, since the dewarper needs some complex algorithms, it may take some time, so the FPS will decrease. Could you set some performance parameters and see if it helps?
For the performance test:
1. Max power mode is enabled: $ sudo nvpmodel -m 0
2. The GPU clocks are stepped to maximum: $ sudo jetson_clocks
Hi @yuweiw,
We are already running with max power mode and the GPU clocks stepped to maximum; in addition, we are running without a UI as well.
About the math of the undistortion:
As we know, the distortion correction can be defined as:
x_dist = x*(1 + k1*r^2 + k2*r^4 + k3*r^6 + ...) + (2*p1*x*y + p2*(r^2 + 2*x^2))*(1 + p3*r^2 + ...)
y_dist = y*(1 + k1*r^2 + k2*r^4 + k3*r^6 + ...) + (2*p2*x*y + p1*(r^2 + 2*y^2))*(1 + p3*r^2 + ...)
Then, through the undist module, I'm loading the parameters k1, k2, k3, p1, p2 into the system, and I am assuming that p3 = 0 (as well as all higher-order terms), since there seems to be no option to insert them.
Then, by pre-computing the mapping for each pixel with the above formula, we can take a frame and remap its pixels using that mapping; per frame this should be a constant-time lookup per pixel.
This is why I'm assuming I'm missing something; I would be glad to get a proper answer about this.
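To make the point concrete, here is a minimal OpenCV sketch of that idea in C++ (the camera matrix and distortion values below are placeholders, not our calibration): the mapping is computed once from k1, k2, p1, p2, k3, and each frame afterwards is only a lookup/interpolation per pixel.

#include <opencv2/opencv.hpp>

int main() {
  // Placeholder intrinsics (fx, fy, cx, cy) -- example values only.
  cv::Mat K = (cv::Mat_<double>(3, 3) << 1700, 0, 2016,
                                         0, 1700, 1520,
                                         0, 0, 1);
  // Distortion coefficients in OpenCV order k1, k2, p1, p2, k3 -- example values only.
  cv::Mat dist = (cv::Mat_<double>(1, 5) << -0.2, 0.05, 0.0, 0.0, 0.01);

  // Source resolution; must match the incoming frames.
  cv::Size size(4032, 3040);

  // Done once, in advance: builds the per-pixel source-coordinate maps.
  cv::Mat map1, map2;
  cv::initUndistortRectifyMap(K, dist, cv::Mat(), K, size, CV_16SC2, map1, map2);

  cv::VideoCapture cap(0);
  cv::Mat frame, undistorted;
  while (cap.read(frame)) {
    // Per frame: constant work per pixel, no re-evaluation of the polynomial.
    cv::remap(frame, undistorted, map1, map2, cv::INTER_LINEAR);
  }
  return 0;
}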
Hi @yuweiw,
You are right; it seems that in my testing I did remove this field. It is the only field missing, and it is set to 3, which is PerspectivePerspective, as described in the Gst-nvdewarper — DeepStream 6.1.1 Release documentation.
I tried the default as well in the end.
The file should look like:
[property]
projection-type=3
output-width=1920
output-height=1080
distortion = 0;0;0;0;0
focal-length = 1700 # Focal length of camera lens, in pixels per radian.
src-x0 = 960 # (srcWidth-1)/2
src-y0 = 540 # (srcHeight-1)/2
There is no update from you for a period, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
The projection-type parameter and some other parameters should be set in the surface group, but you set them in the property group, so we cannot parse them correctly. Could you refer to our demo file to set the parameters and see what the FPS is?
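For reference, a sketch of how the same values could be arranged (assuming the surface group is named [surface0], as in the demo config; please verify the exact keys against the dewarper sample config shipped with DeepStream):

[property]
output-width=1920
output-height=1080

[surface0]
projection-type=3
distortion=0;0;0;0;0
focal-length=1700
src-x0=960
src-y0=540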