I want to use nvinfer after cameraundistort

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU):** GPU
**• DeepStream Version:** 6.1
**• CUDA Version:** 11.6
**• TensorRT Version:** 8.4.1
**• NVIDIA GPU Driver Version (valid for GPU only):** 510.39.01

DATA="<?xml version=\"1.0\"?><opencv_storage><cameraMatrix type_id=\"opencv-matrix\"><rows>3</rows><cols>3</cols><dt>f</dt><data>1473.44968 0.  938.49271 0. 1421.5382 545.12286 0. 0. 1.</data></cameraMatrix><distCoeffs type_id=\"opencv-matrix\"><rows>5</rows><cols>1</cols><dt>f</dt><data>-0.527372 0.228445 0.005982 0.006808  0.000000</data></distCoeffs></opencv_storage>"

gst-launch-1.0 rtspsrc location=rtsp://admin:admin@192.168.0.1:554 latency=100 short-header=true ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! cameraundistort settings="$DATA" ! ximagesink

With this I get an undistorted image. I now want to use it to detect people and vehicles, so how do I build the pipeline so that nvinfer runs on this undistorted output? Thanks!

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

You can try to add nvvideoconvert->nvstreammux->nvinfer->… after the cameraundistort plugin.
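For reference, a minimal sketch of what that could look like on top of the original command. The extra videoconvert, the nvstreammux width/height/batch-size, the display sink, and the config-file-path (here pointing at the default 4-class detector config shipped with the DeepStream samples, whose classes include Person and Vehicle) are assumptions to adjust for your stream and model:

# DATA is the same OpenCV cameraMatrix/distCoeffs XML string defined above
gst-launch-1.0 rtspsrc location=rtsp://admin:admin@192.168.0.1:554 latency=100 short-header=true ! \
  rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! \
  cameraundistort settings="$DATA" ! \
  videoconvert ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! \
  mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink

The key point is that nvstreammux only accepts NVMM (GPU) buffers, so nvvideoconvert together with the video/x-raw(memory:NVMM) caps filter is what moves the undistorted frames into device memory before batching and inference; the extra videoconvert after cameraundistort is just there to help format negotiation between the OpenCV element and nvvideoconvert.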
