• Hardware Platform (Jetson / GPU): Jetson Nano 4GB
• DeepStream Version: 6.0.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.0.1.6
• NVIDIA GPU Driver Version (valid for GPU only): NONE
• Issue Type (questions, new requirements, bugs): questions
Hi,
I ran yolo3_tiny in /deepstream-6.0/sources/objectDetector_Yolo/ with the default config on an MP4 file and got 40 fps.
But after adding the camera settings below to yolo3_tiny.txt, the fps dropped to 14.
[source0]
enable=1
type=1
camera-id=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
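For reference, I assume the camera's raw capture rate could be checked outside DeepStream with a pipeline like the one below, to see whether the camera itself is the bottleneck (the device path and caps are my assumptions for my setup):

```shell
# Measure raw YUYV capture rate at 640x480 from /dev/video0 (assumed device).
# fpsdisplaysink with a fakesink video-sink reports achieved fps on the
# console without adding rendering load.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  'video/x-raw,format=YUY2,width=640,height=480,framerate=30/1' ! \
  videoconvert ! fpsdisplaysink video-sink=fakesink text-overlay=false -v
```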
My camera's supported formats are listed below:
v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
    Index       : 0
    Type        : Video Capture
    Pixel Format: 'MJPG' (compressed)
    Name        : Motion-JPEG
        Size: Discrete 1280x720
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 640x480
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 352x288
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 320x240
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 176x144
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 160x120
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 800x600
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 960x720
            Interval: Discrete 0.033s (30.000 fps)

    Index       : 1
    Type        : Video Capture
    Pixel Format: 'YUYV'
    Name        : YUYV 4:2:2
        Size: Discrete 1280x720
            Interval: Discrete 0.100s (10.000 fps)
        Size: Discrete 640x480
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 352x288
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 320x240
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 176x144
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 160x120
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 800x600
            Interval: Discrete 0.050s (20.000 fps)
        Size: Discrete 960x720
            Interval: Discrete 0.067s (15.000 fps)
All other config settings and files are unchanged.
Why is the FPS drop so large? Is it caused by my camera model, or by incorrect settings in the config file?
In addition, I would like to measure the time consumed by each stage of the pipeline, from USB camera capture to the generation of the tracking results. Is this possible, and how should it be implemented?
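From the DeepStream documentation I believe per-component latency logging can be enabled through environment variables before launching the app; is something like the sketch below the right approach? (The config filename matches my setup above; whether deepstream-app prints these measurements on Jetson is my assumption.)

```shell
# Enable DeepStream's built-in latency measurement:
# the first variable enables overall frame latency logging, the second
# adds per-component (source -> streammux -> nvinfer -> tracker ...) numbers.
export NVDS_ENABLE_LATENCY_MEASUREMENT=1
export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1

# Run the same app/config as before; latency lines are printed to the console.
deepstream-app -c yolo3_tiny.txt
```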
Many thanks; I look forward to your reply.