• Hardware Platform (Jetson / GPU) Jetson TX2
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) 4.4.1
• TensorRT Version 7.1.3
• Issue Type (questions, new requirements, bugs) New Requirements
My team and I have been trying to run YOLOv4 and Deep SORT on the Jetson TX2, but we keep running into compatibility issues. I realized today that PeopleNet serves our purposes perfectly, and I was able to get the PeopleNet DeepStream sample running. I have a few questions I was hoping the community could help me with.
We have a generic (non-brand-name) USB camera that shows up under /dev/video0, but when I modify the
deepstream_app_source1_peoplenet.txt file to use it, I get the following error:
incorrect camera parameter provided, please provide supported resolution and frame rate.
Here are the modified config params:
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=Camera (CSI) (Jetson Only)
type=1
camera-width=1920
camera-height=1080
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0
gpu-id=0
and this is the output from running
v4l2-ctl -d /dev/video0 --all:
Driver Info (not using libv4l2):
	Driver name   : uvcvideo
	Card type     : HiCamera
	Bus info      : usb-3530000.xhci-2.4
	Driver version: 4.9.140
	Capabilities  : 0x84200001
		Video Capture
		Streaming
		Extended Pix Format
		Device Capabilities
	Device Caps   : 0x04200001
		Video Capture
		Streaming
		Extended Pix Format
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
	Width/Height      : 1920/1080
	Pixel Format      : 'MJPG'
	Field             : None
	Bytes per Line    : 0
	Size Image        : 4147200
	Colorspace        : Default
	Transfer Function : Default (maps to Rec. 709)
	YCbCr/HSV Encoding: Default (maps to ITU-R 601)
	Quantization      : Default (maps to Full Range)
	Flags             :
Crop Capability Video Capture:
	Bounds      : Left 0, Top 0, Width 1920, Height 1080
	Default     : Left 0, Top 0, Width 1920, Height 1080
	Pixel Aspect: 1/1
Selection: crop_default, Left 0, Top 0, Width 1920, Height 1080
Selection: crop_bounds, Left 0, Top 0, Width 1920, Height 1080
Streaming Parameters Video Capture:
	Capabilities     : timeperframe
	Frames per second: 30.000 (30/1)
	Read buffers     : 0
brightness 0x00980900 (int)    : min=0 max=100 step=1 default=50 value=50
exposure_auto 0x009a0901 (menu): min=0 max=3 default=2 value=2
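For what it's worth, I can also dump every resolution/frame-rate combination the camera advertises with
v4l2-ctl -d /dev/video0 --list-formats-ext
and post that here if it helps (e.g., in case the MJPG pixel format is part of the problem).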
Is there any reason why this wouldn’t be working? I’m not quite sure how I messed up the camera config…
After running inference on the live camera feed, our goal is to transmit the output to AWS via IoT MQTT. Specifically, we want to publish the number of unique objects on screen at any given time. Is there an easy way to get this information out of the DeepStream sample? As I mentioned, the sample already does everything we need, so I would prefer not to modify the source unless it's necessary. If it is necessary, where would I start? I've sketched my current understanding below.
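From reading the DeepStream metadata headers, my best guess is a buffer probe that reads the per-frame object count out of NvDsFrameMeta, something like the sketch below. To be clear, this is only my assumption of where to start, not anything taken from the sample: the probe function name, the attachment point, and the MQTT hand-off comment are all hypothetical.

/* Hypothetical sketch: count on-screen objects per frame via a buffer
 * probe. The names here (count_objects_probe, nvosd) are my own, not
 * from the DeepStream sample. */
#include <gst/gst.h>
#include "gstnvdsmeta.h"

static GstPadProbeReturn
count_objects_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    /* num_obj_meta is the number of detected objects in this frame */
    g_print ("frame %d: %u objects on screen\n",
        frame_meta->frame_num, frame_meta->num_obj_meta);
    /* ...this is where we would hand the count to an MQTT publisher... */
  }
  return GST_PAD_PROBE_OK;
}

/* Attached once after the pipeline is built (pad/element names assumed):
 *   GstPad *osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
 *   gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
 *       count_objects_probe, NULL, NULL);
 *   gst_object_unref (osd_sink_pad);
 */

If counting unique objects over time (rather than per frame) means we need the tracker's object_id from NvDsObjectMeta instead, a pointer in that direction would be appreciated too.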