How would I extract output from PeopleNet?

• Hardware Platform (Jetson / GPU) Jetson TX2
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) 4.4.1
• TensorRT Version 7.1.3
• Issue Type( questions, new requirements, bugs) New Requirements


My team and I have been attempting to run Yolov4 and deep-sort on the Jetson TX2, but we’re running into a ton of compatibility issues. I realized today that PeopleNet serves our purposes perfectly. I was able to get the sample PeopleNet for Deepstream running today, and I have a few questions I was hoping the community could help me with.

First Question
We have a generic (non-brand named) USB Camera that shows up under /dev/video0, but when I modify the deepstream_app_source1_peoplenet.txt file, I’m getting the following error:

incorrect camera parameter provided, please provide supported resolution and frame rate.

Here are the modified config params:

#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=Camera (CSI) (Jetson Only)

and this is the output from running v4l2-ctl -d /dev/video0 --all:

Driver Info (not using libv4l2):
	Driver name   : uvcvideo
	Card type     : HiCamera
	Bus info      : usb-3530000.xhci-2.4
	Driver version: 4.9.140
	Capabilities  : 0x84200001
		Video Capture
		Extended Pix Format
		Device Capabilities
	Device Caps   : 0x04200001
		Video Capture
		Extended Pix Format
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
	Width/Height      : 1920/1080
	Pixel Format      : 'MJPG'
	Field             : None
	Bytes per Line    : 0
	Size Image        : 4147200
	Colorspace        : Default
	Transfer Function : Default (maps to Rec. 709)
	YCbCr/HSV Encoding: Default (maps to ITU-R 601)
	Quantization      : Default (maps to Full Range)
	Flags             :
Crop Capability Video Capture:
	Bounds      : Left 0, Top 0, Width 1920, Height 1080
	Default     : Left 0, Top 0, Width 1920, Height 1080
	Pixel Aspect: 1/1
Selection: crop_default, Left 0, Top 0, Width 1920, Height 1080
Selection: crop_bounds, Left 0, Top 0, Width 1920, Height 1080
Streaming Parameters Video Capture:
	Capabilities     : timeperframe
	Frames per second: 30.000 (30/1)
	Read buffers     : 0
                     brightness 0x00980900 (int)    : min=0 max=100 step=1 default=50 value=50
                  exposure_auto 0x009a0901 (menu)   : min=0 max=3 default=2 value=2

Is there any reason why this wouldn’t be working? I’m not quite sure how I messed up the camera config…

Second Question
After running inference on a live camera feed, our goal is to transmit the output to AWS via IoT MQTT. We’re looking to transmit the unique object count on screen at any given time. Is there an easy way to get this information from the Deepstream sample? As I mentioned, the sample already does everything we need, so I would prefer not to have to modify the source if it isn’t necessary. However, if it is necessary, where would I start?
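To illustrate the kind of message we want to publish per frame, here is a minimal plain-Python sketch (the function and field names are hypothetical, not DeepStream API — it just shows the payload shape we're after, assuming a tracker gives each on-screen object an ID):

```python
import json
from collections import Counter

def frame_payload(frame_number, detections):
    """Build the kind of per-frame message we'd publish over MQTT.

    `detections` is a list of (tracker_id, class_label) pairs, i.e. what a
    tracker-equipped pipeline reports for one frame. The unique-object count
    is the number of distinct tracker IDs currently on screen.
    """
    unique_ids = {track_id for track_id, _ in detections}
    per_class = Counter(label for _, label in detections)
    return json.dumps({
        "frame": frame_number,
        "object_count": len(unique_ids),
        "per_class": dict(per_class),
    })

# Example: three tracked people and a bag in one frame.
print(frame_payload(42, [(7, "person"), (9, "person"), (12, "person"), (3, "bag")]))
# → {"frame": 42, "object_count": 4, "per_class": {"person": 3, "bag": 1}}
```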


Your camera is an MJPEG camera, so you could try the command below. If it does not work, you may need to upgrade to JetPack 4.5; as I recall, it works there.
The current deepstream-app does not support MJPEG USB cameras, so even if the pipeline below works, you will need to write your own GStreamer code, referring to the DeepStream samples.

$ gst-launch-1.0 v4l2src device="/dev/video0" ! 'image/jpeg,width=640,height=480' ! nvv4l2decoder mjpeg=1 ! nvvidconv ! nvegltransform ! nveglglessink

For your 2nd question, can deepstream-test5 meet your requirement?

doc: DeepStream Reference Application - deepstream-test5 app — DeepStream Version: 5.0 documentation
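For reference, wiring test5 to a broker is mostly a matter of adding a type=6 (message broker) sink in the app config. A rough sketch — the library path and connection string below are placeholders, and you would swap in the protocol adapter for your broker; check the test5 README for the exact keys:

```ini
[sink1]
enable=1
# type=6 selects the message-broker sink (nvmsgconv + nvmsgbroker)
type=6
# schema/conversion config shipped with the test5 sample
msg-conv-config=dstest5_msgconv_sample_config.txt
# protocol adapter library (placeholder path; pick the one for your broker)
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so
# host;port connection string and topic (placeholder values)
msg-broker-conn-str=localhost;9092
topic=peoplenet-counts
```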

Hello, and thank you for your response. I apologize, I should have made this two separate posts for two different issues.

Re: your first reply about the camera - I switched to a CSI camera (Pi Camera v2) instead, and it works perfectly now. After posting I realized a CSI camera suits our setup better than the USB one.

Re: your second reply - yes, I believe this will help. I did a lot of exploring on the forums and in the documentation after posting, and I now have all the info I need.

Thank you!


Glad to know that!

Sorry for late reply!