Jetson Nano noob here. I've had my Jetson Nano for about a week, cloned dusty-nv's jetson-inference, and hard-coded my home IP camera's RTSP stream into detectnet-camera to detect who is walking by. Everything works, and I'm getting 22 FPS, which I think is pretty good to start with.
The odd part is that the screen displays negative colors, i.e. red flowers are green, and green leaves are purple. Here's my RTSP source line:
"rtspsrc location=rtsp://USERNAME:PASSWORD@camera_ip:port/suffix ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink name=mysink"
I ran `gst-launch-1.0` with the same RTSP pipeline, except with gtksink in place of the appsink, and it shows the right colors. I'm not sure whether it's the RTSP pipeline or something else causing the negative colors.
Any input is appreciated.
For running deep learning inference, we would suggest trying the DeepStream SDK: please install the package through SDKManager and give it a try.
Maybe in this pipeline you converted to BGR instead of the expected RGB format?
Yeah, I was wondering the same, which is why I used gst-launch-1.0 to verify the identical pipeline; the only difference is the sink, where the code requires "appsink name=mysink" while gst-launch-1.0 uses gtksink. gst-launch-1.0 displays the right colors. If the pipeline converts RGB to BGR, it should happen at the sink stage, so I'm still looking to see whether that's where RGB gets converted to BGR.
You may have a look at this dirty patch for jetson-utils (dirty because it may work for your case but may break some other cases) and adapt it for your case.
In short it should end with:
... ! videoconvert ! video/x-raw, format=RGB, width=(int)" << mWidth << ", height=(int)" << mHeight << " ! appsink name=mysink
Thanks for the tip. I tried this, but it does not solve the color issue, and it shows a much larger picture: I'm streaming from a 1920x1080 camera to a monitor of the same resolution, and with this `video/x-raw, format=RGB, width=(int)" << 1920 << ", height=(int)" << 1080` stage the display now only shows part of the video, so I think the resolution wasn't properly captured. I will dig into the code to see if I can find anywhere the color space is defined.
Or you may try format=NV12 and make sure it calls into cudaNV12ToRGBA32().
Thanks! This fixes the problem, and I don't have to specify the width and height; the pipeline is smart enough to negotiate the dimensions on its own.
The final pipeline is this:
rtspsrc location=rtsp://USERNAME:PASSWORD@camera_ip:port/suffix ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! video/x-raw, format=NV12 ! appsink name=mysink