Colour shift caused by inference

I’m having a colour problem: after inference, the output colour is not the same as the input. A good part of the red component is missing.

Data path:

  • My PC pushes the camera feed to the server via WebRTC WHIP. The original camera image is the top window
  • The server forwards this as RTSP (no transcoding involved) to the inference script, which in turn outputs annotated RTSP again
  • The annotated RTSP stream is converted back to WebRTC on the server (again without transcoding) and consumed by the app via the WebRTC WHEP protocol.
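One way to localize where the colour changes along a chain like this is to compare the colour metadata (matrix, transfer, primaries) that each hop advertises. A small sketch, with hypothetical placeholder URLs for the two RTSP hops; it only builds and prints the ffprobe commands, and `probe()` requires ffprobe on the PATH:

```python
import subprocess

# Sketch (assumed setup, placeholder URLs): compare the colour metadata
# each hop advertises, to localize where the colour information changes.

def color_metadata_cmd(url):
    """Build an ffprobe call that prints matrix/transfer/primaries."""
    return [
        "ffprobe", "-v", "error", "-select_streams", "v:0",
        "-show_entries", "stream=color_space,color_transfer,color_primaries",
        "-of", "default=noprint_wrappers=1",
        url,
    ]

def probe(url):
    """Run ffprobe (must be on PATH) and return its key=value output."""
    return subprocess.run(color_metadata_cmd(url),
                          capture_output=True, text=True).stdout

# Placeholder hop URLs; substitute the real ingest and annotated streams.
hops = ["rtsp://server:8554/ingest", "rtsp://server:8554/ds-test"]
for hop in hops:
    print(" ".join(color_metadata_cmd(hop)))
```

If the two hops report different values (e.g. `color_space=bt709` on one and `smpte170m` on the other), that alone can explain a red/yellow shift.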

As you can see, the colour of my arm in particular has lost some red and shifted towards yellow.

The situation is completely different if I don’t consume the annotated video (which has gone through the entire inference chain) but instead get back what I sent upstream. There you see no colour changes…

What could cause this? Which screw needs to be tightened?

Here the effect can be seen a bit better, feeding the server with a GStreamer videotestsrc: on the left the inference input, on the right the output.

I see differences: darker yellow, cyan, and green bars, and brighter magenta, red, and blue on the right.

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

Hardware Platform: GPU, T4, Amazon AWS
DeepStream Version: 6.4
TensorRT Version: 8.6.1
NVIDIA GPU Driver Version: 535.104.12

The issue is easy to reproduce:

  • I was using my camera and forwarding it to an RTSP server. In another experiment I forwarded a colourful video via FFmpeg to one of my RTSP servers (MediaMTX). Input/output is H.264.
  • On the device I was running your deepstream-rtsp-in-rtsp-out Python sample.
  • I pulled the original video from the RTSP server via ffplay and did the same with what the sample sends back via “localhost:8554/ds-test”, then compared both side by side.
  • There is a colour difference; it is marginal, but it hits skin tones and red areas hardest, so it is noticeable, even in side-by-side situations.
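To turn the side-by-side eyeball check into numbers, one could grab a single frame from each stream (for example with `ffmpeg -frames:v 1`) and compare per-channel means. A small sketch with helper names of my own invention, assuming the two frames are already available as same-size lists of (R, G, B) tuples:

```python
# Hypothetical helper (names are mine, not from the sample app): compare
# per-channel means of an input frame and the inference-output frame,
# each given as a same-size list of (R, G, B) tuples.

def channel_means(pixels):
    """Mean value of each of the three channels."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def channel_drift(before, after):
    """Per-channel mean difference; negative = channel lost intensity."""
    return tuple(a - b for b, a in zip(channel_means(before),
                                       channel_means(after)))

# Toy stand-in data: the "output" frame has visibly lost some red.
src = [(200, 120, 100), (180, 110, 95)]
out = [(188, 124, 100), (170, 115, 96)]
print(channel_drift(src, out))  # → (-11.0, 4.5, 0.5): red lost
```

A consistently negative red component across many frames would confirm the visual impression quantitatively.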

Most likely this is caused by the H.264 encoder.
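One classic way an encode/decode hop shifts reds while leaving most of the image only slightly off is a BT.601 vs BT.709 colour-matrix mismatch: one side converts RGB to YCbCr with one set of luma coefficients and the other side converts back with the other. This is only a guess for this thread, but the direction of the shift is easy to illustrate in pure Python (full-range values, example skin tone chosen by me):

```python
# Sketch (assumption, not a confirmed diagnosis): a BT.709-encoded
# colour decoded with BT.601 coefficients loses some red, similar to
# what is described above. Pure-Python round trip, full-range 0..1.

def rgb_to_ycbcr(r, g, b, kr, kb):
    """RGB -> YCbCr using luma coefficients kr, kb (kg = 1 - kr - kb)."""
    kg = 1.0 - kr - kb
    y = kr * r + kg * g + kb * b
    cb = (b - y) / (2.0 * (1.0 - kb))
    cr = (r - y) / (2.0 * (1.0 - kr))
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr, kr, kb):
    """Inverse of rgb_to_ycbcr with the given coefficients."""
    kg = 1.0 - kr - kb
    r = y + 2.0 * (1.0 - kr) * cr
    b = y + 2.0 * (1.0 - kb) * cb
    g = (y - kr * r - kb * b) / kg
    return r, g, b

BT709 = (0.2126, 0.0722)  # (kr, kb)
BT601 = (0.299, 0.114)

skin = (0.80, 0.55, 0.45)  # a reddish skin tone, channels in 0..1

# Encode with BT.709, decode (wrongly) with BT.601:
mismatched = ycbcr_to_rgb(*rgb_to_ycbcr(*skin, *BT709), *BT601)
print([round(c, 3) for c in mismatched])  # red drops below the 0.80 input
```

With matching coefficients the round trip is lossless; with mismatched ones the red channel of this skin tone drops, which is the kind of subtle, skin-tone-heavy shift reported above. Checking the `colorimetry`/VUI settings on the decoder and encoder ends would confirm or rule this out.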

Can you dump the original camera H.264 stream from your RTSP server? It is better to use your original video to reproduce the issue.

Sure. Could you give me your preferred time, in UTC? I’m not at my desk yet; maybe in 3 hours, or tomorrow. It is UTC+1 here.

A dump of the H.264 stream in byte-stream format is enough for us to debug. You can share the file through a forum message if you don’t want the video to be publicly visible.

Can you try the following transcoding pipeline with the raw H.264 stream?

gst-launch-1.0 filesrc location=xxx.264 ! h264parse ! nvv4l2decoder ! queue ! nvv4l2h264enc ! h264parse ! mux.video_0 qtmux name=mux ! filesink location=xxx_nv.mp4

Here is the stream:

ffplay rtsp://ai.votix.com:8554/ny

The original 4K video can be found at https://www.youtube.com/watch?v=yhkbg8p2Gts. I downloaded the 720p version as ny.mp4 and am currently re-streaming it to my RTSP server in an endless loop (for as long as required):

ffmpeg -re -stream_loop -1 -i ny.mp4 -c:v libx264 -bf 0 -f rtsp -rtsp_transport tcp rtsp://127.0.0.1:8554/ny

Take the above-mentioned RTSP stream as input to your deepstream-rtsp-in-rtsp-out.py script. Observe the output of the script and compare it with the original stream. Especially check red areas (e.g. the bike lane) and yellow walls (e.g. the Bulgari shop).

Let me know when you are done. I will close this stream at the latest 8 h from now (it is 09:00 UTC now).

I can reproduce the color change with the simple video transcoding pipeline. We will investigate the issue.

That’s perfect. I’ll switch off the stream now.

Thanks