• Jetson Xavier NX
• DeepStream 5.1
• JetPack 4.5.1
• TensorRT 7.1.3.1
• Issue Type: bugs… and questions
• How to reproduce the issue?
I think the easiest way to reproduce this issue is to create a dummy image, send it through an RTSP stream, receive it inside the deepstream-test3 example, and compare the two images at the pixel level. That is essentially what I am doing, except that my setup also involves ROS, and it would be too much of a headache to strip that down just for this report.
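For reference, this is roughly how such a dummy frame can be generated before streaming (a minimal sketch; the resolution and the use of NumPy are my own choices for this test, not something the examples require):

import numpy as np

# Dummy RGBA frame filled with the constant pixel [24, 0, 0, 255].
# The resolution here is arbitrary; my real frames come from a depth camera via ROS.
WIDTH, HEIGHT = 1280, 720
dummy = np.zeros((HEIGHT, WIDTH, 4), dtype=np.uint8)
dummy[:, :, 0] = 24    # R
dummy[:, :, 1] = 0     # G
dummy[:, :, 2] = 0     # B
dummy[:, :, 3] = 255   # A

# Sanity check before streaming: every pixel should be exactly [24, 0, 0, 255].
assert (dummy.reshape(-1, 4) == [24, 0, 0, 255]).all()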
The problem
For debugging purposes, I’m creating a dummy RGBA image with these values on the Red, Green, Blue and Alpha channels:
[24, 0, 0, 255]
I am sending this image through an RTSP stream to DeepStream, into a modified version of the deepstream-test3 example (modified so that I can access the frames through the get_nvds_buf_surface function, very similar to deepstream-imagedata-multistream). When I read the image back there, it has different values:
[38, 1, 0, 255]
This is a big problem for me, because even though there is no visible difference, I want to reconstruct a 16UC1 depth image from these values (e.g. in this case the resulting depth image should contain only the value [2400], but with the received pixels it contains [3810] instead), so even a +/- 1 difference in a pixel is very important for me.
Questions and trials
I am pretty new to DeepStream, GStreamer and RTSP (this is basically one of my first projects of this kind), so I practically know nothing about them. That is why I have a few questions:
- Does RTSP apply any kind of compression/transformation that might affect the image, or should it leave the image completely untouched? (See the small round-trip sketch after this list.)
- Could GStreamer be distorting the image because of some setting I may or may not have set? I also tried the deepstream-imagedata-multistream example and got the same result, the same "noise".
- Could this small distortion be the result of some operation applied to the image (e.g. resize, reshape) during its journey through DeepStream?
- Basically, the main question is: has anyone else seen this kind of "noise" in an image, compared to what it was before going through an RTSP stream and being processed by DeepStream?
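To make the first question more concrete: even before any encoder quantization, a full-range RGB to limited-range YCbCr (BT.601) and back conversion, which the H.264 encode/decode path typically performs, already introduces small rounding errors. This is only an illustration with NumPy using the standard BT.601 equations, not my actual pipeline, so I may be wrong about whether this conversion happens on my path:

import numpy as np

def rgb_to_ycbcr_bt601(rgb):
    # Limited-range (Y: 16-235, Cb/Cr: 16-240) BT.601 conversion
    r, g, b = rgb.astype(np.float64)
    y  = 16  + ( 65.481 * r + 128.553 * g +  24.966 * b) / 255.0
    cb = 128 + (-37.797 * r -  74.203 * g + 112.000 * b) / 255.0
    cr = 128 + (112.000 * r -  93.786 * g -  18.214 * b) / 255.0
    return np.round([y, cb, cr])

def ycbcr_to_rgb_bt601(ycbcr):
    y, cb, cr = ycbcr
    r = 1.164 * (y - 16) + 1.596 * (cr - 128)
    g = 1.164 * (y - 16) - 0.392 * (cb - 128) - 0.813 * (cr - 128)
    b = 1.164 * (y - 16) + 2.017 * (cb - 128)
    return np.clip(np.round([r, g, b]), 0, 255).astype(np.uint8)

pixel = np.array([24, 0, 0])   # the dummy value I stream
round_trip = ycbcr_to_rgb_bt601(rgb_to_ycbcr_bt601(pixel))
print(pixel, "->", round_trip)  # prints [24 0 0] -> [25  0  0]

If this kind of conversion is unavoidable on the RTSP path, it would already explain a +/- 1 deviation, though not the larger differences I am seeing.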
It is worth mentioning, once again, that these distortions are no greater than +/- 15 per pixel (as far as I could observe), so the changes are almost invisible to the eye. From what I could find on this topic, there are a few reported cases of distorted images, but those were serious/obvious cases of data alteration; I could not find anything about this kind of pixel-level distortion.
Below is the content of my dstest3_pgie_config.txt:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
process-mode=1
model-color-format=0
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1
And my modified version of deepstream_test_3.py:
custom_deepstream_test_3.py (18.3 KB)
I would very much appreciate any kind of response/comment/remark/guidance/advice regarding this problem. I do not know whether it is a problem of the aforementioned tools or of how I am using them; if it is the latter, I might still have a chance of solving it, otherwise I might be forced to give up on this idea of accessing depth images inside DeepStream. Thanks!