When testing videos on Xavier NX, we encountered an issue where input images with non-standard resolutions result in the following error: NvOFGetStatus failed
e.g.
* On NX (Ubuntu 20, DeepStream 6.3, CUDA 11.4), the error NvOFGetStatus failed occurs if the image width and height are not multiples of 32.
* On NX (Ubuntu 18, DeepStream 6.0, CUDA 10.2), the error NvOFGetStatus failed occurs regardless of whether the image dimensions are multiples of 32; even when they are, the error still appears.
Case 1: 1654 × 1080
Without resizing the image, nvof automatically aligns it to 1656 × 1080, and can execute successfully.
Case 2: 1172 × 880
Without resizing the image, nvof automatically aligns it to 1176 × 880, but fails to execute.
Manually adjusting the resolution to 1152 × 864 (multiples of 32) allows the GPU to execute.
Setting it to 1184 × 896 (also multiples of 32) causes the GPU to fail again.
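The auto-alignment we observed in the two cases is consistent with nvof rounding the width up to a multiple of 8, which would not satisfy a 32-pixel requirement. A quick sketch of that arithmetic (this is an inference from the numbers above, not documented nvof behavior):

```python
# Inference from the reported numbers, not documented nvof behavior:
# the auto-aligned widths match rounding the width up to a multiple of 8.

def align_up(value: int, multiple: int) -> int:
    """Round value up to the next multiple of `multiple`."""
    return -(-value // multiple) * multiple

print(align_up(1654, 8), align_up(1172, 8))   # 1656 1176, matching the cases above
# Neither auto-aligned width is a multiple of 32:
print(1656 % 32, 1176 % 32)                   # 24 24
# Rounding the original widths up to 32 instead would give:
print(align_up(1654, 32), align_up(1172, 32)) # 1664 1184
```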
We would like to ask:
What is the cause of this issue?
Do we need to manually align the input to a specific resolution to resolve this error?
So, due to a limitation in the Optical Flow SDK, do the output width and height only support common resolutions like 1920x1080 or 1280x720? If so, is the conclusion that only these common resolutions are supported?
As you have observed, the resolution width should be a multiple of 32. This limitation applies to the Jetson Xavier, Jetson Xavier NX, and Jetson TX2 platforms with JetPack versions lower than or equal to 35.4.x.
For the Jetson AGX Orin, Jetson Orin NX, and Jetson Orin Nano series platforms with JetPack version 36.3 or higher, there is no such limitation.
As you mentioned, “the resolution width should be a multiple of 32.” However, we tested a resolution that meets this requirement under JetPack 35.4.X but still encountered an error (NvOFGetStatus failed).
Could this issue be related to how we padded the resolution to make it a multiple of 32? Specifically, while the resolution now meets the requirement (being a multiple of 32), could the padding values be causing the error?
I found a relevant note in the documentation you provided:
The padding I mentioned earlier refers to the following:
The original video resolution is 1172 × 880. To meet specific processing requirements, zero padding is added to the width, resulting in a final resolution of 1184 × 896.
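The dimension math behind this padding step can be sketched as follows (plain Python, covering only the size calculation; the actual zero padding was applied to the video frames):

```python
# Sketch of the dimension math behind the zero padding described above:
# round each dimension of 1172x880 up to the next multiple of 32.

def pad_to_multiple(dim: int, multiple: int = 32) -> int:
    """Smallest value >= dim that is a multiple of `multiple`."""
    return -(-dim // multiple) * multiple

src_w, src_h = 1172, 880
dst_w, dst_h = pad_to_multiple(src_w), pad_to_multiple(src_h)
print(dst_w, dst_h)                  # 1184 896
print(dst_w - src_w, dst_h - src_h)  # 12 extra zero columns, 16 extra zero rows
```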
The implementation methods I used are as follows:
Y means error, N means no error; "Sample test" refers to the file path you provided.
On NX20, when using non-standard resolutions such as 1172 × 880 and 1654 × 1080, errors occurred in both the sample test and our own files.
However, adjusting the output size to a multiple of 32 resolved the issue, making it executable without the need to apply zero padding to the input video beforehand.
[NX18(Ubuntu 18, DeepStream 6.0, CUDA 10.2, JetPack4.6.1, L4T 32.7.1)]
On NX18, when using the same non-standard resolutions (1172 × 880 and 1654 × 1080), errors also occurred in both the sample test and our own files.
However, when adjusting the output size to a multiple of 32:
The sample test ran successfully.
Errors persisted in our own files.
1. Could this issue be caused by a plugin we implemented in our own files?
2. Why are we still encountering the limitation that the resolution width must be a multiple of 32, even though our JetPack version is above 35.4 (we are using 35.6.0)?
3. Does only the width need to be adjusted to a multiple of 32 for it to work, while the height remains unaffected?
[Where did I implement with the DeepStream APIs?]
I implemented it in the following path: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/test
(test is a folder I created.)
[How did I implement with the DeepStream APIs?]
I implemented it in a similar way to the file you previously provided: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-nvof-test
The only difference is that we added our own plugin after nvof in the pipeline.
[Reproducing the issue?]
No, I can’t reproduce the issue.
Additionally, I also can’t reproduce it in the case where the original video resolution is 1172 × 880, and zero padding is added to the width, resulting in a final resolution of 1184 × 880.
So, I asked: Does only the width need to be adjusted to a multiple of 32 for it to work, while the height remains unaffected?
Because based on the testing outcome, it seems that we only need to adjust the width to a multiple of 32 in deepstream_nvof_test.c.
Also, as you mentioned, the 32-multiple width restriction should only occur if the JetPack version is below 35.4.x, but we are using 35.6, and the issue can still be reproduced.
The other reproduction results are shown in the table above (the column named Sample test refers to deepstream_nvof_test.c)
Where did you implement the scaling and padding with the pipeline? With a probe function or a customized plugin?
Why can't the original deepstream-nvof-test be used, since there is no issue with it? Have you tried a 1184x880 image directly with the original deepstream-nvof-test? According to your description, the issue is caused by the modification you made. You also mentioned that you scale the image directly in your application code; the input resolution is changed, but we don't know how you handled the caps negotiation for that change. Why do you think it is a gst-nvof issue?
I scaled and padded the video using an FFmpeg command, without utilizing a probe function or a custom plugin. The process I followed is outlined below:
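Step 1 was an FFmpeg padding command along these lines (the exact invocation was not quoted here, so the file names and filter arguments below are assumed; the snippet only computes the padded width and prints the command rather than running FFmpeg):

```shell
# Assumed reconstruction of the FFmpeg padding step (Step 1), not the exact
# command from the original post: pad the 1172x880 source on the right with
# black pixels so the width becomes a multiple of 32, leaving the height alone.
W=1172; H=880
PAD_W=$(( (W + 31) / 32 * 32 ))   # rounds 1172 up to 1184
CMD="ffmpeg -i original.mp4 -vf pad=${PAD_W}:${H}:0:0:black original_pad.mp4"
echo "$CMD"
```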
This generates a padded video original_pad.mp4 with a resolution of 1184x880.
Step 2:
./deepstream-nvof-app file:///original_pad.mp4
I’ve tried a 1184x880 image directly with the original deepstream-nvof-test, and it worked. However, it doesn’t work with our files, so the issue appears to be related to the plugin added to the pipeline. Thank you for your reply!
But there is also an unresolved issue.
So, I asked: Does only the width need to be adjusted to a multiple of 32 for it to work, while the height remains unaffected?
Because based on the testing outcome, it seems that we only need to adjust the output width parameter to a multiple of 32 in deepstream_nvof_test.c.
So how can this be explained?
I did not adjust the original video resolution of 1172x880. In deepstream_nvof_test.c, I set:
This indicates that the issue can be solved by adjusting the MUXER_OUTPUT_WIDTH to a multiple of 32 (in this case, 1184), while leaving MUXER_OUTPUT_HEIGHT unchanged (880, which is not a multiple of 32).
Thus, we can conclude that the MUXER_OUTPUT_WIDTH needs to be a multiple of 32 for the issue to be resolved, without needing to adjust the MUXER_OUTPUT_HEIGHT.
as you mentioned, the 32-multiple width restriction should only occur if the JetPack version is below 35.4.x , but we are using JetPack version 35.6 , and the issue can still be reproduced.
The resolution of the file "original_pad.mp4" is 1184x880, right? If you use this "original_pad.mp4" as the input of deepstream-nvof-test, you cannot set MUXER_OUTPUT_WIDTH to 1172; that is wrong. The video is treated as a whole, and nothing downstream knows about the "padding" pixels, since you have already baked them into the video.
Yes.
The issue does not happen with Orin platforms. For Xavier NX, it is always a limitation.