Another clarification about the image size parameter

hey,
I'm confused by the different parameters where I've been asked to declare the image size.
In the pipeline there are two parameters:

streammux.set_property('width', 1920)
streammux.set_property('height', 1080)

In the TAO training spec file:

augmentation_config {
  output_width: 1248
  output_height: 384
}

And in the nvinfer config file there is yet another image size setting:

infer-dims=3;384;1248

Let's say my frames are sized 640x480. Should I change all of these values to that?

Are there preferred values, or does it only depend on the input?

Moving to deepstream forum.

The nvstreammux plugin forms a batch of frames from multiple input sources. If width/height are non-zero, nvstreammux scales the input frames to that width/height. Please refer to Gst-nvstreammux — DeepStream 6.3 Release documentation.
"infer-dims=3;384;1248" sets the model's input dimensions; the nvinfer plugin scales the data so that the input's dimensions match the model's.

infer-dims is the model's dimension; you should only modify streammux.set_property('width', 1920) and streammux.set_property('height', 1080).

You can try streammux.set_property('width', 1280) and streammux.set_property('height', 720).
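The two scaling stages described above can be sketched in plain Python (this is an illustration, not the DeepStream API; the function names and the letterbox behavior under maintain-aspect-ratio=1 are assumptions based on the documented defaults):

```python
# Sketch of the two scaling stages: nvstreammux first scales every source
# frame to its configured width/height, then nvinfer scales that frame to
# the model's infer-dims. Helper names are illustrative only.

def streammux_scale(src_w, src_h, mux_w=1280, mux_h=720):
    """nvstreammux stretches each input frame to the configured w/h."""
    return mux_w, mux_h

def nvinfer_scale(frame_w, frame_h, model_w=1248, model_h=384,
                  maintain_aspect_ratio=False):
    """nvinfer resizes the muxed frame to the model's input dims.
    With maintain-aspect-ratio=1 it scales to fit instead of stretching."""
    if not maintain_aspect_ratio:
        return model_w, model_h
    scale = min(model_w / frame_w, model_h / frame_h)
    return round(frame_w * scale), round(frame_h * scale)

# A 640x480 camera frame still reaches the model at the model's own dims:
w, h = streammux_scale(640, 480)
w, h = nvinfer_scale(w, h)
```

The point is that the source frame size never has to equal infer-dims: the plugins rescale at each stage, so only streammux width/height is yours to tune.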

OK, thanks.
If I understand correctly, the documentation says that these values can be changed:

Input size: C * W * H (where C = 1 or 3, W >= 128, H >= 128, W, H are multiples of 32)
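The documented constraint can be expressed as a quick check (the helper name is hypothetical, not part of TAO):

```python
def valid_input_dims(c, w, h):
    """Check the documented rule: C is 1 or 3, W and H are each >= 128
    and multiples of 32. Helper name is illustrative, not a TAO API."""
    return (c in (1, 3)
            and w >= 128 and h >= 128
            and w % 32 == 0 and h % 32 == 0)

# 1248x384, 640x480, and 1920x1088 all satisfy the rule (480 = 15 * 32),
# so a training crash with those sizes is not explained by this constraint.
```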

But with any other value (still a multiple of 32) that I substitute for output_width and output_height (for example, 640x480 or 1920x1088), I receive the following error:

INFO: Training loop in progress
Epoch 2/200
60/97 [=================>............] - ETA: 1:40 - loss: 23132.1183
2022-12-29 20:08:40.109369: F tensorflow/stream_executor/cuda/redzone_allocator.cc:287] Check failed: !lhs_check.ok() || !rhs_check.ok() Mismatched results with host and device comparison
[ef9bb8f429eb:00078] *** Process received signal ***
[ef9bb8f429eb:00078] Signal: Aborted (6)
[ef9bb8f429eb:00078] Signal code: (-6)
[ef9bb8f429eb:00078] [ 0] /usr/lib/x86_64-linux-gnu/libc.so.6(+0x46210)[0x7f832c345210]
[ef9bb8f429eb:00078] [ 1] /usr/lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcb)[0x7f832c34518b]
[ef9bb8f429eb:00078] [ 2] /usr/lib/x86_64-linux-gnu/libc.so.6(abort+0x12b)[0x7f832c324859]
[ef9bb8f429eb:00078] [ 3] /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so(+0xc1b1788)[0x7f82b07c1788]
[ef9bb8f429eb:00078] [ 4] /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so(+0x236d57b)[0x7f82a697d57b]
[ef9bb8f429eb:00078] [ 5] /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so(_ZN10tensorflow13CheckRedzonesINS_27AutotuneExecutionPlanResultEEEvRKN15stream_executor4cuda16RedzoneAllocatorEPT_+0x52)[0x7f82ad983a02]
[ef9bb8f429eb:00078] [ 6] /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so(_ZN10tensorflow27LaunchConv2DBackpropInputOpIN5Eigen9GpuDeviceEfEclEPNS_15OpKernelContextEbbRKNS_6TensorES8_iiiiRKNS_7PaddingERKSt6vectorIxSaIxEEPS6_NS_12TensorFormatE+0x2719)[0x7f82ac943799]
[ef9bb8f429eb:00078] [ 7] /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so(_ZN10tensorflow21Conv2DBackpropInputOpIN5Eigen9GpuDeviceEfE7ComputeEPNS_15OpKernelContextE+0x253)[0x7f82ac944723]
[ef9bb8f429eb:00078] [ 8] /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/../libtensorflow_framework.so.1(_ZN10tensorflow13BaseGPUDevice7ComputeEPNS_8OpKernelEPNS_15OpKernelContextE+0x3d3)[0x7f82a3910333]
[ef9bb8f429eb:00078] [ 9] /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/../libtensorflow_framework.so.1(+0x11500b7)[0x7f82a396e0b7]
[ef9bb8f429eb:00078] [10] /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/../libtensorflow_framework.so.1(+0x1150723)[0x7f82a396e723]
[ef9bb8f429eb:00078] [11] /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/../libtensorflow_framework.so.1(_ZN5Eigen15ThreadPoolTemplIN10tensorflow6thread16EigenEnvironmentEE10WorkerLoopEi+0x28d)[0x7f82a3a23e6d]
[ef9bb8f429eb:00078] [12] /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/../libtensorflow_framework.so.1(_ZNSt17_Function_handlerIFvvEZN10tensorflow6thread16EigenEnvironment12CreateThreadESt8functionIS0_EEUlvE_E9_M_invokeERKSt9_Any_data+0x4c)[0x7f82a3a2097c]
[ef9bb8f429eb:00078] [13] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xd6de4)[0x7f8322e44de4]
[ef9bb8f429eb:00078] [14] /usr/lib/x86_64-linux-gnu/libpthread.so.0(+0x9609)[0x7f832c2e5609]
[ef9bb8f429eb:00078] [15] /usr/lib/x86_64-linux-gnu/libc.so.6(clone+0x43)[0x7f832c421293]
[ef9bb8f429eb:00078] *** End of error message ***
2022-12-29 22:08:41,985 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.

So, do these values need to be changed according to the size of the input frames, or should I leave them as they were defined?

What application are you testing? Is there any DeepStream problem?

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.