Using dynamic shapes with a space_to_depth implementation

Description

I build my network from scratch using INetworkDefinition.add_xxx.
In my implementation, I can use reshape and transpose in shuffle layers to build a space_to_depth operation.

However, fetching reduced_height and reduced_width is troublesome when the input has dynamic shape.

Is there a clean implementation for this function?

Environment

TensorRT Version: 8.0.0 EA
GPU Type: Tesla T4
Nvidia Driver Version: 460.67
CUDA Version: 11.3
CUDNN Version: 8.2
Operating System + Version: Ubuntu 16.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

## Code
```python
class ModelData(object):
    INPUT_NAME = "input_image"
    INPUT_SHAPE = (1, 256, 256, 3)
    OUTPUT_NAME_0 = "output_feature"
    DTYPE = trt.float32


def populate_network(network):
    # inputs
    input_tensor = network.add_input(name=ModelData.INPUT_NAME,
                                     dtype=ModelData.DTYPE,
                                     shape=ModelData.INPUT_SHAPE)

    # trying to use the shape fetched dynamically, but reshape_dims only accepts ints
    shape = network.add_shape(input=input_tensor)
    print(shape.get_output(0))

    reduced_height = reduced_width = 32

    space_to_depth_01 = network.add_shuffle(input_tensor)
    space_to_depth_01.reshape_dims = [1, reduced_height, 8, reduced_width, 8, 3]
    space_to_depth_01.second_transpose = trt.Permutation((0, 1, 3, 2, 4, 5))

    space_to_depth_02 = network.add_shuffle(space_to_depth_01.get_output(0))
    space_to_depth_02.reshape_dims = [1, reduced_height, reduced_width, 192]
```
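For reference, here is a NumPy sketch (my own, not part of the network code) of the same reshape/transpose sequence the two shuffle layers perform, with `reduced_height` and `reduced_width` computed from the input shape rather than hard-coded. This only checks that the transform itself is a correct space_to_depth with block size 8; it does not solve the TensorRT dynamic-shape part of the question.

```python
import numpy as np

block = 8
# sample NHWC input matching INPUT_SHAPE = (1, 256, 256, 3)
x = np.arange(1 * 256 * 256 * 3, dtype=np.float32).reshape(1, 256, 256, 3)

# the "reduced" dims the question wants to fetch at runtime
n, h, w, c = x.shape
reduced_height, reduced_width = h // block, w // block

# shuffle 1: reshape + transpose (matches reshape_dims / second_transpose above)
y = x.reshape(n, reduced_height, block, reduced_width, block, c)
y = y.transpose(0, 1, 3, 2, 4, 5)

# shuffle 2: final reshape, folding both 8x8 blocks and channels into depth
y = y.reshape(n, reduced_height, reduced_width, block * block * c)

print(y.shape)  # (1, 32, 32, 192)
```

Each output position `(i, j)` holds the flattened 8x8x3 block `x[0, 8*i:8*i+8, 8*j:8*j+8, :]`, which is the expected space_to_depth layout.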

A possible workaround is described in the Developer Guide :: NVIDIA Deep Learning TensorRT Documentation.