Broadcasting Constant Tensor across Batch Dimension


I have a constant tensor of shape (800,). I would like to broadcast this across an unknown batch dimension in explicit batch mode. I have tried many approaches, but none of them have worked. What is the best way to do this?
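For context, a minimal sketch of the setup in the TensorRT Python API (the actual build is in C++ via the repro steps below; names here are illustrative):

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # Explicit batch mode: the batch dimension is part of every tensor shape.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

    # Batch size is unknown at build time, hence -1.
    x = network.add_input("x", trt.float32, (-1, 800))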

I have tried:

  • Creating the tensor as a constant of size (-1, 800)
  • Creating the tensor with size (800,) and reshaping to (-1, 800)
  • Creating the tensor and using a shuffle layer to reshape it at runtime to (N, 800). This fails with the error: [Shuffle]: reshaping failed for tensor: (Unnamed Layer* 188) [Constant]_output Reshape would change volume. (See the sketch after this list.)
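
A minimal sketch of the failing shuffle attempt, continuing from the setup above (the constant's values are placeholders):

    import numpy as np

    # Constant of shape (800,) -- its volume is fixed at 800 elements.
    const = network.add_constant(
        (800,), trt.Weights(np.zeros(800, dtype=np.float32)))

    # Attempted runtime reshape to (-1, 800). A reshape cannot change the
    # element count, so any batch size other than 1 would need N * 800
    # elements -- hence "Reshape would change volume".
    shuffle = network.add_shuffle(const.get_output(0))
    shuffle.reshape_dims = (-1, 800)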

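For completeness, one direction I have not fully explored: TensorRT elementwise layers broadcast dimensions of size 1, so a (1, 800) constant may sidestep the reshape entirely. A sketch, again continuing from the setup above and assuming the constant is ultimately consumed by an elementwise op:

    # A leading dimension of 1 broadcasts against the dynamic batch
    # dimension inside the elementwise layer; no explicit reshape needed.
    const_b = network.add_constant(
        (1, 800), trt.Weights(np.ones((1, 800), dtype=np.float32)))
    y = network.add_elementwise(
        x, const_b.get_output(0), trt.ElementWiseOperation.SUM)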

TensorRT Version: 8.5.2
GPU Type: A100 40GB
Nvidia Driver Version: 510.47.03
CUDA Version: 12.0
CUDNN Version: 8.7.0
Operating System + Version: Ubuntu 20.04.5 LTS
Python Version (if applicable): 3.8.10
TensorFlow Version (if applicable): 2.11.0+nv23.1
PyTorch Version (if applicable): n/a
Baremetal or Container (if container which image + tag):

Relevant Files

Steps To Reproduce

git clone
cd p3achygo
git checkout dev/trt-backend
docker build -f docker/Dockerfile-base_local -t <image_name> .
docker run --gpus=all -dit <image_name>
docker exec -it <container_id> /bin/bash
bazel build --config=dbg //cc/nn/engine/scripts:build_and_run_trt_engine
./bazel-bin/cc/nn/engine/scripts/build_and_run_trt_engine --weights_path=/app/cc/nn/engine/__testdata__/model.h5

We recommend you check the TF-TRT sample links below in case of TF-TRT integration issues.

If the issue persists, we recommend reaching out to the TensorFlow forum.