Description
I have a constant tensor of shape (800,). I would like to broadcast this across an unknown batch dimension in explicit batch mode. I have tried many approaches, but none of them have worked. What is the best way to do this?
I have tried:
- Creating the tensor as a constant of size (-1, 800)
- Creating the tensor with size (800,) and reshaping to (-1, 800)
- Creating the tensor and using a shuffle layer to reshape it at runtime to (N, 800). This fails with the error:
  [Shuffle]: reshaping failed for tensor: (Unnamed Layer* 188) [Constant]_output Reshape would change volume.)
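For context, the "Reshape would change volume" error follows from the shape arithmetic: a constant holds exactly 800 values, so no reshape can produce (N, 800) for N > 1. A minimal NumPy sketch of the same rules (NumPy's broadcasting semantics match what TensorRT applies in elementwise layers; the shapes here are illustrative):

```python
import numpy as np

# A constant with 800 elements: its volume is fixed at 800.
const = np.arange(800, dtype=np.float32)

# Reshaping with -1 can only infer (1, 800) -- the volume (800)
# cannot grow to N * 800, which is why a reshape to (N, 800)
# is rejected as changing the volume.
const_2d = const.reshape(-1, 800)
assert const_2d.shape == (1, 800)

# Broadcasting handles the batch dimension instead: an elementwise op
# between (N, 800) and (1, 800) expands the leading 1 automatically,
# so the constant never needs an explicit batch-sized reshape.
batch = np.zeros((32, 800), dtype=np.float32)  # stand-in for a dynamic batch
out = batch + const_2d
assert out.shape == (32, 800)
```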
Environment
TensorRT Version: 8.5.2
GPU Type: A100 40GB
Nvidia Driver Version: 510.47.03
CUDA Version: 12.0
CUDNN Version: 8.7.0
Operating System + Version: Ubuntu 20.04.5 LTS
Python Version (if applicable): 3.8.10
TensorFlow Version (if applicable): 2.11.0+nv23.1
PyTorch Version (if applicable): n/a
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorflow:23.01-tf2-py3
Relevant Files
Steps To Reproduce
git clone https://github.com/p3achyjr/p3achygo.git
cd p3achygo
git checkout dev/trt-backend
docker build -f docker/Dockerfile-base_local -t <image_name> .
docker run -it --gpus=all <image_name> /bin/bash
bazel build --config=dbg //cc/nn/engine/scripts:build_and_run_trt_engine
./bazel-bin/cc/nn/engine/scripts/build_and_run_trt_engine --weights_path=/app/cc/nn/engine/__testdata__/model.h5