TensorRT 8.6 API: torch.multinomial equivalent?

Description

Hi,

Is there an operation in TensorRT that plays the role of torch.multinomial?

I couldn’t find anything equivalent in the API reference below:
https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/index.html
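In case it helps clarify what I need: the fallback I’m considering is to emulate torch.multinomial(probs, 1) with the Gumbel-max trick, since argmax(log p + Gumbel noise) has the same distribution and only needs ops the ONNX parser handles (Log, Add, ArgMax). The uniform noise would be fed in as an extra engine input, so no random op is needed inside the graph. A minimal sketch (the export path and file names here are my own assumptions, not something I’ve confirmed):

```python
# Sketch: stand-in for torch.multinomial(probs, num_samples=1) using the
# Gumbel-max trick. The uniform noise is a separate input (regenerated on the
# host per call), so the exported graph only contains Log/Add/ArgMax.
import torch


class GumbelMaxSampler(torch.nn.Module):
    def forward(self, probs, uniform_noise):
        # Gumbel(0, 1) noise derived from U(0, 1) samples
        gumbel = -torch.log(-torch.log(uniform_noise))
        # argmax(log p + g) is distributed like multinomial(probs, 1)
        return torch.argmax(torch.log(probs) + gumbel, dim=-1)


if __name__ == "__main__":
    probs = torch.softmax(torch.randn(4, 10), dim=-1)  # (batch, num_classes)
    noise = torch.rand_like(probs)                     # new noise every call
    sampler = GumbelMaxSampler()
    print(sampler(probs, noise))                       # one class index per row
    # Export, then build an engine, e.g. trtexec --onnx=gumbel_sampler.onnx
    torch.onnx.export(sampler, (probs, noise), "gumbel_sampler.onnx",
                      input_names=["probs", "uniform_noise"],
                      output_names=["sample"])
```

If TensorRT has a built-in layer or a recommended plugin for this, I’d rather use that.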

Thanks!


Environment

TensorRT Version: 8.6
GPU Type: RTX A6000
Nvidia Driver Version: 528.49
CUDA Version: 12.0
CUDNN Version: X
Operating System + Version:
Python Version (if applicable): 3.9
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):


Hi,

The link below might be useful for you.

https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html
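As a rough illustration of the stream API above (assuming a prebuilt serialized engine, static input shapes, and pycuda installed; "model.plan" is just a placeholder name), asynchronous TensorRT inference on a dedicated CUDA stream looks roughly like this:

```python
# Rough sketch only: run TensorRT inference asynchronously on one CUDA stream.
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("model.plan", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
stream = cuda.Stream()

# Allocate page-locked host buffers and device buffers for every binding.
bindings, inputs, outputs = [], [], []
for i in range(engine.num_bindings):
    shape = context.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(trt.volume(shape), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    bindings.append(int(dev))
    (inputs if engine.binding_is_input(i) else outputs).append((host, dev))

# Fill the input host buffers with real data before this point.
for host, dev in inputs:
    cuda.memcpy_htod_async(dev, host, stream)      # H2D copies on the stream
context.execute_async_v2(bindings, stream.handle)  # enqueue inference
for host, dev in outputs:
    cuda.memcpy_dtoh_async(host, dev, stream)      # D2H copies on the stream
stream.synchronize()                               # wait for this stream only
```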

For multi-threading/streaming, we suggest using DeepStream or Triton.

For more details, we recommend raising the query in the DeepStream forum or in the Triton Inference Server GitHub issues section.

Thanks!