Torch-TensorRT model compilation error - expected value of type Tensor

Hello,
I am struggling with an error while trying to compile a model with Torch-TensorRT.
I am doing this on a Jetson Xavier NX with JetPack 4.6.
Importantly, following this tutorial https://nvidia.github.io/Torch-TensorRT/_notebooks/Resnet50-example.html works without any problems; the error only occurs with my custom network. I suspect the cause might be that the network outputs a tuple of three tensors rather than a single tensor, but I don't know what to do to make it work.

Thank you for any help

The steps that cause the error:

traced_model = torch.jit.trace(model.model, [torch.rand((3, 3, 384, 768)).to("cuda")])

trt_model_fp32 = torch_tensorrt.compile(
    traced_model,
    inputs=[torch_tensorrt.Input((2, 3, 384, 768), dtype=torch.float32)],
    enabled_precisions=torch.float32,
    workspace_size=1 << 22,
)

The error is the following:

RuntimeError                              Traceback (most recent call last)
<ipython-input-13-97c241158f53> in <module>
      1 trt_model_fp32 = torch_tensorrt.compile(traced_model, inputs = [torch_tensorrt.Input((2, 3, 384, 768), dtype=torch.float32)],
      2     enabled_precisions = torch.float32, # Run with FP32
----> 3     workspace_size = 1 << 22
      4 )

~/.local/lib/python3.6/site-packages/torch_tensorrt/_compile.py in compile(module, ir, inputs, enabled_precisions, **kwargs)
     95             )
     96             ts_mod = torch.jit.script(module)
---> 97         return torch_tensorrt.ts.compile(ts_mod, inputs=inputs, enabled_precisions=enabled_precisions, **kwargs)
     98     elif target_ir == _IRType.fx:
     99         raise RuntimeError("fx is currently not supported")

~/.local/lib/python3.6/site-packages/torch_tensorrt/ts/_compiler.py in compile(module, inputs, device, disable_tf32, sparse_weights, enabled_precisions, refit, debug, strict_types, capability, num_min_timing_iters, num_avg_timing_iters, workspace_size, max_batch_size, calibrator, truncate_long_and_double, require_full_compilation, min_block_size, torch_executed_ops, torch_executed_modules)
    117     }
    118 
--> 119     compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))
    120     compiled_module = torch.jit._recursive.wrap_cpp_module(compiled_cpp_mod)
    121     return compiled_module

RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
/home/abc/.local/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py(19): scatter_map
/home/abc/.local/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py(23): scatter_map
/home/abc/.local/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py(36): scatter
/home/abc/.local/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py(44): scatter_kwargs
/home/abc/.local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py(175): scatter
/home/abc/.local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py(158): forward
/home/abc/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(1090): _slow_forward
/home/abc/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(1102): _call_impl
/home/abc/.local/lib/python3.6/site-packages/torch/jit/_trace.py(965): trace_module
/home/abc/.local/lib/python3.6/site-packages/torch/jit/_trace.py(750): trace
<ipython-input-12-eb8b1e74df3d>(2): <module>
/home/abc/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py(3343): run_code
/home/abc/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py(3263): run_ast_nodes
/home/abc/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py(3072): run_cell_async
/home/abc/.local/lib/python3.6/site-packages/IPython/core/async_helpers.py(68): _pseudo_sync_runner
/home/abc/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2895): _run_cell
/home/abc/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2867): run_cell
/home/abc/.local/lib/python3.6/site-packages/ipykernel/zmqshell.py(539): run_cell
/home/abc/.local/lib/python3.6/site-packages/ipykernel/ipkernel.py(302): do_execute
/home/abc/.local/lib/python3.6/site-packages/tornado/gen.py(162): _fake_ctx_run
/home/abc/.local/lib/python3.6/site-packages/tornado/gen.py(234): wrapper
/home/abc/.local/lib/python3.6/site-packages/ipykernel/kernelbase.py(541): execute_request
/home/abc/.local/lib/python3.6/site-packages/tornado/gen.py(162): _fake_ctx_run
/home/abc/.local/lib/python3.6/site-packages/tornado/gen.py(234): wrapper
/home/abc/.local/lib/python3.6/site-packages/ipykernel/kernelbase.py(261): dispatch_shell
/home/abc/.local/lib/python3.6/site-packages/tornado/gen.py(162): _fake_ctx_run
/home/abc/.local/lib/python3.6/site-packages/tornado/gen.py(234): wrapper
/home/abc/.local/lib/python3.6/site-packages/ipykernel/kernelbase.py(361): process_one
/home/abc/.local/lib/python3.6/site-packages/tornado/gen.py(775): run
/home/abc/.local/lib/python3.6/site-packages/tornado/gen.py(162): _fake_ctx_run
/home/abc/.local/lib/python3.6/site-packages/tornado/gen.py(814): inner
/home/abc/.local/lib/python3.6/site-packages/tornado/ioloop.py(741): _run_callback
/home/abc/.local/lib/python3.6/site-packages/tornado/ioloop.py(688): <lambda>
/usr/lib/python3.6/asyncio/events.py(145): _run
/usr/lib/python3.6/asyncio/base_events.py(1451): _run_once
/usr/lib/python3.6/asyncio/base_events.py(438): run_forever
/home/abc/.local/lib/python3.6/site-packages/tornado/platform/asyncio.py(199): start
/home/abc/.local/lib/python3.6/site-packages/ipykernel/kernelapp.py(619): start
/home/abc/.local/lib/python3.6/site-packages/traitlets/config/application.py(664): launch_instance
/home/abc/.local/lib/python3.6/site-packages/ipykernel_launcher.py(16): <module>
/usr/lib/python3.6/runpy.py(85): _run_code
/usr/lib/python3.6/runpy.py(193): _run_module_as_main
RuntimeError:  expected value of type Tensor for return value but instead got value of type tuple.
Value: (tensor([[[[1., 0., 4.,  ..., 1., 0., 2.],
          [4., 0., 1.,  ..., 2., 1., 2.],
          [4., 0., 0.,  ..., 3., 4., 2.],
          ...,
          [1., 3., 2.,  ..., 3., 0., 3.],
          [2., 0., 2.,  ..., 4., 0., 1.],
          [2., 2., 1.,  ..., 3., 2., 4.]],

         [[0., 0., 2.,  ..., 1., 2., 3.],
          [0., 0., 4.,  ..., 4., 0., 0.],
          [3., 4., 3.,  ..., 1., 4., 2.],
          ...,
          [1., 3., 2.,  ..., 3., 3., 3.],
          [2., 4., 1.,  ..., 4., 2., 3.],
          [0., 2., 4.,  ..., 1., 4., 2.]],

         [[2., 3., 0.,  ..., 1., 4., 0.],
          [2., 1., 0.,  ..., 3., 1., 2.],
          [0., 4., 0.,  ..., 0., 2., 3.],
          ...,
          [2., 2., 4.,  ..., 0., 3., 0.],
          [0., 0., 3.,  ..., 1., 3., 3.],
          [1., 0., 4.,  ..., 4., 2., 1.]]],


        [[[1., 1., 1.,  ..., 2., 3., 2.],
          [1., 0., 4.,  ..., 4., 0., 2.],
          [0., 1., 1.,  ..., 3., 3., 3.],
          ...,
          [4., 0., 2.,  ..., 0., 4., 4.],
          [1., 0., 1.,  ..., 0., 4., 1.],
          [1., 1., 4.,  ..., 2., 0., 0.]],

         [[1., 0., 1.,  ..., 3., 3., 3.],
          [3., 2., 3.,  ..., 1., 1., 0.],
          [3., 4., 3.,  ..., 0., 3., 4.],
          ...,
          [3., 0., 3.,  ..., 4., 0., 3.],
          [2., 1., 4.,  ..., 0., 4., 0.],
          [1., 4., 2.,  ..., 2., 3., 4.]],

         [[1., 4., 3.,  ..., 0., 1., 0.],
          [2., 1., 2.,  ..., 4., 3., 0.],
          [3., 0., 2.,  ..., 3., 0., 1.],
          ...,
          [0., 2., 4.,  ..., 3., 0., 0.],
          [4., 1., 2.,  ..., 0., 4., 4.],
          [1., 4., 4.,  ..., 2., 4., 2.]]]], device='cuda:0'),)
Cast error details: Unable to cast Python instance to C++ type (compile in debug mode for details)
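
Since I suspect the multi-output return is the culprit: would a thin wrapper that exposes only a single Tensor before tracing be a reasonable direction? A rough sketch of what I have in mind (the wrapper class and the index argument are mine, not part of the original model):

import torch
import torch.nn as nn

class SingleOutputWrapper(nn.Module):
    # Hypothetical wrapper: calls the wrapped model and returns only one of
    # its outputs, so the traced graph yields a plain Tensor instead of a tuple.
    def __init__(self, model, index=0):
        super().__init__()
        self.model = model
        self.index = index

    def forward(self, x):
        outputs = self.model(x)  # assumed to return a tuple/list of Tensors
        return outputs[self.index]

# wrapped = SingleOutputWrapper(model.model).to("cuda").eval()
# traced_model = torch.jit.trace(wrapped, [torch.rand((3, 3, 384, 768)).to("cuda")])

Or is there a supported way to compile a module with multiple tensor outputs directly?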

Hi,

Would you mind sharing the model as well as a simple reproducible source with us?

RuntimeError:  expected value of type Tensor for return value but instead got value of type tuple.

Based on the error, it seems that a single tensor output is expected rather than a group of tensors.
We will check with our internal team whether only one output tensor is supported.
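
If the original model cannot be shared, even a small stand-in module with the same output structure would help us reproduce the behaviour. Something along these lines would be enough (purely illustrative; the layer sizes are placeholders):

import torch
import torch.nn as nn
import torch_tensorrt

class MultiOutputNet(nn.Module):
    # Toy network returning three tensors, mirroring the reported output structure.
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.head1 = nn.Conv2d(8, 4, kernel_size=1)
        self.head2 = nn.Conv2d(8, 4, kernel_size=1)
        self.head3 = nn.Conv2d(8, 4, kernel_size=1)

    def forward(self, x):
        feat = self.backbone(x)
        return self.head1(feat), self.head2(feat), self.head3(feat)

model = MultiOutputNet().eval().cuda()
traced = torch.jit.trace(model, [torch.rand((2, 3, 384, 768)).cuda()])
trt_mod = torch_tensorrt.compile(
    traced,
    inputs=[torch_tensorrt.Input((2, 3, 384, 768), dtype=torch.float32)],
    enabled_precisions={torch.float32},
)

Whether or not a toy example like this hits the same error would already tell us a lot.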

Thanks.

Hi,
I will ask the authors of the model whether I can share it and get back to you with their response.

Thank you!

Hi,
unfortunately I cannot share the model.
However, I can describe its outputs. If I convert the model to ONNX and inspect the graph outputs, they are:

[Variable (output): (shape=[1,5136], dtype=float32)
Variable (1000): (shape=[1,5136], dtype=float32)
Variable (input2): (shape=[1,2048], dtype=float32)
]
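
(For reference, an equivalent listing can be printed with the onnx package; this is just a sketch, and the file name is a placeholder for my exported model:)

import onnx

m = onnx.load("model.onnx")  # placeholder path to the exported ONNX file
for out in m.graph.output:
    dims = [d.dim_value if d.dim_value > 0 else d.dim_param
            for d in out.type.tensor_type.shape.dim]
    # elem_type is the ONNX dtype enum (1 == FLOAT)
    print(out.name, dims, out.type.tensor_type.elem_type)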

Can you confirm whether it is possible to work with such a model using torch_tensorrt?

Thank you!

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

We will need a model to determine whether this is a bug or something that just needs some guidance.
Do you see a similar error with a public model that has multiple outputs?

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.