How to implement SlowFast model in DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : RTX 2060
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version : 8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only) : 525.85.12
• Issue Type( questions, new requirements, bugs) : Question

How can I use SlowFast models in DeepStream? I'm using them for Multi-Object, Multi-Action Recognition with Tracking.

Is it possible to use torch2trt for this architecture, or does it not support the architecture at all?

It seems you are having trouble generating a TensorRT model from the PyTorch model, right?
DeepStream supports ONNX models, so you can generate an ONNX model first. See (optional) Exporting a Model from PyTorch to ONNX and Running it using ONNX Runtime — PyTorch Tutorials 2.0.1+cu117 documentation

How to Convert a PyTorch Model to ONNX in 5 Minutes - Deci

Thanks, I tried the ONNX conversion and I get this error:

bn 220, non bn 112, zero 0 no grad 0
Loaded pretrained HAR model successfully
/home/maouriyan/Downloads/sample_slowfast-20230516T050511Z-001/sample_slowfast/slowfast/models/stem_helper.py:117: TracerWarning: Using len to get tensor shape might cause the trace to be incorrect. Recommended usage would be tensor.shape[0]. Passing a tensor of different shape might lead to errors or silently give incorrect results.
  len(x) == self.num_pathways
Traceback (most recent call last):
  File "trt.py", line 31, in <module>
    torch.onnx.export(model,
  File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 504, in export
    _export(
  File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 1529, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 1111, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args)
  File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 987, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 891, in _trace_and_get_graph_from_model
    trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
  File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 1184, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 127, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 118, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1178, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/maouriyan/Downloads/sample_slowfast-20230516T050511Z-001/sample_slowfast/slowfast/models/video_model_builder.py", line 420, in forward
    x = self.s1(x)
  File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1178, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/maouriyan/Downloads/sample_slowfast-20230516T050511Z-001/sample_slowfast/slowfast/models/stem_helper.py", line 116, in forward
    assert (
AssertionError: Input tensor does not contain 2 pathway

This is an action recognition model and uses temporal+spatial data.
I read here that this cannot be converted to ONNX.
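One thing worth checking before giving up on the export: the `AssertionError: Input tensor does not contain 2 pathway` comes from `stem_helper.py` asserting `len(x) == self.num_pathways`, which suggests the example input handed to `torch.onnx.export` was a single tensor rather than the list of two pathway tensors SlowFast's `forward()` expects. A hedged sketch of how the dummy input would need to be structured (the shapes and `alpha` value below are illustrative assumptions, not taken from this setup):

```python
import torch

# SlowFast takes a list of two tensors, one per pathway, each shaped
# (N, C, T, H, W). The fast pathway samples alpha x more frames.
alpha = 4
slow = torch.randn(1, 3, 8, 224, 224)           # slow pathway: 8 frames
fast = torch.randn(1, 3, 8 * alpha, 224, 224)   # fast pathway: 32 frames

inputs = [slow, fast]
# This is the condition that fails in stem_helper.py when a single
# tensor is traced instead of a two-element list:
assert len(inputs) == 2

# With the real model loaded, the export call would then look like:
# torch.onnx.export(model, ([slow, fast],), "slowfast.onnx",
#                   input_names=["slow", "fast"], opset_version=13)
```

Whether the rest of the architecture traces cleanly is a separate question, but the specific assertion in the traceback is about the input structure, not an unsupported operator.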

I am trying to achieve multi-person, multi-activity recognition on DeepStream. Please suggest any way to implement this.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

For the model conversion, please consult the author of the model.

For DeepStream multi-person, multi-activity recognition sample, there is a sample in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-action-recognition (C/C++ Sample Apps Source Details — DeepStream 6.2 Release documentation) with our own TAO models.
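As a starting point, the sample can typically be built and run from its source directory roughly like this (paths follow the DeepStream install layout cited above; the exact config file name may differ between releases, so treat this as a sketch):

```
cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-action-recognition
make
./deepstream-3d-action-recognition -c deepstream_action_recognition_config.txt
```

The sample's config files also show how the nvdspreprocess plugin assembles the temporal clip tensor that an action-recognition model consumes, which is the part a SlowFast-style model would need DeepStream to provide.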

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.