Thanks, I tried the ONNX conversion and I get the following error:
bn 220, non bn 112, zero 0 no grad 0
Loaded pretrained HAR model successfully
/home/maouriyan/Downloads/sample_slowfast-20230516T050511Z-001/sample_slowfast/slowfast/models/stem_helper.py:117: TracerWarning: Using len to get tensor shape might cause the trace to be incorrect. Recommended usage would be tensor.shape[0]. Passing a tensor of different shape might lead to errors or silently give incorrect results.
len(x) == self.num_pathways
Traceback (most recent call last):
File "trt.py", line 31, in <module>
torch.onnx.export(model,
File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 504, in export
_export(
File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 1529, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 1111, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 987, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 891, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 1184, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 127, in forward
graph, out = torch._C._create_graph_by_tracing(
File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 118, in wrapper
outs.append(self.inner(*trace_inputs))
File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1178, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/maouriyan/Downloads/sample_slowfast-20230516T050511Z-001/sample_slowfast/slowfast/models/video_model_builder.py", line 420, in forward
x = self.s1(x)
File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/maouriyan/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1178, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/maouriyan/Downloads/sample_slowfast-20230516T050511Z-001/sample_slowfast/slowfast/models/stem_helper.py", line 116, in forward
assert (
AssertionError: Input tensor does not contain 2 pathway
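For context, a minimal sketch of the kind of export call I am attempting (the tensor shapes and frame ratio are illustrative, not from my config; as I understand it, the SlowFast forward() expects a single list containing one tensor per pathway):

import torch

def export_slowfast(model, onnx_path="slowfast.onnx"):
    # Illustrative dummy inputs: (batch, channels, frames, height, width),
    # with the fast pathway carrying alpha=4x more frames than the slow one.
    slow = torch.randn(1, 3, 8, 256, 256)
    fast = torch.randn(1, 3, 32, 256, 256)

    model.eval()
    # The pathway list is wrapped in a tuple so that export passes the whole
    # list as the single positional argument to forward(), which is what the
    # "contains 2 pathway" assertion in stem_helper.py seems to check for.
    torch.onnx.export(
        model,
        ([slow, fast],),
        onnx_path,
        opset_version=16,
        input_names=["slow_pathway", "fast_pathway"],
        output_names=["logits"],
    )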
This is an action recognition model and uses temporal + spatial data.
I read here that this cannot be converted to ONNX.
I am trying to achieve multi-person, multi-activity recognition on DeepStream. Please suggest any ways to implement this.