Unable to Import PyTorch

I have finally managed to integrate my tracker into the pipeline, but only after jumping through many hoops.

I ported the embedder into an nvinfer block and attached its tensor output as DeepStream metadata.
However, since there is no way to access that metadata from an nvtracker block, I had to forgo the block entirely and create a GStreamer buffer probe to perform the nvtracker block’s job manually.

This meant that the buffer probe had to parse the DeepStream metadata, extract the tensor output, wrap the tensor output as a numpy array, call the tracker module, and then modify the DeepStream metadata again to insert the tracking results.
I first tried to do this with the supplied pyds module, but it seemed to cause Python to de-allocate some of the buffers, so I ended up writing a Python C extension just to interact with the metadata.
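For reference, the core of that probe step — wrapping the raw tensor output in a numpy array without copying — can be sketched with plain ctypes/numpy. This is a minimal illustration, not the actual probe: the function name is made up, and the buffer here is simulated, whereas in the real probe the pointer comes from the inference layer metadata.

```python
import ctypes
import numpy as np

def wrap_tensor_output(ptr, shape):
    """Wrap a raw float32 output buffer in a zero-copy numpy view.

    `ptr` is a ctypes pointer to the layer output, as a probe would
    obtain it from the inference metadata. The view is only valid
    while the underlying buffer is alive, so the tracker must consume
    (or copy) it before the probe returns.
    """
    flat = np.ctypeslib.as_array(ptr, shape=(int(np.prod(shape)),))
    return flat.reshape(shape)

# Simulated tensor output: 2 objects x 4-dim embedding, standing in
# for what the embedder network would produce.
raw = (ctypes.c_float * 8)(*range(8))
ptr = ctypes.cast(raw, ctypes.POINTER(ctypes.c_float))

embeddings = wrap_tensor_output(ptr, (2, 4))
print(embeddings.shape)         # (2, 4)
print(float(embeddings[1, 3]))  # 7.0
```

Because the view shares memory with the original buffer, keeping a Python reference to whatever owns that memory for the lifetime of the array is essential — losing it is exactly the kind of premature de-allocation described above.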

My issue is resolved for now, but I must emphasize that this is just a workaround, not a solution. It shouldn’t be this hard to integrate GPU-based trackers into DeepStream, given that recent state-of-the-art trackers (e.g. DeepSORT) are starting to use GPUs.

Hi @azy, were you able to import torch in the end, or were you forced to drop PyTorch?
The error is still present in the DS 6.0 Triton container.

Unfortunately we couldn’t get import torch to work. We had to use the workaround I posted above, but it was a pain to implement.
To recap, we put the model in an nvinfer block and attached a GStreamer buffer probe to operate on the model’s output.