I have a custom-trained PyTorch model in .pt format. How can I give this as an input to the DeepStream Python apps?

Can someone explain how I integrate a PyTorch classification model with DeepStream?
@yuweiw

Could you attach the detailed steps of your model training here? Thanks.

I have a .pt file. What am I supposed to do now to get it working with DeepStream? @yuweiw

import torch
import torch.nn as nn
from torchvision.models.alexnet import alexnet

device = torch.device('cuda')

# create some regular pytorch model...
model = alexnet(pretrained=True).to(device)

# AlexNet's head is `classifier`, not `fc` (that attribute belongs to ResNet),
# and its final Linear layer takes 4096 input features, not 2048
model.classifier[6] = nn.Sequential(
               nn.Linear(4096, 128),
               nn.ReLU(inplace=True),
               nn.Linear(128, 2)).to(device)
model.eval()

torch.save(model.state_dict(), 'alexnet_trt.pth')

Please describe the complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

I am using
GPU
DeepStream 6.1
TensorRT 8.2
Driver: 510.47.03 / CUDA 11.6

@yuweiw

OK, you can refer to the links below to convert your PyTorch model to an ONNX model.
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#convert-onnx-engine
https://github.com/NVIDIA-AI-IOT/torch2trt

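Once you have the .onnx file, you point Gst-nvinfer at it from the config file referenced by your DeepStream Python app. A rough sketch, where the file names and threshold are placeholders for your own setup:

```
[property]
gpu-id=0
onnx-file=alexnet_custom.onnx
batch-size=1
network-mode=2          # 0=FP32, 1=INT8, 2=FP16
network-type=1          # 1 = classifier
labelfile-path=labels.txt
classifier-threshold=0.5
```

On first run nvinfer converts the ONNX model to a TensorRT engine and caches it next to the model; add `operate-on-gie-id` if the classifier should run as a secondary GIE on a primary detector's output.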
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.