PyTorch model example on DeepStream 5.0

I am interested in just doing an example flow of running a PyTorch model using DeepStream 5.0, before I use my own custom-trained model.

Are there any resources out there that I can use to see how the end-to-end process would work for successfully using a trained PyTorch model with DeepStream 5.0?

You can try using torch2trt to convert the PyTorch model to a TRT engine, and then use it with DeepStream.

I am not sure whether all network architectures convert successfully with this, but most off-the-shelf models like ResNet etc. do. Sample usage instructions are also given in the repo README.

To save you a bit of time, here is their example usage, adjusted with some additional features:

import torch
from torch2trt import torch2trt

# 'model' below is your trained PyTorch model, in eval mode and on the GPU

# Create example data (as far as I can tell, this is just to ascertain the network input dimensions)
x = torch.ones((1, 3, 224, 224)).cuda()

# Convert to TensorRT, feeding the sample data as input (I specified the max_batch_size parameter quite randomly)
model_trt = torch2trt(model, [x], fp16_mode=True, max_batch_size=16)

# Save the TensorRT model, i.e. the serialized CUDA engine
ENGINE_FILE_PATH = "model_trt.engine"  # any path of your choosing
with open(ENGINE_FILE_PATH, "wb") as f:
    f.write(model_trt.engine.serialize())

You can then modify one of the sample configuration files to point to the generated TRT engine and use it with the nvinfer plugin.

You can refer to torch2trt as @smaqbool said.
Also, you can use nvinferserver for PyTorch models if you are using DS 5.0; please refer to https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_details.3.01.html#wwpID0E04CB0HA

Thanks for the helpful tips @smaqbool @bcao. If I convert my PyTorch model to an ONNX file, will the DeepStream module automatically generate a .engine file from that ONNX file, similar to what it does with .etlt files?

I have the same intent. Thanks @smaqbool @bcao for the helpful tips.
I tried torch2trt and it is working. By the TRT engine, do you mean the result of the model_trt.engine.serialize() call? After saving it, can nvinfer use it directly by just modifying the configuration file? Are there any other necessary steps?

I haven’t tried saving any of the engines, so thanks in advance for the instructions.

If you are using DS, the engine file will be generated in the same directory as your model file.
You can refer to https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_details.3.01.html#wwpID0E0OFB0HA and the nvinfer source code, which is open source.
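
For the ONNX route asked about above, a minimal nvinfer config sketch could look like this (model.onnx and the mode values are placeholders, not a tested snippet):

[property]
gpu-id=0
onnx-file=model.onnx
batch-size=1
network-mode=2
gie-unique-id=1

On the first run, nvinfer builds the engine from the ONNX file and writes it next to the model; on later runs you can point model-engine-file at that cached engine to skip the rebuild.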

The code at the end is indeed for saving the model and then using it with nvinfer in DeepStream. Ideally, that should work.

However, I have personally tried and failed to use a classifier (ResNet) like that successfully. In fact, the nvinfer secondary element doesn’t seem to work for my particular pipeline or platform. Even using one of the included classifiers, such as the carcolor one, did not work for me. So the issue must be specific to my pipeline.

UPDATE: The plugin and engines work fine. The problem was that the model output was scores instead of probabilities. Apparently, nvinfer requires probabilities as output to work correctly.

Dear @bcao,
I looked into the config file of deepstream-test1 in the Python apps. The contents are:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
batch-size=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

The problem is that it is a Caffe-based config file. What items should I change if I want to use PyTorch model files ending in .pt or .pth?
Also @smaqbool

Once you have saved the engine as described in my comment (using torch2trt), you need to specify its path in model-engine-file. You can ignore model-file, proto-file, output-blob-names, and int8-calib-file in the case of an engine generated from PyTorch. Based on the pre-processing you used while training the model, you may also want to set net-scale-factor and offsets appropriately.

net-scale-factor=0.0039215697906911373 is basically a division by 255.
offsets=103.939;116.779;123.68 are the per-channel ImageNet mean pixel values. You might want to change these so that they are identical to the values used during training, if any.

If you trained the network on standard normalized pixel values, you should specify both of the above. Otherwise, if you trained on pixel values in the range [0, 255], you should omit offsets and set net-scale-factor=1.
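
To make this concrete, a pared-down classifier config for a torch2trt-generated engine might look something like this (the paths are placeholders for your own files, so treat it as an illustrative sketch rather than a tested config):

[property]
gpu-id=0
# division by 255, see above
net-scale-factor=0.0039215697906911373
# ImageNet mean pixel values; match these to your training pre-processing
offsets=103.939;116.779;123.68
# engine serialized by torch2trt
model-engine-file=model_trt.engine
labelfile-path=labels.txt
batch-size=1
# 2 = FP16, matching fp16_mode=True at conversion time
network-mode=2
# 1 = classifier
network-type=1
gie-unique-id=1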

One additional point that might save you some time: pre-trained models from torchvision usually do not have a softmax layer as output. So, if you want probabilities as output from the network, you should add a softmax layer if it is not already there. I asked the torch2trt team about this and their response was helpful: Coverting torchvision trained models for use with DeepStream · Issue #333 · NVIDIA-AI-IOT/torch2trt · GitHub
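
For illustration, here is a minimal sketch of that wrapping with torch2trt (resnet18 is just an example model; the point is the nn.Sequential wrapping):

import torch
from torch import nn
from torchvision import models
from torch2trt import torch2trt

# Append a softmax layer to a pre-trained torchvision classifier so the
# converted engine outputs probabilities rather than raw scores
model = models.resnet18(pretrained=True).eval().cuda()
model_with_softmax = nn.Sequential(model, nn.Softmax(dim=1)).eval().cuda()

# Convert the wrapped model exactly as before
x = torch.ones((1, 3, 224, 224)).cuda()
model_trt = torch2trt(model_with_softmax, [x], fp16_mode=True, max_batch_size=16)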

@HIVE Have you solved your issue based on @smaqbool’s comments?

Yep, I am trying; I think his answer is very clear. If another issue comes up, I will start a new post. Thanks for your help, guys.

@smaqbool How do I calculate the offsets? I trained using mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225].
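
Here is my back-of-the-envelope attempt based on the explanation above; does this look right?

# My own unverified sketch: nvinfer pre-processes as
# y = net-scale-factor * (x - offsets), with x in [0, 255],
# so the torchvision mean should map to offsets = mean * 255
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

offsets = [m * 255 for m in mean]
print("offsets=" + ";".join(f"{o:.3f}" for o in offsets))  # offsets=123.675;116.280;103.530

# net-scale-factor is a single scalar, so the per-channel std cannot be
# matched exactly; 1 / (255 * average std) seems like a rough approximation
avg_std = sum(std) / len(std)
print(f"net-scale-factor={1 / (255 * avg_std):.6f}")  # roughly 0.01735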