Deploy custom networks on DeepStream

I have a network trained in TensorFlow. How do I deploy it in DeepStream?


Hi,

You can find the instructions for deploying a custom model here:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream_Development_Guide%2Fdeepstream_custom_model.html

We also have a sample for a TensorFlow model.
You can find it in /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD.

Thanks.

I followed the guide on the official website. This is the code that I used to convert the UFF file to an engine:

import numpy as np

import pycuda.driver as cuda
# This import causes pycuda to automatically manage CUDA context creation and cleanup.
import pycuda.autoinit

import tensorrt as trt

model_file = '/home/god/Downloads/recog1.uff'
engine_file_path = "sample.engine"

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine_uff(model_file):
    # You can set the logger severity higher to suppress messages (or lower to display more messages).
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
        # Workspace size is the maximum amount of memory available to the builder while building an engine.
        # It should generally be set as high as possible (1 << 28 is 256 MB).
        builder.max_workspace_size = 1 << 28
        # We need to manually register the input and output nodes for UFF.
        parser.register_input("conv2d_1_input", (3, 25, 15))
        parser.register_output("dense_2/Softmax")
        # Load the UFF model and parse it in order to populate the TensorRT network.
        if not parser.parse(model_file, network):
            raise RuntimeError("Failed to parse the UFF file: " + model_file)
        # Build the engine and serialize it to disk.
        engine = builder.build_cuda_engine(network)
        if engine is None:
            raise RuntimeError("Failed to build the TensorRT engine")
        with open(engine_file_path, "wb") as f:
            f.write(engine.serialize())
        return engine

build_engine_uff(model_file)

However, when testing I am not able to get any output from it. I am trying the model as a secondary classifier in the deepstream-test2 app.

Also, the network that I converted consists of two convolutional layers and a fully connected layer. The model gave over 99% accuracy when tested in TensorFlow.

Hi,

May I know whether your model is a classifier or a detector?
For a detector, you may also need to implement a bounding box parser for a customized output format.

Please check the ‘Custom Output Parsing’ section for more information:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream_Development_Guide%2Fdeepstream_custom_model.html%23wwpID0EJHA

Thanks.

No, the model is not a detector. It is a classifier consisting of two convolutional layers and a fully connected layer, and I am using it for optical character recognition. Also, I am trying to deploy this on a Jetson Nano.

Hi,

Just want to confirm first.

So there is no detection in your use case, and the classifier takes the entire image as input, is that correct?

Thanks.

My current pipeline consists of two detectors and a classifier.

image → detector1 → detector2 → classifier.

The classifier takes the output from detector2 as input, which consists of the whole image plus the metadata (if I am reading the documentation right). I have disabled “process-on-whole-image”, which means the classifier runs the model only on the detected regions. However, I am not able to get any output from the classifier. The classifier works if I use any of the existing TLT models; it does not work when I use my own model.
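For reference, below is a minimal sketch of a secondary-classifier nvinfer config for a UFF model like this one, assuming the input/output names from the conversion script above. Everything here is an assumption to be checked against the DeepStream 4.0 nvinfer documentation and the actual pipeline: the gie-unique-id/operate-on-gie-id values, the labels.txt file, and the preprocessing (net-scale-factor) are placeholders that must match the real setup and training.

[property]
gpu-id=0
# Preprocessing must match training; 1/255 scaling is only an example.
net-scale-factor=0.0039215686
# Model files and I/O names taken from the earlier posts.
uff-file=/home/god/Downloads/recog1.uff
model-engine-file=sample.engine
uff-input-blob-name=conv2d_1_input
uff-input-dims=3;25;15;0
output-blob-names=dense_2/Softmax
# labels.txt is a hypothetical label file with one character class per line.
labelfile-path=labels.txt
# Run as a secondary classifier on objects produced by the second detector.
process-mode=2
is-classifier=1
operate-on-gie-id=2
gie-unique-id=3
classifier-threshold=0.2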

Can you refer to https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps → back-to-back-detectors, and add your classifier after it?

We are currently using the back-to-back detectors code and have added the classifier at the end. However, our custom classifier does not seem to give any output, while a ResNet-10 trained with the official TLT works properly. Hence there is a high chance that something is wrong in the conversion from the TensorFlow model to the UFF file used in DeepStream. We can send the weights if needed.
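For reference, the TensorFlow-to-UFF step can be done with the uff Python package that ships with TensorRT. Below is a minimal sketch assuming a frozen graph (recog1.pb is a placeholder filename) and the output node name used in the conversion script above; if the output node name does not match the actual graph, the resulting UFF and engine can silently produce no usable output.

import uff

# Convert a frozen TensorFlow graph (.pb) to UFF.
# "recog1.pb" is a placeholder path; the output node must match the real graph.
uff.from_tensorflow_frozen_model(
    frozen_file="recog1.pb",
    output_nodes=["dense_2/Softmax"],
    output_filename="recog1.uff",
)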

You can try to deploy your converted model with TensorRT directly to debug it.

ChrisDing, can you please elaborate on the previous comment? It would be helpful.

Do you know TensorRT? It has a UFF parser. You can refer to the TensorRT samples to deploy your classifier UFF model, and then deploy it with DeepStream.
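For example, here is a minimal sketch, following the pattern of the TensorRT Python samples, that deserializes the sample.engine built above and runs one inference so the classifier scores can be inspected outside DeepStream. The random input is only a placeholder; a real test crop has to be preprocessed exactly as during training (CHW layout, same scaling).

import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine built by the conversion script above.
with open("sample.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

with engine.create_execution_context() as context:
    # Host and device buffers for the single input and single output binding.
    h_input = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(0)), dtype=np.float32)
    h_output = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(1)), dtype=np.float32)
    d_input = cuda.mem_alloc(h_input.nbytes)
    d_output = cuda.mem_alloc(h_output.nbytes)
    stream = cuda.Stream()

    # Placeholder input: replace with a real, preprocessed character crop (CHW, float32).
    h_input[:] = np.random.random(h_input.shape).astype(np.float32)

    cuda.memcpy_htod_async(d_input, h_input, stream)
    context.execute_async(bindings=[int(d_input), int(d_output)], stream_handle=stream.handle)
    cuda.memcpy_dtoh_async(h_output, d_output, stream)
    stream.synchronize()

    print("class scores:", h_output)
    print("predicted class:", int(np.argmax(h_output)))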

What if I have multiple outputs? How do I define this line then?
parser.register_output("dense_2/Softmax")
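Would it just be one register_output call per output node, for example like this (the second node name below is only a placeholder)?

parser.register_output("dense_2/Softmax")
parser.register_output("second_output_node")  # placeholder name for a second output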

Hi swchew5649,

Please open a new topic for your issue. Thanks.