Custom Model Inference

• Hardware Platform: Jetson Orin NX
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1.0

I have an anomaly detection model which gives reconstructed image as output.
I am able to convert my .pb model to .onnx using GitHub - onnx/tensorflow-onnx: Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX
my pipeline:
gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=YUY2, height=1080, width=1920' ! nvvideoconvert ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvvideoconvert ! nvdspreprocess config-file=/opt/nvidia/deepstream/deepstream-6.2/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/config_infer_primary.txt ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! xvimagesink sync=false

My model gets built but there is no reconstructed image as output.
I set output-tensor-meta=1, input-tensor-meta=1 and network-type=100 in my nvinfer config file.
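For reference, the nvinfer settings described above would look roughly like this in the config file. This is a sketch based on the gst-nvinfer documentation, not the poster's actual file; the model path is a placeholder, and note that depending on the DeepStream version, input-tensor-meta may need to be set as a property on the nvinfer element itself rather than in the config file:

```ini
[property]
onnx-file=model.onnx     # placeholder: path to the converted ONNX model
batch-size=1
network-type=100         # 100 = custom/other: skip built-in post-processing
output-tensor-meta=1     # attach raw output tensors as NvDsInferTensorMeta
input-tensor-meta=1      # consume preprocessed tensors from nvdspreprocess
```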

What am I missing?

You can refer to the documentation to modify the relevant configuration. I don’t know the details of your model, but I suspect that when the model directly outputs a tensor, nvinfer cannot convert it into the corresponding metadata by itself.

https://docs.nvidia.com/metropolis/deepstream/6.2/dev-guide/text/DS_plugin_gst-nvinfer.html

I am using an autoencoder model.
I have provided my config file, pipeline and model.
Issue: My model is getting built, but I am not getting the output, i.e. the reconstructed image.

Pipeline: gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=YUY2, width=1920, height=1080' ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=NV12' ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1920 ! nvvideoconvert ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt ! nvvideoconvert ! nvdsosd ! xvimagesink sync=false

model.zip (2.4 MB)

hey, any update?

I’m afraid gst-launch alone can’t meet your requirements.

You need to add post-processing for nvinfer’s output, and then combine the output tensor with the original image.

Finally, replace the image in NvBufSurface with the output image so that it can be displayed.

You can refer to deepstream-infer-tensor-meta-test as a starting point.
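In Python, the equivalent starting point is a pad probe on the nvinfer src pad that pulls the raw output tensor out of NvDsInferTensorMeta. The sketch below follows the pattern of the deepstream-ssd-parser sample; the assumption that the model has a single float32 output layer holding the reconstructed image, and the 256x256x3 shape in the comment, are placeholders for this particular model, not facts from the thread. The pyds import is kept inside the probe so the NumPy helper is usable on its own:

```python
import ctypes
import numpy as np

def tensor_to_image(flat, height, width, channels):
    """Reshape a flat float32 model output into an HxWxC uint8 image.
    Assumes the autoencoder emits values in [0, 1]; adjust the
    denormalization to match your training pipeline."""
    img = np.asarray(flat, dtype=np.float32).reshape(height, width, channels)
    return (np.clip(img, 0.0, 1.0) * 255.0).astype(np.uint8)

def tensor_probe(pad, info, user_data):
    """Pad probe on the nvinfer src pad that extracts the raw output
    tensor. pyds calls follow the deepstream-ssd-parser sample; the
    single-output-layer layout is an assumption about this model."""
    import pyds  # imported here so the helper above stays importable
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)  # first output layer
                n_elems = layer.inferDims.numElements
                ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                  ctypes.POINTER(ctypes.c_float))
                flat = np.ctypeslib.as_array(ptr, shape=(n_elems,))
                # e.g. for a hypothetical 256x256x3 reconstruction:
                # image = tensor_to_image(flat, 256, 256, 3)
            l_user = l_user.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

Whether the buffer is in NCHW or NHWC order depends on how the ONNX model was exported, so verify the layout before reshaping.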

Can I refer to any app from deepstream_python_apps for this?
Will NvDsInferTensorMeta at least help me get a NumPy array of my output (reconstructed) image?

deepstream-ssd-parser is an example of using Python to process tensors.

You need to parse the tensor yourself, because nvinfer does not know how to process it.

model.zip (2.4 MB)


I’m now using the deepstream_infer_tensor_meta_test sample app. I have attached my model, code, and config file, as well as the output image.
I have created a custom parser function (NvDsInferParseCustomOnnx) in nvdsinfer_custombboxparser.cpp.
My model gets built, but I do not get the reconstructed image, even though I have tried to parse the tensor.

What am I doing wrong?

If you have successfully parsed the tensor, first try to reconstruct a new image from the tensor and the original image, then save the new image as a file.
How to use the tensor and the original image to reconstruct a new image depends only on your model, so we cannot provide more help on this.

Don’t try to display it directly yet; that’s a separate question.

deepstream_imagedata-multistream.py shows how to extract the original image.
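Once the reconstructed frame exists as a NumPy array, saving it to a file for inspection needs no DeepStream machinery at all. A minimal sketch: the PPM format is used here only because it requires no image library; in an environment with OpenCV (as used by deepstream_imagedata-multistream.py) you would call cv2.imwrite instead:

```python
import numpy as np

def save_ppm(path, image):
    """Write an HxWx3 uint8 RGB array as a binary PPM (P6) file.
    PPM is chosen only to avoid an image-library dependency."""
    h, w, c = image.shape
    assert c == 3 and image.dtype == np.uint8
    with open(path, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (w, h))  # PPM header: magic, size, maxval
        f.write(image.tobytes())               # raw interleaved RGB rows
```

Most image viewers open PPM directly, which makes it convenient for the "save it as a file first" debugging step suggested above.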

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

I think tensor + original image = reconstructed image.

Now that you have the tensor and the original image, you can get the reconstructed image based on the same processing you used when training the model.
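For an anomaly-detection autoencoder, the usual way to combine the two is a per-pixel reconstruction error. A minimal NumPy sketch, assuming both frames are same-shaped HxWxC uint8 arrays; the 0.1 threshold is an arbitrary placeholder to tune per model, not a value from this thread:

```python
import numpy as np

def anomaly_map(original, reconstructed):
    """Per-pixel absolute reconstruction error, scaled to [0, 1].
    Both inputs are HxWxC uint8 arrays of the same shape."""
    diff = np.abs(original.astype(np.float32) - reconstructed.astype(np.float32))
    err = diff.mean(axis=2)  # average over channels -> HxW error map
    return err / 255.0

def is_anomalous(original, reconstructed, threshold=0.1):
    """Flag the frame if the mean reconstruction error exceeds a
    threshold; 0.1 is a placeholder to calibrate on normal data."""
    return float(anomaly_map(original, reconstructed).mean()) > threshold
```

The error map can also be colorized and blended back onto the frame in NvBufSurface for display, which is the final step described earlier in the thread.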

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.