**• Hardware Platform:** Jetson Orin NX
**• DeepStream Version:** 6.2
**• JetPack Version (valid for Jetson only):** 5.1.0
I have an anomaly detection model which outputs a reconstructed image.
I was able to convert my .pb model to .onnx using tf2onnx (GitHub - onnx/tensorflow-onnx).
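For reference, a typical tf2onnx conversion command looks like the following; the graph input/output tensor names here are placeholders and must match your model:

```
python -m tf2onnx.convert --graphdef model.pb \
    --inputs input_1:0 --outputs reconstruction:0 \
    --output autoencoder.onnx
```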
My pipeline:

```
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  'video/x-raw, format=YUY2, height=1080, width=1920' ! \
  nvvideoconvert ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvvideoconvert ! \
  nvdspreprocess config-file=/opt/nvidia/deepstream/deepstream-6.2/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt ! \
  nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/config_infer_primary.txt ! \
  nvvideoconvert ! nvdsosd ! nvvideoconvert ! xvimagesink sync=false
```
My model gets built but there is no reconstructed image as output.
I set output-tensor-meta=1, input-tensor-meta=1 and network-type=100 in my nvinfer config file.
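For reference, the relevant part of a gst-nvinfer config with those settings might look like the sketch below; the model file names are placeholders:

```
[property]
gpu-id=0
# Placeholder model files
onnx-file=autoencoder.onnx
model-engine-file=autoencoder.onnx_b1_gpu0_fp16.engine
# 100 = "other": nvinfer skips its built-in detector/classifier parsing
network-type=100
# Attach the raw output tensors to the buffer as NvDsInferTensorMeta
output-tensor-meta=1
# Consume the preprocessed tensors attached by nvdspreprocess
input-tensor-meta=1
```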
You can refer to the documentation to modify the relevant configuration. I don't know the details of your model, but when nvinfer outputs raw tensors directly (network-type=100), it cannot convert them into the corresponding metadata itself.
I am using an autoencoder model.
I have provided my config file, pipeline and model.
Issue: my model gets built, but I am not getting the output, i.e. the reconstructed image.
Can I refer to any app from deepstream_python_apps for this?
Will NvDsInferTensorMeta at least help me get a numpy array of my output (reconstructed) image?
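For reference, the tensor meta attached by output-tensor-meta=1 can be read into numpy from a pad probe on the nvinfer src pad. Below is a minimal sketch following the deepstream_python_apps pattern; the output shape OUT_H/OUT_W/OUT_C and the single output layer are assumptions that must match your autoencoder:

```python
import ctypes
import numpy as np
import pyds
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

# Assumed output geometry of the autoencoder -- adjust to your model.
OUT_H, OUT_W, OUT_C = 1080, 1920, 3

def pgie_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == \
                    pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                # Layer 0 is assumed to be the reconstructed-image output.
                layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)
                ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                  ctypes.POINTER(ctypes.c_float))
                # Copy the attached output buffer into a host numpy array.
                recon = np.ctypeslib.as_array(
                    ptr, shape=(OUT_H, OUT_W, OUT_C)).copy()
                # 'recon' now holds the raw reconstructed-image tensor.
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

Attach it in a Python app with `pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0)`; gst-launch-1.0 alone cannot run probes.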
I'm now using the deepstream_infer_tensor_meta_test sample app. I have attached my model, code, and config file, as well as the output image.
I have created a custom parser function (NvDsInferParseCustomOnnx) in nvdsinfer_custombboxparser.cpp
My model gets built but I do not get the reconstructed image. I have tried to parse the tensor.
If you have successfully parsed the tensor, first try to reconstruct a new image from the tensor and the original image, then save the new image as a file.
How to use the tensor and the original image to reconstruct a new image depends entirely on your model; we cannot provide more help on this.
Don't try to display it directly yet; that's a separate question.
deepstream_imagedata-multistream.py shows how to extract the original image.
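The key pattern from that sample, for reference; it requires converting the stream to RGBA (e.g. `nvvideoconvert ! 'video/x-raw(memory:NVMM), format=RGBA'`) upstream of the probe:

```python
import cv2
import numpy as np
import pyds

# Inside the same probe, once frame_meta is available:
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
orig = np.array(n_frame, copy=True, order="C")  # RGBA host copy
cv2.imwrite("frame_%04d.jpg" % frame_meta.frame_num,
            cv2.cvtColor(orig, cv2.COLOR_RGBA2BGR))
# On Jetson, release the mapping when done:
pyds.unmap_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
```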
I think tensor + original image = reconstructed image.
Now that you have the tensor and the original image, you can get the reconstructed image based on the processing you applied when training the model.
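A minimal sketch of that last step, assuming the model was trained on RGB frames normalized to [0, 1]; if your training pipeline normalized differently, invert that transform instead:

```python
import cv2
import numpy as np

# 'recon' is the float tensor and 'orig' the RGBA frame from the probes above.
# Undo the assumed [0, 1] normalization to get a displayable image.
recon_u8 = np.clip(recon * 255.0, 0, 255).astype(np.uint8)
cv2.imwrite("reconstructed.png", cv2.cvtColor(recon_u8, cv2.COLOR_RGB2BGR))

# Per-pixel reconstruction error: the usual anomaly signal for an autoencoder.
orig_rgb = cv2.resize(orig[:, :, :3], (recon.shape[1], recon.shape[0]))
error_map = np.mean((orig_rgb.astype(np.float32) / 255.0 - recon) ** 2, axis=-1)
cv2.imwrite("anomaly_map.png",
            (255.0 * error_map / max(error_map.max(), 1e-6)).astype(np.uint8))
```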