Please provide complete information as applicable to your setup.
**• Hardware Platform (Jetson / GPU)** Jetson Nano
**• DeepStream Version** 6.0.1
**• JetPack Version (valid for Jetson only)** 4.6.1
**• TensorRT Version**
**• NVIDIA GPU Driver Version (valid for GPU only)**
**• Issue Type (questions, new requirements, bugs)**
**• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)**
**• Requirement details (This is for a new requirement. Include the module name, i.e. for which plugin or which sample application, and the function description.)**
So I first trained a Keras InceptionResnetV2 model for binary classification. I then converted that model to TFLite, used the tf2onnx converter to convert the TFLite model to ONNX, and finally converted the ONNX model into a TensorRT engine. I'm using this TensorRT engine for inference in the deepstream-test2 Python app. Although the app runs without any errors, it does not display any SGIE output on the video. Does DeepStream not support the model I've used? Do I need to make changes to the output layer? Any help would be really appreciated.
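For reference, a minimal sketch of that conversion chain, assuming tf2onnx's `--tflite` input mode and the trtexec binary that ships with TensorRT on JetPack; the file names and opset are placeholders to adapt to your setup:

```
# TFLite -> ONNX (tf2onnx can read a .tflite file directly via --tflite)
python3 -m tf2onnx.convert --tflite model.tflite --output model.onnx --opset 13

# ONNX -> TensorRT engine (trtexec is bundled with TensorRT on JetPack)
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=gender.engine --fp16
```

Note that an engine built this way is specific to the TensorRT version and GPU it was built on.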
You can test your model first, then test deepstream-test2. Please refer to DeepStream SDK FAQ - #25 by fanzh, "[DSx_All_App] How to use classification model as pgie?"; it is a sample for testing a classification model.
gst-launch-1.0 filesrc location=/home/apxnano3/Desktop/tensorrt_face/data/test.jpg ! jpegdec ! videoconvert ! video/x-raw,format=I420 ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=./dstest_appsrc_config.txt ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert ! video/x-raw,format=I420 ! jpegenc ! filesink location=out.jpg
Unknown or legacy key specified ‘is-classifier’ for group [property]
Setting pipeline to PAUSED …
0:00:51.105947846 24378 0x5579b98000 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/apxnano3/Documents/converting_trt/gender.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT serving_default_input_1:0 3x160x160
1 OUTPUT kFLOAT StatefulPartitionedCall:0 1
0:00:51.106846098 24378 0x5579b98000 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/apxnano3/Documents/converting_trt/gender.engine
0:00:51.271711691 24378 0x5579b98000 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:./dstest_appsrc_config.txt sucessfully
Pipeline is PREROLLING …
ERROR: from element /GstPipeline:pipeline0/GstJpegDec:jpegdec0: No valid frames decoded before end of stream
Additional debug info:
gstvideodecoder.c(1161): gst_video_decoder_sink_event_default (): /GstPipeline:pipeline0/GstJpegDec:jpegdec0:
no valid frames found
ERROR: pipeline doesn’t want to preroll.
Setting pipeline to NULL …
Freeing pipeline …
From the error "No valid frames decoded before end of stream", decoding test.jpg failed. You can check with: gst-launch-1.0 filesrc location=/home/apxnano3/Desktop/tensorrt_face/data/test.jpg ! jpegdec ! fakesink. If test.jpg is ok, please share this jpg.
I tried another image and the command worked. deepstream-test2 (the Python app) works perfectly with the default configuration, but when I change the configuration for my model it doesn't produce the classifier's output.
Please modify the configuration file according to your model's preprocessing and postprocessing (see the sketch below). Did you test your model with the gst-launch command? Does the whole command above give the right output?
If "the command worked fine and gave me the correct output", then your model is supported by DeepStream.
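For illustration, a hedged sketch of the [property] keys that typically need to match a Keras-style binary classifier. Every value below is an assumption inferred from the engine log earlier in this thread, not taken from the actual dstest_appsrc_config.txt:

```
[property]
# engine path and output layer name taken from the log above
model-engine-file=/home/apxnano3/Documents/converting_trt/gender.engine
output-blob-names=StatefulPartitionedCall:0
# network-type=1 marks the model as a classifier; it replaces the legacy
# is-classifier key that the log reported as "Unknown or legacy key"
network-type=1
# Keras InceptionResnetV2 preprocessing typically scales pixels to [-1, 1];
# these values assume that scheme and must match your training pipeline
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=0
# with a single sigmoid output, lower this if no label is ever attached
classifier-threshold=0.5
```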
Does your model recognize people? If so, operate-on-class-ids should be 2; please refer to /opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/labels.txt.
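A minimal sketch of the matching SGIE keys, assuming the default sample detector as PGIE (its labels.txt lists Car;Bicycle;Person;Roadsign, so Person is class id 2):

```
# SGIE [property] section (sketch)
process-mode=2          # run on detected objects, not on full frames
operate-on-gie-id=1     # objects coming from the primary detector (UID 1)
operate-on-class-ids=2  # "Person" per Primary_Detector/labels.txt
```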
In the link that you provided, am I supposed to add the provided file to the /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/ directory?
No. You need to port the modifications from dump_infer_input_to_file.patch.txt, then build and replace the old .so. With this method you can check whether the input tensor to the second model is correct.
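A hedged sketch of that build-and-replace step on a default Jetson install; CUDA_VER=10.2 matches JetPack 4.6.1, and the paths assume the stock DeepStream 6.0.1 layout:

```
cd /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer
# port the changes from dump_infer_input_to_file.patch.txt into the sources, then:
sudo CUDA_VER=10.2 make
# back up the original library before replacing it
sudo cp /opt/nvidia/deepstream/deepstream/lib/libnvds_infer.so{,.bak}
sudo cp libnvds_infer.so /opt/nvidia/deepstream/deepstream/lib/
```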
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.