Sample for a simple classification DeepStream 6 app

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson NX
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6

I’ve retrained an .etlt classification model with TAO, using a training dataset image size of 224x224, and I’m trying to test it with a DeepStream app by feeding in some custom images. But I didn’t see any samples there that use classification as the PGIE; could you help?

Is it possible to run your model with the pipeline below?

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! \
h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvvideoconvert ! \
nvdspreprocess config-file=/opt/nvidia/deepstream/deepstream-6.0/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt ! \
nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_infer_primary.txt \
input-tensor-meta=1 batch-size=7 ! nvmultistreamtiler width=1920 height=1080 ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink

Thanks, but my inputs are a bunch of image files with various resolutions, not a video file.

Could you help modify it a bit to support that?

It is just one example of using nvinfer. Please check the GStreamer documentation for more GStreamer-related questions.

GStreamer
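For reference, a minimal sketch of an image-input variant of the pipeline above, assuming the images are sequentially numbered JPEG files (the multifilesrc pattern, paths, and config file are placeholders, not from this thread):

gst-launch-1.0 multifilesrc location=/path/to/images/img_%04d.jpg caps=image/jpeg ! jpegparse ! \
nvv4l2decoder mjpeg=1 ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
nvinfer config-file-path=/path/to/config_infer_primary.txt ! \
nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink

nvstreammux scales every frame to the width/height set on it, which takes care of the mixed input resolutions.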

Thanks.
Just wondering, is there any specific reason the DeepStream built-in samples don’t include a classification-as-PGIE sample, or did I miss something? (I do see several classification samples, but they work as SGIE.)

I’ll look into GStreamer, though there’s a gap, as I’m quite new to it.

Can you refer to this sample?
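For context, what makes a classifier run as the PGIE is mostly nvinfer configuration rather than a dedicated sample: network-type=1 selects a classifier and process-mode=1 runs it on the full frame. A minimal sketch of the relevant [property] keys, with placeholder paths and model key (not the actual sample config):

[property]
# run as the primary GIE on the full frame
process-mode=1
# network-type 1 = classifier
network-type=1
tlt-encoded-model=/path/to/your_model.etlt
tlt-model-key=<your TAO key>
labelfile-path=/path/to/labels.txt
infer-dims=3;224;224
classifier-threshold=0.2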

Thanks.
I’ve built the built-in tao_classifier sample from the repo suggested above, copied in my 2-class classification .etlt model, and updated pgie_multi_task_tao_config.txt to use my model, but I got an error when running it:

eow@jtsNX:~/deepstream_tao_apps/apps/tao_classifier$ ./ds-tao-classifier -c ../../configs/multi_task_tao/pgie_multi_task_tao_config.txt -i data/train/bicycle/bicycle_000114_6460ff2157d9bcdc_282_459.jpg
Now playing: ../../configs/multi_task_tao/pgie_multi_task_tao_config.txt
Opening in BLOCKING MODE 
ERROR: Deserialize engine failed because file path: /home/eow/deepstream_tao_apps/models/classification_2_class_bic/abc.etlt_b1_gpu0_fp16.engine open error
0:00:02.439849221 21461   0x5589337ef0 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/home/eow/deepstream_tao_apps/models/classification_2_class_bic/abc.etlt_b1_gpu0_fp16.engine failed
0:00:02.439989190 21461   0x5589337ef0 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/home/eow/deepstream_tao_apps/models/classification_2_class_bic/abc.etlt_b1_gpu0_fp16.engine failed, try rebuild
0:00:02.440028870 21461   0x5589337ef0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Output error: Output season/Softmax not found
parseModel: Failed to parse UFF model
ERROR: Failed to build network, error in model parsing.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:03.372454426 21461   0x5589337ef0 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
terminate called after throwing an instance of 'nvinfer1::InternalError'
  what():  Assertion mRefCount > 0 failed.
Aborted (core dumped)

I guess it’s caused by the difference between my model’s output and the sample model’s. My model simply outputs 1 of 2 classes: bicycle or electric_bicycle. Could you help?
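That reading matches the log: the deserialize messages are only warnings (nvinfer falls back to rebuilding from the .etlt), but "Output season/Softmax not found" means output-blob-names in pgie_multi_task_tao_config.txt still names the multi-task sample’s output layer, which this model doesn’t have. A sketch of the entries to adjust, assuming the usual TAO classification output layer name predictions/Softmax (verify it against your exported model); note that a DeepStream classifier label file puts the labels semicolon-separated on one line (bicycle;electric_bicycle):

output-blob-names=predictions/Softmax
tlt-encoded-model=/home/eow/deepstream_tao_apps/models/classification_2_class_bic/abc.etlt
labelfile-path=/path/to/labels_2_class.txt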

There has been no update from you for a period, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Did you build the OSS plugin library?

deepstream_tao_apps/README.md at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub
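Roughly, per that README, rebuilding and installing the TensorRT OSS nvinfer_plugin looks like the sketch below; the branch, GPU_ARCHS (72 is Xavier NX), and the installed .so version are placeholders to check against the README for your JetPack/TensorRT versions:

git clone -b <branch-matching-your-TensorRT> https://github.com/NVIDIA/TensorRT.git
cd TensorRT && git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DGPU_ARCHS=72 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)
# back up the stock plugin first, then install the rebuilt one (match your TensorRT version)
sudo cp libnvinfer_plugin.so.8.* /usr/lib/aarch64-linux-gnu/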
