Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson NX
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs):
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)
I’ve retrained a .etlt classification model with TAO, using a training dataset image size of 224×224, and I’m trying to test it in a DeepStream app by feeding in some custom images. But I didn’t see any sample there that uses a classification model as the PGIE. Could you help?
thanks.
Just wondering, is there any specific reason the DeepStream built-in samples don’t include a classification-as-PGIE sample, or did I miss something? (I do see several classification samples, but they work as SGIE.)
I’ll look into GStreamer, though there’s a learning gap since I’m quite new to it.
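For what it’s worth, a classifier can run as the PGIE simply by setting `process-mode=1` (full-frame inference) in the nvinfer config. A minimal gst-launch pipeline sketch is below; the file paths and `pgie_classifier_config.txt` are placeholders, not files from this thread, and the exact elements depend on your DeepStream version:

```
# Sketch: run a classifier as PGIE on a single JPEG (Jetson, DeepStream 6.0).
# pgie_classifier_config.txt is a hypothetical nvinfer config with process-mode=1.
gst-launch-1.0 filesrc location=test.jpg ! jpegdec ! videoconvert ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=pgie_classifier_config.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```

The key point is that nothing in nvinfer forbids a classifier as PGIE; the built-in samples just happen to pair a detector PGIE with classifier SGIEs.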
thanks.
I’ve built the built-in tao_classifier sample from the repo suggested above, copied in my 2-class classification .etlt model, and updated pgie_multi_task_tao_config.txt to use my model, but got an error when running it:
eow@jtsNX:~/deepstream_tao_apps/apps/tao_classifier$ ./ds-tao-classifier -c ../../configs/multi_task_tao/pgie_multi_task_tao_config.txt -i data/train/bicycle/bicycle_000114_6460ff2157d9bcdc_282_459.jpg
Now playing: ../../configs/multi_task_tao/pgie_multi_task_tao_config.txt
Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /home/eow/deepstream_tao_apps/models/classification_2_class_bic/abc.etlt_b1_gpu0_fp16.engine open error
0:00:02.439849221 21461 0x5589337ef0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/home/eow/deepstream_tao_apps/models/classification_2_class_bic/abc.etlt_b1_gpu0_fp16.engine failed
0:00:02.439989190 21461 0x5589337ef0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/home/eow/deepstream_tao_apps/models/classification_2_class_bic/abc.etlt_b1_gpu0_fp16.engine failed, try rebuild
0:00:02.440028870 21461 0x5589337ef0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Output error: Output season/Softmax not found
parseModel: Failed to parse UFF model
ERROR: Failed to build network, error in model parsing.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:03.372454426 21461 0x5589337ef0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
terminate called after throwing an instance of 'nvinfer1::InternalError'
what(): Assertion mRefCount > 0 failed.
Aborted (core dumped)
I guess it’s caused by the output layer of my model differing from the sample model’s. My model simply outputs one of two classes: bicycle or electric_bicycle. Could you help?
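That reading of the log looks right: `UffParser: Output error: Output season/Softmax not found` means the config still lists the multi-task sample’s output blob name, which doesn’t exist in a retrained 2-class classifier. As a sketch, the relevant nvinfer properties would look like the following; the blob name `predictions/Softmax` is an assumption here and must be replaced with the output name printed by your own TAO export, and the key placeholder is deliberately left unfilled:

```
# Hypothetical nvinfer config fragment for a TAO classifier as PGIE.
# output-blob-names below is a placeholder; use the name from your
# TAO export log, not this value.
[property]
tlt-encoded-model=../../models/classification_2_class_bic/abc.etlt
tlt-model-key=<your-tao-key>
infer-dims=3;224;224
output-blob-names=predictions/Softmax
network-type=1
process-mode=1
classifier-threshold=0.2
```

`network-type=1` tells nvinfer the model is a classifier, and `process-mode=1` makes it operate on the full frame as a PGIE.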
There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks