Custom Model ~> Core Dumped

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson
• DeepStream Version: 6.0
• JetPack Version: 4.6
• TensorRT Version:
• Issue Type: bug
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the content of the configuration files, the command line used, and any other details needed to reproduce the issue.)

I am trying to test a custom DetectNet_v2 model, retrained with the TAO SDK, on the deepstream-test3 Python app. With the pre-set model the pipeline works, but with my model it fails with a core-dumped error.

I tried both converting the model with trt-converter and passing the raw .etlt to the pipeline; both failed.
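For reference, a typical TAO/TLT converter invocation for a DetectNet_v2 .etlt follows the shape sketched below. The key, input dimensions, and file names here are placeholders for illustration, not values taken from this thread; only the output node names (output_cov/Sigmoid, output_bbox/BiasAdd) are the standard DetectNet_v2 heads.

```shell
# Sketch of a typical tao-converter call for a DetectNet_v2 .etlt model.
# All values are placeholders: -k is the key used when exporting the model,
# -d is the C,H,W input size, -o lists the DetectNet_v2 output nodes,
# -t is the precision and -e the path of the generated TensorRT engine.
./tao-converter \
  -k "<your_model_key>" \
  -d 3,544,960 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -t fp16 \
  -b 1 \
  -e vcellphoneNet_v1.engine \
  vcellphoneNet_v1.etlt
```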

I am attaching the config files and the .etlt model so the error can be reproduced.

labels.txt (10 Bytes)
nvinfer_config.txt (265 Bytes)
vcellphoneNet_v1.etlt (42.8 MB)

Could you share the error log as well?

Thanks for your reply, here it is.

errorLog.txt (5.0 KB)

From the log, the TRT engine (vcellphoneNet_v1.etlt_b1_gpu0_fp16.engine) has been generated successfully, so I don’t think the crash is caused by the TLT model.
But it ends with “Segmentation fault (core dumped)”. Could you run your app under “gdb --args …” and capture the backtrace for more clues?

I have never used gdb before, so let me know if this is what you meant.

gdb --args python3 rtsp://admin:vanguard123*@
GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "aarch64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
Find the GDB manual and other documentation resources online at:
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from python3...(no debugging symbols found)...done.
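In addition to gdb, Python’s built-in faulthandler module can print the Python-level stack when the interpreter receives SIGSEGV, which often narrows down which pipeline call triggered the crash. A minimal sketch (stdlib-only, independent of DeepStream):

```python
# Minimal sketch: enable faulthandler before building the pipeline so that a
# "Segmentation fault (core dumped)" also dumps the Python traceback to stderr.
import faulthandler

faulthandler.enable()  # installs handlers for SIGSEGV, SIGFPE, SIGABRT, SIGBUS

# ... normal deepstream-test3 pipeline setup would follow here ...
print("faulthandler enabled:", faulthandler.is_enabled())  # → faulthandler enabled: True
```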

What made me think the model is the problem is that the pipeline works as usual if I use the pre-defined model (primary detector).

Hi @AndresGodoy ,
Sorry for delay!
It looks like your nvinfer_config.txt is not correct.

Could you refer to the config_infer_primary_*.txt files under deepstream_reference_apps/deepstream_app_tao_configs at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub to correct your nvinfer_config.txt?
Or could you share the exact steps to reproduce this in DeepStream?
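For reference, a DetectNet_v2-style nvinfer config in those reference apps generally looks like the sketch below. The model key, input dimensions, class count, and file names here are assumptions for this particular model and must be adjusted; the input/output tensor names are the standard ones for DetectNet_v2.

```ini
[property]
gpu-id=0
net-scale-factor=0.0039215686274509803
# Assumptions: adjust key, dims, paths, and class count for your model
tlt-model-key=tlt_encode
tlt-encoded-model=vcellphoneNet_v1.etlt
labelfile-path=labels.txt
# Standard DetectNet_v2 input/output tensor names
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
infer-dims=3;544;960
uff-input-order=0
batch-size=1
network-mode=2
num-detected-classes=1
gie-unique-id=1
cluster-mode=2

[class-attrs-all]
pre-cluster-threshold=0.2
```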


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.