Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson
• DeepStream Version: 6.0
• JetPack Version: 4.6
• TensorRT Version: 8.0.1.6
• Issue Type: bug
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the contents of the configuration files, the command line used, and any other details needed to reproduce.)
I am trying to test a custom DetectNet_v2 model, retrained with the TAO SDK, in the Python deepstream-test3 app. With the pre-set model the pipeline works, but with my model it fails with a "core dumped" error.
I tried both converting the model with tao-converter and passing the raw .etlt file directly to the pipeline; both failed.
I am attaching the config files and the .etlt model so the error can be reproduced.
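For reference, a typical tao-converter invocation for a DetectNet_v2 .etlt looks roughly like the sketch below. The key, input dimensions, and file paths are placeholders I am assuming for illustration; they must match your own training/export settings, not values taken from this thread:

```shell
# Sketch of a tao-converter call for a DetectNet_v2 .etlt model.
# Placeholders/assumptions:
#   -k  the encryption key used when the model was exported from TAO
#   -d  input dimensions in C,H,W order (must match the training spec)
#   -o  the standard DetectNet_v2 output tensor names
#   -t  build an FP16 engine
#   -e  where to write the generated TensorRT engine
./tao-converter vcellphoneNet_v1.etlt \
  -k YOUR_TAO_KEY \
  -d 3,544,960 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -t fp16 \
  -e vcellphoneNet_v1_b1_gpu0_fp16.engine
```

If you pre-build the engine this way, point model-engine-file in the nvinfer config at the generated .engine so DeepStream does not rebuild it at startup.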
From the log, the TensorRT engine vcellphoneNet_v1.etlt_b1_gpu0_fp16.engine has been generated successfully, so I don’t think the crash is caused by the TLT model.
But since it ends with “Segmentation fault (core dumped)”, could you run your app under gdb (“gdb --args …”) and capture the backtrace for more clues?
I have never used gdb before, so let me know if this is what you meant.
gdb --args python3 deepstream_test_3.py rtsp://admin:vanguard123*@192.168.0.18/Streaming/Channels/1
GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "aarch64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from python3...(no debugging symbols found)...done.
What made me suspect the model is that if I use the pre-defined model (the primary detector), the pipeline works as usual.