Custom-trained DetectNet_v2 model and deployment to Jetson Nano

Hi, I have trained a custom DetectNet_v2 LPDNet model using the notebook from the TAO Launcher starter kit and created the .tlt model. I'm using Ubuntu 20.04 on WSL2 with an RTX 3060 GPU.
This is the result from training process:

When I try to deploy it on the Jetson Nano with deepstream-app, I get an error like this:

Any ideas?
• Hardware Platform (Jetson Nano)
• DeepStream Version 6.0.1
• JetPack Version 4.6.1
• TensorRT Version 8.2.1
• CUDA 10.2

Can you share the full log and config file?

Sure, the config is similar to the LPDNet US one:
lpd_id_config.txt (1012 Bytes)

Please read the log: it can't find the .etlt file. Please check whether /home/irumadev/Downloads/expe/deepstream_lpr_app/deepstream-lpr-app/resnet_lpd/resnet18_detector_pruned_lpd.etlt exists on your machine.
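A quick way to confirm this before launching deepstream-app is a minimal shell check (using the path from the log above; adjust it if your config points elsewhere):

```shell
# Check that the exported model referenced in the config actually exists.
MODEL=/home/irumadev/Downloads/expe/deepstream_lpr_app/deepstream-lpr-app/resnet_lpd/resnet18_detector_pruned_lpd.etlt
if [ -f "$MODEL" ]; then
    echo "found: $MODEL"
else
    echo "missing: $MODEL (fix the config path or re-run the export)"
fi
```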

I already tried that too, and these are the results:

Could it be that the .etlt file failed during the export from .tlt to .etlt?
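If the export did fail, re-running it is cheap. A hedged sketch of the TAO export step (the file names are taken from this thread and KEY is an assumption; it must be the same encryption key used during training):

```shell
# Re-export the pruned/retrained .tlt model to .etlt with the TAO launcher.
# KEY and the file names below are assumptions -- adjust to your experiment.
KEY="${KEY:-nvidia_tlt}"
EXPORT_CMD="tao detectnet_v2 export -m resnet18_detector_pruned_lpd.tlt -k $KEY -o resnet18_detector_pruned_lpd.etlt"
if command -v tao >/dev/null 2>&1; then
    $EXPORT_CMD
else
    # TAO launcher not on this machine; print the command to run in the TAO env.
    echo "run inside the TAO environment: $EXPORT_CMD"
fi
```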


You set


That is not expected.
Please set it to an .etlt file.
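For reference, in the nvinfer config the TAO model is referenced through `tlt-encoded-model`, not a .tlt path. A minimal sketch (the paths are examples, and `nvidia_tlt` is the default key for the pretrained LPDNet; a custom-trained model uses whatever key you trained with):

```
[property]
tlt-encoded-model=../resnet_lpd/resnet18_detector_pruned_lpd.etlt
tlt-model-key=nvidia_tlt
# deepstream-app builds and caches the TensorRT engine here on first run
model-engine-file=../resnet_lpd/resnet18_detector_pruned_lpd.etlt_b1_gpu0_fp16.engine
```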

Sorry for the late response; I just redid the entire process in the notebook.
It's now working on the Jetson Nano.

Glad to know it works. Any more issues?

Last one: do you have any idea why the FPS sometimes drops?

A power adapter issue?

You may refer to Troubleshooting — DeepStream 6.3 Release documentation

And you can use "nvidia-smi dmon" on x86 and "tegrastats" on Jetson to monitor GPU/CPU usage while the app is running.
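A small sketch that picks the right monitoring tool for the platform (the interval and sample count are arbitrary examples):

```shell
# Choose a utilization monitor: tegrastats on Jetson, nvidia-smi dmon on x86.
if command -v tegrastats >/dev/null 2>&1; then
    MON="tegrastats --interval 1000"   # RAM/CPU/GR3D (GPU) load each second
elif command -v nvidia-smi >/dev/null 2>&1; then
    MON="nvidia-smi dmon -s u -c 10"   # 10 samples of SM/memory utilization
else
    MON="echo no NVIDIA monitoring tool found"
fi
echo "run this in a second terminal while deepstream-app is running: $MON"
```

If GR3D sits at 99% when the FPS drops, the GPU is the bottleneck; if the CPU cores are pegged instead, look at the pipeline's CPU-side elements.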

$ sudo nvpmodel -m 0
$ sudo jetson_clocks
Already done.
But sometimes the FPS still drops; maybe I should try changing the power adapter.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Have you monitored the CPU and GPU usage while the app is running?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.