NVIDIA-AI-IOT / deepstream_lpr_app

Getting an error while running the DeepStream LPR app

command - ./deepstream-lpr-app 1 2 0 us_car_test2.mp4 us_car_test2.mp4 output.264

System config:
Deepstream 6.0
Jetson Xavier NX

error -

WARNING: INT8 calibration file not specified/accessible. INT8 calibration can be done through setDynamicRange API in 'NvDsInferCreateNetwork' implementation
NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file /opt/nvidia/deepstream/deepstream-6.0/new_project/deepstream_lpr_app/deepstream-lpr-app/…/models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:09.675094863 91 0x55b5a1ae90 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
free(): double free detected in tcache 2
Aborted (core dumped)

Sorry for the late response. Is this still an issue you need support with? Thanks


Please download the model first.

Please follow the steps in the README: deepstream_lpr_app/README.md at master · NVIDIA-AI-IOT/deepstream_lpr_app (github.com)
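The root cause in the log above is that the TrafficCamNet .etlt file is missing, so TensorRT cannot build the engine. A minimal sketch of a pre-flight check before launching the app (the model path and the `model_ready` helper are illustrative, not part of the repo; adjust the path to match your checkout):

```shell
#!/bin/sh
# Pre-flight check: make sure the TLT-encoded model exists before
# launching deepstream-lpr-app, to avoid the "Failed to open TLT
# encoded model file" engine-build failure.

model_ready() {
  # Succeeds only if the given .etlt file exists as a regular file.
  [ -f "$1" ]
}

# Assumed location, based on the path printed in the error log.
MODEL=./models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt

if model_ready "$MODEL"; then
  ./deepstream-lpr-app 1 2 0 us_car_test2.mp4 us_car_test2.mp4 output.264
else
  echo "Missing $MODEL: download the models per the README first" >&2
fi
```

If the file is absent, the script stops with a clear message instead of letting the pipeline abort with a core dump during engine build.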

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.