Deployment hardware specification
- Hardware Platform: Jetson Xavier NX
- DeepStream Version: 6.2
- JetPack Version: 5.1
• Issue: As the subject suggests, I am unable to create an INT8 engine for my Jetson Xavier NX device. I followed the procedure below to create my model.
- On the Jetson:
  - Built TensorRT OSS on the Jetson.
  - Built the ds-tao-detection app.
- Model details and creation:
  - QAT-enabled trained .etlt model (see the export sketch after this list).
  - Labels file.
- Hardware used for model creation:
  - GeForce RTX 4070 Ti
  - Network type: yolov4_tiny
  - TLT version: format_version: 2.0, toolkit_version: 4.0.1
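For reference, the QAT model was exported roughly along the lines of the sketch below. This is a minimal sketch assuming the TAO Toolkit 4.0 launcher; the spec file and model paths are placeholders from my setup, and the exact task name and flags may differ by TAO version:

# placeholder paths; run inside the TAO launcher environment
tao yolo_v4_tiny export \
  -m /workspace/yolov4_cspdarknet_tiny_epoch_080.tlt \
  -k nvidia_tlt \
  -e /workspace/specs/yolo_v4_tiny_retrain.txt \
  -o /workspace/export_qat/yolov4_cspdarknet_tiny_epoch_080.etlt \
  --data_type int8 \
  --cal_cache_file /workspace/export_qat/cal.bin

My understanding is that for a QAT-trained model the calibration cache (cal.bin) is generated from the learned QAT scales rather than from calibration images, but please correct me if that is wrong.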
• My config file to run the model is as follows:
[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=/home/nvidia/Downloads/export_qat/labels.txt
model-engine-file=/home/nvidia/Downloads/export_qat/yolov4_tiny_int8.engine
tlt-encoded-model=/home/nvidia/Downloads/deepstream_tlt_apps/post_processor/yolov4_cspdarknet_tiny_epoch_080.etlt
tlt-model-key=nvidia_tlt
infer-dims=3;640;640
maintain-aspect-ratio=0
output-tensor-meta=0
uff-input-order=0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=5
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
cluster-mode=3
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/home/nvidia/Downloads/deepstream_tlt_apps/post_processor/libnvds_infercustomparser_tao.so
[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
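For INT8 I believe this config would also need the INT8 mode and the calibration cache produced at export time, something like the snippet below (the cal.bin path is a placeholder; I currently have network-mode=0, which as I understand it builds FP32):

network-mode=1
int8-calib-file=/home/nvidia/Downloads/export_qat/cal.bin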
• How to reproduce the issue:
I run the following command.
./ds-tao-detection -c /home/nvidia/Downloads/pgie_yolov4_tiny_tao_config.txt -i 1.mp4
• I get the following error; any help would be really appreciated.
nvidia@nvidia-desktop:~/Downloads/deepstream_tlt_apps/apps/tao_detection$ ./ds-tao-detection -c /home/nvidia/Downloads/pgie_yolov4_tiny_tao_config.txt -i 1.mp4
Request sink_0 pad from streammux
batchSize 1...
Now playing: /home/nvidia/Downloads/pgie_yolov4_tiny_tao_config.txt
Opening in BLOCKING MODE
WARNING: Deserialize engine failed because file path: /home/nvidia/Downloads/export_qat/yolov4_tiny_int8.engine open error
0:00:05.437904948 732312 0xaaaac96bbd30 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/home/nvidia/Downloads/export_qat/yolov4_tiny_int8.engine failed
0:00:05.523338857 732312 0xaaaac96bbd30 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/home/nvidia/Downloads/export_qat/yolov4_tiny_int8.engine failed, try rebuild
0:00:05.523459402 732312 0xaaaac96bbd30 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file /home/nvidia/Downloads/deepstream_tlt_apps/post_processor/yolov4_cspdarknet_tiny_epoch_080.etlt
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:07.280273253 732312 0xaaaac96bbd30 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
ERROR: [TRT]: 2: [logging.cpp::decRefCount::65] Error Code 2: Internal Error (Assertion mRefCount > 0 failed. )
corrupted size vs. prev_size while consolidating
Aborted (core dumped)
Also, if there is a better way to do this, it would be really helpful.
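For example, would generating the engine offline with tao-converter on the Jetson, and then pointing model-engine-file at the result, be the recommended route? A minimal sketch, assuming the tao-converter binary matching JetPack 5.1 on the device, with cal.bin as a placeholder for the exported calibration cache (exact flags may differ by tao-converter version):

# placeholder calibration cache; min/opt/max shapes set to my 1x3x640x640 input
./tao-converter \
  -k nvidia_tlt \
  -t int8 \
  -c /home/nvidia/Downloads/export_qat/cal.bin \
  -p Input,1x3x640x640,1x3x640x640,1x3x640x640 \
  -e /home/nvidia/Downloads/export_qat/yolov4_tiny_int8.engine \
  /home/nvidia/Downloads/deepstream_tlt_apps/post_processor/yolov4_cspdarknet_tiny_epoch_080.etlt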