I have installed DeepStream on a dGPU machine and tested the sample apps; they worked for me.
I trained a model with TAO on custom data, exported the .etlt model, and also used tao-deploy, which generated the multiple files mentioned in the title and below.
Now I am looking at the DeepStream reference apps repo. I followed all the commands mentioned there and ran the TAO pretrained models successfully. So now I am looking at the deepstream_app_source1_detection_models.txt file, which references model weights and other related files.
But the files I have use extensions different from the ones mentioned in that configuration file, which is confusing me.
Kindly help me out; next I will use these models on a Jetson Xavier kit.
Also, please give me a suggestion on how to create a config file for DeepStream: do we write one from scratch, or do we just modify the built-in sample configuration files?
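For context, here is a minimal sketch of the [property] section of an nvinfer config for a TAO-exported detector; every path, the key, and the class count below are placeholders of mine, not the actual sample values:

[property]
gpu-id=0
# Encrypted TAO model plus the key that was used during export (placeholders):
tlt-encoded-model=/path/to/yolov4_resnet18.etlt
tlt-model-key=<your_export_key>
labelfile-path=/path/to/labels.txt
# Serialized TensorRT engine; nvinfer rebuilds it from the .etlt if this file is missing:
model-engine-file=/path/to/model.engine
batch-size=1
# network-mode: 0 = FP32, 1 = INT8, 2 = FP16
network-mode=2
num-detected-classes=4
gie-unique-id=1
# Custom bbox parser from the TAO apps repo:
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/path/to/libnvds_infercustomparser_tao.so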
I looked into the TAO pretrained models DeepStream samples GitHub repo, where they use tao-converter and save the model engine file in the .etlt_b1_gpu0_fp16.engine format. But I am using TAO 4.0, which uses tao-deploy, and it saves its output files in the following formats:
trt.engine (FP32)
trt.engine.fp16 (FP16)
You can check the tao-deploy commands in my posted question.
So what I want to know is: which of these files should I use instead of the .etlt_b1_gpu0_fp16.engine mentioned in the sample configuration?
What do you suggest?
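In case it clarifies the question, this is roughly how I imagine either file would be wired in, assuming the engine was built on the same GPU and TensorRT version as the deployment machine (paths are placeholders):

[property]
# FP32 engine produced by tao-deploy:
model-engine-file=/path/to/export/trt.engine
network-mode=0

# ...or the FP16 engine, with the matching precision mode:
# model-engine-file=/path/to/export/trt.engine.fp16
# network-mode=2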
Thanks for the fast reply.
Hi, after specifying the path in config_infer_primary_yolov4.txt and then running the deepstream_app_source1_detection_models.txt file, I got the following error:
/opt/nvidia/deepstream/deepstream-6.1/samples/configs/tao_pretrained_models$ sudo deepstream-app -c deepstrean_app_source1_custom_yolov4.txt
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
ERROR: [TRT]: 1: [stdArchiveReader.cpp::StdArchiveReader::40] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 213, Serialized Engine Version: 232)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1528 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_pretrained_models/yolov4/n/trt.engine
0:00:01.046042522 2825753 0x5573b6294e60 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_pretrained_models/yolov4/n/trt.engine failed
0:00:01.144150066 2825753 0x5573b6294e60 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_pretrained_models/yolov4/n/trt.engine failed, try rebuild
0:00:01.144166156 2825753 0x5573b6294e60 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
parseModel: Failed to parse ONNX model
ERROR: tlt/tlt_decode.cpp:389 Failed to build network, error in model parsing.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:723 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:789 Failed to get cuda engine from custom library API
0:00:01.961677566 2825753 0x5573b6294e60 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
ERROR: [TRT]: 2: [logging.cpp::decRefCount::61] Error Code 2: Internal Error (Assertion mRefCount > 0 failed. )
corrupted size vs. prev_size while consolidating
Aborted
Note: I renamed config_infer_primary_yolov4.txt to nvinfer_config.txt.
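For completeness, the renamed nvinfer config is referenced from the [primary-gie] section of the deepstream-app config, roughly like this (file name as I used it):

[primary-gie]
enable=1
gpu-id=0
# Points at the renamed nvinfer config described above:
config-file=nvinfer_config.txt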
Was the engine generated on the same machine with the same GPU? Have you put the engine file in the path you set, /opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_pretrained_models/yolov4/n/trt.engine? Please fill in the model parameters correctly. It also seems your tlt-model-key is not correct.
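To illustrate: the "Version tag does not match" error means the engine was serialized with a different TensorRT version than the one this DeepStream install links against, so it cannot be deserialized; the subsequent parse failure during the rebuild attempt is consistent with a wrong tlt-model-key. A hedged sketch of the properties to double-check (the key and paths are placeholders):

[property]
# Must be the exact key used when the .etlt was exported from TAO:
tlt-model-key=<your_export_key>
tlt-encoded-model=/path/to/yolov4_resnet18.etlt
# Comment this out (or delete the stale file) so nvinfer rebuilds the engine
# with the TensorRT version bundled with this DeepStream release:
# model-engine-file=/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_pretrained_models/yolov4/n/trt.engine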