- deepstream-app version 6.1.0
- DeepStreamSDK 6.1.0
- CUDA Driver Version: 11.4
- CUDA Runtime Version: 11.6
- TensorRT Version: 8.2
- cuDNN Version: 8.4
- libNVWarp360 Version: 2.0.1d3
Can the etlt weights downloaded from NVIDIA TAO be deployed directly in local DeepStream, or must they first be converted to .bin or .engine files?
Or should the etlt weights be deployed via the Triton service?
You can use the etlt model directly with the options "tlt-encoded-model" and "tlt-model-key" in the nvinfer config; refer to Gst-nvinfer — DeepStream 6.2 Release documentation (nvidia.com):
tlt-encoded-model=./deepstream_tao_apps-master/configs/unet_tao/unet_tao/unet_resnet18.etlt
tlt-model-key=tlt_encode
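For context, a minimal sketch of the surrounding [property] section for a UNet etlt model. The values for infer-dims, output-blob-names, and segmentation-threshold below are illustrative assumptions and depend on how your model was exported; take the actual values from your TAO export.

[property]
gpu-id=0
# encrypted TAO model and the key it was exported with
tlt-encoded-model=./deepstream_tao_apps-master/configs/unet_tao/unet_tao/unet_resnet18.etlt
tlt-model-key=tlt_encode
# input shape must match the resolution the model was exported at (assumed here)
infer-dims=3;608;608
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=0
# 2 = segmentation network
network-type=2
# output tensor name from the export (assumed here)
output-blob-names=softmax_1
segmentation-threshold=0.0

On the first run nvinfer builds a TensorRT engine from the etlt and caches it; pointing model-engine-file at the generated engine afterwards skips the rebuild.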
Yes, both options are in the configuration file, but it fails with an error.
RUN:
./ds-tao-classifier -c ./unet_tao/pgie_unet_tao_config.txt -i file:///sample_1080p_h264.mp4
ERROR: [TRT]: 4: [network.cpp::validate::2959] Error Code 4: Internal Error (input_1: for dimension number 2 in profile 0 does not match network definition (got min=320, opt=320, max=320), expected min=opt=max=608).)
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1119 Build engine failed from config file
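For what it's worth, the log describes a shape mismatch: the network was exported expecting a 608x608 input (expected min=opt=max=608), while the TensorRT optimization profile is being built for 320x320. A minimal sketch of the fix, assuming the etlt really was exported at 608x608, is to align the input dims in the nvinfer config:

# must match the resolution used at TAO export time
infer-dims=3;608;608

After changing the dims, delete any previously cached .engine file so nvinfer regenerates it from the etlt.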
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
unet is a segmentation model, so please use ./apps/tao_segmentation/ds-tao-segmentation configs/app/seg_app_unet.yml; refer to GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream.
If it still fails, please share the whole log.
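For example, mirroring the earlier classifier command with the segmentation app (the -c/-i flags are assumed to follow the same pattern as ds-tao-classifier; paths are taken from the posts above):

./apps/tao_segmentation/ds-tao-segmentation -c ./unet_tao/pgie_unet_tao_config.txt -i file:///sample_1080p_h264.mp4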