Converting a .etlt file to a TensorRT engine using tao-converter

tao-converter -d 3,544,960 -k tao -o NMS frcnn_kitti_vgg16.etlt
Log:
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[INFO] UFFParser: Did not find plugin field entry isBatchAgnostic in the Registered Creator for layer NMS
[INFO] UFFParser: Did not find plugin field entry scoreBits in the Registered Creator for layer NMS
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 548370337
Killed


Environment:
Jetson Nano
jetpack 4.5.1
DeepStream 5.1
Tao 3.0.11.8

Could you download the model below and retry?

GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream

→ wget https://nvidia.box.com/shared/static/3a00fdf8e1s2k3nezoxmfyykydxiyxy7 -O models.zip

Did you follow the user guide to build and replace the TensorRT OSS plugin?

This seems to be out of memory (OOM). Please retry after adding "-w 100000000" to cap the TensorRT workspace size.
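A minimal sketch of the retry, assuming tao-converter is on the PATH and reusing the flags from the original command. The -w value is the maximum TensorRT workspace in bytes (here roughly 100 MB); the -e engine filename is only an example, not a required name:

```shell
# Re-run the conversion with the workspace capped to reduce memory pressure.
# -k  : encryption key used when the model was exported
# -d  : input dimensions (C,H,W)
# -o  : output node name
# -w  : max TensorRT workspace size in bytes (caps builder memory use)
# -e  : path to write the generated engine (example name)
tao-converter -k tao \
  -d 3,544,960 \
  -o NMS \
  -w 100000000 \
  -e frcnn_kitti_vgg16.engine \
  frcnn_kitti_vgg16.etlt
```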
