Executing TAO command with older version image

Is there a way to execute tao converter using the previous version of the image, i.e.
tao-toolkit-tf:v3.21.11-tf1.15.4-py3,
instead of tao-toolkit-tf:v3.21.11-tf1.15.5-py3?
I already have the old image and I don't want to download the new one.

Yes, you can. tao-toolkit-tf:v3.21.11-tf1.15.4-py3 and tao-toolkit-tf:v3.21.11-tf1.15.5-py3 were actually released at the same time, as part of the 3.21.11 release.

tao-toolkit-tf:v3.21.11-tf1.15.4-py3 is used for detectnet_v2 and faster_rcnn.
tao-toolkit-tf:v3.21.11-tf1.15.5-py3 is used for the other networks.

That is interesting information.
Thank you for that.

Let me give a bit of background. I ran through the detectnet_v2 notebook, and at the end I could not find an engine file in any of the generated folders, so I thought I would modify the export command and try to generate the .engine file to be used with DeepStream.

That resulted in a new Docker pull for the xxx-15.5-py3 version.
The initial training and all the other steps were performed on xxxx-15.4-py3.

Is this the right way to generate the .engine file to be used with DeepStream?

"
!tao converter $USER_EXPERIMENT_DIR/Converter/resnet18_detector.etlt
-k $KEY
-c $USER_EXPERIMENT_DIR/Converter/calibration.bin
-o output_cov/Sigmoid,output_bbox/BiasAdd
-d 3,384,1248
-i nchw
-m 64
-t int8
-e $USER_EXPERIMENT_DIR/Converter/resnet18_detector.engine
-b 4
"

Yes, your command above can generate the TensorRT engine.

Alternatively, you can directly copy the .etlt file onto the machine where you want to run DeepStream inference; DeepStream will then generate the TensorRT engine itself.
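As a sketch of how that works (the paths, the key placeholder, and the cached engine file name below are illustrative; adapt them to your setup), the relevant Gst-nvinfer config-file properties for an encoded TAO model look like this:

```
[property]
# Encoded TAO model plus the key it was exported with
tlt-encoded-model=/path/to/resnet18_detector.etlt
tlt-model-key=<your $KEY>
# INT8 calibration table from export (network-mode=1 selects INT8)
int8-calib-file=/path/to/calibration.bin
network-mode=1
# Output layers of the detectnet_v2 model
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
# Previously generated engine to load, if it exists
model-engine-file=/path/to/resnet18_detector.etlt_b1_gpu0_int8.engine
```

If the engine file is not found, nvinfer builds one from the .etlt on startup and caches it alongside the model (with a name like the one above); pointing model-engine-file at that cached file lets later runs skip the rebuild.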

Thank you Morganh for the response.
I am running everything on the same computer :).

Is the .etlt file the same as the engine file, and should I put the path to the .etlt file under model-engine-file? I went through the examples and haven't seen anywhere that the target points to an .etlt file, hence the questions.

Thank you for the help.

For the config files, refer to deepstream_tao_apps/configs at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub

Thank you so much