tlt-converter does not produce the engine file

Hi,
I have trained an SSD MobileNet v2 network with TLT and exported the model to an .etlt file.
Now I want to create a TensorRT engine on my deployment machine (x86).
I installed TensorRT OSS using the Docker container provided in its GitHub repo, along with TensorRT 7.0.
When I run the following:

trtuser@d0e9e685c7c2:/workspace/TensorRT$ export API_KEY=[MY API KEY]
trtuser@d0e9e685c7c2:/workspace/TensorRT$ export OUTPUT_NODES=NMS                                   
trtuser@d0e9e685c7c2:/workspace/TensorRT$ export INPUT_DIMS=3,320,320
trtuser@d0e9e685c7c2:/workspace/TensorRT$ export D_TYPE=fp16
trtuser@d0e9e685c7c2:/workspace/TensorRT$ export ENGINE_PATH=/workspace/TensorRT/tlt_model/ped_ssd_mobilenet_v2.engine
trtuser@d0e9e685c7c2:/workspace/TensorRT$ export MODEL_PATH=/workspace/TensorRT/tlt_model/ped_ssd_mobilenet_v2_epoch_110.etlt 
trtuser@d0e9e685c7c2:/workspace/TensorRT$ ./tlt-converter -k $API_KEY -o $OUTPUT_NODES -d $INPUT_DIMS -e $ENGINE_PATH -t $D_TYPE $MODEL_PATH

I get the following message:

[INFO] Detected 1 inputs and 2 output network tensors.

but the .engine file is not created.

Do you have write access in your current folder?
It looks like you have already generated the engine successfully, but it is not being saved.
Did you mount separate volumes for the dataset and the experiment results so that they persist outside of the Docker container?
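A quick way to check this from inside the container is to probe whether the target directory is writable before running the converter. This is just a minimal sketch; the temp-file name and the example mount path are arbitrary, not something tlt-converter requires:

```shell
#!/bin/sh
# Probe write access in the directory where the engine should be written,
# by creating and then removing a throwaway file.
TARGET_DIR="$(pwd)"
TESTFILE="$TARGET_DIR/.write_test_$$"

if touch "$TESTFILE" 2>/dev/null; then
    rm -f "$TESTFILE"
    echo "writable: $TARGET_DIR"
else
    echo "no write permission in $TARGET_DIR" >&2
    exit 1
fi
```

If the probe fails, either fix the permissions on the host side (e.g. `chown`/`chmod` on the mounted directory) or restart the container with a bind mount to a host directory you own, for example `docker run -v /home/$USER/tlt_output:/workspace/TensorRT/tlt_model ...`, so the generated engine persists outside the container.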


Yes, the problem was the write permission. Thanks for the solution.