TLT Converter UffParser: Unsupported number of graph 0

Hi, there!

Currently, I am trying to convert my .etlt model to a .trt engine on a Jetson TX2.

In my Jupyter notebook, I generated the INT8 calibration tensorfile by executing this command:

!tlt-int8-tensorfile detectnet_v2 -e $SPECS_DIR/detectnet_v2_train_resnet18_kitti_OCR_third_trial_2.txt \
                                  -m 10 \
                                  -o $USER_EXPERIMENT_DIR/experiment_dir_final_OCR/calibration.tensor

Then, we exported the model to .etlt by executing this command:

!tlt-export $USER_EXPERIMENT_DIR/experiment_third_trial_OCR/weights/resnet18_detector_OCR_third_trial.tlt \
            -o $USER_EXPERIMENT_DIR/experiment_dir_final_OCR/resnet18_detector_jeff.etlt \
            --outputs output_cov/Sigmoid,output_bbox/BiasAdd \
            --enc_key $KEY \
            --input_dims 3,320,832 \
            --max_workspace_size 1100000 \
            --export_module detectnet_v2 \
            --cal_data_file $USER_EXPERIMENT_DIR/experiment_dir_final_OCR/calibration.tensor \
            --data_type int8 \
            --batches 10 \
            --cal_cache_file $USER_EXPERIMENT_DIR/experiment_dir_final_OCR/calibration.bin \
            --cal_batch_size 4 \
            --verbose
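A quick sanity check one can run at this point — a sketch only, using the file names from the command above — is to confirm that the export step actually wrote both artifacts the converter will later need:

```shell
# Sketch: confirm the export step actually wrote its artifacts.
# File names below are taken from the tlt-export command above.
check_artifacts() {
    for f in "$@"; do
        if [ -s "$f" ]; then
            echo "$f ok ($(wc -c < "$f") bytes)"
        else
            echo "$f missing or empty"
        fi
    done
}

# Usage (illustrative):
#   check_artifacts resnet18_detector_jeff.etlt calibration.bin
```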

After that, we tried to generate the TensorRT engine by executing this command inside Jupyter:

!tlt-converter $USER_EXPERIMENT_DIR/experiment_dir_final_OCR/resnet18_detector_jeff.etlt \
               -k $KEY \
               -c $USER_EXPERIMENT_DIR/experiment_dir_final_OCR/calibration.bin \
               -o output_cov/Sigmoid,output_bbox/BiasAdd \
               -d 3,320,832 \
               -i nchw \
               -m 64 \
               -t int8 \
               -e $USER_EXPERIMENT_DIR/experiment_dir_final_OCR/resnet18_detector.trt \
               -b 4

Everything seems to work inside Jupyter on the x86 machine. Then, we tried to generate the TensorRT engine on the Jetson TX2 by executing this command:

./tlt-converter ./resnet18_detector_jeff.etlt \
               -k XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \
               -c ./calibration.bin \
               -o output_cov/Sigmoid,output_bbox/BiasAdd \
               -d 3,320,832 \
               -i nchw \
               -m 64 \
               -t int8 \
               -e ./resnet18_detector_jetson.trt \
               -b 4

Unfortunately, we ran into this problem:

[ERROR] UffParser: Unsupported number of graph 0
[ERROR] Failed to parse uff model
[ERROR] Network must have at least one output
[ERROR] Unable to create engine
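One thing that could still explain this is a corrupted copy of the .etlt file during the transfer from x86 to the TX2; comparing checksums on both machines would rule that out. A sketch (paths are illustrative):

```shell
# Sketch: a corrupted transfer is one possible cause of a parser failure,
# so compare the checksum of the copy on the TX2 against the x86 original.
same_checksum() {
    a=$(md5sum "$1" | cut -d' ' -f1)
    b=$(md5sum "$2" | cut -d' ' -f1)
    [ "$a" = "$b" ] && echo "checksums match" || echo "checksums differ"
}

# Usage (illustrative; run md5sum on each machine and compare by eye,
# or copy both files to one machine and call):
#   same_checksum resnet18_detector_jeff_x86.etlt resnet18_detector_jeff_tx2.etlt
```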

Do you have any insights about this problem? I would appreciate any ideas.
(Note: by the way, the Readme.md inside https://developer.nvidia.com/tlt-converter is broken.)

Best regards,
Jeff

Hi jefflgaol,
What do you mean by “the Readme.md inside https://developer.nvidia.com/tlt-converter is broken”?

Please note that you must use the Jetson-platform version of tlt-converter when you want to generate a TRT engine on a Jetson platform.

When I accessed the Readme.md, it looked like this:

PK\00\00j_+O ... word/numbering.xml ... PK\00\00j_+O ... word/settings.xml ...
[binary data truncated; the "PK" ZIP signature and the word/numbering.xml and word/settings.xml entries suggest the file served as Readme.md is actually a Word .docx archive rather than plain text]

That’s why the forum topic “How to export model using tlt-converter for Jetson Nano” (TAO Toolkit - NVIDIA Developer Forums) said:
“Hi Guys,
I am training a custom object detection model (resnet10, detectnet_v2) on my x86 machine. I wish to use the trained models on Jetson Nano. In the sample code, there is an instruction to download tlt-converter for the Jetson platform. I downloaded it but the readme is not readable.
Kindly let me know the instructions to use ‘tlt-converter’ for Jetson Nano.
Thanks.”

Also, I already ran tlt-converter inside the Jetson TX2, but got this result:

[ERROR] UffParser: Unsupported number of graph 0
[ERROR] Failed to parse uff model
[ERROR] Network must have at least one output
[ERROR] Unable to create engine

Do you have any insights?

Best regards,
Jeff

I will check the Readme.md later.
Anyway, if you can download the tlt-converter successfully, you can check its “help” output along with the TLT user guide for usage.

For the error, please check:

  1. The $KEY was really set when you trained the model and exported the .etlt file. Also make sure it is correct.
  2. The key used when you run tlt-converter is correct. It should be exactly the same key that was used in the TLT training phase.
  3. The .etlt model file is available.
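A small sketch of what checks like these could look like on the Jetson before invoking tlt-converter (the file name and key variable are taken from your commands; adjust as needed):

```shell
# Sketch of pre-flight checks before running tlt-converter. An empty or
# wrong key makes decryption produce garbage, which then fails in the
# UFF parser with errors like the ones above.
preflight() {
    model="$1"; key="$2"
    # The .etlt model must be available and non-empty
    [ -s "$model" ] || { echo "model missing or empty: $model"; return 1; }
    # A key must be set (whether it MATCHES the training key can only be
    # verified against your own training records)
    [ -n "$key" ] || { echo "encryption key is not set"; return 1; }
    echo "basic checks passed"
}

# Usage (illustrative):
#   preflight ./resnet18_detector_jeff.etlt "$KEY" && ./tlt-converter ...
```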

Reference topic:
https://devtalk.nvidia.com/default/topic/1067539/transfer-learning-toolkit/tlt-converter-on-jetson-nano-error-/
https://devtalk.nvidia.com/default/topic/1065680/transfer-learning-toolkit/tlt-converter-uff-parser-error/?offset=11#5397152