I need to fill uff-input-blob-name and output-blob-names with the correct input and output layer names of this model, but I can't find this information anywhere.
Is there any tool we can use to visualize the network architecture and layer names of a TLT model?
Thanks, in fact I tried that before opening this topic :)
There's no mention in the GestureNet documentation (the NGC or TLT 3.0 pages) of which underlying network architecture it uses.
Following your suggestion, I took the configurations of other TLT models as examples and tried the following config with GestureNet, which works fine with some other TLT detectors:
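Roughly, the relevant part of such an nvinfer config looks like this (the key and blob names below are placeholders, not validated GestureNet values; the blob names are exactly what I'm trying to confirm):

```ini
[property]
tlt-encoded-model=gesturenet/model.etlt
tlt-model-key=nvidia_tlt              # placeholder; use the key from the NGC model card
uff-input-blob-name=input_1           # placeholder guess
output-blob-names=activation/Softmax  # placeholder guess
network-type=1                        # classifier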
but in the GestureNet case I get the following error when DeepStream tries to convert the .etlt model to a TensorRT engine:
mar 04 11:39:29 : Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 2]: Trying to create engine from model files
mar 04 11:39:33: ERROR: [TRT]: UffParser: Could not read buffer.
mar 04 11:39:33: parseModel: Failed to parse UFF model
mar 04 11:39:33: ERROR: failed to build network since parsing model errors.
mar 04 11:39:33: ERROR: Failed to create network using custom network creation function
mar 04 11:39:33: ERROR: Failed to get cuda engine from custom library API
This kind of error generally happens when tlt-model-key or uff-input-blob-name/output-blob-names are wrong, which is why I asked in this topic whether there is some way to validate them.
Thanks, but I'm afraid that does not answer my question at all :(
It's still not clear why DeepStream is capable of converting the FaceDetect TLT model to a TensorRT engine, but is not capable of doing the same with GestureNet.
Additional note: I cannot use tlt-convert as a workaround for the problem I'm trying to solve.
I gave step 2) that you shared a try, using the TLT 3.0 export, but it does not work with the unpruned GestureNet model available on NGC.
I mounted the following folder into the TLT docker:
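For reference, the mount was declared in ~/.tlt_mounts.json along these lines (the paths are placeholders for my local setup):

```json
{
    "Mounts": [
        {
            "source": "/home/user/gesturenet",
            "destination": "/workspace/gesturenet"
        }
    ]
}
```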
And it fails with: Failed to convert: inputs/outputs specified do not exist
2021-03-04 14:52:57,940 [WARNING] tf2onnx.tfonnx: Argument verbose for process_tf_graph is deprecated. Please use --verbose option instead.
2021-03-04 14:52:58,693 [ERROR] tf2onnx.tfonnx:
Failed to convert: inputs/outputs specified do not exist, make sure your passed in format: input/output_node_name:port_id. Problematical inputs/outputs are: {'None:0'}
Traceback (most recent call last):
File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/classifynet/scripts/export.py", line 114, in
File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/classifynet/scripts/export.py", line 110, in main
File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/common/utilities/tlt_utils.py", line 316, in save_etlt_file
File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/common/utilities/tlt_utils.py", line 407, in pb_to_onnx
File "/usr/local/lib/python3.6/dist-packages/tf2onnx/tfonnx.py", line 407, in process_tf_graph
raise ValueError("Inputs/Outputs Not Found")
ValueError: Inputs/Outputs Not Found
Traceback (most recent call last):
File "/usr/local/bin/gesturenet", line 8, in
sys.exit(main())
File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/classifynet/entrypoint/classifynet.py", line 12, in main
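Note that the failure is about the format of the node specs: tf2onnx expects each input/output as node_name:port_id (e.g. input_1:0, where input_1 is just an illustrative name), and here the export passed it None:0, i.e. no real node name at all. A tiny sketch of that convention:

```python
def split_node_spec(spec):
    """Split a TensorFlow node spec like 'input_1:0' into (name, port_id)."""
    name, sep, port = spec.rpartition(":")
    if not sep:  # bare name with no port, default to port 0
        return spec, 0
    return name, int(port)

print(split_node_spec("input_1:0"))  # a well-formed spec
print(split_node_spec("None:0"))     # what the traceback shows was passed in
```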
The corrupted .etlt file generated today is 1 MB bigger than the one shared for download at NVIDIA GTC as the "unpruned" model, which might indicate the one on NGC is corrupted as well:
45M Mar 4 14:51 model.etlt
and the one downloaded directly from NGC:
44M Feb 24 01:24 model.etlt
I can't be sure about this until someone can confirm they were actually able to run the .etlt model available on the NGC marketplace…
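To rule out a corrupted download on my side, one quick check is to compare the local file's size and MD5 against what NGC reports for the model (a generic sketch; "model.etlt" stands for whatever local path applies):

```python
import hashlib
import os

def file_fingerprint(path, chunk_size=8192):
    """Return (size_in_bytes, md5_hex) for a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return os.path.getsize(path), digest.hexdigest()

# Compare against the checksum shown on the model's NGC page:
# size, md5 = file_fingerprint("model.etlt")
```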
Ah, what I wrote about tlt-export is that I cannot use it: it is not practical for production solutions that use DeepStream across multiple kinds of HW targets with different TensorRT/CUDA versions.