@dusty_nv
hello
I trained a model in the jetson-inference Docker container and exported it to the .onnx format. Now I want to use the model with detectnet, convert it to a TensorRT engine, and run detection on a video.
command: detectnet --model=models/SatModel/ssd-mobilenet.onnx --labels=data/SatData/labels.txt --input-blob=input_0 --ouput-cvg=scores --output-bbox=boxes data/videos/T1_East.mp4
it gives me this error:
[TRT] binding 0
-- index 0
-- name 'input_0'
-- type FP32
-- in/out INPUT
-- # dims 4
-- dim #0 1
-- dim #1 3
-- dim #2 300
-- dim #3 300
[TRT] binding 1
-- index 1
-- name 'scores'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1
-- dim #1 3000
-- dim #2 3
[TRT] binding 2
-- index 2
-- name 'boxes'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1
-- dim #1 3000
-- dim #2 4
[TRT]
[TRT] binding to input 0 input_0 binding index: 0
[TRT] binding to input 0 input_0 dims (b=1 c=3 h=300 w=300) size=1080000
[TRT] 3: Cannot find binding of given name: coverage
[TRT] failed to find requested output layer coverage in network
[TRT] device GPU, failed to create resources for CUDA engine
[TRT] failed to create TensorRT engine for models/SatModel/ssd-mobilenet.onnx, device GPU
[TRT] detectNet -- failed to initialize.
detectnet: failed to load detectNet model
What can I do about this?
I have tried many approaches, but…
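Looking at the log again: the engine builds bindings for `scores` and `boxes`, but detectnet then searches for an output named `coverage`, which is its default when no `--output-cvg` flag is seen. In my command the flag is spelled `--ouput-cvg` (missing the first `t`), so it was presumably ignored. If that is the cause, the corrected invocation would be:

```shell
# Same command as above, with --output-cvg spelled correctly;
# model paths and layer names are unchanged.
detectnet --model=models/SatModel/ssd-mobilenet.onnx \
          --labels=data/SatData/labels.txt \
          --input-blob=input_0 \
          --output-cvg=scores \
          --output-bbox=boxes \
          data/videos/T1_East.mp4
```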