Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Xavier NX
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): 4.6.1
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name - for which plugin or which sample application - and the function description.)
ERROR: [TRT]: CaffeParser: Could not open file /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/final_model.etlt
ERROR: [TRT]: CaffeParser: Could not parse model file
ERROR: Failed while parsing caffe network: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.prototxt
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
Any view on what’s causing the error? Also, am I right in assuming the engine file will be produced automatically?
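For reference, my reading of the docs is that an .etlt model is wired into the nvinfer config through the tlt-encoded-model / tlt-model-key properties rather than the Caffe model-file / proto-file ones, so I’m not sure why the CaffeParser is being invoked against the sample’s resnet10.prototxt here. This is a rough sketch of the [property] section I’m aiming for with Option 1 - the key, label file, class count and engine name below are placeholders, not my real values:

[property]
# TAO-exported model and the key it was exported with (placeholder key)
tlt-encoded-model=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/final_model.etlt
tlt-model-key=<export-key>
# INT8 calibration cache produced at export time
int8-calib-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/cal.bin
# If this engine does not exist yet, my expectation is that DeepStream builds it on the first run
model-engine-file=final_model.etlt_b1_gpu0_int8.engine
labelfile-path=labels.txt
batch-size=1
network-mode=1
num-detected-classes=3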
Hi @curious_cat, thanks for responding. No, I did not - mainly because of what I thought the docs were telling me. The docs say this about integrating TAO .etlt models into DeepStream:
Option 1 is very straightforward. The .etlt file and calibration cache are directly used by DeepStream. DeepStream will automatically generate the TensorRT engine file and then run inference. TensorRT engine generation can take some time depending on size of the model and type of hardware. Engine generation can be done ahead of time with Option 2. With option 2, the tao-converter is used to convert the .etlt file to TensorRT; this file is then provided directly to DeepStream.
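For completeness, my understanding is that Option 2 boils down to running the Jetson build of tao-converter on the device, roughly like this (the key, dims and output node names are just the standard DetectNet_v2 example values, not ones I’ve verified):

./tao-converter -k <export-key> \
  -d 3,544,960 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -t int8 \
  -c cal.bin \
  -e final_model.engine \
  final_model.etlt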
I’m using Option 1 (the .etlt plus cal.bin) and letting DeepStream build the engine. Also, because the target device is a Jetson, there’s a “special” Jetson build of tao-converter that brings a bunch of library issues of its own, so I want to avoid it if at all possible.
Also, the markdown commentary on the CV examples in the NVIDIA IOT repo suggests that all that’s needed to integrate directly into DeepStream is the .etlt file plus the cal.bin file, so it seems fairly clear to me that the converter is not needed.
But of course I’m happy for anyone to tell me I’m wrong and that I do need it - thank you.
ERROR: [TRT]: 3: conv1/convolution:kernel weights has count 9408 but 0 was expected
ERROR: [TRT]: 4: conv1/convolution: count of 9408 weights in kernel, but kernel dimensions (7,7) with 0 input channels, 64 output channels and 1 groups were specified. Expected Weights count is 0 * 7*7 * 64 / 1 = 0
ERROR: [TRT]: 4: [convolutionNode.cpp::computeOutputExtents::39] Error Code 4: Internal Error (conv1/convolution: number of kernel weights does not match tensor dimensions)
ERROR: [TRT]: 3: conv1/convolution:kernel weights has count 9408 but 0 was expected
ERROR: [TRT]: 4: conv1/convolution: count of 9408 weights in kernel, but kernel dimensions (7,7) with 0 input channels, 64 output channels and 1 groups were specified. Expected Weights count is 0 * 7*7 * 64 / 1 = 0
ERROR: [TRT]: 4: [convolutionNode.cpp::computeOutputExtents::39] Error Code 4: Internal Error (conv1/convolution: number of kernel weights does not match tensor dimensions)
ERROR: [TRT]: 3: conv1/convolution:kernel weights has count 9408 but 0 was expected
ERROR: [TRT]: 4: conv1/convolution: count of 9408 weights in kernel, but kernel dimensions (7,7) with 0 input channels, 64 output channels and 1 groups were specified. Expected Weights count is 0 * 7*7 * 64 / 1 = 0
ERROR: [TRT]: 4: [convolutionNode.cpp::computeOutputExtents::39] Error Code 4: Internal Error (conv1/convolution: number of kernel weights does not match tensor dimensions)
ERROR: [TRT]: UffParser: Parser error: conv1/BiasAdd: The input to the Scale Layer is required to have a minimum of 3 dimensions.
parseModel: Failed to parse UFF model
ERROR: Failed to build network, error in model parsing.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:03.275657810 16370 0x2e4b66d0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
Segmentation fault (core dumped)
It looks as though Conv layers are not supported - is that right? For reference, I’m using DetectNet_v2.
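One more data point in case it narrows things down: the “0 input channels” in the weight-count check makes me wonder whether the input binding in my config is being picked up at all. Going by the CV examples in the NVIDIA IOT repo, my understanding is that a DetectNet_v2 model needs the input/output bindings spelled out in the [property] section roughly like this (the 960x544 dims are the standard example, not necessarily my training resolution):

uff-input-blob-name=input_1
infer-dims=3;544;960
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd

Happy to be corrected if I’ve misread that.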