DeepStream 6.0.1 - Python app error - USB camera

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Xavier NX
• DeepStream Version 6.0.1
• JetPack Version (valid for Jetson only) 4.6.1
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I have adopted the approach in https://developer.nvidia.com/blog/3-methods-speeding-up-ai-model-development-tao-toolkit-whitepaper/
and successfully exported the .etlt and calibration.bin. I then went over to the DeepStream Python apps, specifically the USB camera sample app (deepstream_test_1_usb.py), and edited the config file with:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../../../samples/models/Primary_Detector/final_model.etlt
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
#model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
labelfile-path=~/Detectnet_v2/labels.txt
int8-calib-file=~/Detectnet_v2/calibration.bin
#int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin

However I get the error:

ERROR: [TRT]: CaffeParser: Could not open file /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/final_model.etlt
ERROR: [TRT]: CaffeParser: Could not parse model file
ERROR: Failed while parsing caffe network: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.prototxt
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.

Any view on what's causing the error? Also, am I right in assuming the engine file will be produced automatically?

Thank you

Have you tried converting using the tlt-converter?

https://docs.nvidia.com/metropolis/TLT/tlt-user-guide/text/tensorrt.html

I did that for a mobilenet etlt file and had to enter an encryption key.

Hi @curious_cat, thanks for responding. No, I did not; that was mainly down to what I understood the docs to be saying. For integrating TAO .etlt models into DeepStream, the docs say:

Option 1 is very straightforward. The .etlt file and calibration cache are directly used by DeepStream. DeepStream will automatically generate the TensorRT engine file and then run inference. TensorRT engine generation can take some time depending on size of the model and type of hardware. Engine generation can be done ahead of time with Option 2. With option 2, the tao-converter is used to convert the .etlt file to TensorRT; this file is then provided directly to DeepStream.

I’m using Option 1 (.etlt plus calibration.bin) and letting DeepStream build the engine. Also, because the destination device is a Jetson, I believe a platform-specific build of tao-converter is needed, which introduces a bunch of library issues of its own, so I want to avoid it if at all possible.
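
For what it’s worth, my understanding of Option 1 is that once DeepStream has built the engine on the first run, the serialized file can be referenced via `model-engine-file` so subsequent runs deserialize it instead of rebuilding. A rough sketch (filenames are illustrative; the generated name follows the `<model>_b<batch>_gpu<id>_<precision>.engine` pattern):

```ini
[property]
# Option 1: DeepStream parses the .etlt + calibration cache and builds the
# TensorRT engine itself on first run (this can take a while on a Jetson).
tlt-encoded-model=final_model.etlt
tlt-model-key=tlt_encode
int8-calib-file=calibration.bin
network-mode=1
# After the first run, point at the generated engine to skip the rebuild
# (illustrative filename):
#model-engine-file=final_model.etlt_b1_gpu0_int8.engine
```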

The markdown commentary on the CV examples in the NVIDIA IoT repo also suggests that all that is needed to integrate directly into DeepStream is the .etlt file plus the calibration.bin file, so it seems fairly clear to me that the converter is not needed.

But of course I’m happy for anyone to tell me I’m wrong and that I do need it - thank you.

Iain

As an update: after fixing a permissions issue with the .etlt file and updating the config file as below:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#model-file=../../../../samples/models/Primary_Detector/final_model.etlt
tlt-encoded-model=../../../../samples/models/Primary_Detector/final_model.etlt
#proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
#model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/calibration.bin
#int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
tlt-model-key=tlt_encode
force-implicit-batch-dim=1
batch-size=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
uff-input-blob-name=input_1
#scaling-filter=0
#scaling-compute-hw=0

I now get the following error:

ERROR: [TRT]: 3: conv1/convolution:kernel weights has count 9408 but 0 was expected
ERROR: [TRT]: 4: conv1/convolution: count of 9408 weights in kernel, but kernel dimensions (7,7) with 0 input channels, 64 output channels and 1 groups were specified. Expected Weights count is 0 * 7*7 * 64 / 1 = 0
ERROR: [TRT]: 4: [convolutionNode.cpp::computeOutputExtents::39] Error Code 4: Internal Error (conv1/convolution: number of kernel weights does not match tensor dimensions)
ERROR: [TRT]: 3: conv1/convolution:kernel weights has count 9408 but 0 was expected
ERROR: [TRT]: 4: conv1/convolution: count of 9408 weights in kernel, but kernel dimensions (7,7) with 0 input channels, 64 output channels and 1 groups were specified. Expected Weights count is 0 * 7*7 * 64 / 1 = 0
ERROR: [TRT]: 4: [convolutionNode.cpp::computeOutputExtents::39] Error Code 4: Internal Error (conv1/convolution: number of kernel weights does not match tensor dimensions)
ERROR: [TRT]: 3: conv1/convolution:kernel weights has count 9408 but 0 was expected
ERROR: [TRT]: 4: conv1/convolution: count of 9408 weights in kernel, but kernel dimensions (7,7) with 0 input channels, 64 output channels and 1 groups were specified. Expected Weights count is 0 * 7*7 * 64 / 1 = 0
ERROR: [TRT]: 4: [convolutionNode.cpp::computeOutputExtents::39] Error Code 4: Internal Error (conv1/convolution: number of kernel weights does not match tensor dimensions)
ERROR: [TRT]: UffParser: Parser error: conv1/BiasAdd: The input to the Scale Layer is required to have a minimum of 3 dimensions.
parseModel: Failed to parse UFF model
ERROR: Failed to build network, error in model parsing.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:03.275657810 16370 0x2e4b66d0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

Does this mean Conv layers are not supported? For reference, I’m using Detectnet_v2.
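
One thing I notice in the arithmetic of the error: 9408 weights for a 7x7 kernel with 64 output filters implies 3 input channels, yet the parser reports 0 input channels - which suggests the network’s input dimensions never reached the parser, rather than Conv itself being unsupported. A quick check (plain arithmetic, nothing DeepStream-specific):

```python
# The parser complains: "count of 9408 weights in kernel, but kernel
# dimensions (7,7) with 0 input channels, 64 output channels and 1 groups".
# For a standard convolution, weights = in_ch * kH * kW * out_ch / groups,
# so we can back out the input channel count the weights were trained for:
kernel_h, kernel_w, out_ch, groups = 7, 7, 64, 1
in_ch = 9408 * groups // (kernel_h * kernel_w * out_ch)
print(in_ch)  # 3 -> the kernel expects 3-channel (RGB) input, not 0
```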

Any help appreciated.

Cheers.

Does your model work without deepstream?

Hi @Amycao, thank you for your response. Yes, it does, and I was able to narrow the issue down further: I had been using the incorrect tao-converter.

Hi @curious_cat - that was it - thank you! I’ve marked your answer as the solution.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.