Description
Hi, I’m currently running inference with DeepStream and YOLOv5s (v3.0) on two IP cameras on a Jetson Nano 4GB, and it runs with a long delay.
Here you can see the log.
I saw that the Jetson Nano can handle multiple IP cameras at full FPS, but I have at least 20 seconds of delay.
My GPU is at 99% usage (see the note below).
I tested with another model, YOLOv3-tiny, and had the same problem.
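As a side note, the GPU load on the Nano can be read with tegrastats (it appears as the GR3D_FREQ percentage in its output):
sudo tegrastats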
Environment
DeepStream: 5.1
JetPack: 4.5.1
Relevant Files
Here are my config files and the model to run with DeepStream:
config_infer_primary.txt (444 Bytes) deepstream_app_config.txt (1.1 KB)
yolov5s.engine (19.9 MB)
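For context, the latency-related parts of a deepstream_app_config.txt for two live RTSP sources usually look something like the sketch below. The values and the camera URI are illustrative placeholders, not the exact contents of the attached file:

[source0]
enable=1
# type=4 selects an RTSP source
type=4
uri=rtsp://<camera-ip>/stream
# rtspsrc jitterbuffer size in milliseconds
latency=200
# decode only every 2nd frame to lower the decoder load
drop-frame-interval=2

[streammux]
# treat the inputs as live streams
live-source=1
# one batch slot per camera
batch-size=2
# microseconds to wait before pushing an incomplete batch (about one 25 fps frame period)
batched-push-timeout=40000

[primary-gie]
enable=1
config-file=config_infer_primary.txt
# skip inference on every other batch to reduce GPU load
interval=1

[sink0]
enable=1
# 2 = EglSink (on-screen rendering)
type=2
# do not sync to the pipeline clock; render frames as soon as they arrive
sync=0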
Steps To Reproduce
I followed this tutorial to run YOLOv5 with DeepStream:
https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/YOLOv5.md
I run the command deepstream-app -c deepstream_app_config.txt.
Do you have any ideas? Thanks!
NVES
March 12, 2021, 1:37pm
Hi,
Can you try running your model with the trtexec command and share the "--verbose" log in case the issue persists?
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
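For example, something along these lines (on JetPack, trtexec is usually found under /usr/src/tensorrt/bin; the yolov5s.onnx name is a placeholder for your ONNX export, while the engine name matches the attached file):

/usr/src/tensorrt/bin/trtexec --onnx=yolov5s.onnx --verbose
/usr/src/tensorrt/bin/trtexec --loadEngine=yolov5s.engine --verbose

Note that an already-serialized engine has to be passed via --loadEngine; trtexec does not take the engine file as a positional argument.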
You can refer to the link below for the list of supported operators; if any operator is not supported, you need to create a custom plugin to support that operation.
# Supported ONNX Operators

TensorRT 8.4 supports operators up to Opset 17. The latest information on ONNX operators can be found [here](https://github.com/onnx/onnx/blob/master/docs/Operators.md).

TensorRT supports the following ONNX data types: DOUBLE, FLOAT32, FLOAT16, INT8, and BOOL.

> Note: There is limited support for INT32, INT64, and DOUBLE types. TensorRT will attempt to cast down INT64 to INT32 and DOUBLE down to FLOAT, clamping values to `+-INT_MAX` or `+-FLT_MAX` if necessary.

See below for the support matrix of ONNX operators in ONNX-TensorRT.

## Operator Support Matrix

| Operator | Supported | Supported Types   | Restrictions |
|----------|-----------|-------------------|--------------|
| Abs      | Y         | FP32, FP16, INT32 |              |
| Acos     | Y         | FP32, FP16        |              |
| Acosh    | Y         | FP32, FP16        |              |
| Add      | Y         | FP32, FP16, INT32 |              |

(Excerpt truncated; see the original file for the full matrix.)
Also, we request that you share your model and script, if not shared already, so that we can help you better.
Thanks!
Thanks for your reply.
I added the model to the post above.
I ran ./trtexec yolov5s.engine, but it doesn’t support the model format.
Hi @constantin.fite ,
You may get better help in the DeepStream forum. Please post your query there:
Discussions about the DeepStream SDK
Thank you.