Are the perception samples available for DriveWorks 4.0?

Please provide the following info (check/uncheck the boxes after creating this topic):
Software Version
DRIVE OS Linux 5.2.6
DRIVE OS Linux 5.2.6 and DriveWorks 4.0
DRIVE OS Linux 5.2.0
DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
other

Target Operating System
Linux
QNX
other

Hardware Platform
NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
other

SDK Manager Version
1.9.1.10844
other

Host Machine Version
native Ubuntu 18.04
other

I need a pedestrian & car detector. In DriveWorks 3.5 I could use DriveNet, but this is not available in DriveWorks 4.0.
Is there a TensorRT model which I could download and use via the DNN tensor API (sample_dnn_tensor)?
Or an ONNX model which could be converted to a TensorRT 6.5 engine?

Please refer to the diagram on NVIDIA DRIVE OS | NVIDIA Developer. The Perception module is part of DRIVE AV, situated above DriveWorks, and it’s exclusively available in DRIVE Software.

Thanks. Can you advise a DriveNet-like model which I could use with the DNN tensor API? It is for an experimental setup which will never go on the road.

YOLO could be a suitable choice.

Hi, I tried to use YOLOv3.

I get the following error when optimizing the ONNX model to a TensorRT engine.

tensorRT_optimization (TensorRT) 6.5.00
This is a DNN generating tool based on TensorRT.
[26-04-2024 11:16:42] WARNING: ExplicitBatch is enabled by default for ONNX models.
[26-04-2024 11:16:43] DNNGenerator: Initializing TensorRT generation on model /home/nvidia/yolov3.onnx.
----------------------------------------------------------------
Input filename:   /home/nvidia/yolov3.onnx
ONNX IR version:  0.0.6
Opset version:    11
Producer name:    NVIDIA TensorRT sample
Producer version: 
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
[26-04-2024 11:16:44] DNNGenerator: Input "000_net": 3x608x608
[26-04-2024 11:16:44] DNNGenerator: Output "082_convolutional": 255x19x19
[26-04-2024 11:16:44] DNNGenerator: Output "094_convolutional": 255x38x38
[26-04-2024 11:16:44] DNNGenerator: Output "106_convolutional": 255x76x76
[26-04-2024 11:16:44] DNNGenerator: Building Engine...
[26-04-2024 11:18:32] ../rtSafe/cudaDevice.cpp (190) - Cuda Error in allocateImpl: 2 (out of memory)
[26-04-2024 11:18:32] ../rtSafe/resources.cpp (36) - OutOfMemory Error in gieCudaMalloc: 0 (GPU Memory allocation fails!)
[26-04-2024 11:18:32] FAILED_ALLOCATION: std::exception
Segmentation fault (core dumped)

I’m running everything on the target.

I generated the ONNX model with the scripts in the /usr/src/tensorrt/samples/python/yolov3_onnx folder.

This is how I ran the optimization (on the target):

/usr/local/driveworks-4.0/tools/dnn/tensorRT_optimization --modelType=onnx --onnxFile=/home/nvidia/yolov3.onnx --out=/home/nvidia/yolov3.bin

What I understood is that we need to build the YOLOv3 engine on the target because of the ROI that is injected in one of the layers, right? So any other TensorRT SDK version won't work, right?

After some googling I found that batch and subdivisions are set to the training values in yolov3.cfg. I set them to the testing values and no longer get the out-of-memory problem; see the cfg excerpt below.
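For reference, the top of a stock Darknet yolov3.cfg (the file the yolov3_onnx sample converts) looks roughly like the excerpt below; which exact cfg was used here is an assumption, but for inference the Testing values should be the active ones:

[net]
# Testing
batch=1
subdivisions=1
# Training
# batch=64
# subdivisions=16
width=608
height=608
channels=3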

However, after interpreting the output, I don't get any object detections.

The inference also runs very slowly.

Attached you can find the source code:
main.zip (10.1 KB)
yolov3.bin.zip (410 Bytes)
yolov3.zip (856 Bytes)

This is how I run it:

./sample_dnn_tensor --input-type=video --video=/usr/local/driveworks/data/samples/sfm/triangulation/video_0.h264 --tensorRT_model=yolov3.bin
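As a reference point for the output interpretation: each 255-channel head is 3 anchors x (4 box coordinates + objectness + 80 class scores) in CHW layout. Below is a minimal decoding sketch for the 19x19 head of a 608x608 YOLOv3; the anchor values come from the stock yolov3.cfg, while the function name, the 0.5 threshold, and the dummy data in main() are illustrative assumptions, not the code in main.zip:

#include <cmath>
#include <cstdio>
#include <vector>

// Decode one YOLOv3 output head of shape (3 * 85) x gridH x gridW in CHW layout.
// The anchors below are the ones assigned to the 19x19 head of a 608x608 model.
static void decodeHead(const float* out, int gridH, int gridW, int netSize)
{
    const int numClasses = 80;
    const int numAnchors = 3;
    const float anchors[numAnchors][2] = {{116.0f, 90.0f}, {156.0f, 198.0f}, {373.0f, 326.0f}};
    const float stride = static_cast<float>(netSize) / static_cast<float>(gridW);
    auto sigmoid = [](float v) { return 1.0f / (1.0f + std::exp(-v)); };

    for (int a = 0; a < numAnchors; ++a)
    {
        for (int y = 0; y < gridH; ++y)
        {
            for (int x = 0; x < gridW; ++x)
            {
                // Channel c of anchor a lives at ((a * 85 + c) * gridH + y) * gridW + x.
                auto at = [&](int c) { return out[((a * (numClasses + 5) + c) * gridH + y) * gridW + x]; };

                float objectness = sigmoid(at(4));
                if (objectness < 0.5f) // illustrative threshold
                    continue;

                // Box center and size in network-input pixels (608x608).
                float bx = (sigmoid(at(0)) + static_cast<float>(x)) * stride;
                float by = (sigmoid(at(1)) + static_cast<float>(y)) * stride;
                float bw = anchors[a][0] * std::exp(at(2));
                float bh = anchors[a][1] * std::exp(at(3));

                // Pick the best class.
                int bestClass = 0;
                float bestScore = 0.0f;
                for (int c = 0; c < numClasses; ++c)
                {
                    float score = objectness * sigmoid(at(5 + c));
                    if (score > bestScore) { bestScore = score; bestClass = c; }
                }
                std::printf("class %d score %.2f box x=%.1f y=%.1f w=%.1f h=%.1f\n",
                            bestClass, bestScore, bx - bw / 2.0f, by - bh / 2.0f, bw, bh);
            }
        }
    }
}

int main()
{
    // Dummy data just to make the sketch compile and run; real code would pass the
    // 255x19x19 "082_convolutional" blob produced by the network.
    std::vector<float> dummy(255 * 19 * 19, -5.0f);
    decodeHead(dummy.data(), 19, 19, 608);
    return 0;
}

The other two heads (38x38 and 76x76) use the smaller anchor sets with strides 16 and 8, and the resulting boxes still need non-maximum suppression.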

Please check if these topics can help resolve your issue:

They were my source of inspiration. I've managed to get the network running, but the output has weird values. Could you have a look at my code (main.zip, see above) to check if everything is set up properly?

Dear @erwin.rademakers,
Could you double-check the preprocessing step needed for YOLOv3? I see the input data is scaled by 1/255 in /usr/src/tensorrt/samples/python/yolov3_onnx/data_processing.py. Make sure the data fed into the YOLOv3 network is the same in both the TensorRT sample and the DriveWorks sample for comparison.
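In case it helps others, here is a minimal sketch of that scaling step, assuming the frame has already been converted to RGB and resized to 608x608; the function name is illustrative, not DriveWorks API:

#include <cstdint>
#include <vector>

// Convert an interleaved RGB frame (HWC, uint8, already resized to the network
// resolution) into the planar CHW float32 layout YOLOv3 expects, scaled to [0, 1].
std::vector<float> toYoloInput(const uint8_t* rgbHWC, int width, int height)
{
    std::vector<float> chw(static_cast<size_t>(3) * width * height);
    for (int c = 0; c < 3; ++c)
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                chw[(c * height + y) * width + x] =
                    static_cast<float>(rgbHWC[(y * width + x) * 3 + c]) / 255.0f; // the 1/255 scaling
    return chw;
}

Whichever component does this conversion in the DriveWorks sample, the scale coefficient has to match what data_processing.py applies on the TensorRT side.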

Thank you, that was the part I was missing!
