Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.8.1
[*] DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other
Target Operating System
[*] Linux
QNX
other
Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
[*] DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure its number)
other
SDK Manager Version
1.9.3.10904
[*] other
Host Machine Version
[*] native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other
Hi
I am trying to execute sample_object_detector_tracker and am encountering errors. The command used is:
./sample_object_detector_tracker --input-type=camera --camera-type=F008A120RM0A --camera-group=c --camera-index=3
Attaching the error logs for the two scenarios below for reference:
a) Camera stream running with ./sample_camera while executing sample_object_detector_tracker in parallel.
Error Log : sample_object_detector_tracker_error_with_camera_stream_on.txt
b) Only executing sample_object_detector_tracker.
Error Log : sample_object_detector_tracker_error_with_camera_stream_off.txt
sample_object_detector_tracker_error_with_camera_stream_off.txt (12.6 KB)
sample_object_detector_tracker_error_with_camera_stream_on.txt (11.0 KB)
Dear @arjav.parikh,
Regarding "When camera stream is running with ./sample_camera and in parallel executing the sample_object_detector": only one process can use the camera at a time, so this is not a valid test.
I see the below messages in the logs:
[14-08-2024 11:37:13] CameraClient: no NITO found at /usr/share/camera/F008A120RM0A.nito
[14-08-2024 11:37:13] CameraClient: no NITO found at /usr/share/camera/f008a120rm0a.nito
[14-08-2024 11:37:13] CameraClient: using NITO found at /usr/share/camera/template.nito
I notice you can run sample_camera. What is the camera parameter used in your rig file?
Also, there are a few known issues with the object detector sample in DRIVE OS 6.0.6 + DW 5.10, and we suggest using the latest release. Do you have any limitation preventing an upgrade to a recent release?
Hi @SivaRamaKrishnaNV,
Below is the parameter line in rig.json:
"parameter": "camera-name=F008A120RM0AV2,interface=csi-ef,CPHY-mode=1,link=3,output-format=processed,async-record=1,file-buffer-size=16777216",
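For context, a minimal sketch of how such a parameter line typically sits inside a rig file sensor entry. Apart from the parameter string itself (taken from above), the surrounding structure, sensor name, and protocol value are illustrative assumptions, not copied from the actual rig.json:

```json
{
  "rig": {
    "sensors": [
      {
        "name": "camera:group:c:index:3",
        "protocol": "camera.gmsl",
        "parameter": "camera-name=F008A120RM0AV2,interface=csi-ef,CPHY-mode=1,link=3,output-format=processed,async-record=1,file-buffer-size=16777216"
      }
    ]
  }
}
```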
The reason for using DRIVE OS 6.0.6 is that Wi-Fi support is available in 6.0.6 but disabled in 6.0.8.
Is it possible to provide a solution for DRIVE OS 6.0.6, or can I simply compile the sample app in 6.0.8 and use it in the 6.0.6 environment?
Hi @SivaRamaKrishnaNV,
I tried porting sample_object_detector_tracker, compiled in the 6.0.8 docker container, onto DRIVE OS 6.0.6 along with all the necessary libraries, and I still face errors. Is it possible to resolve them so I can verify this docker-based sample_object_detector_tracker on DRIVE OS 6.0.6? Also, please let me know whether a solution for sample_object_detector_tracker on DRIVE OS 6.0.6 itself is possible.
Attaching logs for reference.
sample_object_detector_tracker_error_with_6_0_8_on_6_0_6.txt (5.3 KB)
Hi @SivaRamaKrishnaNV,
I am able to run ./sample_object_detector_tracker --input-type=camera --camera-type=F008A120RM0A --camera-group=c --camera-index=3 --tensorRT_model=/usr/local/driveworks-5.14/bin/tensorRT_model.bin on the 6.0.8 docker image.
Is it possible to run this sample app in 6.0.6?
Also, I tried running the sample app on a recorded video as per the reference link Sample App For Video, but I could not see bounding boxes on the cars as shown in the snapshot in that link.
Attaching logs for reference.
sample_object_detector_tracker_bounding_box_error_6_0_8.txt (30.4 KB)
I am not sure, as the generated binary links against different TRT and CUDA versions…
Did you test the object detector sample on the 6.0.8.1 release and notice no bounding-box detection?
Yes, I checked the object detector sample on 6.0.8.1, and I see only one bounding box drawn by default but no bounding boxes drawn on detected objects.
Is it with the default input video file, or your custom input video/live camera?
With the default input video files available in the docker image.
Hi @SivaRamaKrishnaNV,
Are you able to observe this scenario with the default video files present in the docker image for 6.0.8.1?
Also, a new DRIVE OS 6.0.10 version is now available. Should we use it instead of 6.0.8?
I don’t see any issue; I can see bounding boxes for cars. I tested on DRIVE OS 6.0.8.1.
We recommend using the latest DRIVE release if you don’t have any dependencies on older releases.
Hi @SivaRamaKrishnaNV
I tried sample_object_detector_tracker in 6.0.10 on a recorded video stream and still see the same issue as observed in 6.0.8.1 (it does not match the snapshot you shared previously). Attaching the logs for reference (sample_object_detector_tracker_bounding_box_error_6_0_10.txt). It seems there is some mismatch between your Orin environment and mine.
Also, when executing with a live camera stream, I see very small random red boxes, but not around the objects or faces.
sample_object_detector_tracker_bounding_box_error_6_0_10.txt (287.3 KB)
Hi @SivaRamaKrishnaNV,
Any inputs on the observations shared previously?
Is it possible to share a recording of the sample_object_detector output display on some shared drive? I would like to understand your comment about the random red boxes.
Note that this sample is just a demonstration of DNN model integration with the DW framework. It is based on YOLO.
I see the code detects/tracks only car objects:
if (YOLO_CLASS_NAMES[box.classIndex] == "car") // keep only "car" detections
{
    m_detectedBoxList.push_back(bbox);
    m_detectedBoxListFloat.push_back(bboxFloat);
    m_label.push_back(YOLO_CLASS_NAMES[box.classIndex]);

    // hand the detection to the tracker
    dwTrackedBox2D tmpTrackBox;
    tmpTrackBox.box        = bbox;
    tmpTrackBox.id         = -1; // no track id assigned yet
    tmpTrackBox.confidence = box.score;
    m_detectedTrackBox.push_back(tmpTrackBox);
}
Hi @SivaRamaKrishnaNV
The issue of boxes not being shown on the recorded video was resolved by placing tensorRT_model.bin.json in the same location as the tensorRT_model.bin file.
Regarding object detection using the camera stream: since, as you mentioned previously, the code detects only car objects, if I pass a stream containing cars to the camera, should they be detected and produce the same output as with the recorded video?
Yes. The JSON file contains the model preprocessing parameters and should be in the same location as the TRT model, as the application looks for it in the same path.
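To illustrate that colocation requirement, a minimal sketch using a throwaway temp directory as a stand-in for the real model path (the DriveWorks path named in the comment is assumed from earlier in this thread):

```shell
# Stand-in for the real model directory, e.g. /usr/local/driveworks-5.14/bin.
MODEL_DIR=$(mktemp -d)

# The serialized engine and its preprocessing-parameter JSON must sit side
# by side: the sample resolves tensorRT_model.bin.json next to the .bin.
touch "$MODEL_DIR/tensorRT_model.bin"
echo '{}' > "$MODEL_DIR/tensorRT_model.bin.json"

ls "$MODEL_DIR"
```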
I would expect the sample to work with other recorded videos. Note that this sample is just a demonstration of integrating a DNN into DW. Ideally, your custom model should be tested on your dataset and fine-tuned so that you can integrate it into DW using the DW DNN APIs.