I was running my prediction network with TensorRT on a TX1, with four cameras capturing at the same time.
From the log I saw there was a memory overflow in CaptureServiceDeviceViCsi.cpp.
There is no source code available to dig deeper, so could anyone give some tips?
SCF: Error OverFlow: (propagating from src/services/capture/CaptureServiceDeviceViCsi.cpp, function startCapture(), line 608)
SCF: Error OverFlow: (propagating from src/services/capture/CaptureServiceCore.cpp, function doCSItoMemCapture(), line 450)
SCF: Error OverFlow: (propagating from src/services/capture/CaptureServiceCore.cpp, function issueCapture(), line 334)
SCF: Error OverFlow: (propagating from src/services/capture/CaptureServiceDevice.cpp, function issueCaptures(), line 1087)
SCF: Error OverFlow: (propagating from src/services/capture/CaptureServiceDevice.cpp, function issueBubbleFillCapturesIfNeeded(), line 600)
SCF: Error OverFlow: (propagating from src/services/capture/CaptureServiceDevice.cpp, function issueCaptures(), line 918)
SCF: Error OverFlow: (propagating from src/common/Utils.cpp, function workerThread(), line 183)
SCF: Error OverFlow: Worker function failed (in src/common/Utils.cpp, function workerThread(), line 199)
Hello Allen_Z,
Let's try to narrow down the issue first:
- May I know after how many hours you saw the memory overflow issue?
- Since your use-case is running TensorRT with 4 camera sensors simultaneously, could you please try removing the TensorRT parts and checking the camera preview stability?
Hi JerryChang,
- I ran with TensorRT for about 1 hour; it blocked with the overflow error.
- I also tried running the four cameras at the same time but skipping the TensorRT inference. It ran for about 1 hour, and there seemed to be no difference from the first case.
The memory usage is as below:
ubuntu@tegra-ubuntu:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:           3995        2618         444          45         932        1305
Swap:             0           0
I think there is still enough memory available.
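If you want to log memory over time while the cameras run, the "available" column can be read out programmatically. A minimal sketch (the default `free -m` column layout is assumed; `parse_available_mb` is just an illustrative helper name):

```python
# Minimal sketch: extract the 'available' MiB from `free -m` output so it
# can be logged periodically while the capture pipeline runs.
def parse_available_mb(free_output: str) -> int:
    """Assumes the default header layout:
    total / used / free / shared / buff/cache / available."""
    for line in free_output.splitlines():
        if line.startswith("Mem:"):
            fields = line.split()
            return int(fields[-1])  # last column is 'available'
    raise ValueError("no Mem: line found")

# Sample mirroring the output above.
sample = """\
              total        used        free      shared  buff/cache   available
Mem:           3995        2618         444          45         932        1305
"""
print(parse_available_mb(sample))  # prints 1305
```

Note that the SCF "OverFlow" error may refer to an internal capture queue rather than system RAM, so system memory alone may not explain it.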
I tried another case:
- Multiple cameras running at the same time but without video conversion (from YUV to RGB); it ran for about 12 hours without error.
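Since the failure only appears with the YUV→RGB conversion enabled, it may be worth isolating that step. For reference, this is roughly what the conversion computes, shown as a CPU-side BT.601 sketch (illustrative only; the actual TX1 pipeline presumably uses a hardware converter):

```python
import numpy as np

# Illustrative CPU version of the YUV -> RGB step, using the BT.601
# full-range equations with U/V centered at 128.
def yuv_to_rgb(yuv: np.ndarray) -> np.ndarray:
    """yuv: float array of shape (..., 3) with Y, U, V in [0, 255]."""
    y = yuv[..., 0]
    u = yuv[..., 1] - 128.0
    v = yuv[..., 2] - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    rgb = np.stack([r, g, b], axis=-1)
    return np.clip(rgb, 0, 255).astype(np.uint8)

# A neutral-gray pixel (U = V = 128) should stay gray.
print(yuv_to_rgb(np.array([[128.0, 128.0, 128.0]])))  # [[128 128 128]]
```

If the hardware converter is allocating buffers per frame, that allocation path would be a plausible place for the queue to overflow.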
And I have another question:
- When I run TensorRT on one camera (the other three are not in use), the time per frame is about 60 ms. When I run four cameras at the same time and feed their frames to TensorRT, it takes about 100 ms per frame. The performance drops so much; is that normal?
Yes, that is expected.
GPU resources are limited on the Jetson platform.
Running more TensorRT tasks reduces the resources available to each application and lowers per-task performance.
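A back-of-envelope comparison using the numbers from the posts above: per-stream latency rises under sharing, but the aggregate throughput can still be higher (assuming the four inferences actually overlap on the GPU, which is an assumption here):

```python
# Latency vs. aggregate throughput under GPU sharing.
# Numbers come from the thread; overlap of the four streams is assumed.
single_cam_latency = 0.060   # seconds per frame, 1 camera
shared_latency     = 0.100   # seconds per frame, 4 cameras sharing the GPU

single_throughput = 1 / single_cam_latency   # total frames/s with 1 camera
shared_throughput = 4 / shared_latency       # total frames/s with 4 cameras, if overlapped

print(f"{single_throughput:.1f} fps vs {shared_throughput:.1f} fps aggregate")
```

So a 60 ms → 100 ms per-frame increase is consistent with contention for a shared GPU rather than a malfunction.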
Thanks.