Thanks so much for the reply; I tested it out.
It doesn't seem to make much difference, but I have four follow-up questions… Please advise. Thanks again!
(1) I did apply the command below… but it seems to be running only two cores… is that the right path to take? I thought we should try to run more cores.
sudo nvpmodel -m 0
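As a side note, the available power modes and how many CPU cores each one brings online are listed in /etc/nvpmodel.conf on the device. Below is a minimal sketch that counts the online cores per mode; the embedded sample text is illustrative only (the mode names and IDs are assumptions), so check your own nvpmodel.conf for the real values:

```python
import re

def cores_per_mode(conf_text):
    """Count CPU cores marked online for each power mode in nvpmodel.conf-style text."""
    modes = {}
    current = None
    for line in conf_text.splitlines():
        header = re.match(r'<\s*POWER_MODEL\s+ID=(\d+)\s+NAME=(\S+)\s*>', line)
        if header:
            current = '%s (ID=%s)' % (header.group(2), header.group(1))
            modes[current] = 0
        elif current and re.match(r'CPU_ONLINE\s+CORE_\d+\s+1', line):
            modes[current] += 1
    return modes

# Illustrative sample only -- the real mode names/IDs live in /etc/nvpmodel.conf.
sample = """
< POWER_MODEL ID=0 NAME=MODE_15W_2CORE >
CPU_ONLINE CORE_0 1
CPU_ONLINE CORE_1 1
CPU_ONLINE CORE_2 0

< POWER_MODEL ID=2 NAME=MODE_15W_6CORE >
CPU_ONLINE CORE_0 1
CPU_ONLINE CORE_1 1
CPU_ONLINE CORE_2 1
"""
print(cores_per_mode(sample))
```

Running `sudo cat /etc/nvpmodel.conf` and feeding that text in would show which mode ID enables the most cores.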
(2) tegrastats shows it's using swap… and the GPU is at 99%, as below:
RAM 7082/7772MB (lfb 69x4MB) SWAP 2017/3886MB (cached 29MB) CPU [96%@1907,95%@1907,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 99% AO@50C GPU@53.5C PMIC@100C AUX@50.5C CPU@52.5C thermal@…C VDD_IN 12045/8815 VDD_CPU_GPU_CV 8357/5514 VDD_SOC 1549/1411
RAM 7082/7772MB (lfb 69x4MB) SWAP 2017/3886MB (cached 29MB) CPU [70%@1907,75%@1907,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 99% AO@50C GPU@53.5C PMIC@100C AUX@50.5C CPU@52.5C thermal@…C VDD_IN 11922/8844 VDD_CPU_GPU_CV 8398/5541 VDD_SOC 1508/1412
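To make the swap usage and GPU load easier to watch over time, each tegrastats line can be parsed with a small script. This is a minimal sketch; the regexes assume the field layout shown in the lines above:

```python
import re

def parse_tegrastats(line):
    """Extract RAM, SWAP (MB used/total) and GPU load (%) from one tegrastats line."""
    ram = re.search(r'RAM (\d+)/(\d+)MB', line)
    swap = re.search(r'SWAP (\d+)/(\d+)MB', line)
    gpu = re.search(r'GR3D_FREQ (\d+)%', line)
    return {
        'ram_used_mb': int(ram.group(1)), 'ram_total_mb': int(ram.group(2)),
        'swap_used_mb': int(swap.group(1)), 'swap_total_mb': int(swap.group(2)),
        'gpu_pct': int(gpu.group(1)),
    }

# One of the lines quoted above, shortened to the fields we parse.
line = ('RAM 7082/7772MB (lfb 69x4MB) SWAP 2017/3886MB (cached 29MB) '
        'CPU [96%@1907,95%@1907,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 99%')
print(parse_tegrastats(line))
```

Piping `tegrastats` output through this line by line would give a numeric log of the swap growth.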
(3) I do convert my model to TensorRT (see my partial code below)… Is my code the right way to do it, or are there any links for me to check/study? Please note my_frozen_graph is based on SSD MobileNet, but it's transfer-learned and the input image size is about 4 times the 300x300 image size.
import tensorflow.contrib.tensorrt as trt
trt_graph = trt.create_inference_graph(
    input_graph_def=my_frozen_graph,   # GraphDef loaded from my frozen .pb
    outputs=output_node_names,         # list of my output node names
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16')
(4) I am using CUDA v10.2 and TF version 1.15.2+nv20.6…
Does CUDA v10.2 support TensorFlow 1.x (1.15.2 in my case)?