The problem was when using detectnet on the command line, so I think it was C++.
For now, I would just comment out this if block: https://github.com/dusty-nv/jetson-inference/blob/d563e1e3db041af7e01a7aade0245744022e8668/examples/detectnet/detectnet.cpp#L105
Then detectnet will still run even with no output stream
Dusty, I was thinking…
net1 = jetson_inference.detectNet(argv=['--model=/home/visioline/install/jetson-inference-dev/python/training/detection/ssd/models/jw9-opt/ssd-mobilnet-v2.onnx', '--labels=/home/visioline/install/jetson-inference-dev/python/training/detection/ssd/models/jw9-opt/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes', '--confidence=0.4', '--input-width=1920', '--input-height=1080', '--input-rate=30', '--tracking=True', '--tracker=IOU', '--tracker-min-frames=1', '--tracker-lost-frames=5', '--tracker-overlap=0.5'])
img1 = camera1.Capture()  # capture an image from camera 1
detections1 = net1.Detect(img1, overlay='box,labels,conf')  # overlay says how the defect is annotated on the final image
I'm looking for ways to speed things up and to lower GPU usage.
Questions:
- in jetson_inference.detectNet - if I set the input rate in detectNet to 30 FPS but my actual input rate (when getting frames via threading, using camera.Capture) is 60 FPS - what happens then? Does it mean detectNet processes every second frame?
- in net1.Detect - can I somehow disable the image output? I was thinking that I don't need the overlay or graphical information drawn on the image - can I somehow disable this? I don't know if I save any GPU time that way (I have 4 cameras at 60 FPS, 1920x1080). For me it is enough to get only the detected object data (class, coordinates, confidence). Does '--overlay=none' switch off the image processing as well, or does it only not show the output?
- is there any parameter in detectNet.cpp or elsewhere about memory usage? I use a Xavier NX 16GB; maybe there is some parameter I can increase to use more memory but gain in terms of GPU usage? I see mWorkspaceSize in detectNet.cpp - is there any point in increasing it?
- any ideas besides lowering the resolution or framerate to get better results in terms of FPS and lower GPU usage?
- If I want to 'disable' the overlay and its processing and CPU usage, is this the right place in the code:
// Detect
int detectNet::Detect( void* input, uint32_t width, uint32_t height, imageFormat format, Detection* detections, uint32_t overlay )
{
	// verify parameters
	if( !input || width == 0 || height == 0 || !detections )
	{
		LogError(LOG_TRT "detectNet::Detect( 0x%p, %u, %u ) -- invalid parameters\n", input, width, height);
		return -1;
	}

	if( !imageFormatIsRGB(format) )
	{
		LogError(LOG_TRT "detectNet::Detect() -- unsupported image format (%s)\n", imageFormatToStr(format));
		LogError(LOG_TRT "                       supported formats are:\n");
		LogError(LOG_TRT "                          * rgb8\n");
		LogError(LOG_TRT "                          * rgba8\n");
		LogError(LOG_TRT "                          * rgb32f\n");
		LogError(LOG_TRT "                          * rgba32f\n");
		return false;
	}

	// apply input pre-processing
	if( !preProcess(input, width, height, format) )
		return -1;

	// process model with TensorRT
	PROFILER_BEGIN(PROFILER_NETWORK);

	if( !ProcessNetwork() )
		return -1;

	PROFILER_END(PROFILER_NETWORK);

	// post-processing / clustering
	const int numDetections = postProcess(input, width, height, format, detections);

	// render the overlay
	if( overlay != 0 && numDetections > 0 )
	{
		if( !Overlay(input, input, width, height, format, detections, numDetections, overlay) )
			LogError(LOG_TRT "detectNet::Detect() -- failed to render overlay\n");
	}

	// wait for GPU to complete work
	//CUDA(cudaDeviceSynchronize()); // BUG is this needed here?

	// return the number of detections
	return numDetections;
}
ee-mode and ee-strength - do they use CPU or GPU? (I compared against switching those settings to 0.) Testing doesn't show a difference :) - in case anyone wants to know.
nvargus-daemon seems to consume CPU. I'm wondering if there is something I could do to lower CPU and GPU usage without losing too much…
ss << "nvarguscamerasrc sensor-id=" << mOptions.resource.port << " saturation=2 exposurecompensation=0 exposuretimerange=\"134000 158733000\" ee-mode=1 ee-strength=1 gainrange=\"1 22\" ! video/x-raw(memory:NVMM), width=(int)" << GetWidth() << ", height=(int)" << GetHeight() << ", framerate=" << (int)mOptions.frameRate << "/1, format=(string)NV12 ! nvvidconv flip-method=" << mOptions.flipMethod << " ! ";
If I comment out all of this:
/*
* create output stream
*/
videoOutput* output = videoOutput::Create(cmdLine, ARG_POSITION(1));
if( !output )
{
LogError("detectnet: failed to create output stream\n");
return 1;
}
Does this still run, and do I gain some performance? (I think it saves GPU and CPU because it doesn't need to process those things)…
Yes, it would be up to you which frames you feed into detectNet.Detect() and only those frames would be processed.
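For example, a minimal sketch of what that could look like (the camera URI, model paths, and the "every second frame" choice below are just placeholders, not the exact setup from this thread):

```python
import jetson_inference
import jetson_utils

# placeholder 60 FPS CSI camera and a detectNet configured similarly to the earlier snippet
camera1 = jetson_utils.videoSource("csi://0", argv=["--input-rate=60"])
net1 = jetson_inference.detectNet(argv=["--model=/path/to/ssd-mobilnet-v2.onnx",
                                        "--labels=/path/to/labels.txt",
                                        "--input-blob=input_0",
                                        "--output-cvg=scores",
                                        "--output-bbox=boxes"])

frame_count = 0
while True:
    img1 = camera1.Capture()          # keep draining the camera at its full 60 FPS
    if img1 is None:
        continue
    frame_count += 1
    if frame_count % 2 != 0:          # only every second frame is handed to the network (~30 FPS inference)
        continue
    detections1 = net1.Detect(img1)   # frames that are never passed here are simply not processed
```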
Yes, you can just use overlay='none' (or the --overlay=none command-line argument), and it will just skip doing the graphical overlay but still generate the detection results. You can also choose to do the overlay later in an independent step using the detectNet.Overlay() function.
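For example (a minimal sketch; the exact Overlay() call shown in the comment is an assumption about the Python binding, so check the API docs before relying on it):

```python
# run detection only -- nothing is drawn on the image
detections = net1.Detect(img1, overlay='none')

# the detection results are still fully populated
for det in detections:
    print(det.ClassID, det.Confidence, det.Left, det.Top, det.Right, det.Bottom)

# optionally render the boxes later as a separate step (see detectNet.Overlay()):
# net1.Overlay(img1, detections, 'box,labels,conf')
```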
I don't think there is any point in increasing it unless the TensorRT builder complains that the workspace size is not enough to run some of its kernels. I already increase it in the code for the ONNX detection models for this reason.
I've been meaning to try this ONNX Simplifier tool on the models and see if they still run with TensorRT and result in a performance increase: https://github.com/daquexian/onnx-simplifier (if you try it and it works, let me know)
I would just disable the overlay through the Python API or command-line as mentioned above.
It's working, already tested. I see no difference in results. Maybe my dataset and model are already quite simple. I needed to delete some files which were generated the first time I ran detectnet with the same model (at initialization), but I figured that out. :)
It seems to me that just capturing 4x 1920x1080 60 FPS images consumes about 60-70% of the Xavier CPU. I put a 'sleep' there before detectnet comes into play - and when detectnet is running, GPU usage rises to 70-100%.
I'm running detectnet in a 'while loop'; detectnet appears 4 times in it. Images come from another thread. If I measure this loop time, I get about 30 cycles per second.
So it basically means 4 neural networks take 4 frames to inference, which means my actual throughput is about 120 FPS at 1920x1080 - not bad. Maybe this is 'the roof' and I can't get more? Maybe I'm chasing ghosts :)
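For context, a stripped-down sketch of the loop structure I'm describing (the camera/model setup below is a placeholder; in my real code the frames come from a separate capture thread):

```python
import time
import jetson_inference, jetson_utils

# placeholder: 4 CSI cameras and 4 detectNet instances configured as above
cameras = [jetson_utils.videoSource(f"csi://{i}", argv=["--input-rate=60"]) for i in range(4)]
nets    = [jetson_inference.detectNet(argv=["--model=/path/to/ssd-mobilnet-v2.onnx",
                                            "--labels=/path/to/labels.txt",
                                            "--input-blob=input_0",
                                            "--output-cvg=scores",
                                            "--output-bbox=boxes"]) for _ in range(4)]

while True:
    start = time.perf_counter()
    for camera, net in zip(cameras, nets):
        img = camera.Capture()                        # the real app feeds frames from a capture thread instead
        detections = net.Detect(img, overlay='none')
        # ...use detections (class, coordinates, confidence) here...
    cycle = time.perf_counter() - start               # ~0.033 s -> ~30 loop cycles/s = ~120 processed frames/s
```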
By the way, in dev 'overlay=none' is not working. Just tested.
net1 = jetson_inference.detectNet(argv=['--model=/home/visioline/install/jetson-inference-dev/python/training/detection/ssd/models/jw9-opt/ssd-mobilnet-v2.onnx', '--labels=/home/visioline/install/jetson-inference-dev/python/training/detection/ssd/models/jw9-opt/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes', '--confidence=0.4', '--input-width=1920', '--input-height=1080', '--input-rate=30', '--tracking=True', '--tracker=KLT', '--tracker-min-frames=1', '--tracker-lost-frames=5', '--tracker-overlap=0.5', '--clustering=0.5', '--overlay=none'])
4096 is too much for it also; it doesn't support that much memory. But I don't think it helps me anyway.
Any ideas how to speed things up? Except using DeepStream, and except changing the input image rate (I need at least 60 FPS, otherwise I don't see detections on a moving detail).
In theory I can detect, for example, every second frame, but in practice I'm already doing that (because I can't get more than 30 FPS in a 4-camera setup in real life). I think I have a 'window' there - on a moving detail I have about 10-15 frames per area I see, so at the moment I see half of it - about 7 frames, of which about 3 are not good - I need more to make a good detection :)
In theory I could crop the image and reduce the image size in the nvarguscamerasrc command, but I don't believe it makes any difference (unless nvarguscamerasrc uses something other than the GPU or CPU for cropping images - some other chip on the system).
The overlay flag doesn't go to the detectNet constructor - it goes to the detectNet.Detect() function, like detectNet.Detect(img, overlay='none')
Using DeepStream and an INT8-quantized model trained with TAO would be my suggestion for best performance, as jetson-inference isn't super-optimized for multi-stream use-cases.
Thanks! :) I'll think about it :)
Overlay works this way - unfortunately it doesn't speed things up.
Jetson-inference seems so easy to use and understand that it's hard for me to start investigating DeepStream and how to use it. I'll try other ways first.
Intuition says that with so many objects and processes, finding one spot to improve GPU/CPU usage a little could give an enormous improvement in the final result… maybe reducing the detectnet network FPS, changing the output image resolution, etc. - making small changes…
Thanks anyway, Dusty. I'm very grateful for your assistance.
I'll go to sleep and think about it tomorrow…
detectNet -- loading detection network model from:
          -- prototxt     NULL
          -- model        /home/visioline/install/jetson-inference-dev/python/training/detection/ssd/models/jw9-opt/ssd-mobilnet-v2.onnx
          -- input_blob   'input_0'
          -- output_cvg   'scores'
          -- output_bbox  'boxes'
          -- mean_pixel   0.000000
          -- class_labels /home/visioline/install/jetson-inference-dev/python/training/detection/ssd/models/jw9-opt/labels.txt
          -- class_colors NULL
          -- threshold    0.400000
          -- batch_size   1
[TRT] TensorRT version 8.4.1
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::ScatterND version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT] Registered plugin creator - ::BatchTilePlugin_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::CropAndResizeDynamic version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_Explicit_TF_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_Implicit_TF_TRT version 1
[TRT] Registered plugin creator - ::ProposalDynamic version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 2
[TRT] Registered plugin creator - ::CoordConvAC version 1
[TRT] Registered plugin creator - ::DecodeBbox3DPlugin version 1
[TRT] Registered plugin creator - ::GenerateDetection_TRT version 1
[TRT] Registered plugin creator - ::MultilevelCropAndResize_TRT version 1
[TRT] Registered plugin creator - ::MultilevelProposeROI_TRT version 1
[TRT] Registered plugin creator - ::NMSDynamic_TRT version 1
[TRT] Registered plugin creator - ::PillarScatterPlugin version 1
[TRT] Registered plugin creator - ::VoxelGeneratorPlugin version 1
[TRT] Registered plugin creator - ::MultiscaleDeformableAttnPlugin_TRT version 1
[TRT] detected model format - ONNX (extension '.onnx')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] [MemUsageChange] Init CUDA: CPU +181, GPU +0, now: CPU 321, GPU 2618 (MiB)
[TRT] [MemUsageChange] Init builder kernel library: CPU +130, GPU +155, now: CPU 470, GPU 2811 (MiB)
[TRT] native precisions detected for GPU: FP32, FP16, INT8
[TRT] selecting fastest native precision for GPU: FP16
[TRT] found engine cache file /home/visioline/install/jetson-inference-dev/python/training/detection/ssd/models/jw9-opt/ssd-mobilnet-v2.onnx.1.1.8401.GPU.FP16.engine
[TRT] found model checksum /home/visioline/install/jetson-inference-dev/python/training/detection/ssd/models/jw9-opt/ssd-mobilnet-v2.onnx.sha256sum
[TRT] echo "$(cat /home/visioline/install/jetson-inference-dev/python/training/detection/ssd/models/jw9-opt/ssd-mobilnet-v2.onnx.sha256sum) /home/visioline/install/jetson-inference-dev/python/training/detection/ssd/models/jw9-opt/ssd-mobilnet-v2.onnx" | sha256sum --check --status
[TRT] model matched checksum /home/visioline/install/jetson-inference-dev/python/training/detection/ssd/models/jw9-opt/ssd-mobilnet-v2.onnx.sha256sum
[TRT] loading network plan from engine cache... /home/visioline/install/jetson-inference-dev/python/training/detection/ssd/models/jw9-opt/ssd-mobilnet-v2.onnx.1.1.8401.GPU.FP16.engine
[TRT] device GPU, loaded /home/visioline/install/jetson-inference-dev/python/training/detection/ssd/models/jw9-opt/ssd-mobilnet-v2.onnx
[TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 348, GPU 2819 (MiB)
[TRT] Loaded engine size: 7 MiB
[TRT] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[TRT] Deserialization required 51985 microseconds.
[TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +6, now: CPU 0, GPU 6 (MiB)
[TRT] Total per-runner device persistent memory is 0
[TRT] Total per-runner host persistent memory is 148864
[TRT] Allocated activation device memory of size 18595840
[TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +18, now: CPU 0, GPU 24 (MiB)
[TRT] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT]    -- layers       120
[TRT]    -- maxBatchSize 1
[TRT]    -- deviceMemory 18595840
[TRT]    -- bindings     3
[TRT] binding 0
                -- index   0
                -- name    'input_0'
                -- type    FP32
                -- in/out  INPUT
                -- # dims  4
                -- dim #0  1
                -- dim #1  3
                -- dim #2  512
                -- dim #3  512
[TRT] binding 1
                -- index   1
                -- name    'scores'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1
                -- dim #1  8190
                -- dim #2  25
[TRT] binding 2
                -- index   2
                -- name    'boxes'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1
                -- dim #1  8190
                -- dim #2  4
[TRT]
[TRT] binding to input 0 input_0 binding index: 0
[TRT] binding to input 0 input_0 dims (b=1 c=3 h=512 w=512) size=3145728
[TRT] binding to output 0 scores binding index: 1
[TRT] binding to output 0 scores dims (b=1 c=8190 h=25 w=1) size=819000
[TRT] binding to output 1 boxes binding index: 2
[TRT] binding to output 1 boxes dims (b=1 c=8190 h=4 w=1) size=131040
[TRT]
[TRT] device GPU, /home/visioline/install/jetson-inference-dev/python/training/detection/ssd/models/jw9-opt/ssd-mobilnet-v2.onnx initialized.
[TRT] detectNet -- number of object classes: 25
[TRT] detectNet -- maximum bounding boxes:   8190
[TRT] loaded 25 class labels
[TRT] detectNet -- number of object classes: 25
[TRT] loaded 0 class colors
[TRT] didn't load expected number of class colors (0 of 25)
[TRT] filling in remaining 25 class colors with default colors
If your application is multithreaded (presumably it may be, considering you have multiple streams), then you could take a crack at porting it to C++ where you have access to real parallel threads. But it may be diminishing returns. And of course, the other way to increase performance is to upgrade the hardware - you could look at Orin NX or Orin Nano.
Thanks.
Can I run ONNX INT8 on detectnet? I read a bit about it; it sounds promising. At the moment I imagine I have to somehow convert the model. And if I get 2x speed on the GPU then it's enough for me. Any elegant ideas how to test it quickly?
And the other thing is nvargus-daemon - if I can also gain some CPU headroom there, then we can move our device to the factory for real-life testing.
C++ - I was 14 years old when I used it last time. :) But I believe I can learn as much as needed. I make everything 10x slower than everyone else, and that's even more true with C++.
My main job actually isn't programming; I'm having fun developing a project which could be the foundation for a new business direction for us. I hope to keep building it on jetson-inference - that way it is easier to involve other team members later, and Python is also easier.
I thought about stronger Jetson modules, but I haven't found a baseboard. The A205 has 6 CSI connectors, which is good. I think if I use different cameras, for example LAN cameras, then I must convert the video stream, decode it, etc. But it's a way to go if other approaches don't work.
Technically you can run INT8 engines that have already been converted to TensorRT (just by passing the .engine file to the --model argument instead of the ONNX). I don't really have the INT8 calibration stuff implemented in jetson-inference that's required to generate the TensorRT engines in INT8 mode. I do have preliminary support for the TAO detectnet_v2 models though, and those are quite fast (although a lot goes into the quantization-aware training for training good INT8 models).
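A hedged sketch of what that would look like from the Python side, assuming an INT8 (or FP16) engine has already been built externally (e.g. with trtexec or the TensorRT API); the .engine path below is a placeholder:

```python
import jetson_inference

# load a pre-built TensorRT engine instead of the ONNX file; jetson-inference then skips its
# own builder step and just deserializes the plan
net = jetson_inference.detectNet(argv=[
    "--model=/path/to/ssd-mobilnet-v2.int8.engine",   # placeholder path to an externally built engine
    "--labels=/path/to/labels.txt",
    "--input-blob=input_0",
    "--output-cvg=scores",
    "--output-bbox=boxes",
])
```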
That does look like a nice carrier. My guess is that since it supports the Nano/NX form factor, it will at some point support the Orin NX/Nano modules as well, since those are mostly backwards-compatible (you could ask them what their plans are to be sure).
That's good news.
How'd you do the threading, considering you have 4 cameras?
I've actually not tested jetson-inference with more than 2 cameras haha (as I mentioned, it's not really the primary application use-case that I focus on). I would be torn between just polling the cameras with a timeout of 0 (so Capture() returns instantly if a frame isn't ready), or having one thread per camera which adds the frames to a queue once Capture() receives them. videoSource/videoOutput already use threads in GStreamer though, so it's already largely multithreaded to an extent.
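A minimal sketch of the one-thread-per-camera-plus-queue idea (queue size, names and the model arguments are assumptions, not tested code from this thread):

```python
import threading, queue
import jetson_inference, jetson_utils

# placeholder cameras/nets, configured as earlier in the thread
cameras = [jetson_utils.videoSource(f"csi://{i}") for i in range(4)]
nets    = [jetson_inference.detectNet(argv=["--model=/path/to/ssd-mobilnet-v2.onnx",
                                            "--labels=/path/to/labels.txt",
                                            "--input-blob=input_0",
                                            "--output-cvg=scores",
                                            "--output-bbox=boxes"]) for _ in range(4)]

def capture_worker(camera, frame_queue):
    # one thread per camera: block in Capture() and hand frames to the main loop
    while True:
        img = camera.Capture()
        if img is None:
            continue
        try:
            frame_queue.put_nowait(img)
        except queue.Full:
            pass                                      # drop the frame if the consumer is behind

queues = [queue.Queue(maxsize=2) for _ in cameras]
for camera, q in zip(cameras, queues):
    threading.Thread(target=capture_worker, args=(camera, q), daemon=True).start()

while True:
    for q, net in zip(queues, nets):
        img = q.get()                                 # latest frame from that camera's capture thread
        detections = net.Detect(img, overlay='none')
```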
:) I found some mistakes in my code. I was calling 'img1 = camera1.Capture()' multiple times. Don't ask :) - one of them was in the thread. So I already gained some speed :)
Maybe it's better to go to sleep :)
Maybe this story helps someone to think about those questions.
I have been testing. In Python, cProfile (a good tool to measure how much time functions spend) shows that the per-call time of "method 'Detect' of 'jetson.inference.detectNet' objects" is 0.011 s. Interestingly, it doesn't matter if I use 1, 2, 3 or 4 detectNets. If I look at GPU usage in 'jtop', the load is also the same, no matter how many 'detectnet' instances I initialize.
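For anyone who wants to reproduce that kind of measurement, a minimal cProfile sketch (main_loop is a placeholder for the real capture/Detect loop, run for a fixed number of cycles):

```python
import cProfile, pstats

def main_loop(cycles=300):
    # placeholder for the real capture/Detect while loop
    for _ in range(cycles):
        pass  # camera.Capture() + net.Detect(...) would go here

profiler = cProfile.Profile()
profiler.enable()
main_loop()
profiler.disable()

# the per-call time of "method 'Detect' of 'jetson.inference.detectNet' objects" shows up in this table
pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)
```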
Also, even if I reduce my cameras to 2 (disabling the code which pulls the other frames), pull images from 2 cameras and use only 1 detectnet (feeding in only 1 stream) - it's again low, 0.009 s. So a camera thread working in the background (which I do nothing with) costs about 0.002 s.
Third test: with one camera (the other camera pulls disabled) I get about 110-140 FPS in my 'while loop', not more, at 0.007 s per call (so the 0.002 s camera cost seems about right). So it seems logical that with 4 cameras it gets about 25-30 FPS.
0.007 * 4 = 0.028, so if I want 60 FPS then it's impossible, because the maximum is about 36 FPS with 4 cameras (a figure that did appear, but very rarely). So, threading or not, about 30 is the maximum.
So it's always about asking the right questions. How fast is the model? :)
While testing on the command line:
**with one camera and SSD Mobilenet V2, I got:**
[TRT] ------------------------------------------------
[TRT] Timing Report ssd-mobilnet-v2.onnx
[TRT] ------------------------------------------------
[TRT] Pre-Process CPU 0.03194ms CUDA 0.12070ms
[TRT] Network CPU 7.25217ms CUDA 6.06166ms
[TRT] Post-Process CPU 0.40730ms CUDA 0.41322ms
[TRT] Total CPU 7.69140ms CUDA 6.59558ms
[TRT] ------------------------------------------------
**with one camera and the SSD Mobilenet V2 optimized model, I got:**
[TRT] ------------------------------------------------
[TRT] Timing Report ssd-mobilnet-v2.onnx
[TRT] ------------------------------------------------
[TRT] Pre-Process CPU 0.03085ms CUDA 0.11174ms
[TRT] Network CPU 7.21002ms CUDA 6.05939ms
[TRT] Post-Process CPU 0.44272ms CUDA 0.44186ms
[TRT] Total CPU 7.68359ms CUDA 6.61299ms
[TRT] ------------------------------------------------
**with one camera and SSD Mobilenet V1, I got:**
[TRT] ------------------------------------------------
[TRT] Timing Report ssd-mobilnet-v1.onnx
[TRT] ------------------------------------------------
[TRT] Pre-Process CPU 0.02656ms CUDA 0.12330ms
[TRT] Network CPU 6.57198ms CUDA 5.38288ms
[TRT] Post-Process CPU 0.40429ms CUDA 0.40314ms
[TRT] Total CPU 7.00283ms CUDA 5.90931ms
[TRT] ------------------------------------------------
**with one camera and the built-in default model (v2) with detectnet, I got:**
[TRT] ------------------------------------------------
[TRT] Timing Report /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT] ------------------------------------------------
[TRT] Pre-Process CPU 0.03933ms CUDA 0.05888ms
[TRT] Network CPU 7.45713ms CUDA 6.27539ms
[TRT] Post-Process CPU 0.01165ms CUDA 0.01181ms
[TRT] Total CPU 7.50811ms CUDA 6.34608ms
[TRT] ------------------------------------------------
Conclusion:
- it's about 0.0066 s, which means the 0.007 s per call matches! My model is slow! :) How did I not notice it before :)
- jetson-inference with Python is doing very well; there aren't any bottlenecks (detectnet C++ app vs. the detectnet.py apps)
- my model isn't bad compared with the default models, but it's not good enough for me
Proposition: I need to thread detectnet (because sequentially 0.007 x 4 is slower than the roughly 0.011 in parallel) - and I hope practice confirms theory :)
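A sketch of that proposition using a thread pool (names and setup are placeholders; whether the four Detect() calls truly overlap depends on the Python binding releasing the GIL during inference, which still needs to be verified):

```python
from concurrent.futures import ThreadPoolExecutor
import jetson_inference, jetson_utils

# placeholder cameras/nets, set up as in the earlier sketches
cameras = [jetson_utils.videoSource(f"csi://{i}") for i in range(4)]
nets    = [jetson_inference.detectNet(argv=["--model=/path/to/ssd-mobilnet-v2.onnx",
                                            "--labels=/path/to/labels.txt",
                                            "--input-blob=input_0",
                                            "--output-cvg=scores",
                                            "--output-bbox=boxes"]) for _ in range(4)]

executor = ThreadPoolExecutor(max_workers=4)

while True:
    imgs = [camera.Capture() for camera in cameras]

    # submit the 4 Detect() calls together; if the GIL is held during inference they will
    # still run sequentially, in which case a C++ port or multiprocessing would be needed
    futures = [executor.submit(net.Detect, img, overlay='none')
               for net, img in zip(nets, imgs)]
    results = [future.result() for future in futures]
```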