I am a developer on the DriveOS 6.0 platform, currently working with the NvSciStream framework. As you know, there are samples for this topic; I tried the camera producer with both the CUDA consumer and the NvMedia2D consumer, and they work fine.
But when I was collecting GR3D resource statistics, I found that the CUDA consumer consumes about 11% of GR3D when handling the 4K cameras.
Could you please explain the reason, and also how to reduce this occupancy when using the CUDA consumer?
My steps are as follows:
1. Launch the camera producer with 4 x 8MP cameras.
2. Start 4 CUDA consumers to receive the camera data; the processing step only handles the block-linear to pitch-linear (bl2pl) conversion.
3. Observe the tegrastats output and average the GR3D percentage values.
4. The GR3D average is 11% when receiving those images at 10 fps.
Hi,
Since the default ‘send-receive’ loop count is too small to observe meaningful tegrastats output, I suggest you change the loop count from 32 to 32768 at line 981 of block_producer_uc1.c; then you will see valid frequency data after you launch the nvscistream_event_sample application.
"
line 981 if (prodData->counter == 32768) {
"
BTW, my DriveOS version is 6.0.
Hi,
emmm…
Let’s get back to the topic: even if I set the GPU to full performance mode (1275 MHz), GR3D still shows some load (1% on your side). How do you explain this?