I made a mistake in my post: I am trying to encode 3 streams from 1 camera.
If you have read the post ( https://forums.developer.nvidia.com/t/encode-frames-using-nvenc-v4l2/183424/17 ), you know that the 3 streams come from 3 VIC instances. Currently I can encode 1 stream.
I don't expect that I'm out of capacity, but these are the camera's resolutions:
720p @ 120 FPS
1080p @ 62 FPS (65 FPS)
1920x1200 @ 60 FPS
*The three resolutions work: the capture, color conversion, and encoding are all working well.*
And these are the 3 output resolutions I want to encode:
the original (one of the 3 above), plus 720p and 480p
The encoder should encode at 30-60 FPS, as set in the params.
I read sample 15 in the Jetson Linux API. It seems that you thread the entire system, including the resolutions and the create-encoder function. I was expecting to open the encoder once and create threads only when they are really needed: when the buffer is declared.
But for your reference: I try to encode one resolution among 720p @ 120 FPS, 1080p @ 62 FPS, or 1920x1200 @ 60 FPS, and I want 3 outputs from the encoder: the original selected before encoding (so the native one from the camera), plus 720p and 480p.
When I try to create a second "VideoEncoder" object with a different name (for the object, but also for the encoder), it always reports (from the second one onward) that the encoder is busy. This seems to happen when queuing is called…
So, for reference, I created 2 objects from my encoder class. They use 2 different resolutions, and both come from the one camera.
In fact, it's nearly the same as the frontend sample, except that there is one camera (so one input) and 3 outputs from the VIC, so 3 different buffers to encode.