I have solved the previous issue, but now I am facing another issue: frame drops during encoding.
I am using 3x 8MP cameras and 4x 3MP cameras. The more cameras I use, the more frame drops I see.
--- Camera Device Error Info ---
Notification Type: 100
Pipeline ID: 1
dev block link mask:
Tsc Timestamp: 0
GPIO indices:
--- End of the Error Info ---
[08-01-2024 14:49:05] CameraClient: raw bit type is missing or unexpected in virtual channel info, meta info might be incomplete
[08-01-2024 14:49:05] CameraClient: raw bit type is missing or unexpected in virtual channel info, meta info might be incomplete
[08-01-2024 14:49:05] CameraClient: raw bit type is missing or unexpected in virtual channel info, meta info might be incomplete
[08-01-2024 14:49:05] CameraClient: Notification received from pipeline index:0 of type: NOTIF_WARN_ICP_FRAME_DROP
--- Camera Device Error Info ---
Notification Type: 100
Pipeline ID: 0
dev block link mask:
Tsc Timestamp: 0
GPIO indices:
--- End of the Error Info ---
[08-01-2024 14:49:05] CameraClient: raw bit type is missing or unexpected in virtual channel info, meta info might be incomplete
[08-01-2024 14:49:05] CameraClient: Notification received from pipeline index:2 of type: NOTIF_WARN_ICP_FRAME_DROP
--- Camera Device Error Info ---
Notification Type: 100
Pipeline ID: 2
dev block link mask:
Tsc Timestamp: 0
GPIO indices:
--- End of the Error Info ---
[08-01-2024 14:49:05] CameraClient: raw bit type is missing or unexpected in virtual channel info, meta info might be incomplete
[08-01-2024 14:49:05] CameraClient: raw bit type is missing or unexpected in virtual channel info, meta info might be incomplete
[08-01-2024 14:49:05] CameraClient: Notification received from pipeline index:1 of type: NOTIF_WARN_ICP_FRAME_DROP
Is there any limit on the number of cameras?
Can you provide the encoder capabilities, or a sample application with source code for encoding multiple cameras?
Dear @abdul.rehman3,
Does that mean you can now perform encoding with multiple cameras and store the output to a file?
How many cameras can you run in parallel? How many encoder instances are used in the rig file?
I will check if there is any limitation on encoding.
To my knowledge, multiple codec instances can work.
If you are storing to disk, check whether you are hitting the disk's write-speed limit.
I can encode 4x 3MP cameras and 1x 8MP camera without frame drops.
I am not storing to disk; I am only encoding. The bottleneck is on the encoder side.
As far as I know, there is just one encoder instance on Orin.
What do you mean by multiple codec instances?
Moreover,
my rig uses the following parameters for all cameras: .....,type=user,gop-length=1,format=h264,output-format=yuv,encoder-instance=0,quality=24,fifo-size=16
Dear @abdul.rehman3,
What do you mean by multiple codec instances?
I meant multiple cameras using the same encoder, as in your case.
It looks like you are hitting the encoder's peak throughput.
Do you see the frame-drop issue when you add the additional 8MP camera (i.e., 4x 3MP and 2x 8MP cameras)? If so, can you try connecting a 3MP camera instead of the 8MP one, to leave enough headroom for encoding?
If I connect 1x 8MP and 3x 3MP cameras, there are no frame drops. I think we are hitting the encoder's throughput limit, as you mentioned.
I have 3 Orins. How can I cascade them and make them work as one? Can you share any guidelines? Also, how would I then encode 11 cameras? Do I need to run the encoder application on 3 different Orins, or would it be encoder-instance 0, 1, 2 in one encoder application?
Dear Krishna,
Thank you for the clarification. How can I decode using the hardware decoder? I am following the camera replay sample, and it decodes a video; however, I want to develop a decoder that reads h264 frames from memory.
I want to create a generic decoder that can decode any h264 stream, not just streams encoded by NVIDIA encoders.
I have an encoded h264 stream in memory (an array uint8_t* input). I want to use this array instead of a file and decode it into another array (uint8_t* output). The decoded output can be NV12, YUV, RGB, etc.
Dear @abdul.rehman3,
This doc shows how to enable C2C. We have the NvSciStream event sample to check the C2C feature.
We have not tested this use case. Could you share the complete details of the 11 cameras (resolutions and encoding type)? Let me check the feasibility.