Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.10.0
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other
Target Operating System
Linux
QNX
other
Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure of its number)
other
SDK Manager Version
2.1.0
other
Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other
Issue Description
I can run the sample_camera DriveWorks sample and see (render) the camera streams. We consume the cameras internally on the Orin, but we would like to show one of the camera streams live on a separate display (our HMI). The Orin cannot run object detection and drive the HMI at the same time.
For this we have to stream the camera frames out of the Orin. Is there a way to stream the camera out of the Orin, for example via a network connection to an external consumer?
I took a look at the IPC sample, but that isn't configured for camera data, is it? Is there a way to make it compatible with streaming camera data as well?
I have implemented something similar. You have two options.
The first is the IPC method you mentioned, which requires you to write your own custom data buffer for NvSciIpc. It should be fairly simple if you follow the sample code.
The second is to stream the serialized camera frame using dwSensorSerializer. You can serialize it into any format NVIDIA supports (H.264, H.265, etc.). This way you actually send the serialized video stream over your network, which should be more robust and platform-agnostic than option 1.
I have implemented both methods and they work correctly. Please let me know if you have any more detailed questions.
Dear @extern.ray.xie,
Thank you for providing inputs on this issue.
For both approaches, can you please share pseudocode/steps (a high-level overview of the workflow) that you used in your implementation? This will help others in the community who attempt a similar use case in the future. Thank you.
For option 1, I actually really just copied the sample code from dwFramework that uses the custom raw buffer. The main workflow is [obtain dwImage from camera] → [extract data buffer] → [send using NvSciIPC]; a rough sketch follows below. Really, the sample code is already very useful.
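To make the last two steps concrete, here is a minimal producer-side sketch. The endpoint name and the chunking scheme are illustrative assumptions rather than anything from the DriveWorks sample, and a real application would block on NvSciEventService instead of polling.

// Producer side of [extract data buffer] -> [send using NvSciIPC].
// Assumes NvSciIpcInit() has already run and 'ep' was opened with
// NvSciIpcOpenEndpoint() on a configured endpoint (e.g. "camera_ipc_0").
#include <nvsciipc.h>
#include <algorithm>
#include <cstddef>
#include <cstdint>

bool sendBufferOverIpc(NvSciIpcEndpoint ep, const uint8_t* data, size_t size)
{
    NvSciIpcEndpointInfo info{};
    if (NvSciIpcGetEndpointInfo(ep, &info) != NvSciError_Success)
        return false;

    // NvSciIpc moves fixed-size frames, so a camera buffer larger than
    // info.frame_size has to be split here and reassembled by the consumer.
    size_t offset = 0;
    while (offset < size)
    {
        uint32_t event = 0;
        if (NvSciIpcGetEvent(ep, &event) != NvSciError_Success)
            return false;
        if (!(event & NV_SCI_IPC_EVENT_WRITE))
            continue; // channel full; a real app would wait on an event service

        size_t chunk = std::min<size_t>(info.frame_size, size - offset);
        int32_t written = 0;
        if (NvSciIpcWrite(ep, data + offset, chunk, &written) != NvSciError_Success)
            return false;
        offset += static_cast<size_t>(written);
    }
    return true;
}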
For option 2, you should also refer to the DriveWorks sample code (sensor/camera/camera/main.cpp). The serializer part needs some pruning, but the main workflow is [set up a serializer for each SENSOR (not image)] → [grab camera frame] → [feed camera frame into serializer]; see the sketch after this paragraph. There is a catch with the serializer, though. If you just call the serializer by itself, you won't get anything. Instead, you need to register your own custom data callback with the serializer during initialization to handle the serialized data. My current approach is to publish a custom ROS message containing the serialized camera frame, but you can do whatever you want with it.
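A condensed sketch of that workflow, assuming the dwSensorSerializer API as used in the DriveWorks camera sample; the parameter string and the function names are illustrative, and the ROS publishing is left out:

#include <dw/sensors/Sensors.h>
#include <dw/sensors/SensorSerializer.h>
#include <dw/sensors/camera/Camera.h>
#include <cstddef>
#include <cstdint>

// Receives successive slices of the encoded (e.g. H.264) bitstream.
static void onSerializedData(const uint8_t* data, size_t size, void* userData)
{
    // forward 'data'/'size' to a socket, a ROS message, a file, ...
    (void)data; (void)size; (void)userData;
}

void serializeCameraLoop(dwSensorHandle_t camera)
{
    dwSerializerParams params{};
    params.parameters = "format=h264,bitrate=8000000,framerate=30,type=user";
    params.onData     = onSerializedData;
    params.userData   = nullptr; // opaque pointer handed back to the callback

    dwSensorSerializerHandle_t serializer = DW_NULL_HANDLE;
    dwSensorSerializer_initialize(&serializer, &params, camera); // one per SENSOR, not per image
    dwSensorSerializer_start(serializer);

    while (true) // replace with your run condition
    {
        dwCameraFrameHandle_t frame = DW_NULL_HANDLE;
        if (dwSensorCamera_readFrame(&frame, 333333, camera) != DW_SUCCESS)
            continue;
        dwSensorSerializer_serializeCameraFrameAsync(frame, serializer);
        dwSensorCamera_returnFrame(&frame);
    }

    dwSensorSerializer_stop(serializer);
    dwSensorSerializer_release(serializer);
}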
In the meantime, maybe you can help with this.
I see in the DriveWorks documentation that dwSerializerParams takes user input for "type". It states: if the value of 'type' is 'user', then the serializer uses the provided callback to stream data. When new data is available, the serializer calls the function provided in onData and puts the data in the buffer provided by userData.
Does this mean that the callback function has to be assigned to onData? And is userData a buffer inside my function into which the serializer writes data?
Is that the correct interpretation of how the serializer output can be used?
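Judging from how the callback ends up being used later in this thread (socketStream() receives the serialized bytes directly as an argument), a plausible reading is that userData is an opaque context pointer rather than an output buffer. A minimal sketch, assuming the callback type is void(const uint8_t*, size_t, void*), with MyStreamer as a hypothetical consumer:

#include <dw/sensors/SensorSerializer.h>
#include <cstddef>
#include <cstdint>
#include <cstdio>

struct MyStreamer // hypothetical consumer of the serialized bytes
{
    void write(const uint8_t* data, size_t size)
    {
        (void)data;
        std::printf("received %zu serialized bytes\n", size);
    }
};

void wireCallback(dwSerializerParams& params, MyStreamer* streamer)
{
    // The serialized bytes arrive through the callback's 'data'/'size'
    // arguments; 'userData' is just the opaque pointer set below, handed
    // back unchanged -- it is not a buffer the serializer writes into.
    params.onData = [](const uint8_t* data, size_t size, void* userData) {
        static_cast<MyStreamer*>(userData)->write(data, size);
    };
    params.userData = streamer;
}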
OK, I've figured out most of the layout of the script and have set up my function, which is called when onData is triggered.
Do you use dwSensorSerializer_serializeCameraFrame, dwSensorSerializer_serializeCameraFrameAsync, or just dwSensorSerializer_serializeData? The serializer loop already calls dwSensorSerializer_start, so it seems that using dwSensorSerializer_serializeCameraFrame is not an option, right?
The error is:
Driveworks exception thrown: DW_INTERNAL_ERROR: IDRLookup: NAL lookup failure
terminate called after throwing an instance of 'std::runtime_error'
what(): [2025-04-29 16:24:45] DW Error DW_INTERNAL_ERROR executing DW function:
dwSensorSerializer_serializeCameraFrame(frame[i], m_serializer[dwCameraISPType::DW_CAMERA_ISP0]);
where frame[i] is assigned by dwSensorCamera_readFrame(&frame[i], 333333, m_camera[i]);
The output-format in my rig file is "processed", and these are the image properties of the frame in the camera frame just before the serializer is called:
So it seems the best thing is to keep it simple for now. I scrapped all of that and started again from a clean sample_camera script.
I just added my function, and it prints out the data size:
Running the socket stream: 42680
Running the socket stream: 210652
Running the socket stream: 1003
Running the socket stream: 1369
Running the socket stream: 1059
Running the socket stream: 1040
Running the socket stream: 1341
Running the socket stream: 1703
Running the socket stream: 2132
Running the socket stream: 3371
Running the socket stream: 5422
Running the socket stream: 4201
Running the socket stream: 3998
Running the socket stream: 3614
Running the socket stream: 15868
Running the socket stream: 20708
Running the socket stream: 284758
Running the socket stream: 4195
Running the socket stream: 3566
Running the socket stream: 2070
Running the socket stream: 1780
Running the socket stream: 2090
Running the socket stream: 2125
Running the socket stream: 1811
Running the socket stream: 2920
Running the socket stream: 5576
Running the socket stream: 11873
Running the socket stream: 10027
Running the socket stream: 7935
Running the socket stream: 8552
Running the socket stream: 9901
Running the socket stream: 9793
Running the socket stream: 314918
Running the socket stream: 4296
Running the socket stream: 4201
Running the socket stream: 5033
Yes, please. But first, can you explain why the size of the serialized data is so variable?
I'm guessing that a camera frame at 4K resolution and 30 fps is being serialized, so surely the size of the data being sent out should be constant.
Secondly, what exactly is being serialized? It is a bit unclear to me, first because of the variable data size, and second because when I run a deserialization receiver script, I don't get the same camera frame image back.
Dear @adityen.sudhakaran,
My apologies for missing this topic.
Could you provide sample code to reproduce the issue?
Also, is it OK if you get a CPU buffer of the input camera (ISP output) and use it later to send to your ROS node? If this fits your requirement, I can provide a sample snippet that gets the CPU buffer of the input camera (ISP output) using dwImageStreamer and stores it to a file for verification at your end; a sketch of what that could look like follows below.
dwSensorSerializer needs to be used if you want to perform H.264/H.265 encoding.
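A minimal sketch of that offered approach, assuming the CUDA RGBA camera output path; the function name, file handling, and timeouts are illustrative, and error checking is omitted:

#include <dw/image/Image.h>
#include <dw/image/ImageStreamer.h>
#include <dw/sensors/camera/Camera.h>
#include <cstdio>

void dumpIspFrameToFile(dwSensorHandle_t camera, dwContextHandle_t ctx, const char* path)
{
    dwCameraFrameHandle_t frame = DW_NULL_HANDLE;
    dwSensorCamera_readFrame(&frame, 333333, camera);

    // Grab the ISP output as a CUDA image.
    dwImageHandle_t imageCuda = DW_NULL_HANDLE;
    dwSensorCamera_getImage(&imageCuda, DW_CAMERA_OUTPUT_CUDA_RGBA_UINT8, frame);

    dwImageProperties props{};
    dwImage_getProperties(&props, imageCuda);

    // Stream CUDA -> CPU so the pixels become host-accessible.
    dwImageStreamerHandle_t streamer = DW_NULL_HANDLE;
    dwImageStreamer_initialize(&streamer, &props, DW_IMAGE_CPU, ctx);

    dwImageStreamer_producerSend(imageCuda, streamer);

    dwImageHandle_t imageCpuHandle = DW_NULL_HANDLE;
    dwImageStreamer_consumerReceive(&imageCpuHandle, 33000, streamer);

    dwImageCPU* imageCpu = nullptr;
    dwImage_getCPU(&imageCpu, imageCpuHandle);

    // Write raw RGBA bytes for verification; a ROS bridge would copy
    // the same buffer into a message instead.
    FILE* f = std::fopen(path, "wb");
    std::fwrite(imageCpu->data[0], 1, imageCpu->prop.height * imageCpu->pitch[0], f);
    std::fclose(f);

    dwImageStreamer_consumerReturn(&imageCpuHandle, streamer);
    dwImageStreamer_producerReturn(nullptr, 33000, streamer);
    dwImageStreamer_release(streamer);
    dwSensorCamera_returnFrame(&frame);
}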
I'm just using a modified version of the sample_camera DriveWorks sample; I've attached it here. The socketStream() function is called via serializerParams.onData, and the data argument is sent out over the socket.
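Since the attachment itself is not reproduced in this thread, here is a hedged reconstruction of what such a callback might look like; the length-prefix framing and the global socket are illustrative choices, not the poster's actual code:

#include <arpa/inet.h>
#include <sys/socket.h>
#include <cstddef>
#include <cstdint>
#include <cstdio>

static int g_sock = -1; // connected TCP socket, established elsewhere

// Matches the onData callback shape: called once per serialized chunk.
static void socketStream(const uint8_t* data, size_t size, void* /*userData*/)
{
    std::printf("Running the socket stream: %zu\n", size);

    // Length-prefix each chunk so the receiver can reassemble the
    // variable-sized pieces of the encoded stream.
    uint32_t len = htonl(static_cast<uint32_t>(size));
    send(g_sock, &len, sizeof(len), MSG_NOSIGNAL);

    size_t off = 0;
    while (off < size) // send() may transmit only part of the buffer
    {
        ssize_t n = send(g_sock, data + off, size - off, MSG_NOSIGNAL);
        if (n <= 0)
            break; // socket error; drop the rest of this chunk
        off += static_cast<size_t>(n);
    }
}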
Hi @extern.ray.xie, for your option 1 '[obtain dwImage from camera] → [extract data buffer] → [send using NvSciIPC]', could you please indicate exactly which sample(s) you used as the base to make it work? Is the 'Camera Extra Sample' enough? Many thanks!