Some questions about Jetson Xavier NX

Hello everyone,
I have some questions about this device. I worked on the Jetson Nano before, and I want to know how these two devices differ in initialization and in running programs like a HW decoder using OpenCV + GStreamer.
1- Does this device need an SD card? How much capacity is sufficient? For the Nano I used 64 GB.
2- How can I use OpenCV + GStreamer in Python for HW decoding?

import cv2

def open_rtsp(uri, width, height):
    # HW-decoded RTSP pipeline: omxh264dec decodes on the Jetson's NVDEC,
    # nvvidconv copies out of NVMM memory, videoconvert yields BGR for OpenCV
    gst = (
        'rtspsrc location={} latency=300 ! '
        'rtph264depay ! h264parse ! '
        'omxh264dec ! '
        'video/x-raw(memory:NVMM), format=(string)NV12 ! '
        'nvvidconv ! video/x-raw, width={}, height={}, format=(string)BGRx ! '
        'videoconvert ! video/x-raw, format=(string)BGR ! '
        'appsink'
    ).format(uri, width, height)
    return cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)
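
This is how I call it (the RTSP URL here is just a placeholder):

cap = open_rtsp('rtsp://<camera-ip>:554/stream', 1280, 720)
while cap.isOpened():
    ok, frame = cap.read()        # frame is a regular BGR numpy array
    if not ok:
        break
    # ... process frame ...
cap.release()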

Will that pipeline work on the Xavier?

3- How can I use the NVMM buffer directly for processing?

Regarding the SD-card size: as far as I remember the minimum required size is 16GB, so going with 64GB is fine.
I have a 32GB card, but it is only used for boot; all data is on an NVMe SSD.

Regarding the questions on OpenCV and GStreamer: I am not an expert here, but there should be no big difference to the Nano. For GPU support you must recompile OpenCV, though.

Thanks a lot, @dkreutz
What's an NVMe SSD?
Is recompiling OpenCV the same as on the Nano?
Does it have USB3? Why are the USB ports black? USB3 ports are usually blue.
Have you ever run TensorFlow models on the Xavier? If so, how can I set a model to run on the DLA or the GPU?

Basically it is the same; for the Xavier NX you must set CUDA capability 7.2 (Nano: 5.3).
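
After rebuilding, you can do a quick sanity check from Python that OpenCV actually picked up CUDA and the right architecture:

import cv2

# CUDA_ARCH_BIN in the build info should list 7.2 on the Xavier NX
print(cv2.getBuildInformation())

# Returns > 0 only if OpenCV was compiled with CUDA support
print(cv2.cuda.getCudaEnabledDeviceCount())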

To my knowledge, ports come in blue and black only when a device has both USB3 and USB2 ports. The Xavier NX has only USB3, so there is no need to differentiate.

The DLA cannot run TF models; you must convert the model to TensorRT first.

Thanks so much. @dkreutz

The DLA cannot run TF models; you must convert the model to TensorRT first.

Could you run a deep model on the DLA? If so, please share the code for converting to TensorRT and for the runtime.
Also, does the DLA accept INT8 only? What about its GPU, can it support INT8? The Jetson Nano doesn't support INT8, only FP16/FP32.

For TensorRT & DLA, search this forum and look into the documentation: Developer Guide :: NVIDIA Deep Learning TensorRT Documentation
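
As a rough sketch only (you would first export your TF model to ONNX, e.g. with tf2onnx; the file names are placeholders and the exact API differs between TensorRT versions), building an engine that targets the DLA looks roughly like this in Python:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open('model.onnx', 'rb') as f:              # placeholder file name
    parser.parse(f.read())

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28
config.set_flag(trt.BuilderFlag.FP16)            # DLA needs FP16 or INT8
config.set_flag(trt.BuilderFlag.GPU_FALLBACK)    # run unsupported layers on the GPU
config.default_device_type = trt.DeviceType.DLA
config.DLA_core = 0                              # Xavier NX has two DLA cores (0 and 1)

engine = builder.build_engine(network, config)   # deprecated in newer TensorRT releases
with open('model.trt', 'wb') as f:
    f.write(engine.serialize())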

As you know, the Jetson devices have shared memory. I want to know:
1- What's the difference between a CPU buffer and a GPU buffer?
2- When a device has shared memory, what do "CPU buffer" and "GPU buffer" mean? When an object is in a CPU buffer and the GPU wants to use that object, do we need to copy it from the CPU buffer to the GPU buffer? Can't the GPU directly access the CPU buffer?
3- I want to know about the architecture of the device: are these buffers in RAM, or independent of it?
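
On Jetson the CPU and GPU share the same physical DRAM, so a mapped (pinned) host buffer can be read and written by the GPU without an explicit copy. A minimal sketch with pycuda, assuming pycuda is installed and with an arbitrary buffer size:

import numpy as np
import pycuda.autoinit                      # creates a CUDA context
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

# Page-locked host memory that the GPU can map into its address space;
# on Jetson this is the same physical DRAM, so no copy is needed
host_buf = cuda.pagelocked_empty(
    1024, np.float32, mem_flags=cuda.host_alloc_flags.DEVICEMAP)
host_buf[:] = 1.0

mod = SourceModule("""
__global__ void scale(float *data)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] *= 2.0f;   // the GPU writes straight into the mapped host buffer
}
""")
scale = mod.get_function("scale")

# Hand the kernel the device-side view of the same memory - no memcpy
dev_ptr = np.intp(host_buf.base.get_device_pointer())
scale(dev_ptr, block=(256, 1, 1), grid=(4, 1))
cuda.Context.synchronize()

print(host_buf[:4])   # [2. 2. 2. 2.] - the CPU sees the GPU result directly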