Different NvSciBuf buffer types within a single NvSciStream packet fail to reconcile

Software Version
[*] other DRIVE OS version

Target Operating System
[*] QNX

Hardware Platform
[*] other

SDK Manager Version
[*] other

Host Machine Version
[*] other

I am attempting to extend the NvSciStream unicast safety sample to use two different buffer types, an NvMedia image buffer and a raw buffer, in a single NvSciStream packet. The packet is to be exchanged between two QNX processes over a single stream in unicast fashion.

When I run the producer and consumer in two separate terminals on QNX, on an NVIDIA DRIVE Xavier system using pdk526, the producer sample reports the following errors:
[ERROR: NvSciBufAttrListAreDataTypeCompatible]: bufTypes 1 and 2 cannot be reconciled
[ERROR: NvSciBufAttrListReconcile]: NvSciBuf datatypes cannot be reconciled

My question is: does NvSciStream not support multiple buffer types in a single packet? If not, what options do I have to ensure that these two buffers are delivered together, synchronized under the same fence/semaphore?
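For context, here is how I would expect the per-element setup to look. This is only a sketch based on my reading of the DRIVE OS 5.2 NvSciStream API: each packet element gets its own NvSciBufAttrList, so the image attributes and raw-buffer attributes would be reconciled independently rather than against each other. The `ELEMENT_TYPE_*` IDs and `producerBlock` are placeholders of my own.

```
// Sketch (assumption: DRIVE OS 5.2 NvSciStream element API): declare two
// packet elements, each carrying its own attribute list, so the NvMedia
// image attrs and the raw-buffer attrs never need to reconcile together.
uint32_t elemCount = 2U;
CHECK_NVSCIERR(NvSciStreamBlockPacketElementCount(producerBlock, elemCount));

// Element 0: NvMedia image buffer
CHECK_NVSCIERR(NvSciStreamBlockPacketAttr(producerBlock, 0U,
        ELEMENT_TYPE_IMAGE, NvSciStreamElementMode_Asynchronous,
        bufAttrLists[0]));

// Element 1: raw metadata buffer
CHECK_NVSCIERR(NvSciStreamBlockPacketAttr(producerBlock, 1U,
        ELEMENT_TYPE_METADATA, NvSciStreamElementMode_Asynchronous,
        bufAttrLists[1]));
```

Is this the intended way to keep both buffers in one packet, with the packet-level fence covering both elements?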

Here is my sample code for creating the buffer attribute lists for the two different buffers:

virtual void createBufAttrList(NvSciBufModule bufModule) {
    // Set up NvMedia buffers

    // Element 0: NvMedia image buffer attribute list
    for (uint32_t i = 0U; i < 1U; i++) {
        CHECK_NVSCIERR(NvSciBufAttrListCreate(bufModule, &bufAttrLists[i]));
        LOG_DEBUG("Create NvSciBuf attribute list of element " << i << ".");

        NvSciBufAttrList attrList = bufAttrLists[i];
        NvSciBufAttrValAccessPerm access_perm = NvSciBufAccessPerm_ReadWrite;
        NvSciBufAttrKeyValuePair attrKvp = { NvSciBufGeneralAttrKey_RequiredPerm, &access_perm, sizeof(access_perm) };
        NvMediaSurfaceType nvmsurfType;
        NvMediaSurfAllocAttr surfAllocAttrs[NVM_SURF_ALLOC_ATTR_MAX];
        uint32_t numSurfAllocAttrs = 0;

        // Create YUV 422 PL images with a CPU mapping pointer
        nvmsurfType = NvMediaSurfaceFormatGetType(surfFormatAttrs, NVM_SURF_FMT_ATTR_MAX);

        surfAllocAttrs[0].type = NVM_SURF_ATTR_WIDTH;
        surfAllocAttrs[0].value = WIDTH;
        surfAllocAttrs[1].type = NVM_SURF_ATTR_HEIGHT;
        surfAllocAttrs[1].value = HEIGHT;
        surfAllocAttrs[2].type = NVM_SURF_ATTR_CPU_ACCESS;
        surfAllocAttrs[2].value = NVM_SURF_ATTR_CPU_ACCESS_CACHED;
        numSurfAllocAttrs = 3;

        CHECK_NVSCIERR(NvSciBufAttrListSetAttrs(attrList, &attrKvp, 1));
        NvMediaImageFillNvSciBufAttrs(nvmdevice, nvmsurfType, surfAllocAttrs, numSurfAllocAttrs, 0, attrList);

        LOG_DEBUG("Set attribute value of element " << i << ".");
    }

    // Element 1: raw buffer for metadata
    int idx2 = 1;
    CHECK_NVSCIERR(NvSciBufAttrListCreate(bufModule, &bufAttrLists[idx2]));
    LOG_DEBUG("Create NvSciBuf attribute list of element " << idx2 << ".");

    NvSciBufAttrList rawAttrList = bufAttrLists[idx2];

    NvSciRmGpuId gpuId;
    CUuuid uuid;
    CHECK_CUDAERR(cuDeviceGetUuid(&uuid, m_cudaDeviceId));
    memcpy(&gpuId.bytes, &uuid.bytes, sizeof(uuid.bytes));

    NvSciBufAttrKeyValuePair genBufAttrs[] = {
        { NvSciBufGeneralAttrKey_GpuId, &gpuId, sizeof(gpuId) }
    };
    CHECK_NVSCIERR(NvSciBufAttrListSetAttrs(rawAttrList, genBufAttrs,
        sizeof(genBufAttrs) / sizeof(NvSciBufAttrKeyValuePair)));

    NvSciBufType bufType = NvSciBufType_RawBuffer;
    NvSciBufAttrValAccessPerm perm = NvSciBufAccessPerm_ReadWrite;
    bool cpuaccess_flag = true;
    uint64_t rawsize = sizeof(Metadata_t);
    uint64_t align = 1;

    NvSciBufAttrKeyValuePair rawBufAttrs[] = {
        { NvSciBufGeneralAttrKey_Types, &bufType, sizeof(bufType) },
        { NvSciBufGeneralAttrKey_RequiredPerm, &perm, sizeof(perm) },
        { NvSciBufGeneralAttrKey_NeedCpuAccess, &cpuaccess_flag, sizeof(cpuaccess_flag) },
        { NvSciBufRawBufferAttrKey_Size, &rawsize, sizeof(rawsize) },
        { NvSciBufRawBufferAttrKey_Align, &align, sizeof(align) },
    };
    CHECK_NVSCIERR(NvSciBufAttrListSetAttrs(rawAttrList, rawBufAttrs,
        sizeof(rawBufAttrs) / sizeof(NvSciBufAttrKeyValuePair)));
    LOG_DEBUG("Set attribute value of element " << idx2 << ".");
}

Hi @tejashs,

We just checked internally. Your support channel should be NVONLINE.
If you have any problems accessing it, please contact your NVIDIA rep. Thanks.