"recorder-tui" fails, says "Backend recorder didn't start, no acknowledgement received"

Please provide the following info (check/uncheck the boxes after creating this topic):
Software Version
DRIVE OS Linux 5.2.6
DRIVE OS Linux 5.2.6 and DriveWorks 4.0
DRIVE OS Linux 5.2.0
[V] DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
other

Target Operating System
[V] Linux
QNX
other

Hardware Platform
[V] NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
other

SDK Manager Version
[V] 1.7.1.8928
other

Host Machine Version
[V] native Ubuntu 18.04
other

Hi. I wrote a custom lidar sensor plugin for the app “sample_lidar_replay” in the NVIDIA DriveWorks SDK.
When I run it like this, I see a nice point cloud from our lidar:

./sample_lidar_replay --protocol=lidar.custom --params=ip=127.0.0.1,port=8888,decoder-path=/home/user/Projects/nvidia_opsys_plugin/build/libnv_lidar_plugin_x86.so

Now I want to record our lidar data using “recorder-tui”.

I followed the documentation at https://docs.nvidia.com/drive/archive/driveworks-3.0/dwx_recording_devguide_basic_recording.html, section “Recording Data from a Lidar Sensor”.

I copied “/usr/local/driveworks/tools/capture/configs/rwd/hyperion7_1/release.json” to “/home/user/Projects/nvidia_opsys_plugin/release.json”.

Note: the “release.json” file location on my system was different from the one given in “dwx_recording_devguide_basic_recording.html”.

I added our lidar section to the “release.json”.
Attached is the altered file:
release.json (12.0 KB)

Note: the example in “dwx_recording_devguide_basic_recording.html” is missing the line:

"properties": null,

Then I ran:

cd /usr/local/driveworks/tools/capture
./recorder-tui /home/user/Projects/nvidia_opsys_plugin/release.json

But I got an error:

root@user-ThinkCentre-M93p:/usr/local/driveworks/tools/capture# ./recorder-tui /home/user/Projects/nvidia_opsys_plugin/release.json
running recorder-tui with arguments: Namespace(aes_key=None, aes_key_encrypted=None, backend=False, bbr=False, disable_encryption=True, enable_encryption=False, no_mount_check=False, non_interaction=False, remote_dw_path='', rig='/home/user/Projects/nvidia_opsys_plugin/release.json', rsa_key='/home/nvidia/.ssh/recorder-aiinfra.pem', rsa_key_md5=None, skip_init_input=False, tag='NONE')
[02-01-2022 17:25:55] SDK: No resources(.pak) mounted from '/usr/local/driveworks-3.5/data'. Please adjust path or some modules won't function properly.
	Copying rigs and setup code for local...
	Platform setup for local...
Backend recorder didn't start, no acknowledgement received
Errors detected during recording
First 100 lines of std error log:
First 100 lines of std out log:
When reporting bugs, please attach full log and files /tmp/recorder-tui-6727/*.log and /tmp/recorder-6727/*.log
WARNING: Forcibly exited. Some data might have been lost!
WARNING: Use q<Enter> to exit gracefully, next time.
  • The created log files at “/tmp/recorder-tui-6727/*.log” are empty (0 bytes).

  • I tried running as root; it didn’t help.

  • I tried running without root; it didn’t help.

  • I tried adding the DriveWorks data directory to PATH; it didn’t help.

    export PATH="/usr/local/driveworks-3.5/data:$PATH"
    echo $PATH

  • I tried running on the target and got the same error message as on the host (“Backend recorder didn’t start, no acknowledgement received”).

  • Even if I make no changes to the original “release.json”, I get the same error.

  • I tried adding the “--backend” flag like this:

    ./recorder-tui /home/user/Projects/nvidia_opsys_plugin/release.json --backend
    

but it crashed with a Python traceback (the IndexError in printMountsInfo suggests the tool found no mount information to print):

Rig: release.json CurrentMount: 
Traceback (most recent call last):
  File "./recorder-tui", line 744, in <module>
	printMountsInfo(mntInfos, sensorsInfo, rig, rigmnt, usermnt)
  File "./recorder-tui", line 295, in printMountsInfo
	storage = mnts if mnts else [infos[0][0]]
IndexError: list index out of range

Any ideas?

Dear @Sunny127,
Could you please use the recorder tool instead of recorder-tui for verification? Also, please check with a simple camera in release.json first to confirm that everything is fine with the setup.

@SivaRamaKrishnaNV I tried to record from the camera with “recorder-tui”. It gave the same error.

nvidia@tegra-ubuntu:/usr/local/driveworks/tools/capture$ ./recorder-tui /usr/local/driveworks/data/samples/sensors/camera/camera/rig.json
running recorder-tui with arguments: Namespace(aes_key=None, aes_key_encrypted=None, backend=False, bbr=False, disable_encryption=True, enable_encryption=False, no_mount_check=False, non_interaction=False, remote_dw_path='', rig='/usr/local/driveworks/data/samples/sensors/camera/camera/rig.json', rsa_key='/home/nvidia/.ssh/recorder-aiinfra.pem', rsa_key_md5=None, skip_init_input=False, tag='NONE')
[04-01-2022 15:52:02] TimeSource Eth: Lost PTP time synchronizaton. Synchronized time will not be available from this timesource.
[04-01-2022 15:52:03] SDK: No resources(.pak) mounted from '/usr/local/driveworks-3.5/data'. Please adjust path or some modules won't function properly.
[04-01-2022 15:52:03] TimeSource Eth: Lost PTP time synchronizaton. Synchronized time will not be available from this timesource.
	Copying rigs and setup code for local...
	Platform setup for local...
Backend recorder didn't start, no acknowledgement received
Errors detected during recording
First 100 lines of std error log:
First 100 lines of std out log:
When reporting bugs, please attach full log and files /tmp/recorder-tui-11430/*.log and /tmp/recorder-11430/*.log
WARNING: Forcibly exited. Some data might have been lost!
WARNING: Use q<Enter> to exit gracefully, next time.


Then I tried to record from the camera with “recorder”. Worked fine:

nvidia@tegra-ubuntu:/usr/local/driveworks/tools/capture$ ./recorder /usr/local/driveworks/data/samples/sensors/camera/camera/rig.json
[04-01-2022 15:52:54] Platform: Detected DDPX - Tegra A
[04-01-2022 15:52:54] TimeSource: monotonic epoch time offset is 1641134812151491
[04-01-2022 15:52:54] TimeSource: Could not detect valid PTP time source at nvpps. Fallback to eth0
[04-01-2022 15:52:54] TimeSource Eth: Lost PTP time synchronizaton. Synchronized time will not be available from this timesource.
[04-01-2022 15:52:54] TimeSource: Could not detect valid PTP time source at 'eth0'. Fallback to CLOCK_MONOTONIC.
[04-01-2022 15:52:55] Platform: number of GPU devices detected 1
[04-01-2022 15:52:55] Platform: currently selected GPU device integrated ID 0
[04-01-2022 15:52:55] Context::getDataPathFromSelfLocation DATA_ROOT found at: /usr/local/driveworks-3.5/data
[04-01-2022 15:52:55] SDK: No resources(.pak) mounted, some modules will not function properly
[04-01-2022 15:52:55] SDK: Create NvMediaDevice
[04-01-2022 15:52:55] SDK: Create NvMedia2D
[04-01-2022 15:52:55] egl::Display: found 1 EGL devices
[04-01-2022 15:52:55] egl::Display: use drm device: /dev/dri/card0
[04-01-2022 15:52:55] TimeSource: monotonic epoch time offset is 1641134812151491
[04-01-2022 15:52:55] TimeSource: Could not detect valid PTP time source at nvpps. Fallback to eth0
[04-01-2022 15:52:55] TimeSource Eth: Lost PTP time synchronizaton. Synchronized time will not be available from this timesource.
[04-01-2022 15:52:55] TimeSource: Could not detect valid PTP time source at 'eth0'. Fallback to CLOCK_MONOTONIC.
[04-01-2022 15:52:55] Initialize DriveWorks SDK v3.5.75
[04-01-2022 15:52:55] Release build with GNU 7.3.1 from heads/buildbrain-branch-0-gc61a9a35bd0 against Drive PDK v5.2.0.0
[04-01-2022 15:52:55] Platform: currently selected GPU device integrated ID 0
[04-01-2022 15:52:55] Rig::fromFile: Loading rig file: /usr/local/driveworks/data/samples/sensors/camera/camera/rig.json
[04-01-2022 15:52:55] No valid data file found for camera:sample0 in parameter string: camera-name=SF3324,interface=csi-a,link=0,output-format=processed (using configuration folder /usr/local/driveworks/data/samples/sensors/camera/camera/)
[04-01-2022 15:52:55] SensorFactory::createSensor() -> camera.gmsl, camera-name=SF3324,interface=csi-a,link=0,output-format=processed
[04-01-2022 15:52:55] CameraBase: pool size set to 8
[04-01-2022 15:52:55] SensorFactory::createSensor() -> camera.gmsl.master, 
[04-01-2022 15:52:55] CameraMaster::parseDevBlock Getting device info list.
[04-01-2022 15:52:55] devBlock: 1 Slave = 0 Interface = csi-a Camera_name = SF3324 Link = 0
[04-01-2022 15:52:55] Camera Match Name: SF3324 Description: Sekonix SF3324 module - 120-deg FOV, DVP AR0231-RCCB, MAX96705 linkIndex: 4294967295 serInfo.Name: MAX96705
[04-01-2022 15:52:55] Client, Setting up information for camera ID 0
[04-01-2022 15:52:55] Client, successfully found info for camera ID 0 bound to id 0
[04-01-2022 15:52:55] CameraClient: no NITO found at /opt/nvidia/nvmedia/nit/SF3324.nito
[04-01-2022 15:52:55] CameraClient: using NITO found at /opt/nvidia/nvmedia/nit/sf3324.nito
[04-01-2022 15:52:55] CameraClient: camera params: index_table=off,file=,camera-name=SF3324,interface=csi-a,link=0,output-format=processed
[04-01-2022 15:52:55] CameraClient: format not specified. Using h264.
[04-01-2022 15:52:55] CameraClient: serializer bitrate not specified. Using 8000000.
[04-01-2022 15:52:55] EncoderNvMedia: Setting encode on instance 0
Rig: rig.json NewSink: /dev/null
[04-01-2022 15:52:55] CameraGSMLMaster: starting...
[04-01-2022 15:52:55] SIPLMaster::SIPLMaster: Setting up master camera
[04-01-2022 15:52:55] Platform: 
[04-01-2022 15:52:55] Platform Config: 
[04-01-2022 15:52:55] Description: 
[04-01-2022 15:52:55] Number of device blocks: 1
[04-01-2022 15:52:55] Device Block : 0
[04-01-2022 15:52:55] 	csiPort: 0
[04-01-2022 15:52:55] 	i2cDevice: 0
[04-01-2022 15:52:55] 	Deserializer Name: MAX96712
[04-01-2022 15:52:55] 	Deserializer Description: Maxim 96712 Aggregator
[04-01-2022 15:52:55] 	Deserializer i2cAddress: 41
[04-01-2022 15:52:55] 	Simulator Mode: 0
[04-01-2022 15:52:55] 	Slave Mode: 0
[04-01-2022 15:52:55] 	Phy Mode: 0
[04-01-2022 15:52:55] 	Number of camera modules: 1
[04-01-2022 15:52:55] 	CameraModule index: 0
[04-01-2022 15:52:55] 		Name :SF3324
[04-01-2022 15:52:55] 		Description: Sekonix SF3324 module - 120-deg FOV, DVP AR0231-RCCB, MAX96705
[04-01-2022 15:52:55] 		Serializer name: MAX96705
[04-01-2022 15:52:55] 		Serializer description: Maxim 96705 Serializer
[04-01-2022 15:52:55] 		Serializer i2cAdress: 64
[04-01-2022 15:52:55] 			Sensor ID: 0
[04-01-2022 15:52:55] 			Sensor name: AR0231
[04-01-2022 15:52:55] 			Sensor description: OnSemi AR0231 Sensor
[04-01-2022 15:52:55] 			Sensor i2cAddress: 16
[04-01-2022 15:52:55] 			Sensor isTPGEnabled: 0
[04-01-2022 15:52:55] 			Sensor isTriggerMode: 1
[04-01-2022 15:52:55] 				 cfa: 39
[04-01-2022 15:52:55] 				 embeddedTopLines: 24
[04-01-2022 15:52:55] 				 embeddedBottomLines: 4
[04-01-2022 15:52:55] 				 inputFormat: 8
[04-01-2022 15:52:55] 				 height: 1208
[04-01-2022 15:52:55] 				 width: 1920
[04-01-2022 15:52:55] 				 fps: 30.0000000
[04-01-2022 15:52:55] 				 Embedded Data: 0
[04-01-2022 15:52:55] CameraMaster::setOutputDescription Setting output consumer descriptors for sensor:  OutputType: 1
[04-01-2022 15:52:55] Client, setting pipeline config for camera ID 0
[04-01-2022 15:52:55] CameraMaster: master initiation
MAX96712: Revision 2 detected
MAX96712: Enable periodic AEQ on Link 0
MAX96705: Pre-emphasis set to 0xaa
MAX96705: Revision 1 detected!
Sensor AR0231 RCCB Rev7 detected!
Module_id 22 Severity 6 : NvMediaICPCreateEx 76
Module_id 22 Severity 6 : T19x VI version  0x000019
[04-01-2022 15:52:56] CameraClient: allocating RAW surfaces, nvmedia surface 210
[04-01-2022 15:52:56] Registering imagegroups for pipeline 0 NO COOKIES
[04-01-2022 15:52:56] CameraClient: allocating ISP surfaces 36
[04-01-2022 15:52:56] CameraMaster: bootstrap complete
[04-01-2022 15:52:56] SensorManager::start() started
[04-01-2022 15:52:56] SensorManager::setRTPriority() couldn't set sensor thread prio
Press s<Enter> to start, just <Enter> to see progress, q<Enter> to quit.
[04-01-2022 15:52:56] CameraClient: Acquisition started

Rig: rig.json Sensor: camera:sample0 Bytes: 8772709
Rig: rig.json EndOfSensorsInfo

Rig: rig.json Sensor: camera:sample0 Bytes: 10869964
Rig: rig.json EndOfSensorsInfo

Rig: rig.json Sensor: camera:sample0 Bytes: 12373192
Rig: rig.json EndOfSensorsInfo
q
[04-01-2022 15:53:09] Sensor statistics for: camera.gmsl, camera-name=SF3324,interface=csi-a,link=0,output-format=processed
[04-01-2022 15:53:09] Events: 398
Errors: 0
Drops: 0
minDelta: 33287.00000
maxDelta: 33389.00000
meanDelta: 33333.8086
Standard deviation: 18.4441
[04-01-2022 15:53:09] SensorManager::stop() stopped
[04-01-2022 15:53:09] Deinit master camera
[04-01-2022 15:53:09] CameraClient: Acquisition started
[04-01-2022 15:53:12] CameraClient: Stopping client
[04-01-2022 15:53:12] Releasing Driveworks SDK Context
[04-01-2022 15:53:12] SDK: Release NvMediaDevice
[04-01-2022 15:53:13] SDK: Release NvMedia2D
[04-01-2022 15:53:13] Releasing camera master
nvidia@tegra-ubuntu:/usr/local/driveworks/tools/capture$ 


I then switched from “recorder-tui” to “recorder”, and changed the “release.json” file contents to this one:

release.json (6.9 KB)

A relevant partial snippet from the file:

"parameter": "ip=127.0.0.1,port=8888,decoder-path=/home/user/Projects/nvidia_opsys_plugin/build/libnv_lidar_plugin_x86.so",
                "properties": null,
                "protocol": "lidar.custom", 

Then I ran “recorder” and got the following error: “DW_SAL_SENSOR_ERROR: Lidar: cannot create serializer, unknown lidar type”.

user@user-ThinkCentre-M93p:/usr/local/driveworks/tools/capture$ ./recorder /home/user/Projects/nvidia_opsys_plugin/release.json
[03-01-2022 15:17:41] Platform: Detected Generic x86 Platform
[03-01-2022 15:17:41] TimeSource: monotonic epoch time offset is 1641213330452718
[03-01-2022 15:17:41] Platform: number of GPU devices detected 1
[03-01-2022 15:17:41] Platform: currently selected GPU device discrete ID 0
[03-01-2022 15:17:41] Context::getDataPathFromSelfLocation DATA_ROOT found at: /usr/local/driveworks-3.5/data
[03-01-2022 15:17:41] SDK: No resources(.pak) mounted, some modules will not function properly
[03-01-2022 15:17:41] TimeSource: monotonic epoch time offset is 1641213330452718
[03-01-2022 15:17:41] Initialize DriveWorks SDK v3.5.75
[03-01-2022 15:17:41] Release build with GNU 7.4.0 from heads/buildbrain-branch-0-gc61a9a35bd0
[03-01-2022 15:17:41] Rig::fromFile: Loading rig file: /home/user/Projects/nvidia_opsys_plugin/release.json
[03-01-2022 15:17:41] No valid data file found for lidar:side in parameter string: ip=127.0.0.1,port=8888,decoder-path=/home/user/Projects/nvidia_opsys_plugin/build/libnv_lidar_plugin_x86.so (using configuration folder /home/user/Projects/nvidia_opsys_plugin/)
[03-01-2022 15:17:41] SensorFactory::createSensor() -> lidar.custom, ip=127.0.0.1,port=8888,decoder-path=/home/user/Projects/nvidia_opsys_plugin/build/libnv_lidar_plugin_x86.so
NVLidar::createSensor - start.
NVLidar::createSensor - end.
NVLidar::getConstants - start.
NVLidar::getConstants - Initializing constants.
NVLidar::getConstants - end.
terminate called after throwing an instance of 'dw::core::Exception'
  what():  DW_SAL_SENSOR_ERROR: Lidar: cannot create serializer, unknown lidar type
Aborted (core dumped)
user@user-ThinkCentre-M93p:/usr/local/driveworks/tools/capture$ 

After running it again, I got a different error: “SensorManager::addSensor DW_INVALID_ARGUMENT: safeMemcpy: destination array is too small”

user@user-ThinkCentre-M93p:/usr/local/driveworks/tools/capture$ ./recorder /home/user/Projects/nvidia_opsys_plugin/release.json
[03-01-2022 15:39:24] Platform: Detected Generic x86 Platform
[03-01-2022 15:39:24] TimeSource: monotonic epoch time offset is 1641213330452718
[03-01-2022 15:39:24] Platform: number of GPU devices detected 1
[03-01-2022 15:39:24] Platform: currently selected GPU device discrete ID 0
[03-01-2022 15:39:24] Context::getDataPathFromSelfLocation DATA_ROOT found at: /usr/local/driveworks-3.5/data
[03-01-2022 15:39:24] SDK: No resources(.pak) mounted, some modules will not function properly
[03-01-2022 15:39:24] TimeSource: monotonic epoch time offset is 1641213330452718
[03-01-2022 15:39:24] Initialize DriveWorks SDK v3.5.75
[03-01-2022 15:39:24] Release build with GNU 7.4.0 from heads/buildbrain-branch-0-gc61a9a35bd0
[03-01-2022 15:39:24] Rig::fromFile: Loading rig file: /home/user/Projects/nvidia_opsys_plugin/release.json
[03-01-2022 15:39:24] No valid data file found for lidar:side in parameter string: ip=127.0.0.1,port=8888,decoder-path=/home/user/Projects/nvidia_opsys_plugin/build/libnv_lidar_plugin_x86.so (using configuration folder /home/user/Projects/nvidia_opsys_plugin/)
[03-01-2022 15:39:24] SensorFactory::createSensor() -> lidar.custom, ip=127.0.0.1,port=8888,decoder-path=/home/user/Projects/nvidia_opsys_plugin/build/libnv_lidar_plugin_x86.so
NVLidar::createSensor - start.
NVLidar::createSensor - end.
NVLidar::getConstants - start.
NVLidar::getConstants - Initializing constants.
NVLidar::getConstants - end.
[03-01-2022 15:39:24] SensorManager::addSensor DW_INVALID_ARGUMENT: safeMemcpy: destination array is too small
[03-01-2022 15:39:24] SensorManager::addSensorsFromRig() failed to add sensor from rig: lidar.custom ip=127.0.0.1,port=8888,decoder-path=/home/user/Projects/nvidia_opsys_plugin/build/libnv_lidar_plugin_x86.so (Error: DW_INVALID_ARGUMENT)
terminate called after throwing an instance of 'dw::core::Exception'
  what():  DW_INVALID_ARGUMENT: SensorManager::SensorManager() Failed to add sensors from rig
Aborted (core dumped)


It seems that “recorder” doesn’t like the function I wrote, “NVLidar::getConstants”. At least the recorder prints the debug output from my plugin, so the plugin itself is being loaded. Here is the function’s code:

dwStatus NVLidar::getConstants(_dwSensorLidarDecoder_constants* constants)
{
    #ifdef VERY_VERBOSE
        printf("NVLidar::getConstants - start.\n");
    #endif

    // Initialize the constants only once; they are not supposed to change during runtime.
    if (!m_Init)
    {
        #ifdef VERY_VERBOSE
            printf("NVLidar::getConstants - Initializing constants.\n");
        #endif

        // Set lidar constants. The full scan is assembled in the plugin.
        m_constants.maxPayloadSize = MAX_PACKET_SIZE * PACKETS_PER_TRANSMITION_TO_SAL;

        /*
        typedef struct
        {
            char deviceString[256];    //< ASCII string identifying the device.

            float32_t spinFrequency;   //< Current spin frequency.

            uint32_t packetsPerSecond; //< Number of packets per second the sensor produces.
            uint32_t packetsPerSpin;   //< Number of packets per sensor full spin.

            uint32_t pointsPerSecond;  //< Number of points per second the sensor provides.
            uint32_t pointsPerPacket;  //< Maximum number of points in a packet.
            uint32_t pointsPerSpin;    //< Maximum number of points on a full sensor spin.
            uint32_t pointStride;      //< Number of `float32` elements per point.
        } dwLidarProperties;
        */

        // Set the respective scan properties.
        dwLidarProperties* properties = &m_constants.properties;
        strcpy(properties->deviceString, "opsys_lidar");

        properties->spinFrequency    = 1;
        properties->packetsPerSecond = PACKETS_PER_IMAGE_OF_ALL_TRXS * 24;
        properties->packetsPerSpin   = PACKETS_PER_TRANSMITION_TO_SAL;
        properties->pointsPerSecond  = properties->packetsPerSecond * NUM_POINTS_IN_PACKET_OF_TYPE_XYZ;
        properties->pointsPerPacket  = NUM_POINTS_IN_PACKET_OF_TYPE_XYZ;
        properties->pointsPerSpin    = PACKETS_PER_TRANSMITION_TO_SAL * NUM_POINTS_IN_PACKET_OF_TYPE_XYZ;
        properties->pointStride      = 1;

        m_Init = true;
    }

    // Always hand back the cached constants.
    *constants = m_constants;

    #ifdef VERY_VERBOSE
        printf("NVLidar::getConstants - end.\n");
    #endif

    return DW_SUCCESS;
}
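
To see exactly what the SAL will receive, the reported constants can also be dumped standalone, before involving the recorder. A minimal sketch, assuming NVLidar is default-constructible and that the (hypothetical) header “NVLidar.hpp” pulls in the DriveWorks types:

// Hypothetical standalone check (not part of the plugin API contract):
// instantiate the plugin class and print what getConstants() reports,
// so the values the SAL will see can be inspected without the recorder.
#include <cstdio>

#include "NVLidar.hpp" // assumed name of the plugin's header

int main()
{
    NVLidar lidar; // assumes NVLidar is default-constructible
    _dwSensorLidarDecoder_constants constants{};

    if (lidar.getConstants(&constants) != DW_SUCCESS)
    {
        std::fprintf(stderr, "getConstants failed\n");
        return 1;
    }

    const dwLidarProperties& p = constants.properties;
    std::printf("deviceString    : %s\n", p.deviceString);
    std::printf("maxPayloadSize  : %llu\n",
                static_cast<unsigned long long>(constants.maxPayloadSize));
    std::printf("packetsPerSpin  : %u\n", p.packetsPerSpin);
    std::printf("pointsPerPacket : %u\n", p.pointsPerPacket);
    std::printf("pointsPerSpin   : %u\n", p.pointsPerSpin);
    std::printf("pointStride     : %u\n", p.pointStride);
    return 0;
}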

I took the radar course:
https://courses.nvidia.com/courses/course-v1:DLI+S-AV-06+V1/course/

and a webinar about lidars:
https://developer.nvidia.com/video/integrating-custom-sensors-using-nvidia-driveworks

But neither of them explained how to write the “NVLidar::getConstants” function for a lidar.

Any ideas?

Dear @Sunny127,
So, as I understand it, you can run sample_lidar_replay with your custom lidar, but recording with the recorder tool fails, and you are hitting issues in your plugin code. Could you share the full log of sample_lidar_replay with live display, and also the code snippet where the plugin hits the issue?

@SivaRamaKrishnaNV I didn’t try to run “sample_radar_replay”; I don’t have a custom radar plugin. What I do have is a custom lidar plugin that I wrote.
I shared a code snippet of the problematic function in my previous message:

dwStatus NVLidar::getConstants(_dwSensorLidarDecoder_constants* constants)

Any ideas what to do?

Hi, @Sunny127
Please check whether Implement DriveWorks Lidar Plugin for custom lidar - #12 by leepeter909 and the rest of the information in that topic help with the issue.

Dear @Sunny127,
Adding to Vick’s suggestion: is your getConstants function called by _dwLidarDecoder_getConstants to fetch the decoder constants? I don’t see any issue with it, as it just fills in some constants. How did you determine that getConstants is the problem? Is it possible to put debug statements inside the plugin code to narrow down the root cause? The debug logs from sample_lidar_replay and recorder may give some insight into the issue.

@VickNV Thanks, the secret string “CUSTOM_EX” in function “NVLidar::getConstants” indeed fixes the issue:

strcpy( properties->deviceString,"CUSTOM_EX");
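
In context, this is the only change to the properties block posted earlier (a sketch; everything else stays as above):

// Inside NVLidar::getConstants(), the single change:
dwLidarProperties* properties = &m_constants.properties;
strcpy(properties->deviceString, "CUSTOM_EX"); // was "opsys_lidar"; the SAL
                                               // serializer appears to select
                                               // the lidar type from this string.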

Good to hear that you solved the problem.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.