Please provide the following info (tick the boxes after creating this topic): Software Version
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other
Target Operating System
Linux
QNX
other
Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure its number)
other
SDK Manager Version
1.9.3.10904
other
Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other
I’m a little confused reading through the documentation: which ports support gPTP? I have a gPTP-enabled external grandmaster connected on eqos_0, but I’m not sure it’s working correctly. I’ve also tried the default ptp4l configuration on MATEnet P4 with our lidar devices on Quad HMTD J3 (P4 & P3), but I believe the traffic hopping between the two switches is causing a large delay.
Can the external grandmaster be connected to other ports? We have two lidars whose combined publish rate exceeds what the MATEnet ports can carry, and we found that when plugging into the other Ethernet switch the grandmaster time delay becomes very large, since the traffic hops between the two switches.
I’ve also noticed a [ptp1] process running on the system, and when viewing traffic in Wireshark on mgbe2_0 and mgbe3_0 I’m seeing Sync messages. Is the DRIVE unit running a PTP service in the background?
Are you trying to connect more lidars than there are available ports? How many lidars are you attempting to connect? Additionally, could you provide more details about the observed behavior and the specific challenges you are facing?
My issue is that I’m seeing a large gm_offset when using the default ptp4l configuration with a boundary clock on mgbe3_0 and mgbe2_0. My concern is PTP traffic hopping between the two switches, versus putting PTP on, say, eqos_0, or on the same switch as the Ouster lidars.
Can you also explain what I’m seeing with the [ptp1] service running in the background? I’m seeing Sync/Follow_Up messages from an NVIDIA source address to LLDP_Multicast.
Please follow Orin Time Sync | NVIDIA Docs to use the mentioned configuration files. For the ptp1 issue, please provide specific details about what you observed and how you observed it.
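For reference, a generic linuxptp gPTP (802.1AS) slave configuration looks roughly like the sketch below. These are standard linuxptp option names only — the actual contents of NVIDIA's shipped automotive-slave.cfg may differ, so treat the documentation's file as authoritative.

```ini
# Sketch of typical linuxptp gPTP slave settings (NOT the shipped
# automotive-slave.cfg; option names are standard linuxptp, values
# are illustrative).
[global]
gmCapable               0       # never win the BMCA / become grandmaster
network_transport       L2      # raw Ethernet frames, as gPTP requires
transportSpecific       0x1     # 802.1AS transport-specific field
delay_mechanism         P2P     # peer-to-peer delay measurement
logSyncInterval         -3      # 8 Sync messages per second
```

With `gmCapable 0`, the local port can never be elected grandmaster, which is the usual setting when an external GM should be the only time source.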
I think the concern is that with an external GM connected, the best-master selection might pick the background master as the best clock instead of our own. I’d rather keep the Ethernet traffic clear of any PTP messages that aren’t coming from our devices. As for the source MAC address, I can’t find it anywhere on the system.
nvidia@tegra-ubuntu:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether a2:1d:68:10:26:f8 brd ff:ff:ff:ff:ff:ff
3: eqos_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1466 qdisc mq state UP group default qlen 1000
link/ether 48:b0:2d:b5:f0:a7 brd ff:ff:ff:ff:ff:ff
inet 10.224.3.50/24 brd 10.224.3.255 scope global dynamic eqos_0
valid_lft 682516sec preferred_lft 682516sec
inet6 fe80::4ab0:2dff:feb5:f0a7/64 scope link
valid_lft forever preferred_lft forever
4: mgbe0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1466 qdisc mq state UP group default qlen 1000
link/ether 48:b0:2d:b5:f0:a9 brd ff:ff:ff:ff:ff:ff
inet6 fe80::4ab0:2dff:feb5:f0a9/64 scope link
valid_lft forever preferred_lft forever
5: mgbe1_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1466 qdisc mq state DOWN group default qlen 1000
link/ether 48:b0:2d:b5:f0:ab brd ff:ff:ff:ff:ff:ff
6: mgbe2_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1466 qdisc mq state UP group default qlen 1000
link/ether 48:b0:2d:b5:f0:ad brd ff:ff:ff:ff:ff:ff
inet6 fe80::4ab0:2dff:feb5:f0ad/64 scope link
valid_lft forever preferred_lft forever
7: mgbe3_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1466 qdisc mq state UP group default qlen 1000
link/ether 48:b0:2d:b5:f0:af brd ff:ff:ff:ff:ff:ff
inet6 fe80::4ab0:2dff:feb5:f0af/64 scope link
valid_lft forever preferred_lft forever
8: can0: <NOARP,UP,LOWER_UP,ECHO> mtu 16 qdisc pfifo_fast state UP group default qlen 10
link/can
9: enP7p1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 48:b0:2d:91:ca:e6 brd ff:ff:ff:ff:ff:ff
inet6 fe80::4ab0:2dff:fe91:cae6/64 scope link
valid_lft forever preferred_lft forever
10: can1: <NOARP,UP,LOWER_UP,ECHO> mtu 16 qdisc pfifo_fast state UP group default qlen 10
link/can
11: wlP1p1s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether b4:8c:9d:49:a0:f9 brd ff:ff:ff:ff:ff:ff
12: mgbe2_0.200@mgbe2_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1466 qdisc noqueue state UP group default qlen 1000
link/ether 48:b0:2d:b5:f0:ad brd ff:ff:ff:ff:ff:ff
inet 10.42.0.28/24 brd 10.42.0.255 scope global mgbe2_0.200
valid_lft forever preferred_lft forever
inet6 fe80::4ab0:2dff:feb5:f0ad/64 scope link
valid_lft forever preferred_lft forever
13: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:8a:98:53:d8 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
Welcome to minicom 2.7.1
OPTIONS: I18n
Compiled on Aug 13 2017, 15:25:34.
Port /dev/ttyACM1, 11:49:20
Press CTRL-A Z for help on special keys
ReporterId-0x8001 Error_Attribute-0x0 Timestamp-0x77fa7dd1
MCU_FOH: ErrReport: ErrorCode-0x18550080 ReporterId-0x8001 Error_Attribute-0x0 Timestamp-0x77faf3c9
status
Info: Executing cmd: status, argc: 0, args:
Alive : 02:33:45
CPU load Core 0: 5%
CPU load max Core 0: 100%
CPU load Core 1: 0%
CPU load max Core 1: 00%
CPU load Core 2: 0%
CPU load max Core 2: 00%
CPU load Core 3: 0%
CPU load max Core 3: 00%
CPU load Core 4: 0%
CPU load max Core 4: 00%
CPU load Core 5: 0%
CPU load max Core 5: 00%
IP-address (AURIX): 10.42.0.146
MAC-address (AURIX): 0x48B02D631AF6
RAM Usage: 79232 bytes
Command Executed
NvShell>
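The NvShell status output above reports the AURIX MAC (0x48B02D631AF6, i.e. 48:b0:2d:63:1a:f6). One way to trace the Sync frames seen in Wireshark is to compare their source MAC against the Tegra interfaces from `ip -o link` and against that AURIX MAC. A minimal offline sketch using the addresses captured earlier in this thread (the sender MAC below is just an example; substitute the one you observe):

```shell
# Compare an observed PTP Sync sender MAC against the Tegra interface MACs
# from the `ip a` output in this thread. On the target, generate the list
# live with: ip -o link | awk '{print $2, $(NF-2)}'
sender_mac="48:b0:2d:63:1a:f6"   # example: the AURIX MAC from the status output
tegra_macs="eqos_0 48:b0:2d:b5:f0:a7
mgbe0_0 48:b0:2d:b5:f0:a9
mgbe2_0 48:b0:2d:b5:f0:ad
mgbe3_0 48:b0:2d:b5:f0:af"
match=$(printf '%s\n' "$tegra_macs" | awk -v mac="$sender_mac" '$2 == mac {print $1}')
echo "${match:-not a Tegra interface (likely AURIX/switch side)}"
```

If the sender MAC matches none of the Tegra interfaces (as in this example), the frames are coming from the AURIX/switch side rather than from a ptp4l instance on Linux.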
Is there a reason PTP runs by default in the switch firmware? Is there any way to disable it so that our external GM is the only source of PTP messages on the network (understanding that there will still be PTP traffic from our external GM and from the slave ptp4l instance on the system)?
Yes. When we run with the automotive-slave.cfg configuration file, it does not seem to recognize our external grandmaster; it just locks to the master running on the Marvell firmware switch. It runs and shows output with low rms/delay values, but we see the same result even with our external grandmaster unplugged, so I’m not sure how valid it is.
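One way to confirm which clock ptp4l actually locked to is linuxptp's pmc tool: on the target, with ptp4l running, `pmc -u -b 0 'GET PARENT_DATA_SET'` reports the selected grandmasterIdentity, which typically embeds the GM's MAC with ff:fe inserted in the middle. A sketch parsing a sample response offline (the identity below is hypothetical, derived from the AURIX MAC in this thread, not a captured value):

```shell
# Extract the selected grandmaster identity from a pmc PARENT_DATA_SET
# response. sample_response is a hypothetical excerpt; on the target, pipe
# in the real output of: pmc -u -b 0 'GET PARENT_DATA_SET'
sample_response='	gm.ClockClass 248
	grandmasterIdentity 48b02d.fffe.631af6'
gm=$(printf '%s\n' "$sample_response" | awk '/grandmasterIdentity/ {print $2}')
echo "selected GM: $gm"
# If this identity maps back to the switch/AURIX MAC rather than to your
# external grandmaster, the BMCA is choosing the on-board master.
```

Running this with and without the external grandmaster plugged in would show directly whether the selected GM ever changes.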