Hello fanzh,
Thank you so much for getting back to me, I really appreciate it!
I didn’t have the opportunity to get back to my Jetson until now, but I did comment out the “model-engine-file” line as you suggested, and indeed that error no longer appears.
However, now I get memory errors, which surprises me, since I thought the Orin had more memory than the Xavier.
Additionally, these only appeared the first time I ran it; on subsequent tries it got stuck.
This is the result I had on my first attempt:
/home/jetson/.local/lib/python3.8/site-packages/pyds.so
main.py:757: PyGIDeprecationWarning: Since version 3.11, calling threads_init is no longer needed. See: https://wiki.gnome.org/PyGObject/Threading
GObject.threads_init()
Creating Pipeline
2023-04-07 13:02:31.227693: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:02:31.290434: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:02:31.290671: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:02:31.292334: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:02:31.292514: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:02:31.292636: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:02:33.987465: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:02:33.988103: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:02:33.988197: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Could not identify NUMA node of platform GPU id 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2023-04-07 13:02:33.988375: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:02:33.988528: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 23253 MB memory: -> device: 0, name: Orin, pci bus id: 0000:00:00.0, compute capability: 8.7
DEBUG:tensorflow:Layer lstm will use cuDNN kernels when running on GPU.
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Now playing...
rtsp://192.168.2.119:554
Starting pipeline
Using winsys: x11
Process PWM-proc:
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
TypeError: set_pwm_values() takes 4 positional arguments but 6 were given
Opening in BLOCKING MODE
0:00:10.591125859 3323 0x5df70d90 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<secondary-pose-estimation> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 2]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: Tactic Device request: 547MB Available: 409MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 3 due to insufficient memory on requested size of 547 detected for tactic 0x0000000000000005.
WARNING: [TRT]: Tactic Device request: 547MB Available: 409MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 7 due to insufficient memory on requested size of 547 detected for tactic 0x000000000000003d.
WARNING: [TRT]: Tactic Device request: 547MB Available: 409MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 11 due to insufficient memory on requested size of 547 detected for tactic 0x0000000000000075.
WARNING: [TRT]: Tactic Device request: 547MB Available: 408MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 3 due to insufficient memory on requested size of 547 detected for tactic 0x0000000000000005.
WARNING: [TRT]: Tactic Device request: 547MB Available: 408MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 6 due to insufficient memory on requested size of 547 detected for tactic 0x000000000000003d.
WARNING: [TRT]: Tactic Device request: 586MB Available: 409MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 3 due to insufficient memory on requested size of 586 detected for tactic 0x0000000000000004.
WARNING: [TRT]: Tactic Device request: 1092MB Available: 409MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 4 due to insufficient memory on requested size of 1092 detected for tactic 0x0000000000000005.
WARNING: [TRT]: Tactic Device request: 586MB Available: 409MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 9 due to insufficient memory on requested size of 586 detected for tactic 0x000000000000003c.
WARNING: [TRT]: Tactic Device request: 1092MB Available: 409MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 10 due to insufficient memory on requested size of 1092 detected for tactic 0x000000000000003d.
WARNING: [TRT]: Tactic Device request: 586MB Available: 409MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 15 due to insufficient memory on requested size of 586 detected for tactic 0x0000000000000074.
WARNING: [TRT]: Tactic Device request: 1092MB Available: 409MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 16 due to insufficient memory on requested size of 1092 detected for tactic 0x0000000000000075.
WARNING: [TRT]: Tactic Device request: 586MB Available: 404MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 3 due to insufficient memory on requested size of 586 detected for tactic 0x0000000000000004.
WARNING: [TRT]: Tactic Device request: 1092MB Available: 404MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 4 due to insufficient memory on requested size of 1092 detected for tactic 0x0000000000000005.
WARNING: [TRT]: Tactic Device request: 586MB Available: 404MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 8 due to insufficient memory on requested size of 586 detected for tactic 0x000000000000003c.
WARNING: [TRT]: Tactic Device request: 1092MB Available: 404MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 9 due to insufficient memory on requested size of 1092 detected for tactic 0x000000000000003d.
WARNING: [TRT]: Tactic Device request: 586MB Available: 405MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 3 due to insufficient memory on requested size of 586 detected for tactic 0x0000000000000004.
WARNING: [TRT]: Tactic Device request: 1092MB Available: 405MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 4 due to insufficient memory on requested size of 1092 detected for tactic 0x0000000000000005.
WARNING: [TRT]: Tactic Device request: 586MB Available: 405MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 9 due to insufficient memory on requested size of 586 detected for tactic 0x000000000000003c.
WARNING: [TRT]: Tactic Device request: 1092MB Available: 405MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 10 due to insufficient memory on requested size of 1092 detected for tactic 0x000000000000003d.
WARNING: [TRT]: Tactic Device request: 586MB Available: 405MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 15 due to insufficient memory on requested size of 586 detected for tactic 0x0000000000000074.
WARNING: [TRT]: Tactic Device request: 1092MB Available: 406MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 16 due to insufficient memory on requested size of 1092 detected for tactic 0x0000000000000075.
WARNING: [TRT]: Tactic Device request: 586MB Available: 407MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 3 due to insufficient memory on requested size of 586 detected for tactic 0x0000000000000004.
WARNING: [TRT]: Tactic Device request: 1092MB Available: 407MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 4 due to insufficient memory on requested size of 1092 detected for tactic 0x0000000000000005.
WARNING: [TRT]: Tactic Device request: 586MB Available: 407MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 8 due to insufficient memory on requested size of 586 detected for tactic 0x000000000000003c.
WARNING: [TRT]: Tactic Device request: 1092MB Available: 407MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 9 due to insufficient memory on requested size of 1092 detected for tactic 0x000000000000003d.
WARNING: [TRT]: Tactic Device request: 547MB Available: 407MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 2 due to insufficient memory on requested size of 547 detected for tactic 0x0000000000000003.
WARNING: [TRT]: Tactic Device request: 547MB Available: 406MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 2 due to insufficient memory on requested size of 547 detected for tactic 0x0000000000000003.
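Looking at this first log more closely, I think I see what is happening with the memory: TensorFlow creates its GPU device with 23253 MB right at startup, and since the Jetson’s CPU and GPU share the same physical memory, TensorRT is left with only ~400 MB when it tries to build the engine, hence all the skipped tactics. I’m considering capping TensorFlow’s allocation before the pipeline starts. A minimal sketch of what I have in mind, assuming TF 2.x (the 4096 MB limit is just a first guess on my part):

```python
import tensorflow as tf

# Cap TensorFlow's GPU allocation so TensorRT keeps enough free memory
# for engine building. This must run before anything else touches the GPU.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=4096)],  # in MB
    )
    # Alternative: grow the allocation on demand instead of a fixed cap.
    # tf.config.experimental.set_memory_growth(gpus[0], True)
```

If that is the right approach, the engine build (which took over three minutes in the second log below) should also have more headroom.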
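Separately, the set_pwm_values() TypeError (it shows up in both logs) looks like a bug on my side rather than an Orin issue: the args tuple I hand to multiprocessing.Process has six elements while the target only accepts four, so the child process dies as soon as it starts. A minimal sketch of the mismatch, with a made-up signature and placeholder values since I don’t have the exact code in front of me:

```python
from multiprocessing import Process

# Hypothetical signature -- the real set_pwm_values evidently takes
# four positional arguments.
def set_pwm_values(pin, frequency, duty_a, duty_b):
    print(pin, frequency, duty_a, duty_b)

if __name__ == "__main__":
    # Broken: six items in args for a four-argument target raises the
    # TypeError from the log as soon as the child process starts.
    # Process(target=set_pwm_values, args=(18, 50, 0.0, 0.0, 0, 0)).start()

    # Fixed: pass exactly as many arguments as the target declares
    # (18, 50, 0.0, 0.0 are placeholder values).
    p = Process(name="PWM-proc", target=set_pwm_values,
                args=(18, 50, 0.0, 0.0))
    p.start()
    p.join()
```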
As for the subsequent attempts, this is the result; it gets stuck at that point and I have to kill the process:
/home/jetson/.local/lib/python3.8/site-packages/pyds.so
SIOCADDRT: File exists
main.py:757: PyGIDeprecationWarning: Since version 3.11, calling threads_init is no longer needed. See: https://wiki.gnome.org/PyGObject/Threading
GObject.threads_init()
Creating Pipeline
2023-04-07 13:16:23.662445: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:16:23.714250: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:16:23.714517: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:16:23.716006: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:16:23.716189: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:16:23.716348: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:16:25.716863: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:16:25.717240: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:16:25.717341: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Could not identify NUMA node of platform GPU id 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2023-04-07 13:16:25.717515: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-07 13:16:25.717688: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 20311 MB memory: -> device: 0, name: Orin, pci bus id: 0000:00:00.0, compute capability: 8.7
DEBUG:tensorflow:Layer lstm will use cuDNN kernels when running on GPU.
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Now playing...
rtsp://192.168.2.119:554
Starting pipeline
Using winsys: x11
Process PWM-proc:
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
TypeError: set_pwm_values() takes 4 positional arguments but 6 were given
Opening in BLOCKING MODE
0:00:08.361644652 4318 0x5bf92b90 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<secondary-pose-estimation> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 2]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: TensorRT encountered issues when converting weights between types and that could affect accuracy.
WARNING: [TRT]: If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
WARNING: [TRT]: Check verbose logs for the list of affected weights.
WARNING: [TRT]: - 35 weights are affected by this issue: Detected subnormal FP16 values.
WARNING: [TRT]: - 19 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
0:03:18.857104291 4318 0x5bf92b90 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<secondary-pose-estimation> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1955> [UID = 2]: serialize cuda engine to file: /home/jetson/git/blimp/jetson-agx/models/pose_estimation.onnx_b1_gpu0_fp16.engine successfully
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input.1 3x224x224
1 OUTPUT kFLOAT 262 18x56x56
2 OUTPUT kFLOAT 264 42x56x56
0:03:19.090411862 4318 0x5bf92b90 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary-pose-estimation> [UID 2]: Load new model:configs/sgie.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so
gstnvtracker: Failed to open low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so
dlopen error: /opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so: cannot open shared object file: No such file or directory
gstnvtracker: Failed to initilaize low level lib.
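As for the tracker failure at the end, I believe it is a DeepStream version difference: DeepStream 6.x no longer ships a standalone libnvds_nvdcf.so; the trackers were merged into libnvds_nvmultiobjecttracker.so, so my config still points at a library that does not exist on the Orin. A minimal sketch of the change as I would make it in the Python app (the element name and the exact NvDCF config path are assumptions on my part):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
tracker = Gst.ElementFactory.make("nvtracker", "tracker")

# DeepStream 6.x merged the trackers into one library; point ll-lib-file
# at it instead of the old libnvds_nvdcf.so.
tracker.set_property(
    "ll-lib-file",
    "/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so",
)
# One of the NvDCF configs that ship with DeepStream 6.2 (path assumed):
tracker.set_property(
    "ll-config-file",
    "/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/"
    "config_tracker_NvDCF_perf.yml",
)
```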
And out of curiosity, this is the log when I run it successfully on my Xavier:
2023-04-07 13:25:56.988898: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.10.2
Creating Pipeline
2023-04-07 13:26:06.445662: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2023-04-07 13:26:06.461388: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2023-04-07 13:26:06.461721: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1734] Found device 0 with properties:
pciBusID: 0000:00:00.0 name: Xavier computeCapability: 7.2
coreClock: 1.377GHz coreCount: 8 deviceMemorySize: 31.18GiB deviceMemoryBandwidth: 82.08GiB/s
2023-04-07 13:26:06.461883: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.10.2
2023-04-07 13:26:06.462120: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.10
2023-04-07 13:26:06.462282: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.10
2023-04-07 13:26:06.462407: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2023-04-07 13:26:06.462546: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2023-04-07 13:26:06.462696: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.10
2023-04-07 13:26:06.462820: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.10
2023-04-07 13:26:06.462967: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2023-04-07 13:26:06.463353: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2023-04-07 13:26:06.463707: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2023-04-07 13:26:06.464205: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1872] Adding visible gpu devices: 0
2023-04-07 13:26:06.470911: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2023-04-07 13:26:06.471172: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1734] Found device 0 with properties:
pciBusID: 0000:00:00.0 name: Xavier computeCapability: 7.2
coreClock: 1.377GHz coreCount: 8 deviceMemorySize: 31.18GiB deviceMemoryBandwidth: 82.08GiB/s
2023-04-07 13:26:06.471478: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2023-04-07 13:26:06.471881: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2023-04-07 13:26:06.472008: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1872] Adding visible gpu devices: 0
2023-04-07 13:26:16.261133: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2023-04-07 13:26:16.261279: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264] 0
2023-04-07 13:26:16.261327: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0: N
2023-04-07 13:26:16.261983: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2023-04-07 13:26:16.262496: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2023-04-07 13:26:16.262859: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2023-04-07 13:26:16.263118: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 25539 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
DEBUG:tensorflow:Layer lstm will use cuDNN kernels when running on GPU.
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Now playing...
rtsp://192.168.2.119:554
Starting pipeline
Using winsys: x11
Opening in BLOCKING MODE
Opening in BLOCKING MODE
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:29.446871448 10458 0xa35374a0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-pose-estimation> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 2]: deserialized trt engine from :/home/jetson/git/blimp/jetson-agx/models/pose_estimation.onnx_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input.1 3x224x224
1 OUTPUT kFLOAT 262 18x56x56
2 OUTPUT kFLOAT 264 42x56x56
0:00:29.448703491 10458 0xa35374a0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-pose-estimation> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 2]: Use deserialized engine model: /home/jetson/git/blimp/jetson-agx/models/pose_estimation.onnx_b1_gpu0_fp16.engine
0:00:29.496151292 10458 0xa35374a0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary-pose-estimation> [UID 2]: Load new model:configs/sgie.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvDCF][Warning] `minTrackingConfidenceDuringInactive` is deprecated
[NvDCF] Initialized
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:33.211706029 10458 0xa35374a0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/home/jetson/git/blimp/jetson-agx/models/yolov5m.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT data 3x640x640
1 OUTPUT kFLOAT prob 6001x1x1
0:00:33.215408964 10458 0xa35374a0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /home/jetson/git/blimp/jetson-agx/models/yolov5m.engine
0:00:33.280809750 10458 0xa35374a0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:configs/pgie.txt sucessfully
Decodebin child added: source
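One detail I notice even in this successful Xavier log: TensorRT warns about “Using an engine plan file across different models of devices”. Engine plans are tied to the GPU they were built on (the Xavier is compute capability 7.2, the Orin 8.7), so whenever I move between the boards I plan to delete the stale .engine file and let nvinfer rebuild it locally. A trivial sketch, with the path taken from the log above:

```python
from pathlib import Path

# Engine plans are built for a specific GPU; removing a stale one makes
# nvinfer regenerate it for the device it is actually running on.
engine = Path("/home/jetson/git/blimp/jetson-agx/models/"
              "pose_estimation.onnx_b1_gpu0_fp16.engine")
if engine.exists():
    engine.unlink()
```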
I have to admit I’m wondering what direction I should take from here.
Basically, I have a working DeepStream app that I developed on the Xavier, but since we were seeing some lag we thought we would try it on the Orin. With two years between the two platforms, though, it is proving harder than I expected to simply get the app running there.
The Xavier has L4T 32.5.1 and DeepStream 5.1:
NVIDIA Jetson AGX Xavier [16GB]
L4T 32.5.1 [ JetPack 4.5.1 ]
Ubuntu 18.04.6 LTS
Kernel Version: 4.9.201-tegra
CUDA 10.2.89
CUDA Architecture: 7.2
OpenCV version: 4.4.0
OpenCV Cuda: YES
CUDNN: 8.0.0.180
TensorRT: 7.1.3.0
Vision Works: 1.6.0.501
VPI: ii libnvvpi1 1.0.15 arm64 NVIDIA Vision Programming Interface library
Vulkan: 1.2.70
The Orin has L4T 35.2.1 (which corresponds to JetPack 5.1) and DeepStream 6.2:
NVIDIA Jetson AGX Orin
L4T 35.2.1 [ JetPack UNKNOWN ]
Ubuntu 20.04.5 LTS
Kernel Version: 5.10.104-tegra
CUDA 11.4.315
CUDA Architecture: 8.7
OpenCV version: 4.5.4
OpenCV Cuda: NO
CUDNN: 8.6.0.166
TensorRT: 8.5.2.2
Vision Works: NOT_INSTALLED
VPI: 2.2.4
Vulkan: 1.3.204