| Topic | Replies | Views | Activity |
|---|---:|---:|---|
| Cannot upgrade AGX Xavier: Stuck on JetPack 4.6 - Options for upgrading libraries without OEM BSP support? | 3 | 11 | March 31, 2026 |
| Issues with Using TensorRT on Jetson Orin Nano | 1 | 44 | March 31, 2026 |
| [Bug?] Builder Internal Error Code 2 on RTX 6000 Ada (sm89) but succeeds on Blackwell (sm120) with IPluginV2DynamicExt + OBEY_PRECISION_CONSTRAINTS | 1 | 26 | March 31, 2026 |
| TensorRT 10.7.0 on Orin Nano: Target GPU SM 87 is not supported by this TensorRT release | 1 | 38 | March 31, 2026 |
| How to install TensorRT 8.6.2 in JetPack 6.2 | 5 | 49 | March 25, 2026 |
| Why TensorRT can use concatenation in-place | 0 | 13 | March 24, 2026 |
| Jetson Orin Nano run TensorRT's sample failed | 5 | 33 | March 24, 2026 |
| Jetson Thor AGX - Poor INT8 performance | 5 | 163 | March 23, 2026 |
| [Issue] Qwen3-Next-80B NVFP4 and FP8 Cannot Be Served via trtllm-serve on DGX Spark GB10 (TRT-LLM 1.3.0rc7) | 0 | 59 | March 15, 2026 |
| Public repositories for TensorRT 11.0 | 6 | 91 | March 26, 2026 |
| Same ONNX model produces incorrect output on Thor; works correctly on Orin | 1 | 34 | March 10, 2026 |
| Use custom RT-DETR model for FoundationPose | 6 | 769 | March 9, 2026 |
| TRT-LLM for Inference with NVFP4 safetensors slower than LM Studio GGUF on the Spark | 9 | 1104 | March 6, 2026 |
| Smart record once object is detected | 6 | 66 | March 2, 2026 |
| Inference Discrepancy Between TensorRT 10.13.2 (Thor) and 8.6.2 (Orin) | 14 | 240 | March 2, 2026 |
| Jetson Thor convert ONNX to engine fail | 11 | 223 | March 2, 2026 |
| Execution context creation fails with multiple optimization profiles | 3 | 53 | March 14, 2026 |
| Jetson Nano 2 GB TensorRT Python 3.8 Bindings Continued | 2 | 26 | February 28, 2026 |
| Performance discrepancy: TensorRT achieves ~10 TFLOPS vs. 17 TFLOPS spec on Orin Nano (Super mode) | 7 | 118 | February 25, 2026 |
| Model outputting NaNs | 2 | 111 | February 18, 2026 |
| [isaac_ros_foundationpose] cannot convert the model (.onnx) to a TensorRT engine plan | 4 | 52 | February 23, 2026 |
| `trtexec` failed to save engine to file on Jetson Orin board | 12 | 103 | February 13, 2026 |
| TensorRT Model Optimizer INT8 quantization causes 2.7x performance regression on Jetson Orin Nano 4GB (ViT-S + DPT architecture) | 9 | 230 | February 13, 2026 |
| Converting int4 model to TRT engine for inferencing | 9 | 142 | February 9, 2026 |
| Status of Sparse4D TensorRT Deployment in TAO 6.25 | 5 | 42 | February 8, 2026 |
| Production Inference Path for Fine-Tuned Canary-v2 (TensorRT or RIVA Support) | 0 | 51 | February 6, 2026 |
| JetPack 7.x support for Jetson Orin Nano (TensorRT ≥ 10.5) | 2 | 166 | February 5, 2026 |
| [TensorRT-LLM 1.3.0rc1] TranslateGemma-27B-IT model fails with "Per-layer-type RoPE configuration is not supported yet" | 2 | 104 | February 4, 2026 |
| Optimize .NET Real-Time Video Pipeline with Multiple TensorRT Models - Low GPU Utilization & Throughput Bottleneck | 0 | 38 | February 2, 2026 |
| Running GestureNet model on Holoscan | 9 | 89 | February 1, 2026 |