Bug Report Summary | Product: NVIDIA NIM for Image OCR (NeMo Retriever OCR v1) | Version: 1.1.0 | Severity: High (Production Blocker) (0 replies, 3 views, November 18, 2025)
Segmentation Fault Loading YOLO v4 TensorRT Model with Triton (1 reply, 17 views, November 18, 2025)
NIM to Triton Server Pipeline (1 reply, 116 views, November 14, 2025)
Creating a container for seminar Fundamentals of Deep Learning (0 replies, 9 views, November 9, 2025)
Nvinfer yields constant OCR text with NHWC engine (fast_plate_ocr – cct_s_v1_global_model) while nvinferserver returns correct results (3 replies, 49 views, November 7, 2025)
TensorRT built-in NMS output lost when using Triton dynamic batching (0 replies, 18 views, November 3, 2025)
Gray image in Triton (3 replies, 30 views, October 31, 2025)
Running Llama-3.1-8B-FP4 gets Triton error: Value 'sm_121a' is not defined for option 'gpu-name' (2 replies, 191 views, October 24, 2025)
TensorRT rejects engine cache pre-built on same device type (5 replies, 68 views, October 21, 2025)
tritonclient.utils.InferenceServerException: Fail to connect to remote host ipv4:127.0.0.1:8001 in TRELLIS NIM (0 replies, 59 views, September 23, 2025)
Connection problem due to lack of CORS support in Triton Server, which blocks requests from frontend web applications (3 replies, 82 views, September 12, 2025)
Triton + TensorRT-LLM (Llama 3.1 8B) – Feasibility of Stateful Serving + KV Cache Reuse + Priority Caching (1 reply, 59 views, September 5, 2025)
How to access labelfile_path in custom classifier parser for nvinferserver? (2 replies, 73 views, August 19, 2025)
Feature Proposal: Enable Deterministic Algorithms in Triton Server PyTorch Backend (0 replies, 66 views, August 5, 2025)
Error reading checkpoint.tl (1 reply, 91 views, July 31, 2025)
Triton server GPU memory leak for gRPC CUDA shared memory request (3 replies, 172 views, August 8, 2025)
Nvcr.io/nvidia/l4t-triton:r35.2.1 access denied (3 replies, 79 views, August 13, 2025)
Nsight with AGX Orin and DeepStream + Triton (11 replies, 200 views, July 15, 2025)
Intermittent Artifacts in DeepStream RTSP Output with Dynamic Multi-Stream Video Analytics with Triton Inference Server with Python backend (88 replies, 677 views, July 8, 2025)
Deploying Triton Server with TensorRT-LLM on Jetson AGX Orin (JetPack 6.2) — Any Working Example? (10 replies, 643 views, June 17, 2025)
How to get model configuration from HTTP API without first loading the model in EXPLICIT mode? (1 reply, 58 views, April 30, 2025)
Windows systems performance issue (1 reply, 73 views, April 30, 2025)
How to load a specific version of a model using EXPLICIT mode? (0 replies, 45 views, April 29, 2025)
tritonclient.utils.InferenceServerException: No field is set (1 reply, 1697 views, April 17, 2025)
CUDA shared memory doesn't work (failed to open CUDA IPC handle: invalid device context) (9 replies, 462 views, April 14, 2025)
DeepStream + Triton infer server (4 replies, 136 views, March 25, 2025)
Invalid argument: model input NHWC/NCHW require 3 dims for visual_changenet_segmentation_tao (5 replies, 95 views, March 13, 2025)
NvInferServer implementation of LSTM model (9 replies, 150 views, March 10, 2025)
Issues with setting up Dynamic Batching for Triton server (1 reply, 319 views, March 6, 2025)
How to list the names of RMIRs on Riva (2 replies, 99 views, February 21, 2025)