Error running DeepStream with Triton Server inference of a TensorFlow frozen graph model

Please provide complete information as applicable to your setup.

• Hardware Platform: Tesla T4 on Ubuntu Server 18.04
• DeepStream Version: 5.0
• NVIDIA GPU Driver Version (valid for GPU only): 450.51.06

I’m currently working with the Triton server, using a TensorFlow model exported as a frozen graph. After many custom configs, I have my model working with the right labels and all, but when I run the DeepStream app I get this error after a minute or less.
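For reference, here is a minimal sketch of the kind of Triton model configuration assumed for a TensorFlow frozen graph in this setup. The model name and tensor names below are hypothetical placeholders, not taken from the actual model; the exact values depend on the tensors inside the frozen graph.

```
# config.pbtxt for a TensorFlow frozen-graph model served by Triton
# (model name and tensor names are placeholders, not the actual model)
name: "frozen_graph_detector"
platform: "tensorflow_graphdef"
max_batch_size: 1
input [
  {
    name: "image_tensor"        # input tensor name inside the frozen graph
    data_type: TYPE_UINT8
    dims: [ 1080, 1920, 3 ]     # H, W, C; batch dimension is excluded when max_batch_size > 0
  }
]
output [
  {
    name: "detection_boxes"     # output tensor names inside the frozen graph
    data_type: TYPE_FP32
    dims: [ 100, 4 ]
  },
  {
    name: "detection_scores"
    data_type: TYPE_FP32
    dims: [ 100 ]
  }
]
```

With the Triton backend in DeepStream 5.0, this config.pbtxt would typically sit next to a version directory containing the frozen graph, e.g. `<model_repo>/frozen_graph_detector/1/model.graphdef`.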

2020-09-10 17:15:23.850915: E tensorflow/stream_executor/cuda/cuda_event.cc:29] Error polling for event status: failed to query event: CUDA_ERROR_LAUNCH_FAILED: unspecified launch failure
2020-09-10 17:15:23.850955: F tensorflow/core/common_runtime/gpu/gpu_event_mgr.cc:273] Unexpected Event status: 1

The inference itself is correct and it works on RTSP, but it only runs for a minute or less before I get the error above.

Any ideas, please?

Hi @tubarao0705,

> The inference itself is correct and it works on RTSP, but it only runs for a minute or less before I get the error above.

So, you can see the correct output during that first minute, right?