System Setup:
PC, 4090, Ubuntu 20.04
Built from docker container: nvcr.io/nvidia/deepstream:7.0-gc-triton-devel
Not sure if this is a misuse of Triton and Deepstream, but I’m trying to make a gRPC server using Triton to perform Dewarping on an image (This is targeted to be part of a dewarping setup tool for the Deepstream dewarping plugin).
1 - created a docker container from the latest Deepstream/Triton image from NGC
docker run --gpus all --name triton --shm-size=4g --ulimit memlock=-1 -p 8000:8000 -p 8001:8001 -p 8002:8002 -v /home/dssadmin/triton/models:/root/models --ulimit stack=67108864 -ti -w /root nvcr.io/nvidia/deepstream:7.0-gc-triton-devel
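For context, the models directory mounted into the container (-v /home/dssadmin/triton/models:/root/models) follows Triton's standard model-repository layout; the model name "dewarp" here is just illustrative:

```
/root/models/
└── dewarp/
    ├── 1/
    │   └── model.py
    └── config.pbtxt
```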
2 - Inside the container, I verified that my command-line GStreamer pipeline works:
gst-launch-1.0 filesrc location=/root/my_image.jpg ! nvjpegdec ! nvdewarper config-file=/root/my_config.txt ! nvvideoconvert ! jpegenc ! filesink location=my_output.jpg
My goal is to pass an input image (my_image.jpg) and an input dewarper config file (my_config.txt) to Triton via gRPC, process the above command using GStreamer's parse functionality, and create the dewarped output image (my_output.jpg). This would run in Triton's Python backend.
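For reference, this is roughly how I pack the two inputs on the client side before sending them over gRPC (the helper name build_payload and the file paths are just illustrative; the tensor names match the model.py below):

```python
import numpy as np

def build_payload(jpeg_path, config_path):
    # Raw JPEG bytes travel as a flat UINT8 tensor ("input_jpeg_data")...
    with open(jpeg_path, "rb") as f:
        input_jpeg_data = np.frombuffer(f.read(), dtype=np.uint8)
    # ...and the dewarper config file travels as a single BYTES element
    # ("config_file_str"), i.e. a 1-element object array of UTF-8 bytes.
    with open(config_path, "r") as f:
        config_file_str = np.array([f.read().encode("utf-8")], dtype=np.object_)
    return input_jpeg_data, config_file_str
```

These arrays then get wrapped in tritonclient.grpc InferInput objects named input_jpeg_data and config_file_str and sent with InferenceServerClient.infer().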
3 - I created the model.py file and started testing. I was successful in getting the input image and config file, but when I tried to run the gstreamer pipeline (exactly as described above), Triton appears to choke at the loop.run() command. The exact output is shown below:
I0617 16:32:07.822351 143 grpc_server.cc:2513] Started GRPCInferenceService at 0.0.0.0:8001
I0617 16:32:07.822536 143 http_server.cc:4497] Started HTTPService at 0.0.0.0:8000
I0617 16:32:07.863755 143 http_server.cc:270] Started Metrics Service at 0.0.0.0:8002
request received with id: 1
pre-run
E0617 16:32:21.233532 143 python_be.cc:2250] Stub process is unhealthy and it will be restarted.
(.:311): GStreamer-WARNING **: 16:32:21.422: External plugin loader failed. This most likely means that the plugin loader helper binary was not found or could not be run. You might need to set the GST_PLUGIN_SCANNER environment variable if your setup is unusual. This should normally not be required though.
(.:311): GStreamer-WARNING **: 16:32:21.424: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.1: cannot open shared object file: No such file or directory
As you can see above, the stub dies right after the "pre-run" print statement, i.e. at the loop.run() call.
I'm not sure if the plugin warnings are of any consequence. I tried setting the GST_PLUGIN_SCANNER environment variable, but it didn't change anything. In fact, I got the same warnings when I ran the gst-launch-1.0 command line shown above, and GStreamer still created the output JPEG file.
The "Stub process is unhealthy" error didn't help me very much, so I was wondering if I could get some help figuring this out. Maybe I'm just misusing Triton, DeepStream, or the Python backend.
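One workaround I'm considering: skip the in-process GLib main loop entirely and shell out to gst-launch-1.0 from execute(), since the command-line pipeline demonstrably works inside the container. A minimal sketch (helper names are mine, not from any API):

```python
import subprocess

def build_gst_cmd(image_path, config_path, output_path):
    # Same pipeline as the working gst-launch-1.0 command, as an argv list.
    return [
        "gst-launch-1.0",
        "filesrc", f"location={image_path}", "!",
        "nvjpegdec", "!",
        "nvdewarper", f"config-file={config_path}", "!",
        "nvvideoconvert", "!",
        "jpegenc", "!",
        "filesink", f"location={output_path}",
    ]

def run_dewarp(image_path, config_path, output_path, timeout_s=30):
    # Running the pipeline in a child process keeps GLib/GStreamer state
    # entirely out of the Triton Python-backend stub process.
    result = subprocess.run(
        build_gst_cmd(image_path, config_path, output_path),
        capture_output=True, text=True, timeout=timeout_s,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr)
```

Slower than a persistent pipeline, but it sidesteps whatever is killing the stub when loop.run() starts.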
Thanks for any suggestions.
–Bryan
Here is my model.py file used by Triton’s python backend:
import sys
import logging
import traceback

import gi
import numpy as np
import triton_python_backend_utils as pb_utils

gi.require_version("GLib", "2.0")
gi.require_version("GObject", "2.0")
gi.require_version("Gst", "1.0")
from gi.repository import GLib, Gst


class TritonPythonModel:
    def on_message(self, bus: Gst.Bus, message: Gst.Message, loop: GLib.MainLoop):
        """
        GStreamer message types and how to parse them:
        https://lazka.github.io/pgi-docs/Gst-1.0/flags.html#Gst.MessageType
        """
        mtype = message.type
        if mtype == Gst.MessageType.EOS:
            print("End of stream")
            loop.quit()
        elif mtype == Gst.MessageType.ERROR:
            err, debug = message.parse_error()
            print(err, debug)
            loop.quit()
        elif mtype == Gst.MessageType.WARNING:
            err, debug = message.parse_warning()
            print(err, debug)
        return True

    def initialize(self, args):
        # Initialize GStreamer, its variables and paths
        Gst.init(sys.argv)

    def execute(self, requests):
        responses = []
        try:
            for request in requests:
                print(f"request received with id: {request.request_id()}")

                ## Unpack request data
                # Get binary data for the JPEG file and write it to a temporary file: /root/my_image.jpg
                input_jpeg_data = pb_utils.get_input_tensor_by_name(request, "input_jpeg_data").as_numpy()
                with open("/root/my_image.jpg", "wb") as binary_file:
                    binary_file.write(input_jpeg_data)

                # Get string data for the dewarp configuration file and write it to a temporary file: /root/my_config.txt
                config_file_tensor = pb_utils.get_input_tensor_by_name(request, "config_file_str").as_numpy()[0]
                config_file_str = config_file_tensor.decode("utf-8")
                with open("/root/my_config.txt", "w") as config_file:
                    config_file.write(config_file_str)

                ## Make the GStreamer call
                pipeline_str = "filesrc location=/root/my_image.jpg ! nvjpegdec ! nvdewarper "
                pipeline_str += "config-file=/root/my_config.txt ! nvvideoconvert ! jpegenc ! "
                pipeline_str += "filesink location=/root/my_output.jpg"
                pipeline = Gst.parse_launch(pipeline_str)

                loop = GLib.MainLoop()
                bus = pipeline.get_bus()
                bus.add_signal_watch()
                bus.connect("message", self.on_message, loop)

                pipeline.set_state(Gst.State.PLAYING)
                print("pre-run")
                try:
                    loop.run()  ## <-- Failure occurs at this point
                except Exception:
                    print("loop Exception")
                    traceback.print_exc()
                    loop.quit()
                print("post-run")
                pipeline.set_state(Gst.State.NULL)

                ## Build the response from the dewarped image file: /root/my_output.jpg
                with open("/root/my_output.jpg", "rb") as f:
                    output_jpg_data = np.fromfile(f, dtype=np.uint8)
                output_tensor = pb_utils.Tensor("output_jpeg_data", output_jpg_data)
                inference_response = pb_utils.InferenceResponse(
                    output_tensors=[output_tensor]
                )
                responses.append(inference_response)
                print(f"end processing request id: {request.request_id()}")
        except Exception as e:
            print(f"****** Exception: {e}")
        return responses

    def finalize(self, args):
        pass
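For completeness, a config.pbtxt along these lines pairs with the model.py above (the tensor names match the code; the model name "dewarp" and the -1 dims are my assumptions):

```
name: "dewarp"
backend: "python"
max_batch_size: 0
input [
  {
    name: "input_jpeg_data"
    data_type: TYPE_UINT8
    dims: [ -1 ]
  },
  {
    name: "config_file_str"
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]
output [
  {
    name: "output_jpeg_data"
    data_type: TYPE_UINT8
    dims: [ -1 ]
  }
]
```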