Error - NvBufSurfTransform failure

• Hardware Platform (Jetson / GPU) Jetson Xavier NX
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) R32 Revision: 5.0 GCID: 25531747 Board: t186ref
• TensorRT Version 7.1.3 + CUDA 10.2
• Issue Type( questions, new requirements, bugs) Please see below
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) please see below
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description) please see below

Using the Python deepstream-imagedata-multistream app as an example, I have created my own app. I noticed that the app fails silently, and based on what I see in the log, this appears to be the issue:

[2021-03-26T08:49:28.549-07:00][INFO]-gstnvtracker: NvBufSurfTransform failed with error -2 while converting buffer
gstnvtracker: Failed to convert input batch.
[2021-03-26T08:49:28.549-07:00][ERROR]-SYNC_IOC_FENCE_INFO ioctl failed with 9
[2021-03-26T08:49:28.589-07:00][ERROR]-58:04:05.548567353  5569 0x33445770 WARN nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop: error: Internal data stream error.
[2021-03-26T08:49:28.589-07:00][ERROR]-58:04:05.548703833  5569 0x33445770 WARN nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
[2021-03-26T08:49:28.594-07:00][ERROR]-bus_call.py:37,Error: gst-stream-error-quark: Failed to submit input to tracker (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvtracker2/gstnvtracker.cpp(581): gst_nv_tracker_submit_input_buffer (): /GstPipeline:pipeline0/GstNvTracker:tracker

My guess is that this error happens when we perform the following:

n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
frame_image = np.array(n_frame, copy=True, order='C')
frame_image = cv2.cvtColor(frame_image, cv2.COLOR_RGBA2BGRA)

This code runs when we find something of interest; we use frame_image to save the frame and send it to the cloud.
When the NvBufSurfTransform error shows up, the pipeline seems to stop receiving the feed from the camera. Is there anything obvious that I may have implemented incorrectly?
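For reference, my understanding of the copy step is that the RGBA surface must be deep-copied before the buffer continues downstream. On a plain array, the copy-and-convert behaves like this (an illustration only: the tiny array stands in for the surface returned by pyds.get_nvds_buf_surface, and the manual channel reordering stands in for cv2.cvtColor):

```python
import numpy as np

# Stand-in for the surface returned by pyds.get_nvds_buf_surface():
# an H x W x 4 RGBA frame (here just 2x2 so the values are easy to follow).
n_frame = np.arange(2 * 2 * 4, dtype=np.uint8).reshape(2, 2, 4)

# Deep-copy first, so later writes to the buffer cannot corrupt our frame.
frame_image = np.array(n_frame, copy=True, order='C')

# cv2.COLOR_RGBA2BGRA just swaps the R and B channels; the same effect with
# plain NumPy indexing (used here so the sketch has no cv2 dependency):
frame_bgra = frame_image[..., [2, 1, 0, 3]]
```

If the copy were skipped, the downstream transform could rewrite the surface underneath us, which is why I copy before calling cv2.cvtColor in the probe.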

Please see below for snippets of the code; the full code is in the attached files.

I would appreciate any pointers.

Thank you!

> def tiler_sink_pad_buffer_probe(pad, info, u_data):
>     global trackableObjects
>     global objectTrackingTotalFrames
>     global ss
>     global ct
>     global directionInfo
> 
>     frame_number = 0
>     num_rects = 0
>     counting_car = 0
>     trackers = []
>     all_result = {}
> 
>     gst_buffer = info.get_buffer()
>     if not gst_buffer:
>         print("Unable to get GstBuffer ")
>         return
> 
>     # Retrieve batch metadata from the gst_buffer
>     # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
>     # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
>     batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
>     l_frame = batch_meta.frame_meta_list
>     while l_frame is not None:
>         try:
>             # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
>             # The casting is done by pyds.NvDsFrameMeta.cast()
>             # The casting also keeps ownership of the underlying memory
>             # in the C code, so the Python garbage collector will leave
>             # it alone.
>             frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
>         except StopIteration:
>             break
> 
>         frame_number = frame_meta.frame_num
>         l_obj = frame_meta.obj_meta_list
>         num_rects = frame_meta.num_obj_meta
>         is_first_obj = True
>         save_image = False
>         dc = direction_counter_module.DirectionCounter(
>             'horizontal', frame_meta.source_frame_height, frame_meta.source_frame_width)
> 
>         while l_obj is not None:
>             try:
>                 # Casting l_obj.data to pyds.NvDsObjectMeta
>                 obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
>             except StopIteration:
>                 print('********* [ERROR] happened at location 1 *********')
>                 break
>             # obj_counter[obj_meta.class_id] += 1
> 
>             # Periodically check for objects with borderline confidence value that may be false positive detections.
>             # If such detections are found, annotate the frame with bboxes and confidence value.
>             # Save the annotated frame to file.
>             if pgie_classes_str[obj_meta.class_id] == "license_plate":
> 
>                 if is_first_obj:
>                     # check if this is the first object in the frame, this way, we only save once per frame
>                     is_first_obj = False
>                     # Getting Image data using nvbufsurface
>                     # the input should be address of buffer and batch_id
>                     # print('[------------] inside of is_first_obj [------------]')
>                     n_frame = pyds.get_nvds_buf_surface(
>                         hash(gst_buffer), frame_meta.batch_id)
>                     # convert python array into numpy array format.
>                     frame_image = np.array(n_frame, copy=True, order='C')
>                     # convert the array into cv2 default color format
>                     frame_image = cv2.cvtColor(
>                         frame_image, cv2.COLOR_RGBA2BGRA)
> 
>                 save_image = True
>                 flash_on()
> 
>                 rect_params = obj_meta.rect_params
>                 startY = int(rect_params.top)
>                 startX = int(rect_params.left)
>                 width = int(rect_params.width)
>                 height = int(rect_params.height)
>                 endY = startY + height
>                 endX = startX + width
>                 score = str(int(abs(obj_meta.confidence) * 100))
> 
>                 all_result[score] = (startX, startY, endX, endY)
> 
>             try:
>                 l_obj = l_obj.next
>             except StopIteration:
>                 print('********* [ERROR] happened at location 5 *********')
>                 break
> 
> 
>         # Out of all the objects/rectangles we found, track only the one with the highest confidence
>         if all_result:
>             # keys are string scores, so sort numerically rather than lexicographically
>             sorted_result = sorted(all_result, key=int, reverse=True)
>             trackers.append(all_result[sorted_result[0]])
> 
>         # use the centroid tracker to associate the (1) old object centroids with (2) the newly computed object centroids
>         objects = ct.update(
>             trackers, objectTrackingTotalFrames)
> 
>         if save_image:
>             # loop over the tracked objects
>             for (objectID, centroid) in objects.items():
>                 to = trackableObjects.get(objectID, None)
> 
>                 if to is None:
>                     # print(
>                     #     '[INFO - save_image] no existing trackable object with that ID ... creating one')
>                     to = tracker_object_module.TrackableObject(
>                         objectID, centroid)
> 
>                 else:
>                     # print(
>                     #     '[INFO - save_image] found an object with that ID')
>                     to.centroids.append(centroid)
> 
>                 print(to.counted)
> 
>                 # using the first 3 frames to set the direction
>                 if to.counted <= 2:
> 
>                     dc.find_direction(to, centroid)
>                     # find the direction of motion which will tell us whether to send to entrance or exit
>                     # if door is not "ENTRANCE":
>                     directionInfo = dc.count_object(
>                         to, centroid, door)
> 
>                 elif 3 <= to.counted <= 7:
>                     loc_dt = datetime.datetime.now(
>                         tz=dateutil.tz.gettz(timezone))
>                     localized_time = loc_dt.strftime("%Y-%m-%d-%H-%M-%S")
>                     localized_time_yr = loc_dt.strftime("%Y")
>                     localized_time_month = loc_dt.strftime("%m")
>                     localized_time_day = loc_dt.strftime("%d")
>                     door_short = "EN" if door == "ENTRANCE" else "EX"
>                     door_new = door if directionInfo and directionInfo == door_short else "EXIT" if door == "ENTRANCE" else "ENTRANCE"
>                     folder_name = f'public/{company_name}/{place}/{door_new}/{localized_time_yr}/{localized_time_month}/{localized_time_day}/{objectID}'
> 
>                     local_directory = device_user_name
>                     entire_path = local_directory + folder_name
> 
>                     os.umask(0)
>                     os.makedirs(entire_path, mode=0o777,
>                                 exist_ok=True)
> 
>                     door_short_new = door_new[:2]
> 
>                     filename = f'_{localized_time}_{objectID}_{door_short_new}_{score}_{startX}_{startY}_{endX}_{endY}'
> 
>                     ss.start(folder_name, filename,
>                              frame_image, to.counted, s3_bucket)
> 
>                     if not ss.sending:
>                         ss.finish()
> 
>                 trackableObjects[objectID] = to
> 
>                 to.counted += 1
> 
>         objectTrackingTotalFrames += 1
> 
>         try:
>             l_frame = l_frame.next
>         except StopIteration:
>             print('********* [ERROR] happened at location 6 *********')
>             break
> 
>     return Gst.PadProbeReturn.OK

dstest3_pgie_config.txt (3.6 KB) modified.py (28.0 KB)

It may take some time to review your code.

Any help would be appreciated.

In the meantime, my thought was that, if the ERROR message pops up, I can just restart the streaming process.
Would you recommend this approach (something like the one in Basic tutorial 2: GStreamer concepts)? Or, as an expert, do you have any other recommendation? I would appreciate it.
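Concretely, the restart I had in mind is something like this sketch (the Gst-specific pieces are assumptions: `pipeline` would be the real Gst.Pipeline, and the two state arguments would be Gst.State.NULL and Gst.State.PLAYING; they are injected so the policy itself is plain Python):

```python
# Hypothetical restart-on-error policy, not code from my app.
# `pipeline` is assumed to be a Gst.Pipeline; `null_state` / `playing_state`
# would be Gst.State.NULL / Gst.State.PLAYING in the real app.
def make_bus_handler(pipeline, null_state, playing_state, max_restarts=3):
    """Return a callback that restarts `pipeline` on ERROR, up to a limit."""
    state = {"restarts": 0}

    def on_error(err_msg):
        if state["restarts"] >= max_restarts:
            return "give-up"               # too many failures: let the app exit
        state["restarts"] += 1
        pipeline.set_state(null_state)     # flush the broken streaming state
        pipeline.set_state(playing_state)  # and start the feed again
        return "restarted"

    return on_error
```

The idea is to tear the pipeline down to NULL, bring it back to PLAYING, and give up after a few failed attempts rather than looping forever.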

Thank you

Can you move tracker right after nvinfer and before nvvideoconvert?

I just deployed it and will report back on what happens.
If you don’t mind, could I ask about the logic behind the switch?
What in the log or in the code triggered the suggestion?

I am reading over all GStreamer tutorials right now, and after that I plan to read through gst-python, but in the meantime, if you could help me understand, I would appreciate it.

Thanks,
Jae

Hi @Fiona.Chen ,

Hope you are having a good week.
I did make the adjustment as you suggested -

Can you move tracker right after nvinfer and before nvvideoconvert?

It seemed to be working fine, but then a similar, though not identical, error popped up.

[2021-04-01T17:13:03.108-07:00][ERROR]-SYNC_IOC_FENCE_INFO ioctl failed with 9
[2021-04-01T17:13:03.108-07:00][ERROR]-54:19:34.736036162  7841 0x354ea450 ERROR nvvideoconvert gstnvvideoconvert.c:3387:gst_nvvideoconvert_transform: buffer transform failed
[2021-04-01T17:13:44.919-07:00][ERROR]-SYNC_IOC_FENCE_INFO ioctl failed with 9

I looked up SYNC_IOC_FENCE_INFO ioctl failed with 9 and, based on what I gathered, it doesn’t seem to be a generic Linux error.
So I searched further for gst_nvvideoconvert_transform: buffer transform failed.

I couldn’t really figure out why the failure happens, nor which of the two nvvideoconvert elements it is coming from. Since I uploaded the full code in the previous post, I am just copying the section where I create and link the elements.

If something stands out, I would appreciate any pointers.

## This is the section where I create each element

print("Creating pgie \n ")
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
if not pgie:
    sys.stderr.write(" Unable to create pgie \n")

print("Creating tracker \n ")
tracker = Gst.ElementFactory.make("nvtracker", "tracker")
if not tracker:
    sys.stderr.write(" Unable to create tracker \n")

print("Creating nvvidconv1 \n ")
nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "convertor1")
if not nvvidconv1:
    sys.stderr.write(" Unable to create nvvidconv1 \n")

print("Creating filter1 \n ")
caps1 = Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA")
filter1 = Gst.ElementFactory.make("capsfilter", "filter1")
if not filter1:
    sys.stderr.write(" Unable to get the caps filter1 \n")
filter1.set_property("caps", caps1)

print("Creating tiler \n ")
tiler = Gst.ElementFactory.make("nvmultistreamtiler", "nvtiler")
if not tiler:
    sys.stderr.write(" Unable to create tiler \n")

print("Creating nvvidconv \n ")
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
if not nvvidconv:
    sys.stderr.write(" Unable to create nvvidconv \n")

print("Creating nvosd \n ")
nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
if not nvosd:
    sys.stderr.write(" Unable to create nvosd \n")

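As an aside, the repeated create-and-check blocks above could be folded into a small helper (a sketch; `factory_make` would be `Gst.ElementFactory.make`, passed in here only so the helper itself has no gi dependency):

```python
# Hypothetical helper, not part of the app yet: create an element and fail
# loudly instead of writing to stderr and continuing with a None element.
def make_element(factory_make, factory_name, element_name):
    """Create a pipeline element via `factory_make`, raising on failure."""
    element = factory_make(factory_name, element_name)
    if element is None:
        raise RuntimeError(f"Unable to create {factory_name} '{element_name}'")
    return element

# Intended usage (assumes Gst.init() has already run):
# pgie = make_element(Gst.ElementFactory.make, "nvinfer", "primary-inference")
```

Raising here stops the app before a None element reaches pipeline.add() or link(), which would otherwise fail later with a less obvious error.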
## This is the section where I link all the elements together

streammux.link(queue1)
queue1.link(pgie)

pgie.link(queue2)
queue2.link(tracker)

tracker.link(queue3)
queue3.link(nvvidconv1)

nvvidconv1.link(queue4)
queue4.link(filter1)

filter1.link(queue5)
queue5.link(tiler)    

tiler.link(queue6)
queue6.link(nvvidconv)

nvvidconv.link(queue7)
queue7.link(nvosd)

if output_mp4:
    nvosd.link(queue9)
    queue9.link(queue_sink)

    queue_sink.link(queue10)
    queue10.link(nvvidconv_sink)

    nvvidconv_sink.link(queue11)
    queue11.link(caps_filter)

    caps_filter.link(queue12)
    queue12.link(encoder)

    encoder.link(queue13)
    queue13.link(h264parse)

    h264parse.link(queue14)
    queue14.link(muxer)

    muxer.link(queue15)
    queue15.link(sink)
else:
    if is_aarch64():
        nvosd.link(queue9)
        queue9.link(transform)
        transform.link(sink)
# create an event loop and feed gstreamer bus messages to it
loop = GObject.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)

tiler_sink_pad = tiler.get_static_pad("sink")
if not tiler_sink_pad:
    sys.stderr.write(" Unable to get tiler sink pad \n")
else:
    tiler_sink_pad.add_probe(
        Gst.PadProbeType.BUFFER, tiler_sink_pad_buffer_probe, 0)

# List the sources
print("Now playing...")
for i, source in enumerate(args):
    if (i != 0):
        print(i, ": ", source)

print("Starting pipeline \n")
# start playback and listen to events
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except:
    pass
# cleanup
print("Exiting app\n")
pipeline.set_state(Gst.State.NULL)

In addition, I was curious why no error or other information was output. I see that the bus_call function inside the common folder within deepstream_python_apps (https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps/common) should print a message and quit on EOS or ERROR, but this didn’t seem to happen… Is there a better place to attach the bus_call callback?

def bus_call(bus, message, loop):
    t = message.type
    if t == Gst.MessageType.EOS:
        sys.stdout.write("End-of-stream\n")
        loop.quit()
    elif t==Gst.MessageType.WARNING:
        err, debug = message.parse_warning()
        sys.stderr.write("Warning: %s: %s\n" % (err, debug))
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        sys.stderr.write("Error: %s: %s\n" % (err, debug))
        loop.quit()
    return True

@Fiona.Chen
Any thoughts/feedback please?

The error is related to nvvideoconvert. There are three nvvideoconvert elements in your code. You may try to check them one by one. I think the second one is of no use, since the format is already RGBA.

There are too many queues in your code; they are not necessary and they make the code hard to read.

bus_call only handles GStreamer bus error messages. Your pipeline hit a plugin error before reaching EOS, and not all errors are sent as bus error messages.
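When an error never reaches the bus, the GStreamer debug log is the place to look. For example (an illustrative invocation; substitute your own script name):

```shell
# Raise the log level for the suspect elements only; everything else stays at level 2.
export GST_DEBUG=2,nvvideoconvert:5,nvtracker:5
# Optionally keep the (potentially large) output in a file instead of stderr.
export GST_DEBUG_FILE=/tmp/gst-debug.log
python3 your_app.py   # hypothetical script name
```

The per-element log lines will show which nvvideoconvert instance fails and with which caps.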

I modified deepstream_imagedata-multistream.py according to your pipeline. It works well. Please check deepstream_imagedata-multistream_test.py (19.3 KB)