GStreamer-CRITICAL **: 16:10:55.658: gst_meta_api_type_has_tag: assertion 'tag != 0' failed

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.5
• NVIDIA GPU Driver Version (valid for GPU only) 550.142
• Issue Type (questions, new requirements, bugs)

When I run the DeepStream pipeline under conda, no exception occurs. But after packaging it with PyInstaller, when my function pgie_src_pad_buffer_probe() is called, the call batch_meta = pyds.gst_buffer_get_nvds_batch_meta(gst_buffer.__hash__()) fails with GStreamer-CRITICAL **: 16:10:55.658: gst_meta_api_type_has_tag: assertion 'tag != 0' failed, and the batch_meta value is None.

    def pgie_src_pad_buffer_probe(self, pad, info, u_data):

        frame_number = 0
        num_rects = 0
        got_fps = False

        current_time = time.time()

        # self.update_time = time.time()


        gst_buffer = info.get_buffer()
        if not gst_buffer:
            MyLogger.error("Unable to get GstBuffer ")
            return Gst.PadProbeReturn.OK


        # opencv_image = StreamCapture.gst_buffer_to_opencv(gst_buffer)
        # Retrieve batch metadata from the gst_buffer
        # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
        # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
        # batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(gst_buffer.__hash__())
        # batch_meta = None
        # print(gst_buffer.__hash__())
        #todo
        if not batch_meta:
            MyLogger.warning(f"Batch meta is None, skipping frame processing:{gst_buffer.__hash__()}")
            return Gst.PadProbeReturn.OK
        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            n_frame1_box = None
            try:
                # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
                # The casting is done by pyds.NvDsFrameMeta.cast()
                # The casting also keeps ownership of the underlying memory
                # in the C code, so the Python garbage collector will leave
                # it alone.
                frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
                # MyLogger.info(f"--------------------------------{frame_meta.batch_id}")

                if current_time - self.last_alive_time_dict.setdefault(frame_meta.pad_index, 0) > 60 * 5:
                    MyLogger.info(f"source [{frame_meta.pad_index}]:{self.stream_path_code[frame_meta.pad_index]} is alive...")
                    self.last_alive_time_dict[frame_meta.pad_index] = current_time

                if current_time - self.last_frame_time_dict.setdefault(frame_meta.pad_index, 0) < self.frame_interval:
                    return Gst.PadProbeReturn.OK

                self.last_frame_time_dict[frame_meta.pad_index] = current_time

            except StopIteration:
                break

            frame_number = frame_meta.frame_num
            l_obj = frame_meta.obj_meta_list
            num_rects = frame_meta.num_obj_meta
            is_first_obj = True
            save_image = False
            obj_counter = {
                PGIE_CLASS_ID_VEHICLE: 0,
                PGIE_CLASS_ID_PERSON: 0,
                PGIE_CLASS_ID_BICYCLE: 0,
                PGIE_CLASS_ID_ROADSIGN: 0
            }
            # n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
            n_frame = pyds.get_nvds_buf_surface(gst_buffer.__hash__(), frame_meta.batch_id)
            n_frame1 = np.array(n_frame, copy=True, order='C')
            while l_obj is not None:
                try:
                    # Casting l_obj.data to pyds.NvDsObjectMeta
                    obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
                    # print(obj_meta.class_id)
                except StopIteration:
                    break
                obj_counter[obj_meta.class_id] += 1
                # if is_first_obj:
                #     is_first_obj = False
                if n_frame1_box is None:
                    n_frame1_box = StreamCapture.draw_bounding_boxes(copy.deepcopy(n_frame1), obj_meta,
                                                                     obj_meta.confidence)
                else:
                    n_frame1_box = StreamCapture.draw_bounding_boxes(n_frame1_box, obj_meta,
                                                                     obj_meta.confidence)

                # convert python array into numpy array format in the copy mode.

                try:
                    l_obj = l_obj.next
                except StopIteration:
                    break
            # convert the array into cv2 default color format
            n_frame1 = cv2.cvtColor(n_frame1, cv2.COLOR_RGBA2RGB)
            # n_frame1 = cv2.cvtColor(n_frame1, cv2.COLOR_RGBA2BGRA)
            if n_frame1_box is not None:
                cv2.imwrite(f"1-{frame_meta.batch_id}.jpg", n_frame1_box)
            with self.lock:
                self.frame_dict.setdefault(str(frame_meta.pad_index), {"frame": None, "time": None})
                self.frame_dict[str(frame_meta.pad_index)]["frame"] = n_frame1
                self.frame_dict[str(frame_meta.pad_index)]["time"] = get_current_timestamp_millis()

            if not silent:
                MyLogger.info(
                    f"batchid:{frame_meta.pad_index},Frame Number={frame_number},Number of Objects={num_rects} Vehicle_count={obj_counter[PGIE_CLASS_ID_VEHICLE]} Person_count={obj_counter[PGIE_CLASS_ID_PERSON]}")
            # Update frame rate through this probe
            stream_index = "stream{0}".format(frame_meta.pad_index)
            # global perf_data
            # perf_data.update_fps(stream_index)
            try:
                l_frame = l_frame.next
            except StopIteration:
                break
        return Gst.PadProbeReturn.OK

    def _read_loop(self):

        # global perf_data
        # perf_data = PERF_DATA(len(self.stream_paths))

        number_sources = len(self.stream_paths)

        # Standard GStreamer initialization

        # create an event loop and feed gstreamer bus messages to it
        self.bus = self.pipeline.get_bus()
        # self.pgie_src_pad = self.tiler.get_static_pad("src")
        self.pgie_src_pad = self.tiler.get_static_pad("sink")
        if not self.pgie_src_pad:
            MyLogger.error("Unable to get tiler sink pad")
        else:
            if not self.disable_probe:
                self.pgie_src_pad.add_probe(Gst.PadProbeType.BUFFER, self.pgie_src_pad_buffer_probe, 0)
                # perf callback function to print fps every 5 sec
                # GLib.timeout_add(5000, perf_data.perf_print_callback)
        # List the sources
        MyLogger.info("Now playing...")

Can you describe in detail your steps with conda and pyinstaller separately?

I first created a conda environment with python=3.8 and installed pyds-1.1.8-py3-none-linux_x86_64.whl. Here is the package list of that environment:

Package                   Version
------------------------- ---------
altgraph                  0.17.4
av                        12.3.0
cx_Freeze                 7.2.10
filelock                  3.16.1
imageio                   2.35.1
imageio-ffmpeg            0.5.1
importlib_metadata        8.5.0
numpy                     1.24.4
opencv-python             4.11.0.86
packaging                 24.2
patchelf                  0.17.2.1
pgi                       0.0.11.2
pillow                    10.4.0
pip                       24.2
psutil                    7.0.0
pyds                      1.1.8
pyinstaller               6.12.0
pyinstaller-hooks-contrib 2025.1
setuptools                75.1.0
tomli                     2.2.1
typing_extensions         4.12.2
wheel                     0.44.0
zipp                      3.20.2

Then I packaged my program with pyinstaller main.py.

But my program encapsulates the DeepStream inference pipeline in a class and instantiates it in main.py. Is there anything wrong with that?

I’ve used this code so that conda can directly use gstreamer, which comes with ubuntu
ln -s /usr/lib/python3/dist-packages/gi /home/user/anaconda3/envs/gstream/lib/python3.8/site-packages/
conda install -c conda-forge libffi

Please make sure you have added all the required libraries when you use PyInstaller. Could you refer to 156942 and 303944 and see if that fixes your issue?

These two posts don't seem to help. My program doesn't report any missing libraries when it runs. Is there any other way to check? Functions such as gst_buffer_get_nvds_batch_meta are not being properly declared.
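One low-level way to check, from either environment (a hedged suggestion, not an official tool): try to dlopen the DeepStream metadata library directly with ctypes and see whether the failing symbol actually resolves. The path below assumes the default DeepStream 6.3 install location.

```python
import ctypes

def can_load(path):
    """Attempt to dlopen a shared library; return (handle, error)."""
    try:
        return ctypes.CDLL(path), None
    except OSError as e:
        return None, str(e)

# Default DeepStream 6.3 location -- adjust if your install differs.
lib, err = can_load("/opt/nvidia/deepstream/deepstream/lib/libnvdsgst_meta.so")
if lib is None:
    print("failed to load:", err)
elif hasattr(lib, "gst_buffer_get_nvds_batch_meta"):
    print("gst_buffer_get_nvds_batch_meta resolved OK")
else:
    print("library loaded but symbol missing")
```

Running this in the conda environment and again inside the PyInstaller bundle should show whether the bundle resolves a different (or no) copy of the library. `LD_DEBUG=libs ./dist/main 2> libs.log` gives a similar view from the dynamic loader itself.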

This error means the batch_meta value is None. So the most likely reason is that the batch_meta cannot be added because something is missing in the process of packaging your application. So you need to ensure that all necessary DeepStream libraries and data files are included when packaging the application with PyInstaller.

When I run it with python3, I get a warning:

(gst-plugin-scanner:76923): GStreamer-WARNING **: 13:41:53.450: Failed to load plugin '/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_inferserver.so':  libtritonserver.so: cannot open shared object file: No such file or directory

(gst-plugin-scanner:76923): GStreamer-WARNING **: 13:41:53.451: Failed to load plugin '/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_ucx.so': libucs.so.0:  cannot open shared object file: No such file or directory

(gst-plugin-scanner:76923): GStreamer-WARNING **: 13:41:53.458: Failed to load plugin '/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_udp.so': librivermax.so.0:  cannot open shared object file: No such file or directory

(gst-plugin-scanner:76923): GStreamer-WARNING **: 13:41:53.463: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory

(gst-plugin-scanner:76923): GStreamer-WARNING **: 13:41:53.464: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_ucx.so': libucs.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:76923): GStreamer-WARNING **: 13:41:53.464: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory

But when I bundle it with pyinstaller, it says:

(main-deepstream:76851): GStreamer-WARNING **: 13:41:44.728: Failed to load plugin '/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_inferserver.so':  libtritonserver.so: cannot open shared object file: No such file or directory

(main-deepstream:76851): GStreamer-WARNING **: 13:41:44.728: Failed to load plugin '/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_ucx.so': libucs.so.0:  cannot open shared object file: No such file or directory

(main-deepstream:76851): GStreamer-WARNING **: 13:41:44.729: Failed to load plugin '/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_udp.so': librivermax.so.0:  cannot open shared object file: No such file or directory

Both methods produce the same missing-.so warnings, but the results differ: only the PyInstaller build hits GStreamer-CRITICAL **: 16:10:55.658: gst_meta_api_type_has_tag: assertion 'tag != 0' failed. Could these libraries be the missing piece?

Have you tried to use the command below to package your demo?

pyinstaller --add-binary '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/*.so:.' --add-data '/opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/common:common' your_demo.py
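A possible extension of that command (untested with DeepStream, so treat it as a starting point; paths assume a default 6.3 install): also bundle the core DeepStream libraries, and at run time keep GStreamer pointed at the system plugin directory so the plugins are registered only once.

```shell
pyinstaller \
  --add-binary '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/*.so:.' \
  --add-binary '/opt/nvidia/deepstream/deepstream/lib/*.so:.' \
  --add-data '/opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/common:common' \
  main.py

# At run time, point GStreamer at the system plugin directory so each
# plugin is registered only once (double registration can crash with
# "cannot register existing type" errors):
export GST_PLUGIN_PATH=/opt/nvidia/deepstream/deepstream/lib/gst-plugins
export LD_LIBRARY_PATH=/opt/nvidia/deepstream/deepstream/lib:$LD_LIBRARY_PATH
./dist/main/main
```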

Report an error:

(main-deepstream:39997): GStreamer-WARNING **: 16:54:49.357: Failed to load plugin '/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory

(main-deepstream:39997): GStreamer-WARNING **: 16:54:49.358: Failed to load plugin '/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_ucx.so': libucs.so.0: cannot open shared object file: No such file or directory
infer.py:617: Warning: cannot register existing type 'GstNvInfer'
infer.py:617: Warning: g_once_init_leave: assertion 'result != 0' failed

(main-deepstream:39997): GLib-GObject-ERROR **: 16:54:49.363: cannot create new instance of invalid (non-instantiatable) type '<invalid>'
Trace/breakpoint trap (core dumped)

Line 617 of infer.py is Gst.init(None):

    def open(self, stream_paths_dict):
        self.stream_path_code = list(stream_paths_dict.keys())
        self.stream_paths = list(stream_paths_dict.values())
        self.number_sources = len(self.stream_paths)
        Gst.init(None)
        if not self.create_pipeline():
            MyLogger.error('Failed to create Pipeline.')
            self.status = False
            return False
        self.running = True
        self.thread = threading.Thread(target=self._read_loop)
        self.thread.daemon = True
        self.thread.start()
        self.status = True
        return True

There may still be some related resources that have not been successfully packaged. We also don’t have much experience with PyInstaller in DeepStream. For ease of use, it is recommended to use our docker solution.

Thank you for your help, but I have only installed DeepStream directly on my machine. Can you provide links to some tutorials on how to set up DeepStream 6.3 with Docker?

Sure. You can refer to the link below. https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_docker_containers.html

I want to set up the Docker image to match my environment. I have already installed Docker.

When I try to install nvidia-container-toolkit by following the install guide and run sudo apt-get update, I get this error:

E: The repository 'https://nvidia.github.io/libnvidia-container/stable/deb/amd64 Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

Did one of the steps go wrong? The OS is Ubuntu 20.04.

The OS we used for the latest DeepStream version 7.1 is Ubuntu 22.04.
Can you make sure your network is not blocked by a firewall first? Then we recommend that you follow these steps to use DeepStream 7.1.

  1. upgrade your OS to Ubuntu 22.04
  2. install the Nvidia Driver #link
  3. install the nvidia-container-toolkit
  4. run the docker #link
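Step 4 usually boils down to something like the following (a sketch based on the container quick-start; substitute the tag for whichever DeepStream version you choose, and add volume mounts for your own models and configs as needed):

```shell
# Pull the DeepStream samples image and start it with GPU access.
docker pull nvcr.io/nvidia/deepstream:6.3-samples
docker run --gpus all -it --rm --net=host \
    nvcr.io/nvidia/deepstream:6.3-samples
```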

I have successfully pulled the image with docker pull nvcr.io/nvidia/deepstream:6.3-samples, but there seems to be no Python environment inside. Do I have to install it myself?

Second, when I run

cd /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app
deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt

Report an error:

root@f0b59e997add:/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app# deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt

(gst-plugin-scanner:31): GStreamer-WARNING **: 03:29:17.807: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstmpeg2dec.so': libmpeg2.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:31): GStreamer-WARNING **: 03:29:17.833: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstchromaprint.so': libavcodec.so.58: cannot open shared object file: No such file or directory

(gst-plugin-scanner:31): GStreamer-WARNING **: 03:29:17.891: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstmpg123.so': libmpg123.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:31): GStreamer-WARNING **: 03:29:17.917: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstmpeg2enc.so': libmpeg2encpp-2.1.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:31): GStreamer-WARNING **: 03:29:17.924: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstopenmpt.so': libmpg123.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:31): GStreamer-WARNING **: 03:29:18.044: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory

(gst-plugin-scanner:31): GStreamer-WARNING **: 03:29:18.047: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_ucx.so': libucs.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:31): GStreamer-WARNING **: 03:29:18.060: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:03.232182684    30 0x558dd7191380 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 6]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:03.326883624    30 0x558dd7191380 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 6]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:03.326905054    30 0x558dd7191380 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 6]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
0:00:26.745523492    30 0x558dd7191380 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2034> [UID = 6]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 20x1x1          

0:00:26.842464020    30 0x558dd7191380 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary_gie_2> [UID 6]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_secondary_carmake.txt sucessfully
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:29.281149625    30 0x558dd7191380 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 5]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:29.376075154    30 0x558dd7191380 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 5]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:29.376098435    30 0x558dd7191380 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 5]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
0:00:52.184784694    30 0x558dd7191380 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2034> [UID = 5]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 12x1x1          

0:00:52.286935994    30 0x558dd7191380 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary_gie_1> [UID 5]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_secondary_carcolor.txt sucessfully
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:54.745959033    30 0x558dd7191380 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 4]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:54.844231948    30 0x558dd7191380 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 4]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:54.844252397    30 0x558dd7191380 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 4]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
0:01:17.020591780    30 0x558dd7191380 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2034> [UID = 4]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 6x1x1           

0:01:17.121570926    30 0x558dd7191380 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary_gie_0> [UID 4]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_secondary_vehicletypes.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine open error
0:01:19.606767249    30 0x558dd7191380 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine failed
0:01:19.702068432    30 0x558dd7191380 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine failed, try rebuild
0:01:19.703272015    30 0x558dd7191380 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
0:01:55.261685917    30 0x558dd7191380 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2034> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:01:55.363472155    30 0x558dd7191380 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_primary.txt sucessfully
** ERROR: <main:733>: Could not open X Display
Quitting
nvstreammux: Successfully handled EOS for source_id=0
nvstreammux: Successfully handled EOS for source_id=1
nvstreammux: Successfully handled EOS for source_id=2
nvstreammux: Successfully handled EOS for source_id=3
[NvMultiObjectTracker] De-initialized
App run failed

How do I fix this error?

About the error, please refer to https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_troubleshooting.html#deepstream-plugins-failing-to-load-without-display-variable-set-when-launching-ds-dockers.
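In practice that troubleshooting entry comes down to forwarding the host's X display into the container, or running without a display sink at all. A sketch (the config file name is from this thread; treat the exact flags as an example):

```shell
# On the host, authorize local containers on the X server:
xhost +local:

# Then start the container with the display forwarded, i.e. add these
# flags to your docker run command:
#   -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix

# Alternatively, run headless: in
# source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
# change the sink to a non-display type, e.g. under [sink0]:
#   type=1    # 1 = fakesink, no X display required
```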

About the python env, please run the command below.

cd /opt/nvidia/deepstream/deepstream/
./user_deepstream_python_apps_install.sh --version <compatible with the version of your DeepStream>

If there are new problems, please file a new topic. We recommend only discussing one issue in a topic so that others can refer to it conveniently.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.