How to run a CPU-only program in parallel with a DeepStream pipeline?

Hi, I’m trying to run a .cpp program in parallel with DeepStream. The .cpp program runs entirely on the CPU, so it shouldn’t affect DeepStream’s performance. Here are the commands I’m using:

g++ Source.cpp `pkg-config --cflags --libs opencv`
./a.out &
deepstream-app -c deepstream_app_config_yoloV3.txt &

but I keep getting this warning:

(deepstream-app:9297): GLib-GObject-CRITICAL **: 16:38:35.356: g_object_get: assertion ‘G_IS_OBJECT (object)’ failed

Do you know what is the correct way of doing this?

Hi @MGh
Could you share your env info like below:

• Hardware Platform: GPU RTX2080
• DeepStream Version: 5.0
• TensorRT Version:
• NVIDIA GPU Driver Version:440.59

I tried to reproduce your issue with the DeepStream docker on x86 + Tesla T4 GPU (GPU driver 440.64.00), following exactly the same steps, but I can’t reproduce it.

Also, could you share more of the log?

My Source.cpp is as below.

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;

int main(int argc, char** argv) {
        Mat image;
        Mat grayImage;

        while (true) {
                image = imread("test.jpg", IMREAD_COLOR);
                if (image.empty()) {
                        std::cerr << "No image data!" << std::endl;
                        return -1;
                }
                cvtColor(image, grayImage, COLOR_BGR2GRAY);
                imwrite("Gray_Image.jpg", grayImage);
        }

        return 0;
}

Hi @mchi, Thank you very much for your reply.
Here is the env info:
TensorRT Version:
DeepStream Version:5.0
NVIDIA GPU Driver Version:430.50
Hardware Platform: Tesla P100
while (true) {
        cap >> frame;
        if (frame.empty())
                break;

        string bbFile;
        //bbFile = string(trck_results[i]);
        string path = "/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo/result_output/tracking/00_000_00";
        bbFile = path + to_string(i) + ".txt";

        fstream file;
        file.open(bbFile, ios::in);
        if (!file.is_open()) {
                cout << "text file not open";
                check = false;
        } else {
                check = true;
        }
}

Here is a snippet of the code I’m using. One point about this program: the .txt files are the output of the DeepStream pipeline, and the second program tries to fetch them as they are created; if a file is not created yet, it sleeps for 2 s.

.txt files are output of DeepStream pipeline
What “DeepStream pipeline”? Do you mean “deepstream-app -c deepstream_app_config_yoloV3.txt &”? But that doesn’t output “.txt” files.

will sleep for 2 s.
Where in the code does it sleep for 2 s?

Also, why does this code need “pkg-config --cflags --libs opencv” to compile?

I can’t see any reason why this code could affect deepstream-app. Maybe you should debug it yourself first.

deepstream-app -c deepstream_app_config_yoloV3.txt
outputs the results of the YOLO object detector and tracker to .txt files, and the other .cpp program sleeps for 2 s in

else { sleep(2); }
(it was previously 1 instead of 2, sorry I just edited)
and OpenCV is required because I’m only attaching the part of the code related to fetching data from the YOLO outputs; the rest is not related to that, as it only does some analysis (using the OpenCV library) on the object detector data. Also, the program runs correctly, but it continually throws this warning while the two programs are running.
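The polling pattern described above (try to open the next tracking file; if DeepStream hasn’t written it yet, sleep 2 s and retry) can be sketched roughly like this. The path and file-name scheme are taken from the snippet earlier in the thread; `make_bb_path` and `wait_for_file` are hypothetical helper names, not part of the original program:

```cpp
#include <fstream>
#include <string>
#include <unistd.h>   // sleep()

// Hypothetical helper: build the path of the i-th tracking result file,
// using the naming scheme from the snippet above.
std::string make_bb_path(int i) {
    std::string path = "/opt/nvidia/deepstream/deepstream-4.0/sources/"
                       "objectDetector_Yolo/result_output/tracking/00_000_00";
    return path + std::to_string(i) + ".txt";
}

// Try to open the i-th file; if it does not exist yet (DeepStream has not
// written it), sleep for 2 s and retry, up to max_retries attempts.
bool wait_for_file(int i, int max_retries) {
    for (int r = 0; r < max_retries; ++r) {
        std::ifstream file(make_bb_path(i));
        if (file.is_open())
            return true;   // file exists and is readable
        sleep(2);          // as in the thread: wait 2 s before retrying
    }
    return false;          // gave up after max_retries attempts
}
```

Bounding the number of retries avoids spinning forever if the producer dies.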

did you modify deepstream-app or deepstream_app_config_yoloV3.txt?

Except for the path to the video, no, I didn’t change either of them, and in fact deepstream-app -c deepstream_app_config_yoloV3.txt works fine if I run DeepStream without the .cpp program.

what’s the relation of these two programs?

deepstream-app -c deepstream_app_config_yoloV3.txt&
./a.out &

are they totally independent?

As you are working under DS5.0 docker, why refer to deepstream-4,0 folder?

string path = “/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo/result_output/tracking/00_000_00”;

Sorry @mchi, I’m using DeepStream SDK 4.0 (not 5.0), my bad. They are independent in terms of computation; the only relation between them is that a.out fetches the .txt output files of the tracking plugin in DeepStream as they are created. The reason I don’t want to run them sequentially (first DeepStream, then a.out) is to increase the overall speed.

I confirmed that running deepstream-app -c deepstream_app_config_yoloV3.txt does not generate any “result_output” output.

@mchi There is an option for storing the results of the tracker and object detector in a specified directory, and I’m storing them in result_output. So when I said I didn’t change the config file, I meant no logical changes, only changes to the directories (technical changes).
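For reference, in the deepstream-app config this kind of output is configured in the [application] group. A minimal sketch, assuming the standard deepstream-app options for KITTI-format detector and tracker output directories (the paths here are illustrative, not the poster’s actual values):

```ini
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
# KITTI-format detector output, one .txt per frame (illustrative path)
gie-kitti-output-dir=/path/to/result_output/detection
# KITTI-format tracker output, one .txt per frame (illustrative path)
kitti-track-output-dir=/path/to/result_output/tracking
```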

can you share deepstream_app_config_yoloV3 ?

Hi, here is the config file (just to mention that it works fine when I run it without the .cpp program, but throws that warning when they run in parallel).
deepstream_app_config_yoloV3.txt (4.3 KB)

I noticed you’re opening a file in a loop. Are you closing it somewhere? My limited understanding of this kind of thing is that you should open the file outside the loop, flush the stream (std::endl or std::flush will do this) when you want to explicitly write, and close the file at the end, after your work is done.

Otherwise you can run out of file descriptors and your program will crash. Likewise, I’m not actually sure what opening a file that’s already open does, but I suspect it might cause trouble. Lastly, if you’re trying to communicate between apps, rather than a file you might try using socket IO (or something similar).
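The open-once / flush / close-once pattern suggested above can be sketched like this; the function name, file name, and contents are illustrative, not from the original program:

```cpp
#include <fstream>
#include <string>

// Sketch of the suggested pattern: open the stream once outside the loop,
// flush explicitly on each write, and close once after the loop is done.
void write_results(const std::string& out_path, int n_frames) {
    std::ofstream out(out_path);             // open once, outside the loop
    for (int i = 0; i < n_frames; ++i) {
        // std::endl writes '\n' and flushes the stream, so each line is
        // on disk before the next iteration starts
        out << "frame " << i << std::endl;
    }
    out.close();                             // close once, after the work
}
```

Opening and closing inside the loop also works (as the poster does below), but keeping one descriptor open avoids repeated open/close overhead and the risk of descriptor exhaustion if a close is ever skipped.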

@mdegans Hi, thank you for your reply. Actually, I open a file in the loop and close it in the same iteration, i.e. at each iteration one file opens and closes. I am familiar with socket communication, but I have no idea how to incorporate it into the DeepStream pipeline, since from what I see the only things I can manipulate in this pipeline are the config files. Where can I add some C++ code?

Source code for various plugins written in C++ exists in /opt/nvidia/deepstream/deepstream/sources/gst-plugins. Of interest to you might be gst-nvmsgconv and gst-nvmsgbroker, which are GStreamer elements that convert and broker NVIDIA metadata, respectively.

Also of interest might be /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-user-metadata-test/deepstream_user_metadata_app.c and various other metadata examples. You can grep for user_meta_data and find quite a bit in the deepstream sources.

Also of use might be this manual and the plugin interface for the message broker. There are good examples in that manual. I’m sure you’ll find something that can suit your needs :)

Thanks a lot @mdegans, very helpful hints. I’ve started to figure out how I can modify things to my needs; I was always looking for something like this. Thanks again.