Using TRT engine generated by DetectNet for custom app

Hey folks,

I am building my own app using PyQt and I wish to run inference: take snapshots at intervals into a RAM scratch disk, then feed them to the inference engine.

The thing is, I do not need it to output images, nor do I want to have images sitting in a folder in order to establish a stream from the get-go.

Dusty-nv's setup is great for learning, but I would like to know how to use the generated TRT engine to run inference as and when I see fit, as opposed to having to initiate the stream from the beginning, as I said. I have also written my own code to draw boxes and display them in my UI, etc., but I am very much restricted to basically hacking Dusty's repo, so any info that guides me in the right direction towards shedding some of that weight would be greatly appreciated.

I just need to be able to give it images, get the results back, and do as I wish with them.

Thank You!

Hi,

You can find some examples of running TensorRT with an engine file in the folder below:

/usr/src/tensorrt/samples/

In general, you can create a runtime engine from a file like below:

// Read the serialized engine file into memory (the path is an example)
std::ifstream input_stream("./myengine.plan", std::ifstream::in | std::ifstream::binary);
std::vector<char> model_data((std::istreambuf_iterator<char>(input_stream)), std::istreambuf_iterator<char>());

// Deserialize it into a runtime engine ("logger" is your nvinfer1::ILogger implementation)
nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
nvinfer1::ICudaEngine* engine = runtime->deserializeCudaEngine(model_data.data(), model_data.size(), nullptr);

Then run the following for inference:

// Create an execution context and enqueue inference on a CUDA stream
// (mBindings is your array of device buffer pointers, mStream is your cudaStream_t)
nvinfer1::IExecutionContext* context = engine->createExecutionContext();
context->enqueue(1, mBindings, mStream, nullptr);

Thanks.

Thank you very much. I am guessing this is C++? I am a Python programmer, so that is a little confusing :) Is there a Python solution?

I will take a look at the samples mentioned.

Cheers
Tom

Hi Tom, there are TensorRT Python samples under /usr/src/tensorrt/samples/python
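For reference, here is a rough sketch of the same deserialize-and-run flow in Python, assuming the tensorrt and pycuda packages are installed. The engine path and the preprocessed_image array are placeholders, and the exact binding calls vary a bit between TensorRT versions, so treat this as a starting point rather than a drop-in solution:

import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine file (path is a placeholder)
with open("./myengine.plan", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for every binding
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(i)), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Copy the pre-processed input in, run inference, copy the outputs back out
host_bufs[0][:] = preprocessed_image.ravel()  # assumes binding 0 is the input
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute_v2(bindings)
for i in range(1, engine.num_bindings):
    cuda.memcpy_dtoh(host_bufs[i], dev_bufs[i])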

Also, it may be worth mentioning that you don’t need to continually process a stream like in detectnet.py, which is just an example. You can push your own frames to detectNet.Detect() whenever you want (at whatever interval you want), and you don’t need to use videoSource/videoOutput either. You can use cudaFromNumpy() if your data is coming from a numpy array.

You also don’t need to use detectNet’s overlay if you have your own, you can call detectNet.Detect(img, overlay='none') and it will skip the overlay.
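For example, a rough sketch of that flow is below. The model name, snapshot path, and use of OpenCV to read the file are just placeholders for your own setup, and depending on your jetson-inference version you may also need to pass the image width/height to Detect() explicitly:

import cv2
import jetson.inference
import jetson.utils

# Load the detection network once at startup
# (model name is a placeholder, swap in your own custom model/labels here)
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# Grab one snapshot from your RAM scratch disk as a numpy array
frame = cv2.imread("/tmp/ramdisk/snapshot.jpg")    # BGR, uint8
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)     # detectNet expects RGB

# Copy it to GPU memory and run detection without the built-in overlay
img = jetson.utils.cudaFromNumpy(frame)
detections = net.Detect(img, overlay='none')

# Do whatever you like with the results (e.g. draw your own boxes in your PyQt UI)
for d in detections:
    print(d.ClassID, d.Confidence, d.Left, d.Top, d.Right, d.Bottom)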

You may want to avoid re-implementing detectNet yourself because in addition to the TensorRT code there is pre/post-processing code to convert the RGB images into the right format (i.e. NCHW 224x224 with pixel normalization applied) and to interpret/cluster the output bounding boxes.


Thanks a lot :))
