04_video_dec_trt with YOLOv3

Hello,

How do I use the 04_video_dec_trt example with YOLOv3 (trained with Darknet and converted to a caffemodel) instead of ResNet?

Thank you.

Hi,

1. Please modify the model file names first:

const char *GOOGLE_NET_DEPLOY_NAME =
             "../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.prototxt";
const char *GOOGLE_NET_MODEL_NAME =
             "../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.caffemodel";

2. Add your model definition to g_pModelNetAttr in common/algorithm/trt/trt_inference.h:

   struct {
        const int  classCnt;
        float      THRESHOLD[3];
        const char *INPUT_BLOB_NAME;
        const char *OUTPUT_BLOB_NAME;
        const char *OUTPUT_BBOX_NAME;
        const int  STRIDE;
        const int  WORKSPACE_SIZE;
        int        offsets[3];
        float      input_scale[3];
        float      bbox_output_scales[4];
        const int  ParseFunc_ID;
    } *g_pModelNetAttr, gModelNetAttr[4] = {
        {
            // Add your model
            ...
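
For reference, an entry for a YOLOv3-style model might look roughly like the sketch below. Every value is a placeholder; the blob names must match your converted prototxt, and the thresholds and stride depend on your network:

        {
            80,                                       // classCnt: e.g. COCO classes (placeholder)
            {0.5f, 0.5f, 0.5f},                       // THRESHOLD (placeholder)
            "data",                                   // INPUT_BLOB_NAME: take from your prototxt
            "yolo_conf",                              // OUTPUT_BLOB_NAME (placeholder)
            "yolo_bbox",                              // OUTPUT_BBOX_NAME (placeholder)
            32,                                       // STRIDE (placeholder)
            450 << 20,                                // WORKSPACE_SIZE
            {0, 0, 0},                                // offsets: per-channel mean subtraction
            {1.0f/255, 1.0f/255, 1.0f/255},           // input_scale: YOLO expects 0-1 input
            {1.0f, 1.0f, 1.0f, 1.0f},                 // bbox_output_scales
            0                                         // ParseFunc_ID: index of your parse routine
        },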

3. You will also need to change the output handling of doInference in common/algorithm/trt/trt_inference.cpp.
Please update output_cov_buf and output_bbox_buf based on the YOLO architecture if needed.

Then add the corresponding parser to generate the rectList.
You can find an example in our DeepStream sample:

/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/nvdsparsebbox_Yolo.cpp
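
For reference, below is a minimal, hypothetical sketch of such a parser. It assumes the network output has already been decoded into a flat array of detections laid out as [x, y, w, h, objectness, class scores...]; the actual decoding of YOLO grids and anchors is model-specific, so please follow the DeepStream parser above for the complete logic.

// Hypothetical parser sketch: converts a flat detection array into a rect list.
#include <vector>

struct BBox { float x, y, w, h; int classId; float score; };

static std::vector<BBox> parseYoloOutput(const float *out, int numDetections,
                                         int numClasses, float threshold)
{
    std::vector<BBox> rectList;
    const int step = 5 + numClasses;               // x, y, w, h, objectness, class scores
    for (int i = 0; i < numDetections; i++)
    {
        const float *det = out + i * step;
        float objectness = det[4];
        if (objectness < threshold)
            continue;

        // Pick the class with the highest combined score.
        int   bestClass = 0;
        float bestScore = 0.f;
        for (int c = 0; c < numClasses; c++)
        {
            float score = objectness * det[5 + c];
            if (score > bestScore) { bestScore = score; bestClass = c; }
        }
        if (bestScore < threshold)
            continue;

        // Convert center/size coordinates to top-left/size.
        BBox r;
        r.w = det[2];
        r.h = det[3];
        r.x = det[0] - r.w / 2.f;
        r.y = det[1] - r.h / 2.f;
        r.classId = bestClass;
        r.score = bestScore;
        rectList.push_back(r);
    }
    return rectList;
}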

Thanks.


Hello, @AastaLLL

Okay, I’ll try it.

I have a question.

For the Jetson Multimedia API 04_video_dec_trt sample, can you provide a version that uses an ONNX model instead of a caffemodel?

So far I have added the following to video_dec_trt_main.cpp:

#include "NvOnnxParser.h"
using namespace nvonnxparser;

It builds, but I have not been able to make progress beyond that. I have also followed the steps from your reply above.

Can you provide a sample?

Thank you.

Hi,

You will need to update common/algorithm/trt/trt_inference.cpp.
Please change the function below to use the ONNX parser instead:

void
TRT_Context::caffeToTRTModel(const string& deployfile, const string& modelfile)
{
    Int8EntropyCalibrator calibrator;
    IInt8Calibrator* int8Calibrator = &calibrator;
    // create API root class - must span the lifetime of the engine usage
    IBuilder *builder = createInferBuilder(*pLogger);
    INetworkDefinition *network = builder->createNetwork();

    // parse the caffe model to populate the network, then set the outputs
    ICaffeParser *parser = createCaffeParser();
    ...

An example of using the ONNX parser can be found in the sample below:

/usr/src/tensorrt/samples/sampleOnnxMNIST/sampleOnnxMNIST.cpp
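
For reference, here is a rough sketch of how the parsing block could look with the ONNX parser, loosely following sampleOnnxMNIST. It assumes the TensorRT 7.x API shipped with JetPack; the function name and workspace size are illustrative only, and error handling and INT8 calibration are omitted.

#include <string>
#include "NvInfer.h"
#include "NvOnnxParser.h"

using namespace nvinfer1;

ICudaEngine *onnxToTRTModel(const std::string &onnxFile, ILogger &logger)
{
    // create API root class - must span the lifetime of the engine usage
    IBuilder *builder = createInferBuilder(logger);

    // the ONNX parser requires an explicit-batch network definition
    const auto explicitBatch =
        1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    INetworkDefinition *network = builder->createNetworkV2(explicitBatch);

    // parse the ONNX model to populate the network
    nvonnxparser::IParser *parser = nvonnxparser::createParser(*network, logger);
    if (!parser->parseFromFile(onnxFile.c_str(),
                               static_cast<int>(ILogger::Severity::kWARNING)))
        return nullptr;

    // build the engine
    IBuilderConfig *config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(16 << 20);            // placeholder workspace size
    ICudaEngine *engine = builder->buildEngineWithConfig(*network, *config);

    parser->destroy();
    config->destroy();
    network->destroy();
    builder->destroy();
    return engine;
}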

Thanks.


Hello,

It says “Converts the image from YUV to RGB format and saves it in a file”.

As far as I can tell, the only output of this sample is a txt file containing the bbox coordinate information.
Can I save the result as an image instead?
I’m not sure exactly what the sentence underlined in red means.
Can you explain its meaning?

Thank you.

Hello, @AastaLLL

For 04_video_dec_trt, can I save output images in addition to the result*.txt file described in the example?

Thank you.