TensorRT Plugin layers

Hi,

Difficult to know where on the forums to post this…

I have tried the face-recognition sample from GitHub, and it all works fine!

I used DIGITS to create a large training set (from VGG_Face) and tested the resulting model in DIGITS, where it works as expected. I then deployed it and ran the code against the above sample, but it failed in tensorNet.cpp at:

for (auto& s : outputs) network->markOutput(*blobNameToTensor->find(s.c_str()));
The blob name was bboxes_fd - I presume this fails for all of the blobs?

How was the recognition model trained? How do you add the IPlugin layers and blob fields to the network? When I try, it complains about an unknown layer type, IPlugin.
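For context, the kind of entry I tried adding looks like the sketch below (all layer and blob names here are placeholders, not the sample's actual ones). My understanding now is that such layers belong only in the deploy prototxt that TensorRT parses, since Caffe/DIGITS itself has no IPlugin type and the training prototxt must use standard layers only:

```
layer {
  name: "bboxMerge"     # placeholder name
  type: "IPlugin"       # handled by the PluginFactory at TensorRT parse time
  bottom: "coverage"    # placeholder blob names
  bottom: "bboxes"
  top: "bboxes-merged"
}
```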

I am trying to write a plugin layer, as per the above. That is why I ask: how did you train the model, so I can get my dataset working with the sample code?

I used nvCaffe with GoogleNet.

Thanks

Hi,

Here are some related topics for your reference:
https://devtalk.nvidia.com/default/topic/1023699/jetson-tx2/questions-about-face-recongnition/
https://devtalk.nvidia.com/default/topic/1007313/how-to-build-the-objection-detection-framework-ssd-with-tensorrt-on-tx2-/

Thanks.

Hi,

Thanks for the reference links - very helpful. I did think it might be a merged net, but I wasn't sure that was possible!

Did you use the facenet that comes with the jetson-inference sample, or did you retrain it with FDDB? I have created my classification network and was hoping to just merge it with the facenet detection network.

Thanks

Hi,

YES. facenet is trained with FDDB.
Try to follow the steps shared in comment #2 and you should be able to get a merged net.

Thanks.

Yes, working through this now. Creating a detection network is new to us, so we are picking our way through it.

I presume the training was done in DIGITS? And, again, I presume the FDDB annotations have to be converted to KITTI format before training can start?

Thanks

Hi,

We have documented our detectNet training process here:
[url]https://github.com/dusty-nv/jetson-inference#locating-object-coordinates-using-detectnet[/url]

Thanks.

Hi,

I have tried the face-recognition sample from GitHub. However, I hit errors like "'vector' is not a member of 'std'" with JetPack 3.3 on a Jetson TX2. Is that caused by the different JetPack version, or something else? Must I use JetPack 3.1?

Thanks.

Hi,

Have you included the vector header in your source code?

#include <vector>

Thanks.

Hi,

Which file do you mean? I just tried the sample as the GitHub README describes:
$ sudo apt-get install git cmake
$ git clone https://github.com/AastaNV/Face-Recognition
$ cd Face-Recognition
$ mkdir build
$ cd build
$ cmake ..
$ make

And the error is as below:

/home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:5:45: error: ‘std::vector<bboxProfile*> RecognitionLayer::bboxTable’ is not a static data member of ‘class RecognitionLayer’
std::vector<bboxProfile*> RecognitionLayer::bboxTable;
^
/home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:6:44: error: ‘std::vector<tagProfile*> RecognitionLayer::tagTable’ is not a static data member of ‘class RecognitionLayer’
std::vector<tagProfile*> RecognitionLayer::tagTable;
^
In file included from /usr/include/c++/5/cassert:43:0,
from /home/nvidia/liuningjie/Face-Recognition/build/aarch64/include/pluginImplement.h:4,
from /home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:1:
/home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp: In member function ‘virtual nvinfer1::IPlugin* PluginFactory::createPlugin(const char*, const nvinfer1::Weights*, int)’:
/home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:24:16: error: ‘mBboxMergeLayer’ was not declared in this scope
assert(mBboxMergeLayer.get() == nullptr);
^
/home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:25:27: error: ‘unique_ptr’ is not a member of ‘std’
mBboxMergeLayer = std::unique_ptr<BboxMergeLayer>(new BboxMergeLayer());
^
/home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:25:57: error: expected primary-expression before ‘>’ token
mBboxMergeLayer = std::unique_ptr<BboxMergeLayer>(new BboxMergeLayer());
^
In file included from /usr/include/c++/5/cassert:43:0,
from /home/nvidia/liuningjie/Face-Recognition/build/aarch64/include/pluginImplement.h:4,
from /home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:1:
/home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:30:16: error: ‘mDataRoiLayer’ was not declared in this scope
assert(mDataRoiLayer.get() == nullptr);
^
/home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:31:25: error: ‘unique_ptr’ is not a member of ‘std’
mDataRoiLayer = std::unique_ptr<DataRoiLayer>(new DataRoiLayer());
^
/home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:31:53: error: expected primary-expression before ‘>’ token
mDataRoiLayer = std::unique_ptr<DataRoiLayer>(new DataRoiLayer());
^
In file included from /usr/include/c++/5/cassert:43:0,
from /home/nvidia/liuningjie/Face-Recognition/build/aarch64/include/pluginImplement.h:4,
from /home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:1:
/home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:36:16: error: ‘mSelectLayer’ was not declared in this scope
assert(mSelectLayer.get() == nullptr);
^
/home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:37:24: error: ‘unique_ptr’ is not a member of ‘std’
mSelectLayer = std::unique_ptr<RecognitionLayer>(new RecognitionLayer(FunctionType::SELECT));
^
/home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:37:56: error: expected primary-expression before ‘>’ token
mSelectLayer = std::unique_ptr<RecognitionLayer>(new RecognitionLayer(FunctionType::SELECT));
^
In file included from /usr/include/c++/5/cassert:43:0,
from /home/nvidia/liuningjie/Face-Recognition/build/aarch64/include/pluginImplement.h:4,
from /home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:1:
/home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:42:16: error: ‘mSummaryLayer’ was not declared in this scope
assert(mSummaryLayer.get() == nullptr);
^
/home/nvidia/liuningjie/Face-Recognition/pluginImplement.cpp:43:25: error: ‘unique_ptr’ is not a member of ‘std’
mSummaryLayer = std::unique_ptr<RecognitionLayer>(new RecognitionLayer(FunctionType::SUMMARY));

Hi,

May I know your environment?
Have you updated any C++ library or the compiler?

Thanks.

Hi,

My environment is a Jetson TX2 with JetPack 3.3. The gcc/g++ version is gcc (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609.

Thanks.

Hi,

Okay. We will try to reproduce this issue internally first.
Thanks.

Hi,

Here is the change for JetPack3.3:

diff --git a/face-recognition/face-recognition.cpp b/face-recognition/face-recognition.cpp
index 33b5dab..da6502a 100644
--- a/face-recognition/face-recognition.cpp
+++ b/face-recognition/face-recognition.cpp
@@ -240,7 +240,7 @@ int main(int argc, char** argv)
         if( display != NULL )
         {
             char str[256];
-            sprintf(str, "TensorRT build %x | %4.1f FPS", NV_GIE_VERSION, display->GetFPS());
+            sprintf(str, "TensorRT build %x | %4.1f FPS", NV_TENSORRT_MAJOR, display->GetFPS());
             display->SetTitle(str);
         }

diff --git a/pluginImplement.h b/pluginImplement.h
index 8b78a0a..14a39a2 100644
--- a/pluginImplement.h
+++ b/pluginImplement.h
@@ -5,6 +5,8 @@
 #include <iostream>
 #include <cudnn.h>
 #include <cstring>
+#include <vector>
+#include <memory>

 #include "NvCaffeParser.h"
 #include "NvInferPlugin.h"

Thanks.

Hi,

I have modified the face-recognition sample as above, and it compiles well. But why is the image from the Jetson TX2 camera inverted (upside down)? And what if I want to recognize an unknown person using this sample?
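On the inverted image, my guess is camera orientation: the TX2 onboard camera module is mounted upside down, and capture pipelines typically compensate with nvvidconv's flip-method property (2 = rotate 180°). A pipeline along these lines could confirm it; treat the caps and sink here as assumptions for my setup, not the sample's actual pipeline:

```
gst-launch-1.0 nvcamerasrc ! \
  'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)30/1' ! \
  nvvidconv flip-method=2 ! 'video/x-raw, format=(string)I420' ! \
  xvimagesink
```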

Thanks.

Hi,

You will need to re-train the detection model.
This tutorial can give you some information:
[url]https://github.com/dusty-nv/jetson-inference#locating-object-coordinates-using-detectnet[/url]

Thanks.