Difficult to know where on the forums to post this…
I have tried the face-recognition sample on GitHub, and it all works fine!
I used DIGITS to create a large training set (from VGG_Face) and tested it in DIGITS, where it works as expected. I then deployed it, ran the code against the above sample, and it failed in tensorNet.cpp at:
for (auto& s : outputs) network->markOutput(*blobNameToTensor->find(s.c_str()));
The blob name was bboxes_fd - I presume this fails for all of the blobs?
How was the recognition model trained? How do you add the IPlugin layer and blob fields to the network? When I try, it complains about an unknown layer type IPlugin.
I am trying to write a plugin layer, as per the above - that's why I'm asking how you trained it, so I can get my dataset working with the sample code.
I used nvCaffe with GoogLeNet.