The Face-Recognition project does not work on x86_64

Hi. The project I have been using to study custom (plugin) layers in TensorRT does not work on an x86_64 machine: GitHub - AastaNV/Face-Recognition: Demonstrate Plugin API for TensorRT2.1

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.111                Driver Version: 384.111                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 00000000:01:00.0  On |                  N/A |
| 24%   37C    P8    10W / 180W |   1176MiB /  8105MiB |      2%      Default |
+-------------------------------+----------------------+----------------------+

I’m using TensorRT 3.0.4: https://developer.nvidia.com/compute/machine-learning/tensorrt/3.0/ga/TensorRT-3.0.4.Ubuntu-16.04.3.x86_64.cuda-8.0.cudnn7.0-tar.gz

When I try to run the compiled binaries, I see the following:

cudnnEngine.cpp (56) - Cuda Error in initializeCommonContext: 1
cudnnEngine.cpp (56) - Cuda Error in initializeCommonContext: 1
face-recognition: Face-Recognition/tensorNet.cpp:38: void TensorNet::caffeToTRTModel(const string&, const string&, const std::vector<std::__cxx11::basic_string<char> >&, unsigned int): Assertion `engine' failed.
[1]    10606 abort (core dumped)  ./face-recognition

I think the project is optimized and tuned for the Jetson architecture.
For x86_64 you could use Caffe and its corresponding tools and libraries instead, in my opinion.
Reference: Questions about Face-Recongnition - Jetson TX2 - NVIDIA Developer Forums

Hi,

This sample is designed for the Jetson platform.

If you want to execute it on an x86 Linux environment, please correct the CMake configuration here:
Face-Recognition/CMakeLists.txt at master · AastaNV/Face-Recognition · GitHub

GPU architecture can be found from this page: CUDA GPUs - Compute Capability | NVIDIA Developer
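For example, for a GeForce GTX 1080 (compute capability 6.1) the NVCC flags would target sm_61 instead of the Jetson's Tegra architectures. A minimal sketch (the `CUDA_NVCC_FLAGS` variable follows standard FindCUDA usage; where exactly it goes depends on the project's CMakeLists.txt):

```cmake
# Sketch: target a desktop Pascal GPU (GTX 1080, compute capability 6.1)
# instead of the Jetson architectures (sm_53 on TX1, sm_62 on TX2).
set(CUDA_NVCC_FLAGS
    ${CUDA_NVCC_FLAGS};
    -O3
    -gencode arch=compute_61,code=sm_61)
```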

Thanks.

Guys, the problem is not in the CMake file; I know how to edit it. The problem is that even with the third version of TensorRT, the samples located in /usr/src/tensorrt do not work!
I see the same error. It all started when I built an SSD-300 with custom layers and hit the error above while trying to run the code on x86_64. Notably, TensorRT 2 works both with the FaceDetection project and with the sample located in /usr/src/tensorrt. At the same time, everything works fine on the Jetson TX2 with TensorRT 3.

I do not want to sound rude, but the CMakeLists files in these projects, https://github.com/dusty-nv/jetson-inference/blob/master/CMakeLists.txt and https://github.com/AastaNV/Face-Recognition/blob/master/CMakeLists.txt, are pretty awful :(

I’m not tied to a particular GPU model, and my own project builds and runs fine both on the Jetson TX1/TX2 and on x86_64. But TensorRT 3 does not work on x86_64 in any scenario. Here is my CMakeLists.txt:

cmake_minimum_required(VERSION 3.5)
SET(PROJECT_NAME TensorRT_SSD)
SET( CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_SOURCE_DIR}/)
PROJECT(${PROJECT_NAME})
SET(MODULENAME ${PROJECT_NAME}_core)

SET(CMAKE_CXX_STANDARD 14)

ADD_DEFINITIONS(-DOPENCV)
ADD_DEFINITIONS(-DGPU)

FIND_PACKAGE(OpenCV REQUIRED)
FIND_PACKAGE(CUDA REQUIRED)
INCLUDE_DIRECTORIES("${CUDA_INCLUDE_DIRS}")

SET(CUDA_SOURCE_FILES
        pluginImplement.h
        pluginImplement.cpp
        tensorNet.cpp
        tensorNet.h
        util/cuda/mathFunctions.h
        util/cuda/mathFunctions.cu
        util/cuda/mathFunctions.cpp
        util/cuda/cudaUtility.h
        util/cuda/cudaMappedMemory.h
        util/cuda/kernel.cu)

SET(TENSORRT_CUDA_LIBNAME ${MODULENAME}_cuda CACHE INTERNAL "${MODULENAME}: cuda library" FORCE)

CUDA_ADD_LIBRARY(
        ${TENSORRT_CUDA_LIBNAME} SHARED
        ${CUDA_SOURCE_FILES})

add_executable(ssd_300 ssd_300/ssd_300.cpp)

TARGET_LINK_LIBRARIES(
        ssd_300
        ${OpenCV_LIBS}
        ${TENSORRT_CUDA_LIBNAME}
        ${CUDA_LIBRARIES}
        nvcaffe_parser
        nvinfer
        nvinfer_plugin
        glog)

Hi,

Face-Recognition is not an official sample; it was designed to demonstrate the Plugin API for TensorRT 2.1 on Jetson.
If you want to use it on another environment or platform, an update is required.

It’s recommended to use the native samples contained in the TensorRT package, which free you from cross-platform modification.
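For reference, with the tar package the native samples can be built and run roughly like this (a sketch; /usr/src/tensorrt is the location mentioned earlier in this thread, and the sample name is just one of the bundled examples — adjust paths to your install):

```shell
# Build the native TensorRT samples and run one of them.
# Binaries are placed in the package's bin/ directory next to samples/.
cd /usr/src/tensorrt/samples
make
cd ../bin
./sample_mnist
```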
Thanks.

AastaLLL, what should I do about the samples that come with TensorRT 3.0.4? They give the same error when I try to run them on x86_64. This really matters to me: large projects such as SSD or RetinaNet take a very long time to run on the mobile Tegra.

I ran into the same problem while trying to run the jetson-inference samples on x86_64. I changed the CMakeLists.txt file to target the GTX 1050 running in this system:

  -gencode=arch=compute_61,code=sm_61 
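Both fields of that flag come from the device's compute capability: `compute_XX` names the virtual (PTX) architecture and `sm_XX` the real one. A tiny shell helper, purely illustrative, that formats the flag from a capability value (the capability itself still has to be looked up on NVIDIA's "CUDA GPUs" page):

```shell
# Format an NVCC -gencode flag from a compute capability value,
# e.g. "61" for a GTX 1050/1080 (Pascal). This only builds the flag string.
gencode_flag() {
    cc="$1"
    printf -- '-gencode=arch=compute_%s,code=sm_%s\n' "$cc" "$cc"
}

gencode_flag 61   # prints: -gencode=arch=compute_61,code=sm_61
```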

Still see this error:

[GIE] cudnnEngine.cpp (56) - Cuda Error in initializeCommonContext: 1

./imagenet-console orange_0.jpg output_0.jpg
imagenet-console
  args (3):  0 [./imagenet-console]  1 [orange_0.jpg]  2 [output_0.jpg]  


imageNet -- loading classification network model from:
         -- prototxt     networks/googlenet.prototxt
         -- model        networks/bvlc_googlenet.caffemodel
         -- class_labels networks/ilsvrc12_synset_words.txt
         -- input_blob   'data'
         -- output_blob  'prob'
         -- batch_size   2

[GIE]  TensorRT version 3.0, build 3002
[GIE]  attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE]  cache file not found, profiling network model
[GIE]  platform does not have FP16 support.
[GIE]  loading networks/googlenet.prototxt networks/bvlc_googlenet.caffemodel
[GIE]  retrieved output tensor 'prob'
[GIE]  configuring CUDA engine
[GIE]  building CUDA engine
[GIE]  cudnnEngine.cpp (56) - Cuda Error in initializeCommonContext: 1
[GIE]  cudnnEngine.cpp (56) - Cuda Error in initializeCommonContext: 1
[GIE]  failed to build CUDA engine
failed to load networks/bvlc_googlenet.caffemodel
failed to load networks/bvlc_googlenet.caffemodel
imageNet -- failed to initialize.
imagenet-console:   failed to initialize imageNet

It seems you are using [GIE] TensorRT version 3.0, build 3002.
Somewhere in the forum threads I saw that in some cases a lower version of TensorRT works when 3.0 doesn’t.

Hi,

If you are interested in TensorRT 3.0, it’s recommended to use jetson-inference, which has TensorRT 3.0 support.

Thanks.

Maybe

export CUDA_ARCH="61 61"

will resolve the “[GIE] cudnnEngine.cpp (56) - Cuda Error in initializeCommonContext: 1” issue?
Reference: https://devtalk.nvidia.com/default/topic/1010200/gpu-accelerated-libraries/tensorrt-error-could-not-build-engine/post/5219278/#5219278

Well, unfortunately this didn’t solve the problem:

export CUDA_ARCH="61 61"

The issue was solved by downgrading to TensorRT 2.1

Hi, mechadeck

It looks like the error comes from CUDA context initialization.

The most common cause is incompatible packages.
Could you check whether you have installed the recommended versions of CUDA, cuDNN, and TensorRT?
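For example, the installed versions can be checked like this (typical Ubuntu paths for a deb install; for a tar install the cuDNN header usually lives under /usr/local/cuda/include instead):

```shell
# Report installed CUDA, cuDNN, and TensorRT versions.
nvcc --version                                       # CUDA toolkit version
grep -A 2 'define CUDNN_MAJOR' /usr/include/cudnn.h  # cuDNN version macros
dpkg -l | grep -i tensorrt                           # TensorRT packages
```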

Thanks.