deepstream_reference_apps: trt-yolo-app: Windows Build

Hello,

I have built and deployed the trt-yolo-app for the TX2 from this repo:

https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/yolo

Has anyone tried compiling this code for Windows? I tried and ran into some strange errors that I believe are related to the Visual Studio 2013 (VS120 toolset) compiler I’m using.

constexpr isn’t supported by the compiler, along with a few other similar C++11 features.

My question is, should it be possible to get this code to build in theory?

Thanks,
Jonny

Ok so here is an update:

I have been able to port the portion of the trt-yolo-app that reads in the YOLO config file and weights and then generates a TRT engine. It can then serialise the engine and write it to disk.

The problem now is reading the plan file back in and de-serialising it to create an nvinfer1::ICudaEngine* object. I also still have difficulties when I generate the engine and use the nvinfer1::ICudaEngine* object directly, instead of dumping it to disk and reading it back in.

My error is as follows:

Exception thrown at 0x00007FF9B34795E6 (nvinfer.dll) in TensorRT_Yolo.exe: 0xC0000005: Access violation reading location 0x000001EA7EDDBD51.

Next I am going to check out a fresh copy of this repo:

https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/yolo

and try to build the trt-yolo-app using Visual Studio 2017.

Hi,

Did you resolve this issue yet? I encountered a similar error (Exception thrown at 0x00007FF828C282BB (nvcuda.dll) in Project2.exe: 0xC0000005: Access violation reading location 0x000000B828DA0000.).

I ran into this when I tried to create a CUDA engine for my model:

auto engine = builder->buildCudaEngine(*network);

I’d really appreciate any suggestions! Thank you!

Hello he44,

Unfortunately we have made no progress here; our solution in the end was to switch back to the Linux stack of CUDA, cuDNN, and TensorRT.

I had some replies from NVIDIA here:

But they weren’t much help. This looks to be a bug at their end, as the exact same code works with the Linux version of the TensorRT API.

I’m not sure how we can dig deeper ourselves to try and debug the issue.

Kind Regards,
Jonny

I am suffering from a very similar problem. I’ve been working on it for days and am blocked. :-(

I can immediately de-serialize a serialized buffer, but if I write it to disk and read it back (in another program), I get the access violation during the deserializeCudaEngine call.

I suspect this is a simple alignment issue, but I cannot check. It is a bit frustrating because there is no error code, no debug build, and no versioning of the documentation or samples.

(i.e. there seem to be different mechanisms in different versions of the API, like IPlugin, IPluginExt, IPluginV2 …, but the samples are old or mix the schemes they use. The documentation is not versioned at all, so I assume it describes the latest way?)

Could I ask that the Caffe example be split into two apps: one that builds an engine and one that loads an engine and runs inference? For completeness, add a custom layer like PReLU (as in the PNet Caffe model for MTCNN). This would serve as a good regression test for Windows!

I am a bit new to this, and it may be that there is a fundamental type requirement I am not respecting (reading a float into an int, say) due to my lack of experience.

Greetings and salutations!

First off, kudos to jmirza for the cleaned-up code (https://github.com/mj8ac/trt-yolo-app_win64). I had actually migrated trt-yolo-app from https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/restructure to Windows before I came across this thread, and I hit the same “…complains that the plan file TRT version is 0.0.0 when it is expecting 5.x.x…” error that jmirza did. Both repos had the same blunder, which turns out to be I/O-related. Here are the fixes:

std::ifstream cache(planFilePath); --> std::ifstream cache(planFilePath, std::ios::binary | std::ios::in);

in loadTRTEngine and:

outFile.open(m_EnginePath); --> outFile.open(m_EnginePath, std::ios::binary | std::ios::out);

in writePlanFileToDisk
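If it helps anyone verify why those two one-line changes matter, here is a minimal, self-contained sketch (plain C++, no TensorRT needed). A serialized plan is raw bytes and will contain values like 0x0A and 0x0D; on Windows, a stream opened in the default text mode translates those bytes (LF ↔ CRLF), silently corrupting the plan, while std::ios::binary on both sides makes the round trip byte-exact on every platform. The roundTrip helper and the plan.bin path are mine, purely for illustration.

```cpp
#include <cstdint>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Write a byte buffer to disk and read it back, both in binary mode.
// With the default text mode on Windows, 0x0A bytes would be rewritten
// as 0x0D 0x0A on write and translated back inconsistently on read,
// producing a plan that deserializes as garbage (or crashes).
std::vector<uint8_t> roundTrip(const std::vector<uint8_t>& plan,
                               const std::string& path)
{
    std::ofstream out(path, std::ios::binary | std::ios::out);
    out.write(reinterpret_cast<const char*>(plan.data()),
              static_cast<std::streamsize>(plan.size()));
    out.close();

    std::ifstream in(path, std::ios::binary | std::ios::in);
    return std::vector<uint8_t>((std::istreambuf_iterator<char>(in)),
                                std::istreambuf_iterator<char>());
}
```

Feeding it a buffer containing 0x0A and 0x0D bytes returns the identical bytes, which is exactly the property the plan file loader depends on.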

For kINT8 though, changing:

std::ifstream input(m_CalibTableFilePath, std::ios::binary); --> std::ifstream input(m_CalibTableFilePath, std::ios::binary | std::ios::in);

won’t do any good, as one may be required to calibrate from a dataset, as pointed out here: https://devtalk.nvidia.com/default/topic/1057147/tensorrt/tensorrt-yolo-int8-on-gtx-1080ti/

Hi,

I have a problem like this,

ERROR: 000007FEC7FAB3D0yolo_83 is uniformly zero; network calibration failed.
ERROR: c:\p4sw\sw\gpgpu\MachineLearning\DIT\release\5.0\builder\cudnnBuilder2.cpp (1508) - Misc Error in nvinfer1::builder::buildEngine: -1 (Could not find tensor yolo_83 in tensorScales.)
ERROR: c:\p4sw\sw\gpgpu\MachineLearning\DIT\release\5.0\builder\cudnnBuilder2.cpp (1508) - Misc Error in nvinfer1::builder::buildEngine: -1 (Could not find tensor yolo_83 in tensorScales.)

My platform is:
Win7, CUDA 10.0, cuDNN 7.3, VS2015, TensorRT 5.0

Has anyone encountered this problem? I do not know how to solve it.

Thanks,
zhaorong

Perhaps an upgrade to CUDA 10.1, cuDNN 7.4, TensorRT 5.1 would help? I encountered no such issue with those specifications.

Hello joestump,

Thanks for your advice. When I changed the platform to CUDA 10.1, cuDNN 7.4, and TensorRT 5.1, the error went away,
but there is a warning like this:

New calibration table will be created to build the engine
WARNING: Tensor yolo_107 is uniformly zero; network calibration failed
WARNING: Tensor yolo_83 is uniformly zero; network calibration failed.
WARNING: Tensor yolo_95 is uniformly zero; network calibration failed.

and the detection result is wrong.

Must I switch to Win10?

Thanks,
zhaorong

I migrated to a different machine with the latest CUDA (10.1.243), cuDNN (7.6.5), and TensorRT (6.0.1.5). I still can’t seem to replicate your issue using the trt-yolo-app. I was on Windows 10 all the while, though…

All,

I believe my issue all along was down to the BIOS!! The motherboard we were using was fairly recent, and we decided to check for a BIOS release due to other unexplained instability.

Turns out a BIOS release was available that fixed “memory issues” and “compatibility issues”. Once I updated the BIOS, the instability disappeared. Then, applying the changes suggested by joestump, I rebuilt the plan file, and this got rid of the version complaint. I then tried loading the plan file and running inference on some images, and it worked. So it looks like the BIOS update fixed this issue:

“Exception thrown at 0x00007FF9B34795E6 (nvinfer.dll) in TensorRT_Yolo.exe: 0xC0000005: Access violation reading location 0x000001EA7EDDBD51.”

Boom bang it sprung into life!!

Happy days.

Try this: https://github.com/enazoe/yolo-tensorrt