How to build jetson-inference on a host PC?

I have some code that works on a TX2 and is based on jetson-inference (detectNet inference). I want to compile and run it on my host PC (Ubuntu 14.04). I have installed TensorRT on the PC.
@AastaLLL
Can I compile it on the host PC without modifying my code?
If not, how should I modify my code so that it runs on the host PC?

Hi,

Yes. Users have successfully compiled jetson-inference on x86 machines before.
Please check this topic:
https://devtalk.nvidia.com/default/topic/1003230/jetson-inference-samples-does-t-work/

Please remember to add your GPU architecture to the CMake file.
Architecture information can be found here.
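If you are unsure of your GPU architecture, one way to check is to query it through the CUDA runtime. Below is a minimal standalone sketch (not part of jetson-inference; the file name query_arch.cu is just an example) that prints the compute capability of each visible device. The major.minor pair it reports maps directly onto the -gencode flags, e.g. 6.1 -> arch=compute_61,code=sm_61. Compile with nvcc query_arch.cu -o query_arch.

// query_arch.cu - print the compute capability of each visible GPU
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
	int count = 0;
	if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
	{
		printf("no CUDA device found\n");
		return 1;
	}

	for (int i = 0; i < count; ++i)
	{
		cudaDeviceProp prop;
		cudaGetDeviceProperties(&prop, i);
		printf("device %d: %s, compute capability %d.%d\n",
		       i, prop.name, prop.major, prop.minor);
	}
	return 0;
}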

Thanks.

Thank you AastaLLL. I have successfully compiled jetson-inference and run my code on my host PC (Ubuntu 14.04, x86_64) with the steps below:

  1. I commented out lines 24-25 in CMakePrebuild.sh, since they only apply to the aarch64 platform:
# sudo rm /usr/lib/aarch64-linux-gnu/libGL.so
# sudo ln -s /usr/lib/aarch64-linux-gnu/tegra/libGL.so /usr/lib/aarch64-linux-gnu/libGL.so
  2. My GPU is a GeForce GTX 1050 (compute capability 6.1), so I appended one -gencode line in CMakeLists.txt:
set(
	CUDA_NVCC_FLAGS
	${CUDA_NVCC_FLAGS};
	-O3
	-gencode arch=compute_53,code=sm_53
	-gencode arch=compute_62,code=sm_62
	-gencode arch=compute_61,code=sm_61  # the line I added
)
  3. I added a line in CMakeLists.txt so that the CUDA code is compiled as C++11 (a small sanity check is sketched after this list):
set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};--disable-warnings;--ptxas-options=-v;-use_fast_math;-lineinfo;-std=c++11)
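A quick way to confirm the -std=c++11 flag took effect is to compile a .cu file that uses a C++11 feature. This hypothetical check.cu builds with the flag but is rejected by nvcc's default dialect on older toolkits:

// check.cu - host code using C++11 features (initializer list,
// auto, range-based for), so it needs -std=c++11 on older nvcc
#include <cstdio>
#include <vector>

int main()
{
	std::vector<int> values = {1, 2, 3};
	auto sum = 0;
	for (int v : values)
		sum += v;
	printf("sum = %d\n", sum);
	return 0;
}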

After that, I compiled jetson-inference with make && make install, then compiled my own code and ran it. It works.
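For anyone repeating this, my code follows the usual shape of the bundled detectNet samples. The sketch below is illustrative only; the exact Create()/Detect() signatures vary between jetson-inference revisions, so check the headers of the revision you build:

// sketch of a minimal detectNet client, modeled on the samples
#include "detectNet.h"
#include <cstdio>

int main(int argc, char** argv)
{
	// Create() parses the network selection from the command line,
	// builds/loads the TensorRT engine, and returns NULL on failure
	detectNet* net = detectNet::Create(argc, argv);
	if (!net)
	{
		printf("failed to load detectNet\n");
		return 1;
	}

	// ... run net->Detect() on your image buffers here ...

	delete net;
	return 0;
}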