'Failure to connect to JETSON', samples not compiling on Nsight.

Hello, the CUDA 6.5 samples provided with JetPack are not compiling on the Jetson. These are the errors I get when I try to build them.
I am cross-compiling these samples using the Nsight IDE.
Can you tell me what the best course of action would be?

Screenshot from 2015-09-17 14_06_32.png

Screenshot from 2015-09-17 14_05_47.png

This may or may not be the issue, but beware that remote connections like ssh sometimes execute code on one machine and display it on another. For example, if you run an OpenGL or OpenGL ES application on the Jetson via “ssh -Y”, the core code runs on the Jetson but displays on the host you connect from; as a side-effect, OpenGL or OpenGL ES support is required on the remote display host rather than on the Jetson. Since CUDA applications use a GPU, it is possible that these cases also expect the CUDA code to run on the host you connect from, ignoring the Jetson for the GPU portion.

I mention this because it seems from the above screenshots that ssh is involved. Can you try executing what you need directly (from the Jetson’s graphical display) and see what changes, or whether the issue remains?
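To see which kind of session you are in before launching a sample, a quick check like this can help (a sketch; `SSH_CONNECTION` is set by sshd for remote logins, and `DISPLAY` shows where GL output would go):

```shell
#!/bin/sh
# Report whether this shell is a local session or an ssh login.
session_type() {
  if [ -n "$SSH_CONNECTION" ]; then
    echo "remote"    # GL/CUDA display may target the client host
  else
    echo "local"
  fi
}
echo "session: $(session_type)"
echo "DISPLAY=${DISPLAY:-unset}"
```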

Hello, yes, I read about it in a different forum. So I logged into the board using the default credentials and ran the samples natively. They seem to work fine.
I changed the settings in the Nsight IDE to ensure that it runs the programs on the Jetson through ARM cross-compilation rather than locally on the host. I believe this setting is under ‘Run Configurations…’
If you need any other information regarding my system I will be happy to share it.
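One quick sanity check before digging further into the Nsight settings is to verify that the cross toolchain is visible on the host (a sketch; the tool names here match a typical CUDA 6.5 ARMv7 setup, and nvcc may live in /usr/local/cuda-6.5/bin rather than on PATH):

```shell
#!/bin/sh
# Check that the cross-compile tools Nsight needs are on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: missing"
  fi
}
check_tool arm-linux-gnueabihf-g++   # ARM hard-float cross compiler
check_tool nvcc                      # CUDA compiler driver
```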

The best way is to write a Makefile and simply type make. So fluent and powerful.

# Location of the CUDA Toolkit
CUDA_PATH ?= /usr/local/cuda-6.5
OS_SIZE    = 32
OS_ARCH    = $(shell uname -m)

ifeq ("$(OS_ARCH)","x86_64")
    TARGET_FS  = /home/rock/target_fs
else
    TARGET_FS = 
endif

ARCH_FLAGS = -target-cpu-arch ARM
ARMv7      = 1
dbg        = 1
OBJ        = $(addsuffix .o,$(basename $(shell ls *.cpp *.cu)))
TARGET     = open_gpu

# Common binaries
GCC ?= arm-linux-gnueabihf-g++
NVCC := $(CUDA_PATH)/bin/nvcc -ccbin $(GCC)

# internal flags
NVCCFLAGS   := -m${OS_SIZE} ${ARCH_FLAGS}
CCFLAGS     :=
LDFLAGS     := -nostdlib

# Extra user flags
EXTRA_NVCCFLAGS   ?=
EXTRA_LDFLAGS     ?=
EXTRA_CCFLAGS     ?=

# OS-specific build flags
override abi := gnueabihf
LDFLAGS += --dynamic-linker=/lib/ld-linux-armhf.so.3
CCFLAGS += -mfloat-abi=hard

GCCVERSIONLTEQ46 := $(shell expr `$(GCC) -dumpversion` \<= 4.6)
ifeq ($(GCCVERSIONLTEQ46),1)
CCFLAGS += --sysroot=$(TARGET_FS)
endif
LDFLAGS += --sysroot=$(TARGET_FS)
LDFLAGS += -rpath-link=$(TARGET_FS)/lib
LDFLAGS += -rpath-link=$(TARGET_FS)/lib/arm-linux-$(abi)
LDFLAGS += -rpath-link=$(TARGET_FS)/usr/lib
LDFLAGS += -rpath-link=$(TARGET_FS)/usr/lib/arm-linux-$(abi)


# Debug build flags
ifeq ($(dbg),1)
      NVCCFLAGS += -g -G
endif

ALL_CCFLAGS := -DLINUX
ALL_CCFLAGS += $(NVCCFLAGS)
ALL_CCFLAGS += $(EXTRA_NVCCFLAGS)
ALL_CCFLAGS += $(addprefix -Xcompiler ,$(CCFLAGS))
ALL_CCFLAGS += $(addprefix -Xcompiler ,$(EXTRA_CCFLAGS))

ALL_LDFLAGS := 
ALL_LDFLAGS += $(ALL_CCFLAGS)
ALL_LDFLAGS += $(addprefix -Xlinker ,$(LDFLAGS))
ALL_LDFLAGS += $(addprefix -Xlinker ,$(EXTRA_LDFLAGS))

# Common includes and paths for CUDA
INCLUDES  := -I$(TARGET_FS)/usr/include
INCLUDES  += -I./
LIBRARIES :=

################################################################################
# Makefile include to help find GL Libraries
# OpenGL specific libraries
LIBRARIES += -L$(TARGET_FS)/lib
LIBRARIES += -L$(TARGET_FS)/lib/arm-linux-gnueabihf
LIBRARIES += -L$(TARGET_FS)/usr/lib

LIBRARIES += -lc
LIBRARIES += -lcuda
LIBRARIES += -lcudart
LIBRARIES += -lpthread
LIBRARIES += -lopencv_highgui
LIBRARIES += -lopencv_core
LIBRARIES += -lopencv_imgproc
LIBRARIES += -lopencv_gpu
LIBRARIES += -lnppi
LIBRARIES += -lnppc
LIBRARIES += -lnpps
LIBRARIES += -lcufft

SMS ?= 32

ifeq ($(GENCODE_FLAGS),)
# Generate SASS code for each SM architecture listed in $(SMS)
$(foreach sm,$(SMS),$(eval GENCODE_FLAGS += -gencode arch=compute_$(sm),code=sm_$(sm)))
endif
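# With the default SMS=32 the loop above expands to:
#   GENCODE_FLAGS = -gencode arch=compute_32,code=sm_32
# which targets the Tegra K1 (sm_32) GPU on the Jetson TK1.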

################################################################################

# Target rules
all: build

build: $(TARGET)

%.o:%.cpp
	$(EXEC) $(NVCC) $(INCLUDES) $(ALL_CCFLAGS) $(GENCODE_FLAGS) -o $@ -c $<

%.o:%.cu
	$(EXEC) $(NVCC) $(INCLUDES) $(ALL_CCFLAGS) $(GENCODE_FLAGS) -o $@ -c $<


$(TARGET): $(OBJ)
	$(EXEC) $(NVCC) $(ALL_LDFLAGS) $(GENCODE_FLAGS) -o $@ $+ $(LIBRARIES)

run: build
	$(EXEC) ./$(TARGET)

clean:
	rm -f *.o $(TARGET) *~

disp-%: ;@echo $* = $($*)

clobber: clean
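With the Makefile above, typical invocations look like this (a usage sketch; `dbg` defaults to 1 and the `disp-%` pattern target is defined above):

```
make               # cross-compile all .cpp/.cu files into ./open_gpu
make dbg=0         # release build (drops the -g -G debug flags)
make disp-LDFLAGS  # disp-% prints the value of any Makefile variable
make clean         # remove objects and the binary
```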

But the problem looks like the host can’t connect to the board; you have to correctly set up eth0 on the development board. My Makefile works well both on the host and on the development board.
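For the eth0 setup, a dry-run helper like this can generate the commands (a sketch; the addresses are placeholders, so adjust them for your LAN and run the printed commands as root on the board):

```shell
#!/bin/sh
# Dry-run: print the iproute2 commands that give eth0 a static IPv4.
# Review the output, then run it as root on the Jetson.
eth0_static() {
  addr=$1   # board address, e.g. 192.168.1.100
  gw=$2     # gateway,       e.g. 192.168.1.1
  echo "ip addr add $addr/24 dev eth0"
  echo "ip link set eth0 up"
  echo "ip route add default via $gw"
}
eth0_static 192.168.1.100 192.168.1.1
```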

The ssh was not configured properly.
I used the IPv4 address instead of a hostname. Also, git needs to be configured on both the host and the target so that it doesn’t pop up a ‘Security Violation’ error.
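In case it helps anyone else, here is a quick format check before pasting the address into Nsight’s remote-connection dialog (a sketch; Nsight just needs the dotted-quad string, and the address shown is a placeholder):

```shell
#!/bin/sh
# Return success if the argument looks like a dotted-quad IPv4 address,
# i.e. what worked in the connection dialog instead of a hostname.
is_ipv4() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}
if is_ipv4 "192.168.1.100"; then
  echo "use this address in Nsight"
else
  echo "not an IPv4 address; use the board's IP instead"
fi
```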
[url]http://montecristo.co.it.pt/cudaDoc65/pdf/Nsight_Eclipse_Edition_Getting_Started.pdf[/url]
Please refer to Section 3.6 of the above for further information.
Cross-compilation works fine now.