Mmap with Segmentation fault

Hi,

   I am hitting a segmentation fault when using mmap...

#define IMAGE_POOL_SIZE 4096

userspace code
gdma_handle = open("/dev/fit_dma", O_RDWR);
gpimage_pool = mmap(0, IMAGE_POOL_SIZE, PROT_READ | PROT_WRITE,
                    MAP_SHARED, gdma_handle, 0);
printf("gpimage_pool address:0x%X \n", gpimage_pool);
printf("gpimage_pool 0:0x%X\n", gpimage_pool[0]);

kernel space code
static int fit_dma_mmap(struct file* filp, struct vm_area_struct* vma)
{
int i, ret;
int reserved_size=0;
gpimage_buf = kzalloc(IMAGE_POOL_SIZE, GFP_KERNEL);
gpimage_buf [0]=0x55;

printk ( "gpimage_buf:0x%X.............\n",gpimage_buf);
printk ( "vm_start:0x%X.............\n",vma->vm_start);
printk ( "vm_end:0x%X.............\n",vma->vm_end );   
printk ( "IMAGE_POOL_SIZE[%d]............\n",IMAGE_POOL_SIZE);  

//1.Change to be non-cached
  vma->vm_page_prot = pgprot_noncached ( vma->vm_page_prot );
//2.Reserved pages
  while(reserved_size != IMAGE_POOL_SIZE) 
  {
    SetPageReserved(virt_to_page(gpimage_buf+reserved_size));
    reserved_size+=PAGE_SIZE;
  }    
//3.do mmap
  if(remap_pfn_range(vma,
                     vma->vm_start,
                     virt_to_phys((void*)gpimage_buf) >> PAGE_SHIFT,
                     vma->vm_end - vma->vm_start,
                     vma->vm_page_prot)) 
  {

    printk ( "dma image pool mmap fail.............\n" );
    return -ENOMEM;
  }  
       
  return 0;

}

**kernelspace log**
Apr 27 15:40:02 tx2 kernel: [ 7156.900783] gpimage_buf:0xDF6B1000…
Apr 27 15:40:02 tx2 kernel: [ 7156.900798] vm_start:0x83FA1000…
Apr 27 15:40:02 tx2 kernel: [ 7156.900807] vm_end:0x83FA2000…
Apr 27 15:40:02 tx2 kernel: [ 7156.900817] IMAGE_POOL_SIZE[4096]…

userspace log
gpimage_pool address:0x83FA1000
Segmentation fault (core dumped)

The mmap address is returned, but I do not know why I get the segmentation fault (core dumped). Can anyone help me? Thanks.

Regards,
Locar

What’s this node? /dev/fit_dma

Hi ShaneCCC,

the node's major number is 505
fit@tx2:/dev$ ls -l fit_dma
crw-rw-rw- 1 root root 505, 0 Apr 27 16:08 fit_dma

Regards,
Locar

What’s your kernel version?
And you should check that the open function returns normally first.

I didn’t find any fit_dma node on my TX2 r32.3.1

nvidia@nvidia-desktop:/dev$ ls f*
fb0 full fuse

fd:
0 1 2 3

About fit_dma: it is a node created by my own driver.
I checked that the open function succeeds, so I can use the handle to do mmap.
I checked kernel.log and it looks fine, but I do not know why the problem happens.
I suspect the code is not compatible with the 64-bit architecture.
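For reference, the open check can be made explicit with something like the sketch below (the path is just the node name from this thread):

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

/* Open a device node and report errno on failure; returns the fd or -1. */
static int open_node(const char *path)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        perror(path);
    return fd;
}
```

Called as `int gdma_handle = open_node("/dev/fit_dma");` followed by a `gdma_handle < 0` check before any mmap.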

kernel.log
Apr 27 18:04:46 tx2 kernel: [15841.630153] .fit_dma…init_chrdev, dev_major:505…
Apr 27 18:04:46 tx2 kernel: [15841.630179] Fit DMA 0000:01:00.0: pci_enable_device() successful
Apr 27 18:04:46 tx2 kernel: [15841.630276] Fit DMA 0000:01:00.0: pci_enable_msi() successful
Apr 27 18:04:46 tx2 kernel: [15841.630290] Fit DMA 0000:01:00.0: using a 64-bit irq mask
Apr 27 18:04:46 tx2 kernel: [15841.630292] Fit DMA 0000:01:00.0: irq pin: 1
Apr 27 18:04:46 tx2 kernel: [15841.630294] Fit DMA 0000:01:00.0: irq line: 125
Apr 27 18:04:46 tx2 kernel: [15841.630296] Fit DMA 0000:01:00.0: irq: 446
Apr 27 18:04:46 tx2 kernel: [15841.630298] Fit DMA 0000:01:00.0: request irq: 125
Apr 27 18:04:46 tx2 kernel: [15841.630302] Fit DMA 0000:01:00.0: BAR[0] 0x48040000-0x480401ff flags 0x0014220c, length 512
Apr 27 18:04:46 tx2 kernel: [15841.630305] Fit DMA 0000:01:00.0: BAR[1] 0x00000000-0x00000000 flags 0x00000000, length 0
Apr 27 18:04:46 tx2 kernel: [15841.630308] Fit DMA 0000:01:00.0: BAR[2] 0x00000000-0x00000000 flags 0x00000000, length 0
Apr 27 18:04:46 tx2 kernel: [15841.630310] Fit DMA 0000:01:00.0: BAR[3] 0x00000000-0x00000000 flags 0x00000000, length 0
Apr 27 18:04:46 tx2 kernel: [15841.630313] Fit DMA 0000:01:00.0: BAR[4] 0x48000000-0x4803ffff flags 0x0014220c, length 262144
Apr 27 18:04:46 tx2 kernel: [15841.630315] Fit DMA 0000:01:00.0: BAR[5] 0x00000000-0x00000000 flags 0x00000000, length 0
Apr 27 18:04:46 tx2 kernel: [15841.630361] Fit DMA 0000:01:00.0: BAR[0] mapped to 0xffffff800bba1000, length 512
Apr 27 18:04:46 tx2 kernel: [15841.630394] Fit DMA 0000:01:00.0: BAR[4] mapped to 0xffffff8015900000, length 262144

Can you try /dev/mem?

still wrong…

Because the TX2 is aarch64 but my mmap return value is only 32 bits, I did some extra processing as below:

userspace code
gpimage_pool = (unsigned char *)((uintptr_t)mmap64(0, IMAGE_POOL_SIZE, PROT_READ | PROT_WRITE,
               MAP_SHARED, gdma_handle, 0) & 0x007fffffffff);
printf("gpimage_pool address:0x%lX \n", (unsigned long)gpimage_pool);
ioctl(gdma_handle, FIT_IOCX_SET_PATTERN, 0);
printf("gpimage_pool 0:0x%X\n", gpimage_pool[0]);
printf("gpimage_pool 0:0x%X\n", gpimage_pool[1]);
printf("gpimage_pool IMAGE_POOL_SIZE-2:0x%X\n", gpimage_pool[IMAGE_POOL_SIZE-2]);
printf("gpimage_pool IMAGE_POOL_SIZE-1:0x%X\n", gpimage_pool[IMAGE_POOL_SIZE-1]);

kernelspace
case FIT_IOCX_SET_PATTERN:
gpimage_buf[0]=0x55;
gpimage_buf[1]=0xAA;
gpimage_buf[IMAGE_POOL_SIZE-2]=0xcc;
gpimage_buf[IMAGE_POOL_SIZE-1]=0x33;
break;

userspace log
gpimage_pool address:0x7F9A01A000
gpimage_pool 0:0x55
gpimage_pool 0:0xAA
gpimage_pool IMAGE_POOL_SIZE-2:0xCC
gpimage_pool IMAGE_POOL_SIZE-1:0x33

With this, the segmentation fault was solved.

Regards,
Locar

Updated final result,

1. The TX2 default virtual address space is 39 bits.

2. But mmap only returned 32 bits in my case, so I could not get the 39-bit virtual address from mmap() in userspace.

My method is as below,

1. Do mmap in userspace.
//1. Do memory map...
void *ret = mmap(0, IMAGE_POOL_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, gdma_handle, 0);
if (ret == MAP_FAILED)  /* mmap returns MAP_FAILED on error, not 0xFFFFFFFF */
{
    printf("image pool mmap fail ...\n");
    return IMAGE_POOL_MMAP_FAIL;
}
2. Keep the virtual start address in kernelspace.
static int fit_dma_mmap(struct file *filp, struct vm_area_struct *vma)
{
    int i, ret;
    int reserved_size = 0;

    gimage_pool_va = vma->vm_start;

3. Get the virtual start address by ioctl in userspace.
ioctl(gdma_handle, FIT_IOCX_GET_SHARE_MEM_ADDR, &gpimage_pool);
4. Transfer the virtual start address to userspace (kernelspace side of the ioctl).
copy_to_user((u8 *)arg, &gimage_pool_va, sizeof(u64));
5. Use "volatile" for the shared pointer, and do not use pgprot_noncached(); I found it causes a sync issue.
(V) volatile extern u8 *gpimage_pool;
(X) vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
When I changed data in kernelspace and printed it in userspace, the values were out of sync.


When I changed the image pointer to "volatile" and removed pgprot_noncached(), the data stayed in sync between kernelspace and userspace; I do not know why...

Just sharing the above...