Each thread in each processor

Hi,
For those who don't know my case:

I have a program that uses 3 or 4 threads, and a server with 4 processors.
When I run the program "normally :-))", all the threads run on the first processor. How can I make each thread run on its own processor?


Thanks in advance,
Saad

Here is another test program, and I get the same thing:

test program

#include <omp.h>
#include <stdio.h>   /* printf */
#include <stdlib.h>  /* exit */

int main() {

  int i, j;

  omp_set_num_threads(3);

#pragma omp for nowait
  for (i = 0; i < 10000; i++)
    for (j = 0; j < 100; j++)
      printf("hyhy that's my way\n");

  exit(0);
}

and I use this version:


pgcc 6.0-5 64-bit target on x86-64 Linux


Thanks in advance.

An omp for pragma must be contained within a parallel region (see below). Note that Chapter 6 of the PGI User's Guide (http://www.pgroup.com/doc/pgiug.pdf) gives an overview of using OpenMP pragmas and might be helpful.

  • Mat
% cat omp2.c
#include <omp.h>
#include <stdio.h>   /* printf */
#include <stdlib.h>  /* exit */

int main() {

  int i, j, tid;

  omp_set_num_threads(3);
  /* j must be private: each thread runs its own copy of the inner loop */
#pragma omp parallel private(tid, j)
  {
    tid = omp_get_thread_num();
#pragma omp for nowait
    for (i = 0; i < 10; i++)
      for (j = 0; j < 10; j++)
        printf("%d [%d, %d]\n", tid, i, j);
  }
  exit(0);
}

Thanks Mat, I will try. Excuse me, I am a beginner :-((. I will read it, I promise.

I did it and I get the same thing…
3 threads, and when I run "top", my program is shown on only one processor…!!!

Hi Saad,

When you were using top, were all the CPUs listed, or did it just show "Cpu(s)"? If it was the latter, press '1' to have top show the status of each CPU.

  • Mat

Yes, of course, and I can see the program and the number of the processor on which it is executed:

  PID USER     PRI NI  SIZE  RSS  SHRE STAT %CPU %MEM   TIME CPU CMD
29892 forest    25  0 11.2G  11G   892 R    24.9 74.0  2344m   1 Myprog
19327 root      15  0 19088 1164   716 S     0.3  0.0 173:23   2 X
    1 root      15  0   252  208   184 S     0.0  0.0   5:44   0 init
    2 root      RT  0     0    0     0 SW    0.0  0.0   0:00   0 migration/0
    3 root      RT  0     0    0     0 SW    0.0  0.0   0:00   1 migration/1
    4 root      RT  0     0    0     0 SW    0.0  0.0   0:00   2 migration/2
    5 root      RT  0     0    0     0 SW    0.0  0.0   0:00   3 migration/3
    6 root      15  0     0    0     0 SW    0.0  0.0   0:11   0 keventd
    7 root      34 19     0    0     0 SWN   0.0  0.0   0:00   0 ksoftirqd/0
    8 root      34 19     0    0     0 SWN   0.0  0.0   0:00   1 ksoftirqd/1
    9 root      34 19     0    0     0 SWN   0.0  0.0   0:00   2 ksoftirqd/2

More than that, the system processes ("migration") are executed on all 4 processors.
Thanks, I really like this forum…:-))

Hey, is there any other solution on Linux to visualize an execution on several processors in one machine, or a profiling command?

Thanks in advance.

I often use xosview http://sourceforge.net/projects/xosview/.

  • Mat

hi Mat,
I will ask a favor, and it's important to me: can you send me the result of top when you execute a parallel program ("your test program", for example)?

I will compare it to my top result.

You know, it's very strange: I ran your test program and it runs on only one processor with 4 threads ???:-(( This is the second week with this problem, and I must find a solution this week.


Oh yeah, I tried this visualizer, but it gave some compilation errors!!!
I have a Red Hat Linux OS.
Thanks Mat,

OK, I changed the example a bit so that each iteration does some work. The threads don't show up in top when there is no actual computation.

% cat test.c
#include <omp.h>
#include <math.h>
#include <stdio.h>   /* printf */
#include <stdlib.h>  /* exit */

int main() {
  float x, y, z;
  int i, j, tid;

  x = y = z = 0.0;
  omp_set_num_threads(3);
  /* j and x must be private: each thread runs its own inner loop */
#pragma omp parallel private(tid, j, x)
  {
    tid = omp_get_thread_num();
#pragma omp for nowait
    for (i = 0; i < 10000; i++)
      for (j = 0; j < 10000; j++) {
        x = sin((y + i) * (z + j) / 1.2);
        // printf("%d [%d, %d] %f\n", tid, i, j, x);
      }
  }
  exit(0);
}

Compile the code and bind the threads to CPUs 3, 4, and 5. Note that setting MP_BIND and MP_BLIST is optional.

% pgcc -mp -V6.2-4 test.c -O0
% setenv MP_BIND yes
% setenv MP_BLIST 3,4,5
% a.out



top - 09:26:24 up 40 days, 11:05,  4 users,  load average: 0.79, 0.55, 0.29
Tasks: 145 total,   2 running, 141 sleeping,   2 stopped,   0 zombie
Cpu0  :  0.0% us, 15.9% sy,  0.0% ni, 84.1% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu1  :  0.0% us,  0.3% sy,  0.0% ni, 99.7% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu2  :  0.0% us,  0.0% sy,  0.0% ni, 100.0% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu3  : 100.0% us,  0.0% sy,  0.0% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu4  : 94.0% us,  6.0% sy,  0.0% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu5  : 100.0% us,  0.0% sy,  0.0% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu6  :  0.0% us,  0.0% sy,  0.0% ni, 100.0% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu7  :  0.0% us,  0.0% sy,  0.0% ni, 100.0% id,  0.0% wa,  0.0% hi,  0.0% si
Mem:   8179780k total,  2210816k used,  5968964k free,    70044k buffers
Swap:  8393920k total,   580392k used,  7813528k free,   460760k cached
  • Mat

Saad,

To those who don't know my case:

Hello, I have a parallel program that I compile with MPI, using pgcc 6.1-6 64-bit target on x86-64 Linux. I have a test program ("hello from thread number x"). When I run it without MP_BIND, I get 4 threads max (I have 4 processors), all on one processor; and when I use MP_BIND=yes, I get a bad address error:
Error: init_pthr: sched_setaffinity: Bad address

That's it.

It works :-)) on all my four processors! MP_BIND=yes still gives the error, but it's not important; it works.


Thanks Mat,