[LU-663] Some architectures do not have NUMA features anymore Created: 07/Sep/11 Updated: 07/Jun/12 Resolved: 12/Dec/11 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.0.0 |
| Fix Version/s: | Lustre 2.2.0, Lustre 2.1.2 |
| Type: | Improvement | Priority: | Minor |
| Reporter: | Gregoire Pichon | Assignee: | Zhenyu Xu |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Environment: |
Lustre 2.0, x86_64 architectures, recent kernel versions |
||
| Epic: | NUMA, performance, server |
| Rank (Obsolete): | 4831 |
| Description |
|
Lustre makes some of its worker threads (ll_ost_io and ptlrpc_hr) NUMA-aware by starting them spread over the online CPUs. These portions of code use the cpu_to_node() and node_to_cpumask() kernel services.

In lustre/ptlrpc/service.c, ptlrpc_main():

#if defined(HAVE_NODE_TO_CPUMASK) && defined(CONFIG_NUMA)
        /* we need to do this before any per-thread allocation is done so
         * that we get the per-thread allocations on local node.  bug 7342 */
        if (svc->srv_cpu_affinity) {
                int cpu, num_cpu;

                for (cpu = 0, num_cpu = 0; cpu < cfs_num_possible_cpus();
                     cpu++) {
                        if (!cfs_cpu_online(cpu))
                                continue;
                        if (num_cpu == thread->t_id % cfs_num_online_cpus())
                                break;
                        num_cpu++;
                }
                cfs_set_cpus_allowed(cfs_current(),
                                     node_to_cpumask(cpu_to_node(cpu)));
        }
#endif

1. The cpu_to_node() service is defined by the kernel either with a #define or, in newer kernel versions (2.6.24) on some architectures, as an exported symbol. The exported-symbol case is not handled correctly by the Lustre code, so cpu_to_node() always returns 0. This is because lustre/include/linux/lustre_compat25.h contains:

#ifndef cpu_to_node
#define cpu_to_node(cpu) 0
#endif

2. The portions of code that use the node_to_cpumask() service are protected by the HAVE_NODE_TO_CPUMASK #define. Unfortunately, newer kernel versions do not export this symbol (or its equivalent variants node_to_cpu_mask() and node_2_cpu_mask()), so HAVE_NODE_TO_CPUMASK ends up undefined and the NUMA-aware code is compiled out.

I am going to provide a patch through Gerrit for these two issues. |
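|
As an illustrative sketch only (not the patch referenced above): on kernels where node_to_cpumask() is no longer available, the same per-node binding can usually be expressed with cpumask_of_node() from <linux/topology.h> together with set_cpus_allowed_ptr(). The helper name below is hypothetical.

#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/topology.h>

/* Sketch (assumption, not the actual fix): bind the calling thread to all
 * CPUs of the NUMA node that owns @cpu.  cpumask_of_node() plays the role
 * of the removed node_to_cpumask(), and set_cpus_allowed_ptr() takes a
 * cpumask pointer rather than a cpumask value. */
static void bind_current_to_node_of_cpu(int cpu)
{
        int node = cpu_to_node(cpu);

        if (set_cpus_allowed_ptr(current, cpumask_of_node(node)) != 0)
                pr_warn("failed to bind thread to NUMA node %d\n", node);
}

Whether cpumask_of_node() exists on the target kernel would still need a configure-time probe, analogous to the existing HAVE_NODE_TO_CPUMASK check.
|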
| Comments |
| Comment by Peter Jones [ 07/Sep/11 ] |
|
Bobi, can you please look into this one? Thanks, Peter |
| Comment by Gregoire Pichon [ 07/Sep/11 ] |
|
I have pushed a patch to Gerrit: |
| Comment by Build Master (Inactive) [ 10/Dec/11 ] |
|
Integrated in Result = SUCCESS
|
| Comment by Peter Jones [ 12/Dec/11 ] |
|
Landed for 2.2 |
| Comment by Bob Glossman (Inactive) [ 01/May/12 ] |
|
http://review.whamcloud.com/#change,2620 |