[LU-516] oom when running ost-pools test 23 Created: 20/Jul/11  Updated: 28/May/17  Resolved: 28/May/17

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: Sarah Liu Assignee: WC Triage
Resolution: Cannot Reproduce Votes: 0
Labels: None
Environment:

RHEL6 x86_64 server with i686 client, quota enabled.


Severity: 3
Rank (Obsolete): 10384

 Description   

This can be reproduced. Client console log:

Lustre: DEBUG MARKER: == ost-pools test 23: OST pools and quota == 21:40:52 (1311136852)
LustreError: 28036:0:(quota_ctl.c:328:client_quota_ctl()) ptlrpc_queue_wait failed, rc: -114
LustreError: 11-0: an error occurred while communicating with 192.168.4.131@o2ib. The ost_write operation failed with -122
LustreError: 28045:0:(vvp_io.c:990:vvp_io_commit_write()) Write page 280 of inode d7f47d44 failed -122
Lustre: DEBUG MARKER: cancel_lru_locks osc start
Lustre: DEBUG MARKER: cancel_lru_locks osc stop
LustreError: 11-0: an error occurred while communicating with 192.168.4.131@o2ib. The ost_write operation failed with -122
LustreError: 28062:0:(vvp_io.c:990:vvp_io_commit_write()) Write page 512 of inode d7f47d44 failed -122
__ratelimit: 116414 callbacks suppressed
sssd_be invoked oom-killer: gfp_mask=0xd0, order=0, oom_adj=0
sssd_be cpuset=/ mems_allowed=0
Pid: 1753, comm: sssd_be Not tainted 2.6.32-131.2.1.el6.i686 #1
Call Trace:
[<c04df3f0>] ? oom_kill_process+0xb0/0x2d0
[<c04dfa9a>] ? __out_of_memory+0x4a/0x90
[<c04dfb35>] ? out_of_memory+0x55/0xb0
[<c04ed982>] ? __alloc_pages_nodemask+0x7e2/0x800
[<c051981c>] ? cache_alloc_refill+0x2bc/0x510
[<c05194f4>] ? kmem_cache_alloc+0xa4/0x110
[<c0532728>] ? getname+0x28/0xe0
[<c05349ba>] ? user_path_at+0x1a/0x80
[<c0473cb0>] ? autoremove_wake_function+0x0/0x40
[<c052c8d7>] ? vfs_fstatat+0x37/0x70
[<c052ca18>] ? vfs_stat+0x18/0x20
[<c052ca2f>] ? sys_stat64+0xf/0x30
[<c047d786>] ? getnstimeofday+0x46/0xf0
[<c04adb3c>] ? audit_syscall_entry+0x21c/0x240
[<c04ad856>] ? audit_syscall_exit+0x216/0x240
[<c0409adf>] ? sysenter_do_call+0x12/0x28
Mem-Info:
DMA per-cpu:
CPU 0: hi: 0, btch: 1 usd: 0
CPU 1: hi: 0, btch: 1 usd: 0
CPU 2: hi: 0, btch: 1 usd: 0
CPU 3: hi: 0, btch: 1 usd: 0
Normal per-cpu:
CPU 0: hi: 186, btch: 31 usd: 171
CPU 1: hi: 186, btch: 31 usd: 0
CPU 2: hi: 186, btch: 31 usd: 157
CPU 3: hi: 186, btch: 31 usd: 177
HighMem per-cpu:
CPU 0: hi: 186, btch: 31 usd: 18
CPU 1: hi: 186, btch: 31 usd: 0
CPU 2: hi: 186, btch: 31 usd: 181
CPU 3: hi: 186, btch: 31 usd: 78
active_anon:3032 inactive_anon:1596 isolated_anon:0
active_file:3360 inactive_file:937347 isolated_file:0
unevictable:0 dirty:326 writeback:0 unstable:0
free:1963264 slab_reclaimable:3968 slab_unreclaimable:127049
mapped:3547 shmem:46 pagetables:477 bounce:0
DMA free:3516kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15792kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:140kB slab_unreclaimable:4520kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
lowmem_reserve[]: 0 863 12159 12159
Normal free:4056kB min:3724kB low:4652kB high:5584kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:96kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:883912kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:15732kB slab_unreclaimable:503676kB kernel_stack:2112kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 90370 90370
HighMem free:7845484kB min:512kB low:12700kB high:24888kB active_anon:12128kB inactive_anon:6384kB active_file:13440kB inactive_file:3749292kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:11567412kB mlocked:0kB dirty:1304kB writeback:0kB mapped:14184kB shmem:184kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:1908kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 3*4kB 2*8kB 2*16kB 4*32kB 14*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 3516kB
Normal: 36*4kB 8*8kB 2*16kB 1*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 4048kB
HighMem: 48*4kB 26*8kB 12*16kB 3*32kB 9*64kB 10*128kB 7*256kB 5*512kB 5*1024kB 3*2048kB 1911*4096kB = 7845616kB
803094 total pagecache pages
0 pages in swap cache
Swap cache stats: add 0, delete 0, find 0/0
Free swap = 14565368kB
Total swap = 14565368kB
3145712 pages RAM
2918914 pages HighMem
62162 pages reserved
815050 pages shared
289543 pages non-shared
Out of memory: kill process 24459 (pickup) score 2788 or a child
Killed process 24459 (pickup) vsz:11152kB, anon-rss:436kB, file-rss:1632kB
irqbalance invoked oom-killer: gfp_mask=0xd0, order=0, oom_adj=0
irqbalance cpuset=/ mems_allowed=0
Pid: 1470, comm: irqbalance Not tainted 2.6.32-131.2.1.el6.i686 #1
Call Trace:
[<c04df3f0>] ? oom_kill_process+0xb0/0x2d0
[<c04dfa9a>] ? __out_of_memory+0x4a/0x90
[<c04dfb35>] ? out_of_memory+0x55/0xb0
[<c04ed982>] ? __alloc_pages_nodemask+0x7e2/0x800
[<c051981c>] ? cache_alloc_refill+0x2bc/0x510
[<c05194f4>] ? kmem_cache_alloc+0xa4/0x110
[<c0532728>] ? getname+0x28/0xe0
[<c05251ce>] ? do_sys_open+0x1e/0x130
[<c04adb3c>] ? audit_syscall_entry+0x21c/0x240
[<c052535c>] ? sys_open+0x2c/0x40
[<c0409adf>] ? sysenter_do_call+0x12/0x28
Mem-Info:
DMA per-cpu:
CPU 0: hi: 0, btch: 1 usd: 0
CPU 1: hi: 0, btch: 1 usd: 0
CPU 2: hi: 0, btch: 1 usd: 0
CPU 3: hi: 0, btch: 1 usd: 0
Normal per-cpu:
CPU 0: hi: 186, btch: 31 usd: 101
CPU 1: hi: 186, btch: 31 usd: 41
CPU 2: hi: 186, btch: 31 usd: 135
CPU 3: hi: 186, btch: 31 usd: 175
HighMem per-cpu:
CPU 0: hi: 186, btch: 31 usd: 22
CPU 1: hi: 186, btch: 31 usd: 12
CPU 2: hi: 186, btch: 31 usd: 0
CPU 3: hi: 186, btch: 31 usd: 73
...
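For reference, the -122 and -114 return codes in the log above are standard Linux errno values, and the slab_unreclaimable page count in the Mem-Info summary converts to roughly the 503 MB of unreclaimable slab seen in the Normal (lowmem) zone, which would explain an OOM on an i686 client despite ample HighMem. A quick sanity check (illustrative only, not part of the original report):

```python
import errno

# Decode the Lustre client error codes seen in the console log.
assert errno.errorcode[122] == "EDQUOT"    # "Disk quota exceeded" -- expected in a quota test
assert errno.errorcode[114] == "EALREADY"  # "Operation already in progress"

# Convert the OOM report's slab_unreclaimable page count to kB (4 KiB pages on i686).
slab_unreclaimable_pages = 127049          # from the Mem-Info summary above
print(slab_unreclaimable_pages * 4, "kB")  # prints "508196 kB", i.e. ~500 MB of slab pinned in lowmem
```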



 Comments   
Comment by Sarah Liu [ 20/Jul/11 ]

Not sure if this is the same issue as LU-514.

Comment by Andreas Dilger [ 28/May/17 ]

Close old issue.

Generated at Sat Feb 10 05:32:34 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.