[LU-1642] Clients get disconnected and reconnected during heavy IO immediately after the halt of a blade. Created: 18/Jul/12  Updated: 29/May/17  Resolved: 29/May/17

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.2.0
Fix Version/s: None

Type: Bug Priority: Critical
Reporter: Fabio Verzelloni Assignee: Oleg Drokin
Resolution: Incomplete Votes: 0
Labels: None
Environment:

----------------------------------------------------------------------------------------------------

    1. MDS HW
       • Linux XXXX.admin.cscs.ch 2.6.32-220.7.1.el6_lustre.g9c8f747.x86_64
         Architecture: x86_64
         CPU op-mode(s): 32-bit, 64-bit
         Byte Order: Little Endian
         CPU(s): 16
         Vendor ID: AuthenticAMD
         CPU family: 16
         64GB RAM
         Interconnect: IB 40Gb/s
       • MDT ---> LSI 5480 Pikes Peak, SLC SSDs

----------------------------------------------------------------------------------------------------

    2. OSS HW
       • Architecture: x86_64
         CPU op-mode(s): 32-bit, 64-bit
         Byte Order: Little Endian
         CPU(s): 32
         Vendor ID: GenuineIntel
         CPU family: 6
         64GB RAM
         Interconnect: IB 40Gb/s
       • OSTs ---> LSI 7900, SATA disks

----------------------------------------------------------------------------------------------------

    3. Router nodes
       • 12 Cray XE6 service nodes as router nodes - IB 40Gb/s

----------------------------------------------------------------------------------------------------

    4. Clients
       • ~1500 Cray XE6 nodes - Lustre 1.8.6

----------------------------------------------------------------------------------------------------

    5. LUSTRE Config
       • 1 MDS + 1 failover (MDT on SSD array)
         12 OSSs - 6 OSTs per OSS (72 OSTs)

Lustre Servers ---> 2.2.51.0
Lustre Clients ---> 1.8.6 (~1500 nodes) / 2.2.51.0 (~20 nodes)
----------------------------------------------------------------------------------------------------


Attachments: nid2CrayMapping.txt, sdb.log, smw.log, weiss02.tar.gz, weisshorn02.tar.gz
Severity: 3
Rank (Obsolete): 4006

 Description   

During Lustre testing yesterday we observed the following behaviour:

  • The 4 nodes on a blade are halted
  • IO-intensive jobs such as IOR or MPIIO start:

Jul 17 16:38:15 nid00475 aprun.x[13684]: apid=1177710, Starting, user=20859, batch_id=377847, cmd_line="/usr/bin/aprun.x -n 256 src/C/IOR -a MPIIO -B -b 4096m -t 4096K -k -r -w -e -g -s 1 -i 2 -F -C -o /scratch/weisshorn/fverzell/test5/IORtest-377847 ", num_nodes=64, node_list=64-65,126-129,190-191,702-705,766-769,830-833,894-897,958-961,1022-1025,1086-1089,1150-1153,1214-1217,1278-1281,1294-1295,1342-1345,1406-1409,1470-1473,1534-1535

  • Then, a few minutes later, Lustre starts acting up.

Lustre server log:

Jul 17 16:39:57 weisshorn03 kernel: LNetError: 4754:0:(o2iblnd_cb.c:2991:kiblnd_check_txs_locked()) Timed out tx: tx_queue, 11 seconds
Jul 17 16:39:57 weisshorn03 kernel: LNetError: 4754:0:(o2iblnd_cb.c:3054:kiblnd_check_conns()) Timed out RDMA with 148.187.7.73@o2ib2 (0): c: 0, oc: 1, rc: 5

Jul 17 16:39:58 weisshorn08 kernel: LNetError: 5045:0:(o2iblnd_cb.c:2991:kiblnd_check_txs_locked()) Timed out tx: tx_queue, 12 seconds
Jul 17 16:39:58 weisshorn08 kernel: LNetError: 5045:0:(o2iblnd_cb.c:3054:kiblnd_check_conns()) Timed out RDMA with 148.187.7.78@o2ib2 (0): c: 0, oc: 3, rc: 4

Jul 17 16:39:59 weisshorn13 kernel: LNet: 3394:0:(o2iblnd_cb.c:2340:kiblnd_passive_connect()) Conn race 148.187.7.81@o2ib2
Jul 17 16:39:59 weisshorn05 kernel: LNetError: 4875:0:(o2iblnd_cb.c:2991:kiblnd_check_txs_locked()) Timed out tx: tx_queue, 12 seconds

  • Note the "Bulk IO write error" for nid833, which is part of the job mentioned above, followed by an "inactive thread" warning and a stack trace dump:

Jul 17 16:40:05 weisshorn14 kernel: LustreError: 7929:0:(ldlm_lib.c:2717:target_bulk_io()) @@@ network error on bulk GET 0(1048576) req@ffff880f07f8b400 x1407748581382025/t0(0) o4->412fabdd-3b3a-df4b-bdc6-264145113d70@833@gni:0/0 lens 448/416 e 0 to 0 dl 1342536406 ref 1 fl Interpret:/0/0 rc 0/0
Jul 17 16:40:05 weisshorn14 kernel: Lustre: scratch-OST003f: Bulk IO write error with 412fabdd-3b3a-df4b-bdc6-264145113d70 (at 833@gni), client will retry: rc -110

Jul 17 16:43:19 weisshorn13 kernel: Lustre: 6182:0:(service.c:1034:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/1), not sending early reply
Jul 17 16:43:19 weisshorn13 kernel: req@ffff880ddeb81400 x1407748579298657/t0(0) o4->a34c7ab8-980f-db22-6596-e1db30724c4d@12@gni:0/0 lens 448/416 e 1 to 0 dl 1342536204 ref 2 fl Interpret:/0/0 rc 0/0
Jul 17 16:43:19 weisshorn13 kernel: Lustre: 6182:0:(service.c:1034:ptlrpc_at_send_early_reply()) Skipped 19 previous similar messages

Jul 17 16:43:20 weisshorn13 kernel: LNet: Service thread pid 8102 was inactive for 600.00s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Jul 17 16:43:20 weisshorn13 kernel: Pid: 8102, comm: ll_ost_io_153
Jul 17 16:43:20 weisshorn13 kernel:
Jul 17 16:43:20 weisshorn13 kernel: Call Trace:
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffff8107bf8c>] ? lock_timer_base+0x3c/0x70
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffff814edc52>] schedule_timeout+0x192/0x2e0
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffff8107c0a0>] ? process_timeout+0x0/0x10
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffffa03a65c1>] cfs_waitq_timedwait+0x11/0x20 [libcfs]
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffffa05df0ad>] target_bulk_io+0x38d/0x8b0 [ptlrpc]
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffff8105e7f0>] ? default_wake_function+0x0/0x20
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffffa0b4c792>] ost_brw_write+0x1172/0x1380 [ost]
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffffa03a527b>] ? cfs_set_ptldebug_header+0x2b/0xc0 [libcfs]
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffffa05d64a0>] ? target_bulk_timeout+0x0/0x80 [ptlrpc]
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffffa0b507c4>] ost_handle+0x2764/0x39e0 [ost]
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffffa0612c83>] ? ptlrpc_update_export_timer+0x1c3/0x360 [ptlrpc]
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffffa06183c1>] ptlrpc_server_handle_request+0x3c1/0xcb0 [ptlrpc]
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffffa03a64ce>] ? cfs_timer_arm+0xe/0x10 [libcfs]
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffffa03b0ef9>] ? lc_watchdog_touch+0x79/0x110 [libcfs]
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffffa0612462>] ? ptlrpc_wait_event+0xb2/0x2c0 [ptlrpc]
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffffa06193cf>] ptlrpc_main+0x71f/0x1210 [ptlrpc]
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffffa0618cb0>] ? ptlrpc_main+0x0/0x1210 [ptlrpc]
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffff8100c14a>] child_rip+0xa/0x20
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffffa0618cb0>] ? ptlrpc_main+0x0/0x1210 [ptlrpc]
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffffa0618cb0>] ? ptlrpc_main+0x0/0x1210 [ptlrpc]
Jul 17 16:43:20 weisshorn13 kernel: [<ffffffff8100c140>] ? child_rip+0x0/0x20
Jul 17 16:43:20 weisshorn13 kernel:

  • Nid833 is then evicted:

Jul 17 16:47:44 weisshorn14 kernel: LustreError: 0:0:(ldlm_lockd.c:357:waiting_locks_callback()) ### lock callback timer expired after 568s: evicting client at 833@gni ns: filter-scratch-OST003f_UUID lock: ffff8808aa9fc480/0x9cc518d034bea3c8 lrc: 3/0,0 mode: PW/PW res: 22186722/0 rrc: 2 type: EXT [0->18446744073709551615] (req 0->1048575) flags: 0x20 remote: 0x629daae2e6cf7351 expref: 5 pid: 5811 timeout 4299703474

On SMW console log:

[2012-07-17 16:48:07][c7-0c1s0n1]Lustre: 9196:0:(client.c:1492:ptlrpc_expire_one_request()) @@@ Request x1407748581382025 sent from scratch-OST003f-osc-ffff88041e142400 to NID 148.187.7.114@o2ib2 471s ago has timed out (471s prior to deadline).
[2012-07-17 16:48:07][c7-0c1s0n1] req@ffff8801bea18800 x1407748581382025/t0 o4->scratch-OST003f_UUID@148.187.7.114@o2ib2:6/4 lens 448/608 e 0 to 1 dl 1342536485 ref 2 fl Rpc:/0/0 rc 0/0
[2012-07-17 16:48:07][c7-0c1s0n1]Lustre: scratch-OST003f-osc-ffff88041e142400: Connection to service scratch-OST003f via nid 148.187.7.114@o2ib2 was lost; in progress operations using this service will wait for recovery to complete.
[2012-07-17 16:48:07][c7-0c1s0n1]LustreError: 167-0: This client was evicted by scratch-OST003f; in progress operations using this service will fail.
[2012-07-17 16:48:07][c7-0c1s0n1]Lustre: Server scratch-OST003f_UUID version (2.2.51.0) is much newer than client version (1.8.6)
[2012-07-17 16:48:07][c7-0c1s0n1]Lustre: Skipped 72 previous similar messages
[2012-07-17 16:48:07][c7-0c1s0n1]Lustre: scratch-OST003f-osc-ffff88041e142400: Connection restored to service scratch-OST003f using nid 148.187.7.114@o2ib2.

The job stalls and is finally killed for exceeding its CPU time limit:

slurmd[rosa12]: *** JOB 377847 CANCELLED AT 17:03:36 DUE TO TIME LIMIT ***
aprun.x: Apid 1177710: Caught signal Terminated, sending to application

Attached are the log files from the Cray XE machine for the relevant time range.



 Comments   
Comment by Peter Jones [ 18/Jul/12 ]

Oleg

Could you please look into this one?

Thanks

Peter

Comment by Cliff White (Inactive) [ 18/Jul/12 ]

I am still seeing these messages:

LNetError: 1480:0:(o2iblnd_cb.c:2273:kiblnd_passive_connect()) Can't accept 148.187.6.201@o2ib2: incompatible queue depth 8 (16 wanted)

which would indicate that your parameters are not the same on all nodes. This could be contributing to the issue; please make sure the IB parameters match everywhere (a quick way to check is sketched below).
We are escalating this issue with engineering.
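For reference, a minimal way to compare the relevant o2iblnd parameter across nodes might look like this (the host list and the use of pdsh are assumptions; the o2iblnd connection queue depth is derived from peer_credits, so a node running with peer_credits=8 cannot accept a connection from one wanting 16):

    # compare peer_credits across all servers and routers; every node
    # that speaks o2iblnd should report the same value
    pdsh -w weisshorn[01-14] 'cat /sys/module/ko2iblnd/parameters/peer_credits'

    # to align them, the same option line must appear in the modprobe
    # configuration on every node, e.g. (value here is only an example):
    #   options ko2iblnd peer_credits=16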

Comment by Fabio Verzelloni [ 18/Jul/12 ]

Hi Cliff,
we are in the process of updating the remaining external clients with the new parameters; the problem will be fixed soon.

Thanks
Fabio

Comment by Liang Zhen (Inactive) [ 18/Jul/12 ]

Hi Fabio, a few questions:

  • could you give some description of those two log files (sdb.log and smw.log), i.e. which nodes are they from (client? OSS?)
  • what's the difference between Nid833 and c7-0c1s0n*?
  • we can see some error messages from o2iblnd; I assume 148.187.7.73@o2ib2 is a router, right? Are there any errors in dmesg or console output on that router?
    o2iblnd_cb.c:3054:kiblnd_check_conns()) Timed out RDMA with 148.187.7.73@o2ib2 (0): c: 0, oc: 1, rc: 5

  • dmesg on the client & OSS could be helpful as well
Comment by Isaac Huang (Inactive) [ 18/Jul/12 ]

Hi Fabio, can you please also run 'ibcheckerrors' to make sure the IB fabric is clean? Those RDMA timeout errors are sometimes caused by a faulty fabric, so it makes sense to first make sure the network itself is healthy.
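For reference, a minimal check along those lines might be (assuming the standard infiniband-diags tools are installed):

    # scan all fabric ports for error counters above the default threshold
    ibcheckerrors
    # optionally reset the counters afterwards, so that a later scan
    # reports only errors that are accumulating now
    ibclearerrors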

Comment by Colin McMurtrie [ 19/Jul/12 ]

The two log files (sdb.log and smw.log) are from the Cray XE6 so the events logged there relate to the clients running on the Cray (compute nodes and service nodes running the LNET routers).

Nid833 and c7-0c1s0n1 refer to the same compute node (i.e. they are different names for the same thing). I have attached the file nid2CrayMapping.txt so that you can see this mapping for all nodes on our Cray XE6.

Comment by Liang Zhen (Inactive) [ 19/Jul/12 ]

I'm still trying to understand the network topology here; could you give a description of it?

I think the OSSs are on the o2ib network, correct? But when I checked sdb.log I saw this:

Jul 17 16:37:52 nid00394 kernel: LNet: 12047:0:(gnilnd_conn.c:1872:kgnilnd_reaper_dgram_check()) GNILND_DGRAM_REQ datagram to 12@gni timed out @ 128s dgram 0xffff8803ef766b48 state GNILND_DGRAM_POSTED conn 0xffff8803b4b07000
Jul 17 16:37:52 nid01530 kernel: LNet: 12033:0:(gnilnd_conn.c:1872:kgnilnd_reaper_dgram_check()) GNILND_DGRAM_REQ datagram to 50@gni timed out @ 128s dgram 0xffff880407221b08 state GNILND_DGRAM_POSTED conn 0xffff8803cedc0000
Jul 17 16:37:54 nid01530 kernel: LNet: could not send to 50@gni due to connection setup failure after 130 seconds
Jul 17 16:37:54 nid01530 kernel: LNet: 12028:0:(gnilnd_cb.c:1104:kgnilnd_tx_done()) $$ error -113 on tx 0xffff8803e6be9b68-><?> id 0/0 state GNILND_TX_ALLOCD age 130s  msg@0xffff8803e6be9be8 m/v/ty/ck/pck/pl b00fbabe/8/2/0/22d/0 x0:GNILND_MSG_IMMEDIATE
Jul 17 16:37:58 nid00394 kernel: LNet: could not send to 12@gni due to connection setup failure after 134 seconds
Jul 17 16:37:58 nid00394 kernel: LNet: 12042:0:(gnilnd_cb.c:1104:kgnilnd_tx_done()) $$ error -113 on tx 0xffff8803fdcab248-><?> id 0/0 state GNILND_TX_ALLOCD age 134s  msg@0xffff8803fdcab2c8 m/v/ty/ck/pck/pl b00fbabe/8/2/0/24d4/0 x0:GNILND_MSG_IMMEDIATE

I checked nid2CrayMapping.txt; these nodes (nid00394, nid01530) are marked as "service". Does that mean those OSSs are also acting as routers, or are they dedicated routers that are simply marked as "service" nodes?

Also, what are the o2ib NIDs of these two nodes (nid00394, nid01530)? What are the hostnames of 12@gni and 50@gni? We need to correlate the errors/logs along a message path (OSS <-> router <-> client) at the same moment.

Comment by Isaac Huang (Inactive) [ 19/Jul/12 ]

Hi Colin and Fabio, I'm an Intel/Whamcloud engineer (even though my email address isn't from either one) working with Liang on this bug. I'd appreciate a couple of things from you:

  1. As I requested above, please run 'ibcheckerrors' on the IB network. I want to make sure that the network itself is OK. This is important: if the network is faulty but we assume it's good, we could be led in the wrong direction and it would take us longer to solve the problem.
  2. On the router nodes, particularly 148.187.7.[73,78]@o2ib2, please collect all files under /proc/sys/lnet/, e.g. with "tar -czvf `hostname`.tgz /proc/sys/lnet/". As long as these nodes have not been rebooted since the problem happened, the files will contain very useful history data to help us understand what was happening. Of course, it would be even more useful to gather these files while the problem is happening.
Comment by Fabio Verzelloni [ 19/Jul/12 ]

Dear Liang,
here is an overview of the network topology.

These are the router nodes inside the Cray XE system:
rosa4:~ # lctl show_route
net o2ib2 hops 1 gw 220@gni up
net o2ib2 hops 1 gw 304@gni up
net o2ib2 hops 1 gw 394@gni up
net o2ib2 hops 1 gw 436@gni up
net o2ib2 hops 1 gw 226@gni up
net o2ib2 hops 1 gw 1530@gni up
net o2ib2 hops 1 gw 1476@gni up
net o2ib2 hops 1 gw 270@gni up
net o2ib2 hops 1 gw 474@gni up
net o2ib2 hops 1 gw 484@gni up
net o2ib2 hops 1 gw 1364@gni up
net o2ib2 hops 1 gw 1386@gni up

These nodes have both IB and GNI interfaces.

The following lines are from weisshorn, which is where Lustre is hosted:

[root@weisshorn01 ~]# lctl show_route
net gni hops 1 gw 148.187.7.77@o2ib2 up
net gni hops 1 gw 148.187.7.72@o2ib2 up
net gni hops 1 gw 148.187.7.71@o2ib2 up
net gni hops 1 gw 148.187.7.78@o2ib2 up
net gni hops 1 gw 148.187.7.81@o2ib2 up
net gni hops 1 gw 148.187.7.74@o2ib2 up
net gni hops 1 gw 148.187.7.76@o2ib2 up
net gni hops 1 gw 148.187.7.73@o2ib2 up
net gni hops 1 gw 148.187.7.82@o2ib2 up
net gni hops 1 gw 148.187.7.79@o2ib2 up
net gni hops 1 gw 148.187.7.75@o2ib2 up
net gni hops 1 gw 148.187.7.80@o2ib2 up

These nodes have only IB.

Weisshorn and the Cray are two completely separate systems; the nodes nid00394 and nid01530 are routers:
nid01530:~ # lctl list_nids
1530@gni
148.187.7.82@o2ib2

nid00394:~ # lctl list_nids
394@gni
148.187.7.75@o2ib2

In Cray terminology a node is called 12@gni, or in general <number>@gni, so the same node can be referred to in several different ways. For example, a node called 50@gni will also be known as:

50@gni
nid00050
c0-0c0s6n2
xxx.xx.0.51

What you see marked as "service" will be router nodes or frontend nodes.
All the compute nodes have to pass through the router nodes, because they only have gni.
As for the 12@gni and 50@gni mentioned above, I can confirm that they are compute nodes.
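For reference, routing tables like the ones shown above are normally generated by the lnet "routes" module parameter; an illustrative (not our actual) configuration would look roughly like this:

    # on Cray nodes: reach the o2ib2 network via the gni gateways
    options lnet routes="o2ib2 220@gni; o2ib2 304@gni"   # one entry per router
    # on the weisshorn servers: reach the gni network via the routers' IB NIDs
    options lnet routes="gni 148.187.7.[71-82]@o2ib2"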

Please let me know if you need more details.
Fabio

Comment by Isaac Huang (Inactive) [ 19/Jul/12 ]

Hi Fabio, thanks for the feedback. It's about 3AM for Liang now, so he's likely not going to respond soon. Please have a look at my previous comment where additional data was requested.

Comment by Cliff White (Inactive) [ 19/Jul/12 ]

Tarball of /proc/sys/lnet on the MDS

Comment by Cliff White (Inactive) [ 19/Jul/12 ]

Better tarball. Still need this from the routers

Comment by Isaac Huang (Inactive) [ 19/Jul/12 ]

First of all, Cliff did an ibcheckerrors, which said:

Summary: 236 nodes checked, 2 bad nodes found
786 ports checked, 342 ports have errors beyond threshold

I'm not sure how serious the problem is, as the error counters could have been accumulating for months. But I think it's a good idea to have your IB admin double check that the IB fabric is running OK. Such problems can be hard to nail down when they begin manifesting themselves at upper layers.

From the data available, and under the assumption that the nodes' system clocks are roughly synchronized (to within seconds at least), here's my speculation about what happened.

  1. 16:35:01 Timeouts and errors began to show up in the @gni network:

    No gnilnd traffic received from 50@gni for 120 seconds, terminating connection. Is node down?
    kgnilnd_close_conn_locked()) closing conn to 12@gni: error -110
    kgnilnd_tx_done()) $$ error -113 on tx 0xffff8803b4b87b68-><?>
    kgnilnd_reaper_dgram_check()) GNILND_DGRAM_REQ datagram to 12@gni timed out

    These errors could indicate problems in the GNI network, or they could be OK if they were errors about messages to the halted blade.

  2. Then, on the routers, messages to nodes in the @gni network queued up, as the GNI network couldn't forward them out. The servers then exhausted their router buffer credits. As a result, the routers couldn't return TX credits back to the servers. Then the next step...
  3. 16:39:57 Servers began to see RDMA timeouts:

    Jul 17 16:39:57 weisshorn03 kernel: LNetError: 4754:0:(o2iblnd_cb.c:2991:kiblnd_check_txs_locked()) Timed out tx: tx_queue, 11 seconds
    Jul 17 16:39:57 weisshorn03 kernel: LNetError: 4754:0:(o2iblnd_cb.c:3054:kiblnd_check_conns()) Timed out RDMA with 148.187.7.73@o2ib2 (0): c: 0, oc: 1, rc: 5

    These TXs were never put out on the wire. Instead they waited too long for TX credits and timed out before reaching the wire. The 1st message says they were waiting for a TX credit, and the 2nd says the connection had no credit to use. The lack of TX credits can be seen in the peers file on the MDS (a quick way to spot this on a live system is sketched after this list):

    nid                 refs  state  last  max  rtr  min  tx  min   queue
    148.187.7.71@o2ib2     3  up       -1   16   16   16  16  -153      0
    148.187.7.72@o2ib2     3  up       -1   16   16   16  16  -150      0
    148.187.7.73@o2ib2     3  up       -1   16   16   16  16  -152      0
    148.187.7.74@o2ib2     3  up       -1   16   16   16  16  -152      0
    148.187.7.75@o2ib2     3  up       -1   16   16   16  16  -152      0
    148.187.7.76@o2ib2     3  up       -1   16   16   16  16  -151      0
    148.187.7.77@o2ib2     3  up       -1   16   16   16  16  -151      0
    148.187.7.78@o2ib2     3  up       -1   16   16   16  16  -151      0
    148.187.7.79@o2ib2     3  up       -1   16   16   16  16  -152      0
    148.187.7.80@o2ib2     3  up       -1   16   16   16  16  -151      0
    148.187.7.81@o2ib2     3  up       -1   16   16   16  16  -151      0
    148.187.7.82@o2ib2     3  up       -1   16   16   16  16  -150      0

    The TX queues for routers became quite long at one point.

  4. Active clients now wouldn't see any progress, as the servers couldn't send messages to the routers.
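A quick, illustrative way to spot this condition on a live server, assuming the peers-file column layout shown above (where the 9th field is the minimum TX credit count):

    # flag peers whose tx credits ever went negative, i.e. peers for
    # which sends have had to queue up waiting for credits
    awk 'NR > 1 && $9 < 0 { print $1, "min tx credits:", $9 }' /proc/sys/lnet/peers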

To fix it, I'd suggest the following:

  1. Server IB network: the errors reported by ibcheckerrors should be double-checked. Also, the error counters should be reset, so that we can query them again later and interpret the results better.
  2. Client GNI network: If the errors were all about nodes in the halted blade, then they can be ignored. Otherwise, they must be investigated.
  3. On routers:
    1. More buffer credits should be granted to the servers. I'd need to see the module options on the routers and the files under /proc/sys/lnet/ to make suggestions on router buffer settings.
    2. The peer health option must be turned on for both ko2iblnd and kgnilnd:
      options ko2iblnd peer_timeout=180
      options kgnilnd peer_health=60
Comment by Fabio Verzelloni [ 20/Jul/12 ]

Dear Isaac,
I'll get in touch with our network admin to look for errors on our IB network. Regarding your second request, the Cray XE machine has been rebooted, so the /proc/sys/lnet contents are no longer the ones that would be helpful. If anything happens again I'll immediately take a dump from all the router nodes, if that could help.

Regards
Fabio

Comment by Isaac Huang (Inactive) [ 20/Jul/12 ]

Hi Fabio, three more notes on collecting data on routers:

  1. Please enable console logging of network errors: echo +neterror > /proc/sys/lnet/printk
  2. Cliff mentioned that "tar -czvf `hostname`.tgz /proc/sys/lnet/" might fail to grab the files, as some were not readable (a more defensive collection sketch follows this list). Cliff, can you advise how you managed to get the "Better tarball"? Or maybe you could find it in the shell command-line history on 148.187.7.102@o2ib2.
  3. It would be helpful to include a timestamp in the tarball name, e.g. tar -czvf `hostname`_`date +%T`.tgz /proc/sys/lnet/. That will help me correlate the data with events reported in the log files.
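Since plain tar can choke on unreadable procfs entries, something along these lines might be more robust (an illustrative sketch only):

    # dump every readable file under /proc/sys/lnet into one timestamped log
    out="`hostname`_`date +%T`.lnet.log"
    for f in /proc/sys/lnet/*; do
        [ -r "$f" ] || continue          # skip entries we cannot read
        echo "===== $f =====" >> "$out"
        cat "$f" >> "$out" 2>/dev/null
    done
    gzip "$out"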

Thanks!

Comment by Liang Zhen (Inactive) [ 20/Jul/12 ]

Isaac, I think you meant "options kgnilnd peer_health=1", correct? peer_health is a boolean.

I remember that ko2iblnd peer_buffer_credits on the routers is set to 128, but this needs to be verified; Fabio, could you check? As Isaac said, all module parameters on the routers would be helpful. I saw various versions of the parameters posted on the other ticket, but I'm not sure which is your final choice, so could you post them here?

Comment by Fabio Verzelloni [ 20/Jul/12 ]

We are having a filesystem hang right now; can you please connect to weisshorn and have a look? I'm here to help with logs, details, etc., whatever you need.

Thanks
Fabio

Comment by Isaac Huang (Inactive) [ 20/Jul/12 ]
  1. You're correct on kgnilnd peer_health. Thanks.
  2. If there is only a small number of servers, ko2iblnd peer_buffer_credits could be set higher than 128 (a combined sketch of the recommended settings follows).
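Putting the corrected recommendations together, the router-side module configuration might look roughly like this (the file name and the exact peer_buffer_credits value are illustrative, not a confirmed recommendation):

    # /etc/modprobe.d/lustre-router.conf (example only)
    options ko2iblnd peer_timeout=180 peer_buffer_credits=256
    options kgnilnd peer_health=1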
Comment by Fabio Verzelloni [ 20/Jul/12 ]

> Please enable console logging of network errors: echo +neterror > /proc/sys/lnet/printk
Done on the whole Lustre cluster (weisshorn).

>1- Server IB network:
>the errors reported by ibcheckerrors should be double checked. Also, the error counters should be reset, so that later we can query them again and be able to interpret >the results better.

Today I'll have a look with the network administrator.

>2- Client GNI network:
>If the errors were all about nodes in the halted blade, then they can be ignored. Otherwise, they must be investigated.

Yes, the gni network errors were all from the halted nodes.

>3- On routers:
>-More buffer credits should be granted to the servers. I'd need to see the module options on routers and the files under /proc/sys/lnet/ to make suggestion on router >buffer settings.
>-Peer health option must be turned on for both the ko2iblnd and the gnilnd:
> options ko2iblnd peer_timeout=180
> options kgnilnd peer_health=60

I'll do it at the first reboot of the cluster.

Comment by Fabio Verzelloni [ 20/Jul/12 ]

I'm going to run an fsck on the MDT.

Fabio

Comment by Fabio Verzelloni [ 20/Jul/12 ]

Is it normal that the MDT is using 110GB? I think I've never seen the MDT so full.

Thanks
Fabio

Comment by Liang Zhen (Inactive) [ 20/Jul/12 ]

I've added Fanyong to the CC list. If the MDT size is growing faster than you expected, it's very likely because our OI files grow forever (LU-1512). It's a design defect of IAM; Fanyong has already worked out a patch, but I don't know whether it can be applied to an existing filesystem.

Fanyong, could you comment on this?

Comment by Liang Zhen (Inactive) [ 20/Jul/12 ]

sorry, deleted a comment that was posted on the wrong ticket

Comment by nasf (Inactive) [ 23/Jul/12 ]

Hi Fabio,

Can you please mount the MDT device as type "ldiskfs" and check which file(s) are consuming so much space? (A sketch of the procedure is below.)
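A minimal sketch of that procedure, with /dev/mdtdev standing in for the actual MDT device (run while the MDT is stopped, and mount read-only to be safe):

    # mount the MDT backing filesystem directly as ldiskfs
    mkdir -p /mnt/mdt-ldiskfs
    mount -t ldiskfs -o ro /dev/mdtdev /mnt/mdt-ldiskfs

    # list the largest space consumers; the OI files mentioned above
    # live at the top level of the MDT
    du -ax /mnt/mdt-ldiskfs 2>/dev/null | sort -n | tail -20

    umount /mnt/mdt-ldiskfs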
