[LU-8077] LLNL BGQ IO-node Lustre Client Not Reconnecting Created: 27/Apr/16  Updated: 06/Jul/18  Resolved: 06/Jul/18

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.5.4
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Lance Weems Assignee: Joseph Gmitter (Inactive)
Resolution: Not a Bug Votes: 0
Labels: llnl

Severity: 3
Epic: client
Rank (Obsolete): 9223372036854775807

 Description   

Lustre Clients: lustre-client-ion-2.5.4-16chaos_2.6.32_504.8.2.bgq.4blueos.V1R2M3.bl2.2_11.ppc64.ppc64
Lustre Servers: lustre-2.5.5-3chaos_2.6.32_573.18.1.1chaos.ch5.4.x86_64.x86_64

On our IBM BGQ system Vulcan at LLNL, the I/O nodes (IONs) have been experiencing what is believed to be repeated OST connection issues affecting user jobs. Recently two IONs with reported issues have been identified. The rack has been drained and the IONs left as is. The command "lfs check servers" reports the following errors:

vulcanio121: fsv-OST0017-osc-c0000003e09f49c0: check error: Resource temporarily unavailable
vulcanio127: fsv-OST0017-osc-c0000003c4483300: check error: Resource temporarily unavailable

Output from the proc "import" file for the affected OST:

vulcanio121-ib0@root: cat /proc/fs/lustre/osc/fsv-OST0017-osc-c0000003e09f49c0/import
import:
    name: fsv-OST0017-osc-c0000003e09f49c0
    target: fsv-OST0017_UUID
    state: REPLAY
    connect_flags: [ write_grant, server_lock, version, request_portal, truncate_lock, max_byte_per_rpc, early_lock_cancel, adaptive_timeouts, lru_resize, alt_checksum_algorithm, fid_is_enabled, version_recovery, full20, layout_lock, 64bithash, object_max_bytes, jobstats, einprogress, lvb_type ]
    connect_data:
       flags: 0x4af0e3440478
       instance: 45
       target_version: 2.5.5.0
       initial_grant: 2097152
       max_brw_size: 4194304
       grant_block_size: 0
       grant_inode_size: 0
       grant_extent_overhead: 0
       cksum_types: 0x2
       max_easize: 32768
       max_object_bytes: 9223372036854775807
    import_flags: [ replayable, pingable, connect_tried ]
    connection:
       failover_nids: [ 172.20.20.23@o2ib500, 172.20.20.24@o2ib500 ]
       current_connection: 172.20.20.23@o2ib500
       connection_attempts: 39
       generation: 1
       in-progress_invalidations: 0
    rpcs:
       inflight: 168
       unregistering: 1
       timeouts: 20977
       avg_waittime: 209959 usec
    service_estimates:
       services: 48 sec
       network: 45 sec
    transactions:
       last_replay: 0
       peer_committed: 150323856033
       last_checked: 150323856033
    read_data_averages:
       bytes_per_rpc: 69000
       usec_per_rpc: 4389
       MB_per_sec: 15.72
    write_data_averages:
       bytes_per_rpc: 893643
       usec_per_rpc: 2458
       MB_per_sec: 363.56
vulcanio121-ib0@root: 

vulcanio127-ib0@root: cat /proc/fs/lustre/osc/fsv-OST0017-osc-c0000003c4483300/import 
import:
    name: fsv-OST0017-osc-c0000003c4483300
    target: fsv-OST0017_UUID
    state: REPLAY
    connect_flags: [ write_grant, server_lock, version, request_portal, truncate_lock, max_byte_per_rpc, early_lock_cancel, adaptive_timeouts, lru_resize, alt_checksum_algorithm, fid_is_enabled, version_recovery, full20, layout_lock, 64bithash, object_max_bytes, jobstats, einprogress, lvb_type ]
    connect_data:
       flags: 0x4af0e3440478
       instance: 45
       target_version: 2.5.5.0
       initial_grant: 2097152
       max_brw_size: 4194304
       grant_block_size: 0
       grant_inode_size: 0
       grant_extent_overhead: 0
       cksum_types: 0x2
       max_easize: 32768
       max_object_bytes: 9223372036854775807
    import_flags: [ replayable, pingable, connect_tried ]
    connection:
       failover_nids: [ 172.20.20.23@o2ib500, 172.20.20.24@o2ib500 ]
       current_connection: 172.20.20.23@o2ib500
       connection_attempts: 36
       generation: 1
       in-progress_invalidations: 0
    rpcs:
       inflight: 131
       unregistering: 1
       timeouts: 19341
       avg_waittime: 144395 usec
    service_estimates:
       services: 45 sec
       network: 50 sec
    transactions:
       last_replay: 0
       peer_committed: 150323856116
       last_checked: 150323856116
    read_data_averages:
       bytes_per_rpc: 67548
       usec_per_rpc: 3326
       MB_per_sec: 20.30
    write_data_averages:
       bytes_per_rpc: 913996
       usec_per_rpc: 5909
       MB_per_sec: 154.67
vulcanio127-ib0@root: 

The disconnects appear to have happened when we updated our Lustre cluster. All other IONs reconnected with no problem once the Lustre cluster was back up and running.
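
For reference, a rough way to survey this state from an ION is shown below; the grep filters are only a convenience and the raw /proc paths used above work just as well:

# Report per-target connectivity; healthy targets normally report "active"
lfs check servers 2>&1 | grep -v active

# Show the connection state of every OSC import on this client
lctl get_param osc.*.import | grep -E '^osc|state:'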



 Comments   
Comment by Oleg Drokin [ 28/Apr/16 ]

So it looks like the io nodes reconnect just fine, but then disconnect again?
What's the picture on the server concerning those nodes?

Comment by Ned Bass [ 28/Apr/16 ]

Oleg, not sure I understand your question. The io nodes disconnected from the OST when it was rebooted and remain in a disconnected state. The server is up and can lctl ping the io nodes. The server has no export under /proc/fs/lustre for those clients. What else specifically would you like to know about the server?
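
(For reference, the export check on the OSS side is just a directory listing; the OST name below matches this report and the NID format is illustrative:)

# On the OSS serving fsv-OST0017: a connected client appears as a
# directory named after its NID, e.g. 172.20.x.y@o2ib500
ls /proc/fs/lustre/obdfilter/fsv-OST0017/exports/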

Comment by Oleg Drokin [ 29/Apr/16 ]

Oh, it was not apparent from the logs that there were multiple clients referenced.

So to reconstruct the picture better: the io nodes in question disconnected from the OSTs; do they then try to reconnect and fail, or not try to reconnect at all?
Can we get a better log snippet from just one of the io nodes, covering the period from before the moment of reconnection through several failed reconnect attempts?
When a reconnect happens, does the server see it? Is there any reporting about this client from the server?
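
If it helps, one way to capture such a snippet on an affected ION, should this recur, is sketched below; the debug mask, buffer size, and wait are only suggestions:

# Enlarge the debug buffer and enable RPC and lock tracing
lctl set_param debug_mb=512
lctl set_param debug="+rpctrace +dlmtrace"
# Clear old entries, wait through a few reconnect attempts, then dump
lctl clear
sleep 300
lctl dk /tmp/lustre-debug-$(hostname).log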

Comment by Ned Bass [ 29/Apr/16 ]

I only see the connection lost message on the clients:

[root@vulcansn1:~]# grep OST0017 /bgsys/logs/BGQ.sn/R17-ID-J00.log                                                                                                                                                                                                                                                                                                           
2016-04-14 11:01:36.246847 {RMP14Ap004250453} [mmcs]{128}.15.1: Lustre: fsv-OST0017-osc-c0000003e09f49c0: Connection to fsv-OST0017 (at 172.20.20.24@o2ib500) was lost; in progress operations using this service will wait for recovery to complete
[root@vulcansn1:~]# grep OST0017 /bgsys/logs/BGQ.sn/R17-ID-J06.log                                                                                                                                                                                                                                                                                                           
2016-04-14 11:01:44.047027 {RMP14Ap004250453} [mmcs]{134}.1.1: Lustre: fsv-OST0017-osc-c0000003c4483300: Connection to fsv-OST0017 (at 172.20.20.24@o2ib500) was lost; in progress operations using this service will wait for recovery to complete

The server didn't log anything about connection attempts from these clients. I'll also note that both disconnected clients had non-zero read RPCs in flight in the rpc_stats file. Both were stuck in the REPLAY state. The clients have since been shut down. If I recall correctly the state history looked something like this:

current_state: REPLAY
state_history:
 - [ 1460666257, CONNECTING ]
 - [ 1460666307, DISCONN ]
 - [ 1460666307, CONNECTING ]
 - [ 1460666332, DISCONN ]
 - [ 1460666332, CONNECTING ]
 - [ 1460666382, DISCONN ]
 - [ 1460666382, CONNECTING ]
 - [ 1460666417, DISCONN ]
 - [ 1460666432, CONNECTING ]
 - [ 1460666487, DISCONN ]
 - [ 1460666507, CONNECTING ]
 - [ 1460666507, REPLAY ]
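
(The state history above comes from the per-import "state" proc file; a rough way to collect it together with the RPC stats for the stuck target, assuming the OST name, is:)

lctl get_param osc.fsv-OST0017-osc-*.state
lctl get_param osc.fsv-OST0017-osc-*.rpc_stats
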
Comment by Christopher Morrone [ 29/Apr/16 ]

I glanced at the clients, and I believe that they both had started replay but didn't finish, and then got stuck on the subsequent replay.

Comment by Oleg Drokin [ 30/Apr/16 ]

Hm, I wonder if it was one of the recently uncovered bulk or RPC state hangups like LU-7434.

Comment by Oleg Drokin [ 02/May/16 ]

Next time something like this happens it would be great if you can get backtraces from the Lustre threads to see what they were stuck on.

Comment by Christopher Morrone [ 02/May/16 ]

I looked at the backtraces on one of the nodes and didn't see any lustre threads stuck on anything. It looked like all were just in their normal idle waiting states.

Comment by John Fuchs-Chesney (Inactive) [ 02/May/16 ]

Hello Lance and team,

Is there any more work required on this ticket? Or can we go ahead and mark it as resolved?

Many thanks,
~ jfc.

Comment by Ned Bass [ 02/May/16 ]

Please keep the ticket open until the underlying bug has been fixed.

Comment by Oleg Drokin [ 02/May/16 ]

Chris, it must have been doing something somewhere, though. Was there an ll_imp_inval thread, for example?

Would be great if you can get a sysrq-t (or an equivalent from a crashdump) should something like this happen again.
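
A rough way to capture that on a live ION, assuming sysrq is enabled (the output path is arbitrary):

# Trigger an all-task backtrace dump into the kernel log and save it
echo 1 > /proc/sys/kernel/sysrq
echo t > /proc/sysrq-trigger
dmesg > /tmp/sysrq-t-$(hostname).txt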

Comment by Christopher Morrone [ 03/May/16 ]

I did a "foreach bt" from crash. That is easier on these nodes than doing sysrq-t. Console access is possible, but a pain. And I don't believe that these nodes have been set up to get crash dumps. But at least crash works on a live system. That is much better than we had on previous BG systems.

I didn't know the nodes were being rebooted without getting more info, or I would have saved it myself. I agree, more needs to be gathered next time this happens.
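
(For the record, the live-system collection amounts to something like the following; the debuginfo path assumes the usual RHEL layout and may differ on the IONs:)

# Run crash against the live kernel (needs matching kernel-debuginfo)
crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux
# Inside crash, save backtraces of every task for later review
crash> foreach bt > /tmp/foreach-bt.txt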

Comment by Oleg Drokin [ 03/May/16 ]

On the off chance, can you see if any of that info is still there in the "xterm" scroll buffer from the time you did the bt?

Comment by Joseph Gmitter (Inactive) [ 07/Jun/18 ]

Can we close this issue?
