[LU-12274] Clients aren't connecting to OST defined failover.node Created: 08/May/19 Updated: 12/May/19 |
|
| Status: | Open |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.10.2 |
| Fix Version/s: | None |
| Type: | Question/Request | Priority: | Blocker |
| Reporter: | Roger Sersted | Assignee: | Sebastien Buisson |
| Resolution: | Unresolved | Votes: | 0 |
| Labels: | None | ||
| Environment: |
Servers: Lustre-2.10.2, Kernel: 3.10.0-693.5.2.el7_lustre.x86_64 |
||
| Attachments: |
|
| Epic/Theme: | Lustre-2.10.2 |
| Rank (Obsolete): | 9223372036854775807 |
| Description |
|
I tried running tunefs.lustre and successfully changed the failover NIDs to what they should be. This problem is happening on several OSTs, but fixing one should fix them all. I'm assuming I forgot a step when I ran tunefs.lustre.

tunefs.lustre --erase-param failover.node --param failover.node=172.17.1.103@o2ib,172.16.1.103@tcp1

The OST OST0017 is mounted on 172.17.1.103 with the following parameters:

[root@apslstr03 ~]# tunefs.lustre --dryrun /dev/mapper/mpathg
Read previous values:
Permanent disk data:
exiting before disk write.

However, the clients are still displaying errors like this:

May 8 11:43:33 localhost kernel: Lustre: 2028:0:(client.c:2114:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1557333772/real 0] req@ffff880bd9296f00 x1632920191594624/t0(0) o8->lustrefc-OST0017-osc-ffff8817ef372000@172.17.1.106@o2ib:28/4
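A quick way to see which NIDs a client currently holds for this target is to query the OSC import on the client (a sketch only; the exact parameter paths may vary with the client's Lustre version):

client# lctl get_param osc.lustrefc-OST0017-osc-*.import         # shows current_connection and failover_nids
client# lctl get_param osc.lustrefc-OST0017-osc-*.ost_conn_uuid

The error above shows the client is still trying 172.17.1.106@o2ib rather than the newly configured failover NID. |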
| Comments |
| Comment by Peter Jones [ 09/May/19 ] |
|
Sebastien, could you please advise here? Thanks, Peter |
| Comment by Sebastien Buisson [ 09/May/19 ] |
|
Hi, Did you run the tunefs.lustre commands while the targets were stopped (i.e., unmounted)? |
| Comment by Roger Sersted [ 09/May/19 ] |
|
I unmounted the OSTs in question. The OSTs not being modified remained mounted. The MDT and MGT were both mounted. |
| Comment by Sebastien Buisson [ 10/May/19 ] |
|
I think we will need to have a look at the Lustre logs on the MGS. Assuming your Lustre file system name is lustrefc, that would be:

mgs# debugfs -c -R 'dump CONFIGS/lustrefc-client /tmp/lustrefc-client' <mgt device>
mgs# llog_reader /tmp/lustrefc-client > /tmp/lustrefc-client.txt

Thanks.
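Once you have the text dump, the NID-related records are usually the interesting part; a simple filter along these lines should be enough (assuming the dump contains the usual add_uuid and failover.node entries):

mgs# grep -Ei 'add_uuid|failover' /tmp/lustrefc-client.txt |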
| Comment by Roger Sersted [ 10/May/19 ] |
|
I have attached the requested output. I should add that my cluster is down due to this problem. |
| Comment by Sebastien Buisson [ 10/May/19 ] |
|
As the name suggests, the failover.node parameter specifies failover NIDs for targets; it does not reflect the primary NID of a target. Now that you mention your cluster is down, I am wondering if your targets have been moved, so that their primary NID is now different. If the targets did not move and they are all running on their primary node, then this problem with the failover.node change should not lead to any downtime. |
| Comment by Roger Sersted [ 10/May/19 ] |
|
I'm in the process of updating the Lustre servers. I unmounted the OSTs on one set of servers and mounted them on their HA partners. I have done this in the past with one node and it worked fine. I have 6 OSSes configured in HA pairs. I am not running any HA software; if a server fails, I manually unmount and then mount on the HA partner.
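For reference, that manual failover looks roughly like this (a sketch only; the mount point and node names here are illustrative, not my actual ones):

oss-a# umount /lustre/ost0017                               # stop the target on the node being taken down
oss-b# mount -t lustre /dev/mapper/mpathg /lustre/ost0017   # start it on the HA partner |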
| Comment by Sebastien Buisson [ 10/May/19 ] |
|
I can see in the llog_reader output that target lustrefc-OST0017, for instance, is still registered with NIDs 172.17.1.105 and 172.17.1.106. That explains the error messages on the clients.

I just noticed this important message in the Lustre Operations Manual: "If a --failnode option is added to a target to designate a failover server for the target, the target must be re-mounted on the primary node before the --failnode option takes effect."

So the problem you are facing could be that, after tunefs.lustre, the target was mounted directly on the failover node (for instance on 172.17.1.103 for target lustrefc-OST0017). Does that make sense?

Targets should be mounted on the primary node right after a tunefs.lustre that changes the failnodes, and only after that failed over to a secondary node.
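Concretely, the sequence that should work is along these lines (a sketch; I am reusing the device from your description and a hypothetical mount point):

oss-primary# umount /lustre/ost0017
oss-primary# tunefs.lustre --erase-param failover.node --param failover.node=172.17.1.103@o2ib,172.16.1.103@tcp1 /dev/mapper/mpathg
oss-primary# mount -t lustre /dev/mapper/mpathg /lustre/ost0017   # re-mount on the primary node first so the new failnodes take effect
oss-primary# umount /lustre/ost0017                               # only then, if needed, fail over
oss-partner# mount -t lustre /dev/mapper/mpathg /lustre/ost0017 |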
| Comment by Roger Sersted [ 10/May/19 ] |
|
Great catch on that. I went through all of my OSTs, remounted them, and checked the failover setting. I then remounted them on the HA partner, and the filesystem is working. Quick question: how would I change the primary NID of an OST? Would I specify the "servicenode" option? |
| Comment by Sebastien Buisson [ 12/May/19 ] |
|
Good to hear! If you need to change a primary NID, I would advise following the dedicated instructions in the Lustre Operations Manual.
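For what it's worth, the manual's procedure for changing a server NID is built around lctl replace_nids, roughly along these lines (a sketch only; please check the manual for the exact steps and syntax for your version): unmount the clients and all targets, mount only the MGT on the MGS, then for each affected target run something like

mgs# lctl replace_nids lustrefc-OST0017 172.17.1.103@o2ib

and finally remount the targets and clients. As for --servicenode, it declares all listed NIDs as equal service nodes rather than a primary/failover pair, which is a different failover model from --failnode; it is covered in the manual's failover configuration sections.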