Details
- Bug
- Resolution: Duplicate
- Critical
- Lustre 2.4.1
- None
- 3
- 10853
Description
BP has been running into an issue on their 2.4.1 system where clients are not able to connect to the OSTs after a failover.
Looking at the debug logs on the MDS, it looks like the problem is that when the OSTs register, both service node NIDs are assigned to one UUID, which is named after nid[0]. The imperative recovery code, however, uses the UUID name instead of the NID name when creating a connection. This causes imperative recovery to keep trying to connect to the first service node, even when the MGS tells it to connect to the second.
20000000:01000000:0.0:1380683267.395792:0:14496:0:(mgs_handler.c:344:mgs_handle_target_reg()) Server pfs-OST0006 is running on 10.10.160.26@tcp1
...
00000100:00000040:0.0:1380683274.383862:0:14498:0:(lustre_peer.c:200:class_check_uuid()) check if uuid 10.10.160.25@tcp1 has 10.10.160.26@tcp1.
10000000:00000040:0.0:1380683274.383865:0:14498:0:(mgc_request.c:1408:mgc_apply_recover_logs()) Find uuid 10.10.160.25@tcp1 by nid 10.10.160.26@tcp1
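To make the mismatch in the logs above concrete, here is a minimal sketch in C of the mapping they suggest. This is hypothetical illustration code, not the actual mgs/mgc source: registration files both service-node NIDs under a single UUID named after nid[0], so a lookup keyed on the UUID name always resolves back to the first NID, whatever the MGS reports.

/* Hypothetical sketch, not the actual Lustre code. */
#include <stdio.h>

struct uuid_entry {
    char uuid[64];          /* UUID string, here named after nids[0] */
    const char *nids[2];    /* all NIDs registered under this UUID   */
    int nid_count;
};

/* Registration: one entry, named after the first NID, holding both NIDs. */
static void register_target(struct uuid_entry *e,
                            const char *nid0, const char *nid1)
{
    snprintf(e->uuid, sizeof(e->uuid), "%s", nid0);
    e->nids[0] = nid0;
    e->nids[1] = nid1;
    e->nid_count = 2;
}

/* Imperative recovery: the MGS says the target now runs on active_nid,
 * but the connection is created from the UUID name, i.e. nids[0]. */
static const char *pick_connect_nid(const struct uuid_entry *e,
                                    const char *active_nid)
{
    (void)active_nid;       /* the hint from the MGS is effectively ignored */
    return e->uuid;         /* always resolves back to the first NID        */
}

int main(void)
{
    struct uuid_entry e;

    register_target(&e, "10.10.160.25@tcp1", "10.10.160.26@tcp1");
    printf("MGS reports target on 10.10.160.26@tcp1, client connects to %s\n",
           pick_connect_nid(&e, "10.10.160.26@tcp1"));
    return 0;
}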
Here is the tunefs line used to create the OST:
tunefs.lustre --erase-params --writeconf --mgsnode=10.10.160.21@tcp1 --mgsnode=10.10.160.22@tcp1 --servicenode=10.10.160.25@tcp1 --servicenode=10.10.160.26@tcp1 --param ost.quota_type=ug /dev/mapper/ost_pfs_6
Does the import really need to use the UUID, or can it use the NID? Alternatively, should the registration code really be using all of the servicenode NIDs for the same UUID?
This also leads into a concern I have with the fix for LU-3445: there doesn't appear to be any way to distinguish between multiple NIDs belonging to the same node and NIDs belonging to different nodes.
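A small sketch of that ambiguity, again hypothetical C rather than the real mgs_target_info layout: if registration only carries a flat NID list, a single server with two interfaces and two failover servers with one interface each produce records of exactly the same shape.

/* Hypothetical, simplified sketch; not the actual Lustre structures. */
#include <stdio.h>

struct target_info {
    const char *nids[4];
    int nid_count;
    /* No per-NID "node boundary" marker, so a consumer cannot tell
     * whether consecutive NIDs belong to one node or to failover peers. */
};

int main(void)
{
    /* Case A: one server with two interfaces (example NIDs). */
    struct target_info one_node = {
        .nids = { "10.10.160.25@tcp1", "192.168.1.25@o2ib" },
        .nid_count = 2,
    };
    /* Case B: two failover servers with one interface each. */
    struct target_info two_nodes = {
        .nids = { "10.10.160.25@tcp1", "10.10.160.26@tcp1" },
        .nid_count = 2,
    };

    /* Both records look identical; nothing distinguishes A from B. */
    printf("A: %d NIDs, B: %d NIDs\n", one_node.nid_count, two_nodes.nid_count);
    return 0;
}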
Also, I'm sort of confused as to why this wasn't caught in failover testing; is the servicenode parameter not part of the test suite?
Thanks,
Kit
Issue Links
- duplicates LU-4243: multiple servicenodes or failnids: wrong client llog registration (Resolved)