[LU-7794] tgt_clients_data_init()) soaked-MDT0001: duplicate export for client generation 1 Created: 18/Feb/16  Updated: 03/Nov/22  Resolved: 03/Nov/22

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.8.0
Fix Version/s: None

Type: Bug Priority: Critical
Reporter: Di Wang Assignee: Gregoire Pichon
Resolution: Duplicate Votes: 0
Labels: soak

Issue Links:
Duplicate
duplicates LU-15935 MDT mount fails with "duplicate gener... Resolved
Related
is related to LU-7430 General protection fault: 0000 upon m... Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

We saw this during the DNE failover soak test.

Lustre: 4374:0:(client.c:2063:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1455822992/real 1455822992]  req@ffff8807bba3b980 x1526529651962748/t0(0) o250->MGC192.168.1.108@o2ib10@0@lo:26/25 lens 520/544 e 0 to 1 dl 1455823038 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 4374:0:(client.c:2063:ptlrpc_expire_one_request()) Skipped 37 previous similar messages
LustreError: 8115:0:(tgt_lastrcvd.c:1464:tgt_clients_data_init()) soaked-MDT0001: duplicate export for client generation 3
LustreError: 8115:0:(obd_config.c:578:class_setup()) setup soaked-MDT0001 failed (-114)
LustreError: 8115:0:(obd_config.c:1666:class_config_llog_handler()) MGC192.168.1.108@o2ib10: cfg command failed: rc = -114
Lustre:    cmd=cf003 0:soaked-MDT0001  1:soaked-MDT0001_UUID  2:1  3:soaked-MDT0001-mdtlov  4:f  

As a result, the failed-over MDT cannot be mounted on the new MDS. Similar to LU-7430, but with no panic this time.
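
For reference (my note, not from the original logs): the rc = -114 returned by class_setup() is -EALREADY on Linux, which is also why mount.lustre reports "Operation already in progress" when the mount is later retried. A trivial, Lustre-independent check:

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Print the errno value and message corresponding to the -114 seen above. */
int main(void)
{
        printf("EALREADY = %d (%s)\n", EALREADY, strerror(EALREADY));
        return 0;
}

On Linux this prints "EALREADY = 114 (Operation already in progress)".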



 Comments   
Comment by Di Wang [ 18/Feb/16 ]

debuglog snippet

00000020:01000000:8.0:1455822976.430423:0:8057:0:(obd_config.c:1562:class_config_llog_handler()) For 2.x interoperability, rename obd type from lov to lod (soaked-MDT0001)
00000020:00000080:8.0:1455822976.430426:0:8057:0:(obd_config.c:1148:class_process_config()) processing cmd: cf001
00000020:00000080:8.0:1455822976.430429:0:8057:0:(obd_config.c:362:class_attach()) attach type lod name: soaked-MDT0001-mdtlov uuid: soaked-MDT0001-mdtlov_UUID
00000020:00000080:8.0:1455822976.430526:0:8057:0:(genops.c:371:class_newdev()) Adding new device soaked-MDT0001-mdtlov (ffff8807b479f078)
00000020:00000080:8.0:1455822976.430530:0:8057:0:(obd_config.c:432:class_attach()) OBD: dev 59 attached type lod with refcount 1
00000020:00000080:8.0:1455822976.430534:0:8057:0:(obd_config.c:1148:class_process_config()) processing cmd: cf003
00080000:01000000:8.0:1455822976.430613:0:8057:0:(osd_handler.c:6597:osd_obd_connect()) connect #1
00000020:00000080:8.0:1455822976.430617:0:8057:0:(genops.c:1167:class_connect()) connect: client soaked-MDT0001-osd_UUID, cookie 0xe84502d68e3d0d9a
00000020:00000080:8.0:1455822976.430707:0:8057:0:(obd_config.c:552:class_setup()) finished setup of obd soaked-MDT0001-mdtlov (uuid soaked-MDT0001-mdtlov_UUID)
00000020:01000000:8.0:1455822976.430711:0:8057:0:(obd_config.c:1487:class_config_llog_handler()) Marker, inst_flg=0x2 mark_flg=0x2
00000020:00000080:8.0:1455822976.430714:0:8057:0:(obd_config.c:1148:class_process_config()) processing cmd: cf010
00000020:00000080:8.0:1455822976.430715:0:8057:0:(obd_config.c:1218:class_process_config()) marker 8 (0x2) soaked-MDT0001-m lov setup
00000020:01000000:8.0:1455822976.430717:0:8057:0:(obd_config.c:1487:class_config_llog_handler()) Marker, inst_flg=0x0 mark_flg=0x1
00000020:00000080:8.0:1455822976.430720:0:8057:0:(obd_config.c:1148:class_process_config()) processing cmd: cf010
00000020:00000080:8.0:1455822976.430721:0:8057:0:(obd_config.c:1218:class_process_config()) marker 9 (0x1) soaked-MDT0001 add mdt
00000020:00000080:8.0:1455822976.430724:0:8057:0:(obd_config.c:1148:class_process_config()) processing cmd: cf001
00000020:00000080:8.0:1455822976.430725:0:8057:0:(obd_config.c:362:class_attach()) attach type mdt name: soaked-MDT0001 uuid: soaked-MDT0001_UUID
00000020:00000080:8.0:1455822976.430816:0:8057:0:(genops.c:371:class_newdev()) Adding new device soaked-MDT0001 (ffff8807a97fd038)
00000020:00000080:8.0:1455822976.430819:0:8057:0:(obd_config.c:432:class_attach()) OBD: dev 60 attached type mdt with refcount 1
00000020:00000080:8.0:1455822976.430822:0:8057:0:(obd_config.c:1148:class_process_config()) processing cmd: cf007
00000020:00000080:8.0:1455822976.430823:0:8057:0:(obd_config.c:1176:class_process_config()) mountopt: profile soaked-MDT0001 osc soaked-MDT0001-mdtlov mdc (null)
00000020:01000000:8.0:1455822976.430826:0:8057:0:(obd_config.c:873:class_add_profile()) Add profile soaked-MDT0001
00000020:00000080:8.0:1455822976.430831:0:8057:0:(obd_config.c:1148:class_process_config()) processing cmd: cf003
00000020:01000004:8.0:1455822976.430883:0:8057:0:(obd_mount_server.c:175:server_get_mount()) get mount ffff8807c8232400 from soaked-MDT0001, refs=2
00000020:00000080:8.0:1455822976.430908:0:8057:0:(obd_config.c:362:class_attach()) attach type mdd name: soaked-MDD0001 uuid: soaked-MDD0001_UUID
00000020:00000080:8.0:1455822976.430998:0:8057:0:(genops.c:371:class_newdev()) Adding new device soaked-MDD0001 (ffff8807ea567078)
00000020:00000080:8.0:1455822976.431001:0:8057:0:(obd_config.c:432:class_attach()) OBD: dev 61 attached type mdd with refcount 1
00000004:01000000:8.0:1455822976.431060:0:8057:0:(lod_dev.c:1687:lod_obd_connect()) connect #0
00000020:00000080:8.0:1455822976.431064:0:8057:0:(genops.c:1167:class_connect()) connect: client soaked-MDT0001-mdtlov_UUID, cookie 0xe84502d68e3d0daf
00000020:00000080:8.0:1455822976.431096:0:8057:0:(obd_config.c:552:class_setup()) finished setup of obd soaked-MDD0001 (uuid soaked-MDD0001_UUID)
00000004:01000000:8.0:1455822976.431103:0:8057:0:(mdd_device.c:1178:mdd_obd_connect()) connect #0
00000020:00000080:8.0:1455822976.431106:0:8057:0:(genops.c:1167:class_connect()) connect: client soaked-MDD0001_UUID, cookie 0xe84502d68e3d0db6
00080000:01000000:8.0:1455822976.431113:0:8057:0:(osd_handler.c:6597:osd_obd_connect()) connect #2
00000020:00000080:8.0:1455822976.431115:0:8057:0:(genops.c:1167:class_connect()) connect: client soaked-MDT0001-osd_UUID, cookie 0xe84502d68e3d0dbd
00000001:00080000:8.0:1455822976.461440:0:8057:0:(tgt_lastrcvd.c:1427:tgt_clients_data_init()) RCVRNG CLIENT uuid: soaked-MDT0000-mdtlov_UUID idx: 0 lr: 17182107883 srv lr: 17180780327 lx: 1526530761809700 gen 0
00000001:00080000:8.0:1455822976.461464:0:8057:0:(tgt_lastrcvd.c:1427:tgt_clients_data_init()) RCVRNG CLIENT uuid: soaked-MDT0003-mdtlov_UUID idx: 1 lr: 17181767205 srv lr: 17180780327 lx: 1526529647176604 gen 0
00000001:00080000:8.0:1455822976.461475:0:8057:0:(tgt_lastrcvd.c:1427:tgt_clients_data_init()) RCVRNG CLIENT uuid: soaked-MDT0002-mdtlov_UUID idx: 2 lr: 17182149935 srv lr: 17180780327 lx: 1526529649871472 gen 0
00000001:00080000:8.0:1455822976.461483:0:8057:0:(tgt_lastrcvd.c:1427:tgt_clients_data_init()) RCVRNG CLIENT uuid: soaked-MDT0005-mdtlov_UUID idx: 3 lr: 17182105205 srv lr: 17180780327 lx: 1526525227761264 gen 0
00000001:00080000:8.0:1455822976.461493:0:8057:0:(tgt_lastrcvd.c:1427:tgt_clients_data_init()) RCVRNG CLIENT uuid: soaked-MDT0004-mdtlov_UUID idx: 4 lr: 17182152956 srv lr: 17180780327 lx: 1526525229666900 gen 0
00000001:00080000:8.0:1455822976.461503:0:8057:0:(tgt_lastrcvd.c:1427:tgt_clients_data_init()) RCVRNG CLIENT uuid: soaked-MDT0007-mdtlov_UUID idx: 5 lr: 17182138513 srv lr: 17180780327 lx: 1526536609097660 gen 0
00000001:00080000:8.0:1455822976.461512:0:8057:0:(tgt_lastrcvd.c:1427:tgt_clients_data_init()) RCVRNG CLIENT uuid: soaked-MDT0006-mdtlov_UUID idx: 6 lr: 17181782621 srv lr: 17180780327 lx: 1526536607769984 gen 0
00000001:00080000:8.0:1455822976.461520:0:8057:0:(tgt_lastrcvd.c:1427:tgt_clients_data_init()) RCVRNG CLIENT uuid: c2781877-e222-f9a8-07f4-a1250eba2af6 idx: 7 lr: 0 srv lr: 17180780327 lx: 0 gen 1
00000001:00080000:8.0:1455822976.461530:0:8057:0:(tgt_lastrcvd.c:1427:tgt_clients_data_init()) RCVRNG CLIENT uuid: 150208e6-71a8-375e-d139-d478eec5b761 idx: 8 lr: 0 srv lr: 17180780327 lx: 0 gen 1
00000001:00020000:8.0:1455822976.461539:0:8057:0:(tgt_lastrcvd.c:1464:tgt_clients_data_init()) soaked-MDT0001: duplicate export for client generation 1
Comment by Peter Jones [ 19/Feb/16 ]

Gregoire

Could you please advise?

Thanks

Peter

Comment by Gregoire Pichon [ 19/Feb/16 ]

Honestly, I have no idea what could lead to that duplicate export situation.

Could you detail what tests were running on the filesystem?

Do you think you could reconstruct the operations (client connections/disconnections, MDT mount/unmount, other MDTs' connections/disconnections, ...) that occurred on the filesystem from the node logs?

Comment by Di Wang [ 20/Feb/16 ]

Hmm, I am not sure we can reproduce the problem easily, and I am not sure there are node logs. If you need the debug log, I can upload it somewhere.

Comment by Gregoire Pichon [ 20/Feb/16 ]

Yes, could you upload the debug logs as an attachment to this JIRA ticket?

Are the D_INFO messages written to the logs?
It would be helpful to look for the D_INFO messages logged in tgt_client_new():

CDEBUG(D_INFO, "%s: new client at index %d (%llu) with UUID '%s' "
        "generation %d\n",
        tgt->lut_obd->obd_name, ted->ted_lr_idx, ted->ted_lr_off,
        ted->ted_lcd->lcd_uuid, ted->ted_lcd->lcd_generation);

and to see in which context they were called.

And again, could you please detail what tests were running on the filesystem?

Comment by Di Wang [ 22/Feb/16 ]

I uploaded the debug log to ftp.hpdd.intel.com:/uploads/lu-7794/

The setup is as follows: 4 MDSes, each with 2 MDTs. MDS0 (MDT0/1), MDS1 (MDT2/3), MDS2 (MDT4/5), MDS3 (MDT6/7). MDS0 and MDS1 are configured as an active/active failover pair, and MDS2 and MDS3 are configured as an active/active failover pair.

The test randomly chooses one MDS to reboot; its MDTs are then failed over to its partner MDS. In this case MDS0 was restarted, so MDT0/MDT1 should have been mounted on MDS1, but the mount failed because of this error. No, I do not think this can be easily reproduced.

Maybe Frank or Cliff will know more.

Comment by Frank Heckes (Inactive) [ 22/Feb/16 ]

Gregoire: I hope the information below answers your questions. Please let me know if something is missing or unclear.

small environment info addition

  • lola-8 --> MDS0, lola-9 --> MDS1, lola-10 --> MDS2, lola-11 --> MDS3
  • MDTs formatted with ldiskfs , OSTs using zfs
  • Failover procedure
    • Triggered by automated framework
    • (Randomly) selected node is power-cycled at a (randomly) chosen time
    • Wait till node is up again
    • Mount resources on failover partner:
      Mount MDT resources in sequence
      Wait for mount command to complete
      If an error occurs, retry mounting the Lustre MDT(s).
    • Framework is configured for test session NOT to wait for RECOVERY process to complete
    • umount MDTs on secondary node
    • mount MDTs on primary node again
  • NOTE:
    I checked the soak framework. I'm very sure that the implementation won't execute multiple
    mount commands at the same time or start a new mount command while a previously started one
    has not finished yet.
    The framework does not check whether a mount command started outside the framework is already running.

Concerning 'sequence of events'

2016-02-18 11:07:14,437:fsmgmt.fsmgmt:INFO     triggering fault mds_failover
2016-02-18 11:07:14,438:fsmgmt.fsmgmt:INFO     reseting MDS node lola-8     (--> Node was powercycled!)
2016-02-18 11:07:14,439:fsmgmt.fsmgmt:INFO     executing cmd pm -h powerman -c lola-8> /dev/null
2016-02-18 11:07:28,291:fsmgmt.fsmgmt:INFO     trying to connect to lola-8 ...
2016-02-18 11:07:38,307:fsmgmt.fsmgmt:INFO     trying to connect to lola-8 ...
2016-02-18 11:07:46,410:fsmgmt.fsmgmt:INFO     trying to connect to lola-8 ...
...
...
2016-02-18 11:13:36,132:fsmgmt.fsmgmt:INFO     trying to connect to lola-8 ...
2016-02-18 11:13:37,060:fsmgmt.fsmgmt:INFO     lola-8 is up!!!
2016-02-18 11:13:48,072:fsmgmt.fsmgmt:INFO     Failing over soaked-MDT0001 ...
2016-02-18 11:13:48,073:fsmgmt.fsmgmt:INFO     Mounting soaked-MDT0001 on lola-9 ...
2016-02-18 11:16:16,760:fsmgmt.fsmgmt:ERROR    ... mount of soaked-MDT0001 on lola-9 failed with 114, retrying ...
2016-02-18 11:16:16,760:fsmgmt.fsmgmt:INFO     mount.lustre: increased /sys/block/dm-10/queue/max_sectors_kb from 1024 to 16383
mount.lustre: increased /sys/block/dm-8/queue/max_sectors_kb from 1024 to 16383
mount.lustre: increased /sys/block/sdg/queue/max_sectors_kb from 1024 to 16383
mount.lustre: increased /sys/block/sdb/queue/max_sectors_kb from 1024 to 16383
mount.lustre: mount /dev/mapper/360080e50002ffd820000024f52013094p1 at /mnt/soaked-mdt1 failed: Operation already in progress
The target service is already running. (/dev/mapper/360080e50002ffd820000024f52013094p1)
2016-02-18 11:18:12,738:fsmgmt.fsmgmt:ERROR    ... mount of soaked-MDT0001 on lola-9 failed with 114, retrying ...
2016-02-18 11:18:12,739:fsmgmt.fsmgmt:INFO     mount.lustre: mount /dev/mapper/360080e50002ffd820000024f52013094p1 at /mnt/soaked-mdt1 failed: Operation already in progress
The target service is already running. (/dev/mapper/360080e50002ffd820000024f52013094p1)
2016-02-18 11:26:58,512:fsmgmt.fsmgmt:INFO     ... soaked-MDT0001 mounted successfully on lola-9
2016-02-18 11:26:58,513:fsmgmt.fsmgmt:INFO     ... soaked-MDT0001 failed over
2016-02-18 11:26:58,513:fsmgmt.fsmgmt:INFO     Failing over soaked-MDT0000 ...
2016-02-18 11:26:58,513:fsmgmt.fsmgmt:INFO     Mounting soaked-MDT0000 on lola-9 ...
2016-02-18 11:27:51,049:fsmgmt.fsmgmt:INFO     ... soaked-MDT0000 mounted successfully on lola-9
2016-02-18 11:27:51,049:fsmgmt.fsmgmt:INFO     ... soaked-MDT0000 failed over
2016-02-18 11:28:11,430:fsmgmt.fsmgmt:DEBUG    Recovery Result Record: {'lola-9': {'soaked-MDT0001': 'RECOVERING', 'soaked-MDT0000': 'RECOVERING', 'soaked-MDT0003': 'COMPLETE', 'soaked-MDT0002': 'COMPLETE'}}
2016-02-18 11:28:11,431:fsmgmt.fsmgmt:INFO     soaked-MDT0001 in status 'RECOVERING'.
2016-02-18 11:28:11,431:fsmgmt.fsmgmt:INFO     soaked-MDT0000 in status 'RECOVERING'.
2016-02-18 11:28:11,431:fsmgmt.fsmgmt:INFO     Don't wait for recovery to complete. Failback MDT's immediately
2016-02-18 11:28:11,431:fsmgmt.fsmgmt:INFO     Failing back soaked-MDT0001 ...
2016-02-18 11:28:11,431:fsmgmt.fsmgmt:INFO     Unmounting soaked-MDT0001 on lola-9 ...
2016-02-18 11:28:12,078:fsmgmt.fsmgmt:INFO     ... soaked-MDT0001 unmounted successfully on lola-9
2016-02-18 11:28:12,079:fsmgmt.fsmgmt:INFO     Mounting soaked-MDT0001 on lola-8 ...
2016-02-18 11:29:03,122:fsmgmt.fsmgmt:INFO     ... soaked-MDT0001 mounted successfully on lola-8
2016-02-18 11:29:03,122:fsmgmt.fsmgmt:INFO     ... soaked-MDT0001 failed back
2016-02-18 11:29:03,123:fsmgmt.fsmgmt:INFO     Failing back soaked-MDT0000 ...
2016-02-18 11:29:03,123:fsmgmt.fsmgmt:INFO     Unmounting soaked-MDT0000 on lola-9 ...
2016-02-18 11:29:09,942:fsmgmt.fsmgmt:INFO     ... soaked-MDT0000 unmounted successfully on lola-9
2016-02-18 11:29:09,942:fsmgmt.fsmgmt:INFO     Mounting soaked-MDT0000 on lola-8 ...
2016-02-18 11:29:24,023:fsmgmt.fsmgmt:INFO     ... soaked-MDT0000 mounted successfully on lola-8
2016-02-18 11:29:24,023:fsmgmt.fsmgmt:INFO     ... soaked-MDT0000 failed back
2016-02-18 11:29:24,024:fsmgmt.fsmgmt:INFO     mds_failover just completed
2016-02-18 11:29:24,024:fsmgmt.fsmgmt:INFO     next fault in 1898s

The error message at 2016-02-18 11:16:16,760 (... failed with 114 ...) is strange since, as stated above, the soak framework
doesn't execute multiple mounts of the same device file.
I'm not sure whether some manual mount of MDT1 (outside the soak framework) took place; I'm sure I didn't execute anything on the node at that time.

applications executed

  • mdtest (single shared file, file per process)
  • IOR (single shared file, file per process)
  • simul
  • blogbench
  • kcompile
  • pct (producer, consumer inhouse application)
    All applications are initiated with random sizes, file counts, and block sizes. If needed, I could provide the (Slurm) list of active jobs at the time the error occurred.
Comment by Alex Zhuravlev [ 22/Feb/16 ]

It's interesting that in both this case and LDEV-180 the duplicated entry had generation=1, meaning that for some reason the counter got reset.

Comment by Gregoire Pichon [ 29/Feb/16 ]

Di Wang,

Unfortunately, I cannot get the debug log from ftp.hpdd.intel.com:/uploads/lu-7794/, since the anonymous login only provides write access.

Would there be another way to provide these logs?

Comment by Gregoire Pichon [ 29/Feb/16 ]

Why is the error reported in the description of the ticket (soaked-MDT0001: duplicate export for client generation 3) not the same as the error reported in Di Wang's first comment (soaked-MDT0001: duplicate export for client generation 1)?

Does it mean there have been several occurrences of the issue?
Could it be possible to have the logs of the first occurrence?

By the way, how many Lustre clients are connected to the filesystem?
Are the two clients with UUIDs c2781877-e222-f9a8-07f4-a1250eba2af6 and 150208e6-71a8-375e-d139-d478eec5b761 different from the others?

Comment by Cliff White (Inactive) [ 29/Feb/16 ]

There are 8 clients connected to the filesystem. I will check on client differences, but afaik they are uniform.

Comment by Peter Jones [ 29/Feb/16 ]

Grégoire

To turn the question around - is there someone you would like the logs uploaded to so that you can access them?

Peter

Comment by Di Wang [ 29/Feb/16 ]
Does it mean there have been several occurrences of the issue?
Could it be possible to have the logs of the first occurrence?

Yes, there are several occurrences according to the log. If you tell me a location, I can upload the debug log there. Unfortunately the debug log is too big (more than 100 MB) to be uploaded here.

Comment by Gregoire Pichon [ 10/Mar/16 ]

I was able to access the log yesterday. Unfortunately, I have not found any useful information that could help identify the cause of the problem.

Anyway, I wonder if the tgt_client_new() routine could be called after tgt_init() and before tgt_clients_data_init(), leading to the client's lcd_generation being assigned from a wrong lut_client_generation value. lut_client_generation is initialized to 0 in tgt_init(), but is updated with the highest client generation read from the last_rcvd file in case of recovery.

In case of recovery, if a client connects to the MDT in parallel with the target initialization, are its obd_export and tg_export_data structures initialized immediately?
In case of recovery, if a client connects to the MDT in parallel of the target initialization, are its obd_export and tg_export_data structures immediately initialized ?

Generated at Sat Feb 10 02:11:58 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.