Lustre - LU-7794

tgt_clients_data_init()) soaked-MDT0001: duplicate export for client generation 1

Details

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Critical
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.8.0
    • Severity: 3

    Description

      We saw this during a DNE failover soak test.

      Lustre: 4374:0:(client.c:2063:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1455822992/real 1455822992]  req@ffff8807bba3b980 x1526529651962748/t0(0) o250->MGC192.168.1.108@o2ib10@0@lo:26/25 lens 520/544 e 0 to 1 dl 1455823038 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
      Lustre: 4374:0:(client.c:2063:ptlrpc_expire_one_request()) Skipped 37 previous similar messages
      LustreError: 8115:0:(tgt_lastrcvd.c:1464:tgt_clients_data_init()) soaked-MDT0001: duplicate export for client generation 3
      LustreError: 8115:0:(obd_config.c:578:class_setup()) setup soaked-MDT0001 failed (-114)
      LustreError: 8115:0:(obd_config.c:1666:class_config_llog_handler()) MGC192.168.1.108@o2ib10: cfg command failed: rc = -114
      Lustre:    cmd=cf003 0:soaked-MDT0001  1:soaked-MDT0001_UUID  2:1  3:soaked-MDT0001-mdtlov  4:f  
      

      This causes the failed-over MDT to be unmountable on the new MDS. Similar to LU-7430, but no panic this time.
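
      For reference, rc = -114 is -EALREADY ("Operation already in progress"), which is why class_setup() fails and why mount.lustre keeps retrying in the soak log further below. The snippet that follows is only an illustration with hypothetical names, not the Lustre sources: it sketches the general shape of a last_rcvd-style replay that refuses to create a second export for a client UUID it has already seen, and therefore aborts target setup with -114.

      /* Illustration only -- hypothetical structures, not Lustre code. */
      #include <errno.h>
      #include <stdio.h>
      #include <string.h>

      #define MAX_CLIENTS 64
      #define UUID_LEN    40

      static char seen[MAX_CLIENTS][UUID_LEN];
      static int  nr_seen;

      /* Create an export for one on-disk client record.
       * Returns 0 on success, -EALREADY (-114) if this UUID already has one. */
      static int export_for_record(const char *uuid, unsigned int generation)
      {
              for (int i = 0; i < nr_seen; i++) {
                      if (strncmp(seen[i], uuid, UUID_LEN) == 0) {
                              fprintf(stderr, "duplicate export for client "
                                      "generation %u\n", generation);
                              return -EALREADY;
                      }
              }
              strncpy(seen[nr_seen++], uuid, UUID_LEN - 1);
              return 0;
      }

      int main(void)
      {
              /* Two records resolving to the same client abort the setup. */
              int rc = export_for_record("uuid-A", 1);

              if (rc == 0)
                      rc = export_for_record("uuid-A", 3);
              printf("setup rc = %d\n", rc);   /* prints "setup rc = -114" */
              return 0;
      }

      The same errno is what later surfaces in the soak framework output as "Operation already in progress" when the mount is retried.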

      Attachments

        Issue Links

          Activity

            [LU-7794] tgt_clients_data_init()) soaked-MDT0001: duplicate export for client generation 1

            pichong Gregoire Pichon added a comment -

            I was able to access the log yesterday. Unfortunately, I have not found any useful information that could help identify the cause of the problem.

            Anyway, I wonder whether the tgt_client_new() routine could be called after tgt_init() and before tgt_clients_data_init(), leading it to assign the client's lcd_generation using a wrong lut_client_generation value. The lut_client_generation value is initialized to 0 in tgt_init() but updated with the highest client generation read from the last_rcvd file in case of recovery.

            In case of recovery, if a client connects to the MDT in parallel with the target initialization, are its obd_export and tg_export_data structures immediately initialized?
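
            To make the suspected window concrete, here is a minimal sketch of that ordering, with hypothetical names and simplified logic (it is not the actual tgt_init()/tgt_client_new()/tgt_clients_data_init() code): a client that reconnects before the last_rcvd replay is registered with a generation taken from the still-zero counter, and the replay then trips over the export that already exists for that UUID.

            /* Hypothetical sketch of the suspected ordering issue -- not Lustre code. */
            #include <stdio.h>
            #include <string.h>

            #define UUID_LEN 40

            struct disk_rec {                           /* one last_rcvd client record */
                    char         uuid[UUID_LEN];
                    unsigned int generation;
            };

            static unsigned int lut_client_generation;  /* tgt_init(): reset to 0      */
            static char registered[8][UUID_LEN];        /* UUIDs that own an export    */
            static int  nr_registered;

            /* Early reconnect: generation comes from the not-yet-updated counter. */
            static void client_new(const char *uuid)
            {
                    unsigned int gen = ++lut_client_generation;   /* becomes 1 */

                    strncpy(registered[nr_registered++], uuid, UUID_LEN - 1);
                    printf("new client '%s' generation %u\n", uuid, gen);
            }

            /* last_rcvd replay: an already-registered UUID is a duplicate export;
             * only afterwards is the counter raised to the highest on-disk value. */
            static void clients_data_init(const struct disk_rec *recs, int count)
            {
                    for (int i = 0; i < count; i++) {
                            for (int j = 0; j < nr_registered; j++)
                                    if (strcmp(registered[j], recs[i].uuid) == 0)
                                            printf("duplicate export for client "
                                                   "generation %u\n",
                                                   recs[i].generation);
                            if (recs[i].generation > lut_client_generation)
                                    lut_client_generation = recs[i].generation;
                    }
            }

            int main(void)
            {
                    struct disk_rec recs[] = { { "client-A", 1 }, { "client-B", 2 } };

                    client_new("client-A");       /* connects before the replay */
                    clients_data_init(recs, 2);   /* reports the duplicate      */
                    return 0;
            }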

            di.wang Di Wang added a comment - - edited
            Does it mean there have been several occurrences of the issue?
            Could it be possible to have the logs of the first occurrence?

            Yes, there are several occurrences according to the log. You can tell me one place, and I can upload the debug log there. Unfortunately the debug log is too big (more than 100 MB) to be uploaded here.

            pjones Peter Jones added a comment -

            Grégoire

            To turn the question around - is there someone you would like the logs uploaded to so that you can access them?

            Peter


            cliffw Cliff White (Inactive) added a comment -

            There are 8 clients connected to the filesystem. I will check on client differences, but as far as I know they are uniform.


            pichong Gregoire Pichon added a comment -

            Why is the error reported in the description of the ticket (soaked-MDT0001: duplicate export for client generation 3) not the same as the error reported in the first comment from Di Wang (soaked-MDT0001: duplicate export for client generation 1)?

            Does it mean there have been several occurrences of the issue?
            Could it be possible to have the logs of the first occurrence?

            By the way, how many Lustre clients are connected to the filesystem?
            Are the two clients with uuid c2781877-e222-f9a8-07f4-a1250eba2af6 and 150208e6-71a8-375e-d139-d478eec5b761 different from the others?


            pichong Gregoire Pichon added a comment -

            Di Wang,

            Unfortunately, I cannot get the debug log from ftp.hpdd.intel.com:/uploads/lu-7794/, since the anonymous login only provides write access.

            Would there be another way to provide these logs?


            bzzz Alex Zhuravlev added a comment -

            It's interesting that in this case and in LDEV-180 the duplicated entry had generation=1, meaning that for some reason the counter got reset.

            heckes Frank Heckes (Inactive) added a comment - - edited

            Gregoire: I hope the information below will answer your question. Please let me know if something is missing or unclear.

            small environment info addition

            • lola-8 --> MDS0, lola-9 --> MDS1, lola-10 --> MDS2, lola-11 --> MDS3
            • MDTs formatted with ldiskfs , OSTs using zfs
            • Failover procedure
              • Triggered by automated framework
              • (Random) selected node is powercycled at (randomly) chosen time
              • Wait till node is up again
              • Mount resources on failover partner:
                Mount MDT resources in sequence
                Wait for mount command to complete
                If an error occurs, retry mounting the Lustre MDT(s).
              • Framework is configured for test session NOT to wait for RECOVERY process to complete
              • umount MDTs on secondary node
              • mount MDTs on primary node again
            • NOTE:
              I checked the soak framework. I'm very sure that the implementation won't execute multiple
              mount commands at the same time or start a new mount command while the previous one
              hasn't finished yet.
              The framework doesn't check whether there's already a mount command running that was executed from outside the framework.

            Concerning 'sequence of events'

            2016-02-18 11:07:14,437:fsmgmt.fsmgmt:INFO     triggering fault mds_failover
            2016-02-18 11:07:14,438:fsmgmt.fsmgmt:INFO     reseting MDS node lola-8     (--> Node was powercycled!)
            2016-02-18 11:07:14,439:fsmgmt.fsmgmt:INFO     executing cmd pm -h powerman -c lola-8> /dev/null
            2016-02-18 11:07:28,291:fsmgmt.fsmgmt:INFO     trying to connect to lola-8 ...
            2016-02-18 11:07:38,307:fsmgmt.fsmgmt:INFO     trying to connect to lola-8 ...
            2016-02-18 11:07:46,410:fsmgmt.fsmgmt:INFO     trying to connect to lola-8 ...
            ...
            ...
            2016-02-18 11:13:36,132:fsmgmt.fsmgmt:INFO     trying to connect to lola-8 ...
            2016-02-18 11:13:37,060:fsmgmt.fsmgmt:INFO     lola-8 is up!!!
            2016-02-18 11:13:48,072:fsmgmt.fsmgmt:INFO     Failing over soaked-MDT0001 ...
            2016-02-18 11:13:48,073:fsmgmt.fsmgmt:INFO     Mounting soaked-MDT0001 on lola-9 ...
            2016-02-18 11:16:16,760:fsmgmt.fsmgmt:ERROR    ... mount of soaked-MDT0001 on lola-9 failed with 114, retrying ...
            2016-02-18 11:16:16,760:fsmgmt.fsmgmt:INFO     mount.lustre: increased /sys/block/dm-10/queue/max_sectors_kb from 1024 to 16383
            mount.lustre: increased /sys/block/dm-8/queue/max_sectors_kb from 1024 to 16383
            mount.lustre: increased /sys/block/sdg/queue/max_sectors_kb from 1024 to 16383
            mount.lustre: increased /sys/block/sdb/queue/max_sectors_kb from 1024 to 16383
            mount.lustre: mount /dev/mapper/360080e50002ffd820000024f52013094p1 at /mnt/soaked-mdt1 failed: Operation already in progress
            The target service is already running. (/dev/mapper/360080e50002ffd820000024f52013094p1)
            2016-02-18 11:18:12,738:fsmgmt.fsmgmt:ERROR    ... mount of soaked-MDT0001 on lola-9 failed with 114, retrying ...
            2016-02-18 11:18:12,739:fsmgmt.fsmgmt:INFO     mount.lustre: mount /dev/mapper/360080e50002ffd820000024f52013094p1 at /mnt/soaked-mdt1 failed: Operation already in progress
            The target service is already running. (/dev/mapper/360080e50002ffd820000024f52013094p1)
            2016-02-18 11:26:58,512:fsmgmt.fsmgmt:INFO     ... soaked-MDT0001 mounted successfully on lola-9
            2016-02-18 11:26:58,513:fsmgmt.fsmgmt:INFO     ... soaked-MDT0001 failed over
            2016-02-18 11:26:58,513:fsmgmt.fsmgmt:INFO     Failing over soaked-MDT0000 ...
            2016-02-18 11:26:58,513:fsmgmt.fsmgmt:INFO     Mounting soaked-MDT0000 on lola-9 ...
            2016-02-18 11:27:51,049:fsmgmt.fsmgmt:INFO     ... soaked-MDT0000 mounted successfully on lola-9
            2016-02-18 11:27:51,049:fsmgmt.fsmgmt:INFO     ... soaked-MDT0000 failed over
            2016-02-18 11:28:11,430:fsmgmt.fsmgmt:DEBUG    Recovery Result Record: {'lola-9': {'soaked-MDT0001': 'RECOVERING', 'soaked-MDT0000': 'RECOVERING', 'soaked-MDT0003': 'COMPLETE', 'soaked-MDT0002': 'COMPLETE'}}
            2016-02-18 11:28:11,431:fsmgmt.fsmgmt:INFO     soaked-MDT0001 in status 'RECOVERING'.
            2016-02-18 11:28:11,431:fsmgmt.fsmgmt:INFO     soaked-MDT0000 in status 'RECOVERING'.
            2016-02-18 11:28:11,431:fsmgmt.fsmgmt:INFO     Don't wait for recovery to complete. Failback MDT's immediately
            2016-02-18 11:28:11,431:fsmgmt.fsmgmt:INFO     Failing back soaked-MDT0001 ...
            2016-02-18 11:28:11,431:fsmgmt.fsmgmt:INFO     Unmounting soaked-MDT0001 on lola-9 ...
            2016-02-18 11:28:12,078:fsmgmt.fsmgmt:INFO     ... soaked-MDT0001 unmounted successfully on lola-9
            2016-02-18 11:28:12,079:fsmgmt.fsmgmt:INFO     Mounting soaked-MDT0001 on lola-8 ...
            2016-02-18 11:29:03,122:fsmgmt.fsmgmt:INFO     ... soaked-MDT0001 mounted successfully on lola-8
            2016-02-18 11:29:03,122:fsmgmt.fsmgmt:INFO     ... soaked-MDT0001 failed back
            2016-02-18 11:29:03,123:fsmgmt.fsmgmt:INFO     Failing back soaked-MDT0000 ...
            2016-02-18 11:29:03,123:fsmgmt.fsmgmt:INFO     Unmounting soaked-MDT0000 on lola-9 ...
            2016-02-18 11:29:09,942:fsmgmt.fsmgmt:INFO     ... soaked-MDT0000 unmounted successfully on lola-9
            2016-02-18 11:29:09,942:fsmgmt.fsmgmt:INFO     Mounting soaked-MDT0000 on lola-8 ...
            2016-02-18 11:29:24,023:fsmgmt.fsmgmt:INFO     ... soaked-MDT0000 mounted successfully on lola-8
            2016-02-18 11:29:24,023:fsmgmt.fsmgmt:INFO     ... soaked-MDT0000 failed back
            2016-02-18 11:29:24,024:fsmgmt.fsmgmt:INFO     mds_failover just completed
            2016-02-18 11:29:24,024:fsmgmt.fsmgmt:INFO     next fault in 1898s
            

            The error message at 2016-02-18 11:16:16,760 (... failed with 114 ...) is strange since, as stated above, the soak framework
            does not execute multiple mounts of the same device file.
            I'm not sure whether some manual mount of MDT1 (outside the soak framework) took place. I'm sure I didn't execute anything on the node at this time.

            applications executed

            • mdtest (single shared file, file per process)
            • IOR (single shared file, file per process)
            • simul
            • blogbench
            • kcompile
            • pct (producer, consumer inhouse application)
              All applications are initiated with random sizes, file counts, and block sizes. If needed, I could provide the (slurm) list of active jobs at the time the error occurred.
            di.wang Di Wang added a comment -

            I uploaded the debug log to ftp.hpdd.intel.com:/uploads/lu-7794/

            The whole setup is as follows: 4 MDSs, each with 2 MDTs. MDS0 (MDT0/1), MDS1 (MDT2/3), MDS2 (MDT4/5), MDS3 (MDT6/7). MDS0 and MDS1 are configured as an active/active failover pair, and MDS2 and MDS3 are configured as an active/active failover pair.

            The test randomly chooses one of the MDSs to reboot; its MDTs are then failed over to its pair MDS. In this case, MDS0 was restarted, so MDT0/MDT1 should have been mounted on MDS1, but the mount failed because of this issue. No, I do not think this can be easily reproduced.

            Maybe Frank or Cliff will know more.


            pichong Gregoire Pichon added a comment -

            Yes, could you upload the debug logs as an attachment to this JIRA ticket?

            Are the D_INFO messages written to the logs?
            It would be helpful to look for the D_INFO messages logged in tgt_client_new():

            CDEBUG(D_INFO, "%s: new client at index %d (%llu) with UUID '%s' "
                    "generation %d\n",
                    tgt->lut_obd->obd_name, ted->ted_lr_idx, ted->ted_lr_off,
                    ted->ted_lcd->lcd_uuid, ted->ted_lcd->lcd_generation);
            

            and see in which context they were called.

            And again, could you detail what tests were running on the filesystem, please?


            People

              Assignee: pichong Gregoire Pichon
              Reporter: di.wang Di Wang
              Votes: 0
              Watchers: 7
