Lustre / LU-4282

some OSTs reported as inactive in lfs df, UP with lctl dl, data not accessible

Details

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Blocker
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.4.1
    • Labels: None
    • Environment: MDS and OSS on Lustre 2.4.1, clients Lustre 1.8.9, all Red Hat Enterprise Linux.
    • Severity: 3
    • Rank: 11756

    Description

      As indicated in LU-4242, I now have a problem on our preproduction file system that stops users from accessing their data, prevents the servers from cleanly rebooting, and blocks any further testing.

      After upgrading the servers from 2.3 to 2.4.1 (MDT build #51 of b2_4 from Jenkins) our clients can no longer fully access this file system. The clients can mount the file system and can access one OST on each of the two OSSes, but the other OSTs are not accessible: they are shown as inactive in the lfs df output and in /proc/fs/lustre/lov/*/target_obd, yet lctl dl still shows them as UP.

      [bnh65367@cs04r-sc-serv-07 ~]$ lctl dl |grep play01
       91 UP lov play01-clilov-ffff810076ae2000 9186608e-d432-283c-0e6e-47b800427d3e 4
       92 UP mdc play01-MDT0000-mdc-ffff810076ae2000 9186608e-d432-283c-0e6e-47b800427d3e 5
       93 UP osc play01-OST0000-osc-ffff810076ae2000 9186608e-d432-283c-0e6e-47b800427d3e 5
       94 UP osc play01-OST0001-osc-ffff810076ae2000 9186608e-d432-283c-0e6e-47b800427d3e 5
       95 UP osc play01-OST0002-osc-ffff810076ae2000 9186608e-d432-283c-0e6e-47b800427d3e 5
       96 UP osc play01-OST0003-osc-ffff810076ae2000 9186608e-d432-283c-0e6e-47b800427d3e 5
       97 UP osc play01-OST0004-osc-ffff810076ae2000 9186608e-d432-283c-0e6e-47b800427d3e 5
       98 UP osc play01-OST0005-osc-ffff810076ae2000 9186608e-d432-283c-0e6e-47b800427d3e 5
      [bnh65367@cs04r-sc-serv-07 ~]$ lfs df /mnt/play01
      UUID                   1K-blocks        Used   Available Use% Mounted on
      play01-MDT0000_UUID     78636320     3502948    75133372   4% /mnt/play01[MDT:0]
      play01-OST0000_UUID   7691221300  4506865920  3184355380  59% /mnt/play01[OST:0]
      play01-OST0001_UUID   7691221300  3765688064  3925533236  49% /mnt/play01[OST:1]
      play01-OST0002_UUID : inactive device
      play01-OST0003_UUID : inactive device
      play01-OST0004_UUID : inactive device
      play01-OST0005_UUID : inactive device
      
      filesystem summary:  15382442600  8272553984  7109888616  54% /mnt/play01
      
      [bnh65367@cs04r-sc-serv-07 ~]$ cat /proc/fs/lustre/lov/play01-clilov-ffff810076ae2000/target_obd 
      0: play01-OST0000_UUID ACTIVE
      1: play01-OST0001_UUID ACTIVE
      2: play01-OST0002_UUID INACTIVE
      3: play01-OST0003_UUID INACTIVE
      4: play01-OST0004_UUID INACTIVE
      5: play01-OST0005_UUID INACTIVE
      

      As expected, the fail-over OSS for each OST does see connection attempts and correctly reports that the OST is not available on that OSS.

      I have confirmed that the OSTs are mounted on the OSSes correctly.
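
      As a hedged aside (not from the original report; the /proc paths are assumed for a 1.8.9 client and may differ between releases), the per-OSC connection state can be inspected on the client with something like:

        # list the OSC devices as seen by the client
        lctl dl | grep osc
        # show the target UUID and import state for each OSC of this file system
        for d in /proc/fs/lustre/osc/play01-OST*; do
            echo "== $d"
            cat "$d"/ost_server_uuid
        done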

      For the other client that I have tried to bring back, the situation is similar, but the set of OSTs that are inactive is slightly different:

      [bnh65367@cs04r-sc-serv-06 ~]$ lfs df /mnt/play01
      UUID                   1K-blocks        Used   Available Use% Mounted on
      play01-MDT0000_UUID     78636320     3502948    75133372   4% /mnt/play01[MDT:0]
      play01-OST0000_UUID : inactive device
      play01-OST0001_UUID   7691221300  3765688064  3925533236  49% /mnt/play01[OST:1]
      play01-OST0002_UUID   7691221300  1763305508  5927915792  23% /mnt/play01[OST:2]
      play01-OST0003_UUID : inactive device
      play01-OST0004_UUID : inactive device
      play01-OST0005_UUID : inactive device
      
      filesystem summary:  15382442600  5528993572  9853449028  36% /mnt/play01
      
      [bnh65367@cs04r-sc-serv-06 ~]$ 
      

      play01-OST0000, play01-OST0002, play01-OST0004 are on one OSS
      play01-OST0001, play01-OST0003, play01-OST0005 are on a different OSS (but all three on that same OSS).

      I have tested the network and don't see any errors; lnet_selftest between the clients and the OSSes runs at line rate, at least for the first client (a 1GigE client), and nothing obvious shows up for the second client either.
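
      For reference, a minimal lnet_selftest run of the kind described above might look like the following sketch; the NIDs are taken from elsewhere in this ticket and the test size is only illustrative:

        # load the selftest module on both nodes first: modprobe lnet_selftest
        export LST_SESSION=$$
        lst new_session read_test
        lst add_group clients 172.23.136.7@tcp
        lst add_group servers 172.23.144.14@tcp
        lst add_batch bulk
        lst add_test --batch bulk --from clients --to servers brw read check=simple size=1M
        lst run bulk
        lst stat clients servers      # interrupt once the rates have settled
        lst end_session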

      For completeness I should probably mention that all the servers (MDS and OSSes) changed IP addresses at the same time as the upgrade. I have verified that the information is correctly updated on the targets, and both clients have been rebooted multiple times since the IP address change, without any improvement.
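
      As another hedged aside (not part of the original report; the device path is taken from later in this ticket), the NID information can be re-checked after such an IP change with:

        lctl list_nids                                    # on each server: confirm the new NID is up
        tunefs.lustre --dryrun /dev/mapper/vg_play01-mdt  # confirm the mgsnode/failover NIDs stored on disk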

      Attachments

        Issue Links

          Activity


            ferner Frederik Ferner (Inactive) added a comment -

            CONFIGS directories for MDT and MGS, including llog_reader output
            mdiep Minh Diep added a comment -

            This seems to be related to LU-4243. Could you remount the MDT with ldiskfs and dump the config log?


            ferner Frederik Ferner (Inactive) added a comment -

            Minh,

            it is sort of working. I have one configuration/setup where all OSTs can be accessed by all the clients I've tried to bring up. However, if I try to bring any of the OSTs up on a different OSS than the one they are on now, none of my clients even tries to contact that OSS. Recovery doesn't even start...

            So I would not say everything is working, but the urgency is lower as we have a workaround (which is only valid until one of the servers fails...).

            I would appreciate help in fully resolving this. Let me know if there are any diagnostics that I should provide...

            Kind regards,
            Frederik

            mdiep Minh Diep added a comment -

            Hi Frederik,

            Is it working now? I believe there might be a small step that we missed somewhere during the process. Please let me know if everything is working. Thanks


            ferner Frederik Ferner (Inactive) added a comment -

            Ok, more testing showed that for all OSTs that were shown as inactive on the clients, the fail-over OSS was seeing connection attempts. So failing the OSTs over to the other node makes them available on the clients; however, failing over OSTs that were previously active makes them unavailable on the clients.

            So to make this clearer:

            Initially the OSTs were distributed like this:

            OSTs mounted on 172.23.144.14: play01-OST0000, play01-OST0002, play01-OST0004
            OSTs mounted on 172.23.144.18: play01-OST0001, play01-OST0003, play01-OST0005

            In this configuration the clients were only able to access play01-OST0000 and play01-OST0001.

            The following distribution of OSTs makes them available on both clients I tested today:

            OSTs mounted on 172.23.144.14: play01-OST0000, play01-OST0003, play01-OST0005
            OSTs mounted on 172.23.144.18: play01-OST0001, play01-OST0002, play01-OST0004

            As soon as any of the OSTs is mounted on the other OSS, it appears that none of the clients will connect to it (with the possible exception of clients that have not been rebooted recently; unloading the Lustre modules and starting again on the client seems to bring them into the first category).

            The same parameters/failnode setup has worked without problems so far on our other file systems where all servers are still running 1.8.
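
            As a sketch only (mount points assumed, device naming taken from elsewhere in this ticket), moving one OST between the two OSSes amounts to:

              # on 172.23.144.14
              umount /dev/mapper/ost_play01_0
              # on 172.23.144.18
              mkdir -p /mnt/ost_play01_0
              mount -t lustre /dev/mapper/ost_play01_0 /mnt/ost_play01_0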


            ferner Frederik Ferner (Inactive) added a comment -

            The MGS is on the same shared storage as the MDT, in the same LVM volume group but on a separate logical volume.

            So I think I need your first set of commands, though I don't see how they are much different from mine. In any case, I've run them again after unmounting everything and brought everything back up; no change.

            This time I noticed the following -16 errors in the logs; I assume they are because the OSTs are still in recovery, but I thought I'd mention them. There is also an initial error about communication with 0@lo that I don't recall seeing before.

            Nov 21 17:30:06 cs04r-sc-mds02-03 kernel: Lustre: MGS: Logs for fs play01 were removed by user request.  All servers must be restarted in order to regenerate the logs.
            Nov 21 17:30:06 cs04r-sc-mds02-03 kernel: Lustre: Setting parameter play01-MDT0000.mdt.quota_type in log play01-MDT0000
            Nov 21 17:30:06 cs04r-sc-mds02-03 kernel: Lustre: Skipped 1 previous similar message
            Nov 21 17:30:06 cs04r-sc-mds02-03 kernel: Lustre: play01-MDT0000: used disk, loading
            Nov 21 17:30:06 cs04r-sc-mds02-03 kernel: Lustre: 4012:0:(mdt_handler.c:4948:mdt_process_config()) For interoperability, skip this mdt.quota_type. It is obsolete.
            Nov 21 17:30:06 cs04r-sc-mds02-03 kernel: LustreError: 11-0: play01-MDT0000-lwp-MDT0000: Communicating with 0@lo, operation mds_connect failed with -11.
            Nov 21 17:30:32 cs04r-sc-mds02-03 kernel: Lustre: MGS: Regenerating play01-OST0000 log by user request.
            Nov 21 17:30:32 cs04r-sc-mds02-03 kernel: Lustre: Setting parameter play01-OST0000.ost.quota_type in log play01-OST0000
            Nov 21 17:30:32 cs04r-sc-mds02-03 kernel: Lustre: Skipped 1 previous similar message
            Nov 21 17:30:38 cs04r-sc-mds02-03 kernel: Lustre: MGS: Regenerating play01-OST0001 log by user request.
            Nov 21 17:30:38 cs04r-sc-mds02-03 kernel: Lustre: Setting parameter play01-OST0001.ost.quota_type in log play01-OST0001
            Nov 21 17:30:39 cs04r-sc-mds02-03 kernel: LustreError: 11-0: play01-OST0000-osc-MDT0000: Communicating with 172.23.144.14@tcp, operation ost_connect failed with -16.
            Nov 21 17:30:54 cs04r-sc-mds02-03 kernel: Lustre: MGS: Regenerating play01-OST0002 log by user request.
            Nov 21 17:30:54 cs04r-sc-mds02-03 kernel: Lustre: Setting parameter play01-OST0002.ost.quota_type in log play01-OST0002
            Nov 21 17:31:02 cs04r-sc-mds02-03 kernel: LustreError: 11-0: play01-OST0003-osc-MDT0000: Communicating with 172.23.144.18@tcp, operation ost_connect failed with -16.
            Nov 21 17:31:02 cs04r-sc-mds02-03 kernel: LustreError: Skipped 1 previous similar message
            Nov 21 17:31:07 cs04r-sc-mds02-03 kernel: Lustre: MGS: Regenerating play01-OST0004 log by user request.
            Nov 21 17:31:07 cs04r-sc-mds02-03 kernel: Lustre: Skipped 1 previous similar message
            Nov 21 17:31:10 cs04r-sc-mds02-03 kernel: LustreError: 11-0: play01-OST0001-osc-MDT0000: Communicating with 172.23.144.18@tcp, operation ost_connect failed with -16.
            Nov 21 17:31:11 cs04r-sc-mds02-03 kernel: Lustre: Setting parameter play01-OST0005.ost.quota_type in log play01-OST0005
            Nov 21 17:31:11 cs04r-sc-mds02-03 kernel: Lustre: Skipped 2 previous similar messages
            Nov 21 17:31:44 cs04r-sc-mds02-03 kernel: LustreError: 11-0: play01-OST0000-osc-MDT0000: Communicating with 172.23.144.14@tcp, operation ost_connect failed with -16.
            Nov 21 17:31:44 cs04r-sc-mds02-03 kernel: LustreError: Skipped 2 previous similar messages
            Nov 21 17:32:34 cs04r-sc-mds02-03 kernel: Lustre: play01-MDT0000: Will be in recovery for at least 5:00, or until 1 client reconnects
            Nov 21 17:32:34 cs04r-sc-mds02-03 kernel: Lustre: play01-MDT0000: Denying connection for new client 45cd72fa-56c3-f257-0ed7-154d629ee603 (at 172.23.136.7@tcp), waiting for all 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 4:59
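
            As a hedged follow-up (parameter names assumed for 2.4 servers, not taken from this ticket), the recovery state behind the -16 (EBUSY) connect errors can be checked with something like:

              # on the MDS
              lctl get_param mdt.play01-MDT0000.recovery_status
              # on each OSS
              lctl get_param 'obdfilter.play01-OST*.recovery_status'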
            
            mdiep Minh Diep added a comment -

            Don't forget to unmount all clients and OSTs while you --writeconf the MDS.

            Then run
            tunefs.lustre --writeconf --ost /dev/mapper/ost_play01_0
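
            For reference, a sketch of the full writeconf cycle under the usual ordering (device paths taken from this ticket; mount points and local details will differ):

              # 1. unmount all clients, then the MDT, then all OSTs (and the separate MGS)
              # 2. regenerate the configuration logs
              tunefs.lustre --writeconf /dev/vg_play01/mgs
              tunefs.lustre --writeconf /dev/vg_play01/mdt
              for ost in /dev/mapper/ost_play01_*; do
                  tunefs.lustre --writeconf --ost "$ost"
              done
              # 3. remount in order: MGS, then MDT, then OSTs, then clients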

            mdiep Minh Diep added a comment -

            Ok, thanks. Ah, I also see you share the MGS? Or is it a typo?

            tunefs.lustre --erase-params --writeconf /dev/vg_play01/mgs <<<<<
            tunefs.lustre --erase-params --writeconf --mgsnode=172.23.144.5@tcp0 --mgsnode=172.23.144.6@tcp0 --servicenode=172.23.144.5@tcp0 --servicenode=172.23.144.6@tcp0 --param mdt.quota_type=ug --param mdt.group_upcall=/usr/sbin/l_getgroups --mountfsoptions=iopen_nopriv,user_xattr,errors=remount-ro,acl /dev/vg_play01/mdt

            If you have combined mgs/mdt, then you should only have 1 mgsnode
            tunefs.lustre --erase-params --writeconf /dev/vg_play01/mdt
            tunefs.lustre --writeconf --mgsnode=172.23.144.6@tcp0 --servicenode=172.23.144.5@tcp0 --servicenode=172.23.144.6@tcp0 --param mdt.quota_type=ug --param mdt.group_upcall=/usr/sbin/l_getgroups --mountfsoptions=iopen_nopriv,user_xattr,errors=remount-ro,acl --mgs --mdt /dev/vg_play01/mdt

            Note: no --erase-params on second tunefs.lustre cmd


            ferner Frederik Ferner (Inactive) added a comment -

            Sorry, should have provided a bit more background...

            The file system has two OSSes in an active-active configuration, with the new IPs 172.23.144.14 and 172.23.144.18, sharing a storage array. For the MDS we also have two servers sharing a storage array; the new IPs for those are indeed 172.23.144.5 (cs04r-sc-mds02-03) and 172.23.144.6 (cs04r-sc-mds02-04).

            Cheers,
            Frederik

            mdiep Minh Diep added a comment -

            I don't understand "MGS will fail over between the same two machines as the MDT even though it is on a separate partition."

            I assume that cs04r-sc-mds02-03 = 172.23.144.5@tcp

            and another-mss-host = 172.23.144.6@tcp

            are these two sharing the same storage/device?


            ferner Frederik Ferner (Inactive) added a comment -

            to mgsnode because the MGS will fail over between the same two machines as the MDT even though it is on a separate partition.

            tunefs.lustre --dryrun for mdt and ost:

            [bnh65367@cs04r-sc-mds02-03 ~]$ sudo tunefs.lustre --dryrun /dev/mapper/vg_play01-mdt 
            checking for existing Lustre data: found
            Reading CONFIGS/mountdata
            
               Read previous values:
            Target:     play01-MDT0000
            Index:      0
            Lustre FS:  play01
            Mount type: ldiskfs
            Flags:      0x1001
                          (MDT no_primnode )
            Persistent mount opts: iopen_nopriv,user_xattr,errors=remount-ro,acl
            Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.5@tcp failover.node=172.23.144.6@tcp mdt.quota_type=ug mdt.group_upcall=/usr/sbin/l_getgroups
            
            
               Permanent disk data:
            Target:     play01-MDT0000
            Index:      0
            Lustre FS:  play01
            Mount type: ldiskfs
            Flags:      0x1001
                          (MDT no_primnode )
            Persistent mount opts: iopen_nopriv,user_xattr,errors=remount-ro,acl
            Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.5@tcp failover.node=172.23.144.6@tcp mdt.quota_type=ug mdt.group_upcall=/usr/sbin/l_getgroups
            
            exiting before disk write.
            [bnh65367@cs04r-sc-mds02-03 ~]$ 
            

            OSTs:

            [bnh65367@cs04r-sc-oss01-04 ~]$ for i in /dev/mapper/ost_play01_* ; do sudo tunefs.lustre --dryrun $i ; done
            checking for existing Lustre data: found
            Reading CONFIGS/mountdata
            
               Read previous values:
            Target:     play01-OST0000
            Index:      0
            Lustre FS:  play01
            Mount type: ldiskfs
            Flags:      0x1402
                          (OST no_primnode )
            Persistent mount opts: errors=remount-ro,extents,mballoc
            Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.14@tcp failover.node=172.23.144.18@tcp ost.quota_type=ug
            
            
               Permanent disk data:
            Target:     play01-OST0000
            Index:      0
            Lustre FS:  play01
            Mount type: ldiskfs
            Flags:      0x1402
                          (OST no_primnode )
            Persistent mount opts: errors=remount-ro,extents,mballoc
            Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.14@tcp failover.node=172.23.144.18@tcp ost.quota_type=ug
            
            exiting before disk write.
            checking for existing Lustre data: found
            Reading CONFIGS/mountdata
            
               Read previous values:
            Target:     play01-OST0001
            Index:      1
            Lustre FS:  play01
            Mount type: ldiskfs
            Flags:      0x1002
                          (OST no_primnode )
            Persistent mount opts: errors=remount-ro,extents,mballoc
            Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.18@tcp failover.node=172.23.144.14@tcp ost.quota_type=ug
            
            
               Permanent disk data:
            Target:     play01-OST0001
            Index:      1
            Lustre FS:  play01
            Mount type: ldiskfs
            Flags:      0x1002
                          (OST no_primnode )
            Persistent mount opts: errors=remount-ro,extents,mballoc
            Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.18@tcp failover.node=172.23.144.14@tcp ost.quota_type=ug
            
            exiting before disk write.
            checking for existing Lustre data: found
            Reading CONFIGS/mountdata
            
               Read previous values:
            Target:     play01-OST0002
            Index:      2
            Lustre FS:  play01
            Mount type: ldiskfs
            Flags:      0x1002
                          (OST no_primnode )
            Persistent mount opts: errors=remount-ro,extents,mballoc
            Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.14@tcp failover.node=172.23.144.18@tcp ost.quota_type=ug
            
            
               Permanent disk data:
            Target:     play01-OST0002
            Index:      2
            Lustre FS:  play01
            Mount type: ldiskfs
            Flags:      0x1002
                          (OST no_primnode )
            Persistent mount opts: errors=remount-ro,extents,mballoc
            Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.14@tcp failover.node=172.23.144.18@tcp ost.quota_type=ug
            
            exiting before disk write.
            checking for existing Lustre data: found
            Reading CONFIGS/mountdata
            
               Read previous values:
            Target:     play01-OST0003
            Index:      3
            Lustre FS:  play01
            Mount type: ldiskfs
            Flags:      0x1002
                          (OST no_primnode )
            Persistent mount opts: errors=remount-ro,extents,mballoc
            Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.18@tcp failover.node=172.23.144.14@tcp ost.quota_type=ug
            
            
               Permanent disk data:
            Target:     play01-OST0003
            Index:      3
            Lustre FS:  play01
            Mount type: ldiskfs
            Flags:      0x1002
                          (OST no_primnode )
            Persistent mount opts: errors=remount-ro,extents,mballoc
            Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.18@tcp failover.node=172.23.144.14@tcp ost.quota_type=ug
            
            exiting before disk write.
            checking for existing Lustre data: found
            Reading CONFIGS/mountdata
            
               Read previous values:
            Target:     play01-OST0004
            Index:      4
            Lustre FS:  play01
            Mount type: ldiskfs
            Flags:      0x1002
                          (OST no_primnode )
            Persistent mount opts: errors=remount-ro,extents,mballoc
            Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.14@tcp failover.node=172.23.144.18@tcp ost.quota_type=ug
            
            
               Permanent disk data:
            Target:     play01-OST0004
            Index:      4
            Lustre FS:  play01
            Mount type: ldiskfs
            Flags:      0x1002
                          (OST no_primnode )
            Persistent mount opts: errors=remount-ro,extents,mballoc
            Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.14@tcp failover.node=172.23.144.18@tcp ost.quota_type=ug
            
            exiting before disk write.
            checking for existing Lustre data: found
            Reading CONFIGS/mountdata
            
               Read previous values:
            Target:     play01-OST0005
            Index:      5
            Lustre FS:  play01
            Mount type: ldiskfs
            Flags:      0x1002
                          (OST no_primnode )
            Persistent mount opts: errors=remount-ro,extents,mballoc
            Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.18@tcp failover.node=172.23.144.14@tcp ost.quota_type=ug
            
            
               Permanent disk data:
            Target:     play01-OST0005
            Index:      5
            Lustre FS:  play01
            Mount type: ldiskfs
            Flags:      0x1002
                          (OST no_primnode )
            Persistent mount opts: errors=remount-ro,extents,mballoc
            Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.18@tcp failover.node=172.23.144.14@tcp ost.quota_type=ug
            
            exiting before disk write.
            [bnh65367@cs04r-sc-oss01-04 ~]$ 
            

            People

              Assignee: mdiep Minh Diep
              Reporter: ferner Frederik Ferner (Inactive)
              Votes: 0
              Watchers: 6

              Dates

                Created:
                Updated:
                Resolved: