LU-9023: Second opinion on MDT inode recovery requested

Details

    • Type: Question/Request
    • Resolution: Unresolved
    • Priority: Major

    Description

      This is a sanity-check question. NSC sees no reason the method described below should not work, but given the high impact a failure would have, we'd like a second opinion. We have scheduled downtime to execute it on Thursday next week, 26 Jan.

      To sort out the fallout of LU-8953 (out of inodes on a ZFS MDT, solved for the moment by adding more disks to the pool) we need to recreate the original pool. The reason we ran out of inodes is that when the vendor sent us hardware for the latest expansion, supposedly equivalent to the previous shipment, the SSDs had switched from reporting 512-byte blocks to 4k blocks. Since I had not hardcoded ashift, we ended up with 6-8 times fewer inodes, and this was missed in testing.
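
      For future formats, the block size each drive reports can be checked up front and ashift pinned explicitly at pool creation, so a change in reported sector size cannot silently shrink the inode budget again. A rough sketch, with placeholder device and pool names:

      # what the drives report (512 vs 4096 here is what changed between shipments)
      lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sdr /dev/sdu
      # pin ashift explicitly instead of relying on auto-detection
      zpool create -o ashift=9 lustre-mdt-example mirror /dev/sdr /dev/sdu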

      There aren't enough slots in the MDSs to solve this permanently by throwing HW at it, so I need to move all data from pools with ashift=12 to ashift=9. Do you see any problem with just doing the following?

      (The funny device names come from running LVM just to get more easily identifiable names)

      Unmount the filesystem on all nodes, then run something like this for each MDT that needs fixing:

      umount lustre-mdt0/fouo6
      zfs snapshot lustre-mdt0/fouo6@copythis
      zpool create -o ashift=9 lustre-mdt-tmp mirror \
      /dev/new_sdr/mdt_fouo6new_sdr \
      /dev/new_sdu/mdt_fouo6new_sdu
      zfs send -R lustre-mdt0/fouo6@copythis | zfs recv lustre-mdt-tmp/fouo6tmp
      zpool destroy lustre-REMOVETHIS-mdt0
      zpool create -o ashift=9 lustre-mdt0 \
      mirror /dev/mds9_sdm/mdt_fouo6_sdm /dev/mds9_sdn/mdt_fouo6_sdn \
      mirror /dev/mds9_sdo/mdt_fouo6_sdo /dev/mds9_sdp/mdt_fouo6_sdp
      zfs send -R lustre-mdt-tmp/fouo6tmp@copythis | zfs recv lustre-mdt0/fouo6
      mount -t lustre lustre-mdt0/fouo6 /mnt/lustre/local/fouo6
      zpool destroy lustre-mdt-tmp

      The "REMOVETHIS-" inserted due to desktop copy buffer paranoia should be removed before running of course.

          Activity

            zino Peter Bortas added a comment -

            The ZFS oddity seems to be unrelated to the recreation of the filesystems, so I'll track that separately if needed.

            This concludes this issue from my side. Thanks for the help everyone!

            zino Peter Bortas added a comment -

            This operation was somewhat delayed by unrelated failures in one of the attached compute clusters, but completed without problems on Friday.

            I have noted one oddity with ZFS snapshots today, but nothing that affects production. I'll try to figure out that one by tomorrow and then we can close this.


            gabriele.paciucci Gabriele Paciucci (Inactive) added a comment -

            Hi zino,
            you don't see a performance improvement because the bottleneck is in the code and not in the underlying HW performance. On the OSTs, by contrast, we saw very different performance when not using ashift=12.
            zino Peter Bortas added a comment -

            Hi Gabriele,

            You are in time. We got a bit delayed by hardware failing elsewhere in the cluster, so the procedure has only just started. We'll know today if I lost the filesystems or not.

            I'll make an extra backup of the whole filesystems. It only adds about 1h to the procedure, and that's worth it.

            I don't think the formatting tools really need any intelligence here; this was an operator error. But if there are no performance problems with running ashift=9 on 4k-block SSDs in the general case, it might be a good idea to default to ashift=9 there. In my tests I've not seen any performance advantage outside of the error margin from using ashift=12 on SSDs on the MDS.
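
            For anyone reproducing that comparison, a small metadata benchmark against test filesystems formatted with each ashift is one way to measure it; mdtest is just one option, and the mount point and counts below are placeholders:

            # create/stat/remove many small files from 8 ranks, 3 iterations
            mpirun -np 8 mdtest -F -n 10000 -i 3 -d /mnt/lustre/fouo6/mdtest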

            gabriele.paciucci Gabriele Paciucci (Inactive) added a comment -

            Hi zino,
            I'm back... sorry for this. I don't know if this is too late:
            1. Yes, I was sending the whole pool for that reason, but testing with only the individual volume worked. Still, having a backup of the whole file system is not a bad idea... just in case.
            2. We are not expecting any performance or big capacity requirements on the MGT, so I don't see any problem with leaving it at the original ashift.

            Making Lustre decide the ashift at format time is something that maybe adilger can evaluate. Not sure whether Lustre can evaluate the physical layout of the disks.
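
            As a quick check when leaving the MGT pool alone, the ashift a live pool actually got can be read back as a pool property; the pool name here is only an example:

            zpool get ashift lustre-mgt0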

            zino Peter Bortas added a comment -

            Hi Peter,

            That's unfortunate. Of course it's technically possible to delay this to another week, but the cluster downtime is now too late to stop for this week. I will also have to mount the filesystems read-only for a few weeks, since the users will run out of inodes before the next window.

            I'd be happy with an answer to just this question: as far as Intel engineers know, is there anything in the filesystem that stores a structure that would be affected by a change in block size, i.e. could cause problems during this data move? We'll assume for the sake of this discussion that I'll be able to flawlessly take care of the bit shuffling on disk.
            pjones Peter Jones added a comment -

            Peter

            Gabriele is unexpectedly out of the office at short notice. Can this wait until he is available again (hopefully next week)?

            Peter

            zino Peter Bortas added a comment -

            Hi Gabriele,

            The weekend's tests look good. I have some tests I will run overnight, and will lock down the plans tomorrow. A couple of questions:

            1. Did you have any reason that sending the whole pool would be better than sending individual filesystems, other than that it was easier because you also had the MGT there? Unless there is a reason not to, I will send the filesystems, just for clarity's sake. The pools have anonymous names while the MDTs are named after the filesystems. I will be doing this for 3 pools on the same machine, so keeping the names reduces the chance of recv'ing into or destroying the wrong filesystem. These will be the actual sends on my end:

            zfs send -vR lustre-mdt0/fouo6@copythis | gzip > /lustre-mdt-tmpfs/mds0-fouo6.gz
            zfs send -vR lustre-mdt1/rossby20@copythis | gzip > /lustre-mdt-tmpfs/mds1-rossby20.gz
            zfs send -vR lustre-mdt2/smhid13@copythis | gzip > /lustre-mdt-tmpfs/mds2-smhid13.gz

            2. I will not be moving the MGT from ashift=12 to ashift=9. Will this cause any problems? I know the question is borderline insane, but this is really the original reason I opened this ticket with you. I'm OK with sorting out everything at the ZFS level, but I'm trying to fish for half-insane things like offsets hard-coded at MDT creation time based on the number of blocks somewhere deep in Lustre.
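
            If one of those gzipped streams ever has to be used, the restore is just the reverse pipe; a minimal sketch assuming the rebuilt pool names from the procedure above and that the target dataset does not already exist:

            gunzip -c /lustre-mdt-tmpfs/mds0-fouo6.gz | zfs recv lustre-mdt0/fouo6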

            gabriele.paciucci Gabriele Paciucci (Inactive) added a comment -

            Okay, I'm now on hold waiting for your feedback.
            zino Peter Bortas added a comment -

            Not really. I like your method better. It does invalidate some of my testing though, so I'll run some new over the weekend.


            People

              Assignee: gabriele.paciucci Gabriele Paciucci (Inactive)
              Reporter: zino Peter Bortas
              Votes: 0
              Watchers: 6
