[LU-9418] After upgrade from 2.9 to master, with project quota enabled, cannot mount MDS Created: 28/Apr/17  Updated: 07/Nov/18  Resolved: 08/Jul/17

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.10.0
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: Sarah Liu Assignee: Wang Shilong (Inactive)
Resolution: Fixed Votes: 0
Labels: None

Issue Links:
Related
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

Here are the test steps (a command sketch of steps 5 and 6 follows the console output below):
1. Set up the system as 2.9.0 ldiskfs, create files, set up an OST pool, and set quota to "ug".
2. Upgrade the system to 2.10.
3. Mount the system and check the data, quota, and OST pool; everything looks fine.
4. Shut down the system.
5. Run "tune2fs -O project mdsdev/ostdev" on both the MDS and OSS devices; it returns without error.
6. Try to mount the MDS; it fails as follows:

[root@onyx-69 ~]# mount -t lustre -o acl,user_xattr /dev/sdb1 /mnt/mds1
[84831.219419] LDISKFS-fs warning (device sdb1): ldiskfs_enable_quotas:5445: Failed to enable quota tracking (type=2, err=-22). Please run e2fsck to fix.
[84831.238898] LDISKFS-fs (sdb1): mount failed
[84831.245872] LustreError: 43650:0:(osd_handler.c:7014:osd_mount()) lustre-MDT0000-osd: can't mount /dev/sdb1: -22
[84831.259759] LustreError: 43650:0:(obd_config.c:574:class_setup()) setup lustre-MDT0000-osd failed (-22)
[84831.272497] LustreError: 43650:0:(obd_mount.c:199:lustre_start_simple()) lustre-MDT0000-osd setup error -22
[84831.285510] LustreError: 43650:0:(obd_mount_server.c:1806:server_fill_super()) Unable to start osd on /dev/sdb1: -22
[84831.299281] LustreError: 43650:0:(obd_mount.c:1502:lustre_fill_super()) Unable to mount  (-22)
mount.lustre: mount /dev/sdb1 at /mnt/mds1 failed: Invalid argument
This may have multiple causes.
Are the mount options correct?
Check the syslog for more info.
[root@onyx-69 ~]# mount
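
For reference, a minimal command sketch of steps 5 and 6 above, assuming the MDT is /dev/sdb1 as shown in the log; the OST device name /dev/sdc1 is only an example:

# step 5: enable the project quota feature flag on the unmounted devices
tune2fs -O project /dev/sdb1    # MDT
tune2fs -O project /dev/sdc1    # OST (example device name)

# step 6: attempt to mount the MDT -- this is where the -22 (EINVAL) failure occurs
mount -t lustre -o acl,user_xattr /dev/sdb1 /mnt/mds1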


 Comments   
Comment by Peter Jones [ 28/Apr/17 ]

Wang Shilong

Could you please advise on this issue?

Thanks

Peter

Comment by Andreas Dilger [ 28/Apr/17 ]

Sarah, did you run e2fsck -fp on the MDT and OST devices after enabling project quotas with tune2fs -O project? It may be that tune2fs -O project would run an internal check of the quotas like tune2fs -O quota does, but I'm not positive that it does. If the e2fsck run fixes this problem, then this would be a bug to fix in tune2fs.
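
A minimal sketch of the sequence being suggested, assuming the MDT device from the description; whether e2fsck actually repairs the quota state here is exactly what is being asked:

tune2fs -O project /dev/sdb1    # enable the project feature flag
e2fsck -fp /dev/sdb1            # forced preen-mode check, as suggested above
mount -t lustre -o acl,user_xattr /dev/sdb1 /mnt/mds1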

Comment by Sarah Liu [ 28/Apr/17 ]

No, I didn't run e2fsck; I will try it and update the ticket.

Update: running e2fsck doesn't fix the problem:

[root@onyx-69 ~]# e2fsck -fp /dev/sdb1
lustre-MDT0000: recovering journal
lustre-MDT0000: 322/1572864 files (0.9% non-contiguous), 231457/786432 blocks
[root@onyx-69 ~]# mount -t lustre -o acl,user_xattr /dev/sdb1 /mnt/mds1
[90778.555192] LDISKFS-fs warning (device sdb1): ldiskfs_enable_quotas:5445: Failed to enable quota tracking (type=2, err=-22). Please run e2fsck to fix.
[90778.575441] LDISKFS-fs (sdb1): mount failed
[90778.582659] LustreError: 44010:0:(osd_handler.c:7014:osd_mount()) lustre-MDT0000-osd: can't mount /dev/sdb1: -22
[90778.596496] LustreError: 44010:0:(obd_config.c:574:class_setup()) setup lustre-MDT0000-osd failed (-22)
[90778.609237] LustreError: 44010:0:(obd_mount.c:199:lustre_start_simple()) lustre-MDT0000-osd setup error -22
[90778.622221] LustreError: 44010:0:(obd_mount_server.c:1806:server_fill_super()) Unable to start osd on /dev/sdb1: -22
[90778.635970] LustreError: 44010:0:(obd_mount.c:1502:lustre_fill_super()) Unable to mount  (-22)
mount.lustre: mount /dev/sdb1 at /mnt/mds1 failed: Invalid argument
This may have multiple causes.
Are the mount options correct?
Check the syslog for more info.

Comment by Wang Shilong (Inactive) [ 29/Apr/17 ]

Hello, this is a known problem; it is a regression from the patchless kernel fix.

The problem should be fixed by https://review.whamcloud.com/#/c/26769/

Could you try the latest master, since the patch was just merged?

Thanks,
Shilong
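
For reference, a rough sketch of picking up that change, assuming the standard public Whamcloud tree and the usual Gerrit ref layout (the patch set number below is only a placeholder):

# build from the current master branch, which now contains the fix
git clone git://git.whamcloud.com/fs/lustre-release.git
cd lustre-release
# or fetch the specific change from Gerrit for inspection (patch set "1" is a placeholder)
git fetch https://review.whamcloud.com/fs/lustre-release refs/changes/69/26769/1
git show FETCH_HEAD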

Comment by Sarah Liu [ 01/May/17 ]

Sure, I will try and update the ticket

Comment by Sarah Liu [ 02/May/17 ]

Verified on the tip of master (build #3573); the issue doesn't occur.
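
For completeness, a rough sketch of the verification flow on the upgraded system, assuming the same device and mount point as above; the filesystem name "lustre" comes from the log, while the project ID 1000 and client paths are hypothetical:

tune2fs -O project /dev/sdb1                              # enable the project feature flag
e2fsck -fp /dev/sdb1                                      # clean check before remounting
mount -t lustre -o acl,user_xattr /dev/sdb1 /mnt/mds1     # succeeds on current master
# on the MGS, enable user/group/project quota enforcement on the MDT
lctl conf_param lustre.quota.mdt=ugp
# on a client, tag a directory with a project ID and check its usage
lfs project -p 1000 -s /mnt/lustre/testdir
lfs quota -p 1000 /mnt/lustre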
