Details
- Type: Bug
- Resolution: Fixed
- Priority: Minor
- Affects Versions: Lustre 2.1.4, Lustre 1.8.8
- Labels: None
- Environment: b1_8, out-of-kernel-tree OFED, RHEL5
- Severity: 3
- Rank: 6345
Description
On b1_8 with out-of-kernel-tree OFED (e.g. Mellanox OFED 1.5.3), quota does not work correctly.
One MDT and one OST are mounted on a server, and a single client mounts the filesystem:
- lfs quotaon -ug /l-exofed/
- lfs quotacheck /l-exofed
- lfs setquota -B 0 -g group1 /l-exofed/
- lfs setquota -B 100000 -g group1 /l-exofed/
- lfs quota -g group1 -v /l-exofed/
Disk quotas for group group1 (gid 1001):
Filesystem kbytes quota limit grace files quota limit grace
/l-exofed/ 0 0 100000 - 0 0 0 -
lustre-MDT0000_UUID
0 - 1 - 0 - 0 -
lustre-OST0000_UUID
0 - 1 - - - - -
- su - user1
-bash-3.2$ cd /l-exofed/
-bash-3.2$ mkdir a
mkdir: cannot create directory `a': Disk quota exceeded
The problem seems to be caused by the two functions sb_has_quota_active() and sb_any_quota_active() defined in
$BACKPORT_INCLUDES/include/linux/quotaops.h: the backport versions are used instead of the Lustre-defined ones.
Lustre 1.8.8 had no problem with out-of-kernel-tree OFED, so the LU-1438 patches might be related.
I have created patches and will post them soon.
Attachments
Issue Links
- is related to: LU-340 system hang when running sanity-quota on RHEL5-x86_64-OFED (Resolved)
Trackbacks
- Lustre 1.8.x known issues tracker: While testing against the Lustre b18 branch, we would hit known bugs which were already reported in Lustre Bugzilla (https://bugzilla.lustre.org/). In order to move away from relying on Bugzilla, we would create a JIRA