[LU-935] Crash lquota:dquot_create_oqaq+0x28f/0x510 Created: 16/Dec/11  Updated: 09/May/12  Resolved: 02/Feb/12

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 1.8.x (1.8.0 - 1.8.5)
Fix Version/s: Lustre 2.2.0, Lustre 2.1.2, Lustre 1.8.8

Type: Bug Priority: Major
Reporter: Supporto Lustre Jnet2000 (Inactive) Assignee: Niu Yawei (Inactive)
Resolution: Fixed Votes: 0
Labels: stats
Environment:

Lustre version: 1.8.5.54-20110316022453-PRISTINE-2.6.18-194.17.1.el5_lustre.20110315140510
lctl version: 1.8.5.54-20110316022453-PRISTINE-2.6.18-194.17.1.el5_lustre.20110315140510
Red Hat Enterprise Linux Server release 5.4 (Tikanga)
auth type over ldap and kerberos
quota enabled only for group on lustre fs


Attachments: dump11-12.zip (Zip Archive), quota_set_group.zip (Zip Archive)
Severity: 2
Epic: client, hang, metadata, quota, server
Rank (Obsolete): 4798

 Description   

The Lustre infrastructure is based on two HP Blade Servers with a
Hitachi shared storage. On the first server we have the MDS, MGS, and OST0/1/2;
on the second server we have OST3/4.
The first server is osiride-lp-030 and the second is osiride-lp-031.
The clustering of these services is based on Red Hat Cluster Suite.
The Lustre infrastructure crashes daily, and we see these dumps in the
logs:

Dec 9 11:27:08 osiride-lp-030 kernel: BUG: soft lockup - CPU#8 stuck for 10s! [ll_mdt_06:21936]
Dec 9 11:27:08 osiride-lp-030 kernel: CPU 8:
Dec 9 11:27:08 osiride-lp-030 kernel: Modules linked in: obdfilter(U) ost(U) mds(U) fsfilt_ldiskfs(U) mgs(U) mgc(U) ldiskfs(U) crc16(U) lock_dlm(U) gfs2(U)
dlm(U) configfs(U) lustre(U) lov(U) mdc(U) lquota(U) osc(U) ksocklnd(U) ptlrpc(U) obdclass(U) lvfs(U) lnet(U) libcfs(U) bonding(U) ipv6(U) xfrm_nalgo(U) cryp
to_api(U) video(U) backlight(U) sbs(U) power_meter(U) hwmon(U) i2c_ec(U) i2c_core(U) dell_wmi(U) wmi(U) button(U) battery(U) asus_acpi(U) acpi_memhotplug(U)
ac(U) dm_round_robin(U) dm_multipath(U) scsi_dh(U) parport_pc(U) lp(U) parport(U) joydev(U) bnx2x(U) sg(U) amd64_edac_mod(U) shpchp(U) bnx2(U) serio_raw(U) t
g3(U) pcspkr(U) edac_mc(U) hpilo(U) dm_raid45(U) dm_message(U) dm_region_hash(U) dm_mem_cache(U) dm_snapshot(U) dm_zero(U) dm_mirror(U) dm_log(U) dm_mod(U) u
sb_storage(U) qla2xxx(U) scsi_transport_fc(U) cciss(U) sd_mod(U) scsi_mod(U) ext3(U) jbd(U) uhci_hcd(U) ohci_hcd(U) ehci_hcd(U)
Dec 9 11:27:08 osiride-lp-030 kernel: Pid: 21936, comm: ll_mdt_06 Tainted: G 2.6.18-194.17.1.el5_lustre.20110315140510 #1
Dec 9 11:27:08 osiride-lp-030 kernel: RIP: 0010:[<ffffffff8882a270>] [<ffffffff8882a270>] :lquota:dquot_create_oqaq+0x2b0/0x510
Dec 9 11:27:08 osiride-lp-030 kernel: RSP: 0018:ffff8104484e3ac0 EFLAGS: 00000246
Dec 9 11:27:08 osiride-lp-030 kernel: RAX: 0000000000000000 RBX: ffff81041eee3ef0 RCX: 000000000000000c
Dec 9 11:27:08 osiride-lp-030 kernel: RDX: 0000000000000000 RSI: 0000000000001400 RDI: 0000000000001400
Dec 9 11:27:08 osiride-lp-030 kernel: RBP: 0000000000000004 R08: 000000000000000c R09: 0000000001000000
Dec 9 11:27:08 osiride-lp-030 kernel: R10: 000000000000000c R11: 0000000000500000 R12: ffffffffffffffff
Dec 9 11:27:08 osiride-lp-030 kernel: R13: 003fffffffffffff R14: 0000000000000282 R15: ffff81041eee3f00
Dec 9 11:27:08 osiride-lp-030 kernel: FS: 00002b6411676230(0000) GS:ffff81010fc954c0(0000) knlGS:00000000f6cf2b90
Dec 9 11:27:08 osiride-lp-030 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
Dec 9 11:27:08 osiride-lp-030 kernel: CR2: 00000000f6140000 CR3: 0000000000201000 CR4: 00000000000006e0
Dec 9 11:27:08 osiride-lp-030 kernel:
Dec 9 11:27:08 osiride-lp-030 kernel: Call Trace:
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff8882ad69>] :lquota:lustre_dqget+0x679/0x7e0
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff8882b086>] :lquota:init_oqaq+0x56/0x1c0
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff8883285e>] :lquota:mds_set_dqblk+0x8de/0x2010
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff88732fd3>] :ptlrpc:ptl_send_buf+0x3f3/0x5b0
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff8873b94a>] :ptlrpc:lustre_pack_reply_flags+0x86a/0x950
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff80150d56>] __next_cpu+0x19/0x28
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff88823e9a>] :lquota:mds_quota_ctl+0x16a/0x3c0
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff8873ba59>] :ptlrpc:lustre_pack_reply+0x29/0xb0
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff88afe78f>] :mds:mds_handle+0x3d7f/0x4d10
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff800767ae>] smp_send_reschedule+0x4e/0x53
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff8008c92d>] enqueue_task+0x41/0x56
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff8873da35>] :ptlrpc:lustre_msg_get_conn_cnt+0x35/0xf0
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff887473b9>] :ptlrpc:ptlrpc_server_handle_request+0x989/0xe00
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff88747b15>] :ptlrpc:ptlrpc_wait_event+0x2e5/0x310
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff8008b3bd>] __wake_up_common+0x3e/0x68
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff88748ac8>] :ptlrpc:ptlrpc_main+0xf88/0x1150
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff8005dfb1>] child_rip+0xa/0x11
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff8008c92d>] enqueue_task+0x41/0x56
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff8873da35>] :ptlrpc:lustre_msg_get_conn_cnt+0x35/0xf0
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff887473b9>] :ptlrpc:ptlrpc_server_handle_request+0x989/0xe00
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff88747b15>] :ptlrpc:ptlrpc_wait_event+0x2e5/0x310
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff8008b3bd>] __wake_up_common+0x3e/0x68
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff88748ac8>] :ptlrpc:ptlrpc_main+0xf88/0x1150
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff8005dfb1>] child_rip+0xa/0x11
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff88747b40>] :ptlrpc:ptlrpc_main+0x0/0x1150
Dec 9 11:27:08 osiride-lp-030 kernel: [<ffffffff8005dfa7>] child_rip+0x0/0x11
Dec 9 11:27:08 osiride-lp-030 kernel:
Dec 9 11:27:15 osiride-lp-030 kernel: Lustre: Service thread pid 23639 was inactive for 218.00s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.

This saturates the resources of the server, and the clients are unable to
access the filesystem.

Regards



 Comments   
Comment by Peter Jones [ 16/Dec/11 ]

Thanks for the report. An engineer will be in touch soon

Comment by Johann Lombardi (Inactive) [ 16/Dec/11 ]

Could you please collect a sysrq-t (or even better a crash dump) of the MDS when those soft lockups are dumped to the console?

Comment by Peter Jones [ 16/Dec/11 ]

Niu

Could you please look into this one?

Thanks

Peter

Comment by Niu Yawei (Inactive) [ 18/Dec/11 ]

In dquot_create_oqaq(), I don't see why we don't break early when the i/bunit_size exceeds the upper limit while expanding i/bunit_size:

                /* enlarge block qunit size */
                while (blimit >
                       QUSG(dquot->dq_dqb.dqb_curspace + 2 * b_limitation, 1)) {
                        oqaq->qaq_bunit_sz =
                                QUSG(oqaq->qaq_bunit_sz * cqs_factor, 1)
                                << QUOTABLOCK_BITS;
                        b_limitation = oqaq->qaq_bunit_sz * ost_num *
                                shrink_qunit_limit;
                }
                /* enlarge file qunit size */
                while (ilimit > dquot->dq_dqb.dqb_curinodes
                       + 2 * i_limitation) {
                        oqaq->qaq_iunit_sz = oqaq->qaq_iunit_sz * cqs_factor;
                        i_limitation = oqaq->qaq_iunit_sz * mdt_num *
                                shrink_qunit_limit;
                }

If the i/blimit is set to a very large value by the user, then qaq_i/bunit_sz * cqs_factor can overflow, causing an endless loop at the end.

I think we'd better break the loop whenever oqaq->qaq_i/bunit_sz exceeds the upper limit; I will provide a patch soon.
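
To illustrate the failure mode, here is a minimal standalone sketch (my simplification of the loop above, with curspace = 0 and the QUSG() conversion dropped; this is not the actual Lustre code). With 64-bit unsigned arithmetic, once bunit doubles past 2^63 the multiplication wraps to 0, b_limitation collapses to 0, and the loop condition stays true forever:

/* overflow_sketch.c - simplified model of the qunit expansion loop */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* the huge limit used to reproduce the bug in this ticket */
        uint64_t blimit = 17293822569102704639ULL;
        uint64_t bunit = 128ULL << 20;  /* 128M default qunit */
        uint64_t ost_num = 2;           /* example value */
        uint64_t cqs_factor = 2;        /* quota_qs_factor default */
        uint64_t boundary = 4;          /* quota_boundary_factor default */
        uint64_t b_limitation = bunit * ost_num * boundary;
        int i;

        for (i = 0; i < 100; i++) {     /* guard; the real loop has none */
                if (blimit <= 2 * b_limitation)
                        break;          /* loop condition became false */
                bunit *= cqs_factor;    /* wraps to 0 once it passes 2^63 */
                b_limitation = bunit * ost_num * boundary;
        }
        /* prints i=100 bunit=0: bunit wrapped to 0 after ~37 doublings,
         * so without the guard this would spin forever, exactly like the
         * soft lockup seen in dquot_create_oqaq() */
        printf("i=%d bunit=%llu\n", i, (unsigned long long)bunit);
        return 0;
}

Breaking out as soon as qaq_b/iunit_sz exceeds a sane upper bound removes the wrap-around case entirely, which is what the patch does.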

Comment by Niu Yawei (Inactive) [ 18/Dec/11 ]

patch for b1_8: http://review.whamcloud.com/1887

Comment by Supporto Lustre Jnet2000 (Inactive) [ 19/Dec/11 ]

Thanks to all for the answers.
I have a Lustre test infrastructure; how can I replicate this error in a test environment?

Regards

Andrea Mattioli

Comment by Niu Yawei (Inactive) [ 19/Dec/11 ]

Thanks to all for the answers.
I have a Lustre test infrastructure; how can I replicate this error in a test environment?

I'm afraid some apps were setting very large limits, and that triggered this defect. To reproduce it, you could just set a very large ilimit/blimit for some user, for instance:

lfs setquota -u user_foo -b 0 -B 0 -i 0 -I 17293822569102704639 /mnt/lustre

Comment by Niu Yawei (Inactive) [ 19/Dec/11 ]

patch for master: http://review.whamcloud.com/1890

Comment by Peter Jones [ 19/Dec/11 ]

Niu

Does that mean that it should be possible for the customer to work around this issue by identifying which jobs use a "too high" limit and correcting it to something within an acceptable range, thus removing the need to apply a patch? If so, at what threshold does the value become problematic?

Thanks

Peter

Comment by Niu Yawei (Inactive) [ 19/Dec/11 ]

Does that mean that it should be possible for the customer to work around this issue by identifying which jobs use a "too high" limit and correcting it to something within an acceptable range, thus removing the need to apply a patch? If so, at what threshold does the value become problematic?

This patch is necessary; we shouldn't restrict users from setting high limits. The threshold that can trigger the overflow depends on many factors: OST count, quota_qs_factor (default 2), and quota_boundary_factor (default 4), so there isn't a static threshold.

Assuming there are 100 OSTs, a blimit of less than 100P bytes should probably be safe, and the ilimit could be larger (since there is only one MDS), say less than 1000P inodes or so.

Comment by Peter Jones [ 19/Dec/11 ]

Niu

I understand that we want to fix this issue for a future release, but I just mean that the customer may prefer to work around the issue rather than apply a patch, as a more immediate way to avoid it. If the customer were to provide the three values you mention (OST count, quota_qs_factor, and quota_boundary_factor), would you be able to calculate the threshold more precisely?

Thanks

Peter

Comment by Johann Lombardi (Inactive) [ 19/Dec/11 ]

Could we ask the customer what the highest quota limit set for users/groups is?
As Niu stated, this patch only makes sense for very large quota limits.

A workaround could be to just disable the dynamic qunit size feature by running the following command on the MDS before re-enabling quotas:

# lctl set_param lquota.*.quota_switch_qs=0
Comment by Supporto Lustre Jnet2000 (Inactive) [ 19/Dec/11 ]

We are working to give to you the value of our highest group-quota limit.

Thanks

Comment by Supporto Lustre Jnet2000 (Inactive) [ 19/Dec/11 ]

I created the attached file with all the quotas set for groups; we don't use user quotas, only group quota limits.

Currently the highest hard limit for blocks is 734003200 and for inodes is 2000000.

Regards

Andrea Mattioli

Comment by Niu Yawei (Inactive) [ 19/Dec/11 ]

I created the attached file with all the quotas set for groups; we don't use user quotas, only group quota limits.

Currently the highest hard limit for blocks is 734003200 and for inodes is 2000000.

Such small limits should not trigger the overflow. Maybe there are other causes, or someone (or a user app) was trying to set a very high limit but didn't succeed because of the defect.

Comment by Supporto Lustre Jnet2000 (Inactive) [ 20/Dec/11 ]

So,
as a workaround we could:
1. switch quota off
2. disable quota_switch_qs with lctl set_param lquota.*.quota_switch_qs=0
3. switch quota back on

Is that correct?

What is the consequence of setting quota_switch_qs to 0?

thanks

Comment by Niu Yawei (Inactive) [ 20/Dec/11 ]

The consequence of setting quota_switch_qs=0 is that the qunit expanding/shrinking feature is disabled, and the quota unit (granularity) will always be the default size (128M for block quota, 5120 inodes for file quota).

Without qunit shrinking, writes are more likely to get -EDQUOT while the total usage is still below the limit, because at least one qunit (128M) of limit is allocated on each OST, even if the user doesn't have any objects on that OST.

BTW: do you know what kind of operations caused this problem? And if possible, could you provide the full stack trace that Johann mentioned in comment #2? Thanks a lot.

Comment by Elia Pinto [ 20/Dec/11 ]

Hi, I am the customer working with Supporto Lustre Jnet2000 on this issue. To get the kernel dump we need to enable kexec/kdump on RHEL. We will do so whenever possible, as this is a mission-critical production environment and we would need a reboot for this. But I agree that it is a useful thing to do.

In any case, from the Lustre stack trace (under /tmp) I noticed that when the system crashes it has a load average of about 400, and some Lustre processes (probably kernel threads) appear to be hung and never terminate, so the system crashes. These Lustre processes are enumerating the secondary user groups via the standard POSIX API (I am speaking of the default upcall /usr/sbin/l_getgroups), and we are using a central LDAP server as a POSIX user/group container (RFC2307bis). I believe the bug is that these processes are never terminated. Does that make sense?

In the meantime, could these be useful workarounds:

  • defining a timeout on the LDAP server for LDAP operations and connections?
  • introducing a caching mechanism with SSSD, because glibc nscd does not work very well (low cache hit rate)?

Thanks in advance

Comment by Supporto Lustre Jnet2000 (Inactive) [ 20/Dec/11 ]

We uploaded the Lustre dump when we opened the issue. We don't have any kernel dump.

Is the procedure to switch off quota_switch_qs correct? Do we need a quotacheck after quotaon?

We are investigating to find which operation causes the problem. In the past we experienced BZ 22755
("File system reports -28 – no space left on device, but OSTs are not full"), but we upgraded to 1.8.5.54, which should fix that problem.

Comment by Supporto Lustre Jnet2000 (Inactive) [ 20/Dec/11 ]

I made a test:
I set my hard inode quota limit to 17293822569102704639 as you suggested, and at the same time watched the Lustre server logs:

Dec 20 11:05:05 osiride-lp-034 kernel: BUG: soft lockup - CPU#7 stuck for 10s! [ll_mdt_11:7577]
Dec 20 11:05:05 osiride-lp-034 kernel: CPU 7:
Dec 20 11:05:05 osiride-lp-034 kernel: Modules linked in: mds(U) fsfilt_ldiskfs(U) mgs(U) mgc(U) ldiskfs(U) crc16(U) lustre(U) lov(U) mdc(U) lquota(U) osc(U)
ksocklnd(U) ptlrpc(U) obdclass(U) lnet(U) lvfs(U) libcfs(U) lock_dlm(U) gfs2(U) dlm(U) configfs(U) bonding(U) ipv6(U) xfrm_nalgo(U) crypto_api(U) video(U) b
acklight(U) sbs(U) power_meter(U) hwmon(U) i2c_ec(U) i2c_core(U) dell_wmi(U) wmi(U) button(U) battery(U) asus_acpi(U) acpi_memhotplug(U) ac(U) dm_round_robin
(U) dm_multipath(U) scsi_dh(U) parport_pc(U) lp(U) parport(U) amd64_edac_mod(U) tg3(U) pcspkr(U) edac_mc(U) shpchp(U) serio_raw(U) bnx2(U) hpilo(U) sg(U) bnx
2x(U) dm_raid45(U) dm_message(U) dm_region_hash(U) dm_mem_cache(U) dm_snapshot(U) dm_zero(U) dm_mirror(U) dm_log(U) dm_mod(U) usb_storage(U) qla2xxx(U) scsi_
transport_fc(U) cciss(U) sd_mod(U) scsi_mod(U) ext3(U) jbd(U) uhci_hcd(U) ohci_hcd(U) ehci_hcd(U)
Dec 20 11:05:05 osiride-lp-034 kernel: Pid: 7577, comm: ll_mdt_11 Tainted: G 2.6.18-194.17.1.el5_lustre.20110315140510 #1
Dec 20 11:05:05 osiride-lp-034 kernel: RIP: 0010:[<ffffffff888e930a>] [<ffffffff888e930a>] :lquota:dquot_create_oqaq+0x34a/0x510
Dec 20 11:05:05 osiride-lp-034 kernel: RSP: 0000:ffff81023060fac0 EFLAGS: 00000286
Dec 20 11:05:05 osiride-lp-034 kernel: RAX: 0000000000001400 RBX: ffff8101fe33e310 RCX: 0000000000000001
Dec 20 11:05:05 osiride-lp-034 kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000001400
Dec 20 11:05:05 osiride-lp-034 kernel: RBP: 0000000000000004 R08: 0000000000000008 R09: 0000000000200000
Dec 20 11:05:05 osiride-lp-034 kernel: R10: 000000000000000c R11: 0000000000000000 R12: efffffffffffffff
Dec 20 11:05:05 osiride-lp-034 kernel: R13: 0000000000000000 R14: 0000000000000282 R15: ffff8101fe33e320
Dec 20 11:05:05 osiride-lp-034 kernel: FS: 00002b5915a6a230(0000) GS:ffff810108e2a1c0(0000) knlGS:00000000f7ea06d0
Dec 20 11:05:05 osiride-lp-034 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
Dec 20 11:05:05 osiride-lp-034 kernel: CR2: 00000034aba288c0 CR3: 0000000000201000 CR4: 00000000000006e0
Dec 20 11:05:05 osiride-lp-034 kernel:
Dec 20 11:05:05 osiride-lp-034 kernel: Call Trace:
Dec 20 11:05:05 osiride-lp-034 kernel: [<ffffffff888e9e88>] :lquota:lustre_dqget+0x798/0x7e0
Dec 20 11:05:05 osiride-lp-034 kernel: [<ffffffff888f185e>] :lquota:mds_set_dqblk+0x8de/0x2010
Dec 20 11:05:05 osiride-lp-034 kernel: [<ffffffff887fa9c2>] :ptlrpc:lustre_pack_reply_flags+0x8e2/0x950
Dec 20 11:05:05 osiride-lp-034 kernel: [<ffffffff888e2e9a>] :lquota:mds_quota_ctl+0x16a/0x3c0
Dec 20 11:05:05 osiride-lp-034 kernel: [<ffffffff887faa59>] :ptlrpc:lustre_pack_reply+0x29/0xb0
Dec 20 11:05:05 osiride-lp-034 kernel: [<ffffffff88af278f>] :mds:mds_handle+0x3d7f/0x4d10
Dec 20 11:05:05 osiride-lp-034 kernel: [<ffffffff88692c7e>] :libcfs:libcfs_nid2str+0xbe/0x110
Dec 20 11:05:05 osiride-lp-034 kernel: [<ffffffff88803635>] :ptlrpc:ptlrpc_server_log_handling_request+0x105/0x130
Dec 20 11:05:05 osiride-lp-034 kernel: [<ffffffff888063b9>] :ptlrpc:ptlrpc_server_handle_request+0x989/0xe00
Dec 20 11:05:05 osiride-lp-034 kernel: [<ffffffff88807ac8>] :ptlrpc:ptlrpc_main+0xf88/0x1150
Dec 20 11:05:05 osiride-lp-034 kernel: [<ffffffff8005dfb1>] child_rip+0xa/0x11
Dec 20 11:05:05 osiride-lp-034 kernel: [<ffffffff88806b40>] :ptlrpc:ptlrpc_main+0x0/0x1150
Dec 20 11:05:05 osiride-lp-034 kernel: [<ffffffff8005dfa7>] child_rip+0x0/0x11

After the reboot
I tested your workaround:
lfs quotaoff /lustre_mount_point
lctl set_param lquota.*.quota_switch_qs=0
lfs quotaon /lustre_mount_point

Then I reset my quota as the first time, with a hard inode limit of 17293822569102704639; now I don't find any error in the Lustre server logs.

Regards

Andrea Mattioli

Comment by Niu Yawei (Inactive) [ 20/Dec/11 ]

In any case, from the Lustre stack trace (under /tmp) I noticed that when the system crashes it has a load average of about 400, and some Lustre processes (probably kernel threads) appear to be hung and never terminate, so the system crashes. These Lustre processes are enumerating the secondary user groups via the standard POSIX API (I am speaking of the default upcall /usr/sbin/l_getgroups), and we are using a central LDAP server as a POSIX user/group container (RFC2307bis). I believe the bug is that these processes are never terminated. Does that make sense?

Hmm, from the stack trace provided in this ticket, it seems the process is stuck in dquot_create_oqaq(), so it's very likely we ran into the endless loop while expanding the qunit in dquot_create_oqaq().

Comment by Peter Jones [ 20/Dec/11 ]

Andrea\Elia

So, is the immediate emergency dealt with? Are you satisfied knowing how to work around this bug, and that it will be fixed in a future version of Lustre?

Peter

Comment by Supporto Lustre Jnet2000 (Inactive) [ 21/Dec/11 ]

Hi Peter,
regarding the workaround, how can I set the parameter lquota.*.quota_switch_qs=0 permanently?
After a reboot this parameter returns to its default value (enabled).

Thanks

Andrea Mattioli

Comment by Niu Yawei (Inactive) [ 21/Dec/11 ]

Hi, Andrea

quota_switch_qs can't be set with 'lctl conf_param'; you might need to write a script to set it after Lustre is mounted.
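
For example, a minimal post-mount script (a sketch only: the /home mount point is taken from the outputs above, and how this hooks into the Red Hat Cluster Suite service scripts is site-specific):

#!/bin/sh
# Re-apply the LU-935 workaround after the Lustre targets are mounted.
# Mount point and integration with the cluster scripts are assumptions;
# adjust for your environment.
lfs quotaoff /home                          # 1. switch quota off
lctl set_param lquota.*.quota_switch_qs=0   # 2. disable dynamic qunit
lfs quotaon /home                           # 3. switch quota back on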

Comment by Supporto Lustre Jnet2000 (Inactive) [ 21/Dec/11 ]

After deploying the workaround in the production environment, we found this error in the Lustre server logs:

Dec 21 17:11:36 osiride-lp-031 kernel: LustreError: 16687:0:(fsfilt-ldiskfs.c:2248:fsfilt_ldiskfs_dquot()) operate dquot before it's enabled!
Dec 21 17:11:36 osiride-lp-031 kernel: LustreError: 16687:0:(quota_master.c:219:lustre_dqget()) can't read dquot from admin quotafile! (rc:-5)
Dec 21 17:11:36 osiride-lp-031 kernel: LustreError: 16687:0:(quota_context.c:699:dqacq_completion()) acquire qunit got error! (rc:-5)
Dec 21 17:11:36 osiride-lp-031 kernel: LustreError: 16687:0:(quota_context.c:699:dqacq_completion()) Skipped 1 previous similar message
Dec 21 17:11:36 osiride-lp-031 kernel: LustreError: 16674:0:(fsfilt-ldiskfs.c:2248:fsfilt_ldiskfs_dquot()) operate dquot before it's enabled!
Dec 21 17:11:36 osiride-lp-031 kernel: LustreError: 16674:0:(quota_master.c:219:lustre_dqget()) can't read dquot from admin quotafile! (rc:-5)
Dec 21 17:11:36 osiride-lp-031 kernel: LustreError: 16674:0:(quota_context.c:699:dqacq_completion()) acquire qunit got error! (rc:-5)

Regards

Andrea Mattioli

Comment by Johann Lombardi (Inactive) [ 21/Dec/11 ]

It seems that we were trying to acquire space from the master while it was not ready yet.
Hopefully, this was just a transient problem which was fixed when the master was properly set up.
Do you still see new instances of those messages, or did it happen only once?
Is quota functional now?

Comment by Supporto Lustre Jnet2000 (Inactive) [ 21/Dec/11 ]

Hi Johann, the log reports this every time I execute the command lfs quotaoff /mountpoint.
Yes, of course quota is enabled.

[root@osiride-lp-018 ~]# lfs quota -g wisi251 /home
Disk quotas for group wisi251 (gid 30902):
Filesystem kbytes quota limit grace files quota limit grace
/home 126300 0 256000 - 2356 0 500000 -

If I execute the command lctl get_param lquota.*.quota_switch_qs,
the output is
lquota.home-MDT0000.quota_switch_qs=changing qunit size is disabled

Is this correct?

Regards

Andrea Mattioli

Comment by Johann Lombardi (Inactive) [ 21/Dec/11 ]

> Hi Johann, the log reports this every time I execute the command lfs quotaoff /mountpoint

OK, it is just a transient issue then. I would not worry about those messages as long as everything works well once quota is re-enabled.

> lquota.home-MDT0000.quota_switch_qs=changing qunit size is disabled

Yes, this means that the dynamic qunit feature is correctly disabled.
Is quota working properly now with this workaround in place?

Comment by Supporto Lustre Jnet2000 (Inactive) [ 22/Dec/11 ]

Hi Johann,
of course, quota works.
I hope that the error does not reoccur!

Regards

Andrea Mattioli

Comment by Build Master (Inactive) [ 29/Dec/11 ]

Integrated in lustre-master » x86_64,server,el5,ofa #391
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision fc4b46df111bbf9d2207265d18b3f0d72f49502c)

Result = FAILURE
Oleg Drokin : fc4b46df111bbf9d2207265d18b3f0d72f49502c
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 29/Dec/11 ]

Integrated in lustre-master » x86_64,client,el5,ofa #391
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision fc4b46df111bbf9d2207265d18b3f0d72f49502c)

Result = SUCCESS
Oleg Drokin : fc4b46df111bbf9d2207265d18b3f0d72f49502c
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 29/Dec/11 ]

Integrated in lustre-master » i686,client,el6,inkernel #391
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision fc4b46df111bbf9d2207265d18b3f0d72f49502c)

Result = SUCCESS
Oleg Drokin : fc4b46df111bbf9d2207265d18b3f0d72f49502c
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 29/Dec/11 ]

Integrated in lustre-master » x86_64,server,el6,inkernel #391
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision fc4b46df111bbf9d2207265d18b3f0d72f49502c)

Result = SUCCESS
Oleg Drokin : fc4b46df111bbf9d2207265d18b3f0d72f49502c
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 29/Dec/11 ]

Integrated in lustre-master » x86_64,client,el6,inkernel #391
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision fc4b46df111bbf9d2207265d18b3f0d72f49502c)

Result = SUCCESS
Oleg Drokin : fc4b46df111bbf9d2207265d18b3f0d72f49502c
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 29/Dec/11 ]

Integrated in lustre-master » x86_64,server,el5,inkernel #391
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision fc4b46df111bbf9d2207265d18b3f0d72f49502c)

Result = SUCCESS
Oleg Drokin : fc4b46df111bbf9d2207265d18b3f0d72f49502c
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 29/Dec/11 ]

Integrated in lustre-master » x86_64,client,el5,inkernel #391
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision fc4b46df111bbf9d2207265d18b3f0d72f49502c)

Result = SUCCESS
Oleg Drokin : fc4b46df111bbf9d2207265d18b3f0d72f49502c
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 29/Dec/11 ]

Integrated in lustre-master » i686,server,el6,inkernel #391
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision fc4b46df111bbf9d2207265d18b3f0d72f49502c)

Result = SUCCESS
Oleg Drokin : fc4b46df111bbf9d2207265d18b3f0d72f49502c
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 29/Dec/11 ]

Integrated in lustre-master » x86_64,client,sles11,inkernel #391
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision fc4b46df111bbf9d2207265d18b3f0d72f49502c)

Result = SUCCESS
Oleg Drokin : fc4b46df111bbf9d2207265d18b3f0d72f49502c
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 29/Dec/11 ]

Integrated in lustre-master » i686,server,el5,ofa #391
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision fc4b46df111bbf9d2207265d18b3f0d72f49502c)

Result = SUCCESS
Oleg Drokin : fc4b46df111bbf9d2207265d18b3f0d72f49502c
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 29/Dec/11 ]

Integrated in lustre-master » x86_64,client,ubuntu1004,inkernel #391
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision fc4b46df111bbf9d2207265d18b3f0d72f49502c)

Result = SUCCESS
Oleg Drokin : fc4b46df111bbf9d2207265d18b3f0d72f49502c
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 29/Dec/11 ]

Integrated in lustre-master » i686,server,el5,inkernel #391
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision fc4b46df111bbf9d2207265d18b3f0d72f49502c)

Result = SUCCESS
Oleg Drokin : fc4b46df111bbf9d2207265d18b3f0d72f49502c
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 29/Dec/11 ]

Integrated in lustre-master » i686,client,el5,inkernel #391
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision fc4b46df111bbf9d2207265d18b3f0d72f49502c)

Result = SUCCESS
Oleg Drokin : fc4b46df111bbf9d2207265d18b3f0d72f49502c
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 29/Dec/11 ]

Integrated in lustre-master » i686,client,el5,ofa #391
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision fc4b46df111bbf9d2207265d18b3f0d72f49502c)

Result = SUCCESS
Oleg Drokin : fc4b46df111bbf9d2207265d18b3f0d72f49502c
Files :

  • lustre/quota/quota_master.c
Comment by Supporto Lustre Jnet2000 (Inactive) [ 19/Jan/12 ]

Hello,
we found a new problem with group quotas.
After the qunit workaround, if I set a hard block limit of 256000 I can't create a directory.

This is the example:

[root@osiride-lp-032 ~]# lfs setquota -g testtest -B 256000 -I 500000 /home

[root@osiride-lp-032 ~]# lfs quota -g testtest /home

Disk quotas for group testtest (gid 30942):

Filesystem kbytes quota limit grace files quota limit grace

/home 36 0 256000 - 9 0 500000 -

[root@osiride-lp-032 ~]# su - testtest

[testtest@osiride-lp-032 ~]$ mkdir asd

mkdir: cannot create directory `asd': Disk quota exceeded

However, it works if I set the hard block limit to 512000:

[root@osiride-lp-032 ~]# lfs setquota -g testtest -B 512000 -I 500000 /home

[root@osiride-lp-032 ~]# lfs quota -g testtest /home

Disk quotas for group testtest (gid 30942):

Filesystem kbytes quota limit grace files quota limit grace

/home 40 0 512000 - 10 0 500000 -

[root@osiride-lp-032 ~]# su - testtest

[testtest@osiride-lp-032 ~]$ mkdir asd

[testtest@osiride-lp-032 ~]$ ls

asd private public public_html

Trying the same limits as a user quota instead:

[root@osiride-lp-032 ~]# lfs setquota -g testtest -B 0 -I 0 /home

[root@osiride-lp-032 ~]# lfs quota -g testtest /home

Disk quotas for group testtest (gid 30942):

Filesystem kbytes quota limit grace files quota limit grace

/home 40 0 0 - 10 0 0 -

[root@osiride-lp-032 ~]# lfs setquota -u testtest -B 256000 -I 500000 /home

[root@osiride-lp-032 ~]# lfs quota -u testtest /home

Disk quotas for user testtest (uid 10942):

Filesystem kbytes quota limit grace files quota limit grace

/home 44 0 256000 - 11 0 500000 -

[root@osiride-lp-032 ~]# su - testtest

[testtest@osiride-lp-032 ~]$ mkdir asd2

[testtest@osiride-lp-032 ~]$ ls

asd asd2 private public public_html

With qunit re-enabled, it works properly.

Best regards

Andrea Mattioli

Comment by Niu Yawei (Inactive) [ 20/Jan/12 ]

I think it's because the limit (256000) is too small for the MDT/OSTs. Without qunit shrinking, at least one qunit of block limit (128M) is allocated for each OST and the MDT, and it can't be revoked by the master. If the total limit is too small for each OST/MDT to hold one qunit, then some of the OSTs/MDT will end up with a 1-block limit.

Hi, Andrea

You could run 'lfs quota -v -g testtest /home' to see whether the MDT has a 1-block limit in this case. To resolve it without enabling qunit shrinking, you have to set a high enough limit (at least one qunit per OST/MDT). Thanks.
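
As a rough back-of-envelope under those assumptions (the default 128M qunit, and the one MDT plus three OSTs that show up for /home in the lctl get_param output later in this ticket):

minimum comfortable limit ≈ (1 MDT + 3 OSTs) × 128 MB = 512 MB ≈ 524288 KB

so the failing 256000 KB (~250 MB) limit is well below that, while 512000 KB (~500 MB) is roughly four qunits, which would explain why it works in practice.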

Comment by Supporto Lustre Jnet2000 (Inactive) [ 20/Jan/12 ]

Hi,
this is the output of lfs quota -v -g testtest /home

[root@osiride-lp-032 ~]# lfs quota -v -g testtest /home
Disk quotas for group testtest (gid 30942):
Filesystem kbytes quota limit grace files quota limit grace
/home 40 0 256000 - 10 0 500000 -
home-MDT0000_UUID
16 - 1024 - 10 - 5120 -
home-OST0000_UUID
4 - 1024 - - - - -
home-OST0001_UUID
20 - 1024 - - - - -

Where can I see this value?

Thanks

Andrea Mattioli

Comment by Johann Lombardi (Inactive) [ 25/Jan/12 ]

Hm, that's strange; only 1MB was allocated to the slaves.
Could you please run the following command on all the lustre servers?

# lctl get_param lquota.*.quota_*_sz lquota.*.quota_switch_qs
Comment by Supporto Lustre Jnet2000 (Inactive) [ 26/Jan/12 ]

Hi, this is the output.

[root@osiride-lp-030 ~]# lctl get_param lquota.*.quota_*_sz lquota.*.quota_switch_qs
lquota.home-MDT0000.quota_btune_sz=67108850
lquota.home-MDT0000.quota_bunit_sz=134217728
lquota.home-MDT0000.quota_itune_sz=2560
lquota.home-MDT0000.quota_iunit_sz=5120
lquota.home-OST0000.quota_btune_sz=67108850
lquota.home-OST0000.quota_bunit_sz=134217728
lquota.home-OST0000.quota_itune_sz=2560
lquota.home-OST0000.quota_iunit_sz=5120
lquota.home-OST0001.quota_btune_sz=67108850
lquota.home-OST0001.quota_bunit_sz=134217728
lquota.home-OST0001.quota_itune_sz=2560
lquota.home-OST0001.quota_iunit_sz=5120
lquota.home-OST0002.quota_btune_sz=67108850
lquota.home-OST0002.quota_bunit_sz=134217728
lquota.home-OST0002.quota_itune_sz=2560
lquota.home-OST0002.quota_iunit_sz=5120
lquota.home-MDT0000.quota_switch_qs=changing qunit size is disabled

Best Regards

Andrea Mattioli

Comment by Johann Lombardi (Inactive) [ 26/Jan/12 ]

All the parameters look sane on the server side. At this point, we would need to collect a debug log. Here is how to proceed:

* On the MDS:
 # lctl set_param debug=+quota+vfstrace
 # lctl clear
* On the client, reproduce the problem:
 # su - testtest
 $ mkdir asd
* On the MDS:
 # lctl dk > /tmp/lustre_logs
 # lctl set_param debug=-quota-vfstrace

And then please attach /tmp/lustre_logs to this bug.

Comment by Supporto Lustre Jnet2000 (Inactive) [ 28/Jan/12 ]

OK,
in accordance with the manual, we decided to raise the minimum value for user and group quotas to 100MB * (12 OSTs + 1) = 1.3GB, rounded up to 2GB.

Comment by Supporto Lustre Jnet2000 (Inactive) [ 30/Jan/12 ]

Hi Johann,
we are planning to upgrade Lustre to the latest stable release. Could you point us to the latest Whamcloud Lustre version that includes Niu Yawei's patch for our quota bug?

Could you close this issue?

Thanks in advance.

Comment by Peter Jones [ 31/Jan/12 ]

The first release that this patch is scheduled for inclusion in is Lustre 2.2, which is expected out in a couple of months' time. I would suggest that you continue with the existing workaround for now.

Comment by Supporto Lustre Jnet2000 (Inactive) [ 31/Jan/12 ]

Hi Peter,
so do you suggest upgrading to 1.8.7-wc and not to build #391?

Comment by Johann Lombardi (Inactive) [ 01/Feb/12 ]

I would indeed suggest upgrading to 1.8.7-wc and using the workaround for now.
In fact, it is even worth trying to run without the workaround first, to see whether the problem can be reproduced with a stock 1.8.7-wc release.

Comment by Supporto Lustre Jnet2000 (Inactive) [ 02/Feb/12 ]

We will not be able to upgrade to another version of Lustre for the next 6 months... so we have to be sure to install a rock-solid version!

Could you please confirm the 1.8.7-wc version?

Please close the issue. Thanks

Comment by Peter Jones [ 02/Feb/12 ]

Yes Lustre 1.8.7-wc1 is the best option for you.

Comment by Build Master (Inactive) [ 05/Apr/12 ]

Integrated in lustre-b1_8 » x86_64,client,el5,inkernel #178
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision 18aafe97a6782f9c7c301125895d23c0dff9ad8d)

Result = SUCCESS
Johann Lombardi : 18aafe97a6782f9c7c301125895d23c0dff9ad8d
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 05/Apr/12 ]

Integrated in lustre-b1_8 » i686,server,el5,ofa #178
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision 18aafe97a6782f9c7c301125895d23c0dff9ad8d)

Result = SUCCESS
Johann Lombardi : 18aafe97a6782f9c7c301125895d23c0dff9ad8d
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 05/Apr/12 ]

Integrated in lustre-b1_8 » x86_64,client,el6,inkernel #178
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision 18aafe97a6782f9c7c301125895d23c0dff9ad8d)

Result = SUCCESS
Johann Lombardi : 18aafe97a6782f9c7c301125895d23c0dff9ad8d
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 05/Apr/12 ]

Integrated in lustre-b1_8 » x86_64,client,el5,ofa #178
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision 18aafe97a6782f9c7c301125895d23c0dff9ad8d)

Result = SUCCESS
Johann Lombardi : 18aafe97a6782f9c7c301125895d23c0dff9ad8d
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 05/Apr/12 ]

Integrated in lustre-b1_8 » i686,client,el6,inkernel #178
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision 18aafe97a6782f9c7c301125895d23c0dff9ad8d)

Result = SUCCESS
Johann Lombardi : 18aafe97a6782f9c7c301125895d23c0dff9ad8d
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 05/Apr/12 ]

Integrated in lustre-b1_8 » i686,client,el5,inkernel #178
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision 18aafe97a6782f9c7c301125895d23c0dff9ad8d)

Result = SUCCESS
Johann Lombardi : 18aafe97a6782f9c7c301125895d23c0dff9ad8d
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 05/Apr/12 ]

Integrated in lustre-b1_8 » x86_64,server,el5,inkernel #178
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision 18aafe97a6782f9c7c301125895d23c0dff9ad8d)

Result = SUCCESS
Johann Lombardi : 18aafe97a6782f9c7c301125895d23c0dff9ad8d
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 05/Apr/12 ]

Integrated in lustre-b1_8 » i686,client,el5,ofa #178
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision 18aafe97a6782f9c7c301125895d23c0dff9ad8d)

Result = SUCCESS
Johann Lombardi : 18aafe97a6782f9c7c301125895d23c0dff9ad8d
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 05/Apr/12 ]

Integrated in lustre-b1_8 » x86_64,server,el5,ofa #178
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision 18aafe97a6782f9c7c301125895d23c0dff9ad8d)

Result = SUCCESS
Johann Lombardi : 18aafe97a6782f9c7c301125895d23c0dff9ad8d
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 05/Apr/12 ]

Integrated in lustre-b1_8 » i686,server,el5,inkernel #178
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision 18aafe97a6782f9c7c301125895d23c0dff9ad8d)

Result = SUCCESS
Johann Lombardi : 18aafe97a6782f9c7c301125895d23c0dff9ad8d
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 08/Apr/12 ]

Integrated in lustre-b2_1 » x86_64,client,sles11,inkernel #41
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision ed57fd22280fe5d1e2f8a57f21e83922ad565b3a)

Result = SUCCESS
Oleg Drokin : ed57fd22280fe5d1e2f8a57f21e83922ad565b3a
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 08/Apr/12 ]

Integrated in lustre-b2_1 » i686,client,el6,inkernel #41
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision ed57fd22280fe5d1e2f8a57f21e83922ad565b3a)

Result = SUCCESS
Oleg Drokin : ed57fd22280fe5d1e2f8a57f21e83922ad565b3a
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 08/Apr/12 ]

Integrated in lustre-b2_1 » x86_64,server,el6,inkernel #41
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision ed57fd22280fe5d1e2f8a57f21e83922ad565b3a)

Result = SUCCESS
Oleg Drokin : ed57fd22280fe5d1e2f8a57f21e83922ad565b3a
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 08/Apr/12 ]

Integrated in lustre-b2_1 » i686,client,el5,ofa #41
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision ed57fd22280fe5d1e2f8a57f21e83922ad565b3a)

Result = SUCCESS
Oleg Drokin : ed57fd22280fe5d1e2f8a57f21e83922ad565b3a
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 08/Apr/12 ]

Integrated in lustre-b2_1 » x86_64,server,el5,ofa #41
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision ed57fd22280fe5d1e2f8a57f21e83922ad565b3a)

Result = SUCCESS
Oleg Drokin : ed57fd22280fe5d1e2f8a57f21e83922ad565b3a
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 08/Apr/12 ]

Integrated in lustre-b2_1 » x86_64,client,el6,inkernel #41
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision ed57fd22280fe5d1e2f8a57f21e83922ad565b3a)

Result = SUCCESS
Oleg Drokin : ed57fd22280fe5d1e2f8a57f21e83922ad565b3a
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 08/Apr/12 ]

Integrated in lustre-b2_1 » i686,server,el6,inkernel #41
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision ed57fd22280fe5d1e2f8a57f21e83922ad565b3a)

Result = SUCCESS
Oleg Drokin : ed57fd22280fe5d1e2f8a57f21e83922ad565b3a
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 08/Apr/12 ]

Integrated in lustre-b2_1 » x86_64,client,el5,inkernel #41
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision ed57fd22280fe5d1e2f8a57f21e83922ad565b3a)

Result = SUCCESS
Oleg Drokin : ed57fd22280fe5d1e2f8a57f21e83922ad565b3a
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 08/Apr/12 ]

Integrated in lustre-b2_1 » i686,server,el5,inkernel #41
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision ed57fd22280fe5d1e2f8a57f21e83922ad565b3a)

Result = SUCCESS
Oleg Drokin : ed57fd22280fe5d1e2f8a57f21e83922ad565b3a
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 08/Apr/12 ]

Integrated in lustre-b2_1 » x86_64,server,el5,inkernel #41
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision ed57fd22280fe5d1e2f8a57f21e83922ad565b3a)

Result = SUCCESS
Oleg Drokin : ed57fd22280fe5d1e2f8a57f21e83922ad565b3a
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 08/Apr/12 ]

Integrated in lustre-b2_1 » i686,server,el5,ofa #41
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision ed57fd22280fe5d1e2f8a57f21e83922ad565b3a)

Result = SUCCESS
Oleg Drokin : ed57fd22280fe5d1e2f8a57f21e83922ad565b3a
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 08/Apr/12 ]

Integrated in lustre-b2_1 » x86_64,client,el5,ofa #41
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision ed57fd22280fe5d1e2f8a57f21e83922ad565b3a)

Result = SUCCESS
Oleg Drokin : ed57fd22280fe5d1e2f8a57f21e83922ad565b3a
Files :

  • lustre/quota/quota_master.c
Comment by Build Master (Inactive) [ 08/Apr/12 ]

Integrated in lustre-b2_1 » i686,client,el5,inkernel #41
LU-935 quota: break early when b/i_unit_sz exceeded upper limit (Revision ed57fd22280fe5d1e2f8a57f21e83922ad565b3a)

Result = SUCCESS
Oleg Drokin : ed57fd22280fe5d1e2f8a57f21e83922ad565b3a
Files :

  • lustre/quota/quota_master.c