<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:36:52 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-3784] Quota issue on system upgraded to 2.4.x</title>
                <link>https://jira.whamcloud.com/browse/LU-3784</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Sanger upgraded a test system to 2.4.0 and is having issues with their quota. The accounting is not working correctly. This file system was originally 1.6.x, but was upgraded to 1.8.x a while back. &lt;/p&gt;

&lt;p&gt;The e2fsprogs were also upgraded to 1.42.7.wc1-1. &lt;/p&gt;

&lt;p&gt;They ran tunefs.lustre --quota on all the OSTs and MDTs. Originally, some of the OSSes had been missed in the e2fsprogs upgrade. &lt;/p&gt;

&lt;p&gt;They ran e2fsck -fp and got:&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (3788615680, 1231588) != expected (761856, 139)&lt;/p&gt;
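The two tuples in the warning are (bytes, inodes) for actual vs expected usage. A minimal parse sketch, using the sample line above; the fixed message format is an assumption inferred from that line:

```shell
# Parse an e2fsck QUOTA WARNING line into its actual/expected tuples.
# Format assumed: ... actual (BYTES, INODES) != expected (BYTES, INODES)
line='[QUOTA WARNING] Usage inconsistent for ID 0:actual (3788615680, 1231588) != expected (761856, 139)'
parsed=$(printf '%s\n' "$line" |
  awk '{ split($0, t, /[()]/)                 # t[2] and t[4] hold the two tuples
         gsub(/ /, "", t[2]); gsub(/ /, "", t[4])
         printf "actual=%s expected=%s", t[2], t[4] }')
echo "$parsed"
```

The byte counts differing by three orders of magnitude is what "accounting is not working" looks like at the e2fsck level.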

&lt;p&gt;Running e2fsck -fy afterwards looked clean:&lt;br/&gt;
e2fsck 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
Pass 1: Checking inodes, blocks, and sizes&lt;br/&gt;
Pass 2: Checking directory structure&lt;br/&gt;
Pass 3: Checking directory connectivity&lt;br/&gt;
Pass 4: Checking reference counts&lt;br/&gt;
Pass 5: Checking group summary information&lt;br/&gt;
lus01-OST0000: 2140413/488366080 files (3.2% non-contiguous), 668882261/1953457152 blocks&lt;/p&gt;

&lt;p&gt;but still the accounting was wrong:&lt;br/&gt;
root@isg-disc-mon-05:~# lfs quota -u jb23 /lustre/scratch101 -v&lt;br/&gt;
Disk quotas for user jb23 (uid 12296):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                      0       0       1       -       0       0       1       -&lt;br/&gt;
lus01-MDT0000_UUID&lt;br/&gt;
                      0       -       0       -       0       -       0       -&lt;br/&gt;
lus01-OST0000_UUID&lt;br/&gt;
                      0       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0001_UUID&lt;br/&gt;
                      0       -       0       -       -       -       -       -&lt;/p&gt;
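The per-target lines can be cross-checked against the filesystem total by summing the kbytes column of the _UUID entries. A minimal awk sketch; the sample lines mirror the listing above, and on a live system you would pipe the output of lfs quota -u jb23 -v in instead:

```shell
# Sum the kbytes column across the per-target (_UUID) lines of
# `lfs quota -v` output; each UUID line is followed by a data line.
total=$(printf '%s\n' \
    'lus01-MDT0000_UUID' \
    '                      0       -       0       -       0       -       0       -' \
    'lus01-OST0000_UUID' \
    '                      0       -       0       -       -       -       -       -' |
  awk '/_UUID$/ { pending = 1; next }
       pending  { kbytes += $1; pending = 0 }
       END      { print kbytes }')
echo "total kbytes across targets: $total"
```

Here every target reports 0 kbytes, which is why the filesystem-wide usage is also 0 despite real data on the OSTs.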

&lt;p&gt;They had these messages in the logs:&lt;br/&gt;
Aug 14 23:41:35 lus01-oss1 kernel: VFS: Quota for id 19228 referenced but not present.&lt;br/&gt;
Aug 14 23:41:35 lus01-oss1 kernel: VFS: Can&apos;t read quota structure for id 19228.&lt;br/&gt;
Aug 14 23:41:35 lus01-oss1 kernel: LustreError: 10738:0:(qsd_entry.c:215:qsd_refresh_usage()) $$$ failed to read disk usage, rc:-3 qsd:lus01-OST0000 qtype:usr id:19228 enforced:1 granted:0 pending:0 waiting:0 req:0 usage:0 qunit:0 qtune:0 edquot:0&lt;br/&gt;
Aug 14 23:41:35 lus01-oss1 kernel: Lustre: 10738:0:(qsd_reint.c:349:qsd_reconciliation()) lus01-OST0000: failed to locate lqe. &lt;span class=&quot;error&quot;&gt;&amp;#91;0x200000006:0x20000:0x0&amp;#93;&lt;/span&gt;, -3&lt;br/&gt;
Aug 14 23:41:35 lus01-oss1 kernel: Lustre: 10738:0:(qsd_reint.c:525:qsd_reint_main()) lus01-OST0000: reconciliation failed. &lt;span class=&quot;error&quot;&gt;&amp;#91;0x0:0x0:0x0&amp;#93;&lt;/span&gt;, -3&lt;br/&gt;
Aug 15 00:03:15 lus01-oss1 kernel: EXT4-fs (dm-7): Couldn&apos;t mount because of unsupported optional features (100)&lt;/p&gt;

&lt;p&gt;I asked them to try clearing the quota inodes:&lt;/p&gt;

&lt;p&gt;root@lus01-oss1:~# umount /export/vd01&lt;br/&gt;
root@lus01-oss1:~# { echo &quot;clri &amp;lt;3&amp;gt;&quot;; echo &quot;clri &amp;lt;4&amp;gt;&quot;; } | debugfs -w /dev/lus01-ost0/lus01 &lt;br/&gt;
debugfs 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
debugfs:  clri &amp;lt;3&amp;gt;&lt;br/&gt;
debugfs:  clri &amp;lt;4&amp;gt;&lt;br/&gt;
debugfs:  &lt;br/&gt;
root@lus01-oss1:~# e2fsck -fy /dev/lus01-ost0/lus01 &lt;br/&gt;
e2fsck 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
Pass 1: Checking inodes, blocks, and sizes&lt;br/&gt;
Quota inode is not regular file.  Clear? yes&lt;/p&gt;

&lt;p&gt;Quota inode is not regular file.  Clear? yes&lt;/p&gt;

&lt;p&gt;Pass 2: Checking directory structure&lt;br/&gt;
Pass 3: Checking directory connectivity&lt;br/&gt;
Pass 4: Checking reference counts&lt;br/&gt;
Pass 5: Checking group summary information&lt;br/&gt;
Block bitmap differences:  -(1548--1551) -(1555--1557) -(1560--1561) -(1563--1567) -1575 -(1582--1584) -(1586--1587) -1590 -(1592--1600) -(1602--1605) -(1610--1611) -(1614--1616) -(1618--1623) -1626 -(1628--1637) -(1639--1644) -(1655--1658) -(1660--1664) -(1666--1677) -(1679--1681) -(1684--1700) -(1704--1707) -(1712--1715) -(1728--1731) -(1736--1739) -(1745--1751) -(1754--1755) -(1856--1869) -(1903--1912) -(1914--1915) -(1981--1990) -(1992--2013) -4223 -(12320--12341) -12745 -12888&lt;br/&gt;
Fix? yes&lt;/p&gt;

&lt;p&gt;Free blocks count wrong for group #0 (3327, counted=3538).&lt;br/&gt;
Fix? yes&lt;/p&gt;

&lt;p&gt;Free blocks count wrong (1284432058, counted=1284432269).&lt;br/&gt;
Fix? yes&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;ERROR&amp;#93;&lt;/span&gt; quotaio.c:246:quota_file_open:: qh_ops-&amp;gt;check_file failed&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;ERROR&amp;#93;&lt;/span&gt; mkquota.c:543:quota_compare_and_update:: Open quota file failed&lt;br/&gt;
Update quota info for quota type 0? yes&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;ERROR&amp;#93;&lt;/span&gt; quotaio.c:246:quota_file_open:: qh_ops-&amp;gt;check_file failed&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;ERROR&amp;#93;&lt;/span&gt; mkquota.c:543:quota_compare_and_update:: Open quota file failed&lt;br/&gt;
Update quota info for quota type 1? yes&lt;/p&gt;


&lt;p&gt;lus01-OST0000: ***** FILE SYSTEM WAS MODIFIED *****&lt;br/&gt;
lus01-OST0000: 2140509/488366080 files (3.2% non-contiguous), 669025094/1953457152 blocks&lt;br/&gt;
root@lus01-oss1:~# &lt;/p&gt;

&lt;p&gt;But still no luck. There are definitely objects allocated and in use on the OSTs:&lt;br/&gt;
root@lus01-oss1:~# find /export/vd01 -uid 12296 -ls&lt;br/&gt;
   109 6144 -rw-rw-rw-   1 jb23     4294936579  6291456 Aug 15 17:33 /export/vd01/O/0/d25/72330521&lt;/p&gt;
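The allocation that quota should be reporting can be totalled from the find output (field 2 of find -ls is size in KB). A minimal sketch with the single line above as sample input; a live run would pipe find output in instead:

```shell
# Total the size-in-KB column (field 2) of `find ... -ls` output to see
# how much is actually allocated to the user on this OST.
sample='   109 6144 -rw-rw-rw-   1 jb23     4294936579  6291456 Aug 15 17:33 /export/vd01/O/0/d25/72330521'
kb_total=$(printf '%s\n' "$sample" | awk '{ sum += $2 } END { print sum }')
echo "allocated: ${kb_total} KB"
```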

&lt;p&gt;At this point I&apos;m not sure what to try next. Any ideas, or any debugging that can be done?&lt;/p&gt;
</description>
                <environment>Ubuntu</environment>
        <key id="20484">LU-3784</key>
            <summary>Quota issue on system upgraded to 2.4.x</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="niu">Niu Yawei</assignee>
                                    <reporter username="kitwestneat">Kit Westneat</reporter>
                        <labels>
                    </labels>
                <created>Tue, 20 Aug 2013 13:24:30 +0000</created>
                <updated>Tue, 17 Dec 2013 02:14:22 +0000</updated>
                            <resolved>Tue, 17 Dec 2013 02:14:22 +0000</resolved>
                                    <version>Lustre 2.4.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>10</watches>
                                                                            <comments>
                            <comment id="64600" author="pjones" created="Tue, 20 Aug 2013 13:52:12 +0000"  >&lt;p&gt;Niu&lt;/p&gt;

&lt;p&gt;Could you please advise on this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="64611" author="niu" created="Tue, 20 Aug 2013 15:27:28 +0000"  >&lt;p&gt;It looks like quota wasn&apos;t properly enabled on the backend filesystem. For the OSSes that missed the e2fsprogs upgrade, did you re-run tunefs.lustre --quota to enable quota after e2fsprogs was upgraded? Do you have logs from server start? Thanks.&lt;/p&gt;</comment>
                            <comment id="64613" author="james beal" created="Tue, 20 Aug 2013 15:56:42 +0000"  >&lt;p&gt;tunefs.lustre --quota was run on each of the OSS&apos;s again after we noticed that the package had not been upgraded.&lt;/p&gt;

&lt;p&gt;From what point do you want the kernel logs?&lt;/p&gt;</comment>
                            <comment id="64707" author="niu" created="Wed, 21 Aug 2013 02:57:50 +0000"  >&lt;p&gt;Hi James, I&apos;d like to see whether there are any error messages during tunefs.lustre --quota and at server start time. Thanks. BTW: what&apos;s the kernel version of the server?&lt;/p&gt;</comment>
                            <comment id="64717" author="james beal" created="Wed, 21 Aug 2013 08:32:34 +0000"  >&lt;p&gt;We are running a kernel compiled from redhat source.&lt;/p&gt;

&lt;p&gt;This is from my original ticket to DDN:&lt;/p&gt;

&lt;p&gt;I have upgraded the systems to Ubuntu precise and they are running (2.6.32-lustre-2.4).&lt;/p&gt;

&lt;p&gt;We are running e2fsprogs (1.42.7.wc1-1).&lt;/p&gt;

&lt;p&gt;The quota system is not working completely.&lt;/p&gt;

&lt;p&gt;tunefs.lustre --quota was run on each of the OSTs, and also on the MDS and MGS; for example:&lt;/p&gt;


&lt;p&gt;tunefs.lustre --quota /dev/lus01-ostf/lus01 &lt;br/&gt;
checking for existing Lustre data: found&lt;br/&gt;
Reading CONFIGS/mountdata&lt;/p&gt;

&lt;p&gt;  Read previous values:&lt;br/&gt;
Target:     lus01-OST000f&lt;br/&gt;
Index:      15&lt;br/&gt;
Lustre FS:  lus01&lt;br/&gt;
Mount type: ldiskfs&lt;br/&gt;
Flags:      0x2&lt;br/&gt;
             (OST )&lt;br/&gt;
Persistent mount opts: errors=remount-ro,extents,mballoc&lt;br/&gt;
Parameters: mgsnode=172.17.99.10@tcp mgsnode=172.17.99.9@tcp failover.node=172.17.99.8@tcp ost.quota_type=ug&lt;/p&gt;


&lt;p&gt;  Permanent disk data:&lt;br/&gt;
Target:     lus01-OST000f&lt;br/&gt;
Index:      15&lt;br/&gt;
Lustre FS:  lus01&lt;br/&gt;
Mount type: ldiskfs&lt;br/&gt;
Flags:      0x2&lt;br/&gt;
             (OST )&lt;br/&gt;
Persistent mount opts: errors=remount-ro,extents,mballoc&lt;br/&gt;
Parameters: mgsnode=172.17.99.10@tcp mgsnode=172.17.99.9@tcp failover.node=172.17.99.8@tcp ost.quota_type=ug&lt;/p&gt;


&lt;p&gt;I note we have &quot;ost.quota_type=ug&quot; in the parameters and that makes me think we might need to remove that persistent option.&lt;/p&gt;


&lt;p&gt;This is the kernel log from the MDT&lt;/p&gt;

&lt;p&gt;lctl&lt;br/&gt;
lctl &amp;gt;  get_param lus01.quota.mdt&lt;br/&gt;
error: get_param: /proc/{fs,sys}/{lnet,lustre}/lus01/quota/mdt: Found no match&lt;br/&gt;
&lt;br/&gt;
ls /proc/{fs,sys}/{lnet,lustre}/*/quota&lt;br/&gt;
ls: cannot access /proc/fs/lnet/*/quota: No such file or directory&lt;br/&gt;
ls: cannot access /proc/fs/lustre/*/quota: No such file or directory&lt;br/&gt;
ls: cannot access /proc/sys/lnet/*/quota: No such file or directory&lt;br/&gt;
ls: cannot access /proc/sys/lustre/*/quota: No such file or directory&lt;/p&gt;

&lt;p&gt;Aug 13 10:33:30 lus01-mds1 kernel: LustreError: 22667:0:(mgs_llog.c:2899:mgs_write_log_quota()) parameter quota.ost isn&apos;t supported (only quota.mdt &amp;amp; quota.ost are)&lt;br/&gt;
Aug 13 10:33:30 lus01-mds1 kernel: LustreError: 22667:0:(mgs_llog.c:3578:mgs_write_log_param()) err -22 on param &apos;quota.ost&apos;&lt;br/&gt;
Aug 13 10:33:30 lus01-mds1 kernel: LustreError: 22667:0:(mgs_handler.c:941:mgs_iocontrol()) MGS: setparam err: rc = -22&lt;br/&gt;
Aug 13 10:35:17 lus01-mds1 kernel: LustreError: 22692:0:(mgs_llog.c:2899:mgs_write_log_quota()) parameter quota.ost isn&apos;t supported (only quota.mdt &amp;amp; quota.ost are)&lt;br/&gt;
Aug 13 10:35:17 lus01-mds1 kernel: LustreError: 22692:0:(mgs_llog.c:3578:mgs_write_log_param()) err -22 on param &apos;quota.ost&apos;&lt;br/&gt;
Aug 13 10:35:17 lus01-mds1 kernel: LustreError: 22692:0:(mgs_handler.c:941:mgs_iocontrol()) MGS: setparam err: rc = -22&lt;/p&gt;


&lt;p&gt;After one failed MDT mount where I managed not to notice that I had an issue with the networking, the following is the end of a kernel log on an OSS.&lt;/p&gt;

&lt;p&gt;Aug 13 09:33:55 lus01-oss1 kernel: Lustre: lus01-OST0000: recovery is timed out, evict stale exports&lt;br/&gt;
Aug 13 09:33:55 lus01-oss1 kernel: Lustre: lus01-OST0000: disconnecting 1 stale clients&lt;br/&gt;
Aug 13 09:33:55 lus01-oss1 kernel: Lustre: lus01-OST0001: recovery is timed out, evict stale exports&lt;br/&gt;
Aug 13 09:33:55 lus01-oss1 kernel: Lustre: lus01-OST0001: disconnecting 1 stale clients&lt;br/&gt;
Aug 13 09:33:55 lus01-oss1 kernel: Lustre: lus01-OST0000: Recovery over after 5:00, of 3 clients 2 recovered and 1 was evicted.&lt;br/&gt;
Aug 13 09:33:55 lus01-oss1 kernel: Lustre: lus01-OST0003: Recovery over after 5:00, of 3 clients 2 recovered and 1 was evicted.&lt;br/&gt;
Aug 13 09:36:04 lus01-oss1 kernel: Lustre: 9744:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1376382859/real 1376382859&amp;#93;&lt;/span&gt;  req@ffff880438ee5000 x1443241492742768/t0(0) o38-&amp;gt;lus01-MDT0000-lwp-OST0000@172.17.99.10@tcp:12/10 lens 400/544 e 0 to 1 dl 1376382964 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1&lt;br/&gt;
Aug 13 09:36:04 lus01-oss1 kernel: Lustre: 9744:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 11 previous similar messages&lt;br/&gt;
Aug 13 09:40:39 lus01-oss1 kernel: Lustre: 9744:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1376383134/real 1376383137&amp;#93;&lt;/span&gt;  req@ffff88040276bc00 x1443241492742892/t0(0) o38-&amp;gt;lus01-MDT0000-lwp-OST0003@172.17.99.9@tcp:12/10 lens 400/544 e 0 to 1 dl 1376383239 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1&lt;br/&gt;
Aug 13 09:40:39 lus01-oss1 kernel: Lustre: 9744:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 22 previous similar messages&lt;br/&gt;
Aug 13 09:49:49 lus01-oss1 kernel: Lustre: 9744:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1376383684/real 1376383684&amp;#93;&lt;/span&gt;  req@ffff8803381c5c00 x1443241492743124/t0(0) o38-&amp;gt;lus01-MDT0000-lwp-OST0003@172.17.99.9@tcp:12/10 lens 400/544 e 0 to 1 dl 1376383789 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1&lt;br/&gt;
Aug 13 09:49:49 lus01-oss1 kernel: Lustre: 9744:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 32 previous similar messages&lt;br/&gt;
Aug 13 09:50:04 lus01-oss1 kernel: Lustre: lus01-OST0000: deleting orphan objects from 0x0:72330153 to 0x0:72330274&lt;br/&gt;
Aug 13 09:50:04 lus01-oss1 kernel: Lustre: lus01-OST0001: deleting orphan objects from 0x0:73064826 to 0x0:73065542&lt;br/&gt;
Aug 13 09:50:04 lus01-oss1 kernel: Lustre: lus01-OST0006: deleting orphan objects from 0x0:71940684 to 0x0:71941216&lt;br/&gt;
Aug 13 09:50:04 lus01-oss1 kernel: Lustre: lus01-OST0003: deleting orphan objects from 0x0:72426685 to 0x0:72426837&lt;br/&gt;
Aug 13 09:50:04 lus01-oss1 kernel: Lustre: lus01-OST0004: deleting orphan objects from 0x0:71736668 to 0x0:71736925&lt;br/&gt;
Aug 13 09:50:04 lus01-oss1 kernel: Lustre: lus01-OST0005: deleting orphan objects from 0x0:70934781 to 0x0:70934907&lt;br/&gt;
Aug 13 09:50:04 lus01-oss1 kernel: Lustre: lus01-OST0002: deleting orphan objects from 0x0:72574984 to 0x0:72575499&lt;br/&gt;
Aug 13 10:03:12 lus01-oss1 kernel: EXT4-fs (dm-9): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 13 10:03:12 lus01-oss1 kernel: EXT4-fs (dm-8): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 13 10:03:12 lus01-oss1 kernel: EXT4-fs (dm-7): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 13 10:03:12 lus01-oss1 kernel: EXT4-fs (dm-6): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 13 10:03:12 lus01-oss1 kernel: EXT4-fs (dm-5): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 13 10:03:13 lus01-oss1 kernel: EXT4-fs (dm-2): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 13 10:03:13 lus01-oss1 kernel: EXT4-fs (dm-3): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 13 10:03:13 lus01-oss1 kernel: EXT4-fs (dm-1): Couldn&apos;t mount because of unsupported optional features (100)&lt;/p&gt;

&lt;p&gt;The limits appear to be correct ( we set them all to 1 before the system was decommissioned ) however the current usage is not right.&lt;/p&gt;

&lt;p&gt;Some errors happened when getting quota info. Some devices may be not working or deactivated. The data in &quot;[]&quot; is inaccurate.&lt;br/&gt;
root@isg-disc-mon-05:/lustre/scratch101/ensembl# lfs quota -u jb23 -v  /lustre/scratch101&lt;br/&gt;
Disk quotas for user jb23 (uid 12296):&lt;br/&gt;
    Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                   &lt;span class=&quot;error&quot;&gt;&amp;#91;0&amp;#93;&lt;/span&gt;       0       1       -     &lt;span class=&quot;error&quot;&gt;&amp;#91;0&amp;#93;&lt;/span&gt;       0       1       -&lt;br/&gt;
lus01-MDT0000_UUID&lt;br/&gt;
                     0       -       0       -       0       -       0       -&lt;br/&gt;
lus01-OST0000_UUID&lt;br/&gt;
                     0       -       0       -       -       -       -       -&lt;br/&gt;
...&lt;/p&gt;</comment>
                            <comment id="64718" author="james beal" created="Wed, 21 Aug 2013 08:34:42 +0000"  >&lt;p&gt;More from the original ticket:&lt;/p&gt;

&lt;p&gt;I note that one of the OSS&apos;s seems to get confused about multiple mount protection:&lt;/p&gt;

&lt;p&gt;root@lus01-oss4:~# ps -ef |grep fsck&lt;br/&gt;
root     28034 22654  0 14:26 pts/1    00:00:00 grep fsck&lt;br/&gt;
root@lus01-oss4:~# mount /export/vd30&lt;br/&gt;
mount.lustre: mount /dev/mapper/lus01--ost1d-lus01 at /export/vd30 failed: Invalid argument&lt;br/&gt;
This may have multiple causes.&lt;br/&gt;
Are the mount options correct?&lt;br/&gt;
Check the syslog for more info.&lt;/p&gt;

&lt;p&gt;Aug 14 14:25:35 lus01-oss4 kernel: LustreError: 28023:0:(obd_mount_server.c:1665:server_fill_super()) Unable to start osd on /dev/mapper/lus01--ost1d-lus01: -22&lt;br/&gt;
Aug 14 14:25:35 lus01-oss4 kernel: LustreError: 28023:0:(obd_mount.c:1267:lustre_fill_super()) Unable to mount  (-22)&lt;br/&gt;
Aug 14 14:26:46 lus01-oss4 kernel: LDISKFS-fs warning (device dm-0): ldiskfs_multi_mount_protect: fsck is running on the filesystem&lt;br/&gt;
Aug 14 14:26:46 lus01-oss4 kernel: LDISKFS-fs warning (device dm-0): ldiskfs_multi_mount_protect: MMP failure info: last update time: 1376483804, last update node: lus01-oss4, last update device: /dev/lus01-ost1d/lus01&lt;br/&gt;
Aug 14 14:26:46 lus01-oss4 kernel: &lt;br/&gt;
Aug 14 14:26:46 lus01-oss4 kernel: LustreError: 28036:0:(osd_handler.c:5349:osd_mount()) lus01-OST001d-osd: can&apos;t mount /dev/mapper/lus01--ost1d-lus01: -22&lt;/p&gt;

&lt;p&gt;&#8230;.&lt;/p&gt;

&lt;p&gt;root@lus01-oss4:~# tune2fs -f -E clear_mmp  /dev/lus01-ost1d/lus01&lt;br/&gt;
tune2fs 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
root@lus01-oss4:~# mount /export/vd30&lt;/p&gt;

&lt;p&gt;I then noted the following message and therefore repeated tunefs.lustre --quota:&lt;/p&gt;

&lt;p&gt;Aug 14 14:28:56 lus01-oss4 kernel: LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. quota=off. Opts: &lt;/p&gt;

&lt;p&gt;root@lus01-oss4:~# tunefs.lustre --quota  /dev/lus01-ost1d/lus01&lt;br/&gt;
checking for existing Lustre data: found&lt;br/&gt;
Reading CONFIGS/mountdata&lt;/p&gt;

&lt;p&gt;  Read previous values:&lt;br/&gt;
Target:     lus01-OST001d&lt;br/&gt;
Index:      29&lt;br/&gt;
Lustre FS:  lus01&lt;br/&gt;
Mount type: ldiskfs&lt;br/&gt;
Flags:      0x2&lt;br/&gt;
             (OST )&lt;br/&gt;
Persistent mount opts: errors=remount-ro,extents,mballoc&lt;br/&gt;
Parameters: mgsnode=172.17.99.10@tcp mgsnode=172.17.99.9@tcp failover.node=172.17.99.7@tcp ost.quota_type=ug&lt;/p&gt;


&lt;p&gt;  Permanent disk data:&lt;br/&gt;
Target:     lus01-OST001d&lt;br/&gt;
Index:      29&lt;br/&gt;
Lustre FS:  lus01&lt;br/&gt;
Mount type: ldiskfs&lt;br/&gt;
Flags:      0x2&lt;br/&gt;
             (OST )&lt;br/&gt;
Persistent mount opts: errors=remount-ro,extents,mballoc&lt;br/&gt;
Parameters: mgsnode=172.17.99.10@tcp mgsnode=172.17.99.9@tcp failover.node=172.17.99.7@tcp ost.quota_type=ug&lt;/p&gt;

&lt;p&gt;root@lus01-oss4:~# &lt;/p&gt;

&lt;p&gt;Now all the discs are mounted thus:&lt;/p&gt;

&lt;p&gt;Aug 14 16:06:06 lus01-oss2 kernel: LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. quota=on. Opts: &lt;/p&gt;

&lt;p&gt;And we still do not get quotas correctly.&lt;/p&gt;</comment>
                            <comment id="64719" author="james beal" created="Wed, 21 Aug 2013 08:36:07 +0000"  >&lt;p&gt;And more:&lt;/p&gt;


&lt;p&gt;That didn&apos;t help. In case it is not clear (as it wasn&apos;t to me), I think the system has the old quotas in place (we did set them to 1 when we decommissioned the file system).&lt;/p&gt;

&lt;p&gt;root@isg-disc-mon-05:~# lfs quota -u jb23 /lustre/scratch101&lt;br/&gt;
Disk quotas for user jb23 (uid 12296):&lt;br/&gt;
    Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                     0       0       1       -       0       0       1       -&lt;br/&gt;
root@isg-disc-mon-05:~# lfs setquota -u jb23 -I 2 -B 2 /lustre/scratch101&lt;br/&gt;
root@isg-disc-mon-05:~# lfs quota -u jb23 /lustre/scratch101&lt;br/&gt;
Disk quotas for user jb23 (uid 12296):&lt;br/&gt;
    Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                     0       0       2       -       0       0       2       -&lt;br/&gt;
root@isg-disc-mon-05:~# lfs setquota -u jb23 -I 1 -B 1 /lustre/scratch101&lt;br/&gt;
root@isg-disc-mon-05:~# lfs quota -u jb23 /lustre/scratch101&lt;br/&gt;
Disk quotas for user jb23 (uid 12296):&lt;br/&gt;
    Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                     0       0       1       -       0       0       1       -&lt;br/&gt;
root@isg-disc-mon-05:~# &lt;/p&gt;

&lt;p&gt;The e2fsck -fp output is interesting:&lt;/p&gt;

&lt;p&gt;root@lus01-oss1:~#  e2fsck -fp /dev/lus01-ost0/lus01 &lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (3788615680, 1231588) != expected (761856, 139)&lt;br/&gt;
lus01-OST0000: Update quota info for quota type 0Project-Id-Version: e2fsprogs&lt;br/&gt;
Report-Msgid-Bugs-To: FULL NAME &amp;lt;EMAIL@ADDRESS&amp;gt;&lt;br/&gt;
POT-Creation-Date: 2008-06-17 22:16-0400&lt;br/&gt;
PO-Revision-Date: 2008-08-10 09:38+0000&lt;br/&gt;
Last-Translator: Jen Ockwell &amp;lt;jenfraggleubuntu@googlemail.com&amp;gt;&lt;br/&gt;
Language-Team: English (United Kingdom) &amp;lt;en_GB@li.org&amp;gt;&lt;br/&gt;
MIME-Version: 1.0&lt;br/&gt;
Content-Type: text/plain; charset=UTF-8&lt;br/&gt;
Content-Transfer-Encoding: 8bit&lt;br/&gt;
X-Launchpad-Export-Date: 2013-01-28 10:46+0000&lt;br/&gt;
X-Generator: Launchpad (build 16451)&lt;br/&gt;
.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (4592496640, 1237152) != expected (761856, 139)&lt;br/&gt;
lus01-OST0000: Update quota info for quota type 1Project-Id-Version: e2fsprogs&lt;br/&gt;
Report-Msgid-Bugs-To: FULL NAME &amp;lt;EMAIL@ADDRESS&amp;gt;&lt;br/&gt;
POT-Creation-Date: 2008-06-17 22:16-0400&lt;br/&gt;
PO-Revision-Date: 2008-08-10 09:38+0000&lt;br/&gt;
Last-Translator: Jen Ockwell &amp;lt;jenfraggleubuntu@googlemail.com&amp;gt;&lt;br/&gt;
Language-Team: English (United Kingdom) &amp;lt;en_GB@li.org&amp;gt;&lt;br/&gt;
MIME-Version: 1.0&lt;br/&gt;
Content-Type: text/plain; charset=UTF-8&lt;br/&gt;
Content-Transfer-Encoding: 8bit&lt;br/&gt;
X-Launchpad-Export-Date: 2013-01-28 10:46+0000&lt;br/&gt;
X-Generator: Launchpad (build 16451)&lt;br/&gt;
.&lt;br/&gt;
lus01-OST0000: 2140413/488366080 files (3.2% non-contiguous), 668882260/1953457152 blocks&lt;/p&gt;</comment>
                            <comment id="64720" author="james beal" created="Wed, 21 Aug 2013 08:36:55 +0000"  >&lt;p&gt;more:&lt;/p&gt;

&lt;p&gt;That is interesting. I wonder how that got into the e2fsprogs... Maybe some localization files got screwed up? Does the error reoccur if you rerun e2fsck? If so, can you try it with -fy?&lt;/p&gt;


&lt;p&gt;e2fsck  /dev/lus01-ost0/lus01 &lt;br/&gt;
e2fsck 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
lus01-OST0000: clean, 2140413/488366080 files, 668882263/1953457152 blocks&lt;/p&gt;

&lt;p&gt;root@lus01-oss1:~#  e2fsck -fp  /dev/lus01-ost0/lus01 &lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (3788619776, 1231588) != expected (4096, 32)&lt;br/&gt;
lus01-OST0000: Update quota info for quota type 0Project-Id-Version: e2fsprogs&lt;br/&gt;
Report-Msgid-Bugs-To: FULL NAME &amp;lt;EMAIL@ADDRESS&amp;gt;&lt;br/&gt;
POT-Creation-Date: 2008-06-17 22:16-0400&lt;br/&gt;
PO-Revision-Date: 2008-08-10 09:38+0000&lt;br/&gt;
Last-Translator: Jen Ockwell &amp;lt;jenfraggleubuntu@googlemail.com&amp;gt;&lt;br/&gt;
Language-Team: English (United Kingdom) &amp;lt;en_GB@li.org&amp;gt;&lt;br/&gt;
MIME-Version: 1.0&lt;br/&gt;
Content-Type: text/plain; charset=UTF-8&lt;br/&gt;
Content-Transfer-Encoding: 8bit&lt;br/&gt;
X-Launchpad-Export-Date: 2013-01-28 10:46+0000&lt;br/&gt;
X-Generator: Launchpad (build 16451)&lt;br/&gt;
.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (4592500736, 1237152) != expected (4096, 32)&lt;br/&gt;
lus01-OST0000: Update quota info for quota type 1Project-Id-Version: e2fsprogs&lt;br/&gt;
Report-Msgid-Bugs-To: FULL NAME &amp;lt;EMAIL@ADDRESS&amp;gt;&lt;br/&gt;
POT-Creation-Date: 2008-06-17 22:16-0400&lt;br/&gt;
PO-Revision-Date: 2008-08-10 09:38+0000&lt;br/&gt;
Last-Translator: Jen Ockwell &amp;lt;jenfraggleubuntu@googlemail.com&amp;gt;&lt;br/&gt;
Language-Team: English (United Kingdom) &amp;lt;en_GB@li.org&amp;gt;&lt;br/&gt;
MIME-Version: 1.0&lt;br/&gt;
Content-Type: text/plain; charset=UTF-8&lt;br/&gt;
Content-Transfer-Encoding: 8bit&lt;br/&gt;
X-Launchpad-Export-Date: 2013-01-28 10:46+0000&lt;br/&gt;
X-Generator: Launchpad (build 16451)&lt;br/&gt;
.&lt;br/&gt;
lus01-OST0000: 2140413/488366080 files (3.2% non-contiguous), 668882261/1953457152 blocks&lt;/p&gt;

&lt;p&gt;root@lus01-oss1:~# &lt;br/&gt;
root@lus01-oss1:~# &lt;br/&gt;
root@lus01-oss1:~#  e2fsck -fy  /dev/lus01-ost0/lus01 &lt;br/&gt;
e2fsck 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
Pass 1: Checking inodes, blocks, and sizes&lt;br/&gt;
Pass 2: Checking directory structure&lt;br/&gt;
Pass 3: Checking directory connectivity&lt;br/&gt;
Pass 4: Checking reference counts&lt;br/&gt;
Pass 5: Checking group summary information&lt;br/&gt;
lus01-OST0000: 2140413/488366080 files (3.2% non-contiguous), 668882261/1953457152 blocks&lt;/p&gt;

&lt;p&gt;followed by&lt;/p&gt;

&lt;p&gt;root@lus01-oss1:~#  e2fsck -fp  /dev/lus01-ost0/lus01 &lt;br/&gt;
lus01-OST0000: 2140413/488366080 files (3.2% non-contiguous), 668882261/1953457152 blocks&lt;/p&gt;



&lt;p&gt;Also, what does lfs quota -u root look like?&lt;/p&gt;


&lt;p&gt;root@isg-disc-mon-05:~# lfs quota -u root  /lustre/scratch101&lt;br/&gt;
Disk quotas for user root (uid 0):&lt;br/&gt;
    Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                     5       0       0       -       0       0       0       -&lt;/p&gt;</comment>
                            <comment id="64721" author="james beal" created="Wed, 21 Aug 2013 08:40:57 +0000"  >&lt;p&gt;On 15 Aug 2013, at 09:54, Guy Coates &amp;lt;gmpc@sanger.ac.uk&amp;gt; wrote:&lt;/p&gt;

&lt;p&gt;Hi all,&lt;/p&gt;

&lt;p&gt;The logs on the OST are interesting at mount time; it looks like there is a corrupt quota entry &lt;br/&gt;
(Can&apos;t read quota structure for id 19228). I wonder if that is a fatal error for the quota subsystem.&lt;/p&gt;

&lt;p&gt;Will a forced fsck fix that up?&lt;/p&gt;



&lt;p&gt;Aug 14 23:40:59 lus01-oss1 kernel: LDISKFS-fs (dm-14): mounted filesystem with ordered data mode. quota=on. Opts: &lt;br/&gt;
Aug 14 23:40:59 lus01-oss1 kernel: Lustre: 10735:0:(ofd_dev.c:221:ofd_process_config()) For interoperability, skip this ost.quota_type. It is obsolete.&lt;br/&gt;
Aug 14 23:41:11 lus01-oss1 kernel: Lustre: lus01-OST0000: Will be in recovery for at least 5:00, or until 5 clients reconnect&lt;br/&gt;
Aug 14 23:41:35 lus01-oss1 kernel: Lustre: lus01-OST0000: Recovery over after 0:24, of 5 clients 5 recovered and 0 were evicted.&lt;br/&gt;
Aug 14 23:41:35 lus01-oss1 kernel: Lustre: lus01-OST0000: deleting orphan objects from 0x0:72330153 to 0x0:72330466&lt;br/&gt;
Aug 14 23:41:35 lus01-oss1 kernel: VFS: Quota for id 19228 referenced but not present.&lt;br/&gt;
Aug 14 23:41:35 lus01-oss1 kernel: VFS: Can&apos;t read quota structure for id 19228.&lt;br/&gt;
Aug 14 23:41:35 lus01-oss1 kernel: LustreError: 10738:0:(qsd_entry.c:215:qsd_refresh_usage()) $$$ failed to read disk usage, rc:-3 qsd:lus01-OST0000 qtype:usr id:19228 enforced:1 granted:0 pending:0 waiting:0 req:0 usage:0 qunit:0 qtune:0 edquot:0&lt;br/&gt;
Aug 14 23:41:35 lus01-oss1 kernel: Lustre: 10738:0:(qsd_reint.c:349:qsd_reconciliation()) lus01-OST0000: failed to locate lqe. &lt;span class=&quot;error&quot;&gt;&amp;#91;0x200000006:0x20000:0x0&amp;#93;&lt;/span&gt;, -3&lt;br/&gt;
Aug 14 23:41:35 lus01-oss1 kernel: Lustre: 10738:0:(qsd_reint.c:525:qsd_reint_main()) lus01-OST0000: reconciliation failed. &lt;span class=&quot;error&quot;&gt;&amp;#91;0x0:0x0:0x0&amp;#93;&lt;/span&gt;, -3&lt;br/&gt;
Aug 15 00:03:15 lus01-oss1 kernel: EXT4-fs (dm-7): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 15 00:03:15 lus01-oss1 kernel: EXT4-fs (dm-6): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 15 00:03:15 lus01-oss1 kernel: EXT4-fs (dm-5): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 15 00:03:15 lus01-oss1 kernel: EXT4-fs (dm-5): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 15 00:03:16 lus01-oss1 kernel: EXT4-fs (dm-4): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 15 00:03:16 lus01-oss1 kernel: EXT4-fs (dm-3): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 15 00:03:16 lus01-oss1 kernel: EXT4-fs (dm-2): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 15 00:03:16 lus01-oss1 kernel: EXT4-fs (dm-1): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 15 00:03:16 lus01-oss1 kernel: EXT4-fs (dm-0): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 15 01:03:30 lus01-oss1 kernel: EXT4-fs (dm-7): Couldn&apos;t mount because of unsupported optional features (100)&lt;/p&gt;

&lt;p&gt;&amp;lt;last messages repeated indefinitely&amp;gt;&lt;/p&gt;

&lt;p&gt;....&lt;/p&gt;

&lt;p&gt;On 15 Aug 2013, at 18:44, James Beal &amp;lt;JAMES.BEAL@SANGER.AC.UK&amp;gt; wrote:&lt;/p&gt;


&lt;p&gt;On 15 Aug 2013, at 15:55, James Beal  wrote:&lt;/p&gt;


&lt;p&gt;On 15 Aug 2013, at 15:50, Kit Westneat  wrote:&lt;/p&gt;

&lt;p&gt;Hi Guy,&lt;/p&gt;

&lt;p&gt;I think that&apos;s the OST that James ran fsck on already, strange. If you want to try deleting the quota inodes and regenerating them, you can do this:&lt;/p&gt;


&lt;p&gt;Thanks Kit, I will try that and see what happens :)&lt;/p&gt;

&lt;p&gt;Do we need to do it on all the OSTs and the MDT?&lt;/p&gt;

&lt;p&gt;root@lus01-oss1:~# umount /export/vd01&lt;br/&gt;
root@lus01-oss1:~# { echo &quot;clri &amp;lt;3&amp;gt;&quot;; echo &quot;clri &amp;lt;4&amp;gt;&quot;; } | debugfs -w /dev/lus01-ost0/lus01 &lt;br/&gt;
debugfs 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
debugfs:  clri &amp;lt;3&amp;gt;&lt;br/&gt;
debugfs:  clri &amp;lt;4&amp;gt;&lt;br/&gt;
debugfs:  &lt;br/&gt;
root@lus01-oss1:~# e2fsck -fy /dev/lus01-ost0/lus01 &lt;br/&gt;
e2fsck 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
Pass 1: Checking inodes, blocks, and sizes&lt;br/&gt;
Quota inode is not regular file.  Clear? yes&lt;/p&gt;

&lt;p&gt;Quota inode is not regular file.  Clear? yes&lt;/p&gt;

&lt;p&gt;Pass 2: Checking directory structure&lt;br/&gt;
Pass 3: Checking directory connectivity&lt;br/&gt;
Pass 4: Checking reference counts&lt;br/&gt;
Pass 5: Checking group summary information&lt;br/&gt;
Block bitmap differences:  -(1548--1551) -(1555--1557) -(1560--1561) -(1563--1567) -1575 -(1582--1584) -(1586--1587) -1590 -(1592--1600) -(1602--1605) -(1610--1611) -(1614--1616) -(1618--1623) -1626 -(1628--1637) -(1639--1644) -(1655--1658) -(1660--1664) -(1666--1677) -(1679--1681) -(1684--1700) -(1704--1707) -(1712--1715) -(1728--1731) -(1736--1739) -(1745--1751) -(1754--1755) -(1856--1869) -(1903--1912) -(1914--1915) -(1981--1990) -(1992--2013) -4223 -(12320--12341) -12745 -12888&lt;br/&gt;
Fix? yes&lt;/p&gt;

&lt;p&gt;Free blocks count wrong for group #0 (3327, counted=3538).&lt;br/&gt;
Fix? yes&lt;/p&gt;

&lt;p&gt;Free blocks count wrong (1284432058, counted=1284432269).&lt;br/&gt;
Fix? yes&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;ERROR&amp;#93;&lt;/span&gt; quotaio.c:246:quota_file_open:: qh_ops-&amp;gt;check_file failed&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;ERROR&amp;#93;&lt;/span&gt; mkquota.c:543:quota_compare_and_update:: Open quota file failed&lt;br/&gt;
Update quota info for quota type 0? yes&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;ERROR&amp;#93;&lt;/span&gt; quotaio.c:246:quota_file_open:: qh_ops-&amp;gt;check_file failed&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;ERROR&amp;#93;&lt;/span&gt; mkquota.c:543:quota_compare_and_update:: Open quota file failed&lt;br/&gt;
Update quota info for quota type 1? yes&lt;/p&gt;


&lt;p&gt;lus01-OST0000: ***** FILE SYSTEM WAS MODIFIED *****&lt;br/&gt;
lus01-OST0000: 2140509/488366080 files (3.2% non-contiguous), 669025094/1953457152 blocks&lt;br/&gt;
root@lus01-oss1:~# &lt;/p&gt;


&lt;p&gt;root@isg-disc-mon-05:~# lfs quota -v -u jb23 /lustre/scratch101&lt;br/&gt;
Disk quotas for user jb23 (uid 12296):&lt;br/&gt;
    Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                     3*      0       1       -       0       0       1       -&lt;br/&gt;
lus01-MDT0000_UUID&lt;br/&gt;
                     1       -       0       -       0       -       0       -&lt;br/&gt;
lus01-OST0000_UUID&lt;br/&gt;
                     0       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0001_UUID&lt;/p&gt;</comment>
                            <comment id="64724" author="niu" created="Wed, 21 Aug 2013 09:22:48 +0000"  >&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;Aug 13 10:03:12 lus01-oss1 kernel: EXT4-fs (dm-9): Couldn&apos;t mount because of unsupported optional features (100)
Aug 13 10:03:12 lus01-oss1 kernel: EXT4-fs (dm-8): Couldn&apos;t mount because of unsupported optional features (100)
Aug 13 10:03:12 lus01-oss1 kernel: EXT4-fs (dm-7): Couldn&apos;t mount because of unsupported optional features (100)
Aug 13 10:03:12 lus01-oss1 kernel: EXT4-fs (dm-6): Couldn&apos;t mount because of unsupported optional features (100)
Aug 13 10:03:12 lus01-oss1 kernel: EXT4-fs (dm-5): Couldn&apos;t mount because of unsupported optional features (100)
Aug 13 10:03:13 lus01-oss1 kernel: EXT4-fs (dm-2): Couldn&apos;t mount because of unsupported optional features (100)
Aug 13 10:03:13 lus01-oss1 kernel: EXT4-fs (dm-3): Couldn&apos;t mount because of unsupported optional features (100)
Aug 13 10:03:13 lus01-oss1 kernel: EXT4-fs (dm-1): Couldn&apos;t mount because of unsupported optional features (100)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I don&apos;t see where these messages come from; it looks like the backend filesystem doesn&apos;t support the quota feature (100), but I think it should be reported as &quot;LDISKFS-fs&quot;, not &quot;EXT4-fs&quot;. Do you use ldiskfs as the backend filesystem? Could you try to mount the OST device as ldiskfs manually to see if it mounts properly? Thanks. &lt;/p&gt;</comment>
                            <comment id="64727" author="kitwestneat" created="Wed, 21 Aug 2013 13:30:19 +0000"  >&lt;p&gt;We have mounted it ldiskfs to verify that the user has objects located on it:&lt;br/&gt;
root@lus01-oss1:~# mount -t ldiskfs /dev/lus01-ost0/lus01 /export/vd01&lt;br/&gt;
root@lus01-oss1:~# find /export/vd01 -uid 12296 -ls&lt;br/&gt;
109 6144 rw-rw-rw 1 jb23 4294936579 6291456 Aug 15 17:33 /export/vd01/O/0/d25/72330521&lt;/p&gt;</comment>
                            <comment id="64732" author="niu" created="Wed, 21 Aug 2013 15:12:13 +0000"  >&lt;blockquote&gt;
&lt;p&gt;Aug 14 23:41:35 lus01-oss1 kernel: VFS: Quota for id 19228 referenced but not present.&lt;br/&gt;
Aug 14 23:41:35 lus01-oss1 kernel: VFS: Can&apos;t read quota structure for id 19228.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;The quota file on some OST seems corrupted; you can truncate and regenerate the quota files by:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;tune2fs -O ^quota $dev (disable quota feature, which will truncate quota files);&lt;/li&gt;
	&lt;li&gt;tune2fs -O quota $dev (enable quota feature, which will scan all inodes and write old quota limit &amp;amp; quota accounting information into quota files)&lt;/li&gt;
&lt;/ul&gt;
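The two steps above could be wrapped in a small script. This is a dry-run sketch only; the default device path is an assumption, so substitute your actual OST/MDT device:

```shell
#!/bin/sh
# Dry-run sketch of the two-step quota regeneration described above.
# The default device path is an assumption -- pass your real device.
DEV=${1:-/dev/lus01-ost0/lus01}

# Dry-run wrapper: prints each command instead of executing it.
# Change "echo" to "$@" (with exec privileges) to run for real.
run() { echo "$@"; }

# Step 1: disabling the quota feature truncates the (corrupt) quota files.
run tune2fs -O ^quota "$DEV"

# Step 2: re-enabling it rescans all inodes and rebuilds the accounting.
run tune2fs -O quota "$DEV"
```

Run it once per device while the target is unmounted; the dry-run wrapper lets the sequence be reviewed before anything touches the disk.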


&lt;p&gt;After these two steps, we would expect that e2fsck no longer reports messages like &quot;&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (3788615680, 1231588) != expected (761856, 139)&quot;&lt;/p&gt;

&lt;p&gt;If everything goes well, you can try to mount lustre again to see if the problem is resolved. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We are running a kernel compiled from redhat source.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;What&apos;s the kernel version?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;10:03:12 lus01-oss1 kernel: EXT4-fs (dm-9): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 13 10:03:12 lus01-oss1 kernel: EXT4-fs (dm-8): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 13 10:03:12 lus01-oss1 kernel: EXT4-fs (dm-7): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 13 10:03:12 lus01-oss1 kernel: EXT4-fs (dm-6): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 13 10:03:12 lus01-oss1 kernel: EXT4-fs (dm-5): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 13 10:03:13 lus01-oss1 kernel: EXT4-fs (dm-2): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 13 10:03:13 lus01-oss1 kernel: EXT4-fs (dm-3): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
Aug 13 10:03:13 lus01-oss1 kernel: EXT4-fs (dm-1): Couldn&apos;t mount because of unsupported optional features (100)&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Are these devices OST devices? Did you mount the OST devices as ext4 manually? I really want to know where these error messages come from.&lt;/p&gt;</comment>
                            <comment id="64792" author="james beal" created="Wed, 21 Aug 2013 21:41:37 +0000"  >&lt;p&gt;The systems are connected via a network which is being upgraded and are not available today and tomorrow.&lt;/p&gt;

&lt;p&gt;The kernel is 2.6.32-lustre-2.4, which I believe is based on RHEL 6.4.&lt;/p&gt;

&lt;p&gt;Yes, the messages are from the OST devices; we did not mount them as ext4, only as lustre or ldiskfs.&lt;/p&gt;


</comment>
                            <comment id="64885" author="james beal" created="Thu, 22 Aug 2013 20:09:57 +0000"  >&lt;p&gt;That doesn&apos;t appear to have helped.&lt;/p&gt;

&lt;p&gt;root@lus01-oss1:~# umount /export/vd01&lt;br/&gt;
root@lus01-oss1:~# tune2fs -O ^quota ^C&lt;br/&gt;
root@lus01-oss1:~# grep vd01 /etc/fstab&lt;br/&gt;
/dev/lus01-ost0/lus01 /export/vd01  lustre  extents,mballoc,noauto,rw 0 0&lt;br/&gt;
root@lus01-oss1:~# tune2fs -O ^quota /dev/lus01-ost0/lus01 &lt;br/&gt;
tune2fs 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
root@lus01-oss1:~# tune2fs -O quota /dev/lus01-ost0/lus01 &lt;br/&gt;
tune2fs 1.42.7.wc1 (12-Apr-2013)&lt;/p&gt;

&lt;p&gt;Warning: the quota feature is still under development&lt;br/&gt;
See &lt;a href=&quot;https://ext4.wiki.kernel.org/index.php/Quota&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://ext4.wiki.kernel.org/index.php/Quota&lt;/a&gt; for more information&lt;/p&gt;

&lt;p&gt;root@lus01-oss1:~# &lt;/p&gt;

&lt;p&gt;mount /export/vd01&lt;br/&gt;
root@lus01-oss1:~# cat /proc/fs/lustre/obdfilter/lus01-OST0000/recovery_status &lt;br/&gt;
status: RECOVERING&lt;br/&gt;
recovery_start: 0&lt;br/&gt;
time_remaining: 0&lt;br/&gt;
connected_clients: 0/5&lt;br/&gt;
req_replay_clients: 0&lt;br/&gt;
lock_repay_clients: 0&lt;br/&gt;
completed_clients: 0&lt;br/&gt;
evicted_clients: 0&lt;br/&gt;
replayed_requests: 0&lt;br/&gt;
queued_requests: 0&lt;br/&gt;
next_transno: 176093659137&lt;br/&gt;
root@lus01-oss1:~# cat /proc/fs/lustre/obdfilter/lus01-OST0000/recovery_status &lt;br/&gt;
status: COMPLETE&lt;br/&gt;
recovery_start: 1377199282&lt;br/&gt;
recovery_duration: 87&lt;br/&gt;
completed_clients: 5/5&lt;br/&gt;
replayed_requests: 0&lt;br/&gt;
last_transno: 176093659136&lt;br/&gt;
VBR: DISABLED&lt;br/&gt;
IR: DISABLED&lt;/p&gt;

&lt;p&gt;It doesn&apos;t appear to have helped.&lt;/p&gt;

&lt;p&gt;jb23@isg-disc-mon-05:~$ lfs quota -v /lustre/scratch101&lt;br/&gt;
Disk quotas for user jb23 (uid 12296):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                      3*      0       1       -       0       0       1       -&lt;br/&gt;
lus01-MDT0000_UUID&lt;br/&gt;
                      1       -       0       -       0       -       0       -&lt;br/&gt;
lus01-OST0000_UUID&lt;br/&gt;
                      0       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0001_UUID&lt;br/&gt;
                      1*      -       1       -       -       -       -       -&lt;/p&gt;


&lt;p&gt;root@lus01-oss1:~# umount /export/vd01&lt;br/&gt;
root@lus01-oss1:~# e2fsck -fp /dev/lus01-ost0/lus01 &lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (3788619776, 1231588) != expected (0, 32)&lt;br/&gt;
lus01-OST0000: Update quota info for quota type 0Project-Id-Version: e2fsprogs&lt;br/&gt;
Report-Msgid-Bugs-To: FULL NAME &amp;lt;EMAIL@ADDRESS&amp;gt;&lt;br/&gt;
POT-Creation-Date: 2008-06-17 22:16-0400&lt;br/&gt;
PO-Revision-Date: 2008-08-10 09:38+0000&lt;br/&gt;
Last-Translator: Jen Ockwell &amp;lt;jenfraggleubuntu@googlemail.com&amp;gt;&lt;br/&gt;
Language-Team: English (United Kingdom) &amp;lt;en_GB@li.org&amp;gt;&lt;br/&gt;
MIME-Version: 1.0&lt;br/&gt;
Content-Type: text/plain; charset=UTF-8&lt;br/&gt;
Content-Transfer-Encoding: 8bit&lt;br/&gt;
X-Launchpad-Export-Date: 2013-01-28 10:46+0000&lt;br/&gt;
X-Generator: Launchpad (build 16451)&lt;br/&gt;
.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (4598792192, 1237153) != expected (0, 32)&lt;br/&gt;
lus01-OST0000: Update quota info for quota type 1Project-Id-Version: e2fsprogs&lt;br/&gt;
Report-Msgid-Bugs-To: FULL NAME &amp;lt;EMAIL@ADDRESS&amp;gt;&lt;br/&gt;
POT-Creation-Date: 2008-06-17 22:16-0400&lt;br/&gt;
PO-Revision-Date: 2008-08-10 09:38+0000&lt;br/&gt;
Last-Translator: Jen Ockwell &amp;lt;jenfraggleubuntu@googlemail.com&amp;gt;&lt;br/&gt;
Language-Team: English (United Kingdom) &amp;lt;en_GB@li.org&amp;gt;&lt;br/&gt;
MIME-Version: 1.0&lt;br/&gt;
Content-Type: text/plain; charset=UTF-8&lt;br/&gt;
Content-Transfer-Encoding: 8bit&lt;br/&gt;
X-Launchpad-Export-Date: 2013-01-28 10:46+0000&lt;br/&gt;
X-Generator: Launchpad (build 16451)&lt;br/&gt;
.&lt;br/&gt;
lus01-OST0000: 2140514/488366080 files (3.2% non-contiguous), 669025091/1953457152 blocks&lt;/p&gt;</comment>
                            <comment id="64940" author="niu" created="Fri, 23 Aug 2013 07:55:56 +0000"  >&lt;p&gt;Could you paste the output of &apos;dumpe2fs /dev/lus01-ost0/lus01&apos;? Thanks.&lt;/p&gt;</comment>
                            <comment id="64984" author="james beal" created="Fri, 23 Aug 2013 17:39:57 +0000"  >&lt;p&gt;Filesystem volume name:   lus01-OST0000&lt;br/&gt;
Last mounted on:          /&lt;br/&gt;
Filesystem UUID:          1b59b58a-73bc-4fdf-a007-c184da2e6847&lt;br/&gt;
Filesystem magic number:  0xEF53&lt;br/&gt;
Filesystem revision #:    1 (dynamic)&lt;br/&gt;
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent mmp sparse_super large_file uninit_bg quota&lt;br/&gt;
Filesystem flags:         signed_directory_hash &lt;br/&gt;
Default mount options:    (none)&lt;br/&gt;
Filesystem state:         clean&lt;br/&gt;
Errors behavior:          Continue&lt;br/&gt;
Filesystem OS type:       Linux&lt;br/&gt;
Inode count:              488366080&lt;br/&gt;
Block count:              1953457152&lt;br/&gt;
Reserved block count:     0&lt;br/&gt;
Free blocks:              1284432060&lt;br/&gt;
Free inodes:              486225566&lt;br/&gt;
First block:              0&lt;br/&gt;
Block size:               4096&lt;br/&gt;
Fragment size:            4096&lt;br/&gt;
Reserved GDT blocks:      558&lt;br/&gt;
Blocks per group:         32768&lt;br/&gt;
Fragments per group:      32768&lt;br/&gt;
Inodes per group:         8192&lt;br/&gt;
Inode blocks per group:   512&lt;br/&gt;
RAID stride:              1&lt;br/&gt;
RAID stripe width:        8&lt;br/&gt;
Filesystem created:       Fri Mar 19 16:07:37 2010&lt;br/&gt;
Last mount time:          Thu Aug 22 21:10:45 2013&lt;br/&gt;
Last write time:          Fri Aug 23 18:41:08 2013&lt;br/&gt;
Mount count:              1&lt;br/&gt;
Maximum mount count:      -1&lt;br/&gt;
Last checked:             Thu Aug 22 20:26:41 2013&lt;br/&gt;
Check interval:           15552000 (6 months)&lt;br/&gt;
Next check after:         Tue Feb 18 19:26:41 2014&lt;br/&gt;
Lifetime writes:          36 TB&lt;br/&gt;
Reserved blocks uid:      0 (user root)&lt;br/&gt;
Reserved blocks gid:      0 (group root)&lt;br/&gt;
First inode:              11&lt;br/&gt;
Inode size:               256&lt;br/&gt;
Required extra isize:     28&lt;br/&gt;
Desired extra isize:      28&lt;br/&gt;
Journal inode:            8&lt;br/&gt;
Default directory hash:   half_md4&lt;br/&gt;
Directory Hash Seed:      4b551ff2-886c-43e2-abb6-02d583f4c533&lt;br/&gt;
Journal backup:           inode blocks&lt;br/&gt;
MMP block number:         1546&lt;br/&gt;
MMP update interval:      1&lt;br/&gt;
User quota inode:         3&lt;br/&gt;
Group quota inode:        4&lt;br/&gt;
Journal features:         journal_incompat_revoke&lt;br/&gt;
Journal size:             400M&lt;br/&gt;
Journal length:           102400&lt;br/&gt;
Journal sequence:         0x07b07791&lt;br/&gt;
Journal start:            0&lt;/p&gt;</comment>
                            <comment id="65047" author="niu" created="Mon, 26 Aug 2013 06:15:39 +0000"  >&lt;p&gt;Thank you, James. The output of dumpe2fs looks sane to me.&lt;/p&gt;

&lt;p&gt;Look at the output of &quot;e2fsck -fp /dev/lus01-ost0/lus01&quot;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (3788619776, 1231588) != expected (0, 32)&lt;br/&gt;
lus01-OST0000: Update quota info for quota type 0Project-Id-Version: e2fsprogs&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;I don&apos;t see why the accounting is still not correct after &apos;tune2fs -O quota&apos; (which is supposed to run quotacheck and update the accounting); however, after e2fsck, the quota accounting should have been fixed, as the messages show. Would you try running &quot;e2fsck -fp /dev/lus01-ost0/lus01&quot; again to see if the quota inconsistency is fixed? Thanks.&lt;/p&gt;</comment>
                            <comment id="65052" author="james beal" created="Mon, 26 Aug 2013 08:36:48 +0000"  >&lt;p&gt;Same again....&lt;/p&gt;


&lt;p&gt;root@lus01-oss1:~# e2fsck -fp /dev/lus01-ost0/lus01 &lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (3788619776, 1231588) != expected (0, 32)&lt;br/&gt;
lus01-OST0000: Update quota info for quota type 0Project-Id-Version: e2fsprogs&lt;br/&gt;
Report-Msgid-Bugs-To: FULL NAME &amp;lt;EMAIL@ADDRESS&amp;gt;&lt;br/&gt;
POT-Creation-Date: 2008-06-17 22:16-0400&lt;br/&gt;
PO-Revision-Date: 2008-08-10 09:38+0000&lt;br/&gt;
Last-Translator: Jen Ockwell &amp;lt;jenfraggleubuntu@googlemail.com&amp;gt;&lt;br/&gt;
Language-Team: English (United Kingdom) &amp;lt;en_GB@li.org&amp;gt;&lt;br/&gt;
MIME-Version: 1.0&lt;br/&gt;
Content-Type: text/plain; charset=UTF-8&lt;br/&gt;
Content-Transfer-Encoding: 8bit&lt;br/&gt;
X-Launchpad-Export-Date: 2013-01-28 10:46+0000&lt;br/&gt;
X-Generator: Launchpad (build 16451)&lt;br/&gt;
.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (4598792192, 1237153) != expected (0, 32)&lt;br/&gt;
lus01-OST0000: Update quota info for quota type 1Project-Id-Version: e2fsprogs&lt;br/&gt;
Report-Msgid-Bugs-To: FULL NAME &amp;lt;EMAIL@ADDRESS&amp;gt;&lt;br/&gt;
POT-Creation-Date: 2008-06-17 22:16-0400&lt;br/&gt;
PO-Revision-Date: 2008-08-10 09:38+0000&lt;br/&gt;
Last-Translator: Jen Ockwell &amp;lt;jenfraggleubuntu@googlemail.com&amp;gt;&lt;br/&gt;
Language-Team: English (United Kingdom) &amp;lt;en_GB@li.org&amp;gt;&lt;br/&gt;
MIME-Version: 1.0&lt;br/&gt;
Content-Type: text/plain; charset=UTF-8&lt;br/&gt;
Content-Transfer-Encoding: 8bit&lt;br/&gt;
X-Launchpad-Export-Date: 2013-01-28 10:46+0000&lt;br/&gt;
X-Generator: Launchpad (build 16451)&lt;br/&gt;
.&lt;br/&gt;
lus01-OST0000: 2140514/488366080 files (3.2% non-contiguous), 669025091/1953457152 blocks&lt;/p&gt;

&lt;p&gt;root@lus01-oss1:~# &lt;/p&gt;</comment>
                            <comment id="65121" author="niu" created="Tue, 27 Aug 2013 02:37:39 +0000"  >&lt;p&gt;Hmm, it&apos;s weird; I just tried an upgrade from 1.8.9, but didn&apos;t see your problem (we actually have automated upgrade tests).&lt;/p&gt;

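One way to script a per-device quota disable/enable pass while keeping a log per device is sketched below; the device list and log directory are assumptions for illustration, and the tune2fs commands are echoed as a dry run:

```shell
#!/bin/sh
# Hypothetical loop: run the quota disable/enable pass on every MDT/OST
# device and keep a per-device log. Device list and log dir are assumptions.
DEVICES="/dev/lus01-mdt0/lus01 /dev/lus01-ost0/lus01"
LOGDIR=${TMPDIR:-/tmp}/quota-logs
mkdir -p "$LOGDIR"

for dev in $DEVICES; do
    # Turn slashes into dashes to build a flat log file name.
    log="$LOGDIR/$(echo "$dev" | sed -e 's#/#-#g').log"
    {
        date
        echo "tune2fs -O ^quota $dev"   # dry run; drop "echo" to execute
        echo "tune2fs -O quota $dev"
        date
    } >>"$log"
done
```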
&lt;p&gt;Could you try repeating the &quot;tune2fs -O ^quota&quot; &amp;amp; &quot;tune2fs -O quota&quot; (disable then enable quota) on every MDT/OST device, then remount Lustre and run &quot;lfs quota -v&quot; again? Please capture the logs from mounting the MDTs/OSTs through to running &quot;lfs quota -v&quot;. Thanks. &lt;/p&gt;</comment>
                            <comment id="65130" author="james beal" created="Tue, 27 Aug 2013 06:14:15 +0000"  >&lt;p&gt;I am starting that process. Did you note that this file system was originally 1.6?&lt;/p&gt;</comment>
                            <comment id="65134" author="niu" created="Tue, 27 Aug 2013 07:16:01 +0000"  >&lt;blockquote&gt;
&lt;p&gt;I am starting that process, did you note that this file system was originally 1.6 ?&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Good point. I don&apos;t think we ever tested quota on a system upgraded from 1.6; however, since it had been running 1.8 for a while, I didn&apos;t think there would be a problem.&lt;/p&gt;</comment>
                            <comment id="65135" author="james beal" created="Tue, 27 Aug 2013 07:36:43 +0000"  >&lt;p&gt;Given that each OSS is taking over an hour to run that process, I would expect it to be done by the end of the day. Is there anything else we can do?&lt;/p&gt;

&lt;p&gt;We really need to have this working before we upgrade all our systems; we want to do that so we can start using the Lustre 2.4 client, which will let us run a modern kernel on all our clients.&lt;/p&gt;</comment>
                            <comment id="65137" author="niu" created="Tue, 27 Aug 2013 07:44:42 +0000"  >&lt;blockquote&gt;
&lt;p&gt;Given that each OSS is taking over an hour to run that process, I would expect it done at the end of the day. Is there anything else we can do ?&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;I can&apos;t think of anything else right now. Please save the logs for every MDT/OST.&lt;/p&gt;</comment>
                            <comment id="65145" author="gmpc@sanger.ac.uk" created="Tue, 27 Aug 2013 10:42:27 +0000"  >&lt;p&gt;Hi,&lt;/p&gt;

&lt;p&gt;I have another data point. I ran the 2.4 upgrade procedure on a freshly formatted 1.8.8 system and get the same symptoms: quota accounting/enforcement is not working, even after running tunefs.lustre --quota.&lt;/p&gt;

&lt;p&gt;The OST &amp;amp; MDT both report errors when checked with e2fsck.&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (1204224, 200) != expected (0, 34)&lt;br/&gt;
Update quota info for quota type 0? yes&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (1204224, 200) != expected (0, 34)&lt;br/&gt;
Update quota info for quota type 1? yes&lt;/p&gt;

&lt;p&gt;If I fix the errors, do some filesystem activity on the client, and then check the OST/MDT again, both report new errors with e2fsck.&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (1204224, 201) != expected (0, 32)&lt;br/&gt;
Update quota info for quota type 0&amp;lt;y&amp;gt;? yes&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (1204224, 201) != expected (0, 32)&lt;br/&gt;
Update quota info for quota type 1&amp;lt;y&amp;gt;? yes&lt;/p&gt;
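The fix-then-recheck cycle described here could be sketched roughly as follows (a dry run; the device, mount points, and client-side step are assumptions, and the commands would run on their respective nodes):

```shell
#!/bin/sh
# Dry-run sketch of the reproduction cycle: repair, generate some client
# activity, then re-check the backend read-only. Paths are assumptions.
run() { echo "$@"; }   # prints commands; change to "$@" to execute

run e2fsck -fy /dev/lus01-ost0/lus01      # fix the reported quota errors
run touch /lustre/scratch101/quota-test   # filesystem activity (on a client)
run umount /export/vd01                   # take the OST offline (on the OSS)
run e2fsck -fn /dev/lus01-ost0/lus01      # -n: check only, report new errors
```

Using `-n` for the second check avoids modifying the filesystem, so the newly introduced inconsistencies can be observed without being repaired.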

&lt;p&gt;However, I have not seen any of the:&lt;/p&gt;

&lt;p&gt;EXT4-fs (dm-9): Couldn&apos;t mount because of unsupported optional features (100)&lt;br/&gt;
errors.&lt;/p&gt;

&lt;p&gt;This is using the same 2.4.X kernel/binaries as on the lus01 system, so it does not rule out that we&apos;ve broken our kernel/server build somehow.&lt;/p&gt;

</comment>
                            <comment id="65154" author="gmpc@sanger.ac.uk" created="Tue, 27 Aug 2013 13:09:29 +0000"  >&lt;p&gt;Hi,&lt;br/&gt;
I think there might be something broken in our 2.4 server build. I get the same quota problem on a freshly formatted 2.4 filesystem with a 1.8.9 client. If I try to mount using a 2.4 client, the client panics immediately!&lt;/p&gt;

&lt;p&gt;I am going to start looking at our 2.4 build to see if we have done something silly...&lt;/p&gt;

&lt;p&gt;Cheers,&lt;/p&gt;

&lt;p&gt;Guy&lt;/p&gt;</comment>
                            <comment id="65233" author="gmpc@sanger.ac.uk" created="Wed, 28 Aug 2013 10:16:47 +0000"  >&lt;p&gt;Hi,&lt;/p&gt;

&lt;p&gt;I&apos;ve redone our 2.4 build, and quota on my test system now works correctly: both the 1.8-&amp;gt;2.4 upgraded one and the freshly formatted 2.4 system. (I needed a round of e2fsck / tune2fs -O ^quota / tunefs.lustre --quota / lctl conf_param to get the stats in sync.)&lt;/p&gt;
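The resync round mentioned here might look like the following; the device, fsname, and the exact conf_param key are assumptions for illustration, and the commands are echoed as a dry run:

```shell
#!/bin/sh
# Dry-run sketch of the resync round; device, fsname, and the exact
# conf_param key are assumptions for illustration.
DEV=/dev/lus01-ost0/lus01
FSNAME=lus01

run() { echo "$@"; }   # dry-run wrapper; change to "$@" to execute

run e2fsck -fy "$DEV"                        # repair on-disk inconsistencies
run tune2fs -O ^quota "$DEV"                 # truncate stale quota files
run tunefs.lustre --quota "$DEV"             # re-enable the quota feature
run lctl conf_param "$FSNAME.quota.ost=ug"   # re-enable user/group enforcement
```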



</comment>
                            <comment id="65246" author="niu" created="Wed, 28 Aug 2013 13:54:40 +0000"  >&lt;p&gt;Good news, thank you, Guy.&lt;/p&gt;</comment>
                            <comment id="65539" author="james beal" created="Mon, 2 Sep 2013 10:31:37 +0000"  >&lt;p&gt;A collection of log files.&lt;/p&gt;</comment>
                            <comment id="65540" author="james beal" created="Mon, 2 Sep 2013 10:31:42 +0000"  >&lt;p&gt;An update on the state of play:&lt;/p&gt;

&lt;p&gt;The following messages were tracked down to grub probing the lustre LUNs to see if there were any OSes on them that needed to be added to grub.&lt;/p&gt;

&lt;p&gt;Aug 15 00:03:15 lus01-oss1 kernel: EXT4-fs (dm-7): Couldn&apos;t mount because of unsupported optional features (100)&lt;/p&gt;

&lt;p&gt;We have put new kernels in place and booted from them and run the following script on each of the luns:&lt;/p&gt;

&lt;p&gt;#!/bin/sh&lt;br/&gt;
LOG=&quot;/root/`echo $1.log | sed -e &apos;s#/#-#g&apos;`&quot;&lt;br/&gt;
date | tee -a $LOG&lt;br/&gt;
echo $1 2&amp;gt;&amp;amp;1 | tee -a $LOG&lt;br/&gt;
tune2fs -O ^quota $1 2&amp;gt;&amp;amp;1 | tee -a $LOG&lt;br/&gt;
date 2&amp;gt;&amp;amp;1 | tee -a $LOG&lt;br/&gt;
e2fsck -fy $1 2&amp;gt;&amp;amp;1 | tee -a $LOG&lt;br/&gt;
date 2&amp;gt;&amp;amp;1 | tee -a $LOG&lt;br/&gt;
tunefs.lustre -v --quota $1 2&amp;gt;&amp;amp;1 | tee -a $LOG&lt;br/&gt;
date 2&amp;gt;&amp;amp;1 | tee -a $LOG&lt;/p&gt;

&lt;p&gt;I have made a tar archive of the results and /var/log/kern.log and attached it to this ticket.&lt;/p&gt;

&lt;p&gt;The following bits might be relevant:&lt;/p&gt;

&lt;p&gt;1404 Aug 30 12:42:17 lus01-mds2 kernel: Lustre: 3243:0:(obd_config.c:1428:class_config_llog_handler()) For 1.8 interoperability, rename obd type from mds to mdt&lt;br/&gt;
1405 Aug 30 12:42:17 lus01-mds2 kernel: Lustre: lus01-MDT0000: used disk, loading&lt;br/&gt;
1406 Aug 30 12:42:17 lus01-mds2 kernel: LustreError: 3243:0:(sec_config.c:1115:sptlrpc_target_local_read_conf()) missing llog context&lt;br/&gt;
1407 Aug 30 12:42:17 lus01-mds2 kernel: Lustre: 3243:0:(mdt_handler.c:4945:mdt_process_config()) For interoperability, skip this mdt.quota_type. It is obsolete.&lt;/p&gt;

&lt;p&gt;3226 Aug 30 20:58:48 lus01-oss1 kernel: LDISKFS-fs (dm-14): mounted filesystem with ordered data mode. quota=on. Opts:&lt;br/&gt;
3227 Aug 30 20:58:49 lus01-oss1 kernel: Lustre: 28014:0:(ofd_dev.c:221:ofd_process_config()) For interoperability, skip this ost.quota_type. It is obsolete.&lt;/p&gt;

&lt;p&gt; 42 Sep  2 09:41:35 lus01-oss1 kernel: VFS: Quota for id 19228 referenced but not present.&lt;br/&gt;
 43 Sep  2 09:41:35 lus01-oss1 kernel: VFS: Can&apos;t read quota structure for id 19228.&lt;br/&gt;
 44 Sep  2 09:41:35 lus01-oss1 kernel: LustreError: 8948:0:(qsd_entry.c:215:qsd_refresh_usage()) $$$ failed to read disk usage, rc:-3 qsd:lus01-OST0000 qtype:usr id:19228 enforced:1 granted:0 pending:0 waiting:0 req:0 usage:0 qunit:0 qtune:0 edquot:0&lt;br/&gt;
 45 Sep  2 09:41:35 lus01-oss1 kernel: Lustre: 8948:0:(qsd_reint.c:349:qsd_reconciliation()) lus01-OST0000: failed to locate lqe. &lt;span class=&quot;error&quot;&gt;&amp;#91;0x200000006:0x20000:0x0&amp;#93;&lt;/span&gt;, -3&lt;br/&gt;
 46 Sep  2 09:41:35 lus01-oss1 kernel: Lustre: 8948:0:(qsd_reint.c:525:qsd_reint_main()) lus01-OST0000: reconciliation failed. &lt;span class=&quot;error&quot;&gt;&amp;#91;0x0:0x0:0x0&amp;#93;&lt;/span&gt;, -3 &lt;br/&gt;
 47 Sep  2 09:43:28 lus01-oss1 kernel: VFS: Quota for id 19228 referenced but not present.&lt;br/&gt;
 48 Sep  2 09:43:28 lus01-oss1 kernel: VFS: Can&apos;t read quota structure for id 19228.&lt;br/&gt;
 49 Sep  2 09:43:47 lus01-oss1 kernel: VFS: Quota for id 19228 referenced but not present.&lt;br/&gt;
 50 Sep  2 09:43:47 lus01-oss1 kernel: VFS: Can&apos;t read quota structure for id 19228.&lt;br/&gt;
 51 Sep  2 09:43:47 lus01-oss1 kernel: VFS: Quota for id 19228 referenced but not present.&lt;br/&gt;
 52 Sep  2 09:43:47 lus01-oss1 kernel: VFS: Can&apos;t read quota structure for id 19228.&lt;/p&gt;

&lt;p&gt;At first it appears that nothing has changed... However (continued in the next update).&lt;/p&gt;</comment>
                            <comment id="65541" author="james beal" created="Mon, 2 Sep 2013 10:34:29 +0000"  >&lt;p&gt;An experiment with a user who had no data on the system; it looks like quotas &quot;work&quot; for &quot;new&quot; users.&lt;/p&gt;

&lt;p&gt;root@isg-disc-mon-05:~# lfs quota -u aac /lustre/scratch101&lt;br/&gt;
Disk quotas for user aac (uid 9052):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                      0       0       1       -       0       0       1       -&lt;br/&gt;
root@isg-disc-mon-05:~# mkdir /lustre/scratch101/sanger/aac&lt;br/&gt;
root@isg-disc-mon-05:~# chown aac /lustre/scratch101/sanger/aac&lt;br/&gt;
root@isg-disc-mon-05:~# lfs quota -u aac /lustre/scratch101&lt;br/&gt;
Disk quotas for user aac (uid 9052):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                      4*      0       1       -       1*      0       1       -&lt;br/&gt;
root@isg-disc-mon-05:~# lfs setquota -u aac /lustre/scratch101 -I 150000 -B 5T&lt;br/&gt;
root@isg-disc-mon-05:~# lfs quota -u aac /lustre/scratch101&lt;br/&gt;
Disk quotas for user aac (uid 9052):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                      4       0 5368709120       -       1       0  150000       -&lt;br/&gt;
root@isg-disc-mon-05:~# lfs quota -u aac /lustre/scratch101 -v &lt;br/&gt;
Disk quotas for user aac (uid 9052):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                      4       0 5368709120       -       1       0  150000       -&lt;br/&gt;
lus01-MDT0000_UUID&lt;br/&gt;
                      4       -       0       -       1       -       0       -&lt;br/&gt;
lus01-OST0000_UUID&lt;br/&gt;
                      0       -      64       -       -       -       -       -&lt;br/&gt;
lus01-OST0001_UUID&lt;br/&gt;
                      0       -     244       -       -       -       -       -&lt;br/&gt;
lus01-OST0002_UUID&lt;br/&gt;
                      0       -      80       -       -       -       -       -&lt;br/&gt;
lus01-OST0003_UUID&lt;br/&gt;
                      0       -      44       -       -       -       -       -&lt;br/&gt;
lus01-OST0004_UUID&lt;br/&gt;
                      0       -      56       -       -       -       -       -&lt;br/&gt;
lus01-OST0005_UUID&lt;br/&gt;
                      0       -    1856       -       -       -       -       -&lt;br/&gt;
lus01-OST0006_UUID&lt;br/&gt;
                      0       -      44       -       -       -       -       -&lt;br/&gt;
lus01-OST0007_UUID&lt;br/&gt;
                      0       -      52       -       -       -       -       -&lt;br/&gt;
lus01-OST0008_UUID&lt;br/&gt;
                      0       -     252       -       -       -       -       -&lt;br/&gt;
lus01-OST0009_UUID&lt;br/&gt;
                      0       -     132       -       -       -       -       -&lt;br/&gt;
lus01-OST000a_UUID&lt;br/&gt;
                      0       -     128       -       -       -       -       -&lt;br/&gt;
lus01-OST000b_UUID&lt;br/&gt;
                      0       -      72       -       -       -       -       -&lt;br/&gt;
lus01-OST000c_UUID&lt;br/&gt;
                      0       -      56       -       -       -       -       -&lt;br/&gt;
lus01-OST000d_UUID&lt;br/&gt;
                      0       -     168       -       -       -       -       -&lt;br/&gt;
lus01-OST000e_UUID&lt;br/&gt;
                      0       -     252       -       -       -       -       -&lt;br/&gt;
lus01-OST000f_UUID&lt;br/&gt;
                      0       -     148       -       -       -       -       -&lt;br/&gt;
lus01-OST0010_UUID&lt;br/&gt;
                      0       -      84       -       -       -       -       -&lt;br/&gt;
lus01-OST0011_UUID&lt;br/&gt;
                      0       -     132       -       -       -       -       -&lt;br/&gt;
lus01-OST0012_UUID&lt;br/&gt;
                      0       -     196       -       -       -       -       -&lt;br/&gt;
lus01-OST0013_UUID&lt;br/&gt;
                      0       -     176       -       -       -       -       -&lt;br/&gt;
lus01-OST0014_UUID&lt;br/&gt;
                      0       -     292       -       -       -       -       -&lt;br/&gt;
lus01-OST0015_UUID&lt;br/&gt;
                      0       -      72       -       -       -       -       -&lt;br/&gt;
lus01-OST0016_UUID&lt;br/&gt;
                      0       -     168       -       -       -       -       -&lt;br/&gt;
lus01-OST0017_UUID&lt;br/&gt;
                      0       -      48       -       -       -       -       -&lt;br/&gt;
lus01-OST0018_UUID&lt;br/&gt;
                      0       -     176       -       -       -       -       -&lt;br/&gt;
lus01-OST0019_UUID&lt;br/&gt;
                      0       -      60       -       -       -       -       -&lt;br/&gt;
lus01-OST001a_UUID&lt;br/&gt;
                      0       -     144       -       -       -       -       -&lt;br/&gt;
lus01-OST001b_UUID&lt;br/&gt;
                      0       -     160       -       -       -       -       -&lt;br/&gt;
lus01-OST001c_UUID&lt;br/&gt;
                      0       -      56       -       -       -       -       -&lt;br/&gt;
lus01-OST001d_UUID&lt;br/&gt;
                      0       -      68       -       -       -       -       -&lt;br/&gt;
root@isg-disc-mon-05:~# su - aac&lt;br/&gt;
isg-disc-mon-05:~&amp;gt; cd /lustre/scratch101/sanger/aac/&lt;br/&gt;
isg-disc-mon-05:/lustre/scratch101/sanger/aac&amp;gt; tar xvf ~jb23/linux-2.6.32-358.6.2.el6.x86_64.tar.gz &lt;br/&gt;
./&lt;br/&gt;
./config-debug&lt;br/&gt;
./usr/&lt;br/&gt;
./usr/Kconfig&lt;br/&gt;
./usr/initramfs_data.S&lt;br/&gt;
./usr/gen_init_cpio.c&lt;br/&gt;
./usr/.gitignore&lt;br/&gt;
./usr/Makefile&lt;br/&gt;
./config-x86_64-nodebug-rhel&lt;br/&gt;
./config-i686-debug&lt;br/&gt;
./REPORTING-BUGS&lt;br/&gt;
./kernel.pub&lt;br/&gt;
./config-framepointer&lt;br/&gt;
./fs/&lt;br/&gt;
./fs/autofs/&lt;br/&gt;
./fs/autofs/root.c&lt;br/&gt;
^C&lt;br/&gt;
isg-disc-mon-05:/lustre/scratch101/sanger/aac&amp;gt; lfs quota /lustre/scratch101&lt;br/&gt;
Disk quotas for user aac (uid 9052):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                     16       0 5368709120       -      16       0  150000       -&lt;br/&gt;
Disk quotas for group hsg (gid 701):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                     12       0       0       -      15       0       0       -&lt;br/&gt;
isg-disc-mon-05:/lustre/scratch101/sanger/aac&amp;gt; du -sh .&lt;br/&gt;
32K	.&lt;br/&gt;
isg-disc-mon-05:/lustre/scratch101/sanger/aac&amp;gt; find . -print | wc -l&lt;br/&gt;
16&lt;/p&gt;</comment>
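As a cross-check of the verbose output above, the per-target kbytes reported by `lfs quota -v` can be summed and compared against the filesystem-wide total. A minimal parsing sketch (the field layout is assumed from the transcript above; this is not a Lustre tool):

```python
# A target-name line ("/fsname" or "..._UUID") is followed by a line of
# numbers whose first column is kbytes used; a trailing "*" marks over-quota.
SAMPLE = """Disk quotas for user aac (uid 9052):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
/lustre/scratch101
                      4       0 5368709120       -       1       0  150000       -
lus01-MDT0000_UUID
                      4       -       0       -       1       -       0       -
lus01-OST0000_UUID
                      0       -      64       -       -       -       -       -
"""

def sum_target_kbytes(output):
    """Return (filesystem-wide total kbytes, sum of per-target kbytes)."""
    total = None
    per_target = 0
    lines = output.strip().splitlines()
    i = 0
    while i != len(lines):
        line = lines[i].strip()
        if line.endswith("_UUID") or line.startswith("/"):
            fields = lines[i + 1].split()      # the numbers are on the next line
            kbytes = int(fields[0].rstrip("*"))
            if line.startswith("/"):
                total = kbytes
            else:
                per_target += kbytes
            i += 2
        else:
            i += 1
    return total, per_target

print(sum_target_kbytes(SAMPLE))  # (4, 4)
```

When accounting is healthy the two numbers agree, as they do for the new user here; a mismatch would point at stale per-target accounting of the kind e2fsck flagged.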
                            <comment id="65543" author="james beal" created="Mon, 2 Sep 2013 10:42:18 +0000"  >&lt;p&gt;I note that access is particularly slow and the OSS discs seem to be working hard.&lt;/p&gt;

&lt;p&gt;From atop -l&lt;/p&gt;

&lt;p&gt;ATOP - lus01-oss1                                          2013/09/02  11:38:31                                          ------                                            3s elapsed&lt;br/&gt;
PRC | sys    0.18s  | user   0.01s  |               | #proc    574  | #trun      1  | #tslpi   643  | #tslpu    60  | #zombie    0  | clones     0  |               | #exit      0  |&lt;br/&gt;
CPU | sys       4%  | user      1%  | irq       0%  |               | idle     44%  | wait    753%  |               | steal     0%  | guest     0%  | avgf 2.44GHz  | avgscal  81%  |&lt;br/&gt;
CPL | avg1   61.19  | avg5   69.38  |               | avg15  73.49  |               |               | csw     6851  | intr    4912  |               |               | numcpu     8  |&lt;br/&gt;
MEM | tot    15.7G  | free    6.3G  | cache 736.9M  | dirty 282.6M  | buff    7.0G  |               | slab  826.1M  |               |               |               |               |&lt;br/&gt;
SWP | tot     4.0G  | free    4.0G  |               |               |               |               |               |               |               | vmcom 544.9M  | vmlim  11.9G  |&lt;br/&gt;
LVM | --ost6-lus01  | busy    100%  | read       3  | write    232  | KiB/r      4  |               | KiB/w      4  | MBr/s   0.00  | MBw/s   0.30  | avq   257.71  | avio 12.8 ms  |&lt;br/&gt;
LVM | --ost5-lus01  | busy    100%  | read       0  | write    116  | KiB/r      0  |               | KiB/w      4  | MBr/s   0.00  | MBw/s   0.15  | avq   867.63  | avio 25.9 ms  |&lt;br/&gt;
LVM | --ost4-lus01  | busy    100%  | read       0  | write    123  | KiB/r      0  |               | KiB/w      4  | MBr/s   0.00  | MBw/s   0.16  | avq   180.31  | avio 24.4 ms  |&lt;br/&gt;
LVM | --ost3-lus01  | busy    100%  | read      70  | write    929  | KiB/r      4  |               | KiB/w      4  | MBr/s   0.09  | MBw/s   1.21  | avq   789.85  | avio 3.00 ms  |&lt;br/&gt;
LVM | --ost2-lus01  | busy    100%  | read     448  | write    195  | KiB/r      4  |               | KiB/w      4  | MBr/s   0.58  | MBw/s   0.25  | avq   275.68  | avio 4.67 ms  |&lt;/p&gt;


&lt;p&gt;I can see that ls -l can be slow:&lt;/p&gt;

&lt;p&gt;root@isg-disc-mon-05:/lustre/scratch101/sanger/jb23/delete/AutoFACT/pathways/cps# time  ls -l &lt;br/&gt;
total 13504&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   13986 2011-02-14 18:49 cps00010.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   50127 2011-02-04 18:42 cps00010.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   13256 2010-12-28 03:32 cps00020.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   45360 2010-06-25 18:13 cps00020.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   13513 2010-12-28 03:32 cps00030.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   52074 2011-01-05 18:13 cps00030.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   12971 2011-02-14 18:49 cps00040.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   72999 2010-12-27 20:07 cps00040.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   12468 2011-03-08 19:16 cps00051.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   62620 2011-03-08 19:19 cps00051.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   11351 2010-12-28 03:32 cps00052.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   57643 2010-12-27 21:47 cps00052.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   11512 2011-02-14 18:52 cps00053.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   61274 2010-12-27 23:03 cps00053.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   15924 2010-12-28 03:32 cps00061.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   79674 2010-07-14 18:21 cps00061.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   17994 2011-02-24 19:55 cps00071.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   69633 2011-02-24 19:57 cps00071.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    6861 2010-12-28 03:32 cps00072.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   17104 2010-03-23 17:49 cps00072.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   16199 2010-12-28 03:32 cps00130.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   83147 2011-01-31 18:49 cps00130.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   11724 2011-01-11 18:04 cps00190.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev  154952 2010-11-16 19:49 cps00190.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   27196 2011-03-11 19:31 cps00230.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev  119322 2011-03-11 19:35 cps00230.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   19599 2011-03-11 21:18 cps00240.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   78727 2011-03-11 21:21 cps00240.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   14025 2011-03-24 18:57 cps00250.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   66562 2011-03-24 18:59 cps00250.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   17179 2011-01-12 18:28 cps00260.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   80375 2011-02-02 19:10 cps00260.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   17452 2011-01-26 20:01 cps00270.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   88111 2011-01-26 20:03 cps00270.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   16385 2011-02-14 18:56 cps00280.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   63738 2010-09-30 23:17 cps00280.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    8869 2010-12-28 03:32 cps00281.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   30420 2010-06-04 18:01 cps00281.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   12504 2010-12-28 03:32 cps00290.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   44690 2010-11-17 19:11 cps00290.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   11687 2010-12-28 03:32 cps00300.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   48016 2011-01-14 18:36 cps00300.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   12442 2011-02-14 18:57 cps00310.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   60245 2010-11-18 19:25 cps00310.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev    8342 2011-02-02 20:14 cps00311.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   29024 2009-10-14 17:05 cps00311.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   22023 2011-03-24 20:30 cps00330.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev  123495 2011-03-24 20:32 cps00330.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   12716 2011-02-14 18:59 cps00340.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   52095 2011-02-14 19:01 cps00340.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   16240 2010-12-28 03:32 cps00350.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev  100076 2011-01-26 23:21 cps00350.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   14756 2010-12-28 03:32 cps00360.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   85562 2011-01-27 00:34 cps00360.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   16158 2010-12-28 03:32 cps00361.html&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev  108844 2010-12-27 10:28 cps00361.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   14764 2010-12-28 03:32 cps00362.html&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   90466 2010-12-27 10:33 cps00362.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    9556 2010-12-28 03:32 cps00364.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   46533 2011-01-17 18:32 cps00364.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   16568 2011-02-14 19:15 cps00380.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   97498 2010-12-17 19:11 cps00380.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   13288 2010-12-28 03:32 cps00400.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   60318 2011-02-02 20:58 cps00400.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   10543 2010-12-28 03:32 cps00401.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   63502 2010-12-02 02:04 cps00401.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   10618 2011-02-14 19:16 cps00410.html&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   43560 2010-12-27 11:07 cps00410.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    8542 2010-12-28 03:32 cps00430.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   27838 2010-06-15 19:53 cps00430.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   11738 2010-12-28 03:32 cps00440.html&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   57870 2010-12-27 11:19 cps00440.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev    9499 2011-03-11 22:34 cps00450.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   34578 2011-03-11 22:37 cps00450.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   10673 2011-02-23 19:34 cps00460.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   42915 2011-02-23 19:37 cps00460.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev    7707 2011-03-14 19:25 cps00471.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   21167 2011-03-14 19:27 cps00471.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    6790 2010-12-28 03:32 cps00473.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   17756 2010-10-29 18:57 cps00473.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   12633 2011-02-02 21:55 cps00480.html&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   62834 2010-03-01 19:13 cps00480.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   14395 2010-12-28 03:32 cps00500.html&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   74275 2010-12-27 11:58 cps00500.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    6121 2010-12-28 03:32 cps00511.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   19740 2011-02-02 22:10 cps00511.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   22089 2010-12-28 03:32 cps00520.html&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev  132722 2010-12-27 12:13 cps00520.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    9778 2010-12-28 03:32 cps00521.html&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   35088 2010-12-27 12:21 cps00521.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    9958 2010-12-28 04:52 cps00540.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   67578 2009-11-16 17:22 cps00540.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   13857 2010-12-28 04:56 cps00550.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   58310 2009-08-29 04:33 cps00550.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   10094 2011-03-23 20:14 cps00561.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   52766 2011-03-23 20:15 cps00561.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   10986 2010-12-28 05:08 cps00562.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   53760 2010-11-01 18:45 cps00562.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   14103 2011-03-07 19:44 cps00564.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   77876 2011-03-23 22:04 cps00564.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   13757 2011-03-24 22:22 cps00590.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   60212 2011-03-24 22:24 cps00590.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   10019 2011-03-07 22:24 cps00592.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   38364 2011-03-22 20:38 cps00592.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    8935 2010-12-28 05:32 cps00600.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   46473 2009-12-18 17:23 cps00600.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   13356 2011-02-14 19:30 cps00620.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   63303 2011-01-25 19:48 cps00620.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   11156 2010-12-28 05:51 cps00623.html&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   63744 2010-12-28 05:51 cps00623.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   11350 2011-02-14 19:30 cps00625.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   44536 2011-01-17 19:04 cps00625.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   13772 2011-02-24 22:22 cps00626.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   75065 2011-02-24 22:23 cps00626.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   14594 2010-12-28 06:09 cps00627.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   89697 2010-12-17 20:27 cps00627.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   14234 2011-01-31 21:36 cps00630.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   71459 2011-01-31 21:38 cps00630.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    8905 2010-12-28 06:21 cps00633.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   40955 2010-12-02 06:42 cps00633.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   12598 2011-02-14 19:34 cps00640.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   55499 2011-01-11 18:25 cps00640.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev    7988 2011-02-24 22:59 cps00642.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   27490 2011-02-24 23:02 cps00642.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   13229 2011-02-25 19:25 cps00650.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   56327 2011-02-25 19:26 cps00650.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    9932 2010-12-28 06:43 cps00660.html&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   40183 2010-03-02 03:26 cps00660.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    8296 2010-12-28 06:48 cps00670.html&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   25657 2010-12-28 06:48 cps00670.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   25292 2011-03-07 23:17 cps00680.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev  139869 2011-03-07 23:19 cps00680.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   15911 2011-03-03 19:53 cps00720.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   63168 2011-03-03 19:55 cps00720.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   10212 2011-03-12 00:07 cps00730.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   39113 2011-03-12 00:08 cps00730.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev    9414 2011-03-18 19:31 cps00740.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   34663 2011-03-22 21:36 cps00740.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   10528 2010-12-28 07:17 cps00750.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   47588 2010-01-08 17:59 cps00750.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   12512 2010-12-28 07:23 cps00760.html&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   62140 2010-12-28 07:23 cps00760.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   10487 2010-12-28 07:30 cps00770.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   38306 2010-06-24 20:03 cps00770.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    7288 2010-12-28 07:36 cps00780.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   18189 2011-02-02 23:00 cps00780.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    6331 2010-12-28 07:41 cps00785.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   16647 2009-10-01 17:36 cps00785.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   11602 2010-12-28 07:46 cps00790.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   49292 2010-10-07 19:34 cps00790.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   22626 2011-03-18 21:08 cps00860.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev  131502 2011-03-18 21:11 cps00860.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   13293 2010-12-28 08:04 cps00900.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   64827 2010-11-12 18:37 cps00900.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   13428 2011-02-18 19:40 cps00903.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   73934 2011-02-18 19:41 cps00903.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   11168 2010-12-28 08:19 cps00910.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   51100 2010-10-14 20:28 cps00910.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev    8556 2011-02-09 00:26 cps00920.html&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   33161 2010-12-28 08:26 cps00920.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev    8270 2011-02-14 19:44 cps00930.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   35600 2011-02-14 19:47 cps00930.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   17936 2011-01-25 21:15 cps00970.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   82911 2010-12-02 19:45 cps00970.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   20928 2011-02-18 19:42 cps01040.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   62125 2011-02-18 19:43 cps01040.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev 1340513 2011-03-25 21:03 cps01100.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev 1359025 2011-03-25 21:19 cps01100.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev  667587 2011-03-26 04:08 cps01110.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev  663129 2011-03-26 04:12 cps01110.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev  510209 2011-03-12 08:29 cps01120.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev  548705 2011-03-12 08:33 cps01120.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   21560 2011-03-26 07:08 cps02010.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev  249270 2011-03-26 07:10 cps02010.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   17142 2011-02-03 01:48 cps02020.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev  212533 2010-10-01 10:31 cps02020.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev    8347 2011-01-12 20:56 cps02030.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   21835 2010-07-07 22:18 cps02030.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    8924 2010-12-28 11:40 cps02040.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   57726 2010-06-23 20:58 cps02040.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   12565 2010-12-28 11:43 cps02060.html&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   59537 2010-03-02 18:04 cps02060.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   11918 2010-12-28 11:47 cps03010.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   93039 2010-12-07 09:08 cps03010.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev    8252 2011-01-24 20:30 cps03018.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   81469 2011-01-24 20:34 cps03018.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev    5534 2011-03-26 08:04 cps03020.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   78209 2010-06-16 19:04 cps03020.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    8018 2010-12-28 12:06 cps03030.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev  118841 2010-10-21 01:17 cps03030.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    7925 2010-12-28 12:15 cps03060.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev  167934 2010-11-30 22:08 cps03060.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    9446 2010-12-28 12:21 cps03070.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev  101663 2010-04-14 20:30 cps03070.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    7753 2010-12-28 12:27 cps03410.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   76010 2010-05-27 18:48 cps03410.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    7529 2010-12-28 12:33 cps03420.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   88892 2010-05-27 19:19 cps03420.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    8329 2010-12-28 12:40 cps03430.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   52596 2010-10-21 02:31 cps03430.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev    8661 2010-12-28 12:45 cps03440.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   79622 2010-10-21 03:42 cps03440.png&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev   10663 2010-12-28 12:56 cps04122.html&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   61935 2010-11-06 07:07 cps04122.png&lt;br/&gt;
-rw-r--r-- 1 maa pathdev   28782 2011-03-26 10:39 cps_gene_map.tab&lt;br/&gt;
-rw-rw-r-- 1 maa pathdev  352465 2011-03-27 00:20 cps.list&lt;/p&gt;

&lt;p&gt;real	4m41.294s&lt;br/&gt;
user	0m0.000s&lt;br/&gt;
sys	0m0.184s&lt;/p&gt;

&lt;p&gt;Under strace, the client shows the following. I tried mounting the client with noacl, but that made no change.&lt;/p&gt;

&lt;p&gt;getxattr(&quot;asa/asa00900.html&quot;, &quot;system.posix_acl_access&quot;, 0x0, 0) = -1 EOPNOTSUPP (Operation not supported)&lt;br/&gt;
lstat(&quot;asa/asa02040.png&quot;, &lt;/p&gt;
{st_mode=S_IFREG|0644, st_size=57726, ...}
&lt;p&gt;) = 0&lt;br/&gt;
lgetxattr(&quot;asa/asa02040.png&quot;, &quot;security.selinux&quot;, &quot;&quot;, 255) = 0&lt;br/&gt;
getxattr(&quot;asa/asa02040.png&quot;, &quot;system.posix_acl_access&quot;, 0x0, 0) = -1 EOPNOTSUPP (Operation not supported)&lt;br/&gt;
lstat(&quot;asa/asa03440.html&quot;, &lt;/p&gt;
{st_mode=S_IFREG|0664, st_size=8426, ...}
&lt;p&gt;) = 0&lt;br/&gt;
lgetxattr(&quot;asa/asa03440.html&quot;, &quot;security.selinux&quot;, &quot;&quot;, 255) = 0&lt;/p&gt;
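The strace excerpt suggests every `ls -l` entry pays an extra failing `getxattr` round trip for the ACL. One way to quantify that from a log is to tally the failing syscalls; a minimal sketch (hypothetical helper, assuming strace's usual `name(args) = retval ERRNO` line format):

```python
import re
from collections import Counter

# Matches strace result lines such as those quoted above.
CALL_RE = re.compile(r'^(\w+)\(.*\) = (-?\d+)(?: (\w+))?')

def failed_calls(lines):
    """Tally (syscall, errno) pairs for calls that returned a negative value."""
    counts = Counter()
    for line in lines:
        m = CALL_RE.match(line)
        if m and m.group(2).startswith("-"):
            counts[(m.group(1), m.group(3))] += 1
    return counts

SAMPLE_LINES = [
    'getxattr("asa/asa00900.html", "system.posix_acl_access", 0x0, 0) = -1 EOPNOTSUPP (Operation not supported)',
    'lgetxattr("asa/asa02040.png", "security.selinux", "", 255) = 0',
    'getxattr("asa/asa02040.png", "system.posix_acl_access", 0x0, 0) = -1 EOPNOTSUPP (Operation not supported)',
]
print(failed_calls(SAMPLE_LINES))
```

A high EOPNOTSUPP count for `system.posix_acl_access`, two per file here, would explain part of the slow directory listing on top of the per-OST glimpse traffic.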

&lt;p&gt;Looking at the OI scrub on the MDT:&lt;/p&gt;

&lt;p&gt;cat ./osd-ldiskfs/lus01-MDT0000/oi_scrub&lt;br/&gt;
name: OI_scrub&lt;br/&gt;
magic: 0x4c5fd252&lt;br/&gt;
oi_files: 64&lt;br/&gt;
status: completed&lt;br/&gt;
flags:&lt;br/&gt;
param:&lt;br/&gt;
time_since_last_completed: 1731287 seconds&lt;br/&gt;
time_since_latest_start: 1732086 seconds&lt;br/&gt;
time_since_last_checkpoint: 1731287 seconds&lt;br/&gt;
latest_start_position: 329680457&lt;br/&gt;
last_checkpoint_position: 1050017793&lt;br/&gt;
first_failure_position: N/A&lt;br/&gt;
checked: 16323841&lt;br/&gt;
updated: 16323820&lt;br/&gt;
failed: 0&lt;br/&gt;
prior_updated: 0&lt;br/&gt;
noscrub: 0&lt;br/&gt;
igif: 0&lt;br/&gt;
success_count: 1&lt;br/&gt;
run_time: 1003 seconds&lt;br/&gt;
average_speed: 16275 objects/sec&lt;br/&gt;
real-time_speed: N/A&lt;br/&gt;
current_position: N/A&lt;/p&gt;

&lt;p&gt;While on the OSS we have&lt;/p&gt;

&lt;p&gt;/proc/fs/lustre/osd-ldiskfs/lus01-OST0000/oi_scrub&lt;br/&gt;
name: OI_scrub&lt;br/&gt;
magic: 0x4c5fd252&lt;br/&gt;
oi_files: 64&lt;br/&gt;
status: init&lt;br/&gt;
flags:&lt;br/&gt;
param:&lt;br/&gt;
time_since_last_completed: N/A&lt;br/&gt;
time_since_latest_start: N/A&lt;br/&gt;
time_since_last_checkpoint: N/A&lt;br/&gt;
latest_start_position: N/A&lt;br/&gt;
last_checkpoint_position: N/A&lt;br/&gt;
first_failure_position: N/A&lt;br/&gt;
checked: 0&lt;br/&gt;
updated: 0&lt;br/&gt;
failed: 0&lt;br/&gt;
prior_updated: 0&lt;br/&gt;
noscrub: 0&lt;br/&gt;
igif: 0&lt;br/&gt;
success_count: 0&lt;br/&gt;
run_time: 0 seconds&lt;br/&gt;
average_speed: 0 objects/sec&lt;br/&gt;
real-time_speed: N/A&lt;br/&gt;
current_position: N/A&lt;/p&gt;

&lt;p&gt;grep -i status /proc/fs/lustre/osd-ldiskfs/*/oi_scrub&lt;br/&gt;
/proc/fs/lustre/osd-ldiskfs/lus01-OST0000/oi_scrub:status: init&lt;br/&gt;
/proc/fs/lustre/osd-ldiskfs/lus01-OST0001/oi_scrub:status: init&lt;br/&gt;
/proc/fs/lustre/osd-ldiskfs/lus01-OST0002/oi_scrub:status: init&lt;br/&gt;
/proc/fs/lustre/osd-ldiskfs/lus01-OST0003/oi_scrub:status: init&lt;br/&gt;
/proc/fs/lustre/osd-ldiskfs/lus01-OST0004/oi_scrub:status: init&lt;br/&gt;
/proc/fs/lustre/osd-ldiskfs/lus01-OST0005/oi_scrub:status: init&lt;br/&gt;
/proc/fs/lustre/osd-ldiskfs/lus01-OST0006/oi_scrub:status: init&lt;/p&gt;</comment>
                            <comment id="65545" author="james beal" created="Mon, 2 Sep 2013 11:33:09 +0000"  >&lt;p&gt;Here I have deleted all the files (I believe under my userid, jb23).&lt;/p&gt;

&lt;p&gt;jb23@isg-disc-mon-05:/lustre/scratch101/sanger/jb23$ mkdir test_dir&lt;br/&gt;
jb23@isg-disc-mon-05:/lustre/scratch101/sanger/jb23$ lfs setstripe test_dir  -c -1&lt;br/&gt;
jb23@isg-disc-mon-05:/lustre/scratch101/sanger/jb23$ lfs quota /lustre/scratch101&lt;br/&gt;
Disk quotas for user jb23 (uid 12296):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                      8       0 5368709120       -       1       0 1500000       -&lt;br/&gt;
Disk quotas for group team94 (gid 1105):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                      8       0       0       -       1       0       0       -&lt;br/&gt;
jb23@isg-disc-mon-05:/lustre/scratch101/sanger/jb23$ dd if=/dev/zero of=test_dir/deleteme&lt;br/&gt;
^C384692+0 records in&lt;br/&gt;
384692+0 records out&lt;br/&gt;
196962304 bytes (197 MB) copied, 3.67066 s, 53.7 MB/s&lt;/p&gt;

&lt;p&gt;jb23@isg-disc-mon-05:/lustre/scratch101/sanger/jb23$ lfs quota /lustre/scratch101&lt;br/&gt;
Disk quotas for user jb23 (uid 12296):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                 161832       0 5368709120       -       2       0 1500000       -&lt;br/&gt;
Disk quotas for group team94 (gid 1105):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                 161832       0       0       -       2       0       0       -&lt;br/&gt;
jb23@isg-disc-mon-05:/lustre/scratch101/sanger/jb23$ ls -l ./test_dir/deleteme &lt;br/&gt;
-rw-r--r-- 1 jb23 team94 196962304 2013-09-02 12:29 ./test_dir/deleteme&lt;br/&gt;
jb23@isg-disc-mon-05:/lustre/scratch101/sanger/jb23$ lfs quota /lustre/scratch101 -v&lt;br/&gt;
Disk quotas for user jb23 (uid 12296):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                 192400       0 5368709120       -       2       0 1500000       -&lt;br/&gt;
lus01-MDT0000_UUID&lt;br/&gt;
                      8       -       0       -       2       -       0       -&lt;br/&gt;
lus01-OST0000_UUID&lt;br/&gt;
                   7168       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0001_UUID&lt;br/&gt;
                   7168       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0002_UUID&lt;br/&gt;
                   6148       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0003_UUID&lt;br/&gt;
                   6148       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0004_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0005_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0006_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0007_UUID&lt;br/&gt;
                   7168       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0008_UUID&lt;br/&gt;
                   7172       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0009_UUID&lt;br/&gt;
                   7008       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST000a_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST000b_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST000c_UUID&lt;br/&gt;
                   6148       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST000d_UUID&lt;br/&gt;
                   6148       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST000e_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST000f_UUID&lt;br/&gt;
                   7168       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0010_UUID&lt;br/&gt;
                   6148       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0011_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0012_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0013_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0014_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0015_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0016_UUID&lt;br/&gt;
                   7168       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0017_UUID&lt;br/&gt;
                   7172       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0018_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0019_UUID&lt;br/&gt;
                   6148       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST001a_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST001b_UUID&lt;br/&gt;
                   6148       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST001c_UUID&lt;br/&gt;
                   6148       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST001d_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
Disk quotas for group team94 (gid 1105):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                 192400       0       0       -       2       0       0       -&lt;br/&gt;
lus01-MDT0000_UUID&lt;br/&gt;
                      8       -       0       -       2       -       0       -&lt;br/&gt;
lus01-OST0000_UUID&lt;br/&gt;
                   7168       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0001_UUID&lt;br/&gt;
                   7168       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0002_UUID&lt;br/&gt;
                   6148       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0003_UUID&lt;br/&gt;
                   6148       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0004_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0005_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0006_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0007_UUID&lt;br/&gt;
                   7168       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0008_UUID&lt;br/&gt;
                   7172       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0009_UUID&lt;br/&gt;
                   7008       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST000a_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST000b_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST000c_UUID&lt;br/&gt;
                   6148       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST000d_UUID&lt;br/&gt;
                   6148       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST000e_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST000f_UUID&lt;br/&gt;
                   7168       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0010_UUID&lt;br/&gt;
                   6148       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0011_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0012_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0013_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0014_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0015_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0016_UUID&lt;br/&gt;
                   7168       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0017_UUID&lt;br/&gt;
                   7172       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0018_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0019_UUID&lt;br/&gt;
                   6148       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST001a_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST001b_UUID&lt;br/&gt;
                   6148       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST001c_UUID&lt;br/&gt;
                   6148       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST001d_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;/p&gt;

&lt;p&gt;I then wait a bit, and the quota command gives the &quot;right&quot; answer:&lt;/p&gt;

&lt;p&gt;jb23@isg-disc-mon-05:/lustre/scratch101/sanger/jb23$ lfs quota /lustre/scratch101 -v&lt;br/&gt;
Disk quotas for user jb23 (uid 12296):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                 192400       0 5368709120       -       2       0 1500000       -&lt;br/&gt;
lus01-MDT0000_UUID&lt;br/&gt;
                      8       -       0       -       2       -       0       -&lt;br/&gt;
lus01-OST0000_UUID&lt;br/&gt;
                   7168       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0001_UUID&lt;br/&gt;
                   7168       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0002_UUID&lt;br/&gt;
                   6148       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0003_UUID&lt;br/&gt;
                   6148       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0004_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0005_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0006_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0007_UUID&lt;br/&gt;
                   7168       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0008_UUID&lt;br/&gt;
                   7172       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0009_UUID&lt;br/&gt;
                   7008       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST000a_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST000b_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST000c_UUID&lt;br/&gt;
                   6148       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST000d_UUID&lt;br/&gt;
                   6148       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST000e_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST000f_UUID&lt;br/&gt;
                   7168       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0010_UUID&lt;br/&gt;
                   6148       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0011_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0012_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0013_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0014_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0015_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0016_UUID&lt;br/&gt;
                   7168       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0017_UUID&lt;br/&gt;
                   7172       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0018_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST0019_UUID&lt;br/&gt;
                   6148       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST001a_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST001b_UUID&lt;br/&gt;
                   6148       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST001c_UUID&lt;br/&gt;
                   6148       - 67108864       -       -       -       -       -&lt;br/&gt;
lus01-OST001d_UUID&lt;br/&gt;
                   6144       - 67108864       -       -       -       -       -&lt;br/&gt;
Disk quotas for group team94 (gid 1105):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                 192400       0       0       -       2       0       0       -&lt;br/&gt;
lus01-MDT0000_UUID&lt;br/&gt;
                      8       -       0       -       2       -       0       -&lt;br/&gt;
lus01-OST0000_UUID&lt;br/&gt;
                   7168       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0001_UUID&lt;br/&gt;
                   7168       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0002_UUID&lt;br/&gt;
                   6148       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0003_UUID&lt;br/&gt;
                   6148       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0004_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0005_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0006_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0007_UUID&lt;br/&gt;
                   7168       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0008_UUID&lt;br/&gt;
                   7172       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0009_UUID&lt;br/&gt;
                   7008       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST000a_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST000b_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST000c_UUID&lt;br/&gt;
                   6148       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST000d_UUID&lt;br/&gt;
                   6148       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST000e_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST000f_UUID&lt;br/&gt;
                   7168       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0010_UUID&lt;br/&gt;
                   6148       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0011_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0012_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0013_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0014_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0015_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0016_UUID&lt;br/&gt;
                   7168       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0017_UUID&lt;br/&gt;
                   7172       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0018_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST0019_UUID&lt;br/&gt;
                   6148       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST001a_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST001b_UUID&lt;br/&gt;
                   6148       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST001c_UUID&lt;br/&gt;
                   6148       -       0       -       -       -       -       -&lt;br/&gt;
lus01-OST001d_UUID&lt;br/&gt;
                   6144       -       0       -       -       -       -       -&lt;/p&gt;</comment>
                            <comment id="65546" author="james beal" created="Mon, 2 Sep 2013 11:38:02 +0000"  >&lt;p&gt;Another data point.&lt;/p&gt;

&lt;p&gt;Changing the owner of a file to someone else and then back again will make the file show up under the right person&apos;s quota.&lt;/p&gt;

&lt;p&gt;root@isg-disc-mon-05:/lustre/scratch101/ensembl/kb3/scratch/MouseEncode# lfs quota -u kb3 /lustre/scratch101 &lt;br/&gt;
Disk quotas for user kb3 (uid 11809):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                      0       0       1       -       0       0       1       -&lt;br/&gt;
root@isg-disc-mon-05:/lustre/scratch101/ensembl/kb3/scratch/MouseEncode# ls -l Compara.12_eutherian_mammals_EPO.tar&lt;br/&gt;
-rw-r--r-- 1 kb3 ebiusers 18887997440 2012-03-02 14:56 Compara.12_eutherian_mammals_EPO.tar&lt;br/&gt;
root@isg-disc-mon-05:/lustre/scratch101/ensembl/kb3/scratch/MouseEncode# chown kb3 Compara.12_eutherian_mammals_EPO.tar &lt;/p&gt;

&lt;p&gt;root@isg-disc-mon-05:/lustre/scratch101/ensembl/kb3/scratch/MouseEncode# lfs quota -u kb3 /lustre/scratch101 &lt;br/&gt;
Disk quotas for user kb3 (uid 11809):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                      0       0       1       -       0       0       1       -&lt;br/&gt;
root@isg-disc-mon-05:/lustre/scratch101/ensembl/kb3/scratch/MouseEncode# chown jb23 Compara.12_eutherian_mammals_EPO.tar &lt;br/&gt;
root@isg-disc-mon-05:/lustre/scratch101/ensembl/kb3/scratch/MouseEncode# chown kb3 Compara.12_eutherian_mammals_EPO.tar &lt;br/&gt;
root@isg-disc-mon-05:/lustre/scratch101/ensembl/kb3/scratch/MouseEncode# lfs quota -u kb3 /lustre/scratch101 &lt;br/&gt;
Disk quotas for user kb3 (uid 11809):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                18445516*      0       1       -       1*      0       1       -&lt;br/&gt;
root@isg-disc-mon-05:/lustre/scratch101/ensembl/kb3/scratch/MouseEncode# &lt;/p&gt;</comment>
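The chown-away-and-back workaround demonstrated above can be sketched as a small shell function. This is a hedged sketch: the helper name `requota_file`, the default temporary owner `root`, and the dry-run behaviour are illustrative assumptions, not part of the original report.

```shell
#!/bin/sh
# Sketch of the workaround observed in this ticket: chown a file to a
# temporary owner and back, so quota accounting picks the file up.
# The function only PRINTS the commands (dry run); the helper name and
# the default temporary owner "root" are assumptions for illustration.
requota_file() {
    file="$1"
    owner="$2"
    tmp_owner="${3:-root}"
    echo "chown $tmp_owner $file"   # move the file off the user's quota
    echo "chown $owner $file"       # move it back; usage is re-accounted
}

# Example (dry run):
requota_file Compara.12_eutherian_mammals_EPO.tar kb3
```

Replacing the `echo`s with the real `chown` invocations would perform the workaround for one file; doing this tree-wide would of course touch every inode.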
                            <comment id="65547" author="james beal" created="Mon, 2 Sep 2013 11:54:33 +0000"  >&lt;p&gt;As a summary.&lt;/p&gt;

&lt;p&gt;It appears that new files, or files which have their ownership changed, are included in a user&apos;s quota. &lt;/p&gt;

&lt;p&gt;We continue to have issues with getting the original usage into the quota system.&lt;/p&gt;
</comment>
                            <comment id="65549" author="niu" created="Mon, 2 Sep 2013 12:48:10 +0000"  >&lt;p&gt;Hi, James&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As a summary.&lt;br/&gt;
It appears that new files or files which have their ownership changed are included in a users quota.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Could you explain this a little bit?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We continue to have issues with getting the original quotas in to the system.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;I don&apos;t quite follow this neither...&lt;/p&gt;

&lt;p&gt;As Guy said, the quota works for him with new build:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I&apos;ve redone our 2.4 build, and quota on my test system now works correctly; both the 1.8-&amp;gt;2.4 upgraded one, and the freshly formatted 2.4 system. (I needed a round of e2fsck / tune2fs -O ^quota / tunefs.lustre --quota / lctl conf_param to get the stats in sync.)&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Did you use the new build?&lt;/p&gt;</comment>
                            <comment id="65550" author="james beal" created="Mon, 2 Sep 2013 13:01:23 +0000"  >
&lt;p&gt;&amp;gt;&amp;gt;It appears that new files or files which have their ownership changed are included in a users quota.&lt;br/&gt;
&amp;gt;Could you explain this a little bit?&lt;/p&gt;

&lt;p&gt;New files are correctly accounted for, I think. Changing the owner of a file to root and then back to the original owner ensures that the file is correctly accounted for. It is worth noting that there is a bit of a delay (about 20 seconds) between the writes and the changes becoming apparent in lfs quota.&lt;/p&gt;


&lt;p&gt;&amp;gt;&amp;gt;We continue to have issues with getting the original quotas in to the system.&lt;br/&gt;
&amp;gt;I don&apos;t quite follow this neither...&lt;/p&gt;

&lt;p&gt;The mechanism to rescan the filesystem and initialise the quota accounting is still not working.&lt;/p&gt;

&lt;p&gt;&amp;gt;As Guy said, the quota works for him with new build:&lt;br/&gt;
&amp;gt;I&apos;ve redone our 2.4 build, and quota on my test system now works correctly; both the 1.8-&amp;gt;2.4 upgraded one, and &amp;gt;the freshly formatted 2.4 system. (I needed a round of e2fsck / tune2fs -O ^quota / tunefs.lustre --quota / &amp;gt;lctl conf_param to get the stats in sync.)&lt;br/&gt;
&amp;gt;Did you use the new build?&lt;/p&gt;

&lt;p&gt;I have used the new build. I am currently trying the following order for e2fsck/tune2fs/tunefs.lustre:&lt;/p&gt;

&lt;p&gt;#!/bin/sh&lt;br/&gt;
LOG=&quot;/root/`echo $1.log | sed -e &apos;s#/#_#g&apos;`&quot;&lt;br/&gt;
e2fsck -fy $1 2&amp;gt;&amp;amp;1 | tee -a $LOG&lt;br/&gt;
tune2fs -O ^quota $1 2&amp;gt;&amp;amp;1 | tee -a $LOG&lt;br/&gt;
tunefs.lustre -v --quota $1 2&amp;gt;&amp;amp;1 | tee -a $LOG&lt;/p&gt;

&lt;p&gt;I am also mounting the MDT before running the script on the OSSes.&lt;/p&gt;



</comment>
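The per-device script in the comment above can also be written with the log-name derivation factored into a function and the device argument quoted. This is a sketch along the same lines; the behaviour is intended to match the original script, and the function names are illustrative.

```shell
#!/bin/sh
# Sketch of the e2fsck / tune2fs / tunefs.lustre sequence from the
# comment above, with the log-file name derivation factored out.
# Function names are illustrative; commands match the original script.
log_name() {
    # e.g. /dev/lus01-mdt0/lus01 -> /root/_dev_lus01-mdt0_lus01.log
    echo "/root/$(echo "$1.log" | sed -e 's#/#_#g')"
}

run_sequence() {
    dev="$1"
    LOG=$(log_name "$dev")
    e2fsck -fy "$dev"               2>&1 | tee -a "$LOG"
    tune2fs -O ^quota "$dev"        2>&1 | tee -a "$LOG"
    tunefs.lustre -v --quota "$dev" 2>&1 | tee -a "$LOG"
}
```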
                            <comment id="65561" author="james beal" created="Mon, 2 Sep 2013 17:26:27 +0000"  >&lt;p&gt;I have repeated the e2fsck/tune2fs/tunefs.lustre sequence on the MDT, mounted the MDT, and then repeated it for the OSSes.&lt;/p&gt;

&lt;p&gt;All quotas report as 0.&lt;/p&gt;</comment>
                            <comment id="65615" author="gmpc@sanger.ac.uk" created="Tue, 3 Sep 2013 16:01:10 +0000"  >&lt;p&gt;Just to clarify James&apos;s remarks:&lt;/p&gt;

&lt;p&gt;With the new server build, disk accounting and quota enforcement are working, but only for newly written files.&lt;/p&gt;

&lt;p&gt;If I create a 1GB file, the quota system will account for 1GB of space (and will enforce the quota, if appropriate).&lt;/p&gt;

&lt;p&gt;cd /lustre/scratch101/sanger/gmpc/test&lt;br/&gt;
dd if=/dev/zero of=bigfiles bs=1M count=1000&lt;/p&gt;

&lt;p&gt;lfs quota .&lt;br/&gt;
Disk quotas for user gmpc (uid 10795):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
              . 1033460       0 104857600       -     437       0  153600       -&lt;/p&gt;

&lt;p&gt;However, the quota system is not counting files that existed on the filesystem before the 1.8 --&amp;gt; 2.4 upgrade was done. &lt;/p&gt;

&lt;p&gt;(e.g. the 4GB of files in this directory are not accounted for at all)&lt;/p&gt;

&lt;p&gt;ls -alh /lustre/scratch101/sanger/gmpc/allstripe&lt;br/&gt;
-rw-r--r--  1 gmpc team94  21M 2011-07-06 09:34 fart1.dat.gz&lt;br/&gt;
-r--------  1 gmpc team94 4.0G 2013-09-03 16:10 fart3.dat&lt;br/&gt;
-r--------  1 gmpc team94  11M 2011-06-29 10:32 fart.dat.gz&lt;/p&gt;
</comment>
                            <comment id="65706" author="james beal" created="Wed, 4 Sep 2013 09:36:00 +0000"  >&lt;p&gt;Could we raise the priority of this ticket, please?&lt;/p&gt;

&lt;p&gt;Is there any additional information we can provide, or tests we can run?&lt;/p&gt;
</comment>
                            <comment id="65729" author="niu" created="Wed, 4 Sep 2013 14:01:13 +0000"  >&lt;p&gt;James, Guy&lt;/p&gt;

&lt;p&gt;Are old inodes accounted? If not, could you run the following commands on the MDT device?&lt;/p&gt;

&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;tune2fs -O ^quota mdt_device (disable quota)&lt;/li&gt;
	&lt;li&gt;tune2fs -O quota mdt_device (enable quota)&lt;/li&gt;
	&lt;li&gt;setup lustre;&lt;/li&gt;
	&lt;li&gt;lfs quota -v user_id; (check if old inodes are accounted)&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;Please save the console output of the first two steps and the dmesg output of the last two steps. Thanks.&lt;/p&gt;</comment>
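The four steps listed above can be sketched as a dry-run script. The MDT device path, mount point, and uid below are hypothetical placeholders, and the commands are echoed rather than executed so the intended order is explicit.

```shell
#!/bin/sh
# Dry-run sketch of the suggested diagnostic sequence. The device,
# mount point, and uid are placeholders; commands are printed, not run.
quota_recheck_cmds() {
    dev="$1"
    uid="$2"
    echo "tune2fs -O ^quota $dev"          # 1. disable the quota feature
    echo "tune2fs -O quota $dev"           # 2. re-enable it (rebuilds accounting)
    echo "mount -t lustre $dev /mnt/mdt"   # 3. set up Lustre (mount the MDT)
    echo "lfs quota -v -u $uid /lustre"    # 4. check whether old inodes are accounted
}

quota_recheck_cmds /dev/lus01-mdt0/lus01 12296
```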
                            <comment id="65734" author="james beal" created="Wed, 4 Sep 2013 14:47:51 +0000"  >&lt;p&gt;To clarify: by &quot;setup lustre;&quot;, do you mean mounting the MDT?&lt;/p&gt;</comment>
                            <comment id="65738" author="james beal" created="Wed, 4 Sep 2013 15:03:09 +0000"  >&lt;p&gt;/dev/lus01-mdt0/lus01  /export/MDS lustre   noauto 0 0 &lt;br/&gt;
root@lus01-mds2:~# tune2fs -O ^quota /dev/lus01-mdt0/lus01 &lt;br/&gt;
tune2fs 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
root@lus01-mds2:~# tune2fs -O quota /dev/lus01-mdt0/lus01 &lt;br/&gt;
tune2fs 1.42.7.wc1 (12-Apr-2013)&lt;/p&gt;

&lt;p&gt;Warning: the quota feature is still under development&lt;br/&gt;
See &lt;a href=&quot;https://ext4.wiki.kernel.org/index.php/Quota&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://ext4.wiki.kernel.org/index.php/Quota&lt;/a&gt; for more information&lt;/p&gt;

&lt;p&gt;root@lus01-mds2:~# &lt;/p&gt;</comment>
                            <comment id="65756" author="james beal" created="Wed, 4 Sep 2013 17:17:29 +0000"  >&lt;p&gt;&quot;Is old inode accounted?&quot;&lt;/p&gt;

&lt;p&gt;No, the process does not fix the inode accounting.&lt;/p&gt;

&lt;p&gt;I could make an image of the MDS and MGS discs and upload them?&lt;/p&gt;


&lt;p&gt;Sep  4 18:08:59 lus01-mds2 kernel: LNet: HW CPU cores: 8, npartitions: 2&lt;br/&gt;
Sep  4 18:08:59 lus01-mds2 kernel: alg: No test for crc32 (crc32-table)&lt;br/&gt;
Sep  4 18:08:59 lus01-mds2 kernel: alg: No test for adler32 (adler32-zlib)&lt;br/&gt;
Sep  4 18:09:08 lus01-mds2 kernel: Lustre: Lustre: Build Version: 2.4.0--PRISTINE-2.6.32-lustre-2.4&lt;br/&gt;
Sep  4 18:09:08 lus01-mds2 kernel: LNet: Added LNI 172.17.99.9@tcp &lt;span class=&quot;error&quot;&gt;&amp;#91;8/256/0/180&amp;#93;&lt;/span&gt;&lt;br/&gt;
Sep  4 18:09:08 lus01-mds2 kernel: LNet: Accept secure, port 988&lt;br/&gt;
Sep  4 18:09:09 lus01-mds2 kernel: LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. quota=on. Opts: &lt;br/&gt;
Sep  4 18:09:21 lus01-mds2 kernel: LustreError: 28636:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff880438307800 x1445267555483656/t0(0) o253-&amp;gt;MGC172.17.99.10@tcp@0@lo:26/25 lens 4768/4768 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1&lt;br/&gt;
Sep  4 18:09:21 lus01-mds2 kernel: LustreError: 28636:0:(obd_mount_server.c:1123:server_register_target()) lus01-MDT0000: error registering with the MGS: rc = -5 (not fatal)&lt;br/&gt;
Sep  4 18:09:27 lus01-mds2 kernel: LustreError: 28636:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff880435616400 x1445267555483660/t0(0) o101-&amp;gt;MGC172.17.99.10@tcp@0@lo:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1&lt;br/&gt;
Sep  4 18:09:33 lus01-mds2 kernel: LustreError: 28636:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff880435616400 x1445267555483664/t0(0) o101-&amp;gt;MGC172.17.99.10@tcp@0@lo:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1&lt;br/&gt;
Sep  4 18:09:33 lus01-mds2 kernel: Lustre: 28769:0:(obd_config.c:1428:class_config_llog_handler()) For 1.8 interoperability, rename obd type from mds to mdt&lt;br/&gt;
Sep  4 18:09:33 lus01-mds2 kernel: Lustre: lus01-MDT0000: used disk, loading&lt;br/&gt;
Sep  4 18:09:33 lus01-mds2 kernel: LustreError: 28769:0:(sec_config.c:1115:sptlrpc_target_local_read_conf()) missing llog context&lt;br/&gt;
Sep  4 18:09:33 lus01-mds2 kernel: Lustre: 28769:0:(mdt_handler.c:4945:mdt_process_config()) For interoperability, skip this mdt.quota_type. It is obsolete.&lt;br/&gt;
Sep  4 18:09:40 lus01-mds2 kernel: LustreError: 28636:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff8803d376a000 x1445267555483904/t0(0) o101-&amp;gt;MGC172.17.99.10@tcp@0@lo:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1&lt;br/&gt;
Sep  4 18:09:46 lus01-mds2 kernel: LustreError: 28636:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff8803d376a000 x1445267555483912/t0(0) o101-&amp;gt;MGC172.17.99.10@tcp@0@lo:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1&lt;br/&gt;
Sep  4 18:09:46 lus01-mds2 kernel: LustreError: 11-0: lus01-MDT0000-lwp-MDT0000: Communicating with 0@lo, operation mds_connect failed with -11.&lt;br/&gt;
Sep  4 18:09:57 lus01-mds2 kernel: LustreError: 28636:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff8804001f2c00 x1445267555483920/t0(0) o253-&amp;gt;MGC172.17.99.10@tcp@0@lo:26/25 lens 4768/4768 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1&lt;br/&gt;
Sep  4 18:10:12 lus01-mds2 kernel: Lustre: lus01-MDT0000: Will be in recovery for at least 5:00, or until 1 client reconnects&lt;br/&gt;
Sep  4 18:10:12 lus01-mds2 kernel: Lustre: lus01-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted.&lt;/p&gt;



&lt;p&gt;root@isg-disc-mon-05:~# lfs quota -u jb23 /lustre/scratch101 &lt;br/&gt;
Disk quotas for user jb23 (uid 12296):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                      0       0 5368709120       -       0       0 1500000       -&lt;br/&gt;
root@isg-disc-mon-05:~# touch /lustre/scratch1&lt;br/&gt;
root@isg-disc-mon-05:~# chown jb23 /lustre/scratch101/ensembl/kb3&lt;br/&gt;
root@isg-disc-mon-05:~# lfs quota -u jb23 /lustre/scratch101 &lt;br/&gt;
Disk quotas for user jb23 (uid 12296):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                      4       0 5368709120       -       1       0 1500000       -&lt;/p&gt;</comment>
                            <comment id="65766" author="kitwestneat" created="Wed, 4 Sep 2013 18:59:17 +0000"  >&lt;p&gt;Hi James,&lt;/p&gt;

&lt;p&gt;It looks like there is a DEBUG_QUOTA define that, if set, will spit out a ton of debug data during the e2fsck. Would it be possible to recompile the e2fsprogs with that and see if it outputs any useful information during the tune2fs?&lt;/p&gt;

&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;Kit&lt;/li&gt;
&lt;/ul&gt;
</comment>
                            <comment id="65787" author="niu" created="Thu, 5 Sep 2013 02:16:35 +0000"  >&lt;p&gt;James, are there any error messages in dmesg when executing &apos;lfs quota&apos;?&lt;/p&gt;</comment>
                            <comment id="65789" author="niu" created="Thu, 5 Sep 2013 02:51:43 +0000"  >&lt;p&gt;I reproduced the problem in my local environment and am trying to figure out the reason.&lt;/p&gt;</comment>
                            <comment id="65799" author="niu" created="Thu, 5 Sep 2013 06:57:32 +0000"  >&lt;p&gt;There is a defect in e2fsprogs that causes the quotacheck (triggered by tune2fs -O quota) to write only a single user&apos;s accounting information into the quota file. I posted a fix here: &lt;a href=&quot;http://review.whamcloud.com/7556&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/7556&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="65806" author="james beal" created="Thu, 5 Sep 2013 07:33:24 +0000"  >&lt;p&gt;&quot;I reproduced the problem in my local environment, is trying to figure out the reason.&quot; &lt;/p&gt;

&lt;p&gt;Thank you for that, your work is very much appreciated &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;</comment>
                            <comment id="65812" author="james beal" created="Thu, 5 Sep 2013 08:40:28 +0000"  >&lt;p&gt;I have run the process on the MGS and MDS, signs look good.&lt;/p&gt;

&lt;p&gt;This is a set of lfs quota after running the process&lt;/p&gt;

&lt;p&gt;root@isg-disc-mon-05:~# lfs quota -u jb23 /lustre/scratch101&lt;br/&gt;
Disk quotas for user jb23 (uid 12296):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                     24       0 5368709120       -       6       0 1500000       -&lt;br/&gt;
root@isg-disc-mon-05:~# lfs quota -u kb3 /lustre/scratch101&lt;br/&gt;
Disk quotas for user kb3 (uid 11809):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                   2076*      0       1       -    2443*      0       1       -&lt;br/&gt;
root@isg-disc-mon-05:~# lfs quota -u gmpc /lustre/scratch101&lt;br/&gt;
Disk quotas for user gmpc (uid 10795):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                1352464       0 104857600       -  307736*      0  153600       -&lt;/p&gt;


&lt;p&gt;Thu Sep  5 09:14:38 BST 2013&lt;br/&gt;
/dev/lus01-mdt0/lus01&lt;br/&gt;
e2fsck 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
Pass 1: Checking inodes, blocks, and sizes&lt;br/&gt;
Pass 2: Checking directory structure&lt;br/&gt;
Pass 3: Checking directory connectivity&lt;br/&gt;
Pass 4: Checking reference counts&lt;br/&gt;
Pass 5: Checking group summary information&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (806342656, 1888) != expected (8192, 0)&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 12296:actual (24576, 6) != expected (4096, 1)&lt;br/&gt;
Update quota info for quota type 0? yes&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;QUOTA WARNING&amp;#93;&lt;/span&gt; Usage inconsistent for ID 0:actual (836046848, 15856) != expected (8192, 0)&lt;br/&gt;
Update quota info for quota type 1? yes&lt;/p&gt;


&lt;p&gt;lus01-MDT0000: ***** FILE SYSTEM WAS MODIFIED *****&lt;br/&gt;
lus01-MDT0000: 16039942/1050017792 files (0.1% non-contiguous), 133891006/1050001408 blocks&lt;br/&gt;
Thu Sep  5 09:28:00 BST 2013&lt;br/&gt;
tune2fs 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
Thu Sep  5 09:29:10 BST 2013&lt;/p&gt;

&lt;p&gt;Warning: the quota feature is still under development&lt;br/&gt;
See &lt;a href=&quot;https://ext4.wiki.kernel.org/index.php/Quota&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://ext4.wiki.kernel.org/index.php/Quota&lt;/a&gt; for more information&lt;/p&gt;

&lt;p&gt;tune2fs 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
checking for existing Lustre data: found&lt;br/&gt;
Reading CONFIGS/mountdata&lt;/p&gt;

&lt;p&gt;   Read previous values:&lt;br/&gt;
Target:     lus01-MDT0000&lt;br/&gt;
Index:      0&lt;br/&gt;
Lustre FS:  lus01&lt;br/&gt;
Mount type: ldiskfs&lt;br/&gt;
Flags:      0x1&lt;br/&gt;
              (MDT )&lt;br/&gt;
Persistent mount opts: iopen_nopriv,user_xattr,errors=remount-ro&lt;br/&gt;
Parameters: mgsnode=172.17.99.10@tcp mgsnode=172.17.99.9@tcp failover.node=172.17.99.10@tcp mdt.quota_type=ug mdt.group_upcall=/usr/sbin/l_getgroups&lt;/p&gt;


&lt;p&gt;   Permanent disk data:&lt;br/&gt;
Target:     lus01-MDT0000&lt;br/&gt;
Index:      0&lt;br/&gt;
Lustre FS:  lus01&lt;br/&gt;
Mount type: ldiskfs&lt;br/&gt;
Flags:      0x1&lt;br/&gt;
              (MDT )&lt;br/&gt;
Persistent mount opts: iopen_nopriv,user_xattr,errors=remount-ro&lt;br/&gt;
Parameters: mgsnode=172.17.99.10@tcp mgsnode=172.17.99.9@tcp failover.node=172.17.99.10@tcp mdt.quota_type=ug mdt.group_upcall=/usr/sbin/l_getgroups&lt;/p&gt;

&lt;p&gt;cmd: tune2fs -O quota /dev/lus01-mdt0/lus01&lt;br/&gt;
Thu Sep  5 09:34:04 BST 2013&lt;br/&gt;
root@lus01-mds2:/root# &lt;/p&gt;


&lt;p&gt;Sep  5 09:35:39 lus01-mds2 kernel: LNet: HW CPU cores: 8, npartitions: 2&lt;br/&gt;
Sep  5 09:35:39 lus01-mds2 kernel: alg: No test for crc32 (crc32-table)&lt;br/&gt;
Sep  5 09:35:39 lus01-mds2 kernel: alg: No test for adler32 (adler32-zlib)&lt;br/&gt;
Sep  5 09:35:48 lus01-mds2 kernel: Lustre: Lustre: Build Version: 2.4.0--PRISTINE-2.6.32-lustre-2.4&lt;br/&gt;
Sep  5 09:35:48 lus01-mds2 kernel: LNet: Added LNI 172.17.99.9@tcp &lt;span class=&quot;error&quot;&gt;&amp;#91;8/256/0/180&amp;#93;&lt;/span&gt;&lt;br/&gt;
Sep  5 09:35:48 lus01-mds2 kernel: LNet: Accept secure, port 988&lt;br/&gt;
Sep  5 09:35:49 lus01-mds2 kernel: LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. quota=on. Opts: &lt;br/&gt;
Sep  5 09:36:01 lus01-mds2 kernel: LustreError: 4973:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff880256f3f400 x1445325856309256/t0(0) o253-&amp;gt;MGC172.17.99.10@tcp@0@lo:26/25 lens 4768/4768 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1&lt;br/&gt;
Sep  5 09:36:01 lus01-mds2 kernel: LustreError: 4973:0:(obd_mount_server.c:1123:server_register_target()) lus01-MDT0000: error registering with the MGS: rc = -5 (not fatal)&lt;br/&gt;
Sep  5 09:36:07 lus01-mds2 kernel: LustreError: 4973:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff880438307800 x1445325856309260/t0(0) o101-&amp;gt;MGC172.17.99.10@tcp@0@lo:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1&lt;br/&gt;
Sep  5 09:36:13 lus01-mds2 kernel: LustreError: 4973:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff880438307800 x1445325856309264/t0(0) o101-&amp;gt;MGC172.17.99.10@tcp@0@lo:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1&lt;br/&gt;
Sep  5 09:36:13 lus01-mds2 kernel: Lustre: 5105:0:(obd_config.c:1428:class_config_llog_handler()) For 1.8 interoperability, rename obd type from mds to mdt&lt;br/&gt;
Sep  5 09:36:13 lus01-mds2 kernel: Lustre: lus01-MDT0000: used disk, loading&lt;br/&gt;
Sep  5 09:36:13 lus01-mds2 kernel: LustreError: 5105:0:(sec_config.c:1115:sptlrpc_target_local_read_conf()) missing llog context&lt;br/&gt;
Sep  5 09:36:13 lus01-mds2 kernel: Lustre: 5105:0:(mdt_handler.c:4945:mdt_process_config()) For interoperability, skip this mdt.quota_type. It is obsolete.&lt;br/&gt;
Sep  5 09:36:20 lus01-mds2 kernel: LustreError: 4973:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff88043378f000 x1445325856309504/t0(0) o101-&amp;gt;MGC172.17.99.10@tcp@0@lo:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1&lt;br/&gt;
Sep  5 09:36:26 lus01-mds2 kernel: LustreError: 4973:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff88037d951000 x1445325856309512/t0(0) o101-&amp;gt;MGC172.17.99.10@tcp@0@lo:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1&lt;br/&gt;
Sep  5 09:36:26 lus01-mds2 kernel: LustreError: 11-0: lus01-MDT0000-lwp-MDT0000: Communicating with 0@lo, operation mds_connect failed with -11.&lt;br/&gt;
Sep  5 09:36:37 lus01-mds2 kernel: LustreError: 4973:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff88037d951000 x1445325856309520/t0(0) o253-&amp;gt;MGC172.17.99.10@tcp@0@lo:26/25 lens 4768/4768 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1&lt;br/&gt;
Sep  5 09:37:17 lus01-mds2 kernel: Lustre: lus01-MDT0000: Will be in recovery for at least 5:00, or until 1 client reconnects&lt;br/&gt;
Sep  5 09:37:17 lus01-mds2 kernel: Lustre: lus01-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted.&lt;br/&gt;
Sep  5 09:37:35 lus01-mds2 kernel: Lustre: 5024:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1378370150/real 1378370150&amp;#93;&lt;/span&gt;  req@ffff880256f3f800 x1445325856309252/t0(0) o250-&amp;gt;MGC172.17.99.10@tcp@0@lo:26/25 lens 400/544 e 0 to 1 dl 1378370255 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1&lt;/p&gt;</comment>
                            <comment id="65904" author="niu" created="Fri, 6 Sep 2013 04:13:22 +0000"  >&lt;p&gt;Hi, James&lt;br/&gt;
Could you install the e2fsprogs from &lt;a href=&quot;http://build.whamcloud.com/job/e2fsprogs-reviews/173/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/e2fsprogs-reviews/173/&lt;/a&gt; (see &lt;a href=&quot;http://review.whamcloud.com/#/c/7556/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/7556/&lt;/a&gt;), and disable/enable quota for all your MDT and OST devices by:&lt;br/&gt;
tune2fs -O ^quota $dev&lt;br/&gt;
tune2fs -O quota $dev&lt;br/&gt;
Then set up Lustre to see if the problem is resolved? Thanks.&lt;/p&gt;</comment>
                            <comment id="65925" author="james beal" created="Fri, 6 Sep 2013 08:44:28 +0000"  >&lt;p&gt;We did this yesterday and ran through the procedure.&lt;/p&gt;

&lt;p&gt;I believe that the patch has fixed the issue.&lt;/p&gt;

&lt;p&gt;lfs quota -u jb23 /lustre/scratch101&lt;br/&gt;
Disk quotas for user jb23 (uid 12296):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                 192412       0 5368709120       -       6       0 1500000       -&lt;/p&gt;

&lt;p&gt;lfs quota -u kb3 /lustre/scratch101&lt;br/&gt;
Disk quotas for user kb3 (uid 11809):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                150478460*      0       1       -    2443*      0       1       -&lt;/p&gt;

&lt;p&gt;lfs quota -g ensembl  /lustre/scratch101&lt;br/&gt;
Disk quotas for group ensembl (gid 707):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
/lustre/scratch101&lt;br/&gt;
                4544447964       0       0       - 2506955       0       0       -&lt;/p&gt;

</comment>
                            <comment id="73643" author="niu" created="Tue, 17 Dec 2013 02:14:22 +0000"  >&lt;p&gt;patch landed&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="20712">LU-3861</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="13419" name="lus01_logs.tar.gz" size="233745" author="james beal" created="Mon, 2 Sep 2013 10:31:37 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvykv:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9787</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>