<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:53:08 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-12500] group block quota limits not enforced</title>
                <link>https://jira.whamcloud.com/browse/LU-12500</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Hi,&lt;/p&gt;

&lt;p&gt;It doesn&apos;t look like Lustre is enforcing group block quotas, e.g.:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt; &amp;gt; lfs quota -g oz011 /fred
Disk quotas for grp oz011 (gid 10206):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
          /fred 10893023318*      0 10737418240       -   57475       0 1000000       -
 &amp;gt; dd if=/tmp/urand100 of=/fred/oz011/blah bs=1M
1000+1 records in
1000+1 records out
1048738400 bytes (1.0 GB) copied, 0.995716 s, 1.1 GB/s
 &amp;gt; ls -lsh /fred/oz011/blah
688M -rw-r--r-- 1 user oz011 1001M Jul  1 21:42 /fred/oz011/blah
 &amp;gt; lfs quota -g oz011 /fred
Disk quotas for grp oz011 (gid 10206):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
          /fred 10894895294*      0 10737418240       -   57477       0 1000000       -
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;I can see old quota bugs that look similar, but none currently open.&lt;/p&gt;

&lt;p&gt;All our directories are setgid, i.e.:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt; &amp;gt; ls -ld /fred/oz011
drwxrws--- 17 root oz011 33280 Jul  1 21:42 /fred/oz011
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;The servers are 2.10.5 plus these patches:&lt;/p&gt;

&lt;p&gt;lu11082-lu11103-stuckMdtThreads-gerrit32853-3dc08caa.diff&lt;br/&gt;
 lu11418-refreshStale-gerrit33401-v4-71f409c9.diff&lt;br/&gt;
 lu11111-lfsck-gerrit32796-693fe452.ported.patch&lt;br/&gt;
 lu11418-stopOrphCleanupDaThreadSpinning-gerrit33662-45434fd0.diff&lt;br/&gt;
 lu11201-lfsckDoesntFinish-gerrit33078-4829fb05.patch&lt;br/&gt;
 lu11419-lfsckDoesntFinish-gerrit33252-22503a1d.diff&lt;br/&gt;
 lu11301-stuckMdtThreads2-c43baa1c.patch&lt;br/&gt;
 lu11663-partialPageCorruption-gerrit33748-18d6b8fb.diff&lt;br/&gt;
 lu11418-hungMdtZfs-gerrit33248-eaa3c60d.diff&lt;/p&gt;

&lt;p&gt;Not all of these are in 2.10.x AFAIK (all but one are in 2.12?), so it&apos;d unfortunately be quite a bit of work to update the servers to 2.10.8.&lt;/p&gt;

&lt;p&gt;Clients are all stock 2.10.7.&lt;/p&gt;

&lt;p&gt;Thanks.&lt;/p&gt;

&lt;p&gt;cheers,&lt;br/&gt;
robin&lt;/p&gt;</description>
                <environment>2.10.5 + lots of patches on servers, x86_64, zfs 0.7.9, OPA&lt;br/&gt;
2.10.7 on clients, x86_64, OPA&lt;br/&gt;
group block and inode quotas on.&lt;br/&gt;
</environment>
        <key id="56246">LU-12500</key>
            <summary>group block quota limits not enforced</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="6">Not a Bug</resolution>
                                        <assignee username="hongchao.zhang">Hongchao Zhang</assignee>
                                    <reporter username="scadmin">SC Admin</reporter>
                        <labels>
                    </labels>
                <created>Mon, 1 Jul 2019 15:57:03 +0000</created>
                <updated>Mon, 1 Jul 2019 18:41:30 +0000</updated>
                            <resolved>Mon, 1 Jul 2019 18:41:30 +0000</resolved>
                                    <version>Lustre 2.10.5</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>4</watches>
                                                                            <comments>
<comment id="250425" author="scadmin" created="Mon, 1 Jul 2019 16:13:33 +0000"  >&lt;p&gt;A bit more info: if I wait a while, the &quot;size on disk&quot; of the file increases, i.e. now it&apos;s&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt; &amp;gt; ls -lsh /fred/oz011/blah
915M -rw-r--r-- 1 user oz011 1001M Jul  1 21:42 /fred/oz011/blah
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Perhaps that&apos;s normal. I forget... :-/&lt;/p&gt;

&lt;p&gt;A third bit of info: I suspect this quota issue also causes unusually high load on the OSSs. We recently had a user in (what we now realise was) an over-quota group running 300+ I/O-intensive jobs. This caused load on the OSSs of 200+. One OSS was STONITHed because it hit a timeout, probably just from the load.&lt;/p&gt;

&lt;p&gt;cheers,&lt;br/&gt;
robin&lt;/p&gt;</comment>
                            <comment id="250430" author="pjones" created="Mon, 1 Jul 2019 17:17:34 +0000"  >&lt;p&gt;Hongchao&lt;/p&gt;

&lt;p&gt;Can you please advise?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="250435" author="pfarrell" created="Mon, 1 Jul 2019 17:43:10 +0000"  >&lt;p&gt;It would be good to check that you&apos;ve got quota &lt;b&gt;enforcement&lt;/b&gt; enabled, and not just accounting.&lt;/p&gt;

&lt;p&gt;This is described in the quota section of the Lustre operations manual, but you can check with this command on the MDS:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lctl get_param osd-*.*.quota_slave.info &lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
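&lt;p&gt;If enforcement isn&apos;t enabled, it can be switched on persistently from the MGS. A sketch, assuming your filesystem is named &apos;fsname&apos; (substitute your actual filesystem name; &apos;ug&apos; enables both user and group enforcement):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# on the MGS: enable user+group block quota enforcement on OSTs and MDTs
lctl conf_param fsname.quota.ost=ug
lctl conf_param fsname.quota.mdt=ug
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;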
&lt;p&gt;If you do not have both &apos;u&apos; and &apos;g&apos; under &apos;enabled&apos;, then quota enforcement is not enabled.&lt;/p&gt;</comment>
<comment id="250439" author="scadmin" created="Mon, 1 Jul 2019 18:38:15 +0000"  >&lt;p&gt;Ah, a beer for Patrick &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt; # cexec -p warble:2 oss:1-10 &apos;lctl get_param osd-*.*.quota_slave.info&apos; | grep enable
warble warble2: quota enabled:  g
warble warble2: quota enabled:  g
warble warble2: quota enabled:  g
oss arkle1: quota enabled:  g
oss arkle1: quota enabled:  g
oss arkle2: quota enabled:  g
oss arkle2: quota enabled:  g
oss arkle3: quota enabled:  g
oss arkle3: quota enabled:  g
oss arkle4: quota enabled:  g
oss arkle4: quota enabled:  g
oss arkle5: quota enabled:  g
oss arkle5: quota enabled:  g
oss arkle6: quota enabled:  g
oss arkle6: quota enabled:  g
oss arkle7: quota enabled:  g
oss arkle7: quota enabled:  g
oss arkle8: quota enabled:  g
oss arkle8: quota enabled:  g
oss arkle9: quota enabled:  none
oss arkle9: quota enabled:  none
oss arkle10: quota enabled:  none
oss arkle10: quota enabled:  none
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We added two more OSSs a while back, but it looks like the conf_param isn&apos;t inherited by the new OSSs.&lt;/p&gt;

&lt;p&gt;Is that expected behaviour?&lt;/p&gt;

&lt;p&gt;I re-did the conf_param on the MGS:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[warble1]root: lctl conf_param dagg.quota.ost=g
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;And now dd still writes some data, but not much, and at least there&apos;s an error coming back too:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt; &amp;gt; dd if=/tmp/urand100 of=/fred/oz011/blah bs=1M
dd: error writing &apos;/fred/oz011/blah&apos;: Disk quota exceeded
10+0 records in
9+0 records out
9437184 bytes (9.4 MB) copied, 0.404197 s, 23.3 MB/s
 &amp;gt; dd if=/tmp/urand100 of=/fred/oz011/blah2 bs=1M
dd: error writing &apos;/fred/oz011/blah2&apos;: Disk quota exceeded
9+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.199364 s, 42.1 MB/s
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;So that&apos;s probably fine.&lt;/p&gt;

&lt;p&gt;If this is all expected behaviour then please close this ticket. Thanks!&lt;/p&gt;

&lt;p&gt;cheers,&lt;br/&gt;
robin&lt;/p&gt;</comment>
                            <comment id="250441" author="pfarrell" created="Mon, 1 Jul 2019 18:40:50 +0000"  >&lt;p&gt;Yeah, this is not ideal behavior (re: inheritance with the new OSTs), but it is expected.&lt;/p&gt;

&lt;p&gt;Glad to be of assistance!&lt;/p&gt;</comment>
                            <comment id="250442" author="pfarrell" created="Mon, 1 Jul 2019 18:41:30 +0000"  >&lt;p&gt;Config issue at customer site.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00j1z:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>