<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:34:16 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-10350] ost-pools test 1n fails with &apos;failed to write to /mnt/lustre/d1n.ost-pools/file: 1&apos;</title>
                <link>https://jira.whamcloud.com/browse/LU-10350</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;ost-pools tests 1n, 11, 15, 16, 19 and 22 all fail trying to create/open or write files with the following error message:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;File too large
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;For example, from the test_log of test_1n&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== ost-pools test 1n: Pool with a 15 char pool name works well ======================================= 10:03:28 (1512554608)
CMD: trevis-8vm4 lctl pool_new lustre.testpool1234567
trevis-8vm4: Pool lustre.testpool1234567 created
CMD: trevis-8vm4 lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.testpool1234567 				2&amp;gt;/dev/null || echo foo
CMD: trevis-8vm4 lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.testpool1234567 				2&amp;gt;/dev/null || echo foo
CMD: trevis-8vm1.trevis.hpdd.intel.com lctl get_param -n lov.lustre-*.pools.testpool1234567 		2&amp;gt;/dev/null || echo foo
CMD: trevis-8vm1.trevis.hpdd.intel.com lctl get_param -n lov.lustre-*.pools.testpool1234567 		2&amp;gt;/dev/null || echo foo
CMD: trevis-8vm4 lctl pool_add lustre.testpool1234567 OST0000
trevis-8vm4: OST lustre-OST0000_UUID added to pool lustre.testpool1234567
CMD: trevis-8vm4 lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.testpool1234567 |
				sort -u | tr &apos;\n&apos; &apos; &apos; 
CMD: trevis-8vm4 lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.testpool1234567 |
				sort -u | tr &apos;\n&apos; &apos; &apos; 
CMD: trevis-8vm1.trevis.hpdd.intel.com lctl get_param -n lov.lustre-*.pools.testpool1234567 |
		sort -u | tr &apos;\n&apos; &apos; &apos; 
CMD: trevis-8vm1.trevis.hpdd.intel.com lctl get_param -n lov.lustre-*.pools.testpool1234567 |
		sort -u | tr &apos;\n&apos; &apos; &apos; 
dd: failed to open &apos;/mnt/lustre/d1n.ost-pools/file&apos;: File too large
 ost-pools test_1n: @@@@@@ FAIL: failed to write to /mnt/lustre/d1n.ost-pools/file: 1 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In the dmesg log for the MDS (vm4), we can see a failure&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[18753.542095] Lustre: DEBUG MARKER: == ost-pools test 1n: Pool with a 15 char pool name works well ======================================= 13:37:10 (1512567430)
[18753.714379] Lustre: DEBUG MARKER: lctl pool_new lustre.testpool1234567
[18758.015205] Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.testpool1234567 				2&amp;gt;/dev/null || echo foo
[18758.331296] Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.testpool1234567 				2&amp;gt;/dev/null || echo foo
[18760.686719] Lustre: DEBUG MARKER: lctl pool_add lustre.testpool1234567 OST0000
[18766.993199] Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.testpool1234567 |
				sort -u | tr &apos;\n&apos; &apos; &apos; 
[18767.303867] Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.testpool1234567 |
				sort -u | tr &apos;\n&apos; &apos; &apos; 
[18768.515291] LustreError: 3750:0:(lod_qos.c:1350:lod_alloc_specific()) can&apos;t lstripe objid [0x200029443:0xdaad:0x0]: have 1 want 7
[18768.704524] Lustre: DEBUG MARKER: /usr/sbin/lctl mark  ost-pools test_1n: @@@@@@ FAIL: failed to write to \/mnt\/lustre\/d1n.ost-pools\/file: 1 
[18768.896290] Lustre: DEBUG MARKER: ost-pools test_1n: @@@@@@ FAIL: failed to write to /mnt/lustre/d1n.ost-pools/file: 1
[18769.103049] Lustre: DEBUG MARKER: /usr/sbin/lctl dk &amp;gt; /home/autotest/autotest/logs/test_logs/2017-12-05/lustre-master-el7-x86_64--full--1_1_1__3676___6c155f47-820d-447d-893f-15b24418827f/ost-pools.test_1n.debug_log.$(hostname -s).1512567446.log;
         dmesg &amp;gt; /home/autotest/autotest/lo
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;and similar failures for the other tests. Note: there are 7 OSTs and 1 MDS for the following test suite:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/fdd54642-dae4-11e7-8027-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/fdd54642-dae4-11e7-8027-52540065bddc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These ost-pools tests started failing with the &#8216;File too large&#8217; error on September 27, 2017 with 2.10.52.113.&lt;/p&gt;

&lt;p&gt;Note: So far we are only seeing these failures during &apos;full&apos; test sessions and not in review-* test sessions.&lt;/p&gt;

&lt;p&gt;Logs for some of the other instances of this failure are at:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/da2df238-db44-11e7-9c63-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/da2df238-db44-11e7-9c63-52540065bddc&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/4fc12420-daa0-11e7-9c63-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/4fc12420-daa0-11e7-9c63-52540065bddc&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/307880b4-da7c-11e7-9c63-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/307880b4-da7c-11e7-9c63-52540065bddc&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/0e1cd21c-da73-11e7-8027-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/0e1cd21c-da73-11e7-8027-52540065bddc&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/c1f5d0c8-dadb-11e7-9c63-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/c1f5d0c8-dadb-11e7-9c63-52540065bddc&lt;/a&gt;&lt;/p&gt;</description>
                <environment></environment>
        <key id="49643">LU-10350</key>
            <summary>ost-pools test 1n fails with &apos;failed to write to /mnt/lustre/d1n.ost-pools/file: 1&apos;</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="laisiyao">Lai Siyao</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                    </labels>
                <created>Thu, 7 Dec 2017 17:15:39 +0000</created>
                <updated>Tue, 14 Jun 2022 21:22:25 +0000</updated>
                            <resolved>Mon, 14 Jun 2021 19:42:27 +0000</resolved>
                                    <version>Lustre 2.11.0</version>
                    <version>Lustre 2.12.0</version>
                    <version>Lustre 2.10.3</version>
                    <version>Lustre 2.10.4</version>
                    <version>Lustre 2.10.5</version>
                    <version>Lustre 2.10.6</version>
                    <version>Lustre 2.12.1</version>
                    <version>Lustre 2.12.6</version>
                                    <fixVersion>Lustre 2.12.7</fixVersion>
                    <fixVersion>Lustre 2.15.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>8</watches>
                                                                            <comments>
                            <comment id="215593" author="adilger" created="Thu, 7 Dec 2017 19:09:12 +0000"  >&lt;p&gt;The file create appears to be failing because a 7-stripe file was requested, but only 1 stripe could be created. We need at least 3/4 of the requested stripe count to consider the create successful. &lt;/p&gt;

&lt;p&gt;The first thing to check is whether the debug log on the MDS has enough info to see why the MDS isn&#8217;t able to create the requested stripes. Could it be some leftovers from previous tests that have exhausted inodes on the OSTs?&lt;/p&gt;

&lt;p&gt;Separately, it would be useful to make a debugging patch that enables full debugging for test_1n and prints &lt;tt&gt;lfs df&lt;/tt&gt; and &lt;tt&gt;lfs df -i&lt;/tt&gt; before the test is run, along with &lt;tt&gt;do_nodes $(comma_list $(mdts_nodes)) lctl get_param osp.&amp;#42;.prealloc_&amp;#42;_id&lt;/tt&gt; to dump the OST object preallocation state before and after the test failure.&lt;/p&gt;
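
&lt;p&gt;A minimal sketch of what such a debugging patch could add (the commands are the ones quoted above; their exact placement inside test_1n is an assumption):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# dump free space, free inodes, and OST object preallocation state
lfs df $MOUNT
lfs df -i $MOUNT
do_nodes $(comma_list $(mdts_nodes)) lctl get_param osp.*.prealloc_*_id
# ... run the test body, then repeat the same dumps on failure ...
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>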
                            <comment id="215604" author="gerrit" created="Thu, 7 Dec 2017 21:04:11 +0000"  >&lt;p&gt;James Nunez (james.a.nunez@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/30440&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/30440&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10350&quot; title=&quot;ost-pools test 1n fails with &amp;#39;failed to write to /mnt/lustre/d1n.ost-pools/file: 1&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10350&quot;&gt;&lt;del&gt;LU-10350&lt;/del&gt;&lt;/a&gt; tests: get inode count for ost-pools test 1n&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 2a01764becd66379e735a0db308bda2bac84b951&lt;/p&gt;</comment>
                            <comment id="215878" author="jamesanunez" created="Sat, 9 Dec 2017 15:11:46 +0000"  >&lt;p&gt;ost-pools failed with the debug patch; &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/4df8a486-dc82-11e7-9840-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/4df8a486-dc82-11e7-9840-52540065bddc&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;For test_1n, we print the free space and free inodes at the beginning of the test and on error. There&apos;s enough of both. prealloc_last_id and prealloc_next_id are also printed. Here&apos;s what we see in the client test_log:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== ost-pools test 1n: Pool with a 15 char pool name works well ======================================= 17:13:48 (1512753228)
CMD: trevis-33vm4 /usr/sbin/lctl get_param -n debug
CMD: trevis-33vm1.trevis.hpdd.intel.com,trevis-33vm2,trevis-33vm3,trevis-33vm4 /usr/sbin/lctl set_param debug_mb=150
debug_mb=150
debug_mb=150
debug_mb=150
debug_mb=150
CMD: trevis-33vm1.trevis.hpdd.intel.com,trevis-33vm2,trevis-33vm3,trevis-33vm4 /usr/sbin/lctl set_param debug=-1;
debug=-1
debug=-1
debug=-1
debug=-1
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1165900       10980     1051724   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID     13745592       52880    12957832   0% /mnt/lustre[OST:0]
lustre-OST0001_UUID     13745592       44108    12966604   0% /mnt/lustre[OST:1]
lustre-OST0002_UUID     13745592       48732    12961980   0% /mnt/lustre[OST:2]
lustre-OST0003_UUID     13745592       46088    12964624   0% /mnt/lustre[OST:3]
lustre-OST0004_UUID     13745592       63636    12947076   0% /mnt/lustre[OST:4]
lustre-OST0005_UUID     13745592       45744    12964968   0% /mnt/lustre[OST:5]
lustre-OST0006_UUID     13745592       46824    12963888   0% /mnt/lustre[OST:6]

filesystem_summary:     96219144      348012    90726972   0% /mnt/lustre

UUID                      Inodes       IUsed       IFree IUse% Mounted on
lustre-MDT0000_UUID       838864         551      838313   0% /mnt/lustre[MDT:0]
lustre-OST0000_UUID       211200         293      210907   0% /mnt/lustre[OST:0]
lustre-OST0001_UUID       211200         291      210909   0% /mnt/lustre[OST:1]
lustre-OST0002_UUID       211200         285      210915   0% /mnt/lustre[OST:2]
lustre-OST0003_UUID       211200         284      210916   0% /mnt/lustre[OST:3]
lustre-OST0004_UUID       211200         294      210906   0% /mnt/lustre[OST:4]
lustre-OST0005_UUID       211200         291      210909   0% /mnt/lustre[OST:5]
lustre-OST0006_UUID       211200         292      210908   0% /mnt/lustre[OST:6]

filesystem_summary:       838864         551      838313   0% /mnt/lustre

CMD: trevis-33vm4 lctl get_param osp.*.prealloc_*_id
osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=58697
osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=58666
osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=24385
osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=24354
osp.lustre-OST0002-osc-MDT0000.prealloc_last_id=24353
osp.lustre-OST0002-osc-MDT0000.prealloc_next_id=24322
osp.lustre-OST0003-osc-MDT0000.prealloc_last_id=24321
osp.lustre-OST0003-osc-MDT0000.prealloc_next_id=24290
osp.lustre-OST0004-osc-MDT0000.prealloc_last_id=24321
osp.lustre-OST0004-osc-MDT0000.prealloc_next_id=24290
osp.lustre-OST0005-osc-MDT0000.prealloc_last_id=24321
osp.lustre-OST0005-osc-MDT0000.prealloc_next_id=24290
osp.lustre-OST0006-osc-MDT0000.prealloc_last_id=24289
osp.lustre-OST0006-osc-MDT0000.prealloc_next_id=24258
CMD: trevis-33vm4 lctl pool_new lustre.testpool1234567
trevis-33vm4: Pool lustre.testpool1234567 created
CMD: trevis-33vm4 lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.testpool1234567 				2&amp;gt;/dev/null || echo foo
CMD: trevis-33vm4 lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.testpool1234567 				2&amp;gt;/dev/null || echo foo
CMD: trevis-33vm1.trevis.hpdd.intel.com lctl get_param -n lov.lustre-*.pools.testpool1234567 		2&amp;gt;/dev/null || echo foo
CMD: trevis-33vm1.trevis.hpdd.intel.com lctl get_param -n lov.lustre-*.pools.testpool1234567 		2&amp;gt;/dev/null || echo foo
CMD: trevis-33vm4 lctl pool_add lustre.testpool1234567 OST0000
trevis-33vm4: OST lustre-OST0000_UUID added to pool lustre.testpool1234567
CMD: trevis-33vm4 lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.testpool1234567 |
				sort -u | tr &apos;\n&apos; &apos; &apos; 
CMD: trevis-33vm4 lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.testpool1234567 |
				sort -u | tr &apos;\n&apos; &apos; &apos; 
CMD: trevis-33vm1.trevis.hpdd.intel.com lctl get_param -n lov.lustre-*.pools.testpool1234567 |
		sort -u | tr &apos;\n&apos; &apos; &apos; 
CMD: trevis-33vm1.trevis.hpdd.intel.com lctl get_param -n lov.lustre-*.pools.testpool1234567 |
		sort -u | tr &apos;\n&apos; &apos; &apos; 
dd: failed to open &apos;/mnt/lustre/d1n.ost-pools/file&apos;: File too large
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1165900       10984     1051720   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID     13745592       52880    12957832   0% /mnt/lustre[OST:0]
lustre-OST0001_UUID     13745592       44108    12966604   0% /mnt/lustre[OST:1]
lustre-OST0002_UUID     13745592       48732    12961980   0% /mnt/lustre[OST:2]
lustre-OST0003_UUID     13745592       46088    12964624   0% /mnt/lustre[OST:3]
lustre-OST0004_UUID     13745592       63636    12947076   0% /mnt/lustre[OST:4]
lustre-OST0005_UUID     13745592       45744    12964968   0% /mnt/lustre[OST:5]
lustre-OST0006_UUID     13745592       46824    12963888   0% /mnt/lustre[OST:6]

filesystem_summary:     96219144      348012    90726972   0% /mnt/lustre

UUID                      Inodes       IUsed       IFree IUse% Mounted on
lustre-MDT0000_UUID       838864         552      838312   0% /mnt/lustre[MDT:0]
lustre-OST0000_UUID       211200         293      210907   0% /mnt/lustre[OST:0]
lustre-OST0001_UUID       211200         291      210909   0% /mnt/lustre[OST:1]
lustre-OST0002_UUID       211200         285      210915   0% /mnt/lustre[OST:2]
lustre-OST0003_UUID       211200         284      210916   0% /mnt/lustre[OST:3]
lustre-OST0004_UUID       211200         294      210906   0% /mnt/lustre[OST:4]
lustre-OST0005_UUID       211200         291      210909   0% /mnt/lustre[OST:5]
lustre-OST0006_UUID       211200         292      210908   0% /mnt/lustre[OST:6]

filesystem_summary:       838864         552      838312   0% /mnt/lustre

CMD: trevis-33vm4 lctl get_param osp.*.prealloc_*_id
osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=58697
osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=58666
osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=24385
osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=24354
osp.lustre-OST0002-osc-MDT0000.prealloc_last_id=24353
osp.lustre-OST0002-osc-MDT0000.prealloc_next_id=24322
osp.lustre-OST0003-osc-MDT0000.prealloc_last_id=24321
osp.lustre-OST0003-osc-MDT0000.prealloc_next_id=24290
osp.lustre-OST0004-osc-MDT0000.prealloc_last_id=24321
osp.lustre-OST0004-osc-MDT0000.prealloc_next_id=24290
osp.lustre-OST0005-osc-MDT0000.prealloc_last_id=24321
osp.lustre-OST0005-osc-MDT0000.prealloc_next_id=24290
osp.lustre-OST0006-osc-MDT0000.prealloc_last_id=24289
osp.lustre-OST0006-osc-MDT0000.prealloc_next_id=24258
CMD: trevis-33vm1.trevis.hpdd.intel.com,trevis-33vm2,trevis-33vm3,trevis-33vm4 /usr/sbin/lctl set_param debug_mb=4
debug_mb=4
debug_mb=4
debug_mb=4
debug_mb=4
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="215952" author="adilger" created="Mon, 11 Dec 2017 18:13:27 +0000"  >&lt;p&gt;Looking at the most recent logs, I&apos;m wondering if there is some problem adding the OST(s) to the pool, which causes an error creating a file in a pool with no OSTs? I&apos;ve added some more debugging to James&apos; patch.&lt;/p&gt;

&lt;p&gt;The debug logs have the &lt;tt&gt;-EFBIG = -27&lt;/tt&gt; error:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_logs/4e4d4982-dc82-11e7-9840-52540065bddc/show_text&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_logs/4e4d4982-dc82-11e7-9840-52540065bddc/show_text&lt;/a&gt;&lt;/p&gt;
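&lt;p&gt;(The huge rc value in the trace is just -27 printed as an unsigned 64-bit integer: 2^64 - 27 = 18446744073709551589 = 0xffffffffffffffe5.)&lt;/p&gt;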
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;0000004:00000001:0.0:1512753249.949213:0:30195:0:(lod_object.c:4453:lod_declare_striped_create()) Process entered
00020000:00000001:0.0:1512753249.949221:0:30195:0:(lod_qos.c:2253:lod_prepare_create()) Process entered
00020000:00001000:0.0:1512753249.949225:0:30195:0:(lod_qos.c:2298:lod_prepare_create()) 0 [0, 0)
00020000:00000001:0.0:1512753249.949226:0:30195:0:(lod_qos.c:2065:lod_qos_prep_create()) Process entered
00020000:00000001:0.0:1512753249.949227:0:30195:0:(lod_qos.c:270:lod_qos_statfs_update()) Process entered
00020000:00000001:0.0:1512753249.949229:0:30195:0:(lod_qos.c:195:lod_statfs_and_check()) Process entered
00000004:00001000:0.0:1512753249.949232:0:30195:0:(osp_dev.c:774:osp_statfs()) lustre-OST0000-osc-MDT0000: 3436398 blocks, 3423178 free, 3239390 avail, 211200 files, 210907 free files
00000004:00001000:0.0:1512753249.949237:0:30195:0:(osp_dev.c:774:osp_statfs()) lustre-OST0001-osc-MDT0000: 3436398 blocks, 3425371 free, 3241583 avail, 211200 files, 210909 free files
00000004:00001000:0.0:1512753249.949242:0:30195:0:(osp_dev.c:774:osp_statfs()) lustre-OST0002-osc-MDT0000: 3436398 blocks, 3424215 free, 3240427 avail, 211200 files, 210915 free files
00000004:00001000:0.0:1512753249.949245:0:30195:0:(osp_dev.c:774:osp_statfs()) lustre-OST0003-osc-MDT0000: 3436398 blocks, 3424876 free, 3241088 avail, 211200 files, 210916 free files
00000004:00001000:0.0:1512753249.949249:0:30195:0:(osp_dev.c:774:osp_statfs()) lustre-OST0004-osc-MDT0000: 3436398 blocks, 3420489 free, 3236701 avail, 211200 files, 210906 free files
00000004:00001000:0.0:1512753249.949252:0:30195:0:(osp_dev.c:774:osp_statfs()) lustre-OST0005-osc-MDT0000: 3436398 blocks, 3424962 free, 3241174 avail, 211200 files, 210909 free files
00000004:00001000:0.0:1512753249.949256:0:30195:0:(osp_dev.c:774:osp_statfs()) lustre-OST0006-osc-MDT0000: 3436398 blocks, 3424692 free, 3240904 avail, 211200 files, 210908 free files
00020000:00000001:0.0:1512753249.949258:0:30195:0:(lod_qos.c:296:lod_qos_statfs_update()) Process leaving
00020000:00001000:0.0:1512753249.949260:0:30195:0:(lod_qos.c:2101:lod_qos_prep_create()) tgt_count 7 stripe_count 7
00020000:00000001:0.0:1512753249.949260:0:30195:0:(lod_qos.c:1237:lod_alloc_specific()) Process entered
:
:
00020000:00020000:0.0:1512753249.949299:0:30195:0:(lod_qos.c:1350:lod_alloc_specific()) can&apos;t lstripe objid [0x2000599b1:0x2:0x0]: have 1 want 7
00020000:00000001:0.0:1512753249.953090:0:30195:0:(lod_qos.c:1359:lod_alloc_specific()) Process leaving (rc=18446744073709551589 : -27 : ffffffffffffffe5)
00020000:00000001:0.0:1512753249.953100:0:30195:0:(lod_qos.c:2157:lod_qos_prep_create()) Process leaving (rc=18446744073709551589 : -27 : ffffffffffffffe5)
00020000:00000001:0.0:1512753249.953101:0:30195:0:(lod_qos.c:2306:lod_prepare_create()) Process leaving (rc=18446744073709551589 : -27 : ffffffffffffffe5)
00000004:00000001:0.0:1512753249.953105:0:30195:0:(lod_object.c:4462:lod_declare_striped_create()) Process leaving via out (rc=18446744073709551589 : -27 : 0xffffffffffffffe5)
00000004:00000001:0.0:1512753249.953111:0:30195:0:(lod_object.c:4603:lod_declare_create()) Process leaving (rc=18446744073709551589 : -27 : ffffffffffffffe5)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="215953" author="adilger" created="Mon, 11 Dec 2017 18:15:29 +0000"  >&lt;p&gt;It looks like the problem is that there is only a single OST added to the pool:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;CMD: trevis-35vm8 lctl pool_add lustre.testpool1234567 OST0000
trevis-35vm8: OST lustre-OST0000_UUID added to pool lustre.testpool1234567
Pools from lustre:
lustre.testpool1234567
Pool: lustre.testpool1234567
lustre-OST0000_UUID
dd: failed to open &apos;/mnt/lustre/d1n.ost-pools/file&apos;: File too large
# lfs df -p
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1165900       10752     1051952   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID     13745592       43056    12967656   0% /mnt/lustre[OST:0]

filesystem_summary:     13745592       43056    12967656   0% /mnt/lustre
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="215955" author="adilger" created="Mon, 11 Dec 2017 18:18:14 +0000"  >&lt;p&gt;More correctly, the problem appears to be that the filesystem default stripe count is 7, but there is only a single OST in the pool, which causes the test failure.  So it doesn&apos;t look like the problem is in &lt;tt&gt;ost-pools.sh&lt;/tt&gt; itself, but some previous test is changing the default stripe count.&lt;/p&gt;</comment>
                            <comment id="215995" author="jamesanunez" created="Mon, 11 Dec 2017 23:01:35 +0000"  >&lt;p&gt;I ran ost-pools on my test system and it completed with no failures. I then ran sanity-pfl and then ost-pools and ost-pools test 1n fails with &apos;File too large&apos; error.&lt;/p&gt;

&lt;p&gt;If you run sanity-pfl test 10 and then run ost-pools test 1n, you can trigger the error. On my system, before running sanity-pfl, the layout of the mount point looks like:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@trevis-58vm8 tests]# lfs getstripe /lustre/scratch/
/lustre/scratch/
stripe_count:  1 stripe_size:   1048576 pattern:        stripe_offset: -1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;After running sanity-pfl test 10, we see that the pattern is now raid0&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# lfs getstripe /lustre/scratch/
/lustre/scratch/
stripe_count:  1 stripe_size:   1048576 pattern:       raid0 stripe_offset: 0
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
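
&lt;p&gt;A minimal reproduction sketch based on the above (the auster options mirror how I run single tests locally; treat the exact invocation as an assumption):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# sanity-pfl test 10 leaves pattern=raid0 on the mount point default layout
NAME=ncli ./auster -k -v sanity-pfl --only 10
lfs getstripe -d $MOUNT    # pattern now shows raid0
# ost-pools test 1n then fails with &apos;File too large&apos;
NAME=ncli ./auster -k -v ost-pools --only 1n
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;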
</comment>
                            <comment id="216013" author="adilger" created="Tue, 12 Dec 2017 03:51:48 +0000"  >&lt;p&gt;It would be useful to add a call to &lt;tt&gt;lfs getstripe -d $MOUNT&lt;/tt&gt; and &lt;tt&gt;lfs getstripe -d $DIR&lt;/tt&gt; to see what the default striping is at the end of sanity-pfl. It doesn&#8217;t make sense that it would be 1, but 7.  Maybe that is a difference between your local test configuration and the auto test full config?&lt;/p&gt;</comment>
                            <comment id="216021" author="adilger" created="Tue, 12 Dec 2017 05:48:57 +0000"  >&lt;p&gt;It does indeed seem that the addition of &lt;tt&gt;sanity-pfl&lt;/tt&gt; to the full test list is the source of this problem - it was added to the autotest repo on Sept. 25th, just before the problems were first seen on Sept. 27th.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;commit 4213c2cc5caad5abc9d4ac328f57df2836cdc605
Author:     colmstea &amp;lt;charlie.olmstead@intel.com&amp;gt;
AuthorDate: Mon Sep 25 09:51:54 2017 -0600
Commit:     Charlie Olmstead &amp;lt;charlie.olmstead@intel.com&amp;gt;
CommitDate: Mon Sep 25 15:53:54 2017 +0000

    ATM-675 - add sanity-pfl to autotest full test group
    
    added sanity-pfl to the full test group
    
    Change-Id: I50c0d197301c77687d9df7b20117990ac20a6394
    Reviewed-on: https://review.whamcloud.com/29192
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="216065" author="jamesanunez" created="Tue, 12 Dec 2017 16:29:16 +0000"  >&lt;p&gt;When I create a file system, the mount point pattern is blank and I, as root, can&#8217;t set the pattern on the mount point to raid0 or mdt:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# lfs getstripe /lustre/scratch/
/lustre/scratch/
stripe_count:  1 stripe_size:   1048576 pattern:        stripe_offset: -1

# lfs setstripe -L raid0 /lustre/scratch/
# lfs getstripe /lustre/scratch/
/lustre/scratch/
stripe_count:  1 stripe_size:   1048576 pattern:        stripe_offset: -1

# lfs setstripe -L mdt /lustre/scratch/
# lfs getstripe /lustre/scratch/
/lustre/scratch/
stripe_count:  1 stripe_size:   1048576 pattern:        stripe_offset: -1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Yet, sanity-pfl test_10 does change the pattern on the mount point to the default &#8216;raid0&#8217; (and this answers Andreas&#8217; question about what the default striping is after sanity-pfl):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# lfs getstripe /lustre/scratch/
/lustre/scratch/
stripe_count:  1 stripe_size:   1048576 pattern:        stripe_offset: -1

# NAME=ncli ./auster -k -v sanity-pfl --only 10
Started at Tue Dec 12 16:15:25 UTC 2017
&#8230;
PASS 10 (3s)
== sanity-pfl test complete, duration 14 sec ========================================================= 16:15:46 (1513095346)
sanity-pfl returned 0
Finished at Tue Dec 12 16:15:46 UTC 2017 in 21s
./auster: completed with rc 0

# lfs getstripe /lustre/scratch/
/lustre/scratch/
stripe_count:  1 stripe_size:   1048576 pattern:       raid0 stripe_offset: 0
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;and I can set the mount point pattern back to &#8216;blank&#8217;&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# lfs getstripe /lustre/scratch/
/lustre/scratch/
stripe_count:  1 stripe_size:   1048576 pattern:       raid0 stripe_offset: 0

# lfs setstripe -d /lustre/scratch/
# lfs getstripe /lustre/scratch/
/lustre/scratch/
stripe_count:  1 stripe_size:   1048576 pattern:        stripe_offset: -1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;sanity-pfl test 10 gets the layout of the mount point using get_layout_param()/parse_layout_param(), but these functions don&#8217;t take the directory&#8217;s pattern into account, meaning they don&#8217;t read the file/dir pattern (the --layout parameter). If the pattern isn&#8217;t specified, it defaults to the default pattern, which is raid0.&lt;/p&gt;
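
&lt;p&gt;A rough sketch of pattern-aware parsing (hypothetical shell, not the actual patch; the main subtlety is that the pattern field can be blank, as shown above):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# layout line looks like:
#   stripe_count:  1 stripe_size:   1048576 pattern:       raid0 stripe_offset: 0
param=$(lfs getstripe -d $MOUNT | grep stripe_count)
count=$(echo $param | awk &apos;{print $2}&apos;)
size=$(echo $param | awk &apos;{print $4}&apos;)
# the pattern field may be empty, so anchor the match on the following key
pattern=$(echo $param | sed -n &apos;s/.*pattern: *\([^ ]*\) *stripe_offset.*/\1/p&apos;)
opts=&quot;-c $count -S $size&quot;
if [ -n &quot;$pattern&quot; ]; then
        opts=&quot;$opts -L $pattern&quot;
fi
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;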

&lt;p&gt;We really want mount point pattern to remain the same before and after sanity-pfl. Do we want to allow the user to set the pattern on the mount point?&lt;/p&gt;</comment>
                            <comment id="216087" author="jgmitter" created="Tue, 12 Dec 2017 18:47:38 +0000"  >&lt;p&gt;Hi Lai,&lt;/p&gt;

&lt;p&gt;Can you please look into this one?&lt;/p&gt;

&lt;p&gt;Thanks.&lt;br/&gt;
Joe&lt;/p&gt;</comment>
                            <comment id="216117" author="adilger" created="Tue, 12 Dec 2017 22:27:18 +0000"  >&lt;p&gt;It isn&apos;t clear if we want to allow &lt;em&gt;only&lt;/em&gt; the pattern to be set on the mountpoint, since a raw &quot;&lt;tt&gt;mdt&lt;/tt&gt;&quot; layout on the root is mostly useless unless the filesystem has only MDTs, no OSTs (we can cross that bridge when we get to it, there will be other fixes needed as well).  Instead, it makes sense to set a PFL layout with &lt;tt&gt;mdt&lt;/tt&gt; as the first component.&lt;/p&gt;

&lt;p&gt;What is strange/broken in &lt;tt&gt;ost-pools test_1n&lt;/tt&gt; is that the test is using &lt;tt&gt;create_dir&lt;/tt&gt; to set the stripe count to -1 (as it always has) in a pool with only 1 OST (as it always has been), but this is now failing when trying to create 7 stripes on the file. It should limit the stripe count to the number of OSTs in the pool.&lt;/p&gt;
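
&lt;p&gt;A sketch of the expected behaviour (hypothetical commands; the pool and path names are copied from the logs above):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# request &quot;all&quot; stripes from a pool that contains a single OST
lfs setstripe -c -1 -p testpool1234567 /mnt/lustre/d1n.ost-pools
dd if=/dev/zero of=/mnt/lustre/d1n.ost-pools/file bs=1M count=1
# the create should be clamped to 1 stripe, not fail with EFBIG
lfs getstripe -c /mnt/lustre/d1n.ost-pools/file
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>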
                            <comment id="217012" author="gerrit" created="Thu, 21 Dec 2017 21:33:01 +0000"  >&lt;p&gt;James Nunez (james.a.nunez@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/30636&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/30636&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10350&quot; title=&quot;ost-pools test 1n fails with &amp;#39;failed to write to /mnt/lustre/d1n.ost-pools/file: 1&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10350&quot;&gt;&lt;del&gt;LU-10350&lt;/del&gt;&lt;/a&gt; tests: make parsing routines pattern aware&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 36e0f170359a231b8b75e1c49deb8595f61ddb84&lt;/p&gt;</comment>
                            <comment id="217013" author="jamesanunez" created="Thu, 21 Dec 2017 21:36:38 +0000"  >&lt;p&gt;The patch at &lt;a href=&quot;https://review.whamcloud.com/30636&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/30636&lt;/a&gt; only modifies the parsing routines that sanity-pfl test 10 use. When sanity-pfl test_10 is run, this patch should return all original parameters to the mount point and, thus, stop several test failures including most (all?) recent/new ost-pools.sh test failures. &lt;/p&gt;

&lt;p&gt;This patch does not address the OST pools issues that Andreas has commented on in this ticket.&lt;/p&gt;</comment>
                            <comment id="218189" author="gerrit" created="Sun, 14 Jan 2018 02:36:10 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/30636/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/30636/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10350&quot; title=&quot;ost-pools test 1n fails with &amp;#39;failed to write to /mnt/lustre/d1n.ost-pools/file: 1&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10350&quot;&gt;&lt;del&gt;LU-10350&lt;/del&gt;&lt;/a&gt; tests: make parsing routines pattern aware&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 503a78bde8a59e176356a02b2d078332e3201575&lt;/p&gt;</comment>
                            <comment id="218212" author="pjones" created="Sun, 14 Jan 2018 15:37:35 +0000"  >&lt;p&gt;Landed for 2.11&lt;/p&gt;</comment>
                            <comment id="221147" author="jamesanunez" created="Thu, 15 Feb 2018 23:51:41 +0000"  >&lt;p&gt;Reopening this issue because we are seeing it or something closely related with recent full testing. One example of a recent failure is at:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/b29773fa-10e3-11e8-bd00-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/b29773fa-10e3-11e8-bd00-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="221340" author="sarah" created="Wed, 21 Feb 2018 00:09:08 +0000"  >&lt;p&gt;+1 on master, tag-2.10.58 &lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/8d2359e6-1132-11e8-a6ad-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/8d2359e6-1132-11e8-a6ad-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="223326" author="mdiep" created="Mon, 12 Mar 2018 14:47:48 +0000"  >&lt;p&gt;+1 on b2_10&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/7d4a2422-23da-11e8-8d2f-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/7d4a2422-23da-11e8-8d2f-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="226247" author="gerrit" created="Wed, 18 Apr 2018 14:57:30 +0000"  >&lt;p&gt;James Nunez (james.a.nunez@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/32048&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/32048&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10350&quot; title=&quot;ost-pools test 1n fails with &amp;#39;failed to write to /mnt/lustre/d1n.ost-pools/file: 1&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10350&quot;&gt;&lt;del&gt;LU-10350&lt;/del&gt;&lt;/a&gt; tests: make parsing routines pattern aware&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_10&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 94a3d12c3ce299a519809ee5c4f36e941c202fa2&lt;/p&gt;</comment>
                            <comment id="227254" author="gerrit" created="Thu, 3 May 2018 20:00:24 +0000"  >&lt;p&gt;John L. Hammond (john.hammond@intel.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/32048/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/32048/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10350&quot; title=&quot;ost-pools test 1n fails with &amp;#39;failed to write to /mnt/lustre/d1n.ost-pools/file: 1&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10350&quot;&gt;&lt;del&gt;LU-10350&lt;/del&gt;&lt;/a&gt; tests: make parsing routines pattern aware&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_10&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 563b643c089cc651d12e82e4af84e1ff8c643b6b&lt;/p&gt;</comment>
                            <comment id="228151" author="sarah" created="Fri, 18 May 2018 16:37:00 +0000"  >&lt;p&gt;Still hit this on 2.10.4 EL7 server with EL6.9 client&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/010bc5d2-599a-11e8-b9d3-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/010bc5d2-599a-11e8-b9d3-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="231535" author="sarah" created="Mon, 6 Aug 2018 16:49:36 +0000"  >&lt;p&gt;hit this again on 2.10.5 ldiskfs DNE&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/e80a3ce8-994b-11e8-b0aa-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/e80a3ce8-994b-11e8-b0aa-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="238455" author="jamesanunez" created="Wed, 12 Dec 2018 17:17:27 +0000"  >&lt;p&gt;We&apos;re seeing parallel-scale-nfsv3 and parallel-scale-nfsv4 test_compilebench fail with &#8216;IOError: &lt;span class=&quot;error&quot;&gt;&amp;#91;Errno 27&amp;#93;&lt;/span&gt; File too large&#8217; and &lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[102528.920205] LustreError: 26259:0:(lod_qos.c:1438:lod_alloc_specific()) can&apos;t lstripe objid [0x200022ac9:0x8e8b:0x0]: have 7 want 8
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;in the MDS dmesg. It looks like this is the same issue as reported here.&lt;/p&gt;

&lt;p&gt;Logs are at (all use zfs):&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/2f3b07b8-fd9d-11e8-b837-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/2f3b07b8-fd9d-11e8-b837-52540065bddc&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/2252d69e-f752-11e8-b67f-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/2252d69e-f752-11e8-b67f-52540065bddc&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/776270a4-f518-11e8-86c0-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/776270a4-f518-11e8-86c0-52540065bddc&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/77290d46-f518-11e8-86c0-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/77290d46-f518-11e8-86c0-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="303127" author="gerrit" created="Mon, 31 May 2021 22:44:50 +0000"  >&lt;p&gt;Bobi Jam (bobijam@hotmail.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/43882&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/43882&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10350&quot; title=&quot;ost-pools test 1n fails with &amp;#39;failed to write to /mnt/lustre/d1n.ost-pools/file: 1&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10350&quot;&gt;&lt;del&gt;LU-10350&lt;/del&gt;&lt;/a&gt; lod: adjust stripe count to available ost count&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 725e920db06d8f61a3a107231539f44fca8638e4&lt;/p&gt;</comment>
                            <comment id="304194" author="gerrit" created="Thu, 10 Jun 2021 23:29:48 +0000"  >&lt;p&gt;Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/43976&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/43976&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10350&quot; title=&quot;ost-pools test 1n fails with &amp;#39;failed to write to /mnt/lustre/d1n.ost-pools/file: 1&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10350&quot;&gt;&lt;del&gt;LU-10350&lt;/del&gt;&lt;/a&gt; lod: adjust stripe count to available ost count&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 1a8f62ed7fe82fa7b2f9a76b9bc9f7a7f621d2ef&lt;/p&gt;</comment>
                            <comment id="304238" author="gerrit" created="Fri, 11 Jun 2021 09:04:33 +0000"  >&lt;p&gt;Andreas Dilger (adilger@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/43976/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/43976/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10350&quot; title=&quot;ost-pools test 1n fails with &amp;#39;failed to write to /mnt/lustre/d1n.ost-pools/file: 1&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10350&quot;&gt;&lt;del&gt;LU-10350&lt;/del&gt;&lt;/a&gt; lod: adjust stripe count to available ost count&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 670d78952901183012ae08f2b5e9374d6e293bcf&lt;/p&gt;</comment>
                            <comment id="304442" author="gerrit" created="Mon, 14 Jun 2021 16:43:20 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/43882/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/43882/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10350&quot; title=&quot;ost-pools test 1n fails with &amp;#39;failed to write to /mnt/lustre/d1n.ost-pools/file: 1&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10350&quot;&gt;&lt;del&gt;LU-10350&lt;/del&gt;&lt;/a&gt; lod: adjust stripe count to available ost count&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: f430ec079bf882744729d7aabc2021dfd26aba0c&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="45147">LU-9277</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="49763">LU-10396</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="49654">LU-10353</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="50882">LU-10689</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="37539">LU-8264</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="16283">LU-2113</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzzoy7:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>