<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:08:59 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-638] conf-sanity test_55: @@@@@@ FAIL: client start failed</title>
                <link>https://jira.whamcloud.com/browse/LU-638</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;On the client we get:&lt;/p&gt;

&lt;p&gt;Writing CONFIGS/mountdata&lt;br/&gt;
start mds service on barry-mds1&lt;br/&gt;
Starting mds1: -o user_xattr,acl  /dev/md5 /tmp/mds1&lt;br/&gt;
barry-mds1: mount.lustre: mount /dev/md5 at /tmp/mds1 failed: Invalid argument&lt;br/&gt;
barry-mds1: This may have multiple causes.&lt;br/&gt;
barry-mds1: Are the mount options correct?&lt;br/&gt;
barry-mds1: Check the syslog for more info.&lt;br/&gt;
mount -t lustre  /dev/md5 /tmp/mds1&lt;br/&gt;
Start of /dev/md5 on mds1 failed 22&lt;br/&gt;
start ost1 service on barry-oss1&lt;br/&gt;
Starting ost1:   /dev/mpath/barry1a-l0 /tmp/ost1&lt;/p&gt;

&lt;p&gt;Client dmesgLustre: DEBUG MARKER: == conf-sanity test 56: check big indexes ============================================================ 09:59:58 (1314280798)&lt;br/&gt;
Lustre: 30459:0:(sec.c:1474:sptlrpc_import_sec_adapt()) import MGC10.37.248.61@o2ib1-&amp;gt;MGC10.37.248.61@o2ib1_0 netid 50001: select flavor null&lt;br/&gt;
LustreError: 152-6: Ignoring deprecated mount option &apos;acl&apos;.&lt;br/&gt;
Lustre: MGC10.37.248.61@o2ib1: Reactivating import&lt;br/&gt;
Lustre: 30459:0:(sec.c:1474:sptlrpc_import_sec_adapt()) import lustre-MDT0000-mdc-ffff81017a592c00-&amp;gt;10.37.248.61@o2ib1 netid 50001: select flavor null&lt;br/&gt;
Lustre: 25819:0:(client.c:1778:ptlrpc_expire_one_request()) @@@ Request x1378123303092234 sent from lustre-MDT0000-mdc-ffff81017a592c00 to NID 10.37.248.61@o2ib1 has timed out for slow repl&lt;br/&gt;
y: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1314280862&amp;#93;&lt;/span&gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;real_sent 1314280862&amp;#93;&lt;/span&gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;current 1314280867&amp;#93;&lt;/span&gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;deadline 5s&amp;#93;&lt;/span&gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;delay 0s&amp;#93;&lt;/span&gt;  req@ffff81017ed84c00 x1378123303092234/t0(0) o-1-&amp;gt;lustre-MDT0000_UUID@10.37.248.61@o2ib1:12/10 lens 368/512 e 0 to 1 dl 1314280867 ref 1 fl Rpc:XN/ffffffff/ffffffff rc 0/-1&lt;br/&gt;
Lustre: 25820:0:(import.c:526:import_select_connection()) lustre-MDT0000-mdc-ffff81017a592c00: tried all connections, increasing latency to 5s&lt;br/&gt;
Lustre: 25819:0:(client.c:1778:ptlrpc_expire_one_request()) @@@ Request x1378123303092239 sent from lustre-MDT0000-mdc-ffff81017a592c00 to NID 10.37.248.61@o2ib1 has timed out for slow reply: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1314280872&amp;#93;&lt;/span&gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;real_sent 1314280872&amp;#93;&lt;/span&gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;current 1314280882&amp;#93;&lt;/span&gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;deadline 10s&amp;#93;&lt;/span&gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;delay 0s&amp;#93;&lt;/span&gt;  req@ffff8101747ca400 x1378123303092239/t0(0) o-1-&amp;gt;lustre-MDT0000_UUID@10.37.248.61@o2ib1:12/10 lens 368/512 e 0 to 1 dl 1314280882 ref 1 fl Rpc:XN/ffffffff/ffffffff rc 0/-1&lt;/p&gt;

&lt;p&gt;MDS dmesg&lt;/p&gt;

&lt;p&gt;Lustre: DEBUG MARKER: == conf-sanity test 56: check big indexes ============================================================ 09:59:58 (1314280798)&lt;br/&gt;
LDISKFS-fs (md5): warning: maximal mount count reached, running e2fsck is recommended&lt;br/&gt;
LDISKFS-fs (md5): mounted filesystem with ordered data mode&lt;br/&gt;
JBD: barrier-based sync failed on md5-8 - disabling barriers&lt;br/&gt;
LDISKFS-fs (md5): mounted filesystem with ordered data mode&lt;br/&gt;
JBD: barrier-based sync failed on md5-8 - disabling barriers&lt;br/&gt;
LDISKFS-fs (md5): mounted filesystem with ordered data mode&lt;br/&gt;
Lustre: MGS: Regenerating lustre-MDTffff log by user request.&lt;br/&gt;
Lustre: Skipped 30 previous similar messages&lt;br/&gt;
Lustre: Setting parameter lustre-MDT0001-mdtlov.lov.stripesize in log lustre-MDT0001&lt;br/&gt;
Lustre: Skipped 4 previous similar messages&lt;br/&gt;
JBD: barrier-based sync failed on md5-8 - disabling barriers&lt;br/&gt;
Lustre: Enabling ACL&lt;br/&gt;
Lustre: Enabling user_xattr&lt;br/&gt;
LustreError: 22858:0:(mdt_handler.c:4504:mdt_init0()) CMD Operation not allowed in IOP mode&lt;br/&gt;
LustreError: 22858:0:(obd_config.c:522:class_setup()) setup lustre-MDT0001 failed (-22)&lt;br/&gt;
LustreError: 22858:0:(obd_config.c:1361:class_config_llog_handler()) Err -22 on cfg command:&lt;br/&gt;
Lustre:    cmd=cf003 0:lustre-MDT0001  1:lustre-MDT0001_UUID  2:1  3:lustre-MDT0001-mdtlov  4:f  &lt;br/&gt;
LustreError: 15b-f: MGC10.37.248.61@o2ib1: The configuration from log &apos;lustre-MDT0001&apos;failed from the MGS (-22).  Make sure this client and the MGS are running compatible versions of Lustre.&lt;br/&gt;
LustreError: 15c-8: MGC10.37.248.61@o2ib1: The configuration from log &apos;lustre-MDT0001&apos; failed (-22). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.&lt;br/&gt;
LustreError: 22820:0:(obd_mount.c:1192:server_start_targets()) failed to start server lustre-MDT0001: -22&lt;br/&gt;
LustreError: 22820:0:(obd_mount.c:1719:server_fill_super()) Unable to start targets: -22&lt;br/&gt;
LustreError: 22820:0:(obd_config.c:567:class_cleanup()) Device 3 not setup&lt;br/&gt;
Lustre: 22820:0:(obd_mount.c:1540:server_put_super()) Cleaning orphaned obd lustre-MDT0001-mdtlov&lt;br/&gt;
Lustre: server umount lustre-MDT0001 complete&lt;br/&gt;
Lustre: Skipped 2 previous similar messages&lt;br/&gt;
LustreError: 22820:0:(obd_mount.c:2160:lustre_fill_super()) Unable to mount  (-22)&lt;br/&gt;
Lustre: 21484:0:(ldlm_lib.c:877:target_handle_connect()) MGS: connection from 40a74cfa-a6bf-33ca-ed4c-2f183d1e5bde@10.37.248.62@o2ib1 t0 exp 0000000000000000 cur 1314280859 last 0&lt;br/&gt;
Lustre: 21484:0:(ldlm_lib.c:877:target_handle_connect()) Skipped 78 previous similar messages&lt;/p&gt;
</description>
                <environment></environment>
        <key id="11570">LU-638</key>
            <summary>conf-sanity test_55: @@@@@@ FAIL: client start failed</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="mdiep">Minh Diep</assignee>
                                    <reporter username="simmonsja">James A Simmons</reporter>
                        <labels>
                    </labels>
                <created>Thu, 25 Aug 2011 10:36:21 +0000</created>
                <updated>Thu, 15 Dec 2011 09:22:00 +0000</updated>
                            <resolved>Thu, 15 Dec 2011 09:22:00 +0000</resolved>
                                                                        <due></due>
                            <votes>0</votes>
                                    <watches>1</watches>
                                                                            <comments>
                            <comment id="19635" author="simmonsja" created="Thu, 25 Aug 2011 10:53:28 +0000"  >&lt;p&gt;Sorry, I meant to label this as a conf-sanity test_56 failure.&lt;/p&gt;</comment>
                            <comment id="19666" author="adilger" created="Sat, 27 Aug 2011 11:54:00 +0000"  >&lt;p&gt;This looks like you are trying to run with 2 MDTs in CMD mode?  There shouldn&apos;t be an MDT0001 otherwise. &lt;/p&gt;</comment>
                            <comment id="19705" author="simmonsja" created="Mon, 29 Aug 2011 09:29:01 +0000"  >&lt;p&gt;Doesn&apos;t that require the mkfs.lustre parameter iam_dir? This is what I&apos;m formatting the MDT with:&lt;/p&gt;

&lt;p&gt;--mgsnode=10.37.248.61@o2ib1 --mdt --fsname=lustre --param sys.timeout=20 --device-size=200000 --mountfsoptions=errors=remount-ro,user_xattr,acl --param lov.stripesize=1048576 --param lov.stripecount=0 --param mdt.identity_upcall=/usr/sbin/l_getidentity --mkfsoptions=\&quot;-E lazy_itable_init\&quot;&lt;/p&gt;
</comment>
                            <comment id="19711" author="simmonsja" created="Mon, 29 Aug 2011 11:10:48 +0000"  >&lt;p&gt;After some tracking, I discovered the problem was the mount option acl. Once I removed it from both the client mount string and the MDS mount string, the test passed. I also tried conf-sanity test 55 and got the same result. I&apos;m looking to see what other tests the mount option acl breaks.&lt;/p&gt;</comment>
                            <comment id="24620" author="pjones" created="Tue, 13 Dec 2011 12:31:07 +0000"  >&lt;p&gt;Minh&lt;/p&gt;

&lt;p&gt;What would your expectations be regarding using the mount option acl?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="24649" author="mdiep" created="Tue, 13 Dec 2011 19:05:03 +0000"  >&lt;p&gt;Hi James,&lt;/p&gt;

&lt;p&gt;Could you try the same (with and without acl) with 1 MDT?&lt;/p&gt;</comment>
                            <comment id="24704" author="simmonsja" created="Wed, 14 Dec 2011 09:22:06 +0000"  >&lt;p&gt;Okay, I ran a bunch of tests with different options. First, the acl option doesn&apos;t cause the failure any more; it fails either way. The only MDT is being formatted with:&lt;/p&gt;

&lt;p&gt;Format mds1: /dev/md5 with --mdt --fsname=lustre --device-size=200000 --param sys.timeout=20  --mountfsoptions=errors=remount-ro,user_xattr,acl --param lov.st....&lt;/p&gt;

&lt;p&gt;Now the error I get is:&lt;/p&gt;

&lt;p&gt;Lustre: DEBUG MARKER: == conf-sanity test 55: check lov_objid size ========================================================= 09:06:09 (1323871569)&lt;br/&gt;
Lustre: import MGC10.37.248.56@o2ib1-&amp;gt;MGC10.37.248.56@o2ib1_0 netid 50001: select flavor null&lt;br/&gt;
LustreError: 152-6: Ignoring deprecated mount option &apos;acl&apos;.&lt;br/&gt;
Lustre: MGC10.37.248.56@o2ib1: Reactivating import&lt;br/&gt;
Lustre: import lustre-MDT0000-mdc-ffff8101680e6400-&amp;gt;10.37.248.61@o2ib1 netid 50001: select flavor null&lt;br/&gt;
LustreError: 11-0: an error occurred while communicating with 10.37.248.61@o2ib1. The mds_connect operation failed with -11&lt;br/&gt;
Lustre: import lustre-OST03ff-osc-ffff8101680e6400-&amp;gt;10.37.248.62@o2ib1 netid 50001: select flavor null&lt;br/&gt;
Lustre: Client lustre-client has started&lt;br/&gt;
LustreError: 19110:0:(ldlm_request.c:1173:ldlm_cli_cancel_req()) Got rc -108 from cancel RPC: canceling anyway&lt;br/&gt;
LustreError: 19110:0:(ldlm_request.c:1800:ldlm_cli_cancel_list()) ldlm_cli_cancel_list: -108&lt;br/&gt;
LustreError: 19110:0:(ldlm_request.c:1173:ldlm_cli_cancel_req()) Got rc -108 from cancel RPC: canceling anyway&lt;br/&gt;
LustreError: 19110:0:(ldlm_request.c:1800:ldlm_cli_cancel_list()) ldlm_cli_cancel_list: -108&lt;br/&gt;
Lustre: client ffff8101680e6400 umount complete&lt;br/&gt;
Lustre: import MGC10.37.248.56@o2ib1-&amp;gt;MGC10.37.248.56@o2ib1_0 netid 50001: select flavor null&lt;br/&gt;
LustreError: 152-6: Ignoring deprecated mount option &apos;acl&apos;.&lt;br/&gt;
Lustre: 26837:0:(client.c:1789:ptlrpc_expire_one_request()) @@@ Request  sent has timed out for slow reply: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1323871776/real 1323871776&amp;#93;&lt;/span&gt;  req@ffff81015de111&lt;br/&gt;
Lustre: 26846:0:(import.c:525:import_select_connection()) MGC10.37.248.56@o2ib1: tried all connections, increasing latency to 5s&lt;br/&gt;
LustreError: 26894:0:(client.c:1065:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff81015de11800 x1388179954335770/t0(0) o101-&amp;gt;MGC10.37.248.56@o2ib11&lt;br/&gt;
LustreError: 26904:0:(client.c:1065:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff810192ff9800 x1388179954335773/t0(0) o101-&amp;gt;MGC10.37.248.56@o2ib11&lt;br/&gt;
Lustre: 26837:0:(client.c:1789:ptlrpc_expire_one_request()) @@@ Request  sent has timed out for slow reply: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1323871781/real 1323871781&amp;#93;&lt;/span&gt;  req@ffff81015de111&lt;br/&gt;
Lustre: 26846:0:(import.c:525:import_select_connection()) MGC10.37.248.56@o2ib1: tried all connections, increasing latency to 10s&lt;br/&gt;
Lustre: 26837:0:(client.c:1789:ptlrpc_expire_one_request()) @@@ Request  sent has timed out for slow reply: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1323871796/real 1323871796&amp;#93;&lt;/span&gt;  req@ffff81015de111&lt;br/&gt;
Lustre: 26846:0:(import.c:525:import_select_connection()) MGC10.37.248.56@o2ib1: tried all connections, increasing latency to 15s&lt;br/&gt;
LustreError: 26894:0:(client.c:1065:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff81015de11800 x1388179954335772/t0(0) o101-&amp;gt;MGC10.37.248.56@o2ib11&lt;br/&gt;
LustreError: 15c-8: MGC10.37.248.56@o2ib1: The configuration from log &apos;lustre-client&apos; failed (-5). This may be the result of communication errors between this n.&lt;br/&gt;
LustreError: 26894:0:(llite_lib.c:951:ll_fill_super()) Unable to process log: -5&lt;br/&gt;
Lustre: 26837:0:(client.c:1789:ptlrpc_expire_one_request()) @@@ Request  sent has timed out for slow reply: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1323871816/real 1323871816&amp;#93;&lt;/span&gt;  req@ffff81015de111&lt;br/&gt;
Lustre: client ffff810163c07400 umount complete&lt;br/&gt;
LustreError: 26894:0:(obd_mount.c:2306:lustre_fill_super()) Unable to mount  (-5)&lt;br/&gt;
Lustre: DEBUG MARKER: conf-sanity test_55: @@@@@@ FAIL: client start failed&lt;/p&gt;</comment>
                            <comment id="24706" author="simmonsja" created="Wed, 14 Dec 2011 09:23:57 +0000"  >&lt;p&gt;Yipes. The MGS is stopped but never restarted...&lt;/p&gt;</comment>
                            <comment id="24810" author="simmonsja" created="Thu, 15 Dec 2011 08:27:17 +0000"  >&lt;p&gt;Tracked down the problem. It&apos;s due to having a separate MGS and MDS. This problem was reported in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-424&quot; title=&quot;conf-sanity test 55, 56, 58 do not work with separate MGS and MDT&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-424&quot;&gt;&lt;del&gt;LU-424&lt;/del&gt;&lt;/a&gt;. I have a patch that fixes the test. Peter, you can close this issue out as a duplicate of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-424&quot; title=&quot;conf-sanity test 55, 56, 58 do not work with separate MGS and MDT&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-424&quot;&gt;&lt;del&gt;LU-424&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="24819" author="pjones" created="Thu, 15 Dec 2011 09:22:00 +0000"  >&lt;p&gt;Duplicate of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-424&quot; title=&quot;conf-sanity test 55, 56, 58 do not work with separate MGS and MDT&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-424&quot;&gt;&lt;del&gt;LU-424&lt;/del&gt;&lt;/a&gt;&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzv4db:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>4241</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    </customfields>
    </item>
</channel>
</rss>