<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:46:24 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-4851] Lustre kernel panic when using Intel VTune Amplifier XE 2013</title>
                <link>https://jira.whamcloud.com/browse/LU-4851</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;How to trigger the kernel panic:&lt;/p&gt;

&lt;p&gt;1. start-up amplxe-gui and create a new project, e.g. &quot;test&quot;.&lt;/p&gt;

&lt;p&gt;2. Then set the executable to use to say /bin/ls.&lt;/p&gt;

&lt;p&gt;3. Now create a new analysis and select any of the ones available to your process architecture and run it.&lt;/p&gt;

&lt;p&gt;4. Once the simulation has completed, exit amplxe-gui.&lt;/p&gt;

&lt;p&gt;5. Load amplxe-gui again and re-run the simulation in step 3.&lt;/p&gt;

&lt;p&gt;6. Exit amplxe-gui once the simulation completes.&lt;/p&gt;

&lt;p&gt;7. Repeat steps 5 to 6 until the node kernel panics (normally takes three or four attempts).&lt;/p&gt;

&lt;p&gt;It should be noted that users&apos; home directories are stored on Lustre and also the environment variable $TMPDIR is set to a directory within users&apos; homespaces on Lustre. The Vtune &quot;project files&quot; are therefore stored on Lustre. I suspect that the reading or writing of these files by Vtune could be the cause of the kernel panic.&lt;/p&gt;</description>
                <environment></environment>
        <key id="24031">LU-4851</key>
            <summary>Lustre kernel panic when using Intel VTune Amplifier XE 2013</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="bfaccini">Bruno Faccini</assignee>
                                    <reporter username="rganesan@ddn.com">Rajeshwaran Ganesan</reporter>
                        <labels>
                    </labels>
                <created>Wed, 2 Apr 2014 09:54:39 +0000</created>
                <updated>Fri, 16 May 2014 14:06:10 +0000</updated>
                            <resolved>Fri, 16 May 2014 14:06:09 +0000</resolved>
                                    <version>Lustre 2.4.3</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>7</watches>
                                                                            <comments>
                            <comment id="80825" author="rganesan@ddn.com" created="Wed, 2 Apr 2014 10:58:52 +0000"  >&lt;p&gt;This issue occurs only on the client, &lt;/p&gt;


&lt;p&gt;here is the lustre versions&lt;/p&gt;


&lt;p&gt;The kernel panic only occurs on the client where Vtune was being executed.&lt;/p&gt;

&lt;p&gt;The MDS and OSS nodes do not suffer any issues.&lt;/p&gt;

&lt;p&gt;The clients are running SLES 11 SP3:&lt;/p&gt;

&lt;p&gt;lustre-client-2.4.3-3.0.93_0.8_default_gfc544a1&lt;br/&gt;
lustre-client-modules-2.4.3-3.0.93_0.8_default_gfc544a1&lt;br/&gt;
lustre-client-tests-2.4.3-3.0.93_0.8_default_gfc544a1&lt;br/&gt;
lustre-iokit-1.4.0-1&lt;/p&gt;

&lt;p&gt;Kernel: 3.0.93-0.8-default&lt;/p&gt;

&lt;p&gt;The MDS and OSS nodes are running CentOS:&lt;/p&gt;

&lt;p&gt;kernel-2.6.32-358.18.1.el6_lustre.es50.x86_64&lt;br/&gt;
kernel-devel-2.6.32-358.18.1.el6_lustre.es50.x86_64&lt;br/&gt;
kernel-firmware-2.6.32-358.18.1.el6_lustre.es50.x86_64&lt;br/&gt;
kernel-headers-2.6.32-358.18.1.el6_lustre.es50.x86_64&lt;br/&gt;
kernel-ib-1.5.3-2.6.32_358.18.1.el6_lustre.es50.x86_64_2.6.32_358.18.1.el6_lustre.es50.x86_64&lt;br/&gt;
kernel-ib-devel-1.5.3-2.6.32_358.18.1.el6_lustre.es50.x86_64_2.6.32_358.18.1.el6_lustre.es50.x86_64&lt;br/&gt;
lustre-2.4.1-ddn1.0_2.6.32_358.18.1.el6_lustre.es50.x86_64_ES.x86_64&lt;br/&gt;
lustre-ldiskfs-4.1.0-2.6.32_358.18.1.el6_lustre.es50.x86_64.x86_64&lt;br/&gt;
lustre-modules-2.4.1-ddn1.0_2.6.32_358.18.1.el6_lustre.es50.x86_64_ES.x86_64&lt;br/&gt;
lustre-osd-ldisfs-2.4.1-ddn1.0_2.6.32_358.18.1.el6_lustre.es50.x86_64_ES.x86_64&lt;br/&gt;
lustre-source-2.4.1-ddn1.0_2.6.32_358.18.1.el6_lustre.es50.x86_64_ES.x86_64&lt;/p&gt;</comment>
                            <comment id="80834" author="bfaccini" created="Wed, 2 Apr 2014 12:53:59 +0000"  >&lt;p&gt;Hello Rajeshwaran,&lt;br/&gt;
Is there a crash-dump available for one of the occurrences, and if so, can you provide it?&lt;br/&gt;
Also, was there some Lustre debug (like rpctrace, dlmtrace, ...) enabled at the time of the crashes?&lt;/p&gt;</comment>
                            <comment id="80873" author="green" created="Wed, 2 Apr 2014 17:10:22 +0000"  >&lt;p&gt;This actually might be a dup of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4403&quot; title=&quot;ASSERTION( lock-&amp;gt;l_readers &amp;gt; 0 )&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4403&quot;&gt;&lt;del&gt;LU-4403&lt;/del&gt;&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="80874" author="bfaccini" created="Wed, 2 Apr 2014 17:17:43 +0000"  >&lt;p&gt;Oleg, even if &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4403&quot; title=&quot;ASSERTION( lock-&amp;gt;l_readers &amp;gt; 0 )&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4403&quot;&gt;&lt;del&gt;LU-4403&lt;/del&gt;&lt;/a&gt; occurs on Server/MDS side ??&lt;/p&gt;</comment>
                            <comment id="80877" author="green" created="Wed, 2 Apr 2014 17:25:24 +0000"  >&lt;p&gt;actually, with a client crash, it might be something else. We definitely need a full backtrace from the crash here at the very least.&lt;/p&gt;</comment>
                            <comment id="80935" author="bfaccini" created="Thu, 3 Apr 2014 14:04:34 +0000"  >&lt;p&gt;Hello Rajeshwaran,&lt;br/&gt;
As we agreed during the conf-call, I am trying to reproduce the issue in-house. I am currently unable to reproduce the LBUG (&quot;(ldlm_lock.c:851:ldlm_lock_decref_internal_nolock()) ASSERTION( lock-&amp;gt;l_readers &amp;gt; 0 ) failed:&quot;) you encountered.&lt;br/&gt;
BTW, you seem to use VTune&apos;s GUI, but did you try to run with the command line only (&quot;amplxe-cl -&lt;span class=&quot;error&quot;&gt;&amp;#91;collect,report&amp;#93;&lt;/span&gt; ...&quot;), just to see if we can simplify the reproducer?&lt;/p&gt;
</comment>
                            <comment id="81245" author="jfc" created="Wed, 9 Apr 2014 01:28:45 +0000"  >&lt;p&gt;Hello Rajeshwaran, &lt;br/&gt;
Could you please try running VTune using just the command-line interface, as requested above?&lt;br/&gt;
A reproducer from a command-line run would greatly improve our chances of reproducing this issue on our in-house platform.&lt;/p&gt;

&lt;p&gt;Many thanks,&lt;br/&gt;
~ jfc.&lt;/p&gt;</comment>
                            <comment id="81257" author="bfaccini" created="Wed, 9 Apr 2014 09:45:26 +0000"  >&lt;p&gt;Yes, it would be helpful and easier to reproduce in-house if you could confirm that you also reproduce by running VTune via its command-line interface. So, if I try to mimic your GUI&apos;s actions to reproduce wit command-line, it should be something like :&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;for i in `seq 1 100` ; do 
/opt/intel/vtune_amplifier_xe_2013/bin64/amplxe-cl -collect hotspots -result-dir=/mnt/lustre/vtune/intel/amplxe/projects/ls_lustre2/r${i}hs -app-working-dir=/mnt/lustre/vtune/ ls -laR /mnt/lustre
/opt/intel/vtune_amplifier_xe_2013/bin64/amplxe-cl -report hotspots -r /mnt/lustre/vtune/intel/amplxe/projects/ls_lustre2/r${i}hs
done
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;On the other hand, I am still unable to reproduce this, even using the GUI interface as you reported. So I am beginning to think that it is configuration dependent (number of OSTs, default/used striping, ...) and not purely a consequence of VTune&apos;s behavior.&lt;/p&gt;

&lt;p&gt;BTW, I forgot to ask which mount options are used on your client. In particular, do you mount Lustre with flock/localflock/noflock?&lt;/p&gt;
</comment>
                            <comment id="81260" author="rganesan@ddn.com" created="Wed, 9 Apr 2014 10:58:04 +0000"  >&lt;p&gt;Hello, &lt;/p&gt;

&lt;p&gt;Lustre is mounted on our login nodes with the following options: &#8220;rw&#8221;, &#8220;_netdev&#8221; and &#8220;flock&#8221;&lt;/p&gt;


&lt;p&gt;We mount it under /scratch&lt;/p&gt;



&lt;p&gt;The mount definitions for Lustre are stored in /etc/fstab and we manually mount the file system once the node has been booted, e.g.:&lt;/p&gt;



&lt;p&gt;            mount /scratch&lt;/p&gt;



&lt;p&gt;No output from the mount command is produced. In dmesg we see:&lt;/p&gt;



&lt;p&gt;Lustre: Lustre: Build Version: jenkins-arch=x86_64,build_type=client,distro=el6,ib_stack=inkernel-22597-gfc544a1-PRISTINE-../lustre/scripts&lt;/p&gt;

&lt;p&gt;LNet: Added LNI XXX.XXX.XXX.XXX@o2ib &lt;span class=&quot;error&quot;&gt;&amp;#91;YYY/YYY/YYY/YYY&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Lustre: Layout lock feature supported.&lt;/p&gt;

&lt;p&gt;Lustre: Mounted scratch-client&lt;/p&gt;



&lt;p&gt;Note: IP info redacted from the output above.&lt;/p&gt;



&lt;p&gt;The default stripe count is 12 (all OSTs).&lt;/p&gt;

</comment>
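The client mount setup described in the comment above can be summarized as an /etc/fstab entry. This is a minimal sketch: the MGS NID (10.0.0.1@o2ib) and fsname (scratch) are placeholders, not values taken from this ticket.

```shell
# Hypothetical /etc/fstab line matching the options described above
# ("rw", "_netdev", "flock"); the MGS NID and fsname are placeholders.
fstab_line='10.0.0.1@o2ib:/scratch  /scratch  lustre  rw,_netdev,flock  0 0'
echo "$fstab_line"

# With that entry in place, the filesystem is mounted manually after boot:
#   mount /scratch
# Replacing "flock" with "localflock" in the options field gives
# client-local (single-node) flock semantics instead of cluster-wide ones.
```

The flock/localflock distinction in the options field is what the later comments on this ticket turn on.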
                            <comment id="81269" author="rganesan@ddn.com" created="Wed, 9 Apr 2014 13:53:45 +0000"  >&lt;p&gt;Please find attached a dmesg trace that is produced when the LBUG kernel panic occurs.&lt;/p&gt;

&lt;p&gt;I&apos;ve not been able to recreate the issue with the command-line utility.&lt;/p&gt;

&lt;p&gt;However, I have found a quicker way to trigger the bug, merely by closing and reopening the VTune GUI (i.e. no need to create any projects or analyses).&lt;/p&gt;

&lt;p&gt;E.g.&lt;/p&gt;

&lt;p&gt;cd ~&lt;br/&gt;
rm -rf .intel intel tmp/*&lt;br/&gt;
amplxe-gui&lt;/p&gt;

&lt;ol&gt;
	&lt;li&gt;now close the GUI (if it loads) and rerun the &quot;amplxe-gui&quot; command&lt;/li&gt;
	&lt;li&gt;repeat until crash occurs (normally by 4th attempt)&lt;/li&gt;
&lt;/ol&gt;


&lt;p&gt;The crash is triggered whilst the splash screen for VTune is visible but before the actual main VTune window is displayed. I therefore assume the bug is being triggered by some of the VTune GUI&apos;s start-up code.&lt;/p&gt;

&lt;p&gt;As suspected, I cannot recreate the issue if I move my homespace to a non-Lustre file system.&lt;/p&gt;

&lt;p&gt;Hope this helps,&lt;/p&gt;</comment>
                            <comment id="81272" author="rganesan@ddn.com" created="Wed, 9 Apr 2014 14:03:23 +0000"  >&lt;p&gt;Hello Bruno, &lt;/p&gt;

&lt;p&gt;Please let me know, if you need any other logs or any other commands to try. &lt;/p&gt;


&lt;p&gt;I can send them and get the results.&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Rajesh&lt;/p&gt;</comment>
                            <comment id="81341" author="bfaccini" created="Thu, 10 Apr 2014 09:05:59 +0000"  >&lt;p&gt;Hello Rajeshwaran,&lt;br/&gt;
Thanks for all this additional information!&lt;br/&gt;
BTW, with the new dmesg you provided, and the panic/LBUG stack now available, it is clear that the problem occurs during FLock operations.&lt;br/&gt;
This allows me to suspect that you may be triggering the problem/race I already worked on as part of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3684&quot; title=&quot;LBUG/&amp;quot;ldlm_lock_decref_internal_nolock()) ASSERTION(lock-&amp;gt;l_readers &amp;gt; 0) failed&amp;quot; running Bull&amp;#39;s NFS locktests&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3684&quot;&gt;&lt;del&gt;LU-3684&lt;/del&gt;&lt;/a&gt;, for which I pushed a fix that does not seem to be in b2_4 yet ...&lt;br/&gt;
Will also try again to reproduce using your new+simplified instructions.&lt;/p&gt;</comment>
                            <comment id="81354" author="rganesan@ddn.com" created="Thu, 10 Apr 2014 12:48:08 +0000"  >&lt;p&gt;Hello Bruno,&lt;/p&gt;

&lt;p&gt;The customer tried running the CLI loop shown above, but after 100 iterations the login node refuses to crash.&lt;/p&gt;


&lt;p&gt;Yet if they switch back to the GUI, they can get the login node to crash after a few attempted launches as described previously.&lt;br/&gt;
Thanks,&lt;br/&gt;
Rajesh&lt;/p&gt;</comment>
                            <comment id="81355" author="rganesan@ddn.com" created="Thu, 10 Apr 2014 12:50:07 +0000"  >&lt;p&gt;Hello Bruno,&lt;/p&gt;

&lt;p&gt;Some good news: with localflock, the crash does not occur.&lt;/p&gt;

&lt;p&gt;If they mount using flock, it crashes as usual.&lt;/p&gt;

&lt;p&gt;Is there any harm in mounting our Lustre file system with &#8220;localflock&#8221; on our clients rather than &#8220;flock&#8221;?&lt;/p&gt;

&lt;p&gt;What is your suggestion on localflock versus flock?&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Rajesh&lt;/p&gt;</comment>
                            <comment id="81366" author="bfaccini" created="Thu, 10 Apr 2014 14:02:16 +0000"  >&lt;p&gt;After I found that the issue is FLock related, I had also in mind to ask you to try with either localflock and/or noflock mount options, but decided not to do so right now since I assumed that your customer is likely to use applications that require cluster-wide/multi-nodes FLock support (guaranteed with flock option), and not only local/single-node FLock support (localflock scope). But since you asked/tried, may be you can check with your customer if this can feet with their production/applications ??...&lt;/p&gt;
</comment>
                            <comment id="81713" author="rganesan@ddn.com" created="Wed, 16 Apr 2014 08:27:34 +0000"  >&lt;p&gt;Are we getting any fix for flock option?&lt;/p&gt;</comment>
                            <comment id="81719" author="bfaccini" created="Wed, 16 Apr 2014 09:53:45 +0000"  >&lt;p&gt;1st of all I would like to add an update about my reproduction efforts in-house ... Unfortunatelly I am still unable to reproduce until now, and this even after I used your latest instructions and configuration details.&lt;/p&gt;

&lt;p&gt;Concerning a possible fix to cover the flock option usage, I made a b2_4 back-port of my previous patch for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1126&quot; title=&quot;Client file locking issue. Assertion triggered when decrementing a read lock on an item that has no existing read locks.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1126&quot;&gt;&lt;del&gt;LU-1126&lt;/del&gt;&lt;/a&gt;/&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3684&quot; title=&quot;LBUG/&amp;quot;ldlm_lock_decref_internal_nolock()) ASSERTION(lock-&amp;gt;l_readers &amp;gt; 0) failed&amp;quot; running Bull&amp;#39;s NFS locktests&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3684&quot;&gt;&lt;del&gt;LU-3684&lt;/del&gt;&lt;/a&gt;; it is available at &lt;a href=&quot;http://review.whamcloud.com/9968&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/9968&lt;/a&gt; and is running our test suites.&lt;/p&gt;

&lt;p&gt;Last, since I am unable to reproduce, did you pursue (and succeed with) the process of getting information out from the site, as we discussed during the conf-call with the customer? As we already insisted, the full debug log from the client side, taken during a reproducer run, would be more than helpful to understand the issue, and also to confirm that my patch is the fix.&lt;/p&gt;</comment>
                            <comment id="81721" author="rganesan@ddn.com" created="Wed, 16 Apr 2014 10:17:03 +0000"  >&lt;p&gt;Hello Bruno,&lt;/p&gt;

&lt;p&gt;Could you please provide the source RPM with the patch? I can ask them to rebuild and install it, and we can verify it on the client. &lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Rajesh&lt;/p&gt;</comment>
                            <comment id="81726" author="bfaccini" created="Wed, 16 Apr 2014 12:56:53 +0000"  >&lt;p&gt;Builds with my patch are available under &lt;a href=&quot;http://build.whamcloud.com/job/lustre-reviews/23081/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-reviews/23081/&lt;/a&gt;. Can you access it ? If yes, you can check the target OS from the build matrix, and then follow the &quot;Build Artifacts&quot; link where you can find the corresponding source rpm.&lt;/p&gt;

&lt;p&gt;But I suggest you wait for our test suites run to succeed before applying it on-site.&lt;/p&gt;</comment>
                            <comment id="81727" author="rganesan@ddn.com" created="Wed, 16 Apr 2014 13:00:35 +0000"  >&lt;p&gt;Thanks for your help, Sure I can wait, please let me know once it passes the test. &lt;/p&gt;</comment>
                            <comment id="81817" author="bfaccini" created="Thu, 17 Apr 2014 09:57:43 +0000"  >&lt;p&gt;#9968 has successfully passed almost all Maloo tests, only one unrelated failure in lustre-rsync-test/test_8 for a known+unrelated failure already tracked in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3573&quot; title=&quot;lustre-rsync-test test_8: @@@@@@ FAIL: Failure in replication; differences found. &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3573&quot;&gt;&lt;del&gt;LU-3573&lt;/del&gt;&lt;/a&gt;. So, it is safe for on-site exposure !&lt;/p&gt;

&lt;p&gt;Concerning how to get the Lustre debug log upon LBUG, here are my instructions, to be applied on the client node where the reproducer will run:&lt;/p&gt;

&lt;ol&gt;
	&lt;li&gt;as already requested, ensure /proc/sys/lnet/debug&lt;span class=&quot;error&quot;&gt;&amp;#91;_mb&amp;#93;&lt;/span&gt; are respectively set to at least rpctrace+dlmtrace in addition to the default value (-1 would be the best!) for the trace-mask, and to a reasonable value (2048 or even 4096) for the debug-buffer size&lt;/li&gt;
	&lt;li&gt;unset/0 /proc/sys/lnet/panic_on_lbug&lt;/li&gt;
	&lt;li&gt;run the reproducer/VTune to get to the LBUG&lt;/li&gt;
	&lt;li&gt;the Lustre debug log should be dumped automatically upon LBUG into a file named /tmp/lustre-log.&amp;lt;seconds-since-the-Epoch.milliseconds&amp;gt;, but if not, you can force this with the &quot;lctl dk &amp;lt;path/file&amp;gt;&quot; command&lt;/li&gt;
	&lt;li&gt;if for any reason panic_on_lbug cannot be unset, I can also provide you with the information necessary to extract the Lustre debug log from a crash-dump!&lt;/li&gt;
	&lt;li&gt;once you have ensured the Lustre debug log has been collected/saved, you will need to reboot the client node to get Lustre functional again&lt;/li&gt;
&lt;/ol&gt;

</comment>
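The collection steps above can be sketched as a small shell helper. This is a sketch only: the function name `lustre_debug_setup` is hypothetical, and it takes the lnet procfs directory as a parameter so it can be dry-run against a scratch directory outside a Lustre client; on a real 2.x client it would be called as root with /proc/sys/lnet.

```shell
# Sketch of the Lustre debug-log collection steps above. The lnet procfs
# directory is a parameter so the function can be exercised against a
# scratch directory; on a real client, run as root with /proc/sys/lnet.
lustre_debug_setup() {
    lnet_proc=$1
    echo -1   > "$lnet_proc/debug"          # trace mask: everything (at least +rpctrace +dlmtrace)
    echo 4096 > "$lnet_proc/debug_mb"       # debug buffer size, in MB
    echo 0    > "$lnet_proc/panic_on_lbug"  # keep the node up after the LBUG
}

# After the LBUG, the log is normally dumped automatically to
# /tmp/lustre-log.<seconds-since-the-Epoch.milliseconds>; if not,
# it can be forced with:
#   lctl dk <path/file>
```

Leaving panic_on_lbug at 0 is what lets the node survive long enough to save the dump before the required reboot.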
                            <comment id="82256" author="rganesan@ddn.com" created="Wed, 23 Apr 2014 09:13:31 +0000"  >&lt;p&gt;Hello Bruno,&lt;/p&gt;

&lt;p&gt;What could be best option for the mounting for vtunes application. flock or localflock. what is the best practice for the vtunes application. Could you please check with Vtunes team&lt;/p&gt;</comment>
                            <comment id="82259" author="dmiter" created="Wed, 23 Apr 2014 09:40:06 +0000"  >&lt;p&gt;VTune application use flock to protect database changes across multiple users. If you don&apos;t share results between different computers the options &quot;localflock&quot; will be enough. If you have concurrent access to the same result directory from different computers you definitely need &quot;flock&quot; option.&lt;/p&gt;</comment>
                            <comment id="82388" author="rganesan@ddn.com" created="Thu, 24 Apr 2014 14:36:59 +0000"  >&lt;p&gt;Hello Bruno - Cu. is applying the patch this week, once I have an update, I will let you know. &lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Rajesh&lt;/p&gt;</comment>
                            <comment id="83217" author="bfaccini" created="Mon, 5 May 2014 17:57:46 +0000"  >&lt;p&gt;Hello Rajesh,&lt;br/&gt;
Do you have any news/update from the site ?&lt;/p&gt;</comment>
                            <comment id="83280" author="rganesan@ddn.com" created="Tue, 6 May 2014 11:40:51 +0000"  >&lt;p&gt;Hello Bruno,&lt;/p&gt;

&lt;p&gt;The customer has applied the patch and is no longer seeing the issue; they are in the process of updating the remaining clients. &lt;/p&gt;


&lt;p&gt;Thanks,&lt;br/&gt;
Rajesh&lt;/p&gt;</comment>
                            <comment id="84224" author="jfc" created="Fri, 16 May 2014 00:54:59 +0000"  >&lt;p&gt;Rajesh, how are we doing?&lt;br/&gt;
Can we mark this as resolved?&lt;br/&gt;
Thanks!&lt;br/&gt;
~ jfc.&lt;/p&gt;</comment>
                            <comment id="84246" author="rganesan@ddn.com" created="Fri, 16 May 2014 14:00:06 +0000"  >&lt;p&gt;Hello John -&lt;/p&gt;

&lt;p&gt;Please go ahead and close this LU. &lt;/p&gt;


&lt;p&gt;Thanks for your help,&lt;br/&gt;
Rajesh&lt;/p&gt;</comment>
                            <comment id="84247" author="pjones" created="Fri, 16 May 2014 14:06:10 +0000"  >&lt;p&gt;Thanks Rajesh&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                            <attachment id="14678" name="AWE_sprig3_dmesg.txt" size="3045" author="rganesan@ddn.com" created="Wed, 9 Apr 2014 13:54:44 +0000"/>
                            <attachment id="14640" name="sprig_vtune_messages.txt" size="10037" author="rganesan@ddn.com" created="Wed, 2 Apr 2014 11:26:13 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwj3r:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>13382</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10021"><![CDATA[2]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>