<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:40:27 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-4186] Change lustre_get_jobid to read from proc file and cache it in lu_env</title>
                <link>https://jira.whamcloud.com/browse/LU-4186</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;lustre_get_jobid() calls cfs_access_process_vm(), which is a core mm function. In the upstream kernel client, we changed it to read the jobid variable from the /proc/self/environ file and cache it in lu_env so that we can remove cfs_access_process_vm().&lt;/p&gt;

&lt;p&gt;Currently, the jobid is cached at the vvp layer and is only accessible by llite. One possible optimization is to add a new lu_context_key at the obdclass layer to cache the jobid, and then change lustre_get_jobid() to handle the caching logic. Each thread would read the proc environ file only once and cache the result for its lifetime; callers simply call lustre_get_jobid() and use the returned value thereafter.&lt;/p&gt;

&lt;p&gt;This ticket was created to back-port the related upstream kernel patches and to initiate discussion about whether caching the jobid in lu_env at the obdclass layer is a good approach.&lt;/p&gt;</description>
                <environment></environment>
        <key id="21722">LU-4186</key>
            <summary>Change lustre_get_jobid to read from proc file and cache it in lu_env</summary>
                <type id="4" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11310&amp;avatarType=issuetype">Improvement</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="bergwolf">Peng Tao</reporter>
                        <labels>
                    </labels>
                <created>Wed, 30 Oct 2013 02:38:56 +0000</created>
                <updated>Mon, 24 Apr 2023 18:52:24 +0000</updated>
                            <resolved>Mon, 24 Apr 2023 18:52:24 +0000</resolved>
                                                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                                                                            <comments>
                            <comment id="70264" author="adilger" created="Wed, 30 Oct 2013 16:06:30 +0000"  >&lt;p&gt;We&apos;d like to be able to fetch the jobid for a thread once from the environment (which has just gotten more expensive in the upstream kernel), and then keep it in a thread-local location afterward that can be accessed easily.&lt;/p&gt;

&lt;p&gt;The jobid should track IO operations associated with the user process, and not against processes like ptlrpcd or pdflush, so we also store it in the inode for use when RPCs are generated.    &lt;/p&gt;

&lt;p&gt;Is lu_env the right place for this to go?  How can it be used so that the jobid is accessible in different layers of the stack?&lt;/p&gt;
</comment>
                            <comment id="70329" author="bergwolf" created="Thu, 31 Oct 2013 03:24:08 +0000"  >&lt;p&gt;It turns out that Greg KH has explicitly rejected the proc-parsing approach. As a result, it is no longer necessary to discuss where to cache the jobid... although I am still uncertain what to do about cfs_access_process_vm().&lt;/p&gt;</comment>
                            <comment id="70407" author="adilger" created="Thu, 31 Oct 2013 17:25:31 +0000"  >&lt;p&gt;I&apos;m not against caching the jobid in lu_env as was done in your first patch, since that is still more efficient than looking it up for each syscall. &lt;/p&gt;

&lt;p&gt;For now, I don&apos;t really care if the upstream build is disabled for the architectures that don&apos;t have copy_to_user_page() available.  That was actually fixed for MIPS, so it could be enabled separately from the others, but I&apos;d rather just drop this issue for now, or Greg may ask to have all of jobid deleted.  I don&apos;t want the in-kernel client to be crippled for foolish reasons like this.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                            <outwardlinks description="duplicates">
                                        <issuelink>
            <issuekey id="55724">LU-12330</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzw7an:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>11326</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>