<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:57:47 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-13032] Add lctl cleanup|uncache|revalidate commands for PCC</title>
                <link>https://jira.whamcloud.com/browse/LU-13032</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;If there is a &quot;kept&quot; file in PCC but the file was modified in Lustre after the data was restored, we need to ensure that the stale PCC copy is removed from the cache.&lt;/p&gt;

&lt;p&gt;Usually a daemon runs on the PCC client that monitors the space usage of the PCC device, scans it, and takes actions accordingly; such a daemon could also be used to remove this kind of stale PCC copy.&lt;br/&gt;
 We could add lctl pcc commands or llapi interfaces as follows:&lt;/p&gt;
&lt;ol&gt;
	&lt;li&gt;lctl pcc clean $MNTPT $PCCPATH&lt;br/&gt;
 This command cleans stale, invalid PCC copies out of PCC to free up space.&lt;/li&gt;
	&lt;li&gt;lctl pcc uncache $MNTPT $PCCPATH&lt;br/&gt;
 This command restores all data back to the Lustre OSTs and then removes the PCC copies. It is similar to lctl pcc del, but it does not delete the PCC backend from the client.&lt;/li&gt;
	&lt;li&gt;lctl pcc revalidate $MNTPT $PCCPATH&lt;br/&gt;
 This command tries to attach the PCC copies again if they are still valid.&lt;br/&gt;
First, if the layout generation is consistent, the copy can be attached directly;&lt;br/&gt;
otherwise, compare the data version stored in the HSM attributes with that of the file in Lustre; if they are the same, the PCC cache can also be revalidated.&lt;/li&gt;
&lt;/ol&gt;
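&lt;p&gt;A rough usage sketch of the proposed workflow (the command names above are proposals from this ticket, not an existing interface, and the mount point and PCC path values below are hypothetical):&lt;/p&gt;

&lt;pre&gt;
MNTPT=/mnt/lustre      # Lustre mount point (hypothetical)
PCCPATH=/mnt/pcc       # PCC backend path (hypothetical)

# Free space by removing stale, invalid PCC copies
lctl pcc clean $MNTPT $PCCPATH

# Restore all data to the OSTs and drop the copies, keeping the backend registered
lctl pcc uncache $MNTPT $PCCPATH

# Re-attach copies whose layout generation or data version still match
lctl pcc revalidate $MNTPT $PCCPATH
&lt;/pre&gt;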
</description>
                <environment></environment>
        <key id="57501">LU-13032</key>
            <summary>Add lctl cleanup|uncache|revalidate commands for PCC</summary>
                <type id="7" iconUrl="https://jira.whamcloud.com/images/icons/issuetypes/task_agile.png">Technical task</type>
                            <parent id="56799">LU-12714</parent>
                                    <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="qian_wc">Qian Yingjin</assignee>
                                    <reporter username="qian_wc">Qian Yingjin</reporter>
                        <labels>
                    </labels>
                <created>Fri, 29 Nov 2019 03:10:54 +0000</created>
                <updated>Fri, 29 Nov 2019 09:27:30 +0000</updated>
                                                                                <due></due>
                            <votes>0</votes>
                                    <watches>2</watches>
                                                                            <comments>
                            <comment id="258974" author="adilger" created="Fri, 29 Nov 2019 07:11:49 +0000"  >&lt;p&gt;Wasn&apos;t there already a patch in the PCC branch that did this - syncing the PCC cache with Lustre at unmount time? I think that adding commands to do this manually might help, but it is more important that this be handled as automatically as possible. Options include resyncing idle PCC files to Lustre periodically so that they can be released from PCC quickly if needed, and reducing the delay at unmount time.&lt;/p&gt;</comment>
                            <comment id="258976" author="qian_wc" created="Fri, 29 Nov 2019 07:41:01 +0000"  >&lt;p&gt;No, we do not have such a patch in the PCC branch to sync the PCC cache with Lustre at unmount time.&lt;br/&gt;
Since the commands above are all implemented in user space, syncing at unmount time in the kernel is somewhat complex...&lt;br/&gt;
I agree that we should add an option to determine whether to sync the PCC cache at unmount time, for better support of disconnected operation with WBC on PCC, e.g. on a mobile device which may be taken offline manually.&lt;/p&gt;</comment>
                            <comment id="258982" author="adilger" created="Fri, 29 Nov 2019 09:27:30 +0000"  >&lt;p&gt;I was thinking about patch &lt;a href=&quot;https://review.whamcloud.com/35230&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/35230&lt;/a&gt; &quot;&lt;tt&gt;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12373&quot; title=&quot;detach and delete the PCC cached files when remove a PCC backend from a client&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12373&quot;&gt;LU-12373&lt;/a&gt; pcc: uncache the pcc copies when remove a PCC backend&lt;/tt&gt;&quot; to prevent the PCC cache filesystem from holding dirty files.&lt;/p&gt;

&lt;p&gt;I think we are still a long way from disconnected client operations like CODA/Intermezzo.  I&apos;m not against that at some point in the future (I actually worked on Intermezzo to support disconnected clients at the same time I first worked on Lustre), but our cache file management/resync has to be &lt;b&gt;much&lt;/b&gt; better than it is today before this would be practical to deploy.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00q6f:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                </customfields>
    </item>
</channel>
</rss>