LU-12313: Mark lustre_inode_cache as reclaimable


Details

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version/s: Lustre 2.13.0

    Description

      As discussed in this email thread, Lustre currently does not mark memory allocated from slab as reclaimable.

      This makes the kernel's MemAvailable, SReclaimable and SUnreclaim values (as reported by /proc/meminfo) unreliable, because this memory actually does get freed under memory pressure or when dropping caches.

      This change should only affect how memory is reported; it should not change much else.

      As pointed out by NeilBrown, other filesystems also set their inode caches to reclaimable:

      That said: 9p, adfs, affs, befs, bfs, btrfs, ceph, cifs, coda, efs, ext2, ext4, f2fs, fat, freevxfs, fuse, gfs2, hpfs, isofs, jffs2, jfs, minix, nfs, nilfs, ntfs, ocfs2, openpromfs, overlayfs, procfs, qnx4, qnx6, reiserfs, romfs, squashfs, sysvfs, ubifs, udf, ufs, xfs all set SLAB_RECLAIM_ACCOUNT on their inode caches.

       Also from NeilBrown: 

      Yes, I think lustre_inode_cache should certainly be flagged as
      SLAB_RECLAIM_ACCOUNT.
      If the SReclaimable value is too small (and there aren't many
      reclaimable pagecache pages), vmscan can decide not to bother. This is
      probably a fairly small risk but it is possible that the missing
      SLAB_RECLAIM_ACCOUNT flag can result in memory not being reclaimed when
      it could be.

       
      It remains an open question which other caches could also be marked as reclaimable, but marking just lustre_inode_cache would already be a good improvement.
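
      For illustration, a minimal sketch of what the change amounts to, assuming the cache is created with kmem_cache_create() during llite client setup (likely lustre/llite/super25.c in this time frame); the function name below and the exact set of other flags are assumptions, not taken from the actual patch:

      {code:c}
      #include <linux/slab.h>
      #include <linux/errno.h>

      /*
       * Sketch only: ll_inode_cachep and struct ll_inode_info exist in the
       * llite code; this wrapper function and the surrounding flags are
       * assumptions for illustration.
       */
      static struct kmem_cache *ll_inode_cachep;

      static int ll_init_inode_cache(void)
      {
              ll_inode_cachep = kmem_cache_create("lustre_inode_cache",
                                                  sizeof(struct ll_inode_info),
                                                  0,
                                                  SLAB_HWCACHE_ALIGN |
                                                  SLAB_RECLAIM_ACCOUNT, /* account as reclaimable slab */
                                                  NULL);
              if (ll_inode_cachep == NULL)
                      return -ENOMEM;

              return 0;
      }
      {code}

      With SLAB_RECLAIM_ACCOUNT set, pages backing this cache are accounted as reclaimable slab, so they show up under SReclaimable (and are counted into MemAvailable) rather than SUnreclaim in /proc/meminfo.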


            People

              Assignee: Jacek Tomaka (Inactive)
              Reporter: Jacek Tomaka (Inactive)
              Votes: 0
              Watchers: 6
