Details

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Critical
    • Orion
    • 7130

    Description

      Changes to format and mount options

    Activity

            [LUDOC-87] OSD-Mount Doc Changes

            bzzz Alex Zhuravlev added a comment - compatibility bits are landed

            bzzz Alex Zhuravlev added a comment - http://review.whamcloud.com/5873 – with the patch read_cache_enable, readcache_max_filesize and writethrough_cache_enable can be accessed via obdfilter.*, so we wouldn't need to change the manual.
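
            For instance (an illustrative check, not taken from the manual; the "testfs-OST0000" target name is hypothetical), with that compatibility patch in place the old parameter names should still answer:

                oss# lctl get_param obdfilter.testfs-OST0000.read_cache_enable
                oss# lctl get_param obdfilter.testfs-OST0000.readcache_max_filesize
                oss# lctl get_param obdfilter.testfs-OST0000.writethrough_cache_enable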

            bzzz Alex Zhuravlev added a comment - with the latest changes to master branch, we can keep using obdfilter.* to access brw_stats. we should handle read_cache_enable similarly.
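
            As an illustration (hypothetical target name), the compatibility alias keeps the old form working:

                oss# lctl get_param obdfilter.testfs-OST0000.brw_stats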

            bzzz Alex Zhuravlev added a comment -

            – 13.14. Identifying To Which Lustre File an OST Object Belongs – check
            We probably need to mention that the "0" in /O/0/d$((34976 % 32))/34976 is a sequence number; with DNE it won't be 0 and should instead be taken from the lfs getstripe output.
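
            A sketch of how the sequence could be picked up (the file, mount point and OST device are hypothetical; the exact getstripe column layout depends on the lfs version):

                client# lfs getstripe -v /mnt/testfs/somefile
                # note the object id (objid) and the group/sequence (seq) of the stripe in question
                oss# debugfs -c -R "stat O/$seq/d$(($objid % 32))/$objid" /dev/ost_dev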

            – 14.2. Finding Nodes in the Lustre File System

            cat /proc/fs/lustre/lov/fsname-mdtlov/target_obd – should now be /proc/fs/lustre/lov/fsname-MDT<index>-mdtlov/target_obd
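
            For example (the filesystem name "testfs" and the MDT index are hypothetical):

                mds# lctl get_param lov.testfs-MDT0000-mdtlov.target_obd
                # equivalent to: cat /proc/fs/lustre/lov/testfs-MDT0000-mdtlov/target_obd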

            – 14.8.5. Restoring OST Configuration Files

            should be OK

            – 14.12. Separate a combined MGS/MDT – check

            should be OK

            – 17.1.1.1. Using Lustre_rsync – check

            should be OK

            – 24.3.1. Testing Local Disk Performance

            starting from 2.4 we load ofd.ko instead of obdfilter.ko
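
            So the module-loading step would look roughly like this (a sketch):

                oss# modprobe ofd          # Lustre 2.4 and later
                oss# modprobe obdfilter    # Lustre 2.3 and earlier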

            – 24.3.4.2. Visualizing Results

            since 2.4 brw_stats should be checked in /proc/.../osd-*/brw_stats
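
            For example (hypothetical OST name):

                oss# lctl get_param osd-*.testfs-OST0000.brw_stats
                # i.e. /proc/fs/lustre/osd-ldiskfs/testfs-OST0000/brw_stats on an ldiskfs-backed OST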

            – 30.1.4. OST Failure (Failover) - s/LOV/LOD/

            since 2.4 the MDS uses LOD instead of LOV
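
            A quick, non-authoritative way to confirm this on a 2.4 MDS (device names are illustrative):

                mds# lctl dl | grep lod
                mds# lctl list_param lod.*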

            – 31.1.5. Free Space Distribution

            cat /proc/fs/lustre/lov/fsname-mdtlov/qos_prio_free – it has actually been /.../fsname-MDTXXXX-mdtlov/ since an earlier release
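
            For example (hypothetical filesystem name and MDT index):

                mds# lctl get_param lov.testfs-MDT0000-mdtlov.qos_prio_free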

            – 31.2.6. Watching the OST Block I/O Stream – in OSD now

            oss# lctl get_param obdfilter.testfs-OST0000.brw_stats – osd-*.testfs-OST0000.brw_stats since 2.4

            – 31.2.8.1. Using OSS Read Cache

            lctl set_param osd-ldiskfs.*.read_cache_enable=0 – osd-*.*.read_cache_enable since 2.4, doesn't apply to osd-zfs
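
            Illustrative commands (the OST name is hypothetical; only meaningful on ldiskfs-backed OSTs):

                oss# lctl get_param osd-ldiskfs.testfs-OST0000.read_cache_enable
                oss# lctl set_param osd-ldiskfs.testfs-OST0000.read_cache_enable=0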

            – 36.18.3. Options

            --mountfsoptions=opts should be OK
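
            For reference, a sketch of the option in use (device, fsname and MGS NID are hypothetical; note that --mountfsoptions replaces the default backing-filesystem mount options rather than appending to them):

                mkfs.lustre --fsname=testfs --mgsnode=mgs@tcp0 --ost --index=0 \
                    --mountfsoptions="errors=remount-ro" /dev/sdb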


            bzzz Alex Zhuravlev added a comment -

            I've gone through the manual and got the following TODO:

            13.14. Identifying To Which Lustre File an OST Object Belongs – check

            14.2. Finding Nodes in the Lustre File System

            cat /proc/fs/lustre/lov/lustre-mdtlov/target_obd – lustre-MDTXXXX-mdtlov

            14.8.5. Restoring OST Configuration Files – check

            14.12. Separate a combined MGS/MDT – check

            17.1.1.1. Using Lustre_rsync – check

            24.3.1. Testing Local Disk Performance – load ofd module since 2.4

            24.3.4.2. Visualizing Results - brw_stats in /proc/.../osd-*/brw_stats

            30.1.4. OST Failure (Failover) - s/LOV/LOD/

            31.1.5. Free Space Distribution – lustre-MDTXXXX-mdtlov

            31.2.6. Watching the OST Block I/O Stream – in OSD now

            31.2.8.1. Using OSS Read Cache - in OSD now

            36.14.3. Examples – should we specify MDT index always?

            36.15.3. Options – check md_stripe_cache_size still works

            36.18.3. Options – check --mountfsoptions=opts

              • describe the LOV->LOD, OSC->OSP change on the MDS?
              • new parameters in LOD (under /proc/fs/lustre/osc/fsname-OST*-osc-MDT*/):
                sync_changes, sync_in_flight, sync_in_progress (see the example after this list)
              • ZFS support
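
            For instance, the new parameters could be read like this (a sketch; the filesystem name "testfs" is hypothetical):

                mds# lctl get_param osc.testfs-OST*-osc-MDT*.sync_changes
                mds# lctl get_param osc.testfs-OST*-osc-MDT*.sync_in_flight
                mds# lctl get_param osc.testfs-OST*-osc-MDT*.sync_in_progress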

            any comments?


            adilger Andreas Dilger added a comment -

            Update the user manual to describe changes to the user tools:

            • formatting, mounting, usage, configuration, etc.
            • potentially some of the example output (if relevant to the examples)
            • /proc tunables, module parameters, etc.

            Since we use a single manual for all of the Lustre releases, when changing any significant section of text the content should state the version in which the change was made (e.g. 2.4) and leave the old description in place (at least back to 2.1; older 1.x sections can be removed), for example:

            Up to Lustre 2.3, writeconf was run via tunefs.lustre --writeconf /dev/mdtdev. Starting with Lustre 2.4, writeconf should be run using ...


            People

              bzzz Alex Zhuravlev
              jlevi Jodi Levi (Inactive)
              Votes: 0
              Watchers: 3
