[LU-7686] Interop 2.7.1<->master - sanity-scrub test_4a: (4) Expected 'inconsistent' on mds1, but got 'recreated,inconsistent' Created: 19/Jan/16  Updated: 02/Jun/16  Resolved: 11/Feb/16

Status: Closed
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.8.0
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: Maloo Assignee: WC Triage
Resolution: Won't Fix Votes: 0
Labels: None
Environment:

Server: master, build# 3303, RHEL 6.7
Client: 2.7.1, b2_7_fe/34


Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

This issue was created by maloo for Saurabh Tandan <saurabh.tandan@intel.com>

This issue relates to the following test suite run: https://testing.hpdd.intel.com/test_sets/39652058-bad8-11e5-87b4-5254006e85c2.

The sub-test test_4a failed with the following error:

(4) Expected 'inconsistent' on mds1, but got 'recreated,inconsistent'
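
For context, the check that fails here reads the OI scrub status from the MDT and compares the reported flags against an expected string. A minimal sketch of that comparison is shown below; the helper structure and the awk filter are assumptions for illustration, while the lctl parameter is the one the test queries in the log that follows:

# Illustrative sketch only; the sanity-scrub.sh helper is not reproduced verbatim.
expected="inconsistent"
actual=$(lctl get_param -n osd-ldiskfs.lustre-MDT0000.oi_scrub |
         awk '/^flags:/ { print $2 }')
if [ "$actual" != "$expected" ]; then
    echo "(4) Expected '$expected' on mds1, but got '$actual'"
    exit 1
fi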

test log:

== sanity-scrub test 4a: Auto trigger OI scrub if bad OI mapping was found (1) == 10:46:38 (1452681998)
preparing... Wed Jan 13 10:46:38 UTC 2016
creating 0 files on mds1
prepared Wed Jan 13 10:46:39 UTC 2016.
CMD: shadow-2vm5.shadow.whamcloud.com,shadow-2vm6 running=\$(grep -c /mnt/lustre' ' /proc/mounts);
if [ \$running -ne 0 ] ; then
echo Stopping client \$(hostname) /mnt/lustre opts:;
lsof /mnt/lustre || need_kill=no;
if [ x != x -a x\$need_kill != xno ]; then
    pids=\$(lsof -t /mnt/lustre | sort -u);
    if [ -n \"\$pids\" ]; then
             kill -9 \$pids;
    fi
fi;
while umount  /mnt/lustre 2>&1 | grep -q busy; do
    echo /mnt/lustre is still busy, wait one second && sleep 1;
done;
fi
stop mds1
CMD: shadow-2vm12 grep -c /mnt/mds1' ' /proc/mounts
CMD: shadow-2vm12 umount -d /mnt/mds1
CMD: shadow-2vm12 lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
CMD: shadow-2vm12 test -b /dev/lvm-Role_MDS/P1
file-level backup/restore on mds1:/dev/lvm-Role_MDS/P1
CMD: shadow-2vm12 mkdir -p /mnt/brpt
CMD: shadow-2vm12 rm -f /tmp/backup_restore.ea /tmp/backup_restore.tgz
CMD: shadow-2vm12 mount -t ldiskfs /dev/lvm-Role_MDS/P1 /mnt/brpt
backup EA
CMD: shadow-2vm12 cd /mnt/brpt && getfattr -R -d -m '.*' -P . > /tmp/backup_restore.ea && cd -
/usr/lib64/lustre/tests
backup data
CMD: shadow-2vm12 umount -d /mnt/brpt
reformat new device
CMD: shadow-2vm12 grep -c /mnt/mds1' ' /proc/mounts
CMD: shadow-2vm12 lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
CMD: shadow-2vm12 mkfs.lustre --mgs --fsname=lustre --mdt --index=0 --param=sys.timeout=20 --param=lov.stripesize=1048576 --param=lov.stripecount=0 --param=mdt.identity_upcall=/usr/sbin/l_getidentity --backfstype=ldiskfs --device-size=200000 --backfstype ldiskfs --reformat /dev/lvm-Role_MDS/P1
CMD: shadow-2vm12 mount -t ldiskfs /dev/lvm-Role_MDS/P1 /mnt/brpt
restore data
restore EA
CMD: shadow-2vm12 cd /mnt/brpt && setfattr --restore=/tmp/backup_restore.ea && cd - 
/usr/lib64/lustre/tests
remove recovery logs
CMD: shadow-2vm12 rm -fv /mnt/brpt/OBJECTS/* /mnt/brpt/CATALOGS
removed `/mnt/brpt/CATALOGS'
CMD: shadow-2vm12 umount -d /mnt/brpt
CMD: shadow-2vm12 rm -f /tmp/backup_restore.ea /tmp/backup_restore.tgz
CMD: shadow-2vm12 e2label /dev/lvm-Role_MDS/P1 lustre-MDT0000
starting MDTs with OI scrub disabled
CMD: shadow-2vm12 mkdir -p /mnt/mds1
CMD: shadow-2vm12 test -b /dev/lvm-Role_MDS/P1
CMD: shadow-2vm12 mkdir -p /mnt/mds1; mount -t lustre -o user_xattr,noscrub /dev/lvm-Role_MDS/P1 /mnt/mds1
CMD: shadow-2vm12 PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/usr/lib64/qt-3.3/bin:/usr/lib64/openmpi/bin:/usr/bin:/bin:/usr/sbin:/sbin::/sbin:/bin:/usr/sbin: NAME=autotest_config sh rpc.sh set_default_debug \"vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck\" \"all -lnet -lnd -pinger\" 4 
CMD: shadow-2vm12 e2label /dev/lvm-Role_MDS/P1 2>/dev/null
CMD: shadow-2vm12 /usr/sbin/lctl get_param -n osd-ldiskfs.lustre-MDT0000.oi_scrub
 sanity-scrub test_4a: @@@@@@ FAIL: (4) Expected 'inconsistent' on mds1, but got 'recreated,inconsistent'
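
Condensed from the log above, the test performs a file-level backup/restore of the MDT and then remounts it with OI scrub disabled before checking the flags. The sketch below only summarizes those captured commands (device, mount point, and label are the ones the test used; the data backup/restore step is implied in the log but not shown verbatim):

# Condensed sketch of the test_4a preparation steps, taken from the log above.
mount -t ldiskfs /dev/lvm-Role_MDS/P1 /mnt/brpt
(cd /mnt/brpt && getfattr -R -d -m '.*' -P . > /tmp/backup_restore.ea)   # back up EAs
# ... back up file data, reformat the device, restore the data ...
(cd /mnt/brpt && setfattr --restore=/tmp/backup_restore.ea)              # restore EAs
rm -f /mnt/brpt/OBJECTS/* /mnt/brpt/CATALOGS                             # remove recovery logs
umount -d /mnt/brpt
e2label /dev/lvm-Role_MDS/P1 lustre-MDT0000
mount -t lustre -o user_xattr,noscrub /dev/lvm-Role_MDS/P1 /mnt/mds1     # start MDT with OI scrub disabled
lctl get_param -n osd-ldiskfs.lustre-MDT0000.oi_scrub                    # reports 'recreated,inconsistent' instead of 'inconsistent'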


 Comments   
Comment by Saurabh Tandan (Inactive) [ 10/Feb/16 ]

Another instance found for interop tag 2.7.66 - EL6.7 Server/2.7.1 Client, build# 3316
https://testing.hpdd.intel.com/test_sets/535a0f2e-cc98-11e5-b80c-5254006e85c2

Another instance found for interop tag 2.7.66 - EL6.7 Server/2.5.5 Client, build# 3316
https://testing.hpdd.intel.com/test_sets/ad6dd9b2-cc9f-11e5-963e-5254006e85c2

Another instance found for interop tag 2.7.66 - EL7 Server/2.5.5 Client, build# 3316
https://testing.hpdd.intel.com/test_sets/781e3562-cc46-11e5-901d-5254006e85c2

Comment by nasf (Inactive) [ 11/Feb/16 ]

It is NOT necessary to test sanity-scrub/sanity-lfsck in interoperability mode. See LU-7144.
