Details
- Type: Bug
- Resolution: Unresolved
- Priority: Minor
- Labels: None
- Affects Version/s: Lustre 2.10.3
- Fix Version/s: None
- Environment: SLES12 SP3 Server/Client ldiskfs DNE, b2_10 build 96
- Severity: 3
- Rank: 9223372036854775807
Description
This issue was created by maloo for Saurabh Tandan <saurabh.tandan@intel.com>
This issue relates to the following test suite run: https://testing.hpdd.intel.com/test_sets/86a4bb42-3cf6-11e8-8f8a-52540065bddc
test_24 failed with the following error:
adding fops nodemaps failed 1
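The failure happens in the nodemap setup step rather than in test_24 itself: lctl nodemap_add returns nonzero when a nodemap of the same name already exists, and nodemap_test_setup() treats that as fatal. A minimal sketch of the failing sequence, run on the MGS node (onyx-48vm8 in this run), assuming a mounted Lustre filesystem; the second add reproduces the exact error in the log below:

# "c0" is the nodemap name nodemap_test_setup() uses in sanity-sec.sh
lctl nodemap_add c0   # succeeds only when no nodemap named c0 exists
lctl nodemap_add c0   # fails: "error: c0 existing nodemap name"
echo $?               # the nonzero status is what setup reports as "failed 1"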
test_logs:
== sanity-sec test 24: check nodemap proc files for LBUGs and Oopses ================================= 12:00:48 (1523386848)
CMD: onyx-48vm8,onyx-48vm9 /usr/sbin/lctl set_param mdt.*.identity_upcall=NONE
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0002.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
mdt.lustre-MDT0003.identity_upcall=NONE
CMD: onyx-48vm5 /usr/sbin/lctl list_nids | grep tcp | cut -f 1 -d @
CMD: onyx-48vm8 /usr/sbin/lctl nodemap_add c0
onyx-48vm8: error: c0 existing nodemap name
sanity-sec test_24: @@@@@@ FAIL: adding fops nodemaps failed 1
Trace dump:
= /usr/lib64/lustre/tests/test-framework.sh:5328:error()
= /usr/lib64/lustre/tests/sanity-sec.sh:1464:nodemap_test_setup()
= /usr/lib64/lustre/tests/sanity-sec.sh:1776:test_24()
= /usr/lib64/lustre/tests/test-framework.sh:5604:run_one()
= /usr/lib64/lustre/tests/test-framework.sh:5643:run_one_logged()
= /usr/lib64/lustre/tests/test-framework.sh:5490:run_test()
= /usr/lib64/lustre/tests/sanity-sec.sh:1784:main()
Dumping lctl log to /home/autotest/autotest/logs/test_logs/2018-04-09/lustre-b2_10-sles12sp3-x86_64--full--2_23_1__96___33663a7e-8f39-4015-95ec-ba0769bf55d5/sanity-sec.test_24.*.1523386848.log
CMD: onyx-48vm5,onyx-48vm6,onyx-48vm7,onyx-48vm8,onyx-48vm9 /usr/sbin/lctl dk > /home/autotest/autotest/logs/test_logs/2018-04-09/lustre-b2_10-sles12sp3-x86_64--full--2_23_1__96___33663a7e-8f39-4015-95ec-ba0769bf55d5/sanity-sec.test_24.debug_log.\$(hostname -s).1523386848.log;
dmesg > /home/autotest/autotest/logs/test_logs/2018-04-09/lustre-b2_10-sles12sp3-x86_64--full--2_23_1__96___33663a7e-8f39-4015-95ec-ba0769bf55d5/sanity-sec.test_24.dmesg.\$(hostname -s).1523386848.log
Resetting fail_loc on all nodes...CMD: onyx-48vm5,onyx-48vm6,onyx-48vm7,onyx-48vm8,onyx-48vm9 lctl set_param -n fail_loc=0 fail_val=0 2>/dev/null
done.
FAIL 24 (1s)
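Test 25 then fails at exactly the same point (below), which suggests the c0 nodemap was left behind by an earlier test or an aborted previous run and now breaks every test that calls nodemap_test_setup(). A hedged workaround sketch for clearing the stale entry on the MGS before rerunning; the nodemap.c0.id parameter path is an assumption about how nodemap state is exposed, not taken from this log:

lctl nodemap_del c0   # remove the leftover nodemap on the MGS
# nodemap.c0.id is an assumed parameter path; if get_param now fails, c0 is gone
lctl get_param nodemap.c0.id 2>/dev/null || echo "c0 removed"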
== sanity-sec test 25: test save and reload nodemap config =========================================== 12:00:49 (1523386849)
CMD: onyx-48vm8 /usr/sbin/lctl get_param -n version 2>/dev/null ||
/usr/sbin/lctl lustre_build_version 2>/dev/null ||
/usr/sbin/lctl --version 2>/dev/null | cut -d' ' -f2
Stopping clients: onyx-48vm5,onyx-48vm6 /mnt/lustre (opts:)
CMD: onyx-48vm5,onyx-48vm6 running=\$(grep -c /mnt/lustre' ' /proc/mounts);
if [ \$running -ne 0 ] ; then
echo Stopping client \$(hostname) /mnt/lustre opts:;
lsof /mnt/lustre || need_kill=no;
if [ x != x -a x\$need_kill != xno ]; then
pids=\$(lsof -t /mnt/lustre | sort -u);
if [ -n \"\$pids\" ]; then
kill -9 \$pids;
fi
fi;
while umount /mnt/lustre 2>&1 | grep -q busy; do
echo /mnt/lustre is still busy, wait one second && sleep 1;
done;
fi
Stopping client onyx-48vm5 /mnt/lustre opts:
Stopping client onyx-48vm6 /mnt/lustre opts:
CMD: onyx-48vm8,onyx-48vm9 /usr/sbin/lctl set_param mdt.*.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
mdt.lustre-MDT0003.identity_upcall=NONE
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0002.identity_upcall=NONE
CMD: onyx-48vm5 /usr/sbin/lctl list_nids | grep tcp | cut -f 1 -d @
CMD: onyx-48vm8 /usr/sbin/lctl nodemap_add c0
onyx-48vm8: error: c0 existing nodemap name
sanity-sec test_25: @@@@@@ FAIL: adding fops nodemaps failed 1
Trace dump:
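Since the same setup failure cascades into every following test, a defensive variant of the setup step would stop the cascade at its source. A sketch only, not the actual sanity-sec.sh code: NODEMAP_COUNT and the c0..cN naming are assumptions about how nodemap_test_setup() loops over client nodemaps, while error() is the test-framework.sh helper visible in the trace above:

for i in $(seq 0 $((NODEMAP_COUNT - 1))); do
    lctl nodemap_del c$i 2>/dev/null || true  # clear any stale nodemap first
    lctl nodemap_add c$i ||
        error "adding fops nodemaps failed $?"
done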
VVVVVVV DO NOT REMOVE LINES BELOW, Added by Maloo for auto-association VVVVVVV
sanity-sec test_24 - adding fops nodemaps failed 1