Details
Type: Bug
Resolution: Fixed
Priority: Blocker
Affects Version/s: Lustre 2.1.0, Lustre 1.8.8, Lustre 1.8.6, Lustre 1.8.9
Fix Version/s: None
Environment:
Lustre Branch: v1_8_6_RC2
Lustre Build: http://newbuild.whamcloud.com/job/lustre-b1_8/80/
e2fsprogs Build: http://newbuild.whamcloud.com/job/e2fsprogs-master/40/
Distro/Arch: RHEL6/x86_64 (patchless client, in-kernel OFED, kernel version: 2.6.32-131.2.1.el6)
             RHEL5/x86_64 (server, OFED 1.5.3.1, kernel version: 2.6.18-238.12.1.el5_lustre)
Severity: 3
Bugzilla ID: 23,206
Rank: 4972
Description
performance-sanity test_8 failed as follows:
===== mdsrate-stat-large.sh Test preparation: creating 125125 files.
+ /usr/lib64/lustre/tests/mdsrate --create --dir /mnt/lustre/mdsrate --nfiles 125125 --filefmt 'f%%d'
UUID                   Inodes      IUsed      IFree IUse% Mounted on
lustre-MDT0000_UUID    415069         50     415019    0% /mnt/lustre[MDT:0]
lustre-OST0000_UUID    125184         89     125095    0% /mnt/lustre[OST:0]
lustre-OST0001_UUID    125184         89     125095    0% /mnt/lustre[OST:1]
lustre-OST0002_UUID    125184         89     125095    0% /mnt/lustre[OST:2]
lustre-OST0003_UUID    125184         89     125095    0% /mnt/lustre[OST:3]
lustre-OST0004_UUID    125184         89     125095    0% /mnt/lustre[OST:4]
lustre-OST0005_UUID    125184         89     125095    0% /mnt/lustre[OST:5]
filesystem summary:    415069         50     415019    0% /mnt/lustre
+ chmod 0777 /mnt/lustre
drwxrwxrwx 5 root root 4096 Jun 13 13:41 /mnt/lustre
+ su mpiuser sh -c "/usr/lib64/openmpi/bin/mpirun -np 2 -machinefile /tmp/mdsrate-stat-large.machines /usr/lib64/lustre/tests/mdsrate --create --dir /mnt/lustre/mdsrate --nfiles 125125 --filefmt 'f%%d' "
0: client-10-ib starting at Mon Jun 13 13:49:30 2011
rank 0: open(f124836) error: Input/output error
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 4468 on node client-10-ib
exiting without calling "finalize". This may have caused other processes
in the application to be terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
rank 1: open(f124837) error: Input/output error
UUID                   Inodes      IUsed      IFree IUse% Mounted on
lustre-MDT0000_UUID    500096     124886     375210   25% /mnt/lustre[MDT:0]
lustre-OST0000_UUID    125184     124985        199  100% /mnt/lustre[OST:0]
lustre-OST0001_UUID    125184     125184          0  100% /mnt/lustre[OST:1]
lustre-OST0002_UUID    125184     125184          0  100% /mnt/lustre[OST:2]
lustre-OST0003_UUID    125184     123961       1223   99% /mnt/lustre[OST:3]
lustre-OST0004_UUID    125184     124633        551  100% /mnt/lustre[OST:4]
lustre-OST0005_UUID    125184     124377        807   99% /mnt/lustre[OST:5]
filesystem summary:    500096     124886     375210   25% /mnt/lustre
status    script    Total(sec)    E(xcluded)    S(low)
------------------------------------------------------------------------------------
test-framework exiting on error
performance-sanity test_8: @@@@@@ FAIL: test_8 failed with 1
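The lfs df -i output before and after the run points at OST object (inode) exhaustion rather than an MDT limit. A rough sanity check, as a sketch only (the stripe count of 6 is inferred from the "old 6 new 5" qos_shrink_lsm messages in the MDS dmesg below, and the paths are the ones the test uses):

# Sketch: compare OST object headroom with what the create phase needs.
lfs df -i /mnt/lustre                 # each OST starts with ~125095 free inodes
lfs getstripe -d /mnt/lustre/mdsrate  # default stripe count of the test directory
# With 6 stripes per file, 125125 files need about 6 * 125125 = 750750 OST
# objects, but the 6 OSTs only have about 6 * 125095 = 750570 free inodes
# in total, so the later creates run the OSTs out of objects.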
Dmesg on the MDS node:
Lustre: DEBUG MARKER: ===== mdsrate-stat-large.sh Test preparation: creating 125125 files.
Lustre: 8659:0:(lov_qos.c:459:qos_shrink_lsm()) using fewer stripes for object 278662: old 6 new 5
Lustre: 8681:0:(lov_qos.c:459:qos_shrink_lsm()) using fewer stripes for object 278663: old 6 new 5
Lustre: 8663:0:(lov_qos.c:459:qos_shrink_lsm()) using fewer stripes for object 279300: old 6 new 5
Lustre: 8663:0:(lov_qos.c:459:qos_shrink_lsm()) Skipped 636 previous similar messages
Lustre: 8662:0:(lov_qos.c:459:qos_shrink_lsm()) using fewer stripes for object 280599: old 6 new 3
Lustre: 8662:0:(lov_qos.c:459:qos_shrink_lsm()) Skipped 1298 previous similar messages
LustreError: 8685:0:(mds_open.c:441:mds_create_objects()) error creating objects for inode 281132: rc = -5
LustreError: 8685:0:(mds_open.c:826:mds_finish_open()) mds_create_objects: rc = -5
LustreError: 8681:0:(mds_open.c:441:mds_create_objects()) error creating objects for inode 281132: rc = -5
LustreError: 8681:0:(mds_open.c:826:mds_finish_open()) mds_create_objects: rc = -5
Lustre: DEBUG MARKER: performance-sanity test_8: @@@@@@ FAIL: test_8 failed with 1
Dmesg on the OSS node:
Lustre: DEBUG MARKER: ===== mdsrate-stat-large.sh Test preparation: creating 125125 files.
LustreError: 25861:0:(filter.c:3449:filter_precreate()) create failed rc = -28
LustreError: 27807:0:(filter.c:3449:filter_precreate()) create failed rc = -28
LustreError: 27804:0:(filter.c:3449:filter_precreate()) create failed rc = -28
LustreError: 27804:0:(filter.c:3449:filter_precreate()) Skipped 2 previous similar messages
Lustre: DEBUG MARKER: performance-sanity test_8: @@@@@@ FAIL: test_8 failed with 1
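For reference, the rc values in the two logs are ordinary errno codes: the OSTs fail object precreation with -28 (ENOSPC), the MDS then returns -5 (EIO) from mds_create_objects()/mds_finish_open(), and the client reports that EIO as the "Input/output error" on open(). A quick way to double-check the mapping (assuming the usual kernel header location on RHEL):

# Sketch: map the rc values from the server logs to errno names.
grep -w -e EIO -e ENOSPC /usr/include/asm-generic/errno-base.h
#   #define EIO      5   /* I/O error */
#   #define ENOSPC  28   /* No space left on device */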
Maloo report: https://maloo.whamcloud.com/test_sets/9b2e5a46-964f-11e0-9a27-52540025f9af
This is a known issue: bug 23206.
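Since the failure is OST object exhaustion, one way to keep this test from filling the OSTs is to give the create phase a single stripe per file. This is only a workaround sketch for local runs, not necessarily the fix tracked in bug 23206; the directory path is the one the test uses:

# Sketch: one stripe per file means ~125125 OST objects in total
# instead of roughly six times that many.
mkdir -p /mnt/lustre/mdsrate
lfs setstripe -c 1 /mnt/lustre/mdsrate
lfs getstripe -d /mnt/lustre/mdsrate   # verify stripe_count is now 1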
Attachments
Issue Links
Trackbacks
- Lustre 1.8.6-wc1 release testing tracker: Lustre 1.8.6-wc1 RC1 Tag: v1_8_6_RC1 Created Date: 2011-06-10. RC1 was DOA due to a build failure related to the tag name (LU-408).
- Lustre 1.8.8-wc1 release testing tracker: Lustre 1.8.8-wc1 RC1 Tag: v1_8_8_WC1_RC1 Build:
- Lustre 1.8.x known issues tracker: While testing against the Lustre b1_8 branch, we would hit known bugs which were already reported in Lustre Bugzilla (https://bugzilla.lustre.org/). In order to move away from relying on Bugzilla, we would create a JIRA ...
- Lustre 2.1.2 release testing tracker: Lustre 2.1.2 RC2 Tag: v2_1_2_RC2 Build:
- Changelog 1.8: version 1.8.7-wc1. Support for networks: socklnd (any kernel supported by Lustre), qswlnd (Qsnet kernel modules 5.20 and later), openiblnd (IbGold 1.8.2), o2iblnd (OFED 1.3, 1.4.1, 1.4.2, 1.5.1, 1.5.2, 1.5.3.1 and 1.5.3.2), gmlnd (GM 2.1)...