[LU-722] Create mdsdb exit with failure Created: 26/Sep/11  Updated: 28/Dec/11  Resolved: 28/Dec/11

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 1.8.6
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Marian Hromiak Assignee: WC Triage
Resolution: Duplicate Votes: 0
Labels: e2fsprogs
Environment:

RHEL 5.5 with kernel from Oracle Lustre 1.8.5 + e2fsprogs-1.41.90.wc3-0redhat


Severity: 3
Epic: metadata, server
Rank (Obsolete): 6550

 Description   

Hello,

when we tried to generate the mdsdb on Lustre 1.8.5 via e2fsprogs-1.41.90.wc3-0redhat, we got the following error:
e2fsck -nv --mdsdb /tmp/mdsdb /dev/vgl6mdt/lvol1

e2fsck 1.41.90.wc3 (28-May-2011)
device /dev/mapper/vgl6mdt-lvol1 mounted by lustre per /proc/fs/lustre/mds/l6-MDT0000/mntdev
Warning! /dev/vgl6mdt/lvol1 is mounted.
Warning: skipping journal recovery because doing a read-only filesystem check.
l6-MDTffff has been mounted 24 times without being checked, check forced.
Pass 1: Checking inodes, blocks, and sizes
Inode 11 creation time (Sun Feb 8 02:38:54 1970) invalid.
Clear? no

MDS: ost_idx 0 max_id 25693660
MDS: ost_idx 1 max_id 25622279
MDS: ost_idx 2 max_id 25965512
MDS: ost_idx 3 max_id 25795531
MDS: got 32 bytes = 4 entries in lov_objids
MDS: max_files = 3512383
MDS: num_osts = 4
mds info db file written
error: only handle v1/v3 LOV EAs, not 00000001
e2fsck: aborted

Could you tell us how we can dump the MDS database for a distributed client check?
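For reference, the usual Lustre 1.8 distributed-check procedure (per the Lustre operations manual) runs e2fsck with --mdsdb on the MDS, then with --ostdb on each OST, and finally lfsck on a client. The device paths, hostnames, and mount point below are placeholders for this cluster, not taken from the ticket:

```shell
# On the MDS: dump the MDS database (read-only check).
# /dev/vgl6mdt/lvol1 is the MDT device from this report.
e2fsck -nv --mdsdb /tmp/mdsdb /dev/vgl6mdt/lvol1

# Copy /tmp/mdsdb to each OSS, then on each OSS dump an OST database,
# referencing the MDS database (device path is a placeholder):
e2fsck -nv --mdsdb /tmp/mdsdb --ostdb /tmp/ostdb-0 /dev/ost0_device

# On a client with the filesystem mounted (mount point is a placeholder),
# run the distributed check against the collected databases:
lfsck -n --mdsdb /tmp/mdsdb --ostdb /tmp/ostdb-0 /tmp/ostdb-1 /mnt/lustre
```

In this case the --mdsdb step itself aborts with "only handle v1/v3 LOV EAs, not 00000001", so the databases cannot be generated until that e2fsprogs issue (LU-752) is fixed.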



 Comments   
Comment by Brian Murrell (Inactive) [ 28/Dec/11 ]

This looks like a duplicate of LU-752. Closing as such. Please feel free to reopen if the fix in LU-752 (when it becomes available) doesn't resolve your problem.

Generated at Sat Feb 10 01:09:48 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.