[LU-1367] Help finding bad LBA Created: 03/May/12  Updated: 15/Mar/14  Resolved: 15/Mar/14

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Minor
Reporter: Roger Spellman (Inactive) Assignee: Peter Jones
Resolution: Incomplete Votes: 0
Labels: None

Rank (Obsolete): 10367

 Description   

I am on the phone with our storage vendor. We have some corruption on the disk, and the file system went read-only. We suspect that there might be a firmware issue in the RAID controller.

When we run fsck, we see a couple of messages like this:

Free inodes count wrong for group #45590 (1, counted=0).

The storage vendor would like to know the LBA associated with this data. Can you tell me how to find that?

Thanks.



 Comments   
Comment by Andreas Dilger [ 03/May/12 ]

The "inode count wrong" is itself not associated with a particular block address. Instead, this is a side-effect of some previous error that was corrected (e.g. a corrupted or unreferenced inode being deleted), and now the summary count for the group that held that inode is incorrect.

It is possible to find the filesystem block number for a particular inode using the "debugfs -c -R 'imap <inode number>' /dev/XXX" command. Once the filesystem block number is found, it needs to be converted to a sector number (usually block_number * 8, for 4096-byte filesystem blocks and 512-byte sectors) and added to the starting offset of the partition (0, in the recommended case of using the whole device for a Lustre target).
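
For example, as a sketch only (the inode number and device name here are placeholders, and the exact imap output may vary by e2fsprogs version):

    # locate the filesystem block holding inode 123456 (hypothetical inode and device)
    debugfs -c -R 'imap <123456>' /dev/sdb
    #   Inode 123456 is part of block group 45590
    #       located at block 1493272, offset 0x0600
    # convert block to LBA: with 4096-byte blocks, each block is 8 sectors of 512 bytes
    echo $((1493272 * 8))
    # then add the partition's starting sector if the target is not the whole device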

Comment by Roger Spellman (Inactive) [ 03/May/12 ]

Andreas, Thanks for this info.
From this message, I don't see the inode number, just the group number.
If I knew the size of a group, I should be able to calculate the LBA where that particular group starts; then I could use the offsets of the inode count and free inodes fields to get their LBAs.
Do you know the size of the group?
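
(The per-group geometry is printed by dumpe2fs, so the arithmetic I have in mind would look roughly like the sketch below; the numeric values are assumed placeholders, not taken from our system:)

    # read the group geometry from the superblock
    dumpe2fs -h /dev/sdb | grep -Ei 'inodes per group|blocks per group|first block|block size'

    GROUP=45590
    INODES_PER_GROUP=8192     # placeholder; use the dumpe2fs value
    BLOCKS_PER_GROUP=32768    # placeholder; use the dumpe2fs value
    FIRST_DATA_BLOCK=0        # 0 for 4k-block filesystems, 1 for 1k
    BLOCK_SIZE=4096           # placeholder; use the dumpe2fs value

    # first inode in the group, first block of the group, and the group's starting LBA
    FIRST_INODE=$((GROUP * INODES_PER_GROUP + 1))
    FIRST_BLOCK=$((FIRST_DATA_BLOCK + GROUP * BLOCKS_PER_GROUP))
    LBA=$((FIRST_BLOCK * BLOCK_SIZE / 512))
    echo "group $GROUP: first inode $FIRST_INODE, first block $FIRST_BLOCK, LBA $LBA"

(If I understand the layout correctly, the on-disk free-inodes counter for a group actually lives in the group descriptor table rather than inside the group's own blocks, so that field's LBA would need a separate calculation.)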

Comment by John Fuchs-Chesney (Inactive) [ 05/Mar/14 ]

Roger – can we mark this issue as resolved?
Thanks,
~ jfc.

Comment by John Fuchs-Chesney (Inactive) [ 15/Mar/14 ]

Looks like we will not pursue this issue further.
