[LU-4722] IO Errors during the failover - SLES 11 SP2 - Lustre 2.4.2 Created: 06/Mar/14  Updated: 12/Dec/14  Resolved: 12/Dec/14

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.4.2
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Rajeshwaran Ganesan Assignee: Hongchao Zhang
Resolution: Fixed Votes: 0
Labels: mn4
Environment:

SLES 11 SP2
Lustre 2.4.2


Attachments: File 2014-02-19-client_messages_20140219(1).tgz     Text File 2014-02-19-es_lustre_showall_2014-02-19_161417(1).tar.bz2     Text File 2014-02-19-es_lustre_showall_2014-02-19_161417.tar.bz2     Text File 2014-02-19-server_lctl_dk_20140218.tgz     Text File check_SLES11SP3_20140210.txt     Text File check_SLES11SP3_20140320-1.txt    
Severity: 3
Rank (Obsolete): 12978

 Description   

We have applied the patch provided in LU-3645, and the customer still reports that the issue can be reproduced.

Attaching the latest set of logs.

The issue re-occurred on 18th Feb.



 Comments   
Comment by Peter Jones [ 06/Mar/14 ]

Hongchao

Could you please look into this one?

Thanks

Peter

Comment by Hongchao Zhang [ 07/Mar/14 ]

The EIO (-5) errors found in the logs:

pfscn1/logs/kern:Jul 23 17:41:27 mds1 kernel: : LustreError: 3107:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0xd72d sub-object on OST idx 0/1: rc = -5
pfscn1/logs/kern:Jul 23 17:41:27 mds1 kernel: : LustreError: 2574:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x5719 sub-object on OST idx 0/1: rc = -5
pfscn1/logs/kern:Jul 23 17:41:27 mds1 kernel: : LustreError: 3647:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x1 sub-object on OST idx 0/1: rc = -5
pfscn1/logs/kern:Aug  1 15:15:35 mds1 kernel: : LustreError: 12170:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0xba34 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Jul 15 14:37:35 mds2 kernel: : LustreError: 26395:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x49c2 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Jul 15 14:37:35 mds2 kernel: : LustreError: 10162:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x1896d sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Jul 15 14:37:35 mds2 kernel: : LustreError: 5067:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x2720 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Jul 22 15:14:44 mds2 kernel: : LustreError: 2522:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0xcdce sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Jul 22 15:14:44 mds2 kernel: : LustreError: 2523:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0xb96 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Jul 22 15:14:44 mds2 kernel: : LustreError: 2738:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x4d74 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Jul 22 15:14:44 mds2 kernel: : LustreError: 21918:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0xb61 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Jul 30 15:38:44 mds2 kernel: : LustreError: 17462:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x8bb8 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Jul 31 13:11:33 mds2 kernel: : LustreError: 17452:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x10e14 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Aug 27 15:01:01 mds2 kernel: : LustreError: 20790:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0xe06d sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Aug 27 15:01:51 mds2 kernel: : LustreError: 21441:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0xe064 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Sep  4 14:40:51 mds2 kernel: : LustreError: 15954:0:(lov_request.c:640:create_done()) qos_remedy_create failed -5
pfscn2/logs/kern:Sep  4 14:40:51 mds2 kernel: : LustreError: 15954:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x9a8/0x0 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Sep 25 15:17:39 pfscn2 kernel: : LustreError: 7308:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x3288 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Oct  8 16:51:51 pfscn2 kernel: : LustreError: 18998:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x11052 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Oct  8 16:51:51 pfscn2 kernel: : LustreError: 29944:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x10f5a sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Oct  8 16:51:51 pfscn2 kernel: : LustreError: 4755:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x18c7 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Oct  8 16:52:42 pfscn2 kernel: : LustreError: 18998:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x11053 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Oct  8 16:52:42 pfscn2 kernel: : LustreError: 19004:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x10f5b sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Oct  8 16:52:42 pfscn2 kernel: : LustreError: 4755:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x18c5 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Oct  8 16:53:32 pfscn2 kernel: : LustreError: 19004:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x18c6 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Oct  8 16:53:32 pfscn2 kernel: : LustreError: 18998:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x11054 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Oct  8 16:53:32 pfscn2 kernel: : LustreError: 29944:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x10f5d sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Oct  8 16:54:22 pfscn2 kernel: : LustreError: 29944:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x11055 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Oct  8 16:54:22 pfscn2 kernel: : LustreError: 18998:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x10f5e sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Oct  8 16:54:22 pfscn2 kernel: : LustreError: 19004:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x18c8 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Oct 21 11:15:02 pfscn2 kernel: : LustreError: 21381:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0xfe76 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Oct 23 10:48:39 pfscn2 kernel: : LustreError: 10250:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x10949 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Oct 23 10:54:28 pfscn2 kernel: : LustreError: 24178:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0xd948 sub-object on OST idx 0/1: rc = -5
pfscn2/logs/kern:Nov  4 21:33:56 pfscn2 kernel: : LustreError: 12037:0:(lov_request.c:579:lov_update_create_set()) error creating fid 0x1377d sub-object on OST idx 0/1: rc = -5

No IO error was found around 18th Feb.
Does it occur when failing over an OST, just like in LU-3645?

Comment by Rajeshwaran Ganesan [ 13/Mar/14 ]

The customer is running a very IO-intensive application and is doing some testing with SLES 11 and Lustre 2.4.2. One of their test cases is to reboot one of the OSSes and check the failover; this is not working properly.

Attaching their test scenario for your reference.

Comment by Rajeshwaran Ganesan [ 20/Mar/14 ]

We would like to know why the failover is not working properly on the SUSE clients. Do you need any other logs?

Comment by Hongchao Zhang [ 20/Mar/14 ]

Hi,
The logs of the OSS (both the primary OSS and the failover OSS) would be useful for tracking down the issue.
By the way, did only the SUSE clients have the problem, encountering -EIO after the OSS failover? If so, the logs of the SUSE clients would be very useful.

Thanks

Comment by Rajeshwaran Ganesan [ 20/Mar/14 ]

Hi,

We have tried a new set of failovers and got the same results. So far we only have SUSE clients, and we see the issue on them.

All the new log files have been uploaded to ftp.whamcloud.com under LU-4722.

Attaching the Test Scenario for your reference. (check_SLES11SP3_20140320-1.txt)

Comment by Rajeshwaran Ganesan [ 25/Mar/14 ]


Do you have any update on this issue? Have you had a chance to look at the logs?

Comment by Rajeshwaran Ganesan [ 25/Mar/14 ]

We have tried the same test on a RHEL client as well, and the result is the same. The issue is easily reproducible on both RHEL and SUSE Linux, so it is not specific to the SUSE Linux OS. It must be fixed.

Comment by Hongchao Zhang [ 26/Mar/14 ]

Hi,

Yes, we have checked the logs, and this should not be a duplicate of LU-3645.

This issue is related to the eviction of clients by the OST:

00010000:02000400:7.0:1395331134.233862:0:4638:0:(ldlm_lib.c:1581:target_start_recovery_timer()) pfscdat2-OST0000: Will be in recovery for at least 2:30, or until 1 client reconnects
00010000:02000400:7.0:1395331134.233874:0:4638:0:(ldlm_lib.c:1077:target_handle_connect()) pfscdat2-OST0000: Denying connection for new client 01e5a9c0-2c56-a938-4eea-7658375f5c04 (at 172.26.20.2@o2ib), waiting for all 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 2:30
00010000:00080000:18.0F:1395331134.557763:0:4632:0:(ldlm_lib.c:1033:target_handle_connect()) pfscdat2-OST0000: connection from 69c34fd9-73e4-94d1-e43e-5c26d4b1ac9f@172.26.20.5@o2ib recovering/t0 exp (null) cur 1395331134 last 0
00010000:02000400:18.0:1395331134.557774:0:4632:0:(ldlm_lib.c:1077:target_handle_connect()) pfscdat2-OST0000: Denying connection for new client 69c34fd9-73e4-94d1-e43e-5c26d4b1ac9f (at 172.26.20.5@o2ib), waiting for all 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 2:29
00010000:00080000:21.0F:1395331137.127888:0:4723:0:(ldlm_lib.c:1033:target_handle_connect()) pfscdat2-OST0000: connection from 0067c3cd-6643-b8bb-721b-08b1b92dccde@172.26.4.1@o2ib recovering/t0 exp (null) cur 1395331137 last 0
00010000:02000400:21.0:1395331137.127899:0:4723:0:(ldlm_lib.c:1077:target_handle_connect()) pfscdat2-OST0000: Denying connection for new client 0067c3cd-6643-b8bb-721b-08b1b92dccde (at 172.26.4.1@o2ib), waiting for all 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 2:27
00010000:00080000:7.0:1395331137.164613:0:4638:0:(ldlm_lib.c:1033:target_handle_connect()) pfscdat2-OST0000: connection from 798993fc-cca8-3e6f-6b5d-5ce89dd9836b@172.26.20.1@o2ib recovering/t146029313516 exp (null) cur 1395331137 last 0
00010000:02000400:7.0:1395331137.164620:0:4638:0:(ldlm_lib.c:1077:target_handle_connect()) pfscdat2-OST0000: Denying connection for new client 798993fc-cca8-3e6f-6b5d-5ce89dd9836b (at 172.26.20.1@o2ib), waiting for all 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 2:27
00010000:00080000:7.0:1395331137.212815:0:4638:0:(ldlm_lib.c:1033:target_handle_connect()) pfscdat2-OST0000: connection from 49ae061e-87a8-feb4-0a3c-96e0e66336ca@172.26.20.6@o2ib recovering/t0 exp (null) cur 1395331137 last 0

Only one client was found by pfscdat2-OST0000 to need recovery; after the recovery completed, the other clients were evicted by the OST.

I am now writing a debug patch to collect more debug info to track the issue.

Comment by Hongchao Zhang [ 27/Mar/14 ]

Hi,

Do you use both pfscn3 and pfscn4 to run the "pfscdat2-OST0000" service, with both nodes using the same device "/dev/mapper/ost_pfscdat2_0"?

According to the logs, pfscn4 found only one client (the connection from the MDT) saved on the OST disk (ost_pfscdat2_0):

00010000:02000400:7.0:1395331134.233862:0:4638:0:(ldlm_lib.c:1581:target_start_recovery_timer()) pfscdat2-OST0000: Will be in recovery for at least 2:30, or until 1 client reconnects
00010000:00080000:23.0:1395331183.996474:0:4723:0:(ldlm_lib.c:748:target_handle_reconnect()) connect export for UUID 'pfscdat2-MDT0000-mdtlov_UUID' at ffff880ffbe7a400, cookie 0x6b5630daed2ba3e
00010000:00080000:23.0:1395331183.996481:0:4723:0:(ldlm_lib.c:1033:target_handle_connect()) pfscdat2-OST0000: connection from pfscdat2-MDT0000-mdtlov_UUID@172.26.17.2@o2ib recovering/t150323924993 exp ffff880ffbe7a400 cur 1395331183 last 1395331132
00002000:00080000:23.0:1395331183.996492:0:4723:0:(ofd_obd.c:145:ofd_parse_connect_data()) pfscdat2-OST0000: Received MDS connection for group 0

There are some write operations from iccn2 (172.26.20.2), but none from iccn1 (and only two write operations from iccn3).

Could you please collect the debug log (lctl dk) before rebooting the OSS (pfscn4)?

Comment by Rajeshwaran Ganesan [ 28/Mar/14 ]

Hello,

You commented, "I am trying to write a debug patch to collect some debug info to track the issue now." May I know when the debug patch will be available?

I can install the debug patch and collect the debug logs as well as the lctl dk output.

Comment by Hongchao Zhang [ 28/Mar/14 ]

Hi,

The debug patch is at http://review.whamcloud.com/#/c/9845/; it prints more logs related to client allocation, deletion and initialization.
The "ha" and "inode" classes should also be added to the debug mask with "lctl set_param debug='+ha +inode'".

Thanks

Comment by Rajeshwaran Ganesan [ 31/Mar/14 ]

Hello - could you please provide us with the source RPM?

Does it need to be applied on all the MDSes/OSSes, or is it only required on the Lustre clients?

Thanks

Comment by Rajeshwaran Ganesan [ 03/Apr/14 ]

Hello - could you please provide me with the RPM containing the patch, and also let me know whether it is a client patch or a server patch.

Thanks,
Rajesh

Comment by Hongchao Zhang [ 04/Apr/14 ]

Hi Rajesh,

You can download the RPMs from http://build.whamcloud.com/job/lustre-reviews/22788/arch=x86_64,build_type=server,distro=el6,ib_stack=inkernel/artifact/artifacts/RPMS/x86_64/
The patch is only for the server side.

Regards,
Hongchao

Comment by Rajeshwaran Ganesan [ 09/Apr/14 ]

This is an EXAScaler setup and we use MOFED. Could you please provide the RPM built against MOFED?

Comment by Rajeshwaran Ganesan [ 15/Apr/14 ]

The patch has been applied and we have re-tried the test; the results are the same. Attaching the logs for your reference.

Comment by Hongchao Zhang [ 17/Apr/14 ]

Hi Rajesh,

The uploaded logs do not seem to be compressed correctly:

hongchao@eric:/scratch/ftp/uploads/LU-4722$ file 2014-04-14-client_lctl_dk_20140414.tgz 
2014-04-14-client_lctl_dk_20140414.tgz: HTML document text
hongchao@eric:/scratch/ftp/uploads/LU-4722$ file client_messages_20140414.tgz 
client_messages_20140414.tgz: HTML document text
hongchao@eric:/scratch/ftp/uploads/LU-4722$ file server_lctl_dk_20140414.tgz 
server_lctl_dk_20140414.tgz: HTML document text
hongchao@eric:/scratch/ftp/uploads/LU-4722$ 

Could you please check them? Thanks

Comment by Hongchao Zhang [ 17/Apr/14 ]

In pfscn3,

Apr 14 16:20:04 pfscn3 kernel: : LDISKFS-fs (dm-9): mounted filesystem with ordered data mode. quota=on. Opts:
Apr 14 16:20:05 pfscn3 kernel: : Lustre: pfscdat2-OST0000: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450
Apr 14 16:20:14 pfscn3 kernel: : Lustre: pfscdat2-OST0000: Will be in recovery for at least 2:30, or until 27 clients reconnect
Apr 14 16:22:44 pfscn3 kernel: : Lustre: pfscdat2-OST0000: recovery is timed out, evict stale exports
Apr 14 16:22:44 pfscn3 kernel: : Lustre: pfscdat2-OST0000: disconnecting 26 stale clients
Apr 14 16:22:44 pfscn3 kernel: : Lustre: pfscdat2-OST0000: Recovery over after 2:30, of 27 clients 1 recovered and 26 were evicted.
Apr 14 16:22:44 pfscn3 kernel: : Lustre: pfscdat2-OST0000: deleting orphan objects from 0x0:29711772 to 0x0:29714801
Apr 14 16:25:28 pfscn3 kernel: : Lustre: Failing over pfscdat2-OST0000
Apr 14 16:25:29 pfscn3 kernel: : Lustre: server umount pfscdat2-OST0000 complete
Apr 14 16:25:29 pfscn3 kernel: : LustreError: 137-5: pfscdat2-OST0000_UUID: not available for connect from 172.26.17.2@o2ib (no target)

In pfscn4

Apr 11 08:40:02 pfscn4 kernel: : LDISKFS-fs (dm-11): mounted filesystem with ordered data mode. quota=on. Opts:
Apr 11 08:40:02 pfscn4 kernel: : LustreError: 137-5: pfscdat2-OST0000_UUID: not available for connect from 172.26.16.28@o2ib (no target)
Apr 11 08:40:07 pfscn4 kernel: : Lustre: 3897:0:(client.c:1869:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent
Apr 11 08:41:17 pfscn4 kernel: : Lustre: pfscdat2-OST0000: Recovery over after 0:36, of 36 clients 36 recovered and 0 were evicted.
Apr 11 08:41:20 pfscn4 kernel: : Lustre: pfscdat2-OST0000: deleting orphan objects from 0x0:29380160 to 0x0:29380273

...

Apr 14 16:19:52 pfscn4 kernel: : Lustre: Failing over pfscdat2-OST0000
Apr 14 16:19:53 pfscn4 kernel: : Lustre: pfscdat2-OST0000: Not available for connect from 172.26.20.7@o2ib (stopping)
Apr 14 16:19:53 pfscn4 kernel: : Lustre: pfscdat2-OST0000: Not available for connect from 172.26.20.8@o2ib (stopping)
Apr 14 16:19:54 pfscn4 kernel: : LustreError: 137-5: pfscdat2-OST0000_UUID: not available for connect from 172.26.20.3@o2ib (no target)
Apr 14 16:19:57 pfscn4 kernel: : LustreError: 137-5: pfscdat2-OST0000_UUID: not available for connect from 172.26.17.2@o2ib (no target)
Apr 14 16:19:57 pfscn4 kernel: : LustreError: Skipped 1 previous similar message
Apr 14 16:19:58 pfscn4 kernel: : Lustre: server umount pfscdat2-OST0000 complete

...

Apr 14 16:25:43 pfscn4 kernel: : LDISKFS-fs (dm-11): mounted filesystem with ordered data mode. quota=on. Opts:
Apr 14 16:25:44 pfscn4 kernel: : Lustre: pfscdat2-OST0000: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450
Apr 14 16:25:47 pfscn4 kernel: : Lustre: pfscdat2-OST0000: Will be in recovery for at least 2:30, or until 1 client reconnects
Apr 14 16:25:47 pfscn4 kernel: : Lustre: pfscdat2-OST0000: Denying connection for new client 90192127-1d80-d056-258f-193df5a6691b (at 172.26.4.4@o2ib), waiting for all 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 2:30
Apr 14 16:25:49 pfscn4 kernel: : Lustre: pfscdat2-OST0000: Denying connection for new client 9b4ef354-d4a2-79a9-196f-2666496727d6 (at 172.26.20.9@o2ib), waiting for all 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 2:28
Apr 14 16:25:49 pfscn4 kernel: : Lustre: pfscdat2-OST0000: Denying connection for new client b1ee0c39-c7c4-6f09-d8ec-5bf4d696e919 (at 172.26.4.1@o2ib), waiting for all 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 2:27
Apr 14 16:25:49 pfscn4 kernel: : Lustre: Skipped 2 previous similar messages
Apr 14 16:25:51 pfscn4 kernel: : Lustre: pfscdat2-OST0000: Denying connection for new client b1ee0c39-c7c4-6f09-d8ec-5bf4d696e919 (at 172.26.4.1@o2ib), waiting for all 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 2:25
Apr 14 16:25:54 pfscn4 kernel: : Lustre: pfscdat2-OST0000: Denying connection for new client 42dd99f1-05b3-2c90-ca73-e27b00e04746 (at 172.26.4.8@o2ib), waiting for all 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 2:23
Apr 14 16:25:54 pfscn4 kernel: : Lustre: Skipped 10 previous similar messages
Apr 14 16:26:14 pfscn4 kernel: : Lustre: pfscdat2-OST0000: Denying connection for new client b1ee0c39-c7c4-6f09-d8ec-5bf4d696e919 (at 172.26.4.1@o2ib), waiting for all 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 2:02
Apr 14 16:26:14 pfscn4 kernel: : Lustre: Skipped 1 previous similar message
Apr 14 16:26:20 pfscn4 kernel: : Lustre: pfscdat2-OST0000: Recovery over after 0:33, of 1 clients 1 recovered and 0 were evicted.
Apr 14 16:26:20 pfscn4 kernel: : Lustre: pfscdat2-OST0000: deleting orphan objects from 0x0:29711772 to 0x0:29714833

In pfscn3
the device "dm-9" was mounted at 16:20:04 as pfscdat2-OST0000, during recovery, Lustre indeed found there were 27 clients (26 normal clients,
1 client from MDT), but it seems these 26 normal clients didn't recover with pfscn3 (the eviction condition after recovery timeout is either the client
didn't need recovery or there was no queued replay request). then these clients were deleted and pfscdat2-OST0000 was unmounted at 16:25:29.

In pfscn4
the device "dm-11" was mounted at 16:25:43 as pfscdat2-OST0000, but it didn't contain client records, then these clients thought it were evicted.

So the question becomes: why didn't these clients connect to pfscn3 to recover?

Comment by Hongchao Zhang [ 18/Apr/14 ]

Hi Rajesh,

Could you please check the content of "/proc/fs/lustre/osc/pfscdat2-OST0000-osc-XXXX/import" on one of your client nodes to verify whether
the failover_nids list contains both "172.26.4.3@o2ib" and "172.26.4.4@o2ib"?
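
As a quick check, something like the following on a client node should show the list (the wildcard is only illustrative, since the exact osc instance name depends on the client mount):

grep failover_nids /proc/fs/lustre/osc/pfscdat2-OST0000-osc-*/import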

Thanks!

Comment by Rajeshwaran Ganesan [ 25/Apr/14 ]

We erased the config and added the failover IPs using tunefs:

1. We would like to know why some of the OSTs have the failover IP listed multiple times; is there a known bug in 2.4.1?
2. Will having it listed multiple times cause any negative impact?

/proc/fs/lustre/osc/pfs2dat2-OST0012-osc-ffff881033429400/import:
failover_nids: [172.26.8.15@o2ib, 172.26.8.15@o2ib, 172.26.8.14@o2ib]

/proc/fs/lustre/osc/pfs2dat2-OST0013-osc-ffff881033429400/import:
failover_nids: [172.26.8.15@o2ib, 172.26.8.14@o2ib]

Comment by Hongchao Zhang [ 29/Apr/14 ]

What is the command line used to create the failover nid?

Could you please dump the config of your system?
At the MGS node:
lctl>dl (will show the device list, it will show the device number at the first column)
lctl>device #MGS (MGS index, say, 1)
lctl>dump_cfg pfs2dat2-client (it will dump the config to syslog)

Normally one NID (node ID) has only one UUID representing it (though one UUID can have more than one NID).
Having a duplicated failover NID could cause recovery problems, because the client tries these NIDs one by one and may miss the recovery window.
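
Incidentally, the failover parameters actually written on a target (as opposed to what the MGS config reports) can be inspected non-destructively with tunefs.lustre; a minimal sketch, using one of your device paths as an example:

tunefs.lustre --dryrun /dev/mapper/ost_pfs2dat2_0   # prints the stored parameters without changing anything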

Thanks

Comment by Rajeshwaran Ganesan [ 01/May/14 ]

tunefs.lustre --erase-params --mgsnode=172.26.8.12@o2ib \
    --mgsnode=172.26.8.13@o2ib --servicenode=172.26.8.14@o2ib \
    --servicenode=172.26.8.15@o2ib /dev/mapper/ost_pfs2dat2_0
tunefs.lustre --erase-params --mgsnode=172.26.8.12@o2ib \
    --mgsnode=172.26.8.13@o2ib --servicenode=172.26.8.14@o2ib \
    --servicenode=172.26.8.15@o2ib /dev/mapper/ost_pfs2dat2_1

2. I have uploaded the files to the FTP server.

3. Only the OSC on the MDT is affected by the duplicate entries. In any case, we should find the reason for the wrong behaviour
of the servers.

Comment by Hongchao Zhang [ 04/May/14 ]

The configs seem fine.
Did you also regenerate the config of pfscdat2? It adds "172.26.17.4@o2ib" and "172.26.17.3@o2ib" as the target nodes, whereas previously they were "172.26.4.4@o2ib" and "172.26.4.3@o2ib".

cmd=cf010 marker=10(0x1)pfscdat2-OST0000 'add osc'
cmd=cf005 nid=172.26.17.4@o2ib(0x50000ac1a1104) 0:(null)  1:172.26.17.4@o2ib
cmd=cf001 0:pfscdat2-OST0000-osc  1:osc  2:pfscdat2-clilov_UUID
cmd=cf003 0:pfscdat2-OST0000-osc  1:pfscdat2-OST0000_UUID  2:172.26.17.4@o2ib
cmd=cf005 nid=172.26.17.4@o2ib(0x50000ac1a1104) 0:(null)  1:172.26.17.4@o2ib
cmd=cf00b 0:pfscdat2-OST0000-osc  1:172.26.17.4@o2ib
cmd=cf005 nid=172.26.17.3@o2ib(0x50000ac1a1103) 0:(null)  1:172.26.17.3@o2ib
cmd=cf00b 0:pfscdat2-OST0000-osc  1:172.26.17.3@o2ib 
cmd=cf00d 0:pfscdat2-clilov  1:pfscdat2-OST0000_UUID  2:0  3:1
cmd=cf010 marker=10(0x2)pfscdat2-OST0000 'add osc'

Does the issue still occur with the newly generated config?

By the way, since only the OSC (OSP) on the MDT is affected, could you please dump the config of the MDT (the config name is fsname-MDT0000, e.g. pfscdat2-MDT0000)?
Thanks!

Comment by Hongchao Zhang [ 05/May/14 ]

/proc/fs/lustre/osc/pfs2dat2-OST0012-osc-ffff881033429400/import:
failover_nids: [172.26.8.15@o2ib, 172.26.8.15@o2ib, 172.26.8.14@o2ib]

Do you mount a Lustre client on the MDT? Normally the OSC (which is actually an OSP; there is a symlink in /proc/fs/lustre/osc/
for each entry in /proc/fs/lustre/osp/) name on the MDT is (fsname)-OSTxxxx-osc-MDT0000,
while the name on a client is (fsname)-OSTxxxx-osc-(address of superblock).

Comment by Rajeshwaran Ganesan [ 06/May/14 ]

There seems to be some confusion about our systems. The system with
prefix pfsc and IP addresses 172.26.4.x is our test system and the
system with prefix pfs2 and IP addresses 172.26.17.x is our production
system. We had seen the issue on both systems and therefore provided
logs from both systems. The configuration of both systems should be
very similar and also the config was newly generated on both systems.

After the config was newly generated the remaining issue is that
duplicate IP addresses appear as failover_nids on the servers only.
This appears for /proc/fs/lustre/osp/*/import on the MDS but
astonishingly only on pfsc and not on pfs2. It also appears for
/proc/fs/lustre/osc/*/import if we mount the file systems on servers
but astonishingly only for some OSTs. This appears if we mount the file
system on an OSS, on an MDS which is currently MDT for another file
system or on a currently unused (failover) MDS.

We are also uploading the logs to the FTP site.

Comment by Hongchao Zhang [ 06/May/14 ]

There is no error in these configs.

Could you please collect the debug logs (lctl dk > XXX.log) on the problematic node just after mounting the client (make sure "ha" is included in "/proc/sys/lnet/debug")?
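
For example, a sequence along these lines should work (the mount command, file system name and log file name below are only placeholders for the local setup):

cat /proc/sys/lnet/debug                 # confirm "ha" is listed
lctl set_param debug=+ha                 # add it if missing
mount -t lustre <mgsnid>:/<fsname> /mnt/lustre
lctl dk > /tmp/client_mount_debug.log    # dump the debug buffer right after the mount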
Thanks very much!

Comment by Rajeshwaran Ganesan [ 08/May/14 ]

Hello Hongchao,

I have uploaded the requested log files into ftp.whamcloud.com:/uploads/LU-4722

2014-05-08-SR30502_pfs2n17.llog.gz

Thanks,
Rajesh

Comment by Hongchao Zhang [ 08/May/14 ]

There is a bug in obd_str2uuid:

 static inline void obd_str2uuid(struct obd_uuid *uuid, const char *tmp)
 {
        strncpy((char *)uuid->uuid, tmp, sizeof(*uuid));
        uuid->uuid[sizeof(*uuid) - 1] = '\0';
 }

it take "tmp" also as a implicit "obd_uuid" type, but it isn't in all cases, such as in "class_add_uuid", the "tmp" is
"lustre_cfg_string(lcfg, 1)", and obd_str2uuid will copy some undefined data beyond the "tmp" to "uuid" and could cause two same
"uuid" in config were thought to be different.

The patch against b2_4 is tracked at http://review.whamcloud.com/#/c/10269/

Hi Rajesh,
Could you please try the patch at your site?
Thanks!

Comment by Hongchao Zhang [ 16/May/14 ]

Hi Rajesh,

What is the result of the test?

Thanks.

Comment by Rajeshwaran Ganesan [ 16/May/14 ]

We are in the process of applying the patch. I will get back to you with the results.

Comment by Peter Jones [ 28/Nov/14 ]

Any update Rajesh?

Comment by Rajeshwaran Ganesan [ 28/Nov/14 ]

We can close this LU. The customer has upgraded the servers and clients and no longer sees this issue.

Comment by Peter Jones [ 28/Nov/14 ]

Rajesh

To be clear, do you mean upgraded to a newer Lustre version or upgraded to use the patch supplied?

Peter

Comment by Peter Jones [ 05/Dec/14 ]

Rajesh?

Comment by Rajeshwaran Ganesan [ 05/Dec/14 ]

We have upgraded both the server side and the client side.

1. On the server side, the customer upgraded to 2.4.3 with the patch.

They no longer see the issue, and this ticket can be closed.

Comment by Peter Jones [ 05/Dec/14 ]

Thanks Rajesh. My concern is that this patch has not yet landed in newer versions of Lustre, so if the customer were to upgrade, this issue might reoccur for them.

Hongchao

This fix was for SLES11 SP2 clients. Is it still necessary for SLES11 SP3 clients, which are what is supported on master and b2_5? If so, could you please push this fix against master first to get it finalized and landed? If this issue is specific to SLES11 SP2 only, then I agree it is OK to mark the ticket as Resolved.

Thanks

Peter

Comment by Hongchao Zhang [ 12/Dec/14 ]

Hi, this patch isn't needed for SLES11 SP3.

Comment by Peter Jones [ 12/Dec/14 ]

Thanks for confirming, Hongchao. OK, then we do not need to land it in more current releases, and I will close the ticket.
