|
Niu,
Could you please comment on this one?
Thanks,
Peter
|
|
We found messages, referring to LU-617, that occur during recovery when one of the misbehaving clients reconnects. Is there any relation here? What is causing the smaller transaction number to be written to disk, or why is the client trying to replace a larger transaction?
(mdt_recovery.c:418:mdt_last_rcvd_update()) Trying to overwrite bigger transno:on-disk: 47302528652, new: 47302528651 replay: 0. see LU-617.
The message in context is below. Note that a "Trying to overwrite bigger transno" message does not appear for every reconnection.
Dec 9 06:09:57 atlas-oss3b4 kernel: [1096933.917564] Lustre: atlas2-OST009b: Client 3b57a9ed-bec6-9b0d-7da8-04d696e1a7f2 (at 10.36.202.138@o2ib) reconnecting
Dec 9 06:09:57 atlas-oss3b4 kernel: [1096933.942120] Lustre: Skipped 9 previous similar messages
Dec 9 06:10:17 atlas-oss3b4 kernel: [1096953.629963] LustreError: 25257:0:(events.c:450:server_bulk_callback()) event type 5, status -5, desc ffff88072b512000
Dec 9 06:10:38 atlas-oss4g2 kernel: [1096977.515250] LustreError: 137-5: atlas2-OST007d_UUID: not available for connect from 10.36.202.142@o2ib (no target)
Dec 9 06:10:38 atlas-oss4g4 kernel: [1096978.820263] LustreError: 137-5: atlas2-OST007f_UUID: not available for connect from 10.36.202.142@o2ib (no target)
Dec 9 06:12:35 atlas-mds3 kernel: [1096356.518178] LustreError: 32369:0:(mdt_recovery.c:418:mdt_last_rcvd_update()) Trying to overwrite bigger transno:on-disk: 47302528652, new: 47302528651 replay: 0. see LU-617.
Dec 9 06:12:42 atlas-oss4h4 kernel: [510636.716395] LustreError: 16128:0:(events.c:450:server_bulk_callback()) event type 5, status -5, desc ffff880fa8d40000
Dec 9 06:12:42 atlas-oss4h4 kernel: [510636.745560] LustreError: 16127:0:(events.c:450:server_bulk_callback()) event type 5, status -5, desc ffff880fc2d3a000
Dec 9 06:12:43 atlas-oss4h4 kernel: [510636.784482] LustreError: 16128:0:(events.c:450:server_bulk_callback()) event type 3, status -5, desc ffff880fa8d40000
Dec 9 06:12:43 atlas-oss4h4 kernel: [510636.815353] LustreError: 16127:0:(events.c:450:server_bulk_callback()) event type 3, status -5, desc ffff880fc2d3a000
Dec 9 06:12:43 atlas-oss4h4 kernel: [510636.815385] LustreError: 61456:0:(ldlm_lib.c:2722:target_bulk_io()) @@@ network error on bulk GET 0(1048576) req@ffff880fe2a0e400 x1453430544373389/t0(0) o4->10c57078-ce10-72c6-b97d-e7c9a32a240c@10.36.202.142@o2ib:0/0 lens 448/448 e 0 to 0 dl 1386588141 ref 1 fl Interpret:/2/0 rc 0/0
|
|
This seems very repeatable. If I try reading the file that the user process (rsync) is stuck on, I can see the same errors. It appears to be on OSS 10.36.225.185@o2ib, specifically OST009b, or obdindex 155; I found this from the client dmesg. I'm attaching a debug log from command 3 below, which hung, captured with the flags
ioctl neterror warning dlmtrace error emerg ha rpctrace config console
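(For reference, a debug log with those flags is typically captured on the client roughly as follows; the buffer size and output path here are only examples:)
lctl set_param debug="ioctl neterror warning dlmtrace error emerg ha rpctrace config console"
lctl set_param debug_mb=256                  # enlarge the kernel debug buffer so nothing is lost
lctl clear                                   # empty the buffer, then reproduce the hang
lctl dk /tmp/lustre-debug.$(hostname).log    # afterwards, dump the debug buffer to a file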
[root@dtn04 eq]# dmesg|tail
LustreError: 2398:0:(events.c:199:client_bulk_callback()) event type 1, status -103, desc ffff880838928000
Lustre: 2405:0:(client.c:1529:ptlrpc_expire_one_request()) @@@ Request x1453958660133786 sent from atlas2-OST009b-osc-ffff880c37da6800 to NID 10.36.225.185@o2ib 20s ago has failed due to network error (567s prior to deadline).
req@ffff880821b5a000 x1453958660133786/t0 o3->atlas2-OST009b_UUID@10.36.225.185@o2ib:6/4 lens 448/592 e 0 to 1 dl 1386624465 ref 2 fl Rpc:/2/0 rc 0/0
Lustre: 2405:0:(client.c:1529:ptlrpc_expire_one_request()) Skipped 7 previous similar messages
Lustre: atlas2-OST009b-osc-ffff880c37da6800: Connection to service atlas2-OST009b via nid 10.36.225.185@o2ib was lost; in progress operations using this service will wait for recovery to complete.
Lustre: Skipped 7 previous similar messages
LustreError: 11-0: an error occurred while communicating with 10.36.225.185@o2ib. The ost_connect operation failed with -16
LustreError: Skipped 1 previous similar message
1)
[root@dtn04 ~]# lfs getstripe /lustre/atlas2/chm045/scratch/dermaid/eq370_50pct/eq/cer1_cer2_bena_chol_4x.eq370.0001.coor
/lustre/atlas2/chm045/scratch/dermaid/eq370_50pct/eq/cer1_cer2_bena_chol_4x.eq370.0001.coor
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_stripe_offset: 155
    obdidx       objid       objid       group
       155       16095      0x3edf           0
       156       16094      0x3ede           0
       157       16165      0x3f25           0
       158       16102      0x3ee6           0
[root@dtn04 ~]# stat /lustre/atlas2/chm045/scratch/dermaid/eq370_50pct/eq/cer1_cer2_bena_chol_4x.eq370.0001.coor
File: `/lustre/atlas2/chm045/scratch/dermaid/eq370_50pct/eq/cer1_cer2_bena_chol_4x.eq370.0001.coor'
Size: 20807164 Blocks: 40640 IO Block: 2097152 regular file
Device: 2586730ah/629568266d Inode: 144117763932180829 Links: 1
Access: (0600/-rw-------) Uid: ( 9348/ dermaid) Gid: (18645/ dermaid)
Access: 2013-12-09 13:12:46.000000000 -0500
Modify: 2013-11-30 13:45:37.000000000 -0500
Change: 2013-11-30 13:45:37.000000000 -0500
[root@dtn04 ~]# ls -l /lustre/atlas2/chm045/scratch/dermaid/eq370_50pct/eq/cer1_cer2_bena_chol_4x.eq370.0001.coor
-rw------- 1 dermaid dermaid 20807164 Nov 30 13:45 /lustre/atlas2/chm045/scratch/dermaid/eq370_50pct/eq/cer1_cer2_bena_chol_4x.eq370.0001.coor
[root@dtn04 ~]# file /lustre/atlas2/chm045/scratch/dermaid/eq370_50pct/eq/cer1_cer2_bena_chol_4x.eq370.0001.coor
^C^C
2)
Now with a different file that works (no objects on obdindex 155):
[root@dtn04 eq]# stat cer1_cer2_bena_chol_4x.eq370.0000.coor
File: `cer1_cer2_bena_chol_4x.eq370.0000.coor'
Size: 20807164 Blocks: 40640 IO Block: 2097152 regular file
Device: 2586730ah/629568266d Inode: 144117777421110925 Links: 1
Access: (0640/-rw-r-----) Uid: ( 9348/ dermaid) Gid: (18645/ dermaid)
Access: 2013-12-09 15:49:01.000000000 -0500
Modify: 2013-11-28 14:15:42.000000000 -0500
Change: 2013-11-28 15:26:26.000000000 -0500
[root@dtn04 eq]# file cer1_cer2_bena_chol_4x.eq370.0000.coor
cer1_cer2_bena_chol_4x.eq370.0000.coor: data
3)
Now with another file that fails (object on obdindex 155)
cer1_cer2_bena_chol_4x.eq370.0001.xst
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_stripe_offset: 154
    obdidx       objid       objid       group
       154       16163      0x3f23           0
       155       16094      0x3ede           0
       156       16093      0x3edd           0
       157       16164      0x3f24           0
[root@dtn04 eq]# stat cer1_cer2_bena_chol_4x.eq370.0001.xst
File: `cer1_cer2_bena_chol_4x.eq370.0001.xst'
Size: 15610 Blocks: 32 IO Block: 2097152 regular file
Device: 2586730ah/629568266d Inode: 144117763932180577 Links: 1
Access: (0600/-rw-------) Uid: ( 9348/ dermaid) Gid: (18645/ dermaid)
Access: 2013-12-09 15:52:45.000000000 -0500
Modify: 2013-11-30 13:45:37.000000000 -0500
Change: 2013-11-30 13:45:37.000000000 -0500
[root@dtn04 eq]# file cer1_cer2_bena_chol_4x.eq370.0001.xst
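(For reference, every file in this tree that has an object on the suspect OST can be listed with lfs find; a sketch using the OST UUID and directory from above:)
lfs find --obd atlas2-OST009b_UUID /lustre/atlas2/chm045/scratch/dermaid/eq370_50pct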
|
|
Increased priority to Major due to customer request/comments:
"There are files on the filesystem that are inaccessible and I would like to rule out lingering issues from LU-4309 as a cause. At this point, our first priority is to recover the files that seem to be causing the hang."
|
|
Can we have logs from atlas-oss3b4, where atlas2-OST009b is located? It is still busy servicing a client bulk read, but the read fails and the server then rejects the same client's reconnection.
|
|
Still servicing client BULK reads:
Dec 13 15:44:50 atlas-oss3b4 kernel: [1476205.395677] LustreError: 11576:0:(ldlm_lib.c:2711:target_bulk_io()) Skipped 1 previous similar message
Dec 13 15:44:50 atlas-oss3b4 kernel: [1476205.425212] Lustre: atlas2-OST009b: Bulk IO read error with 3064e3d3-408f-7c76-94af-ce52cde93078 (at 10.36.202.138@o2ib), client will retry: rc -110
Dec 13 15:44:50 atlas-oss3b4 kernel: [1476205.466052] Lustre: Skipped 1 previous similar message
Dec 13 15:49:51 atlas-oss3b4 kernel: [1476505.278112] LustreError: 25256:0:(events.c:450:server_bulk_callback()) event type 5, status -5, desc ffff88070f632000
Dec 13 15:54:32 atlas-oss3b4 kernel: [1476785.683451] Lustre: atlas2-OST009b: Client 3064e3d3-408f-7c76-94af-ce52cde93078 (at 10.36.202.138@o2ib) reconnecting
Dec 13 15:54:32 atlas-oss3b4 kernel: [1476785.709484] Lustre: Skipped 9 previous similar messages
Dec 13 15:54:52 atlas-oss3b4 kernel: [1476805.464059] LustreError: 25256:0:(events.c:450:server_bulk_callback()) event type 5, status -5, desc ffff880717a48000
Dec 13 15:54:52 atlas-oss3b4 kernel: [1476805.585964] Lustre: atlas2-OST009b: Client 3064e3d3-408f-7c76-94af-ce52cde93078 (at 10.36.202.138@o2ib) refused reconnection, still busy with 1 active RPCs
Dec 13 15:54:52 atlas-oss3b4 kernel: [1476805.626704] Lustre: Skipped 1 previous similar message
Dec 13 15:54:52 atlas-oss3b4 kernel: [1476805.704087] LustreError: 11607:0:(ldlm_lib.c:2711:target_bulk_io()) @@@ Reconnect on bulk PUT req@ffff8807613a7c00 x1453958693735860/t0(0) o3->3064e3d3-408f-7c76-94af-ce52cde93078@10.36.202.138@o2ib:0/0 lens 448/432 e 0 to 0 dl 1386968389 ref 1 fl Interpret:/2/0 rc 0/0
Dec 13 15:54:52 atlas-oss3b4 kernel: [1476805.777371] LustreError: 11607:0:(ldlm_lib.c:2711:target_bulk_io()) Skipped 1 previous similar message
Dec 13 15:54:52 atlas-oss3b4 kernel: [1476805.815244] Lustre: atlas2-OST009b: Bulk IO read error with 3064e3d3-408f-7c76-94af-ce52cde93078 (at 10.36.202.138@o2ib), client will retry: rc -110
|
Just yesterday we identified 2 bad IB cables with high symbol error rates in our fabric that have since been disconnected. They were likely the cause for at least one of the issues.
It looks like these two bad cables caused the following series of problems? And the problems were not resolved even after you replaced the cables?
Dec 9 06:12:43 atlas-oss4h4 kernel: [510636.815385] LustreError: 61456:0:(ldlm_lib.c:2722:target_bulk_io()) @@@ network error on bulk GET 0(1048576) req@ffff880fe2a0e400 x1453430544373389/t0(0) o4->10c57078-ce10-72c6-b97d-e7c9a32a240c@10.36.202.142@o2ib:0/0 lens 448/448 e 0 to 0 dl 1386588141 ref 1 fl Interpret:/2/0 rc 0/0
Bulk reads always fail for some reason; I think that's why the client fell into the loop of bulk read timeout -> reconnect -> bulk read timeout -> reconnect ... Could you check whether the connection between the client and OST009b is healthy?
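(A few basic probes can answer that; a sketch using standard lctl and infiniband-diags tools, not specific to this site:)
lctl ping 10.36.225.185@o2ib    # from the 1.8 client: basic LNET reachability to the OSS hosting OST009b
perfquery                       # on the client and the OSS: local IB port error counters
ibqueryerrors                   # from any fabric node: scan for ports with non-zero error counters (e.g. symbol errors)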
|
|
We tried many different 1.8.9/RHEL6 clients after the bad links were removed. They all fall into the same loop when writing to various OSTs. It does not always appear to be OST009b, but it happens frequently enough that the hangs are very common. When sweeping through all of the OSTs, another client hung on OST0024, for example.
The example shown here was repeated after the cables were replaced. After the case had been opened, the system was rebooted and a particular user tried again to run the same rsync command that triggered the OST009b issues. The most recent logs from the Lustre server are from the recurrence of the issue.
Strangely, this is only on our 1.8.9 systems. 2.4 clients can access these files without issue, and a test writing to files striped on each OST in the filesystem runs to completion. The same test on a 1.8.9 client will hang on OST0024 as noted above.
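(For reference, the per-OST sweep is essentially the loop below; the test directory and OST index range are placeholders:)
DIR=/lustre/atlas2/<test-dir>            # placeholder test directory
for idx in $(seq 0 <num_osts-1>); do     # placeholder OST index range
    lfs setstripe -c 1 -i $idx $DIR/ostcheck.$idx                    # pin a single-stripe file to OST index $idx
    dd if=/dev/zero of=$DIR/ostcheck.$idx bs=1M count=16 conv=fsync  # a hang here points at that OST
done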
|
Strangely, this is only on our 1.8.9 systems. 2.4 clients can access these files without issue, and a test writing to files striped on each OST in the filesystem runs to completion. The same test on a 1.8.9 client will hang on OST0024 as noted above.
Do you use different routers for the 1.8 and 2.4 clients?
Liang, is there any simple way to check if the network between client and server is sane? Could you advise?
|
|
The test cases in this LU are all with clients on the same NI as the servers – no routers.
|
|
Could you run lnet-selftest to check the network between the problematic OST and the 1.8 clients?
I have posted the instructions for using lnet-selftest here, and the wrapper script is attached:
= Preparation =
The LNET Selftest kernel module must be installed and loaded on all targets in the test before the application is started. Identify the set of all systems that will participate in a session and ensure that the kernel module has been loaded. To load the kernel module:
modprobe lnet_selftest
Dependencies are automatically resolved and loaded by modprobe. This will make sure all the necessary modules are loaded: libcfs, lnet, lnet_selftest and any one of the klnds (kernel Lustre network drivers, e.g. ksocklnd, ko2iblnd, etc.).
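A quick way to verify that each node is ready before starting a session (a sketch, not part of the original instructions):
lsmod | grep lnet_selftest    # confirm the module and its dependencies are loaded
lctl list_nids                # note this node's NID for the from/to lists below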
Identify a "console" node from which to conduct the tests. This is the single system from which all LNET selftest commands will be executed. The console node owns the LNET selftest session and there should be only one active session on the network at any given time (strictly speaking one can run several LNET selftest sessions in parallel across a network but this is generally discouraged unless the sessions are carefully isolated).
It is strongly recommended that a survey and analysis of raw network performance between the target systems is carried out prior to running the LNET selftest benchmark. This will help to identify and measure any performance overhead introduced by LNET. The HPDD SE team has recently been evaluating Netperf for this purpose on TCP/IP-based networks with good results. Refer to the HPDD SE Netperf page for details on how to manage this exercise.
= Using the Wrapper Script =
Use the LNET Selftest wrapper to execute the test cases referenced in this document. The header of the script has some variables that need to be set in accordance with the target environment; without changes, the script is very unlikely to operate correctly, if at all. The header settings for an example test case are listed below.
Single Client Throughput – LNET Selftest Read (2 Nodes, 1:1)
Used to establish point-to-point unidirectional read performance between two nodes.
Set the wrapper up as follows:
#Output file
ST=lst-output-$(date +%Y-%m-%d-%H:%M:%S)
# Concurrency
CN=32
#Size
SZ=1M
# Length of time to run test (secs)
TM=60
# Which BRW test to run (read or write)
BRW=read
# Checksum calculation (simple or full)
CKSUM=simple
# The LST "from" list -- e.g. Lustre clients. Space separated list of NIDs.
LFROM="10.73.2.21@tcp"
# The LST "to" list -- e.g. Lustre servers. Space separated list of NIDs.
LTO="10.73.2.22@tcp"
CN: the concurrency setting simulates the number of threads performing communication. The LNET Selftest default is 1, which is not enough to properly exercise the connection. Set to at least 16, but experiment with higher values (32 or 64 being reasonable choices).
SZ: the size setting determines the size of the IO transaction. For bandwidth (throughput) measurements, use 1M.
TM: the test time in seconds, i.e. how long to run the benchmark. Set to a reasonable number in order to ensure collection of sufficient data to extrapolate a meaningful average (at least 60 seconds).
BRW: The Bulk Read/Write test to use. There are only two choices "read" or "write".
CKSUM: The checksum checking method. Choose either "simple" or "full".
LFROM: a space-separated list of NIDs that represent the "from" list (or source) in LNET Selftest. This is often a set of clients.
LTO: a space-separated list of NIDs that represent the "to" list (or destination) in LNET Selftest. This is often a set of servers.
Change the LFROM and LTO lists as required.
Run the script several times, changing the concurrency setting at the start of every new run. Use the sequence 1, 2, 4, 8, 16, 32, 64, 128. Modify the output filename for each run so that it is clear which results have been captured into each file.
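For orientation, the wrapper essentially boils down to a handful of lst commands; a minimal sketch using the settings above (the wrapper's exact internals are an assumption here):
export LST_SESSION=$$                  # lst commands require a session identifier
lst new_session read_test
lst add_group lfrom 10.73.2.21@tcp     # the LFROM list (clients)
lst add_group lto 10.73.2.22@tcp       # the LTO list (servers)
lst add_batch bulk_read
lst add_test --batch bulk_read --from lfrom --to lto --concurrency 32 brw read check=simple size=1M
lst run bulk_read
lst stat lfrom lto &                   # periodic throughput and RPC statistics
sleep 60                               # TM: length of the run
kill %1
lst stop bulk_read
lst end_session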
|
|
Hi Niu,
Thanks for the suggestion. We are open to trying lnet selftest, but while the system is in production, we would expect other client activity to the servers mentioned here to interfere with the lnet selftest and vice versa. Do you expect that we'd see an issue at a low concurrency setting, such that we wouldn't be subject to contention from other clients? If so, then the results will be meaningful. If not, we will need to wait two weeks before we have an interval where we can perform isolated testing. Please advise.
Another thought to address whether there is an issue with the network:
- verify the issue is still present with a selected client running Lustre 1.8.9
- run the rsync application across a large directory such that objects from every OST are read/written
- upgrade the selected client to 2.4.1
- repeat the rsync test and look for any signs of network loss
What to look for:
When the issue with the 1.8.9 clients occurs, it is very obvious based on the number of Lustre error messages in our server logs. I'd consider the lack of these messages to be a sign of a successful test.
Would this be useful, or is it too high-level?
|
Thanks for the suggestion. We are open to trying lnet selftest, but while the system is in production, we would expect other client activity to the servers mentioned here to interfere with the lnet selftest and vice versa. Do you expect that we'd see an issue at a low concurrency setting, such that we wouldn't be subject to contention from other clients? If so, then the results will be meaningful. If not, we will need to wait two weeks before we have an interval where we can perform isolated testing. Please advise.
Was there heavy load on the servers when the bulk IO timeout was seen? If the problem always happens regardless of server load, I would think low-concurrency testing is meaningful.
Another thought to address whether there is an issue with the network:
- verify the issue is still present with a selected client running Lustre 1.8.9
- run the rsync application across a large directory such that objects from every OST are read/written
- upgrade the selected client to 2.4.1
- repeat the rsync test and look for any signs of network loss
What to look for:
When the issue with the 1.8.9 clients occurs, it is very obvious based on the number of Lustre error messages in our server logs. I'd consider the lack of these messages to be a sign of a successful test.
Yes, I think this approach can help us see whether it's a network problem or a Lustre version problem.
|
|
I'm afraid it might not be very helpful to try selftest if there is other activity on the client; it's better to have a "neat" run to diagnose a network issue.
Niu, could it relate to LU-793? The patch is already in master but not in the 2.4 releases.
|
|
This is happening with 1.8 clients. I looked at the LU-793 patch to see if I could back-port it and give it a try. The problem is that target_bulk_io doesn't even exist in 1.8!!! I'm not familiar enough with 1.8 internals to do a proper back-port of this magnitude.
|
|
As an update to this case, we tried running the rsync test with a Lustre 1.8.9 client, but on an FDR ConnectX-3 card, and could not reproduce the issue. Right now it appears to be an issue with the IB card or the associated driver. It does not appear to be a Lustre issue.
|
|
Unfortunately, the issue is still present independent of the IB hardware.
I'm unsure what contributed to the string of tests succeeding, but we are now back in the same pattern of reconnects on many different 1.8.9 clients. When this happens, processes that access the filesystem remain in the D state and the log messages given above repeat.
I believe the best way to pick this up is to try the 2.4 client on identical hardware to compare, and then also an LNET selftest on a dedicated client. We would have to deal with other activity on the server, but if I read Liang's comment correctly, the worry is client activity.
|
|
Evicting the client from the OST it is connect-cycling with will cause an LBUG:
<3>LustreError: 167-0: This client was evicted by atlas2-OST009b; in progress operations using this service will fail.
<0>LustreError: 2405:0:(osc_request.c:2357:brw_interpret()) ASSERTION(!(aa->aa_oa->o_valid & OBD_MD_FLHANDLE)) failed
<3>LustreError: 17528:0:(rw.c:1339:ll_issue_page_read()) page ffffea001ba4b0d0 map ffff880839a9aa30 index 0 flags c0000000000821 count 5 priv ffff880822484d80: read queue failed: rc -5
<0>LustreError: 2405:0:(osc_request.c:2357:brw_interpret()) LBUG
<4>Pid: 2405, comm: ptlrpcd
<4>
<4>Call Trace:
<1>BUG: unable to handle kernel NULL pointer dereference at (null)
<1>IP: [<(null)>] (null)
<4>PGD 81984f067 PUD 8392fc067 PMD 0
<4>Oops: 0010 [#1] SMP
<4>last sysfs file: /sys/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI000D:00/power1_average_interval
<4>CPU 10
<4>Modules linked in: iptable_nat nf_nat mptctl mptbase nfs lockd fscache auth_rpcgss nfs_acl mgc(U) lustre(U) lov(U) mdc(U) lquota(U) osc(U) ptlrpc(U) obdclass(U) lvfs(U) ko2iblnd(U) lnet(U) libcfs(U) autofs4 sunrpc nf_conntrack_netbios_ns nf_conntrack_broadcast ipt_REJECT xt_comment nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack iptable_filter ip_tables ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_addr ipv6 tcp_bic power_meter sg mlx4_ib ib_sa ib_mad ib_core mlx4_en mlx4_core hpilo hpwdt bnx2 myri10ge(U) dca microcode serio_raw k10temp amd64_edac_mod edac_core edac_mce_amd i2c_piix4 shpchp ext4 jbd2 mbcache sd_mod crc_t10dif hpsa pata_acpi ata_generic pata_atiixp ahci radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
<4>
<4>Pid: 2405, comm: ptlrpcd Not tainted 2.6.32-358.23.2.el6.x86_64 #1 HP ProLiant DL385 G7
<4>RIP: 0010:[<0000000000000000>] [<(null)>] (null)
<4>RSP: 0018:ffff88041b9ddb48 EFLAGS: 00010246
<4>RAX: ffff88041b9ddbac RBX: ffff88041b9ddba0 RCX: ffffffffa0558260
<4>RDX: ffff88041b9ddbe0 RSI: ffff88041b9ddba0 RDI: ffff88041b9dc000
<4>RBP: ffff88041b9ddbe0 R08: 0000000000000000 R09: 0000000000000000
<4>R10: 0000000000000003 R11: 0000000000000000 R12: 000000000000cbe0
<4>R13: ffffffffa0558260 R14: 0000000000000000 R15: ffff88044e483fc0
<4>FS: 00007f0e92ef9700(0000) GS:ffff88044e480000(0000) knlGS:00000000f77026c0
<4>CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
<4>CR2: 0000000000000000 CR3: 0000000815d93000 CR4: 00000000000007e0
<4>DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
<4>DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
<4>Process ptlrpcd (pid: 2405, threadinfo ffff88041b9dc000, task ffff880438ed3500)
<4>Stack:
<4> ffffffff8100e4a0 ffff88041b9ddbac ffff880438ed3500 ffffffffa07cdf78
<4><d> 00000000a07ce9a8 ffff88041b9dc000 ffff88041b9ddfd8 ffff88041b9dc000
<4><d> 000000000000000a ffff88044e480000 ffff88041b9ddbe0 ffff88041b9ddbb0
<4>Call Trace:
<4> [<ffffffff8100e4a0>] ? dump_trace+0x190/0x3b0
<4> [<ffffffffa054c835>] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
<4> [<ffffffffa054ce65>] lbug_with_loc+0x75/0xe0 [libcfs]
<4> [<ffffffffa05555d6>] libcfs_assertion_failed+0x66/0x70 [libcfs]
<4> [<ffffffffa07c43ff>] brw_interpret+0xcff/0xe90 [osc]
<4> [<ffffffff810097cc>] ? __switch_to+0x1ac/0x320
<4> [<ffffffff8105ea69>] ? find_busiest_queue+0x69/0x150
<4> [<ffffffffa0702a9a>] ptlrpc_check_set+0x24a/0x16b0 [ptlrpc]
<4> [<ffffffff81081b5b>] ? try_to_del_timer_sync+0x7b/0xe0
<4> [<ffffffff81081be2>] ? del_timer_sync+0x22/0x30
<4> [<ffffffffa07397ad>] ptlrpcd_check+0x18d/0x270 [ptlrpc]
<4> [<ffffffffa0739a50>] ptlrpcd+0x160/0x270 [ptlrpc]
<4> [<ffffffff81063990>] ? default_wake_function+0x0/0x20
<4> [<ffffffff8100c0ca>] child_rip+0xa/0x20
<4> [<ffffffffa07398f0>] ? ptlrpcd+0x0/0x270 [ptlrpc]
<4> [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
<4>Code: Bad RIP value.
<1>RIP [<(null)>] (null)
<4> RSP <ffff88041b9ddb48>
<4>CR2: 0000000000000000
|
|
The ASSERTION(!(aa->aa_oa->o_valid & OBD_MD_FLHANDLE)) in brw_interpret was introduced by LU-2703.
Hongchao, could you take a look? What's the intention of this assert?
|
|
Hi Niu, this issue is tracked at LU-3067 and there is a patch in it. Thanks!
|
|
I wanted to update this case: we are still experiencing the timeouts; however, the patch for LU-2703 took care of the LBUG. That has significantly sped up the process of reproducing the error under different software stacks and on different systems. We noticed that 1.8.6 clients on SLES do not have this issue, so we are trying to roll back the RHEL clients to earlier Lustre versions.
|
|
The failure was present with 1.8.8. Below are the logs from starting the Lustre 1.8.8 client, mounting the filesystems, and then running file * in the directory that has been causing issues. The symptoms were the same: the application would hang. Recognizing the problem, I went to unmount to try the next case (this has never worked in the past, and before LU-3067 we had to reboot the client). As usual that failed, but the logs have some different messages with 1.8.8.
- data mismatch with ino 144115188193296385/0
Does that tell us anything new?
Jan 8 08:33:17 client kernel: Lustre: Listener bound to ib0:10.36.202.142:987:mlx4_0
Jan 8 08:33:17 client kernel: Lustre: Added LNI 10.36.202.142@o2ib [63/2560/0/180]
Jan 8 08:33:17 client kernel: Lustre: Build Version: lustre/scripts--PRISTINE-2.6.32-358.23.2.el6.x86_64
Jan 8 08:33:18 client kernel: Lustre: Lustre Client File System; http://www.lustre.org/
Jan 8 08:33:18 client kernel: Lustre: MGC10.36.226.77@o2ib: Reactivating import
Jan 8 08:33:18 client kernel: Lustre: Server MGS version (2.4.1.0) is much newer than client version (1.8.8)
Jan 8 08:33:18 client kernel: Lustre: 8437:0:(obd_config.c:1127:class_config_llog_handler()) skipping 'lmv' config: cmd=cf001,clilmv:lmv
Jan 8 08:33:18 client kernel: Lustre: 8437:0:(obd_config.c:1127:class_config_llog_handler()) skipping 'lmv' config: cmd=cf014,clilmv:linkfarm-MDT0000_UUID
Jan 8 08:33:18 client kernel: Lustre: Server linkfarm-MDT0000_UUID version (2.4.1.0) is much newer than client version (1.8.8)
Jan 8 08:33:18 client kernel: Lustre: client supports 64-bits dir hash/offset!
Jan 8 08:33:18 client kernel: Lustre: Client linkfarm-client(ffff8803f4fd1000) mount complete
Jan 8 08:33:19 client kernel: Lustre: Server atlas1-MDT0000_UUID version (2.4.1.0) is much newer than client version (1.8.8)
Jan 8 08:33:19 client kernel: Lustre: client supports 64-bits dir hash/offset!
Jan 8 08:33:19 client kernel: Lustre: Client atlas1-client(ffff8803f4fd7400) mount complete
Jan 8 08:33:19 client kernel: Lustre: 8445:0:(obd_config.c:1127:class_config_llog_handler()) skipping 'lmv' config: cmd=cf001,clilmv:lmv
Jan 8 08:33:21 client kernel: Lustre: Server atlas2-MDT0000_UUID version (2.4.1.0) is much newer than client version (1.8.8)
Jan 8 08:33:21 client kernel: Lustre: client supports 64-bits dir hash/offset!
Jan 8 08:33:21 client kernel: Lustre: Client atlas2-client(ffff880c397d6400) mount complete
Jan 8 08:33:22 client kernel: Lustre: MGC10.36.227.200@o2ib: Reactivating import
Jan 8 08:33:22 client kernel: Lustre: Client widow1-client(ffff880fd01dec00) mount complete
Jan 8 08:35:14 client kernel: LustreError: 8410:0:(events.c:198:client_bulk_callback()) event type 1, status -103, desc ffff88080e1c4000
Jan 8 08:35:14 client kernel: Lustre: setting import linkfarm-MDT0000_UUID INACTIVE by administrator request
Jan 8 08:35:14 client kernel: LustreError: 9877:0:(namei.c:256:ll_mdc_blocking_ast()) ### data mismatch with ino 144115188193296385/0 (ffff88039adf6cd0) ns: linkfarm-MDT0000-mdc-ffff8803f4fd1000 lock: ffff88082165bc00/0xf06137b3b58e545f lrc: 3/0,0 mode: PR/PR res: 8589934599/1 bits 0x13 rrc: 2 type: IBT flags: 0x2002c90 remote: 0xdca99b6a04cb5ba3 expref: -99 pid: 8466 timeout: 0
Jan 8 08:35:14 client kernel: Lustre: setting import linkfarm-OST0000_UUID INACTIVE by administrator request
Jan 8 08:35:14 client kernel: Lustre: client linkfarm-client(ffff8803f4fd1000) umount complete
Jan 8 08:35:14 client kernel: LustreError: 9879:0:(namei.c:256:ll_mdc_blocking_ast()) ### data mismatch with ino 144115188193296385/0 (ffff8804249bad10) ns: atlas1-MDT0000-mdc-ffff8803f4fd7400 lock: ffff88082165ba00/0xf06137b3b58e5466 lrc: 3/0,0 mode: PR/PR res: 8589934599/1 bits 0x13 rrc: 2 type: IBT flags: 0x2002c90 remote: 0xdff80ba35f064331 expref: -99 pid: 8466 timeout: 0
Jan 8 08:35:16 client kernel: Lustre: client atlas1-client(ffff8803f4fd7400) umount complete
Jan 8 08:35:16 client kernel: Lustre: setting import atlas2-MDT0000_UUID INACTIVE by administrator request
Jan 8 08:35:16 client kernel: LustreError: 9881:0:(namei.c:256:ll_mdc_blocking_ast()) ### data mismatch with ino 144115188193296385/0 (ffff880c39246cd0) ns: atlas2-MDT0000-mdc-ffff880c397d6400 lock: ffff88082165b800/0xf06137b3b58e546d lrc: 3/0,0 mode: PR/PR res: 8589934599/1 bits 0x13 rrc: 2 type: IBT flags: 0x2002c90 remote: 0x7f4ba8a4cefab31f expref: -99 pid: 8466 timeout: 0
Jan 8 08:35:16 client kernel: LustreError: 2866:0:(mdc_locks.c:653:mdc_enqueue()) ldlm_cli_enqueue error: -108
Jan 8 08:35:16 client kernel: LustreError: 2866:0:(file.c:3331:ll_inode_revalidate_fini()) failure -108 inode 144115188193296385
Jan 8 08:35:17 client kernel: Lustre: client widow-client(ffff88081da07c00) umount complete
Jan 8 08:35:17 client kernel: Lustre: setting import widow2-MDT0000_UUID INACTIVE by administrator request
Jan 8 08:35:18 client kernel: Lustre: client widow2-client(ffff88038e15b400) umount complete
Jan 8 08:35:18 client kernel: LustreError: 10316:0:(ldlm_request.c:1039:ldlm_cli_cancel_req()) Got rc -108 from cancel RPC: canceling anyway
Jan 8 08:35:18 client kernel: LustreError: 10316:0:(ldlm_request.c:1597:ldlm_cli_cancel_list()) ldlm_cli_cancel_list: -108
Jan 8 08:35:18 client kernel: LustreError: 10316:0:(ldlm_request.c:1039:ldlm_cli_cancel_req()) Got rc -108 from cancel RPC: canceling anyway
Jan 8 08:35:18 client kernel: LustreError: 10316:0:(ldlm_request.c:1597:ldlm_cli_cancel_list()) ldlm_cli_cancel_list: -108
Jan 8 08:35:18 client kernel: LustreError: 10318:0:(mdc_locks.c:653:mdc_enqueue()) ldlm_cli_enqueue error: -108
Jan 8 08:35:18 client kernel: LustreError: 10318:0:(file.c:3331:ll_inode_revalidate_fini()) failure -108 inode 144115188193296385
Jan 8 08:35:32 client kernel: LustreError: 2866:0:(llite_lib.c:1774:ll_statfs_internal()) mdc_statfs fails: rc = -108
Jan 8 08:35:48 client kernel: LustreError: 2866:0:(llite_lib.c:1774:ll_statfs_internal()) mdc_statfs fails: rc = -108
Jan 8 08:36:04 client kernel: LustreError: 2866:0:(llite_lib.c:1774:ll_statfs_internal()) mdc_statfs fails: rc = -108
Jan 8 08:36:19 client kernel: LustreError: 2866:0:(llite_lib.c:1774:ll_statfs_internal()) mdc_statfs fails: rc = -108
Jan 8 08:36:35 client kernel: LustreError: 2866:0:(llite_lib.c:1774:ll_statfs_internal()) mdc_statfs fails: rc = -108
Jan 8 08:36:50 client kernel: LustreError: 2866:0:(llite_lib.c:1774:ll_statfs_internal()) mdc_statfs fails: rc = -108
Jan 8 08:37:06 client kernel: LustreError: 2866:0:(llite_lib.c:1774:ll_statfs_internal()) mdc_statfs fails: rc = -108
Jan 8 08:37:37 client kernel: LustreError: 2866:0:(llite_lib.c:1774:ll_statfs_internal()) mdc_statfs fails: rc = -108
Jan 8 08:38:24 client kernel: LustreError: 2866:0:(llite_lib.c:1774:ll_statfs_internal()) mdc_statfs fails: rc = -108
Jan 8 08:39:42 client kernel: LustreError: 2866:0:(llite_lib.c:1774:ll_statfs_internal()) mdc_statfs fails: rc = -108
Jan 8 08:42:02 client kernel: LustreError: 2866:0:(llite_lib.c:1774:ll_statfs_internal()) mdc_statfs fails: rc = -108
|
Jan 8 08:35:14 client kernel: LustreError: 9877:0:(namei.c:256:ll_mdc_blocking_ast()) ### data mismatch with ino 144115188193296385/0 (ffff88039adf6cd0) ns: linkfarm-MDT0000-mdc-ffff8803f4fd1000 lock: ffff88082165bc00/0x
This is LU-1488.
|
|
The problem has been solved. We found the issue within the IB fabric and fixed it by adjusting the weights of certain links. Routes between certain OSSes and clients were taking an unexpected path through an older DDR switch with MTU 2048. Since the hosts with the RHEL6 OFED supported an MTU of 4096, as did the newer switches, this worked fine most of the time. Setting the ko2iblnd parameter ib_mtu=2048 provided us instant relief before the routes could be adjusted.
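(For the record, such a workaround is typically made persistent with a module option on the affected clients; the exact file name below is just a common convention:)
# /etc/modprobe.d/ko2iblnd.conf -- force the o2ib LND to use a 2048-byte IB MTU
options ko2iblnd ib_mtu=2048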
Thanks for the help narrowing the issue down on this case.
|
|
Blake,
Thank you for the update. We're glad the root cause of the problem was found.
|