[LU-1153] Client Unresponsive Created: 29/Feb/12  Updated: 15/Jun/12  Resolved: 15/Jun/12

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.2.0
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Roger Spellman (Inactive) Assignee: Lai Siyao
Resolution: Fixed Votes: 0
Labels: paj
Environment:

Lustre servers are running 2.6.32-220.el6, with Lustre 2.1.1.rc4.
Most Lustre clients are running 2.6.18-194.el5, with Lustre 1.8.4.
One Lustre client is running 2.6.38.2, with special code created for that kernel from http://review.whamcloud.com/#change,2170.
The problems are occurring only on this one client.


Attachments: File ioz-test-bug-1153.tgz     File slabinfo-and-mem2.tgz    
Severity: 3
Rank (Obsolete): 6441

 Description   

It is possible that I am seeing multiple bugs. So, you may want to split this one bug into several bugs.

Let me emphasize that this problem occurs only on one system: the system built with http://review.whamcloud.com/#change,2170, which is code specifically for the 2.6.38.2 kernel.

This bug is preventing us from shipping the code to this customer.

PROBLEM 1
---------
My overnight testing of the client on kernel 2.6.38.2 went well. Running IOZone, I got the line rate of the 10G NICs. I then switched to xdd.linux, which uses direct I/O; xdd.linux also got line rate.

Then, I noticed that I may have hit a minor bug.

After the iozone tests, I removed all of the files. Then I ran lfs df -h and saw that some OSTs still showed 20G used. After I unmounted all of the clients and remounted them, the problem went away (that is, all the OSTs showed the same amount of used space). Here is some session output:

[root@compute-01-32 lustre]# cd /mnt/lustre
[root@compute-01-32 lustre]# find . -type f

[root@compute-01-32 lustre]# # Note that there are no files; I removed them a while ago.
[root@compute-01-32 lustre]# lfs df -h
UUID bytes Used Available Use% Mounted on
denver-MDT0000_UUID 73.2M 4.6M 68.6M 6% /mnt/lustre[MDT:0]
denver-OST0000_UUID 190.0G 459.5M 189.6G 0% /mnt/lustre[OST:0]
denver-OST0001_UUID 190.0G 459.5M 189.6G 0% /mnt/lustre[OST:1]
denver-OST0002_UUID 190.0G 459.5M 189.6G 0% /mnt/lustre[OST:2]
denver-OST0003_UUID 190.0G 459.5M 189.6G 0% /mnt/lustre[OST:3]
denver-OST0004_UUID 190.0G 459.5M 189.6G 0% /mnt/lustre[OST:4]
denver-OST0005_UUID 190.0G 20.4G 169.6G 11% /mnt/lustre[OST:5]
denver-OST0006_UUID 190.0G 459.5M 189.6G 0% /mnt/lustre[OST:6]
denver-OST0007_UUID 190.0G 20.4G 169.6G 11% /mnt/lustre[OST:7]

filesystem summary: 1.5T 43.6G 1.4T 3% /mnt/lustre

[root@compute-01-32 lustre]# # Note that two OSTs have used 20.4G, even though no files !

[root@compute-01-32 lustre]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_compute0132-lv_root
51606140 12008608 36976092 25% /
tmpfs 1924356 88 1924268 1% /dev/shm
/dev/sda2 495844 59594 410650 13% /boot
/dev/mapper/vg_compute0132-lv_home
413557824 26856608 365693664 7% /home
172.20.0.1:/depot/shared
12095040 12094688 0 100% /shared
172.20.0.1:/home 413557824 26856608 365693664 7% /home
10.7.200.111@tcp0:10.7.200.112@tcp0:/denver
1594036256 45706984 1548328760 3% /mnt/lustre

PROBLEM 2
---------

I then unmounted Lustre on all clients, rebooted all the clients,
then remounted Lustre on all clients.
The problem described above seemed to have been resolved.

I decided to try to reproduce this bug, so I started up IOZone again. Keep in mind that this is after a client reboot.

IOZone ran for a few seconds, then hung. I could ping the node with the 2.6.38.2 kernel, but I could not ssh to it. The video console was locked up, and pressing Caps Lock and Num Lock did not light any LEDs on the keyboard. So, I power cycled the node.

This is all that I saw in /var/log/messages:

Feb 28 09:33:53 compute-01-32 avahi-daemon[1508]: Joining mDNS multicast group on interface eth2.IPv4 with address 10.7.1.32.
Feb 28 09:33:53 compute-01-32 avahi-daemon[1508]: New relevant interface eth2.IPv4 for mDNS.
Feb 28 09:33:53 compute-01-32 avahi-daemon[1508]: Registering new address record for 10.7.1.32 on eth2.IPv4.
Feb 28 09:33:53 compute-01-32 avahi-daemon[1508]: Withdrawing address record for 10.7.1.32 on eth2.
Feb 28 09:33:53 compute-01-32 avahi-daemon[1508]: Leaving mDNS multicast group on interface eth2.IPv4 with address 10.7.1.32.
Feb 28 09:33:53 compute-01-32 avahi-daemon[1508]: Interface eth2.IPv4 no longer relevant for mDNS.
Feb 28 09:33:53 compute-01-32 avahi-daemon[1508]: Joining mDNS multicast group on interface eth2.IPv4 with address 10.7.1.32.
Feb 28 09:33:53 compute-01-32 avahi-daemon[1508]: New relevant interface eth2.IPv4 for mDNS.
Feb 28 09:33:53 compute-01-32 avahi-daemon[1508]: Registering new address record for 10.7.1.32 on eth2.IPv4.
Feb 28 09:33:53 compute-01-32 kernel: device eth2 entered promiscuous mode
Feb 28 09:33:54 compute-01-32 kernel: Lustre: Lustre: Build Version: 2.1.56-g17d2c48-CHANGED-../lustre/scripts
Feb 28 09:33:55 compute-01-32 kernel: Lustre: Added LNI 10.7.1.32@tcp [8/256/0/180]
Feb 28 09:33:55 compute-01-32 kernel: Lustre: Accept secure, port 988
Feb 28 09:33:55 compute-01-32 kernel: Lustre: MGC10.7.200.111@tcp: Reactivating import
Feb 28 09:33:55 compute-01-32 kernel: Lustre: Client denver-client has started
Feb 28 09:37:28 compute-01-32 kernel: LustreError: 2567:0:(osc_request.c:797:osc_announce_cached()) dirty 2028 - 2028 > system dirty_max 647168
Feb 28 09:37:40 compute-01-32 kernel: LustreError: 2568:0:(osc_request.c:797:osc_announce_cached()) dirty 2040 - 2040 > system dirty_max 647168
Feb 28 09:37:46 compute-01-32 kernel: LustreError: 2565:0:(osc_request.c:797:osc_announce_cached()) dirty 1954 - 1954 > system dirty_max 647168
Feb 28 09:37:47 compute-01-32 kernel: LustreError: 2561:0:(osc_request.c:797:osc_announce_cached()) dirty 1975 - 1976 > system dirty_max 647168
Feb 28 09:37:50 compute-01-32 kernel: LustreError: 2566:0:(osc_request.c:797:osc_announce_cached()) dirty 2001 - 2002 > system dirty_max 647168
Feb 28 09:38:09 compute-01-32 kernel: LustreError: 2567:0:(osc_request.c:797:osc_announce_cached()) dirty 1946 - 1946 > system dirty_max 647168
Feb 28 09:39:20 compute-01-32 kernel: LustreError: 2615:0:(osc_request.c:797:osc_announce_cached()) dirty 2017 - 2017 > system dirty_max 647168
Feb 28 09:39:47 compute-01-32 kernel: LustreError: 2615:0:(osc_request.c:797:osc_announce_cached()) dirty 1847 - 1848 > system dirty_max 647168
Feb 28 09:39:47 compute-01-32 kernel: LustreError: 2615:0:(osc_request.c:797:osc_announce_cached()) Skipped 3 previous similar messages
Feb 28 09:40:23 compute-01-32 kernel: LustreError: 2609:0:(osc_request.c:797:osc_announce_cached()) dirty 1918 - 1919 > system dirty_max 647168
Feb 28 09:40:23 compute-01-32 kernel: LustreError: 2609:0:(osc_request.c:797:osc_announce_cached()) Skipped 1 previous similar message

I did not have a serial cable plugged in to capture the crash dump. I have set this up now, so if the problem occurs again, we should get more data. I will try to reproduce this bug.
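
For reference, a minimal sketch of the serial-console and SysRq capture setup described here (the device name and baud rate are assumptions; adjust for the actual hardware):

# kernel command line (GRUB) so console output is mirrored to the serial port:
#   console=tty0 console=ttyS0,115200n8
# enable the magic SysRq keys and trigger a task-state dump (equivalent to SysRq-T):
echo 1 > /proc/sys/kernel/sysrq
echo t > /proc/sysrq-trigger
# on the capturing node, record everything arriving on its serial port:
cat /dev/ttyS0 > serial-capture.log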

PROBLEM 3
---------

I set up the serial cable, and I verified that SysRq-T worked and sent its output over the serial cable to another node that was capturing the data.

After many hours of testing, the 2.6.38.2 client became unresponsive again. I see some Out of memory messages. I will keep an eye on the slab usage next time.

Below is the output captured over ttyS0. After the problem occurred, I plugged in a monitor and keyboard, but SysRq-T did not work. Perhaps the keyboard needs to be plugged in before the problem occurs.

LustreError: 19819:0:(ldlm_request.c:1171:ldlm_cli_cancel_req()) Got rc -108 from cancel RPC: canceling anyway
LustreError: 19819:0:(ldlm_request.c:1797:ldlm_cli_cancel_list()) ldlm_cli_cancel_list: -108
LustreError: 19819:0:(ldlm_request.c:1171:ldlm_cli_cancel_req()) Got rc -108 from cancel RPC: canceling anyway
LustreError: 19819:0:(ldlm_request.c:1797:ldlm_cli_cancel_list()) ldlm_cli_cancel_list: -108
Out of memory: Kill process 1421 (rsyslogd) score 1 or sacrifice child
Killed process 1421 (rsyslogd) total-vm:242652kB, anon-rss:0kB, file-rss:784kB
Out of memory: Kill process 1452 (irqbalance) score 1 or sacrifice child
Killed process 1452 (irqbalance) total-vm:9252kB, anon-rss:0kB, file-rss:416kB
INFO: task irqbalance:1452 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task irqbalance:1452 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task automount:1823 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task ksmtuned:1975 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task irqbalance:1452 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task automount:1823 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task ksmtuned:1975 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task irqbalance:1452 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task automount:1823 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task ksmtuned:1975 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.



 Comments   
Comment by Peter Jones [ 29/Feb/12 ]

Lai

Could you please look into this one?

Thanks

Peter

Comment by Roger Spellman (Inactive) [ 02/Mar/12 ]

I am able to reproduce the bug at will.
The code that I have opens a file, writes to it, closes it, then reopens the same file (truncating it), writes it again, and closes it.
This is repeated for many, many files until I stop the test or the test fails.

The test fails every time. I believe that the cause is out-of-memory.

I see the following in the logs quite frequently:

Mar 1 11:01:01 compute-01-32 kernel: cannot allocate a tage (7)
Mar 1 11:08:25 compute-01-32 kernel: cannot allocate a tage (13)
Mar 1 11:36:32 compute-01-32 kernel: cannot allocate a tage (13)
Mar 1 12:03:50 compute-01-32 kernel: cannot allocate a tage (9)
Mar 1 12:07:40 compute-01-32 kernel: cannot allocate a tage (13)

I have also seen:

Mar 1 10:52:35 compute-01-32 kernel: -----------[ cut here ]-----------
Mar 1 10:52:35 compute-01-32 kernel: WARNING: at fs/libfs.c:363 simple_setattr+0x99/0xb0()
Mar 1 10:52:35 compute-01-32 kernel: Hardware name: PowerEdge R210
Mar 1 10:52:35 compute-01-32 kernel: Modules linked in: lmv mgc lustre lquota lov osc mdc fid fld ksocklnd ptlrpc obdclass lnet lvfs libcfs ebtable_nat ebtables ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 ipt_REJECT xt_CHECKSUM iptable_mangle iptable_filter ip_tables bridge stp llc autofs4 nfs lockd fscache nfs_acl auth_rpcgss sunrpc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 vhost_net macvtap macvlan tun kvm uinput power_meter sg dcdbas microcode pcspkr iTCO_wdt iTCO_vendor_support bnx2 ixgbe dca mdio ext4 mbcache jbd2 sd_mod crc_t10dif ahci libahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: speedstep_lib]
Mar 1 10:52:35 compute-01-32 kernel: Pid: 14596, comm: bash Not tainted 2.6.38.2 #2
Mar 1 10:52:35 compute-01-32 kernel: Call Trace:
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff810619ff>] ? warn_slowpath_common+0x7f/0xc0
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff81061a5a>] ? warn_slowpath_null+0x1a/0x20
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff811760f9>] ? simple_setattr+0x99/0xb0
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffffa0926f20>] ? ll_md_setattr+0x4b0/0xb20 [lustre]
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff811e0fc1>] ? inode_has_perm+0x51/0xa0
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffffa09277f3>] ? ll_setattr_raw+0x263/0x1040 [lustre]
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff811e14fb>] ? dentry_has_perm+0x5b/0x80
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffffa0928627>] ? ll_setattr+0x57/0xf0 [lustre]
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff8116d671>] ? notify_change+0x161/0x2c0
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff81153191>] ? do_truncate+0x61/0x90
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff8115f094>] ? finish_open+0x154/0x1d0
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff81160cd6>] ? do_last+0x86/0x370
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff81163028>] ? do_filp_open+0x3a8/0x760
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff8111e0e5>] ? handle_mm_fault+0x1e5/0x340
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff8116fc7f>] ? vfsmount_lock_global_unlock_online+0x4f/0x60
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff811704bc>] ? mntput_no_expire+0x19c/0x1c0
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff8116e7c5>] ? alloc_fd+0x95/0x160
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff81151fb9>] ? do_sys_open+0x69/0x110
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff811520a0>] ? sys_open+0x20/0x30
Mar 1 10:52:35 compute-01-32 kernel: [<ffffffff8100bf82>] ? system_call_fastpath+0x16/0x1b
Mar 1 10:52:35 compute-01-32 kernel: --[ end trace 69e4427ab9bdd594 ]--

Comment by Roger Spellman (Inactive) [ 02/Mar/12 ]

Lai, Can you please give an update to this bug? Thanks.
-Roger

Comment by Peter Jones [ 03/Mar/12 ]

Roger

I did chat with Lai about this ticket. There is a lot of information to sift through, but he expects to post something in the near future. He is based in China, so this may well happen before you are in the office on Monday.

Regards

Peter

Comment by Lai Siyao [ 05/Mar/12 ]

Roger,

The warning for simple_setattr() is not an issue. The kernel exports this function, but it is overly cautious: if a more complex filesystem (one that implements its own ->truncate) calls it, the kernel complains. Lustre only uses it to update times, so it is harmless here.

Messages like "cannot allocate a tage" are also normal. By default, Lustre debugging is enabled and debug messages are written into allocated kernel pages; when system memory is tight and a page cannot be allocated, Lustre prints this warning and skips that debug output.
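
If you want to quiet those warnings, the debug log can be turned down or off entirely; a minimal sketch (parameter names may differ slightly between Lustre versions):

# disable the Lustre kernel debug log so no trace pages ("tages") are needed
lctl set_param debug=0
# or shrink the debug buffer (value is in MB)
lctl set_param debug_mb=32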

I'm trying to reproduce this failure and will update you afterwards. In the meantime, could you run a script that outputs /proc/slabinfo periodically (e.g. `watch sort -k2rn /proc/slabinfo`), and if the OOM happens again, collect the output from the serial console and attach it here?

Comment by Lai Siyao [ 05/Mar/12 ]

Roger, I can't reproduce here, could you upload the test script or program?

Comment by Roger Spellman (Inactive) [ 05/Mar/12 ]

While running my test, I also ran the script get-slab-loop. This script collects slab info, vmstat output, and other statistics, and writes them to a new file every 30 seconds. The script and the files are in the tarball.
So, mem.3 was taken 30 seconds after mem.2, which was taken 30 seconds after mem.1.
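
The actual script is in the tarball; a hypothetical sketch of that kind of 30-second capture loop would look roughly like this:

# hypothetical sketch only; the attached get-slab-loop is the authoritative version
n=1
while true
do
    {
        date
        cat /proc/slabinfo
        cat /proc/meminfo
        vmstat
    } > mem.$n
    (( n = n + 1 ))
    sleep 30
done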

I believe that my test started running before mem.4 was created.

I will upload the test shortly.

Comment by Roger Spellman (Inactive) [ 05/Mar/12 ]

This tarball contains the test that causes this bug. The test uses IOZone; the binary is included in the tarball. It should be moved to a location that all clients can access, and client_list should then be updated with the location of that binary.

The format of client_list is:

hostname mountPoint binaryLocation fileToCreate

The client list currently has 22 entries. You can modify this for the number of clients that you have.

Before running any tests, I run:

./setup_directories /mnt/lustre

That creates the following directories on my system with 8 OSTS:

/mnt/lustre/
/mnt/lustre/stripe-1M
/mnt/lustre/stripe-256M
/mnt/lustre/stripe-32M
/mnt/lustre/stripe-4M
/mnt/lustre/ost/ost-00
/mnt/lustre/ost/ost-01
/mnt/lustre/ost/ost-02
/mnt/lustre/ost/ost-03
/mnt/lustre/ost/ost-04
/mnt/lustre/ost/ost-05
/mnt/lustre/ost/ost-06
/mnt/lustre/ost/ost-07

The last eight directories are each striped to a single OST. The client_list writes files to these directories. Feel free to change the client_list to match the number of OSTs that you have.
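
A hypothetical sketch of how setup_directories could create these layouts (the attached script is authoritative; the lfs setstripe options shown are the -s/-c/-i forms of that era):

# hypothetical sketch; the attached setup_directories script is the authoritative version
MNT=/mnt/lustre
# directories whose default layout stripes across all OSTs with a fixed stripe size
for sz in 1M 4M 32M 256M
do
    mkdir -p $MNT/stripe-$sz
    lfs setstripe -s $sz -c -1 $MNT/stripe-$sz
done
# one directory pinned to each of the 8 OSTs
for i in 0 1 2 3 4 5 6 7
do
    mkdir -p $MNT/ost/ost-0$i
    lfs setstripe -c 1 -i $i $MNT/ost/ost-0$i
done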

The script that is run is called 'run-test'. This script includes the line:

for threads in 22

'22' is the number of entries in the client_list. If you change the number of entries in client_list, then you should change the number 22 to match that.

Please let me know if you need any help setting up the tests. It usually hangs within an hour or two.

Comment by Lai Siyao [ 06/Mar/12 ]

Yes, I can reproduce it your way, thanks! I'll update you when I have some clue about the cause.

Comment by Lai Siyao [ 06/Mar/12 ]

Hi Roger,

I'm wondering whether this issue is introduced by the FC15 support code. Do you have any client running a supported kernel (e.g. RHEL5/6) to verify that this does not happen there?

Comment by Roger Spellman (Inactive) [ 06/Mar/12 ]

Lai,
I have 10 clients running Lustre 1.8.4 on RHEL 5.5. I have one client with the new code. Only that one client is having this problem.

Comment by Lai Siyao [ 11/Mar/12 ]

I tested the same code on RHEL5/6 and FC15; OOM only happens on FC15. The logs and memory stats don't show anything special, just a lot of cached pages, and the slab usage is a bit high. I talked to Jinshan; he said that iozone tests consuming a lot of memory is a known issue (though he has never seen an OOM from it), because CLIO depends on the kernel to release cached pages, and the kernel tends to keep caching more. I suspect the FC15 kernel is more aggressive about caching, and therefore it OOMs. Jinshan is working on the iozone memory use problem; I'll update here if he has any progress.

Comment by Lai Siyao [ 12/Mar/12 ]

I tried tuning the kernel VM dirty_ratio and dirty_background_ratio down to 5, but it still OOMs. There may not be a simple workaround for this excessive memory usage.
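
For reference, that tuning is just the standard Linux VM sysctls, e.g.:

# lower the dirty-page thresholds to 5% (not persistent across reboots)
sysctl -w vm.dirty_ratio=5
sysctl -w vm.dirty_background_ratio=5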

Comment by Roger Spellman (Inactive) [ 12/Mar/12 ]

> I talked to Jinshan, he said that iozone test consuming a lot
> memory is a known issue (but he never met OOM before),

This bug DOES NOT require IOZone. I am able to reproduce it with dd, with the following script:

# cat write-twice-loop
(( COUNT = 0 ))
size=2048
> /root/keep_going
cd /mnt/lustre/ost
while [ -f /root/keep_going ]
do
    index=`printf '%04d' $COUNT`
    file=file.$index
    echo -n "writing $file+ "
    (cd ost-01 && dd if=/dev/zero of=$file bs=1024k count=$size > ~/tmp.1a 2>&1 ) &
    (cd ost-07 && dd if=/dev/zero of=$file bs=1024k count=$size > ~/tmp.2a 2>&1 ) &
    wait
    echo -n "+ "
    (cd ost-01 && dd if=/dev/zero of=$file bs=1024k count=$size > ~/tmp.1b 2>&1 ) &
    (cd ost-07 && dd if=/dev/zero of=$file bs=1024k count=$size > ~/tmp.2b 2>&1 ) &
    wait
    echo
    (( COUNT = $COUNT + 1 ))
    sleep 0.5
done

echo
echo Cannot find /root/keep_going

Comment by Peter Jones [ 13/Mar/12 ]

Roger

To clarify, Jinshan is referring to the conditions that can be simulated by running a tool like IOZone. He is not suggesting that IOZone itself is the key factor here.

Peter

Comment by Roger Spellman (Inactive) [ 13/Mar/12 ]

Peter,
Thanks for the clarification.

Jinshan, I believe that the problem has to do with writing and then overwriting the same file. IOZone does this by default, and the loop that I submitted does it too. When I write a file only once, I am not seeing this problem (within the time frame that I'm testing). You can do that in IOZone with the -+n option.
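
For example, a write-once run would look something like this (sizes and paths here are just illustrative):

# -+n disables retests, so each file is written only once
iozone -i 0 -+n -r 1m -s 2g -f /mnt/lustre/ost/ost-01/iozone.tmp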

Roger

Comment by Jinshan Xiong (Inactive) [ 14/Mar/12 ]

I'll look into this issue. As for the simple_setattr() warning, it means we shouldn't use simple_setattr() if we have a truncate method implemented. We used to call inode_setattr() on old kernels; we really need to fix that.

Comment by Lai Siyao [ 14/Mar/12 ]

Jinshan, please check http://review.whamcloud.com/#change,1863 and http://review.whamcloud.com/#change,2145.

Comment by Jinshan Xiong (Inactive) [ 31/Mar/12 ]

I'm trying to compile lustre-master for FC15 but it doesn't work. Can you please tell me which distribution you're using for your client?

Comment by Lai Siyao [ 31/Mar/12 ]

Hmm, if you use the kernel-devel package to build the kernel modules, the Lustre configure will fail on LB_LINUX_COMPILE_IFELSE in build/autoconf/lustre-build-linux.m4; I tried to tweak it a bit but didn't succeed. In my setup, I build the Lustre patchless client against the kernel source, and you need to update the kernel source Makefile 'EXTRAVERSION' to match your kernel version (my setup is '.6-26.rc1.fc15.x86_64').
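
A rough sketch of that build sequence (paths are illustrative; the configure options are the standard patchless-client ones):

# prepare the kernel source tree (EXTRAVERSION in its Makefile must match `uname -r`)
cd /usr/src/kernels/linux-2.6.38.2
make oldconfig && make modules_prepare
# build the patchless Lustre client against that tree
cd ~/lustre-release
sh autogen.sh
./configure --disable-server --with-linux=/usr/src/kernels/linux-2.6.38.2
make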

Comment by Roger Spellman (Inactive) [ 02/Apr/12 ]

I had no trouble building against 2.6.38.2.

I don't see LB_LINUX_COMPILE_IFELSE in .config in this release.

[root@RS_vm-2_6_38_2 linux-2.6.38.2]# pwd
/usr/src/kernels/linux-2.6.38.2
[root@RS_vm-2_6_38_2 linux-2.6.38.2]# grep LB_LINUX_COMPILE_IFELSE .config
[root@RS_vm-2_6_38_2 linux-2.6.38.2]#

-Roger

Comment by Roger Spellman (Inactive) [ 02/Apr/12 ]

I updated to the 5th patch on:
http://review.whamcloud.com/#change,2170

This bug is still present there.

Comment by Peter Jones [ 09/Apr/12 ]

Roger

Peter said today that he thought that this issue was independent of using the 2.6.38 client. Are you able to reproduce this same behaviour when running vanilla 2.1.x and RHEL6 clients, say?

Peter

Comment by Roger Spellman (Inactive) [ 10/Apr/12 ]

I have a client with kernel: 2.6.32-220.el6.x86_64

Is that what you want me to try?

What git tag should I use to get the 2.1.x code?

Comment by Peter Jones [ 15/Jun/12 ]

As per Terascala, OK to close.
