[LU-970] Invalid Import messages Created: 09/Jan/12  Updated: 02/Feb/12  Resolved: 02/Feb/12

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 1.8.x (1.8.0 - 1.8.5)
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Supporto Lustre Jnet2000 (Inactive) Assignee: Zhenyu Xu
Resolution: Fixed Votes: 0
Labels: log, server
Environment:

Lustre version: 1.8.5.54-20110316022453-PRISTINE-2.6.18-194.17.1.el5_lustre.20110315140510
lctl version: 1.8.5.54-20110316022453-PRISTINE-2.6.18-194.17.1.el5_lustre.20110315140510
Red Hat Enterprise Linux Server release 5.4 (Tikanga)
auth type over ldap and kerberos
quota enabled only for group on lustre fs


Attachments: File dump-18-01.tgz     File lustre.info     File messages-lp-030.bz2     File messages-lp-031.bz2     File tunefs.tgz    
Severity: 2
Epic: log, server
Rank (Obsolete): 6494

 Description   

We receive many messages like:

Jan 8 04:21:55 osiride-lp-030 kernel: LustreError: 11463:0:(client.c:858:ptlrpc_import_delay_req()) @@@ IMP_INVALID req@ffff810a722ccc00 x1388786345868037/t0 o101->MGS@MGC10.121.13.31@tcp_0:26/25 lens 296/544 e 0 to 1 dl 0 ref 1 fl Rpc:/0/0 rc 0/0
Jan 8 04:21:55 osiride-lp-030 kernel: LustreError: 11463:0:(client.c:858:ptlrpc_import_delay_req()) Skipped 179 previous similar messages
Jan 8 04:22:38 osiride-lp-030 kernel: Lustre: 6743:0:(client.c:1487:ptlrpc_expire_one_request()) @@@ Request x1388786345868061 sent from MGC10.121.13.31@tcp to NID 0@lo 5s ago has timed out (5s prior to deadline).
Jan 8 04:22:38 osiride-lp-030 kernel: req@ffff810256529800 x1388786345868061/t0 o250->MGS@MGC10.121.13.31@tcp_0:26/25 lens 368/584 e 0 to 1 dl 1325992958 ref 1 fl Rpc:N/0/0 rc 0/0
Jan 8 04:22:38 osiride-lp-030 kernel: Lustre: 6743:0:(client.c:1487:ptlrpc_expire_one_request()) Skipped 100 previous similar messages

I have attached the "messages" of the MDS/MGS server.

Can you explain the meaning of these messages and how we could fix them?



 Comments   
Comment by Johann Lombardi (Inactive) [ 09/Jan/12 ]

This means that the MDT somehow cannot reach the MGS which is supposed to run locally.
Could you please run the following commands on this server and attach the output to this ticket?

  • lctl dl
  • lctl get_param mgc.*.import

Also, you mentioned that you are running "1.8.5.54-20110316022453". Do I understand correctly that you are running a beta version of Oracle's 1.8.6, which isn't intended to be used in production? If so, I would really advise upgrading to 1.8.7-wc1.

Comment by Peter Jones [ 09/Jan/12 ]

Bobi

Could you please take care of this ticket?

Thanks

Peter

Comment by Supporto Lustre Jnet2000 (Inactive) [ 09/Jan/12 ]

[root@osiride-lp-031 wisi281]# lctl dl
0 UP mgc MGC10.121.13.31@tcp 326e50f4-053e-14d7-29f8-10a8ae98140d 5
1 UP ost OSS OSS_uuid 3
2 UP obdfilter home-OST0003 home-OST0003_UUID 43
3 UP mgs MGS MGS 43
4 UP mgc MGC10.121.13.62@tcp 8ac4b17d-d00e-1d89-9281-3d1615a38949 5
5 UP mdt MDS MDS_uuid 3
6 UP lov home-mdtlov home-mdtlov_UUID 4
7 UP mds home-MDT0000 home-MDT0000_UUID 41
8 UP osc home-OST0000-osc home-mdtlov_UUID 5
9 UP osc home-OST0001-osc home-mdtlov_UUID 5
10 UP osc home-OST0002-osc home-mdtlov_UUID 5
11 UP osc home-OST0003-osc home-mdtlov_UUID 5
12 UP osc home-OST0004-osc home-mdtlov_UUID 5
13 UP osc home-OST0005-osc home-mdtlov_UUID 5
14 UP osc home-OST0006-osc home-mdtlov_UUID 5
15 UP osc home-OST0007-osc home-mdtlov_UUID 5
16 UP osc home-OST0008-osc home-mdtlov_UUID 5
17 UP osc home-OST0009-osc home-mdtlov_UUID 5
18 UP osc home-OST000a-osc home-mdtlov_UUID 5
19 UP osc home-OST000b-osc home-mdtlov_UUID 5
20 UP obdfilter home-OST0000 home-OST0000_UUID 43
21 UP obdfilter home-OST0001 home-OST0001_UUID 43
22 UP obdfilter home-OST0002 home-OST0002_UUID 43
23 UP obdfilter home-OST0005 home-OST0005_UUID 43
24 UP obdfilter home-OST000a home-OST000a_UUID 43
25 UP obdfilter home-OST0008 home-OST0008_UUID 43
26 UP obdfilter home-OST0006 home-OST0006_UUID 43
27 UP obdfilter home-OST000b home-OST000b_UUID 43
28 UP obdfilter home-OST0009 home-OST0009_UUID 43
29 UP obdfilter home-OST0004 home-OST0004_UUID 43
30 UP obdfilter home-OST0007 home-OST0007_UUID 43

Comment by Supporto Lustre Jnet2000 (Inactive) [ 09/Jan/12 ]

[root@osiride-lp-031 wisi281]# lctl get_param mgc.*.import
mgc.MGC10.121.13.31@tcp.import=
import:
name: MGC10.121.13.31@tcp
target: MGS
state: CONNECTING
connect_flags: [version, adaptive_timeouts, fid_is_enabled]
import_flags: [ no_recov, invalid, replayable, pingable, recon_bk,
last_recon]
connection:
failover_nids: [10.121.13.31@tcp]
current_connection: 10.121.13.31@tcp
connection_attempts: 239371
generation: 478755
in-progress_invalidations: 0
rpcs:
inflight: 1
unregistering: 0
timeouts: 239370
avg_waittime: 0 <NULL>
service_estimates:
services: 1 sec
network: 1 sec
transactions:
last_replay: 0
peer_committed: 0
last_checked: 0
mgc.MGC10.121.13.62@tcp.import=
import:
name: MGC10.121.13.62@tcp
target: MGS
state: FULL
connect_flags: [version, adaptive_timeouts]
import_flags: [pingable, recon_bk]
connection:
failover_nids: [0@lo]
current_connection: 0@lo
connection_attempts: 1
generation: 1
in-progress_invalidations: 0
rpcs:
inflight: 0
unregistering: 0
timeouts: 0
avg_waittime: 0 <NULL>
service_estimates:
services: 1 sec
network: 1 sec
transactions:
last_replay: 0
peer_committed: 0
last_checked: 0
[root@osiride-lp-031 wisi281]#

Comment by Supporto Lustre Jnet2000 (Inactive) [ 10/Jan/12 ]

Hi,
here is some other information on our setup. We have two Lustre servers:

  • osiride-lp-030 -> 10.121.13.31
  • osiride-lp-031 -> 10.121.13.62

The first server hosts these services:

  • MGS
  • MDS
  • OST00
  • OST01
  • OST02

The second server hosts services OST03 to OST0b.

We have a dedicated 10GbE network using Broadcom NetXtreme II BCM57711E 10-Gigabit PCIe adapters.

We have a Red Hat Cluster Suite cluster to provide High Availability.

The output of "lctl dl" and "lctl get_param mgc.*.import" was taken after a failover, while all the Lustre services were hosted on the osiride-lp-031 server. We see the same messages on osiride-lp-031. I have attached the "messages" of osiride-lp-031 from before and after the failover.

Thanks in advance

Comment by Supporto Lustre Jnet2000 (Inactive) [ 10/Jan/12 ]

messages of osiride-lp-031

Comment by Supporto Lustre Jnet2000 (Inactive) [ 10/Jan/12 ]

We are planning to upgrade to the latest GA version of Lustre at the end of January.

Comment by Supporto Lustre Jnet2000 (Inactive) [ 10/Jan/12 ]

Hi,
Is it normal to have two mgc entries on the same server for a single exported Lustre file system?

>> 0 UP mgc MGC10.121.13.31@tcp 326e50f4-053e-14d7-29f8-10a8ae98140d 5
>> 4 UP mgc MGC10.121.13.62@tcp 8ac4b17d-d00e-1d89-9281-3d1615a38949 5

Thanks in advance for your support

Comment by Johann Lombardi (Inactive) [ 10/Jan/12 ]

This indeed looks weird. Could you please run the following commands?

  • "tunefs.lustre --print $dev" against all OSTs & MDT devices
  • "mount"
Comment by Zhenyu Xu [ 10/Jan/12 ]

This could be the case if the MDT device or some OST devices were formatted (mkfs.lustre) with a wrong "--mgsnode" argument.

Comment by Supporto Lustre Jnet2000 (Inactive) [ 10/Jan/12 ]

Hi Zhenyu,
I'm 100% sure that there are no mkfs.lustre mistakes.

Hi Johann,
this is the output of "mount" on osiride-lp-031:
[root@osiride-lp-031 ~]# mount
/dev/mapper/vg_lp-lv_root on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/mapper/vg_lp-lv_tmp on /tmp type ext3 (rw)
/dev/mapper/vg_lp-lv_var on /var type ext3 (rw)
/dev/mapper/vg_lp-lv_vartmp on /var/tmp type ext3 (rw)
/dev/mapper/vg_lp-lv_varwww on /var/www type ext3 (rw)
/dev/mapper/vg_lp-lv_home on /home type ext3 (rw)
/dev/mapper/vg_lp-lv_varlibxen on /var/lib/xen type ext3 (rw)
/dev/mapper/vg_lp-lv_varlog on /var/log type ext3 (rw)
/dev/mapper/vg_lp-lv_varlibmysql on /var/lib/mysql type ext3 (rw)
/dev/mapper/vg_lp-lv_tmp_work on /tmp/work type ext3 (rw)
/dev/mapper/vg_lp-lv_opt on /opt type ext3 (rw)
/dev/cciss/c0d0p1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
none on /sys/kernel/config type configfs (rw)
/dev/mpath/mgsp1 on /lustre/mgs type lustre (rw)
/dev/mpath/mdtp1 on /lustre/mdt type lustre (rw,acl)
/dev/mpath/ost00p1 on /lustre/ost00 type lustre (rw)
/dev/mpath/ost01p1 on /lustre/ost01 type lustre (rw)
/dev/mpath/ost02p1 on /lustre/ost02 type lustre (rw)
/dev/mpath/ost03p1 on /lustre/ost03 type lustre (rw)
/dev/mpath/ost05p1 on /lustre/ost05 type lustre (rw)
/dev/mpath/ost10p1 on /lustre/ost10 type lustre (rw)
/dev/mpath/ost08p1 on /lustre/ost08 type lustre (rw)
/dev/mpath/ost06p1 on /lustre/ost06 type lustre (rw)
/dev/mpath/ost11p1 on /lustre/ost11 type lustre (rw)
/dev/mpath/ost09p1 on /lustre/ost09 type lustre (rw)
/dev/mpath/ost04p1 on /lustre/ost04 type lustre (rw)
/dev/mpath/ost07p1 on /lustre/ost07 type lustre (rw)

I'm not able to get the output of tunefs.lustre because of this problem:

[root@osiride-lp-031 ~]# tunefs.lustre --print /dev/mpath/ost07p1
checking for existing Lustre data: not found

tunefs.lustre FATAL: Device /dev/mpath/ost07p1 has not been formatted with
mkfs.lustre
tunefs.lustre: exiting with 19 (No such device)

I tried on /dev/dm-31, which is the real block device, but I receive the same error.

Comment by Supporto Lustre Jnet2000 (Inactive) [ 10/Jan/12 ]

Do you think that having these two entries in the device list table is the cause of the errors that I see in the "messages"?
Is it possible that something went wrong during the failover?
If I restart the Lustre platform, would that fix this problem?

Thanks in advance

Comment by Zhenyu Xu [ 10/Jan/12 ]

Please umount /dev/mpath/ost07p1, mount it as 'ldiskfs' type, and upload its "CONFIGS/mountdata" file here.

Then check whether the "Invalid Import" messages persist while ost07 is "offline" from the filesystem.
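
A minimal sketch of that procedure, assuming the OST can be taken offline and using a hypothetical temporary mountpoint /mnt/ost07:

# stop the OST first (umount /lustre/ost07 or let the HA agent stop it)
umount /lustre/ost07
mkdir -p /mnt/ost07
# mount the backing device read-only as ldiskfs and copy out the config file
mount -t ldiskfs -o ro /dev/mpath/ost07p1 /mnt/ost07
cp /mnt/ost07/CONFIGS/mountdata /tmp/mountdata-ost07
umount /mnt/ost07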

Comment by Supporto Lustre Jnet2000 (Inactive) [ 10/Jan/12 ]

Sorry Zhenyu, but I receive the tunefs.lustre error on all of the Lustre block devices:

/dev/mpath/mgsp1 on /lustre/mgs type lustre (rw)
/dev/mpath/mdtp1 on /lustre/mdt type lustre (rw,acl)
/dev/mpath/ost00p1 on /lustre/ost00 type lustre (rw)
/dev/mpath/ost01p1 on /lustre/ost01 type lustre (rw)
/dev/mpath/ost02p1 on /lustre/ost02 type lustre (rw)
/dev/mpath/ost03p1 on /lustre/ost03 type lustre (rw)
/dev/mpath/ost05p1 on /lustre/ost05 type lustre (rw)
/dev/mpath/ost10p1 on /lustre/ost10 type lustre (rw)
/dev/mpath/ost08p1 on /lustre/ost08 type lustre (rw)
/dev/mpath/ost06p1 on /lustre/ost06 type lustre (rw)
/dev/mpath/ost11p1 on /lustre/ost11 type lustre (rw)
/dev/mpath/ost09p1 on /lustre/ost09 type lustre (rw)
/dev/mpath/ost04p1 on /lustre/ost04 type lustre (rw)
/dev/mpath/ost07p1 on /lustre/ost07 type lustre (rw)

Should I run tunefs.lustre on the real SCSI disk devices instead of the multipathed block devices?

Comment by Zhenyu Xu [ 11/Jan/12 ]

Then try running tunefs.lustre on the real SCSI disk device.

Or use

debugfs -R "dump CONFIGS/mountdata /tmp/mountdata" /dev/mpath/ost07p1

to dump the file and upload it here.

Comment by Supporto Lustre Jnet2000 (Inactive) [ 11/Jan/12 ]

Oops:

debugfs -R "dump CONFIGS/mountdata /tmp/mountdata-ost07" /dev/mpath/ost07p1

debugfs 1.41.10.sun2 (24-Feb-2010)
/dev/mpath/ost07p1: MMP: device currently active while opening filesystem
dump: Filesystem not open

Comment by Johann Lombardi (Inactive) [ 11/Jan/12 ]

This problem (i.e. debugfs cannot open the filesystem due to MMP) has been fixed in recent e2fsprogs. Could you please update the e2fsprogs package and rerun the tunefs.lustre command?
The latest one is e2fsprogs-1.41.90.wc3 and can be downloaded here: http://downloads.whamcloud.com/public/e2fsprogs/latest
This package can be updated while lustre is running.
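
If it helps, a minimal sketch of the upgrade with rpm (the package file name below is illustrative; pick the RPMs matching your distribution from the URL above):

# file name is illustrative; use the actual RPMs for your distribution
rpm -Uvh e2fsprogs-1.41.90.wc3-*.x86_64.rpm
rpm -q e2fsprogs    # verify the installed version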

Comment by Supporto Lustre Jnet2000 (Inactive) [ 13/Jan/12 ]

Thanks Johann,
we are waiting for authorization from the end user to upgrade e2fsprogs.

Comment by Supporto Lustre Jnet2000 (Inactive) [ 13/Jan/12 ]

The end user agreed to upgrade e2fsprogs, but we should start with the test environment. I'm planning to give you the configuration information on 17th January.

See you soon!!!

Comment by Supporto Lustre Jnet2000 (Inactive) [ 17/Jan/12 ]

The end user asked us to wait another 2 days before upgrading the e2fsprogs tools in production.

Thanks in advance

Comment by Supporto Lustre Jnet2000 (Inactive) [ 18/Jan/12 ]

dumps

Comment by Supporto Lustre Jnet2000 (Inactive) [ 18/Jan/12 ]

OK, we upgraded e2fsprogs and made the dumps of the configuration. I have attached them.

thanks in advance

Comment by Zhenyu Xu [ 18/Jan/12 ]

# strings mgs
lustre
acl,iopen_nopriv,user_xattr,errors=remount-ro
failover.node=10.121.13.62@tcp

# strings mdt
home
home-MDT0000
...

# strings ost*
home
home-OST000x ==> x from 0 to b
...

You've formatted your devices with inconsistent fsnames. This could have happened if you formatted the MGS device without specifying the "--fsname" argument (for which "lustre" is the default value), while the other devices were formatted with "--fsname=home".

Comment by Supporto Lustre Jnet2000 (Inactive) [ 18/Jan/12 ]

Could this problem be the cause of the LustreErrors that we see in the messages?
How could we fix this problem?

Comment by Zhenyu Xu [ 18/Jan/12 ]

Yes, this could cause the error messages.

You would need to run "tunefs.lustre --mgs --fsname=home <other options> <mgs device>" and then remount it, or possibly all the other devices as well.

Comment by Johann Lombardi (Inactive) [ 18/Jan/12 ]

Please hold off on running this command until we can look at the output of "tunefs.lustre --print".
Any chance you could attach the output of this command run against all Lustre targets (MGS/MDT/OST devices)?

Thanks in advance.

Comment by Supporto Lustre Jnet2000 (Inactive) [ 19/Jan/12 ]

the tunefs output

Comment by Johann Lombardi (Inactive) [ 19/Jan/12 ]

I am afraid that you forgot to specify the failover mgsnode when formatting the OSTs & MDT:

   Read previous values:
Target:     home-OST0000
Index:      0
Lustre FS:  home
Mount type: ldiskfs
Flags:      0x2
              (OST )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=10.121.13.31@tcp failover.node=10.121.13.62@tcp ost.quota_type=ug

The OSTs and MDT should be given the full list of NIDs where the MGS can run. In your case, this is both 10.121.13.31@tcp and 10.121.13.62@tcp. This explains why the targets cannot reach the MGS when the latter runs on 10.121.13.62@tcp. That's the root cause of the error messages you see.

To fix this, you would have to apply the following procedure for each OST and the MDT (a minimal sketch follows the list):

  • stop the target (via unmount or the HA agent)
  • run "tunefs.lustre --mgsnode=10.121.13.62@tcp ${path/to/device}"
  • restart the target (with mount or HA agent)
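
As a sketch for a single target, taking ost07 and the device/mountpoint names from the mount output above as an example (tunefs.lustre should append the new parameter, so the existing mgsnode=10.121.13.31@tcp would be kept):

# stop the target (or let the HA agent stop it)
umount /lustre/ost07
# add the second MGS NID alongside the existing mgsnode=10.121.13.31@tcp
tunefs.lustre --mgsnode=10.121.13.62@tcp /dev/mpath/ost07p1
# restart the target (or let the HA agent start it)
mount -t lustre /dev/mpath/ost07p1 /lustre/ost07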

I also noticed that some OSTs have "failover.node=10.121.13.62@tcp" while some others have "10.121.13.31@tcp".
To make sure that there is no problem with the OST/MDT failover configuration, could you please run the following command on one lustre client?

lctl get_param {mdc,osc}.*.import

Comment by Supporto Lustre Jnet2000 (Inactive) [ 19/Jan/12 ]

the "lctl get_param {mdc,osc}.*.import" output

Comment by Johann Lombardi (Inactive) [ 20/Jan/12 ]

# grep failover_nids lustre.info
failover_nids: [10.121.13.62@tcp, 10.121.13.31@tcp]
failover_nids: [10.121.13.62@tcp, 10.121.13.31@tcp]
failover_nids: [10.121.13.62@tcp, 10.121.13.31@tcp]
failover_nids: [10.121.13.62@tcp, 10.121.13.31@tcp]
failover_nids: [10.121.13.62@tcp, 10.121.13.31@tcp]
failover_nids: [10.121.13.62@tcp, 10.121.13.31@tcp]
failover_nids: [10.121.13.62@tcp, 10.121.13.31@tcp]
failover_nids: [10.121.13.62@tcp, 10.121.13.31@tcp]
failover_nids: [10.121.13.62@tcp, 10.121.13.31@tcp]
failover_nids: [10.121.13.62@tcp, 10.121.13.31@tcp]
failover_nids: [10.121.13.62@tcp, 10.121.13.31@tcp]
failover_nids: [10.121.13.62@tcp, 10.121.13.31@tcp]
failover_nids: [10.121.13.62@tcp, 10.121.13.31@tcp]

The MDT/OST failover config looks fine, so you just have to fix the mgsnode issue as mentioned above.
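
Once the targets have been remounted, one quick way to verify the fix, reusing the lctl command from earlier in this ticket, would be to check that the MGC import no longer sits in CONNECTING:

# the MGC import state should reach FULL instead of CONNECTING
lctl get_param mgc.*.import | grep -E "name:|state:"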

Comment by Supporto Lustre Jnet2000 (Inactive) [ 20/Jan/12 ]

Johann, should I fix the MGS configuration too?

Comment by Zhenyu Xu [ 20/Jan/12 ]

No, there is no need to set an fsname on a separate MGT, which can handle multiple filesystems at once. My fault for mentioning incorrect info in my comment on 18/Jan/12 10:45 AM.

Comment by Supporto Lustre Jnet2000 (Inactive) [ 20/Jan/12 ]

Hi Johann and Zhenyu, the normal configuration is:

  • MGT, MDT, OST0000, OST0001, OST0002 are owned by 10.121.13.31@tcp, and the failover node is 10.121.13.62@tcp
  • OST0003 -> OST000b are owned by 10.121.13.62@tcp, and the failover node is 10.121.13.31@tcp

How should I change the configuration, according to this setup, to avoid the Lustre errors?

When we took the tunefs output and the dumpfs output, we were in a failed-over situation, because all the targets were mounted on 10.121.13.62@tcp.

We had the Lustre errors both before and after the shutdown of the 10.121.13.31@tcp node, as you can see in the messages.

Thanks in advance

Comment by Johann Lombardi (Inactive) [ 20/Jan/12 ]

> How should I change the configuration, according to this setup, to avoid the Lustre errors?

There is no need to change the configuration. Please just follow the procedure I detailed in my comment on 19/Jan/12 9:17 AM and the error messages will be gone.

Comment by Supporto Lustre Jnet2000 (Inactive) [ 20/Jan/12 ]

So when we start the 10.121.13.31@tcp node and rebalance the services, the Lustre errors will be gone? But why did we see the Lustre errors before the failover of the 10.121.13.31@tcp node?

thanks in advance

Comment by Johann Lombardi (Inactive) [ 20/Jan/12 ]

I'm afraid that we don't have enough logs of this incident to find out why the MGS wasn't responsive at that time.
I would suggest fixing the mgsnode configuration error first; then we can look into this problem if it happens again.

Comment by Supporto Lustre Jnet2000 (Inactive) [ 28/Jan/12 ]

OK,
we have rebalanced the services on osiride-lp-030 and osiride-lp-031 and restarted all the clients. There are no more Lustre errors; this is the output of the lctl dl command on both servers.

[root@osiride-lp-030 ~]# lctl dl
0 UP mgs MGS MGS 45
1 UP mgc MGC10.121.13.31@tcp 5c2ce5e0-645a-2b58-6c0d-c5a9a11671f5 5
2 UP ost OSS OSS_uuid 3
3 UP obdfilter home-OST0001 home-OST0001_UUID 43
4 UP obdfilter home-OST0002 home-OST0002_UUID 43
5 UP obdfilter home-OST0000 home-OST0000_UUID 43
6 UP mdt MDS MDS_uuid 3
7 UP lov home-mdtlov home-mdtlov_UUID 4
8 UP mds home-MDT0000 home-MDT0000_UUID 41
9 UP osc home-OST0000-osc home-mdtlov_UUID 5
10 UP osc home-OST0001-osc home-mdtlov_UUID 5
11 UP osc home-OST0002-osc home-mdtlov_UUID 5
12 UP osc home-OST0003-osc home-mdtlov_UUID 5
13 UP osc home-OST0004-osc home-mdtlov_UUID 5
14 UP osc home-OST0005-osc home-mdtlov_UUID 5
15 UP osc home-OST0006-osc home-mdtlov_UUID 5
16 UP osc home-OST0007-osc home-mdtlov_UUID 5
17 UP osc home-OST0008-osc home-mdtlov_UUID 5
18 UP osc home-OST0009-osc home-mdtlov_UUID 5
19 UP osc home-OST000a-osc home-mdtlov_UUID 5
20 UP osc home-OST000b-osc home-mdtlov_UUID 5

[root@osiride-lp-031 ~]# lctl dl
0 UP mgc MGC10.121.13.31@tcp e4919e7b-230b-9ce3-910d-3ec6e1bed6fc 5
1 UP ost OSS OSS_uuid 3
2 UP obdfilter home-OST0006 home-OST0006_UUID 43
3 UP obdfilter home-OST0004 home-OST0004_UUID 43
4 UP obdfilter home-OST0007 home-OST0007_UUID 43
5 UP obdfilter home-OST0003 home-OST0003_UUID 43
6 UP obdfilter home-OST0009 home-OST0009_UUID 43
7 UP obdfilter home-OST0008 home-OST0008_UUID 43
8 UP obdfilter home-OST0005 home-OST0005_UUID 43
9 UP obdfilter home-OST000b home-OST000b_UUID 43
10 UP obdfilter home-OST000a home-OST000a_UUID 43

Could you please close the issue? Thanks in advance

Comment by Johann Lombardi (Inactive) [ 30/Jan/12 ]

Cool. To be clear, you've also fixed the MGS configuration with tunefs.lustre as explained in my comment on 19/Jan/12 9:17 AM, right?

Comment by Supporto Lustre Jnet2000 (Inactive) [ 30/Jan/12 ]

No, I have not. We are planning to upgrade Lustre to the latest stable version. I will change the configuration during the upgrade.

Comment by Supporto Lustre Jnet2000 (Inactive) [ 02/Feb/12 ]

Please close this issue

Comment by Peter Jones [ 02/Feb/12 ]

Thanks!
