Details
Type: Bug
Resolution: Not a Bug
Priority: Major
Fix Version/s: None
Affects Version/s: Lustre 2.8.0
Components: None
Environment
[root@pts00433-vm5 lustre]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)
[root@pts00433-vm5 lustre]# lscpu
Architecture: ppc64le
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Model: 2.1 (pvr 004b 0201)
Model name: POWER8E (raw), altivec supported
L1d cache: 64K
L1i cache: 32K
NUMA node0 CPU(s): 0-3
[root@pts00433-vm5 lustre]# uname -a
Linux pts00433-vm5 3.10.0-327.18.2.el7.ppc64le #1 SMP Fri Apr 8 05:10:45 EDT 2016 ppc64le ppc64le ppc64le GNU/Linux
Built Lustre from source:
[root@pts00433-vm5 lustre-release]# ./LUSTRE-VERSION-GEN
2.8.60_19_g300739c
[root@pts00433-vm5 lustre-release]# lsmod | grep lustre
lustre 1059845 16
lmv 298120 2 lustre
mdc 219817 2 lustre
lov 391204 12 lustre
ptlrpc 1688258 9 fid,fld,lmv,mdc,lov,mgc,osc,ptlrpc_gss,lustre
obdclass 1556581 32 fid,fld,lmv,mdc,lov,mgc,osc,ptlrpc_gss,lustre,obdecho,ptlrpc
lnet 561656 7 mgc,osc,ptlrpc_gss,lustre,obdclass,ptlrpc,ksocklnd
libcfs 404614 14 fid,fld,lmv,mdc,lov,mgc,osc,lnet,ptlrpc_gss,lustre,obdecho,obdclass,ptlrpc,ksocklnd
Configured this VM as a Lustre client:
[root@pts00433-vm5 lustre-release]# df -T | grep lustre
10.51.225.95@tcp:/whatevs lustre 41377088 50184 39153004 1% /mnt/first_mount
[root@pts00433-vm5 lustre-release]# lfs df -h
UUID bytes Used Available Use% Mounted on
whatevs-MDT0000_UUID 28.4G 58.9M 26.3G 0% /mnt/first_mount[MDT:0]
whatevs-OST0000_UUID 39.5G 49.0M 37.3G 0% /mnt/first_mount[OST:0]
filesystem summary: 39.5G 49.0M 37.3G 0% /mnt/first_mount
Description
I wanted to run the test cases.
Next, on the Ubuntu and RHEL machines (after installing the dependencies), I tried the following:
git clone git://git.hpdd.intel.com/fs/lustre-release.git
cd lustre-release/
sh autogen.sh
./configure --disable-server
make
Next, change to the tests directory:
cd lustre-release/lustre/tests
[root@pts00433-vm5 tests]# ./sanity.sh
Logging to shared log directory: /tmp/test_logs/1479799950
Client: Lustre version: 2.8.60_19_g300739c
MDS: Lustre version: 2.8.60_19_g300739c
OSS: Lustre version: 2.8.60_19_g300739c
Checking servers environments
Checking clients pts00433-vm5 environments
Loading modules from /root/amit/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
debug=vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck
subsystem_debug=all
Setup mgs, mdt, osts
e2label: No such file or directory while trying to open /tmp/lustre-mdt1
Couldn't find valid filesystem superblock.
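Looking at lustre/tests/cfg/local.sh, the default test config formats local loopback targets such as /tmp/lustre-mdt1, which is presumably why e2label fails here on a node that has no formatted target. My understanding from test-framework.sh (untested; CLIENTONLY, mds_HOST, ost_HOST and the other variable names below are assumptions based on those scripts) is that the framework can instead be pointed at the existing servers, along these lines:
# Sketch only: point the test framework at the existing servers
# instead of local loopback targets. Variable names assumed from
# lustre/tests/cfg/local.sh and test-framework.sh.
export FSNAME=whatevs              # filesystem name from 'lfs df' above
export mgs_HOST=10.51.225.95       # x86-VM1: MGS/MDS
export mds_HOST=10.51.225.95
export ost_HOST=10.51.225.96       # x86-VM2: OSS
export MOUNT=/mnt/first_mount      # where this client already mounts
export PDSH="pdsh -S -Rssh -w"     # remote shell to reach the server nodes
export CLIENTONLY=yes              # do not format/start servers from here
./sanity.sh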
While following the steps from http://wiki.lustre.org/Testing_HOWTO:
[root@pts00433-vm5 tests]# ./auster -rsv -d /opt/results/ sanity --only 0a
Started at Tue Nov 22 13:03:12 IST 2016
Lustre is not mounted, trying to do setup ...
Stopping clients: pts00433-vm5 /mnt/lustre (opts:)
Stopping clients: pts00433-vm5 /mnt/lustre2 (opts:)
Loading modules from /root/amit/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
debug=vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck
subsystem_debug=all
Formatting mgs, mds, osts
Format mds1: /tmp/lustre-mdt1
sh: mkfs.lustre: command not found
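mkfs.lustre is a server-side utility, so a tree configured with --disable-server never builds it; auster's -r flag asks the framework to reformat the targets, which cannot work from a client-only build. Under the same assumed variables as in the sketch above, a reformat-free run might look like this (again a sketch, not verified):
# Sketch: drop -r (reformat) and reuse the client-only settings;
# any formatting would have to be done on the server nodes, where
# the server utilities are installed.
CLIENTONLY=yes FSNAME=whatevs MOUNT=/mnt/first_mount \
    mds_HOST=10.51.225.95 ost_HOST=10.51.225.96 \
    ./auster -sv -d /opt/results/ sanity --only 0a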
So our multi-node Lustre setup is as below:
1. Machine 1: acting as Management Server (MGS), Management Target (MGT), Metadata Server (MDS), Metadata Target (MDT) - CentOS 6.6 x86-VM1 - 10.51.225.95 - x86_64
2. Machine 2: acting as Object Storage Server (OSS), Object Storage Target (OST) - CentOS 6.8 x86-VM2 - 10.51.225.96 - x86_64
Lustre clients:
3. Machine 4: acting as Lustre client - Ubuntu 16.04.1 LTS (Xenial Xerus) - 10.77.67.146 - ppc64le
4. Machine 5: acting as Lustre client - Red Hat 7.2 - 10.77.67.129 - ppc64le
Can anyone help point me to how to execute the sanity.sh script on this Lustre client machine?
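For reference, from the Testing_HOWTO I would expect a multi-node test config for the topology above to look roughly like the sketch below. The file name, the RCLIENTS variable, and the auster -f selection are assumptions on my part, following the cfg/local.sh and cfg/ncli.sh conventions:
# Hypothetical lustre/tests/cfg/multinode.sh for this topology
FSNAME=whatevs
mgs_HOST=10.51.225.95         # Machine 1: MGS/MGT/MDS/MDT
mds_HOST=10.51.225.95
ost_HOST=10.51.225.96         # Machine 2: OSS/OST
RCLIENTS="10.77.67.146"       # Machine 4: additional ppc64le client
PDSH="pdsh -S -Rssh -w"       # passwordless ssh to all nodes assumed
NETTYPE=tcp
which would then, if I read auster's usage correctly, be selected with:
./auster -f multinode -sv -d /opt/results/ sanity --only 0a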