

Hello, and welcome back to the third and final instalment of the “Sharing a GlusterFS volume with NFS-Ganesha on CentOS-7” series. Today we follow on from last time to set up LVM (Logical Volume Management). There are quite a few steps, but they are all pretty straightforward, so don’t be daunted; we need to complete these steps on all four nodes. There is a pretty good write-up by DigitalOcean explaining what I am doing here in more detail.

First, list the block devices to identify the gluster disk.

lsblk

Open fdisk, the disk partitioning utility, for the gluster disk (in this case /dev/sdb):

fdisk /dev/sdb

The first command you enter should be “n” for new partition:

Command (m for help): n 

Then type in “p” for primary partition:

Select (default p): p 

Partition number should be 1

Partition number (1-4, default 1): 1 

Leave the default for the first sector:

First sector (2048-104857599, default 2048): <ENTER DEFAULT> 

Using default value 2048 

Leave the default for the last sector (defaults to the end of the disk):

Last sector, +sectors or +size{K,M,G} (2048-104857599, default 104857599): <ENTER DEFAULT> 

Now enter “t” to change the partition type:

Command (m for help): t 

If it asks you which partition, select 1:

Partition number (1, default 1): 1 

Enter the hex code 8e, which stands for Linux LVM

Hex code (type L to list all codes): 8e 

You should now see the following.

Changed type of partition 'Linux' to 'Linux LVM' 

Finally, we write the changes to disk with “w”

Command (m for help): w 

You should now see the following.

The partition table has been altered! 
Calling ioctl() to re-read partition table. 
Syncing disks. 

Now check that the new partition is showing up as type 8e (Linux LVM).

fdisk -l | grep LVM 

You should now see the following.

/dev/sdb1            2048    52427775    26212864   8e  Linux LVM 
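
If you would rather not step through fdisk interactively on all four nodes, the same partition can be created non-interactively with parted instead (a sketch, assuming the gluster disk is /dev/sdb on each node):

parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 1MiB 100%
parted -s /dev/sdb set 1 lvm on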

Now that the disks are partitioned, we can create the physical volumes.

pvcreate /dev/sdb1

You should now see the following.

Physical volume "/dev/sdb1" successfully created. 
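
You can double-check the physical volume with pvs before moving on:

pvs /dev/sdb1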

Let’s create the volume group vg1 on the new physical volume.

vgcreate vg1 /dev/sdb1  

You should now see the following.

Volume group "vg1" successfully created. 

Confirm creation with vgdisplay

vgdisplay 

You should now see something similar to the following.


  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               49.99 GiB
  PE Size               4.00 MiB
  Total PE              12798
  Alloc PE / Size       0 / 0
  Free  PE / Size       12798 / 49.99 GiB
  VG UUID               Rjt3AJ-8qnK-Bsp8-0Rrl-IFCR-pcrz-Z6fEtE

Now we need to create the logical volume, using 100% of the free space in the volume group.

lvcreate -n gluster-brick -l 100%FREE /dev/vg1 

You should now see the following.

  Logical volume "gluster-brick" created. 

Confirm with lvdisplay

lvdisplay 

You should now see the following.

  --- Logical volume ---
  LV Path                /dev/vg1/gluster-brick
  LV Name                gluster-brick
  VG Name                vg1
  LV UUID                y0uQZi-qBF6-Sbky-gr8L-wDBO-wKIj-aFL1ME
  LV Write Access        read/write
  LV Creation host, time demo.qgi.qld.gov.au, 2018-05-08 15:17:55 +1000
  LV Status              available
  # open                 0
  LV Size                <25.00 GiB
  Current LE             6399
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:7

Here we make our bricks directory, format the logical volume as ext4, add the mount configuration to the /etc/fstab file, mount the drive and create our export directory. Do this on all four nodes.

sudo mkdir -p /bricks/demo
sudo mkfs -t ext4 /dev/vg1/gluster-brick
echo '/dev/vg1/gluster-brick        /bricks/demo    ext4    defaults        0 0' | sudo tee -a /etc/fstab
sudo mount /dev/vg1/gluster-brick /bricks/demo
sudo mkdir -p /bricks/demo/export
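
To be sure the fstab entry is correct (and not just the manual mount), you can remount everything from fstab and check the result on each node:

sudo umount /bricks/demo
sudo mount -a
df -hT /bricks/demo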

Now we’ll create a “notAsSimple” gluster volume, striped across two nodes, then replicated to the other two servers, with an arbiter brick on Node1 and Node3. Then we disable the built-in gluster NFS (it would conflict with NFS-Ganesha) and start the volume. Do this on Node1.

sudo gluster volume create notAsSimple stripe 2 replica 3 arbiter 1 Node1:/bricks/demo/export Node2:/bricks/demo/export Node1:/bricks/arbiter/export Node3:/bricks/demo/export Node4:/bricks/demo/export Node3:/bricks/arbiter/export
sudo gluster volume set notAsSimple nfs.disable on  
sudo gluster volume start notAsSimple  
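
Before enabling Ganesha it’s worth confirming that all six bricks are online (still on Node1). Note that the arbiter bricks use /bricks/arbiter/export on Node1 and Node3; if the create command complains about a missing path, create that directory on those two nodes first.

sudo gluster volume info notAsSimple
sudo gluster volume status notAsSimple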

Then we enable Ganesha. Still on Node1.

sudo gluster nfs-ganesha enable  
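
Enabling Ganesha kicks off the HA setup from the previous post, so it’s worth checking that the service and the pacemaker cluster actually came up on each node (a quick check, assuming the stock CentOS-7 package and service names):

sudo systemctl status nfs-ganesha
sudo pcs status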

Finally, we export the volume. Still on Node1.

sudo gluster vol set notAsSimple ganesha.enable on 
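
A quick way to confirm the export is actually being served is to query one of the virtual IPs with showmount (node1v here, from the HA setup in the previous post):

showmount -e node1v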

Everything should now be up and running. You can test mount the NFS volume using one of the virtual IPs.

sudo mount node1v:/notAsSimple /mnt/NFS-ganesha 
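
The /mnt/NFS-ganesha mount point has to exist first, so if the mount fails, create it and try again, then check the share is mounted:

sudo mkdir -p /mnt/NFS-ganesha
sudo mount node1v:/notAsSimple /mnt/NFS-ganesha
df -hT /mnt/NFS-ganesha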

Hopefully, everything is working now, and with the help of kkeithley’s post, my original post, my second post and this post, you have a good handle on what is required to set up a replicated gluster volume and then export it with NFS-Ganesha.

Until next time!
