FreeBSD ZFS Settings On i386 Hardware

AMD64 kernel FreeBSD machines are good about autotuning their kernel memory limits. i386, on the other hand, needs to be tuned manually.

1. Rebuild your kernel (a build sketch follows these steps)
a. Disable/remove all drivers you are not using, or at least those you are highly unlikely to ever use.
b. Add: options KVA_PAGES=512
c. Recompile and install the new kernel

2. Add these parameters to /boot/loader.conf
a. vm.kmem_size="1024M"
b. vm.kmem_size_max="2048M"
c. vfs.zfs.arc_max="256M"
d. vfs.zfs.vdev.cache.size="40M"
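
A minimal sketch of the rebuild in step 1, assuming a custom config named ZFSI386 copied from GENERIC (the config name is just an example; trim the device list to match your hardware):

# Build and install a custom i386 kernel with KVA_PAGES=512
cd /usr/src/sys/i386/conf
cp GENERIC ZFSI386
# edit ZFSI386 and remove unused device/driver lines, then:
echo 'options KVA_PAGES=512' >> ZFSI386
cd /usr/src
make buildkernel KERNCONF=ZFSI386
make installkernel KERNCONF=ZFSI386
# reboot into the new kernel before loading ZFS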

This will get the machine online without it crashing with vm.kmem_size errors. ZFS will quickly bring an untuned i386 machine to its knees with kernel panics!

Adjust the above values to taste. My test platform is a dual Xeon with 4 GB of RAM.
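
After rebooting, you can confirm the tunables took effect with sysctl (a read-only check; the values reported will reflect whatever you set in loader.conf):

sysctl vm.kmem_size vm.kmem_size_max vfs.zfs.arc_max vfs.zfs.vdev.cache.size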

FreeBSD LAGG rc.conf

This is what your rc.conf should contain to configure LAGG with FreeBSD:

### LAGG NFS Interface ###
ifconfig_bce3="mtu 9000 up"
ifconfig_bce0="mtu 9000 up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto roundrobin laggport bce3 laggport bce0"
ipv4_addrs_lagg0="10.10.40.10/24"

Change the bce* entries to whatever network interfaces your server is using.

Yes, the "mtu 9000 up" value is correct as written!
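
To apply the rc.conf changes without rebooting and verify the aggregate came up (assuming the interface names above), something like this should work; note that restarting netif will briefly drop connectivity:

service netif restart && service routing restart
ifconfig lagg0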

ZFS Replication on FreeBSD

This script will replicate a ZFS pool to another FreeBSD machine. After the initial copy, the sync process is quick, depending on how much data has changed.

Download this shell script: http://www.tediosity.com/zfsrep.sh

This script was originally written by another author for Solaris; I have fixed it to work on FreeBSD.

I chose to use /root/zfsrep as the script location.

mkdir -p /root/zfsrep/zfsrep.snapshots
touch /root/zfsrep/zfsrep.log
cp zfsrep.sh /root/zfsrep/
Edit zfsrep.sh (vi /root/zfsrep/zfsrep.sh) and modify the e-mail address and the script location (if you are not using /root/zfsrep).

Initial run:

/root/zfsrep/zfsrep.sh sinit nfs/datastore nfs/datastore 10.10.30.20

Subsequent runs:

/root/zfsrep/zfsrep.sh sync nfs/datastore nfs/datastore 10.10.30.20

Create a cronjob and forget about it.
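
As an example, a root crontab entry that runs the incremental sync every hour might look like this (the schedule is just illustrative; appending to the log file is optional):

0 * * * * /root/zfsrep/zfsrep.sh sync nfs/datastore nfs/datastore 10.10.30.20 >> /root/zfsrep/zfsrep.log 2>&1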

If you make any changes, bug fixes, or enhancements, please e-mail them to me! I love seeing other people's creativity and putting their ideas to work in a production environment.

email: admin -at- tediosity.com

Setup NIS + Red Hat + CentOS + Linux

The following describes a procedure to set up the NIS network name service under Red Hat Linux. This is geared toward a small installation with only one domain; however, it should be fairly evident how to add more NIS domains. The NIS domain name has nothing to do with any DNS naming convention being used.

In these examples, the following conventions are used:
NIS domain: "internal"
Code or configuration file data: bold
Root prompt on NIS master server: master#
Root prompt on NIS client host: client#

Setting up a NIS master server:

yum install yp-tools ypbind ypserv portmap ntp

Set up the "ntpd" service or otherwise make sure the host's clock is synchronized.
ntpdate pool.ntp.org
chkconfig ntpd on
/etc/init.d/ntpd start

Edit /etc/yp.conf:

domain internal server ip.of.nis.server

Edit /etc/ypserv.conf:

[The below settings are, by default, activated in CentOS config]
dns: no
files: 30
xfr_check_port: yes
* : * : shadow.byname : port
* : * : passwd.adjunct.byname : port

Edit /etc/sysconfig/network:

NISDOMAIN="internal"

Set NIS domain name:

master# domainname internal
master# ypdomainname internal

Create file /var/yp/securenets:

host 127.0.0.1
255.255.255.0 10.0.0.0

Make sure the "portmap" service is running:

master# service portmap start
master# chkconfig portmap on

Edit File: /etc/nsswitch.conf

passwd: files nis
shadow: files nis
group: files nis

Start ypserv service:

master# service ypserv start

Check that it’s listening:

master# rpcinfo -u localhost ypserv

You should see:

program 100004 version 1 ready and waiting
program 100004 version 2 ready and waiting

Initialize the NIS maps:

master# /usr/lib/yp/ypinit -m

Specify the local hostname, press Ctrl-D, answer y, and let it finish.
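
Note that ypinit builds the maps only once. Whenever you later add or change users or groups on the master, the maps have to be rebuilt and pushed; the usual way is:

master# cd /var/yp && make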

Start up ypbind, yppasswdd, ypxfrd:

master# service ypbind start
master# service yppasswdd start
master# service ypxfrd start

Set YP services to run on boot-up:

master# chkconfig ypserv on
master# chkconfig ypbind on
master# chkconfig yppasswdd on
master# chkconfig ypxfrd on

NIS client host setup

Required packages: yp-tools ypbind portmap

Edit /etc/sysconfig/network:

NISDOMAIN=internal

Edit /etc/yp.conf:

domain internal server ip.of.master.server

Edit /etc/hosts:

ip.of.master.server hostname.domain hostname

Set NIS domain-name:

client# domainname internal
client# ypdomainname internal

Edit /etc/nsswitch.conf:

passwd: files nis
shadow: files nis
group: files nis

Make sure the portmap service is running:

client# service portmap start
client# chkconfig portmap on

The /etc/hosts.allow file will need rules allowing access from localhost and the NIS master server.
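
A minimal example of those rules, assuming the master's address is ip.of.master.server (substitute your real addresses or network):

# /etc/hosts.allow
portmap : 127.0.0.1 ip.of.master.server
ypbind : 127.0.0.1 ip.of.master.server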

Start ypbind service:

client# service ypbind start
client# chkconfig ypbind on

Test it out:

client# rpcinfo -u localhost ypbind
client# ypcat passwd
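
If the bind worked, ypcat will dump the passwd map from the master. You can also confirm which server the client bound to and that nsswitch is consulting NIS (the user name below is just a placeholder):

client# ypwhich
client# getent passwd someuser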

ZFS + List all snapshots

zfs list -t snapshot

Example Output:

nas1# zfs list -t snapshot
NAME                                    USED  AVAIL  REFER  MOUNTPOINT
nfs/datastore@rep-init-20110113013713    18K      -    21K  -
nfs/datastore@rep20110113013825         311M      -   311M  -
nfs/datastore@rep20110113015314            0      -    68K  -
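
Replication snapshots accumulate over time. Once an old snapshot is no longer needed as an incremental base, it can be removed with zfs destroy (the snapshot name below is taken from the listing above):

nas1# zfs destroy nfs/datastore@rep20110113013825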

ZFS NFS Export

Activate NFS sharing on the dataset (using the nfs/datastore dataset from the earlier examples):

zfs set sharenfs=on nfs/datastore

Now restrict access by IP range:

zfs set sharenfs="-maproot=root -alldirs -network 10.10.40.0 -mask 255.255.255.0" nfs/datastore

Verify using showmount:

nas1# showmount -e
Exports list on localhost:
/nfs/datastore 10.10.40.0
/nfs 10.10.40.0
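
From a FreeBSD client on the 10.10.40.0/24 network, the export can then be mounted; the server address and mount point below are just examples based on the LAGG configuration earlier:

mkdir -p /mnt/datastore
mount -t nfs 10.10.40.10:/nfs/datastore /mnt/datastore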

FreeBSD + HAST + CARP + NFS

FreeBSD recently introduced a disk replication framework: HAST (Highly Available Storage).

This is FreeBSD's answer to DRBD for Linux.

Some very good blog posts, with scripting, can be found here:

FreeBSD + HAST + CARP + NFS

HAST with ZFS

My issues with HAST are its feeling of instability and the numerous points of failure it adds... sloppy design? The system essentially adds "virtual hard drives": these HAST devices sit on top of the drives to be mirrored, receive the data, and then distribute it to the real hard drives on the primary server and to the secondary server. I found this to be a nightmare, more Linux style than FreeBSD.

I would not consider putting HAST in production with ZFS. The ZFS pool is created on top of the "virtual hard drives (devices)", so that is three layers (ZFS + HAST devices + actual hard drives). If HAST messes up, your ZFS pool disappears. If you restart the HAST daemon you have to make sure to export the ZFS pool first; if you do not, ZFS will lock up and a hard reboot is needed. Then on reboot HAST has to be live first before your ZFS pool can be reimported. All of the aforementioned means downtime. In my lab experiments I simply removed it from the servers and have deemed it not suitable for 24x7x365 applications.

For a ZFS pool replication script visit this post: http://www.tediosity.com/2011/05/31/zfs-replication-on-freebsd/

VMware convert thick to thin disk

If you have VMware vCenter, click Migrate and select thin provisioning as the destination disk format.

For those of us who do not have VMware vCenter:

1. Shutdown the VM
2. SSH to the ESXi machine and type:

vmkfstools -i /vmfs/volumes/datastore1/NAME-OF-VM/NAME-OF-VM.vmdk /vmfs/volumes/datastore1/NAME-OF-VM/NAME-OF-VM-thin.vmdk -d thin -a lsilogic

Once the copy is done, go into the settings of your VM, delete the existing hard disk, and add a new hard disk pointing to the "thin" vmdk you created. Boot your VM; if everything works, use the datastore browser to delete the thick vmdk and you are done.
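
To confirm the new disk really is thin before deleting the original, compare the provisioned size with the blocks actually allocated on the datastore (paths assume the example names above; on VMFS, ls reports the provisioned size while du reports actual usage):

ls -lh /vmfs/volumes/datastore1/NAME-OF-VM/NAME-OF-VM-thin-flat.vmdk
du -h /vmfs/volumes/datastore1/NAME-OF-VM/NAME-OF-VM-thin-flat.vmdk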
