Setup NIS + Red Hat + CentOS + Linux

The following describes a procedure for setting up the NIS network name service under Red Hat Linux. It is geared toward a small installation with only one domain, but it should be fairly evident how to add more NIS domains. The NIS domain name has nothing to do with any DNS naming convention in use.

In these examples, the following conventions are used:
NIS domain: "internal"
Code or configuration file data: bold
Root prompt on NIS master server: master#
Root prompt on NIS client host: client#
Setting up a NIS master server:

master# yum install yp-tools ypbind ypserv portmap ntp

Set up the "ntpd" service, or otherwise make sure the host's clock is synchronized:

master# ntpdate pool.ntp.org
master# chkconfig ntpd on
master# /etc/init.d/ntpd start

Edit /etc/yp.conf:

domain internal server ip.of.nis.server

Edit /etc/ypserv.conf:

[The settings below are enabled by default in the CentOS configuration.]
dns: no
files: 30
xfr_check_port: yes
* : * : shadow.byname : port
* : * : passwd.adjunct.byname : port

Edit /etc/sysconfig/network:

NISDOMAIN=internal

Set NIS domain name:

master# domainname internal
master# ypdomainname internal

Create the file /var/yp/securenets (each line is either "host" followed by an address, or a netmask followed by a network):

host 127.0.0.1
255.255.255.0 10.0.0.0

Make sure the "portmap" service is running:

master# service portmap start
master# chkconfig portmap on

Edit /etc/nsswitch.conf:

passwd: files nis
shadow: files nis
group: files nis

Start ypserv service:

master# service ypserv start

Check that it's listening:

master# rpcinfo -u localhost ypserv

You should see:

program 100004 version 1 ready and waiting
program 100004 version 2 ready and waiting

Initialize the NIS maps:

master# /usr/lib/yp/ypinit -m

Specify the local hostname, press Ctrl-D, answer y, and let it finish.
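
Whenever users or groups change on the master later, the NIS maps have to be rebuilt from the source files. On Red Hat/CentOS the NIS Makefile lives in /var/yp:

master# cd /var/yp && make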

Start up ypbind, yppasswdd, ypxfrd:

master# service ypbind start
master# service yppasswdd start
master# service ypxfrd start

Set YP services to run on boot-up:

master# chkconfig ypserv on
master# chkconfig ypbind on
master# chkconfig yppasswdd on
master# chkconfig ypxfrd on
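
To double-check that everything registered with the portmapper, list the registered RPC services; ypserv, ypbind, and yppasswdd should all appear (ypxfrd typically registers under the name fypxfrd):

master# rpcinfo -p localhost | grep yp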

NIS client host setup

Required packages: yp-tools ypbind portmap

Edit /etc/sysconfig/network:

NISDOMAIN=internal

Edit /etc/yp.conf:

domain internal server ip.of.master.server

Edit /etc/hosts:

ip.of.master.server hostname.domain hostname

Set NIS domain name:

client# domainname internal
client# ypdomainname internal

Edit /etc/nsswitch.conf:

passwd: files nis
shadow: files nis
group: files nis

Make sure the portmap service is running:

client# service portmap start
client# chkconfig portmap on

The /etc/hosts.allow file will need rules allowing access from localhost and the NIS master server.
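
A minimal example, reusing the ip.of.master.server placeholder from /etc/yp.conf above (the format is daemon : client-list; other tcp_wrappers-aware daemons can be listed the same way):

portmap : 127.0.0.1 ip.of.master.server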

Start ypbind service:

client# service ypbind start
client# chkconfig ypbind on

Test it out:

client# rpcinfo -u localhost ypbind
client# ypcat passwd
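
If the bind worked, ypwhich prints the name of the NIS server the client is bound to, and getent shows the NIS accounts merged in after the local files:

client# ypwhich
client# getent passwd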

Dell 2970 – Crashes / Reboots

Error on screen:

HyperTransport error caused a system reset Embedded I/O Bridge Device 2

SEL will show:

Chipset Err: Critical Event Sensor PCI Err (BUS 0 Device 7 Function 0) was asserted

There is a discussion about it here (with no real solution):
http://en.community.dell.com/support-forums/servers/f/946/t/19281276.aspx

I replaced both power supplies with known-good units from a Dell 2950. No effect.

My SEL was filled with these errors and, obviously, my server was rebooting frequently.

I noticed no common trigger: the OS did not matter, CPU load did not matter, disk activity did not matter, and so on.

THE SOLUTION THAT FIXED THIS ISSUE WAS A REPLACEMENT MOTHERBOARD FROM DELL. THE ORIGINAL DESIGN IS CURSED.

ZFS + List all snapshots

zfs list -t snapshot

Example Output:

nas1# zfs list -t snapshot
NAME                                    USED  AVAIL  REFER  MOUNTPOINT
nfs/datastore@rep-init-20110113013713    18K      -    21K  -
nfs/datastore@rep20110113013825         311M      -   311M  -
nfs/datastore@rep20110113015314            0      -    68K  -
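
For reference, snapshots like these are created and removed as follows (the snapshot name here is just an example):

nas1# zfs snapshot nfs/datastore@rep20110113020000
nas1# zfs destroy nfs/datastore@rep20110113020000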

ZFS NFS Export

Activate NFS sharing on the dataset (nfs/datastore, from the example above):

zfs set sharenfs=on nfs/datastore

Now restrict access by IP range (these are FreeBSD exports-style options):

zfs set sharenfs="-maproot=root -alldirs -network 10.10.40.0 -mask 255.255.255.0" nfs/datastore

Verify using showmount:

nas1# showmount -e
Exports list on localhost:
/nfs/datastore 10.10.40.0
/nfs 10.10.40.0
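
To double-check the property and test the export from a host inside the allowed range (hostname and mount point are examples):

nas1# zfs get sharenfs nfs/datastore
client# mount -t nfs nas1:/nfs/datastore /mnt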

FreeBSD + HAST + CARP + NFS

FreeBSD recently introduced a disk replication facility: HAST (Highly Available Storage).

This is FreeBSD's answer to DRBD for Linux.

Some very good blog posts, with scripting, can be found here:

FreeBSD + HAST + CARP + NFS

HAST with ZFS

My issues with HAST are its feeling of instability and the number of extra failure points it adds (sloppy design?). The system essentially adds virtual hard drives: these virtual devices sit on top of the drives to be mirrored, receive the data, and then distribute it to the real hard drives on the primary server and to the secondary server. I found this to be a nightmare, more Linux-style than FreeBSD.

I would not consider putting HAST in production with ZFS. The ZFS pool is created on top of the virtual devices, so there are three layers (ZFS + HAST devices + actual hard drives). If HAST messes up, your ZFS pool disappears. If you restart the HAST daemon, you have to export the ZFS pool first; if you do not, ZFS will lock up and a hard reboot is needed. Then, on reboot, HAST has to be live first before your ZFS pool can be reimported. All of the aforementioned means downtime. In my lab experiments I simply removed it from the servers and have deemed it not suitable for 24x7x365 applications.
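
For context, here is a minimal sketch of the layering described above, along the lines of the FreeBSD handbook; the resource name, hostnames, addresses, and disk device are all made-up examples, not a recommended setup. /etc/hast.conf (identical on both nodes):

resource datastore {
  on nas1 {
    local /dev/da1
    remote 10.10.40.2
  }
  on nas2 {
    local /dev/da1
    remote 10.10.40.1
  }
}

Then, on the node acting as primary:

nas1# hastctl create datastore
nas1# /etc/rc.d/hastd onestart
nas1# hastctl role primary datastore
nas1# zpool create tank /dev/hast/datastore

The pool lives on /dev/hast/datastore rather than on da1 itself, which is exactly the extra layer that has to be up before the pool can be imported.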

For a ZFS pool replication script visit this post: http://www.tediosity.com/2011/05/31/zfs-replication-on-freebsd/

Dell Firmware Upgrade

If you have been searching around for the proper ISOs to use, these two are what you need.

This is bootable:
cdu_1.5_core_225_A00.iso

After you have burned and booted from the above CD, download these three files:

http://ftp.us.dell.com/sysman/OM_6.2.0_SUU_A01.iso.001
http://ftp.us.dell.com/sysman/OM_6.2.0_SUU_A01.iso.002
http://ftp.us.dell.com/sysman/OM_6.2.0_SUU_A01.iso.003

The above files need to be joined:

Windows: copy /b OM_6.2.0_SUU_A01.iso.001 + OM_6.2.0_SUU_A01.iso.002 + OM_6.2.0_SUU_A01.iso.003 OM_6.2.0_SUU_A01.iso
Linux: cat OM_6.2.0_SUU_A01.iso.0* > OM_6.2.0_SUU_A01.iso
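
Before burning, you can sanity-check the joined image by loopback-mounting it on Linux (the mount point is an example; add -t iso9660 if auto-detection fails):

mount -o loop OM_6.2.0_SUU_A01.iso /mnt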

Burn the resulting ISO file.

Boot from the first disc (the bootable CDU image), select Update Firmware, and insert the second disc (the joined SUU image) when prompted.

Reset Password Foundry EdgeIron – Asset Recovery

Foundry Models Covered: EIF24G, EIF4802CF, EIF48G, EIF24GS, EIF48GS, etc.

1) Establish a connection to the device on the console port

2) Power the switch on while holding down ctrl-u to access the system file menu (technically you can just hit ctrl-u in the one-second window between powering on and it actually loading, but it is hard to time)

3) You have a few seconds to type the password for the file menu; it is 'mercury'. Clear off the asterisks that may remain from holding ctrl-u down first.

4) Select D to delete all user-defined configurations

5) Enter the file name of the file whose type is "Config File" and confirm the deletion if asked

6) Select Q to reload.

At this point it will boot normally, and the username and password for the unit will be back at the defaults: admin and admin. It's back to default now; have fun. I have no idea why Foundry barely documents this process. Even the user's manual doesn't tell you the password to enter the ROM menu ('mercury'); it says to call tech support to get it.

VMware convert thick to thin disk

If you have VMware vCenter, just click Migrate.

For those of us who do not have VMware vCenter:

1. Shut down the VM
2. SSH to the ESXi machine and type:

vmkfstools -i /vmfs/volumes/datastore1/NAME-OF-VM/NAME-OF-VM.vmdk /vmfs/volumes/datastore1/NAME-OF-VM/NAME-OF-VM-thin.vmdk -d thin -a lsilogic

Once the copy is done, go into the settings of your VM, delete the hard disk, and add a new hard disk pointing to the "thin" vmdk you created. Boot your VM; if it all works, you can use the datastore browser to delete the thick vmdk, and you are done.
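
To confirm the new disk really is thin, compare the provisioned size against the space actually allocated on the datastore; for a thin disk, du reports less than ls (same example paths as above):

ls -lh /vmfs/volumes/datastore1/NAME-OF-VM/NAME-OF-VM-thin-flat.vmdk
du -h /vmfs/volumes/datastore1/NAME-OF-VM/NAME-OF-VM-thin-flat.vmdk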
