Archive for zfs

A conundrum – ESXi and Solaris ZFS

Despite the change to a full hypervisor I wanted to keep using ZFS to manage the storage of data files: partly because it seems like a great system whose reporting and management are excellent and simple, and partly because that's what I started out with and I'm too bloody-minded to change.

One of my inspirations for this came from solarisinternals.com, who have moved to a similar setup. There they have set up ZFS to access the disks on ESX Server using 'raw disk mode', which is obviously exactly what I want, isn't it? This would be my preferred route. I know it goes against the virtual ethos, but I could whack the drives into any Solaris box and 'zpool import' them in the event of a hardware failure. It gives me the option of keeping the drives without having to offload all the data and re-format, if I run into problems with ESXi in the future.

Well, NOT SO FAST… it's a case of needing to do the research and read the small print. ESXi 4.0 doesn't support virtual machines accessing local raw disk devices ('raw disk mode'); it seems that older versions (or maybe other VMware products) do or did. RDM or 'Raw Disk Mapping' is a supported option in ESXi 4, but that refers to mapping onto raw disks over a SAN (NOT LOCALLY).

I have created an OpenSolaris 2009.06 virtual machine running on the hypervisor. The root pool (system disk) of this is in fact a .vmdk file sitting on the internal mirrored pair of drives in the server. My intention was to add additional drives that would be managed directly under OpenSolaris, BUT this just doesn't seem possible… ESXi 4.0 doesn't allow raw device or direct disk access.

Research is ongoing; it seems I have a few choices.

1. Use the hardware RAID capabilities of the SAS/SATA RAID cards, then just use ZFS for quotas/snapshots and general management. BUT I'm nervous about recovering these should a controller fail: in order to recover the data I'd need to buy a very specific and very expensive RAID controller (or wait ages until the right thing came up on eBay). Also, RAID-Z in ZFS avoids the RAID-5 write hole.
2. Create virtual disks (.vmdk files) on the actual disks and let ZFS manage these as if they were real disks. I can see a disaster recovery route for this option: a disk could be pulled out, connected to any SATA controller and then read from within ESXi (I think), but I need to check that. This would have to be slower, wouldn't it?
3. Forget ZFS completely. Use hardware RAID and create another virtual machine, a small-footprint FreeNAS or similar box, and let that take care of all the file-serving work. I'd still need to think about a possible route for recovering the data in the event of a hardware failure.
4. Find a workaround – there is always a workaround!
5. Sod it – switch to Hyper-V, which does seem to support it!

ESXi access to local disks as raw devices – workarounds
1. Use vmkfstools: there do seem to be worked examples here: http://www.hardforum.com/showthread.php?t=1441318
and here: http://www.hardforum.com/showthread.php?t=1429640 (see the sketch after this list).
2. Edit the configuration files by hand,
discussed here: http://communities.vmware.com/thread/145589?tstart=0&start=0
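For what it's worth, the vmkfstools trick from those threads seems to boil down to hand-crafting an RDM pointer file from the ESXi console. This is only a sketch of the idea, untested by me, and the vml device name and datastore path below are placeholders that would need replacing with real ones.

First list the local disks the console can see:

ls -l /vmfs/devices/disks/

then create an RDM pointer file on an existing datastore (-z gives physical compatibility; -r would give virtual compatibility):

vmkfstools -z /vmfs/devices/disks/vml.0100000000XXXXXXXX /vmfs/volumes/datastore1/rdms/disk1-rdm.vmdk

The resulting .vmdk can then be attached to the OpenSolaris VM as an existing disk.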

Decisions, decisions…


Solaris Home Server: SMB setup

What is needed is obviously a simple Windows file-sharing setup – but OpenSolaris doesn't come with this out of the box, so it needs to be installed and then configured.

It should be noted that there are two ways to add Windows (CIFS, aka SMB) file sharing: one is to add the Solaris port of Samba; the other, which promises to be more lightweight if a bit less feature-rich, is Sun's in-kernel CIFS server package. It's important to note that the two cannot run together!

Install CIFS server components

In OpenSolaris 2009.06 there are two packages needed. I installed them from the command line…
host:#pfexec pkg install SUNWsmbskr SUNWsmbs

then reboot

host:#pfexec reboot

(You can also install them using the package manager GUI).
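To double-check that both packages really went on, pkg can list them:

host:# pkg list SUNWsmbs SUNWsmbskr

Both should show up with an installed state.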

Make the SMB service start automatically at boot

host:#pfexec svcadm enable -r smb/server
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances.

(apparently the error message doesn't matter!)
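It's worth confirming the service really did come up:

host:# svcs smb/server

which should show the smb/server instance as online.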

Setup the PAM authentication needed

To give SMB access to OpenSolaris users, edit the /etc/pam.conf file to contain the following line:

other password required pam_smb_passwd.so.1 nowarn
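If you'd rather not open an editor, appending the line from the shell should do the same job (assuming a stock pam.conf that doesn't already have it):

host:#pfexec sh -c 'echo "other password required pam_smb_passwd.so.1 nowarn" >> /etc/pam.conf'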

Then the password must be re-created for each user that wants access to the SMB service – the CIFS server can only capture a user's SMB password hash when the password is next set.

host:#passwd john

Join the appropriate workgroup

host:#pfexec smbadm join -w OTB
(OTB is the household SMB workgroup.)
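You can then confirm the membership took:

host:# smbadm list

which should report OTB as the current workgroup.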

Say these magic words…

Apparently this will prevent problems later when defining access permissions and using the Java Web Console tools.
host:#pfexec zfs set aclinherit=passthrough rpool

Create the ZFS shares

Create a ZFS filesystem within the rpool mirror for sharing pictures:
host:#pfexec zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=name=pictures rpool/pictures
and one for the kids' videos:
host:#pfexec zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=name=kids_videos rpool/kids_videos

Check the status of the SMB shares with…
host:# sharemgr show -vp
default nfs=()
zfs
    zfs/rpool/pictures smb=()
          pictures=/rpool/pictures
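The ZFS-side properties can also be checked per filesystem:

host:# zfs get sharesmb,casesensitivity,nbmand rpool/pictures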

Set File Permissions

At the end of this I ended up with a /rpool/pictures/ directory and a pictures share which anyone can read but only the root user can write to. To control access to the directory/share I've set up two levels of access.

First I took ownership of the share's directory (in this case pictures).
host:#pfexec chown john /rpool/pictures

I want two layers of access: read-only for unprivileged users (like the kids) and read/write for the grown-ups.
User      Groups
media     other,media
joseph    other,media
sarah     staff,grownups,media
john      staff,grownups,media

I know that I should work out the correct ACLs, but I just went into the OpenSolaris file manager, right-clicked on the folder and went to the Permissions tab. I set staff to have full access and 'others' (e.g. those in the media group) only read access. If I'm struck by a flamingo I'll sort out the correct ACL setup.

So I can restrict write permissions for the shares that contain anything valuable (like the family photos) and also restrict read access to the films I have ripped that are certificate 12 and over.
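For when that flamingo strikes: my understanding is that the command-line route would use the NFSv4-style ACL syntax that Solaris bolts onto chmod, something along these lines (untested guesswork on my part, using the groups from the table above):

host:#pfexec chmod A+group:grownups:read_data/write_data/execute:file_inherit/dir_inherit:allow /rpool/pictures
host:#pfexec chmod A+group:media:read_data/execute:file_inherit/dir_inherit:allow /rpool/pictures

and /usr/bin/ls -Vd /rpool/pictures shows the resulting ACL entries.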

For the future???

1. Automount home directories
Apparently you just create a file /etc/smbautohome and add the line… *   /export/home/&
and magically the home directory of the Unix user will be shared (there's a one-liner for this after the list).

2. Proper ACLs for the different levels of access to the files being served
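The autohome half at least looks like a one-liner (again untested; each line of the map is 'username  path', with * matching any user and & standing for the matched name):

host:#pfexec sh -c 'echo "*   /export/home/&" > /etc/smbautohome'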

Reference sources

There’s a good guide here – http://wiki.genunix.org/wiki/index.php/Getting_Started_With_the_Solaris_CIFS – which includes details of actually installing the service. There are a few gotchas, like not trying to run Samba at the same time as the kernel SMB service. NOTE: no SMB/CIFS server is installed by default.
The other definitive source of information is the guide produced by Sun – http://docs.sun.com/app/docs/doc/820-2429

Other sources of information:

This description of setting up an OpenSolaris file server: http://www.h-online.com/open/OpenSolaris-as-a-file-server–/features/112212

This description of how to install the SMB packages from the OpenSolaris Express DVD – by looping back the ISO image as a filesystem – is a useful reference! http://osdir.com/ml/os.solaris.opensolaris.storage.general/2008-03/msg00112.html


Solaris Home Server: ZFS setup

Setting up the ZFS file system:

I really want to make use of the ability to mirror the root filesystem for redundancy – the approach seems to be to install using a single disk, then add the second disk later and create the mirror. These instructions worked for OpenSolaris 2009.06 – but it seems to be a moving target, so they may not be relevant to later releases.

This was the webpage used as a reference: http://darkstar-solaris.blogspot.com/2008/09/zfs-root-mirror.html

Initial install
At the initial install the system has a single 250GB drive. At install time I opted NOT TO USE THE ‘WHOLE DISK’ OPTION – since rpool cannot be mirrored later if you choose this!
So I installed Solaris by creating a single solaris partition on the disk taking up 100% of the available space.

Physical disk setup
In my setup I have four drives in an Icy Dock multi-bay caddy, thus:
TOP
1    SATA Ch 1
2    SATA Ch 2
3    SATA Ch 5    c8d0    rpool BOOT MIRROR, first device
4    SATA Ch 6    c9d0    rpool BOOT MIRROR, second device (added after install)
BOTTOM

Partition the new disk

After physically adding the disk it needs to be formatted. I used fdisk to create a single Solaris partition (100% of the disk), then added a label to the disk.

host:#pfexec format
Choose fdisk from the menu and then label.

Next copy an identical partition table from the old disk to the new one so they can be mirrored. The instructions all say this will do it.

host:#pfexec prtvtoc /dev/rdsk/c8d0s2 | pfexec fmthard -s - /dev/rdsk/c9d0s2

I found I needed to reboot and do it in two stages…
1st, save the old disk's partition table to a file:
host:#pfexec prtvtoc /dev/rdsk/c8d0s2 > c8d0.part
2nd, copy it from there to the new disk:
host:#pfexec fmthard -s c8d0.part /dev/rdsk/c9d0s2
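Either way, it's worth eyeballing that the new disk's label now matches the old one:

host:#pfexec prtvtoc /dev/rdsk/c9d0s2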

Create Mirror

host:#pfexec zpool attach -f rpool c8d0s0 c9d0s0
The -f is because the disks are not exactly the same geometry.
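The attach kicks off a resilver onto the new disk; progress can be watched with zpool status, and both c8d0s0 and c9d0s0 should end up ONLINE under a mirror vdev once it finishes:

host:# zpool status rpool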

Add boot loader to new disk

host:#pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c9d0s0

So now I have a mirrored rpool that will boot if either drive fails (err, I think!). I guess I’ll have to pull the drives one by one and check that it actually works.
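A gentler test than physically pulling drives might be to offline one half of the mirror, reboot, and then bring it back – untested by me so far, and the BIOS still has to be willing to boot from the surviving disk:

host:#pfexec zpool offline rpool c8d0s0
(reboot and check the system comes up)
host:#pfexec zpool online rpool c8d0s0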



New Home Server: Feeling the need for ZFS

The Linux file server, which has worked faultlessly, is too boring – and I really want to have a play with OpenSolaris and ZFS. So I’ve taken the chance to upgrade the server and install OpenSolaris (the 2009.06 release).

The new server is based on the following hardware;
ABit AB9 motherboard
Celeron 430 CPU
2GB OCZ memory (running dual channel)
2 x 250GB Maxtor SATA hard drives
ICY DOCK 4 into 3 SATA Caddy.
plus I’ve thrown in an old Nvidia 6200 video card so I can see something.

This is an improvement over the previous setup, since I can use SATA II (3 Gb/s) and the SATA controller for all four channels (disks) is not on the PCI bus, so throughput isn’t limited by a single shared PCI bus. In addition, the Abit motherboard gives an upgrade path (all the way to a quad-core processor if I wanted).

The whole thing is in a dirt-cheap 4U rack-mount ATX chassis.

The first thing was to install Solaris, so I got a copy of 2009.06, put a CD drive in the server (temporarily), and it just works… good start!
