Archive for esxi

A conundrum – ESXi and solaris zfs

Despite the change to a full hypervisor I wanted to keep using zfs to manage the storage of data files. Partly because it seems like a great system whose reporting and management are excellent and simple, and partly because that’s what I started out with and I’m too bloody-minded to change.

One of my inspirations for this came from solarisinternals.com, who have moved to a similar setup. There they have set up zfs to access the disks on an ESX server using ‘raw disk mode’, which is obviously exactly what I want, isn’t it? This would be my preferred route. I know it goes against the virtual ethos, but I could whack the drives into any solaris box and ‘zpool import’ them in the event of a hardware failure. It gives me the option to keep the drives without having to offload all the data and re-format if I run into problems with ESXi in the future.
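For my own reference, recovery in that scenario would just be the standard pool import dance. A minimal sketch, assuming the drives have been moved to any solaris box with a working controller (‘tank’ is a made-up pool name):

  zpool import           # scan attached disks and list any importable pools
  zpool import tank      # import the pool named 'tank'
  zpool import -f tank   # force it if the pool wasn't cleanly exported
  zpool status tank      # check all the devices came back healthy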

Well NOT SO FAST….. it’s a case of needing to do the research and read the small print. ESXi 4.0 doesn’t support virtual machines accessing raw disk devices (‘raw disk mode’). It seems that older versions (or maybe other vmware products) do or did. RDM or ‘Raw Device Mapping’ is a supported option in ESXi 4, but that refers to mapping onto raw disks over a SAN (NOT LOCALLY).

I have created an opensolaris 2009.06 virtual machine running on the hypervisor. The root pool or system disk of this is in fact a .vmdk file sitting on the internal mirrored pair of drives in the server. My intention was to add additional drives that would be managed directly under opensolaris. BUT this just doesn’t seem possible… ESXi 4.0 doesn’t allow raw device or direct disk access.

Research is ongoing; it seems I have a few choices.

1. Use the hardware RAID capabilities of the SAS/SATA RAID cards – then just use ZFS to manage quotas/snapshots and the management stuff. BUT I’m nervous about recovering these should a controller fail (I’d be left in a situation where, in order to recover the data, I’d need to buy a very specific and very expensive RAID controller – or wait ages until the right thing came up on ebay). Also, RAID-Z in ZFS avoids write-hole errors.
2. Create virtual disks on the actual disks and use zfs to manage these as if they were real disks (see the sketch after this list). I can see a disaster recovery route for this option: the disk could be pulled out, connected to any sata controller and then read from within ESXi (I think), but I need to check that. This would have to be slower, wouldn’t it?
3. Forget zfs completely. Use hardware RAID and create another virtual machine – a small-footprint freenas or similar box – and let that take care of all the file serving work. I’d still need to think about a possible route for recovering the data in the event of a hardware failure.
4. Find a workaround – there is always a workaround!
5. Sod it – switch to hyper-v, which does seem to support it!
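To make option 2 concrete, here’s a minimal sketch of what I have in mind, assuming one VMFS datastore per physical disk (the sizes, datastore names and paths are all made up):

  # create one big virtual disk on each single-disk datastore (run from the ESXi console or vCLI)
  vmkfstools -c 900g -d zeroedthick /vmfs/volumes/disk1/zfsdisk1.vmdk
  vmkfstools -c 900g -d zeroedthick /vmfs/volumes/disk2/zfsdisk2.vmdk
  # then attach both .vmdk files to the opensolaris guest and let zfs mirror them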

ESXi access to local disks as raw devices – workarounds
1. Use vmkfstools: there do seem to be worked examples
here: http://www.hardforum.com/showthread.php?t=1441318
and here: http://www.hardforum.com/showthread.php?t=1429640
2. Edit the configuration files by hand,
discussed here: http://communities.vmware.com/thread/145589?tstart=0&start=0
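For my notes, the vmkfstools route boils down to something like this (a sketch pieced together from those threads – the device identifier and datastore path below are hypothetical, and since this is unsupported it may well break):

  # find the identifier of the local disk
  ls -l /vmfs/devices/disks/
  # create a physical-mode raw device mapping pointer file on an existing datastore
  vmkfstools -z /vmfs/devices/disks/vml.0100000000XXXXXXXX /vmfs/volumes/datastore1/rdms/disk1-rdm.vmdk
  # (use -r instead of -z for a virtual-mode mapping)
  # then add disk1-rdm.vmdk to the virtual machine as an existing disk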

Decisions, decisions…


Dell SC440

As ever this entry is really intended as a point of reference for my own use rather than other people’s enlightenment.

I’ve been shopping. I’ve been experiencing all kinds of problems with my server setup and wanted to move to an ESXi hypervisor, so I decided to upgrade (perhaps I’m sidegrading, since the spec of the new server is very similar to the old). I’ve come to think that perhaps some of the problems I’ve been having are because I’m using desktop parts: a number of components are poorly supported, perhaps the drivers are a bit buggy, the hardware itself is of lower quality, etc, etc…

I’ve decided to go for all server components and to choose parts very carefully to ensure they’d all work and be fully supported in ESXi. Using the unbelievable www.gixen.com service to snipe on ebay, I’ve been picking up tasty morsels over a period of a few months…

– A Dell SC440 (a Xeon dual-core X3040 1.86GHz with 6GB ECC RAM)
£103.00
– A 1GB USB flash memory ‘pen drive’ (from the back of my desk drawer)
£0.00
– A Dell PERC 6iR SAS/SATA internal RAID controller
£16.00
– A no-brand Broadcom gigabit ethernet card, PCI-Express
£2.50
– A 3ware 9590se-4me SAS/SATA external RAID controller
£15.00
– An Intel 1000GT PCI gigabit controller (salvaged)
£0.00
– Two SATA 5.25″ removable disk caddies, Data Castle BT-32 (already had)
£——

Still on the shopping list…
A higher spec CPU;
Ideally a quad-core Xeon (X3210 or X3220), but something like a Q6600 or a Q6700 would work just fine. Another possibility could be a Xeon X3070, which is the fastest dual-core Xeon available for the machine.

A 4GB ECC RAM kit;
Dell states that only 4GB of RAM is supported (4 x 1GB modules), but the system will support up to 8GB as 4 x 2GB modules. My system currently has a total of 6GB installed as 2 x 1GB plus 2 x 2GB.
Compatible RAM for this machine must be 533 or 667MHz DDR2 unregistered ECC. An example of a compatible RAM kit would be the Kingston KVR667D2E5/2G or the IBM part 30R5150.

A SFF-8484 cable set;
For completeness I should get hold of a compatible cable set to convert from the SFF-8484 SAS connector (on the Dell controller) to 4 x SATA. I’m still researching which one to get, since I had problems with the last ones.

These components are both a) rare and b) sought after, so I’m going to take my time and see what comes up!

Current hardware setup;

– The various cards are all plugged into the various slots (as you’d expect).
– The hard drives are all connected to the PERC 6iR at the moment.

Internal drives for ESXi datastore – storage of virtual machines
ESXi doesn’t support storing virtual machine images on an ordinary SATA controller, only on RAID arrays or SANs, so I’ve set up a pair of internal disks as a mirrored array (RAID-1) using the PERC 6iR controller.

Removable drives for data storage
Here it gets complicated. Initially I had intended to use the 3ware controller to connect to an external box containing an array of disks. On reflection that seemed overkill, so I settled for just two removable drives installed in the 5.25″ bays of the SC440 itself.
Problem 1: the plastic bezel supplied with the SC440 is only suitable for the installation of CD/DVD drives and will not fit round the removable caddies. Solution: a) reach for the dremel, or b) just leave it off. I’ve gone for solution b) for now.
Problem 2: the Dell 6iR has two SFF-8484 connectors for wiring in drives, and it would seem sensible (and probably a better use of the available bandwidth) to put one set of drives on the first connector and the other set on the second – BUT I only have one cable set that works. Solution: I’ve got the two internal drives connected to channels 0 and 1, and the two drives in removable caddies on channels 2 and 3 of the same connector.
The two removable drives are not set up as a RAID array by the controller; they are presented to ESXi as two individual disks. I intend to manage them using zfs from within the solaris guest (more on that later, I guess).
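When I get there, the plan inside the opensolaris guest would look something like this (a minimal sketch – the device names and the pool name ‘tank’ are made up):

  format                                    # identify the two new disks, e.g. c8t1d0 and c8t2d0
  zpool create tank mirror c8t1d0 c8t2d0    # let zfs mirror the two removable drives
  zfs create tank/data                      # carve out a filesystem on the pool
  zpool status tank                         # confirm both halves of the mirror are online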

The 3ware controller is still installed and I’ll set up the drivers, so I could expand the storage to an external box in the future if I need to.

REFERENCES AND SOURCES

– Dell SC440 support pages
http://support.dell.com/
– Dell SC440 manuals
http://support.dell.com/support/edocs/systems/pe440sc/en/index.htm
– The esxi whitebox HCL
http://www.vm-help.com/esx40i/esx40_whitebox_HCL.php
– Silicom PEG2BPI information
http://www.silicom-usa.com/default.asp?contentID=656
– 3ware 9590se-4me information

– 3ware drivers for ESXI
http://www.3ware.com/kb/article.aspx?id=15548
