Archive for October, 2009

SGD on home server – set up to traverse a firewall.

I guess most employers will have a pretty robust firewall setup. Since SGD by default uses a load of non-standard ports, a corporate firewall will not allow you to run the client at work and access your SGD server at home. There are two options: 1) open holes in the firewall, or 2) modify the SGD installation to confine all the traffic to a standard port so that it will traverse a corporate firewall.

A nice post on the subject can be found here. That post gives the background to the problem, but its instructions are out of date for SGD version 4.5.

There is now a whole section in the manual that provides the information needed to set up an SGD server to perform firewall traversal.

I’ve reproduced the steps here (in case I need to do it again).

1. Create a self-signed certificate

shs$ pfexec /opt/tarantella/bin/tarantella security certrequest --country UK --state war --orgname "Nobody Puts Baby in a corner"
shs$ pfexec /opt/tarantella/bin/tarantella security selfsign

2. Enable security on SGD server

shs$ pfexec /opt/tarantella/bin/tarantella security start

3. Edit apache .conf file

shs$ pfexec vi /opt/tarantella/webserver/apache/2.2.10_openssl-0.9.8i_jk1.2.27/conf/httpd.conf

replace the section

<IfDefine SSL>
…

with

<IfDefine SSL>
…

4. Configure the SGD server to use 443 port
shs$ pfexec /opt/tarantella/bin/tarantella config edit --array-port-encrypted 443
shs$ pfexec /opt/tarantella/bin/tarantella config edit --array --security-firewallurl

5. Restart the SGD server

shs$ pfexec /opt/tarantella/bin/tarantella restart

I followed these instructions, and when I first accessed the server it asked me to confirm the use of the temporary certificate. Straight away I could access my (Unix) desktop, straight out of the box. More work seems to be needed to access a Windows desktop using rdesktop or uttsc (the Sun Ray Windows connector), but I guess it must be relatively straightforward (right?).

What the instructions in the manual about enabling firewall traversal don’t do is set up the server to be accessed by https rather than plain old http – I guess this is a security hole, but I decided to stop while I was ahead.


Sun Secure Global Desktop (SGD) – installation and configuration on OpenSolaris

The latest version available (as of Oct 2009) is 4.5 so I went over to SUN’s website and got a copy.

Visited the SGD support WIKI and followed the instructions.

Before starting the install you need to create two users.
Here is the info, pasted from the error you get if you try to carry out the install without first creating the users…

You must create two user accounts before you can install Secure Global Desktop.

– The user names must be “ttaserv” and “ttasys”.
– Both must have their primary group set to “ttaserv”.
– You can use any UIDs and GID you want.
– Both users must have a valid shell, for example /bin/sh.
– Both users must have writeable home directories.
– We recommend that you lock the user accounts (passwd -l).
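Those requirements translate into roughly the following – my guess at what the installer’s generated /tmp script does (the exact flags and home-directory paths are assumptions, so review before running it with pfexec on the real host):

```shell
# Guessed reconstruction of the SGD account setup – written to a file
# for review rather than executed directly, since the real commands
# need root on the SGD host.
cat > /tmp/sgd-users.sh <<'EOF'
#!/bin/sh
# primary group for both accounts must be ttaserv
groupadd ttaserv
# valid shell, writeable home directories (paths are my assumption)
useradd -g ttaserv -s /bin/sh -m -d /export/home/ttasys  ttasys
useradd -g ttaserv -s /bin/sh -m -d /export/home/ttaserv ttaserv
# recommended: lock both accounts
passwd -l ttasys
passwd -l ttaserv
EOF
chmod +x /tmp/sgd-users.sh
```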

The easiest way of doing this is to carry out the install without creating the users (knowing it will fail), then run the script /tmp/ that is created. Once the group and the users are created, the install is a simple pkgadd…

pfexec pkgadd -d /tempdir/tta-version.sol-x86.pkg

Once the install finishes you are told to start the server – DON’T DO THIS YET.

FIRST – here are a couple of things to check that are specific to SGD on OpenSolaris:
1. Is the library installed?
This blog here suggested that SUNWmfrun (the Motif software package) and SUNWxwrtl must be installed for SGD to work, and explains exactly why. I think that this advice is PARTIALLY outdated as of version 4.5 of SGD; there is now a specific erratum in the manual covering the missing packages (#6756705).
Either follow the instructions in the installation manual OR, if you have installed (or will install) the Sun Ray Server, go the route of installing SUNWxwrtl and SUNWmfrun, since these are both required for SRS on OpenSolaris. More details in the installing-SRS post…

2. Modify the procs.exp script
This blog provides a modification to the /opt/tarantella/var/serverresources/expect/procs.exp script to overcome a problem launching OpenSolaris apps in SGD. I’ve reproduced the relevant section of that post here.
To fix the problem, edit /opt/tarantella/var/serverresources/expect/procs.exp and change the following lines from:

416:    send -s "if \[ -f /bin/ksh \]; then HISTFILE=/dev/null; export HISTFILE; exec /bin/ksh; fi\n"
417:    wait_for_prompt

to:
416:    send -s "if \[ -f /bin/ksh \] && \[ -x /bin/uname \] && \[ \"`/bin/uname -sr`\" != \"SunOS 5.11\" \];
then HISTFILE=/dev/null; export HISTFILE; exec /bin/ksh; fi\n"
417:    wait_for_prompt

The conditions for using ksh are now: /bin/ksh exists, /bin/uname exists, and the application server is not OpenSolaris (i.e. does not report SunOS 5.11). Feel free to customise this logic to your own needs.
I should note that in the latest SGD 4.5 manual this is mentioned as bug #6831077 (they suggest an alternative fix – see the mailing-list thread here).
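For what it’s worth, stripped of the expect quoting, the patched logic boils down to this bit of shell. The helper name is mine, and the uname string is passed in as a parameter purely so the decision can be exercised on any box:

```shell
# Restatement of the patched procs.exp guard: switch to ksh only when
# the application server is NOT OpenSolaris (which reports "SunOS 5.11").
want_ksh() {
  # $1 is the output of `/bin/uname -sr`, injected to keep this testable
  [ "$1" != "SunOS 5.11" ]
}

# In procs.exp the full guard also checks that /bin/ksh and /bin/uname
# exist, and on success it runs `exec /bin/ksh` with HISTFILE=/dev/null
# exported so no shell history is written.
```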


Now start the server:

pfexec /opt/tarantella/bin/tarantella start

On first starting the server you have to run through a configuration menu. I just chose the default options.

The SGD server should now run and you should be able to remotely access your desktop(s).

Reference sources and links…
– Various posts about getting SGD to work on OpenSolaris

– Sun’s page on SGD – obtain the latest version from here

– Sun’s WIKI page for SGD documentation and support


Installing SRS EA2 on OpenSolaris 2009.06

I’ve gone the whole hog and installed the Early Access 2 release of the upcoming Sun Ray Server Software version 5.

Here goes – I’m following the guide here, which as the title suggests is for 2008.11 but should work ;-)

1. Set my IP to static (as suggested). In fact I have totally disabled NWAM, since it seems to be the source of a number of problems for other users, and hardcoded all the network information.

2. Checked DNS setup – in my case resolv.conf looks like this… the router is the DNS server.
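For reference, a minimal resolv.conf of that shape looks like the following – the domain and address here are examples only, not my real setup:

```
# example only – router doubles as the DNS server on this network
domain home.local
nameserver 192.168.1.1
```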

3. Addressed the ‘sock2path bug’


In /etc/sock2path, change the following lines:
   2   2   0   tcp
   2   2   6   tcp
   26  2   0   tcp
   26  2   6   tcp
   2   1   0   udp
   2   1   17  udp
   26  1   0   udp
   26  1   17  udp

change to...
   2   2   0   /dev/tcp
   2   2   6   /dev/tcp
   26  2   0   /dev/tcp6
   26  2   6   /dev/tcp6
   2   1   0   /dev/udp
   2   1   17  /dev/udp
   26  1   0   /dev/udp6
   26  1   17  /dev/udp6
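Hand-editing works, but the same change can be scripted – a hedged sketch with sed (the function name is mine; run it against a copy before touching the real /etc/sock2path):

```shell
# fix_sock2path reads a sock2path table on stdin and writes the fixed
# version on stdout.  Family 26 (IPv6) entries get the /dev/*6 devices,
# family 2 (IPv4) entries get /dev/tcp and /dev/udp.
fix_sock2path() {
  sed -e '/^[[:space:]]*26[[:space:]]/{ s|tcp$|/dev/tcp6|; s|udp$|/dev/udp6|; }' \
      -e '/^[[:space:]]*2[[:space:]]/{ s|tcp$|/dev/tcp|; s|udp$|/dev/udp|; }'
}

# Example run against a couple of the lines above:
printf '   26  2   0   tcp\n   2   1   17  udp\n' | fix_sock2path
```

Something like `fix_sock2path < /etc/sock2path > /tmp/sock2path.new` then a diff against the original makes it easy to check before copying the new file into place.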

4. Set up a working DHCP and tftp server.

5. Install SRSS software

Following the instructions step by step, with a couple of points to note…

Here are instructions for clearing a Sun Ray frozen in gecko or blank-screen mode…
ALSO – Sun’s web-based troubleshooter here might help.
ALSO – for dhcp

See also this blog post with some more recent information…


Changing my mind going back to OpenSolaris


Well, after playing with a proper hypervisor-based virtualised environment I’ve changed my mind (well, it’s my prerogative). My problem with ESXi is not that it isn’t totally awesome, it’s just that it doesn’t suit my small home setup.

– I can’t have raw access to disks (which makes disaster recovery more involved – i.e. I can’t whip out a disk and put it in another machine, for example).
– I need to maintain a totally separate Windows workstation or laptop to do the administration. My goal is to do away with looking after Windows machines at home – I frankly can’t be arsed. But ESXi has no local console; all the administration requires a Windows machine running the vSphere client.
– THEY FIXED VIRTUALBOX – My main reason for changing from a hosted virtualised environment (VirtualBox on top of OpenSolaris) was that VirtualBox sucked the sweat from a dead man’s balls when running on top of OpenSolaris – it was so buggy that the setup would not run for more than 12–24 hours without locking up the host completely. The new release of VirtualBox 3.0.8 seems to have fixed that.

So now the setup is going to be a kind of Sun VDI on the cheap – a stable OpenSolaris host taking care of file and network services, running a virtual Windows server in VirtualBox so that people (i.e. everybody else in the house except for me) can have access to a Windows desktop, with Sun Rays being the devices that people use to access their (virtual) desktops.


A conundrum – ESXi and solaris zfs

Despite the change to a full hypervisor I wanted to keep using zfs to manage the storage of data files. Partly because it seems like a great system – the reporting and management are excellent and simple – and partly because that’s what I started out with and I’m too bloody-minded to change.

One of my inspirations for this came from others who have moved to a similar setup. There they have set up zfs to access the disks on the ESX server using ‘raw disk mode’, which is obviously exactly what I want, isn’t it? This would be my preferred route. I know it goes against the virtual ethos, but I could whack the drives into any Solaris box and ‘zpool import’ them in the event of a hardware failure. It gives me the option to keep the drives without having to offload all the data and reformat, if I run into problems with ESXi in the future.

Well, NOT SO FAST… it’s a case of needing to do the research and read the small print. ESXi 4.0 doesn’t support virtual machines accessing raw local disk devices (‘raw disk mode’), though it seems that older versions (or maybe other VMware products) do or did. RDM or ‘Raw Device Mapping’ is a supported option in ESXi 4, but that refers to mapping onto raw disks over a SAN (NOT locally).

I have created an OpenSolaris 2009.06 virtual machine running on the hypervisor. The root pool or system disk of this is in fact a .vmdk file sitting on the internal mirrored pair of drives in the server. My intention was to add additional drives that would be managed directly under OpenSolaris, BUT this just doesn’t seem possible… ESXi 4.0 doesn’t allow raw device access or direct disk access.

Research is ongoing. I have a few choices, it seems.

1. Use the hardware RAID capabilities of the SAS/SATA RAID cards – then just use ZFS to manage quotas, snapshots and the other management stuff. BUT I’m nervous about recovering these should a controller fail (I’d be left in a situation where, in order to recover the data, I’d need to buy a very specific and very expensive RAID controller – or wait ages until the right thing came up on ebay). Also, RAID-Z in ZFS avoids write-hole errors.
2. Create virtual disks on the actual disks and use zfs to manage these as if they were actual disks. I guess I can see a disaster-recovery route for this option: the disk could be hooked out, connected to any SATA controller and then read from within ESXi (I think), but I need to check that. This would have to be slower, wouldn’t it?
3. Forget zfs completely. Use hardware RAID and create another virtual machine – a small-footprint FreeNAS or similar box – and let that take care of all the file-serving work. I’d still need to think about a possible route for recovering the data in the event of a hardware failure.
4. Find work around – there is always a work around!
5. Sod it – switch to hyper-v which does seem to support it!

ESXi access to local disks as raw devices – workarounds
1. Use vmkfstools: there do seem to be worked examples here and here…
2. Edit configuration files by hand, as discussed here…
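For the record, the vmkfstools workaround generally looks something like the following – the device and datastore paths are made-up placeholders, and the command is echoed rather than executed here since vmkfstools only exists in the ESXi console:

```shell
# vmkfstools -z creates a physical-compatibility RDM pointer file that
# a VM can then attach as an existing disk.  Both paths are hypothetical
# examples – substitute the real device and datastore names.
DISK=/vmfs/devices/disks/vml.0200000000abcdef            # the raw local disk
RDM=/vmfs/volumes/datastore1/solaris/data-disk1.vmdk     # RDM pointer to create
CMD="vmkfstools -z $DISK $RDM"
echo "$CMD"   # echoed, not executed – run it on the ESXi host itself
```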

Decisions, decisions…


mini-SAS to SATA connectors – are they all the same?

I bought a couple of “Mini 32 Pin SAS controller to for SATA Serial ATA cables” [sic] off ebay – they cost sod-all (£7 for the two, including postage from Hong Kong) but have been the source of some consternation.

They’ve caused me no end of trouble… I tested the PERC 6iR card that I bought on ebay when it arrived and decided it was a dud – after reassurance from the seller I tested it again and it did work. What the hell was going on? It turns out that although the two cable sets look identical, one works with my setup and the other one doesn’t.

This raised the question in my mind (in the smug style of a Sex and the City episode)… “mini-SAS to SATA connectors – are they all the same?”. Then again, I could have just been sold a dud (but since the cost of the item is less than it would cost me to send it back to Hong Kong, we’ll never know, will we? :-( ).


Dell SC440

As ever this entry is really intended as a point of reference for my own use rather than other people’s enlightenment.

I’ve been shopping. I’ve been experiencing all kinds of problems with my server setup and wanted to move to an ESXi hypervisor, so I decided to upgrade (perhaps I’m sidegrading, since the spec of the new server is very similar to the old). I’ve come to think that perhaps some of the problems I’ve been having are because I’m using desktop parts: a number of components are poorly supported, the drivers are perhaps a bit buggy, the hardware itself is of lower quality, etc. etc.

I’ve decided to go for all server components and to choose parts very carefully to ensure they’ll all work and are fully supported in ESXi. Using an unbelievable sniping service on ebay I’ve been picking up tasty morsels over a period of a few months…

– A Dell SC440 (a Xeon Dual Core X3040 1.86GHz with 6GB ECC RAM)
– A 1GB USB flash memory ‘pen drive’ (from the back of my desk drawer) – £0.00
– A Dell PERC 6iR SAS/SATA internal RAID controller – £16.00
– A no-brand Broadcom gigabit ethernet card, PCI-Express – £2.50
– A 3ware 9590SE-4ME SAS/SATA external RAID controller – £15.00
– An Intel 1000GT PCI gigabit controller (salvaged) – £0.00
– Two SATA 5.25″ removable disk caddies, Data Castle BT-32 (already had)

Still on the shopping list……
A higher spec CPU;
Ideally a quad-core Xeon (X3210 or X3220), but something like a Q6600 or a Q6700 would work just fine. Another possibility could be a Xeon X3070, which is the maximum available dual-core Xeon for the machine.

A 4GB ECC Ram kit;
Dell states that only 4GB of RAM is supported (4 × 1GB modules), but the system will support up to 8GB as 4 × 2GB modules. My system currently has a total of 6GB installed, as 2 × 1GB plus 2 × 2GB.
Compatible RAM for this machine must be 533 or 667MHz DDR2 unregistered ECC. An example of a compatible RAM kit would be the Kingston KVR667D2E5/2G or the IBM part 30R5150.

A SFF-8484 cable set;
For completeness I should get hold of a cable set to convert the SFF-8484 SAS connector (on the Dell controller) to 4 × SATA. I’m still researching which one to buy, since I had problems with the last ones.

These components are both a) rare and b) sought after, so I’m going to take my time and see what comes up!

Current hardware setup;

– The various cards are all plugged into the various slots (as you’d expect).
– The hard drives are all connected to the PERC 6ir at the moment.

Internal drives for ESXi datastore – storage of virtual machines
ESXi doesn’t support storing virtual machine images on an ordinary SATA controller, only on RAID arrays or SANs – so I’ve set up a pair of internal disks as a mirrored array (RAID-1) using the PERC 6iR controller.

Removable drives for data storage
Here it gets complicated. Initially I had intended to use the 3ware controller to connect to an external box containing an array of disks. On reflection that seemed overkill, so I settled for just two removable drives installed in the 5.25″ bays in the SC440 itself.
Problem 1: the plastic bezel supplied with the SC440 is only suitable for the installation of CD/DVD drives and will not fit around the removable caddies. Solution: a) reach for the Dremel, or b) just leave it off. I’ve gone for solution b) for now.
Problem 2: the Dell 6iR has two SFF-8484 connectors for wiring in drives, and it would seem sensible (and probably a better use of the available bandwidth) to put one set of drives on the first connector and the other set on the second – BUT I only have one cable set that works. Solution: I’ve got the two internal drives connected to channels 0 and 1, and the two drives in removable caddies on channels 2 and 3 of the same connector.
The two removable drives are not set up as a RAID array by the controller; they are presented to ESXi as two individual disks – I intend to manage them using zfs from within the Solaris guest (more on that later, I guess).
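Inside the guest, mirroring those two pass-through disks would then be a zfs one-liner – a sketch only, with made-up device and pool names, echoed rather than executed since it needs root on an OpenSolaris guest:

```shell
# zfs does the mirroring itself, so the controller can keep presenting
# the two drives as plain individual disks.  Device names are made up –
# `format` on the guest shows the real ones.
CMD="zpool create datapool mirror c8t1d0 c8t2d0"
echo "$CMD"   # run with pfexec inside the OpenSolaris guest
```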

The 3ware controller is still installed and I’ll set up its drivers; I could expand the storage to an external box in the future if I need to.

Reference sources and links…
– Dell SC440 support pages
– Dell SC440 manuals
– The esxi whitebox HCL
– Silicom PEG2BPI information
– 3ware 9590se-4me information

– 3ware drivers for ESXI
