WebEx in Ubuntu LXC containers

If, like me, you’ve Googled around looking for a solution to get Cisco WebEx working in Ubuntu and nothing really explained it properly, or you ended up with a messed up system, then I am here to help!

Most of the stuff I’ve seen requires a 32-bit installation of Firefox, which doesn’t help me much since I use a 64-bit OS, so I decided to put it all in a container (which is good practice anyway for anything that installs binaries).

Here, I'm installing my container as root as it removes a load of hassle later. You can create containers as a regular user, but that needs more configuration, which overcomplicates things. I'll leave it as an exercise to the reader to figure that out.

Create a 32-bit container, I’m calling mine “webex”:

sudo lxc-create -n webex -t download

It'll prompt you for details; answer 'ubuntu', 'trusty' and 'i386'.
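If you'd rather skip the prompts, the download template also accepts the answers as arguments. This is a sketch; check lxc-create --help on your release for the exact syntax:

sudo lxc-create -n webex -t download -- -d ubuntu -r trusty -a i386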

Edit the config at /var/lib/lxc/webex/config and add these lines:

lxc.cgroup.devices.allow = c 116:* rwm
lxc.mount.entry = /dev/snd dev/snd none rw,bind,create=dir 0 0

These allow the container to access the host’s sound device.
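The 116 is the major device number that ALSA sound devices use. If you'd rather double-check it on your host than trust my number, look at the devices themselves:

ls -l /dev/snd/
# the major number is the one before the comma, e.g.
# crw-rw---- 1 root audio 116, 2 Jan  1 12:00 pcmC0D0p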

Now start up the container and access its console:

sudo lxc-start -n webex
sudo lxc-attach -n webex

The first thing I do is install openssh-server:

sudo apt-get install openssh-server

and then install firefox and a java plugin. Some blogs say you need Oracle Java, but I find that OpenJDK works fine.

sudo apt-get install firefox icedtea-7-plugin openjdk-7-jre

At this point, go ahead and set a password for the ubuntu user:

passwd ubuntu

Log out of the root console and now you can SSH into the ubuntu account like this:

ssh -Y ubuntu@webex

(I’ve left out the bit where ‘webex’ resolves to a real machine, just add it to your ssh config)
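For example, an entry like this in ~/.ssh/config does the job; it's a sketch, so substitute your container's real IP (sudo lxc-ls --fancy on the host will show it):

cat >> ~/.ssh/config <<EOF
Host webex
    # 10.0.3.123 is a placeholder; use your container's IP
    HostName 10.0.3.123
    User ubuntu
EOF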

The -Y tells ssh to forward X server connections back to the host.

Now we can test the sound to make sure that the config worked; try something like this:

aplay /usr/share/sounds/alsa/Front_Center.wav

If you hear the test sound, then it's all good. If you don't hear it and get an error, you'll have to do some Googling. In my case, the command ran without any error but there was still no sound. I fixed this by adding a custom .asoundrc in the ubuntu user's home directory:

pcm.!default {
    type plug
    slave.pcm {
        type hw
        card 1
        device 0
    }
}

defaults.ctl.card 1

It's quite likely you'll have to edit this for your sound hardware, but then again it may just work. I'm not an ALSA expert; if there's still no sound, do some more Googling. You just need to find the right device, and you can test candidates more quickly with a line like this:

aplay -D plughw:1,0 /usr/share/sounds/alsa/Front_Center.wav

Vary the 1,0 device numbers. Hopefully you'll get it working eventually.
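If you're hunting for the right numbers, aplay can list the playback devices it knows about, and a quick shell loop will try a few candidates; the card numbers here are just examples:

aplay -l   # lists playback devices as "card N ... device M"
for card in 0 1 2; do
    aplay -D plughw:$card,0 /usr/share/sounds/alsa/Front_Center.wav
done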

Now start up firefox and visit the test WebEx site.


Start up a test meeting – and then close down firefox straight away. You did this step to get a .webex directory created, but it needs fixing. In the .webex directory you’ll see some files like this:

ubuntu@webex:~/.webex$ ls -F
1524/ remembercheckbox.bak tmpfile/

The numbered directory may be different for you, but you will have one nonetheless. Change into it and you'll see some files, some of which are .so files. The problem is that these depend on other libraries which are not present in Ubuntu's recent releases (they used to be installed with the ia32-libs package, which no longer exists). However, we can work out what's needed and just install the packages manually.

First, we need to install a helper to find the files:

sudo apt-get install apt-file
sudo apt-file update

Now find the files that are missing:

ldd *.so | grep "not found" | sort -u

Now review what's missing; you will see output like this (it may not be exactly the same):

libasound.so.2 => not found
libjawt.so => not found
libXmu.so.6 => not found
libXtst.so.6 => not found
libXv.so.1 => not found

Now for each missing file, we use apt-file to find out which package will install it:

apt-file search libXmu.so.6

And then install with:

sudo apt-get install -y libxmu6
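If several libraries are missing, a small shell loop saves some typing. This is just a sketch combining the commands above; review the suggested packages before installing anything:

# run from inside the numbered ~/.webex directory
for lib in $(ldd *.so | awk '/not found/ {print $1}' | sort -u); do
    echo "== $lib =="
    apt-file search "$lib"
done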

After you finish this for each file, you should be all set. Start up firefox again and visit the test WebEx meeting. With any luck, the audio buttons will now be active and you can start your WebEx meeting!

Note, I am still missing a file that provides libjawt.so, but things still work for me. Go figure …

A rant on printer DRM

EDIT: I found this which works like a charm: http://www.fixyourownprinter.com/forums/laser/72904

This post is unashamedly a total rant about printer DRM. If you don’t enjoy a good rant, you’d better stop reading now.

I have the relatively cheap Samsung ML-2240 laser printer. It recently started running out of toner so I ordered a new cartridge.

RANT ONE: I can’t just buy the damn toner to refill it, you need a whole new drum cartridge, wasting perfectly good hardware. What the fuck?

I plugged in the cartridge and turned on the printer. Its light frustratingly stayed red, which means something is wrong. I plugged the old cartridge back in to check the printer wasn’t broken, and the light went green (albeit with a low toner light).


I contacted the people who sent me the cartridge and complained. After a few back and forth emails, it turns out that my printer has got regional DRM and because I bought it in the UK it won’t accept cartridges from here in Australia.

RANT TWO: My printer has got fucking regional restrictions on where it can be used. What the fuck?

RANT THREE: I did some reading and it turns out that the chip also has a page counter in it and will lock out the cartridge when it gets to 1500 pages! What the fuck?

I ended up mail ordering a hacked cartridge chip from a UK retailer to replace the one in the Australian cartridge, so that it can be reused in the UK printer. I was shocked by what I read in the instructions:

RANT FOUR: If the chip thinks the toner cartridge has totally run out of toner, it permanently bricks the cartridge. What the fuck?


I’m done with Samsung. Here’s a message to the Samsung printer people:


SAML Federation with Openstack

This is a bit of a followup to my last post on Kerberos-based federation, so this post will make a lot more sense if you read that one. Kerberos didn't really suit my needs because there's no real web sign-on to speak of, so getting hold of a Kerberos ticket in a friendly way on non-Windows platforms is problematic. The answer to this is to use SAML, which has some good support in Keystone, and more to come.


I’m not going to go into too much detail of how SAML works here, and assume you know a little, or are prepared to infer things as you go from this post. There’s more detailed information in the Shibboleth wiki but importantly you must know the concept of an identity provider (which holds authentication data) and a service provider (which protects a resource).

In this example, I'm going to use Shibboleth as a service provider, and the testshib.org service as an identity provider.

As before, I am doing all this on Ubuntu so if you’re on a different OS you’ll have to tweak things.


Shibboleth is quite solid, but its logs and error messages are extremely cryptic and not particularly helpful. There are quite a few gotchas, and it simply doesn't tell you exactly what went wrong. The main one is that the entityID configs in Shibboleth and in Keystone MUST match up, and Apache must have its ServerName configured to the matching domain name.

Apache config

You will need the shibboleth module for Apache so go ahead and install it:

sudo apt-get install libapache2-mod-shib2

That will enable the module, so you don't need to do that explicitly. You'll also have a shibd daemon running after installation.

Inside your Virtualhost block in /etc/apache2/sites-enabled/keystone, you’ll need to add some Shibboleth config:

<VirtualHost *:5000>

  WSGIScriptAliasMatch ^(/v3/OS-FEDERATION/identity_providers/.*?/protocols/.*?/auth)$ /var/www/keystone/main/$1
  <Location ~ "/v3/auth/OS-FEDERATION/websso/saml2">
    ShibRequestSetting requireSession 1
    AuthType shibboleth
    # ShibRequireAll On  # Enable this if you're using 12.04
    ShibRequireSession On
    ShibExportAssertion Off
    Require valid-user
  </Location>
</VirtualHost>

<VirtualHost *:80>
  <Location /Shibboleth.sso>
    SetHandler shib
  </Location>
</VirtualHost>

You also need to make sure that your Apache knows what its server name is. If it complains that it doesn't when you restart it, add an explicit ServerName directive that matches the exact domain name that you are going to give to testshib shortly.
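On Ubuntu, one way to do that (a sketch, assuming keystone.example.com is the name you'll register) is a small conf snippet:

echo "ServerName keystone.example.com" | sudo tee /etc/apache2/conf-available/servername.conf
sudo a2enconf servername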

Now restart Apache.

sudo service apache2 restart

Testshib config

Visit http://testshib.org/ and follow the instructions carefully. It will eventually generate some Shibboleth configuration for your service provider, which you need to save as /etc/shibboleth/shibboleth2.xml.

If you take a look in the config, you'll see three important things.

<ApplicationDefaults entityID="<your service provider ID>" REMOTE_USER="eppn">

You need to remove REMOTE_USER entirely, as it causes Keystone to do the wrong thing.

Inside the ApplicationDefaults you’ll see:

<SSO entityID="https://idp.testshib.org/idp/shibboleth">

This is the part that tells Shibboleth what the ID of the identity provider is. Further down the file you’ll see something like:

<MetadataProvider type="XML" uri="http://www.testshib.org/metadata/testshib-providers.xml"
 backingFilePath="testshib-two-idp-metadata.xml" reloadInterval="180000" />

It tells Shibboleth where to get the IdP’s metadata, which describes how to interact with it (mainly URLs and signing keys).

These three parts are the main parts of the config that describe the remote IdP. If you change the IdP for a different one, it’s unlikely you’ll need to edit anything else.

Keystone config

As in the Kerberos post, you need to enable some things in keystone.conf. Since I wrote that post, I've seen that federation is enabled by default in Kilo, so there's much less to do now. Basically:

  • Enable the saml2 auth plugin in the auth section:

saml2 = keystone.auth.plugins.mapped.Mapped

  • Set the remote_id_attribute, which tells Keystone which IdP was used:

remote_id_attribute = Shib-Identity-Provider

  • Copy the callback template to the right place:

cp /opt/stack/keystone/etc/sso_callback_template.html /etc/keystone/

  • Create the federation database tables if you haven't already:

keystone-manage db_sync --extension federation

Keystone mapping data configuration

As before, we have to use the v3 API for federation. If you have already sourced the credentials file, you just need to point the auth URL at the v3 endpoint:

export OS_AUTH_URL=http://$HOSTNAME:5000/v3
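It's worth a quick sanity check that the v3 endpoint answers before carrying on:

openstack token issue   # should print a token if the v3 auth URL is right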

You may remember from the Kerberos post that we need a mapping file. The mapping used for Kerberos can be re-used for this SAML authentication; here it is:

    "local": [
        "user": {
          "name": "{0}",
          "domain": {"name": "Default"}
        "group": {
          "id": "GROUP_ID"
    "remote": [
        "type": "REMOTE_USER"

Save this as a file called add-mapping.json. Although the mapping can be re-used from before, here are the full commands for completeness:

openstack group create samlusers
openstack role add --project demo --group samlusers member
openstack identity provider create testshib
group_id=`openstack group list|grep samlusers|awk '{print $2}'`
cat add-mapping.json|sed s^GROUP_ID^$group_id^ > /tmp/mapping.json
openstack mapping create --rules /tmp/mapping.json saml_mapping
openstack federation protocol create --identity-provider testshib --mapping saml_mapping saml2
openstack identity provider set --remote-id <your entity ID> testshib

Replace <your entity ID> with the value of the SSO entityID mentioned above from the shibboleth2.xml config. Shibboleth sets Shib-Identity-Provider in the Apache request variables with the value of the entityID used, and we configured keystone to use this in keystone.conf above. This is the "remote id" for the identity provider, and keystone uses it to apply the correct protocol and mapping.
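You can sanity-check that the remote ID got registered against the right identity provider:

openstack identity provider show testshib
# the remote_ids field should contain your IdP's entityID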

Horizon config

As before, a few Django config tweaks are needed. Edit /opt/stack/horizon/openstack_dashboard/local/local_settings.py and make sure these settings are present:

WEBSSO_ENABLED = True

WEBSSO_CHOICES = (
    ("credentials", _("Keystone Credentials")),
    ("testshib", _("Testshib SAML")),
)

OPENSTACK_API_VERSIONS = {
    "identity": 3
}

OPENSTACK_KEYSTONE_URL = "http://$HOSTNAME:5000/v3"

Replace $HOSTNAME with your actual keystone hostname.

Now, restart apache2 and shibd:

service apache2 restart
service shibd restart

You should now be all set. After making sure "Testshib SAML" is selected on the login screen, click Connect and you will be redirected to the testshib login page. It has its own fixed users, and tells you what they are when you visit that page.

Good luck!

Federated Openstack logins using Kerberos


I recently had cause to try to get federated logins working on Openstack, using Kerberos as an identity provider. I couldn't find anything on the Internet that described this in a simple way, understandable by a relative newbie to Openstack, so this post is an attempt to do that, because it took me a long time to find and digest all the info scattered around. Unfortunately the actual Openstack docs are a little incoherent at the moment.


  • I’ve tried to get this working on older versions of Openstack but the reality is that unless you’re using Kilo or above it is going to be an uphill task, as the various parts (changes in Keystone and Horizon) don’t really come together until that release.
  • I’m only covering the case of getting this working in devstack.
  • I’m assuming you know a little about Kerberos, but not too much :)
  • I’m assuming you already have a fairly vanilla installation of Kilo devstack in a separate VM or container.
  • I use Ubuntu server. Some things will almost certainly need tweaking for other OSes.


The federated logins in Openstack work by using Apache modules to provide a remote user ID, rather than credentials in Keystone. This allows for a lot of flexibility but also provides a lot of pain points as there is a huge amount of configuration. The changes described below show how to configure Apache, Horizon and Keystone to do all of this.

Important! Follow these instructions very carefully. Kerberos is extremely fussy, and the configuration in Openstack is rather convoluted.


If you don’t already have a Kerberos server, you can install one by following https://help.ubuntu.com/community/Kerberos

The Kerberos server needs a service principal for Apache so that Apache can connect. You need to generate a keytab for Apache, and to do that you need to know the hostname for the container/VM where you are running devstack and Apache. Assuming it’s simply called ‘devstackhost’:

$ kadmin -p <your admin principal>
kadmin: addprinc -randkey HTTP/devstackhost
kadmin: ktadd -k keytab.devstackhost HTTP/devstackhost

This will write a file called keytab.devstackhost; copy it to your devstack host under /etc/apache2/auth/.
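If you want to check what ended up inside the keytab first, klist can list its principals:

klist -k keytab.devstackhost
# should show one or more HTTP/devstackhost@YOUR-REALM entries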

You can test that this works with:

$ kinit -k -t /etc/apache2/auth/keytab.devstackhost HTTP/devstackhost

You may need to install the krb5-user package to get kinit. If there is no problem then the command prompt just reappears with no error. If it fails then check that you got the keytab filename right and that the principal name is correct. You can also try using kinit with a known user to see if the underlying Kerberos install is right (the realm and the key server must have been configured correctly, installing any kerberos package usually prompts to set these up).
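When kinit succeeds you can also positively confirm the ticket, and then throw it away again:

klist      # shows the ticket cache, including the HTTP/devstackhost principal
kdestroy   # discard the test ticket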

Finally, the keytab file must be owned by www-data and read/write only by that user:

$ sudo chown www-data /etc/apache2/auth/keytab.devstackhost
$ sudo chmod 0600 /etc/apache2/auth/keytab.devstackhost

Apache Configuration

Install the Apache Kerberos module:

$ sudo apt-get install libapache2-mod-auth-kerb

Edit the /etc/apache2/sites-enabled/keystone.conf file. You need to make sure the mod_auth_kerb module is loaded, and then add the extra Kerberos config.

LoadModule auth_kerb_module modules/mod_auth_kerb.so

<VirtualHost *:5000>

 # KERB_ID is an arbitrary label, but it must match the remote ID
 # set on the identity provider in Openstack below.
 SetEnv KERB_ID KERB_ID
 <Location ~ "kerberos" >
 AuthType Kerberos
 AuthName "Kerberos Login"
 KrbMethodNegotiate on
 KrbServiceName HTTP
 KrbSaveCredentials on
 KrbLocalUserMapping on
 KrbAuthRealms MY-REALM.COM
 Krb5Keytab /etc/apache2/auth/keytab.devstackhost
 # KrbMethodK5Passwd is optional; 'off' makes GSSAPI SPNEGO a requirement
 KrbMethodK5Passwd on
 Require valid-user
 </Location>
</VirtualHost>


  • Don’t forget to edit the KrbAuthRealms setting to your own realm.
  • Don’t forget to edit Krb5Keytab to match your keytab filename
  • Most browsers don't support SPNEGO out of the box, so KrbMethodK5Passwd is enabled here, which will make the browser pop up one of its own dialogs prompting for credentials (more on that later). If it is off, the browser must support SPNEGO, which fetches the Kerberos credentials from your user environment, assuming the user is already authenticated.
  • If you are using Apache 2.2 (used on Ubuntu 12.04) then KrbServiceName must be configured as HTTP/devstackhost (change devstackhost to match your own host name). This config is so that Apache uses the service principal name that we set up in the Kerberos server above.

Keystone configuration

Federation must be explicitly enabled in the keystone config.
http://docs.openstack.org/developer/keystone/extensions/federation.html explains this, but to summarise:

Edit /etc/keystone/keystone.conf and add the driver:

driver = keystone.contrib.federation.backends.sql.Federation
trusted_dashboard = http://devstackhost/auth/websso
sso_callback_template = /etc/keystone/sso_callback_template.html

(Change “devstackhost” again)

Copy the callback template to the right place:

$ cp /opt/stack/keystone/etc/sso_callback_template.html /etc/keystone/

Enable kerberos in the auth section of /etc/keystone/keystone.conf :

methods = external,password,token,saml2,kerberos
kerberos = keystone.auth.plugins.mapped.Mapped

Set the remote_id_attribute, which tells Openstack which IdP was used:

remote_id_attribute = KERB_ID

Add the middleware to keystone-paste.conf. ‘federation_extension’ should be the second last entry in the pipeline:api_v3 entry:

pipeline = sizelimit url_normalize build_auth_context token_auth admin_token_auth json_body ec2_extension_v3 s3_extension simple_cert_extension revoke_extension federation_extension service_v3

Now we have to create the database tables for federation:

$ keystone-manage db_sync --extension federation

Openstack Configuration

Federation must use the v3 API in Keystone. Get the Openstack RC file from the API access tab of Access & Security and then source it to get the shell API credentials set up. Then:

$ export OS_AUTH_URL=http://$HOSTNAME:5000/v3
$ export OS_USERNAME=admin

Test this by trying something like:

$ openstack project list

Now we have to set up the mapping between remote and local users. I’m going to add a new local group and map all remote users to that group. The mapping is defined with a blob of json and it’s currently very badly documented (although if you delve into the keystone unit tests you’ll see a bunch of examples). Start by making a file called add-mapping.json:

        "local": [
                "user": {
                    "name": "{0}",
                    "domain": {"name": "Default"}
                "group": {
                    "id": "GROUP_ID"
        "remote": [
                "type": "REMOTE_USER"

Now we need to add this mapping using the openstack shell.

openstack group create krbusers
openstack role add --project demo --group krbusers member
openstack identity provider create kerb
group_id=`openstack group list|grep krbusers|awk '{print $2}'`
cat add-mapping.json|sed s^GROUP_ID^$group_id^ > /tmp/mapping.json
openstack mapping create --rules /tmp/mapping.json kerberos_mapping
openstack federation protocol create --identity-provider kerb --mapping kerberos_mapping kerberos
openstack identity provider set --remote-id KERB_ID kerb

(I’ve left out the command prompt so you can copy and paste this directly)

What did we just do there?

In my investigations, the part above took me the longest to figure out due to the current poor state of the docs. But basically:

  • Create a group krbusers to which all federated users will map
  • Make sure the group is in the demo project
  • Create a new identity provider which is linked to the group we just created (the API frustratingly needs the ID, not the name, hence the shell machinations)
  • Create the new mapping, then link it to a new “protocol” called kerberos which connects the mapping to the identity provider.
  • Finally, make sure the remote ID coming from Apache is linked to the identity provider. This makes sure that any requests from Apache are routed to the correct mapping. (Remember above in the Apache configuration that we set KERB_ID in the request environment? This is an arbitrary label but they need to match.)

After all this, we have a new group in Keystone called krbusers that will contain any user provided by Kerberos.
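If you want to see what you've just created reflected back at you, the openstack client can show each piece:

openstack identity provider show kerb
openstack mapping show kerberos_mapping
openstack federation protocol list --identity-provider kerb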

Ok, we’re nearly there! Onwards to …

Horizon Configuration

Web SSO must be enabled in Horizon. Edit the config at /opt/stack/horizon/openstack_dashboard/local/local_settings.py and make sure the following settings are set at the bottom:

WEBSSO_ENABLED = True

WEBSSO_CHOICES = (
    ("credentials", _("Keystone Credentials")),
    ("kerberos", _("Kerberos")),
)

OPENSTACK_API_VERSIONS = {
    "identity": 3
}

OPENSTACK_KEYSTONE_URL = "http://$HOSTNAME:5000/v3"


Make sure $HOSTNAME is actually the host name for your devstack instance.

Now, restart apache

$ sudo service apache2 restart

and you should be able to test that the federation part of Keystone is working by visiting this URL


You’ll get a load of json back if it worked OK.
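The endpoint follows Keystone's standard federation URL pattern, so with the names used in this post it should look something like the below; that URL is my assumption based on the idp/protocol names, so adjust it to your setup. Since KrbMethodK5Passwd is on, curl can authenticate with a plain Kerberos username and password:

curl -s -u <kerberos user> http://devstackhost:5000/v3/OS-FEDERATION/identity_providers/kerb/protocols/kerberos/auth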

You can now test the websso part of Horizon by going here:


You should get a browser dialog which asks for Kerberos credentials, and if you get through this OK you’ll see the sso_callback_template returned to the browser.

Trying it out!

If you don’t have any users in your Kerberos realm, it’s easy to add one:

$ kadmin -p <your admin principal>
kadmin: addprinc -randkey <NEW USER NAME>
kadmin: cpw -pw <NEW PASSWORD> <NEW USER NAME>

Now visit your Openstack dashboard and you should see something like this:


Click “Connect” and log in and you should be all set.

New MAAS features in 1.7.0

MAAS 1.7.0 is close to its release date, which is set to coincide with Ubuntu 14.10’s release.

The development team has been hard at work and knocked out some amazing new features and improvements. Let me take you through some of them!

UI-based boot image imports

Previously, MAAS used to require admins to configure (well, hand-hack) a yaml file on each cluster controller that specified precisely which OSes, releases and architectures to import. This has all been replaced with a very smooth new UI that lets you simply click and go.

New image import configuration page


The different images available are driven by a “simplestreams” data feed maintained by Canonical. What you see here is a representation of what’s available and supported.

Any previously-imported images also show on this page, and you can see how much space they are taking up, and how many nodes got deployed using each image. All the imported images are automatically synced across the cluster controllers.


Once a new selection is clicked, “Apply changes” kicks off the import. You can see that the progress is tracked right here.

(There’s a little more work left for us to do to track the percentage downloaded.)

Robustness and event logs

MAAS now monitors nodes as they are deploying and lets you know exactly what’s going on by showing you an event log that contains all the important events during the deployment cycle.


You can see here that this node has been allocated to a user and started up.

Previously, MAAS would have said “okay, over to you, I don’t care any more” at this point, which was pretty useless when things went wrong (and it’s not just hardware that goes wrong; preseeds often fail).

So now, the node’s status shows “Deploying” and you can see the new event log at the bottom of the node page that shows these actions starting to take place.

After a while, more events arrive and are logged:


And eventually it’s completely deployed and ready to use:


You’ll notice how quick this process is nowadays.  Awesome!

More network support

MAAS has nascent support for tracking networks/subnets and attached devices. Changes in this release add a couple of neat things: cluster interfaces automatically have their networks registered in the Networks tab (“master-eth0” in the image), and any node network interfaces known to be attached to any of these networks are automatically linked (see the “attached nodes” column). This means even less setup work for admins, and makes it easier for users to rely on networking constraints when allocating nodes over the API.


Power monitoring

MAAS is now tracking whether the power is applied or not to your nodes, right in the node listing.  Black means off, green means on, and red means there was an error trying to find out.


Bugs squashed!

With well over 100 bugs squashed, this will be a well-received release.  I’ll post again when it’s out.

Enabling KVM via VNC access on the Intel NUC and other hurdles

While setting up my new NUCs to use with MAAS as a development deployment tool, I got very, very frustrated with the initial experience so I thought I’d write up some key things here so that others may benefit — especially if you are using MAAS.

First hurdle — when you hit Ctrl-P at the boot screen, it may well not work. This is because you need to disable num lock first.

Second hurdle — when you go and enable the AMT features it asks for a new password, but doesn’t tell you that it needs to contain upper case, lower case, numbers AND punctuation.

Third hurdle — if you want to use it headless like me, it’s a good idea to enable the VNC server.  You can do that with this script:

IP=<fill me in>
AMT_PASSWORD=<fill me in>
VNC_PASSWORD=<fill me in>
wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h ${IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k RFBPassword=${VNC_PASSWORD} &&\
wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h ${IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k Is5900PortEnabled=true &&\
wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h ${IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k OptInPolicy=false &&\
wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h ${IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k SessionTimeout=0 &&\
wsman invoke -a RequestStateChange http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_KVMRedirectionSAP -h ${IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k RequestedState=2

(wsman comes from the wsmancli package)

But there is yet another gotcha!  The VNC_PASSWORD must be no more than 8 characters, while still meeting the same complexity requirements as the AMT password.
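Once the script has run, you can check it all worked by pointing any VNC client at the machine; AMT's VNC server listens on the standard port 5900:

vncviewer ${IP}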

Once this is all done you should be all set to use this very fast machine with MAAS.

Relapsed again

It seemed too good to be true after my last post, and it was. Within days of finishing the last course of Bactrim, my symptoms were back, worse than ever. So bad, in fact, that I had a trip to hospital courtesy of an ambulance, which had to be called because I was in so much pain. Oh sweet, sweet morphine, you are a cruel mistress.

The Bactrim was only holding the Bartonella at bay, it seems. My LLMD has now put me on a month’s worth of Ciprofloxacin, after verifying that a sore tendon was not too damaged.  Why do that?  Well, Cipro screws up tendons and ligaments if you take it too long so I had to verify that things were OK to start with.  I also have to take it easy and not exert myself too much in case I damage weakened tendons.

The one piece of good news is that a recent endoscopy showed no fungal infection from all the antibiotics I’ve been taking.  Unfortunately an echo test on my heart still shows a lot of fluid in the pericardial sac and I still have a huge amount of pain there which keeps me awake at night.

Because of all this, I am sad to be missing a work function in Austin this week, but it would have been foolish to travel given the tendon risk (moving my luggage would be a problem), my high levels of fatigue, and the fact that the pericardial fluid can become life-threatening at any time.