A rant on printer DRM

This post is unashamedly a total rant about printer DRM. If you don’t enjoy a good rant, you’d better stop reading now.

I have the relatively cheap Samsung ML-2240 laser printer. It recently started running out of toner so I ordered a new cartridge.

RANT ONE: I can’t just buy the damn toner to refill it; you have to buy a whole new drum cartridge, wasting perfectly good hardware. What the fuck?

I plugged in the cartridge and turned on the printer. Its light frustratingly stayed red, which means something is wrong. I plugged the old cartridge back in to check the printer wasn’t broken, and the light went green (albeit with a low toner light).

Hmmm.

I contacted the people who sent me the cartridge and complained. After a few back and forth emails, it turns out that my printer has got regional DRM and because I bought it in the UK it won’t accept cartridges from here in Australia.

RANT TWO: My printer has got fucking regional restrictions on where it can be used. What the fuck?

RANT THREE: I did some reading and it turns out that the chip also has a page counter in it and will lock out the cartridge when it gets to 1500 pages! What the fuck?

I ended up mail ordering a hacked cartridge chip from a UK retailer to replace the one in the Australian cartridge, so that it can be reused in the UK printer. I was shocked by what I read in the instructions:

RANT FOUR: If the chip thinks the toner cartridge has totally run out of toner, it permanently bricks the cartridge. What the fuck?

DRM – DEFECTIVE BY DESIGN.

I’m done with Samsung. Here’s a message to the Samsung printer people:

FUCK YOU NVID^WSAMSUNG

SAML Federation with Openstack

This is a bit of a followup to my last post on Kerberos-based federation, so this post will make a lot more sense if you read that one first. Kerberos didn’t really suit my needs because there’s no real web sign-on to speak of, so getting hold of a Kerberos ticket in a friendly way on non-Windows platforms is problematic. The answer is to use SAML, which has some good support in Keystone, with more to come.

Prerequisites

I’m not going to go into too much detail of how SAML works here, and assume you know a little, or are prepared to infer things as you go from this post. There’s more detailed information in the Shibboleth wiki but importantly you must know the concept of an identity provider (which holds authentication data) and a service provider (which protects a resource).

In this example, I’m going to use Shibboleth as a service provider, and the testshib.org service as an identity provider.

As before, I am doing all this on Ubuntu so if you’re on a different OS you’ll have to tweak things.

Important!

Shibboleth is quite solid, but its logs and error messages are extremely cryptic and not particularly helpful. There are quite a few gotchas, and it simply doesn’t tell you exactly what went wrong. The main one is that all the entityID configs in Shibboleth and in Keystone MUST match up, and Apache must have its ServerName configured with the matching domain name.

Apache config

You will need the shibboleth module for Apache so go ahead and install it:

sudo apt-get install libapache2-mod-shib2

That will enable the module, so you don’t need to explicitly do that yourself. You’ll also have a shibd daemon running after installation.

Inside your Virtualhost block in /etc/apache2/sites-enabled/keystone, you’ll need to add some Shibboleth config:

<VirtualHost *:5000>
...

  WSGIScriptAliasMatch ^(/v3/OS-FEDERATION/identity_providers/.*?/protocols/.*?/auth)$ /var/www/keystone/main/$1
  <Location ~ "/v3/auth/OS-FEDERATION/websso/saml2">
    ShibRequestSetting requireSession 1
    AuthType shibboleth
    # ShibRequireAll On  # Enable this if you're on Ubuntu 12.04
    ShibRequireSession On
    ShibExportAssertion Off
    Require valid-user
  </Location>
</VirtualHost>

<VirtualHost *:80>
  <Location /Shibboleth.sso>
    SetHandler shib
  </Location>
</VirtualHost>

You also need to make sure that your Apache knows what its server name is. If it complains that it doesn’t when you restart it, add an explicit ServerName directive that matches the exact domain name that you are going to give to testshib shortly.

Now restart Apache.

sudo service apache2 restart

Testshib config

Visit http://testshib.org/ and follow the instructions carefully. It will eventually generate some Shibboleth configuration for your service provider, which you need to save as /etc/shibboleth/shibboleth2.xml.

If you take a look in the config, you’ll see three important things.

<ApplicationDefaults entityID="<your service provider ID>" REMOTE_USER="eppn">

You need to remove REMOTE_USER entirely as this causes Keystone to do the wrong thing.
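After that edit, the opening tag should look something like this (with your own service provider ID, of course):

<ApplicationDefaults entityID="<your service provider ID>">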

Inside the ApplicationDefaults you’ll see:

<SSO entityID="https://idp.testshib.org/idp/shibboleth">
    SAML2 SAML1
</SSO>

This is the part that tells Shibboleth what the ID of the identity provider is. Further down the file you’ll see something like:

<MetadataProvider type="XML" uri="http://www.testshib.org/metadata/testshib-providers.xml"
 backingFilePath="testshib-two-idp-metadata.xml" reloadInterval="180000" />

This tells Shibboleth where to get the IdP’s metadata, which describes how to interact with it (mainly URLs and signing keys).

These three parts are the main parts of the config that describe the remote IdP. If you change the IdP for a different one, it’s unlikely you’ll need to edit anything else.

Keystone config

As in the Kerberos post, you need to enable some things in keystone.conf. Since I wrote that post, I’ve seen that federation is enabled by default in Kilo, so there’s much less to do now. Basically:

[auth]
saml2 = keystone.auth.plugins.mapped.Mapped

[saml2]
remote_id_attribute = Shib-Identity-Provider

  • Copy the callback template to the right place:

cp /opt/stack/keystone/etc/sso_callback_template.html /etc/keystone/

  • Create the federation database tables if you haven’t already:

keystone-manage db_sync --extension federation

Keystone mapping data configuration

As before, we have to use the v3 API for federation. If you have sourced the credentials file already, you just need to set two more environment variables:

export OS_AUTH_URL=http://$HOSTNAME:5000/v3
export OS_IDENTITY_API_VERSION=3

You may remember from the Kerberos post that we need a mapping file. The mapping used for Kerberos can be re-used for this SAML authentication; here it is:

[
  {
    "local": [
      {
        "user": {
          "name": "{0}",
          "domain": {"name": "Default"}
        }
      },
      {
        "group": {
          "id": "GROUP_ID"
        }
      }
    ],
    "remote": [
      {
        "type": "REMOTE_USER"
      }
    ]
  }
]

Save this as a file called add-mapping.json. Although the group and mapping can be re-used from before, I’ll re-add them here for completeness:

openstack group create samlusers
openstack role add --project demo --group samlusers member
openstack identity provider create testshib
group_id=`openstack group list|grep samlusers|awk '{print $2}'`
cat add-mapping.json|sed s^GROUP_ID^$group_id^ > /tmp/mapping.json
openstack mapping create --rules /tmp/mapping.json saml_mapping
openstack federation protocol create --identity-provider testshib --mapping saml_mapping saml2
openstack identity provider set --remote-id <your entity ID> testshib

Replace <your entity ID> with the value of the SSO entityID mentioned above in the shibboleth2.xml config. Shibboleth sets Shib-Identity-Provider in the Apache request variables to the entityID that was used, and we configured Keystone to read this in keystone.conf above. This is the “remote id” for the identity provider, and Keystone uses it to apply the correct protocol and mapping.
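To make the mapping rules a bit less opaque, here is a rough sketch of their semantics: each “remote” entry names a request variable to pull in, and “{0}”, “{1}”, … placeholders in the “local” section get substituted with those values. This is just my illustration of how the rules behave, not Keystone’s actual implementation.

```python
import json

# The mapping from add-mapping.json (GROUP_ID still unsubstituted here).
mapping = json.loads("""
[
  {
    "local": [
      {"user": {"name": "{0}", "domain": {"name": "Default"}}},
      {"group": {"id": "GROUP_ID"}}
    ],
    "remote": [
      {"type": "REMOTE_USER"}
    ]
  }
]
""")

def apply_mapping(rules, request_env):
    """Apply the first matching rule to the Apache request environment."""
    for rule in rules:
        try:
            # Each "remote" entry names a request variable, pulled in order.
            values = [request_env[r["type"]] for r in rule["remote"]]
        except KeyError:
            continue  # a required attribute is missing; try the next rule

        def subst(obj):
            # Replace "{0}", "{1}", ... placeholders with the remote values.
            if isinstance(obj, dict):
                return {k: subst(v) for k, v in obj.items()}
            if isinstance(obj, list):
                return [subst(x) for x in obj]
            if isinstance(obj, str):
                return obj.format(*values)
            return obj

        return subst(rule["local"])
    return None

result = apply_mapping(mapping, {"REMOTE_USER": "alice"})
print(result)
```

So a remote user “alice” ends up as local user “alice” in the Default domain, placed in the group whose ID you substituted for GROUP_ID.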

Horizon config

As before, a few Django config tweaks are needed. Edit the /opt/stack/horizon/openstack_dashboard/local/local_settings.py

WEBSSO_ENABLED = True
WEBSSO_CHOICES = (
 ("credentials", _("Keystone Credentials")),
 ("testshib", _("Testshib SAML")),
 )
WEBSSO_INITIAL_CHOICE="testshib"
OPENSTACK_API_VERSIONS = {
 "identity": 3
}
OPENSTACK_KEYSTONE_URL="http://$HOSTNAME:5000/v3"

Replace $HOSTNAME with your actual keystone hostname.

Now, restart apache2 and shibd:

service apache2 restart
service shibd restart

You should now be all set. After making sure “Testshib SAML” is selected on the login screen, click Connect and you will be redirected to the testshib login page. It has its own fixed users, and tells you what they are when you visit the page.

Good luck!

Federated Openstack logins using Kerberos

Why?

I recently had cause to try to get federated logins working on Openstack, using Kerberos as an identity provider. I couldn’t find anything on the Internet that described this in a simple way, understandable by a relative newbie to Openstack, so this post attempts to do just that; it has taken me a long time to find and digest all the info scattered around. Unfortunately, the actual Openstack docs are a little incoherent at the moment.

Assumptions

  • I’ve tried to get this working on older versions of Openstack but the reality is that unless you’re using Kilo or above it is going to be an uphill task, as the various parts (changes in Keystone and Horizon) don’t really come together until that release.
  • I’m only covering the case of getting this working in devstack.
  • I’m assuming you know a little about Kerberos, but not too much :)
  • I’m assuming you already have a fairly vanilla installation of Kilo devstack in a separate VM or container.
  • I use Ubuntu server. Some things will almost certainly need tweaking for other OSes.

Overview

The federated logins in Openstack work by using Apache modules to provide a remote user ID, rather than credentials in Keystone. This allows for a lot of flexibility but also provides a lot of pain points as there is a huge amount of configuration. The changes described below show how to configure Apache, Horizon and Keystone to do all of this.
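The idea can be shown with a minimal WSGI sketch (not Keystone itself, just an illustration): by the time the request reaches the application, the Apache auth module has already authenticated the user and injected REMOTE_USER into the request environment.

```python
# Minimal illustration: the WSGI app never sees credentials, only the
# REMOTE_USER value that the Apache auth module has already verified.
def app(environ, start_response):
    user = environ.get("REMOTE_USER", "anonymous")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [("Hello, %s" % user).encode()]

# Simulate what mod_auth_kerb would provide after a successful login:
result = app({"REMOTE_USER": "alice@MY-REALM.COM"}, lambda status, headers: None)
print(result[0].decode())
```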

Important! Follow these instructions very carefully. Kerberos is extremely fussy, and the configuration in Openstack is rather convoluted.

Pre-requisites

If you don’t already have a Kerberos server, you can install one by following https://help.ubuntu.com/community/Kerberos

The Kerberos server needs a service principal for Apache so that Apache can connect. You need to generate a keytab for Apache, and to do that you need to know the hostname of the container/VM where you are running devstack and Apache. Assuming it's simply called 'devstackhost':

$ kadmin -p <your admin principal>
kadmin: addprinc -randkey HTTP/devstackhost
kadmin: ktadd -k keytab.devstackhost HTTP/devstackhost

This will write a file called keytab.devstackhost; copy it to your devstack host under /etc/apache2/auth/

You can test that this works with:

$ kinit -k -t /etc/apache2/auth/keytab.devstackhost HTTP/devstackhost

You may need to install the krb5-user package to get kinit. If there is no problem, the command prompt just reappears with no error. If it fails, check that you got the keytab filename right and that the principal name is correct. You can also try using kinit with a known user to see if the underlying Kerberos install is right (the realm and the key server must have been configured correctly; installing any Kerberos package usually prompts you to set these up).

Finally, the keytab file must be owned by www-data and read/write only by that user:

$ sudo chown www-data /etc/apache2/auth/keytab.devstackhost
$ sudo chmod 0600 /etc/apache2/auth/keytab.devstackhost

Apache Configuration

Install the Apache Kerberos module:

$ sudo apt-get install libapache2-mod-auth-kerb

Edit the /etc/apache2/sites-enabled/keystone.conf file. You need to make sure the mod_auth_kerb module is loaded, and add some extra Kerberos config.

LoadModule auth_kerb_module modules/mod_auth_kerb.so

<VirtualHost *:5000>

  ...

  # KERB_ID must match the remote ID of the IdP set in Openstack.
  SetEnv KERB_ID KERB_ID

  <Location ~ "kerberos">
    AuthType Kerberos
    AuthName "Kerberos Login"
    KrbMethodNegotiate on
    # Optional: if this is 'off', GSSAPI SPNEGO becomes a requirement
    KrbMethodK5Passwd on
    KrbServiceName HTTP
    KrbSaveCredentials on
    KrbLocalUserMapping on
    KrbAuthRealms MY-REALM.COM
    Krb5Keytab /etc/apache2/auth/keytab.devstackhost
    Require valid-user
  </Location>
</VirtualHost>

Note:

  • Don’t forget to edit the KrbAuthRealms setting to your own realm.
  • Don’t forget to edit Krb5Keytab to match your keytab filename
  • Pretty much no browser supports SPNEGO out of the box, so KrbMethodK5Passwd is enabled here, which makes the browser pop up one of its own dialogs prompting for credentials (more on that later). If it is off, the browser must support SPNEGO, which fetches the Kerberos credentials from your user environment, assuming the user is already authenticated.
  • If you are using Apache 2.2 (as on Ubuntu 12.04), then KrbServiceName must be configured as HTTP/devstackhost (change devstackhost to match your own host name). This makes Apache use the service principal name that we set up in the Kerberos server above.

Keystone configuration

Federation must be explicitly enabled in the keystone config.
http://docs.openstack.org/developer/keystone/extensions/federation.html explains this, but to summarise:

Edit /etc/keystone/keystone.conf and add the driver:

[federation]
driver = keystone.contrib.federation.backends.sql.Federation
trusted_dashboard = http://devstackhost/auth/websso
sso_callback_template = /etc/keystone/sso_callback_template.html

(Change “devstackhost” again)

Copy the callback template to the right place:

$ cp /opt/stack/keystone/etc/sso_callback_template.html /etc/keystone/

Enable kerberos in the auth section of /etc/keystone/keystone.conf :

[auth]
methods = external,password,token,saml2,kerberos
kerberos = keystone.auth.plugins.mapped.Mapped

Set the remote_id_attribute, which tells Openstack which IdP was used:

[kerberos]
remote_id_attribute = KERB_ID

Add the middleware to keystone-paste.conf. ‘federation_extension’ should be the second last entry in the pipeline:api_v3 entry:

[pipeline:api_v3]
pipeline = sizelimit url_normalize build_auth_context token_auth admin_token_auth json_body ec2_extension_v3 s3_extension simple_cert_extension revoke_extension federation_extension service_v3

Now we have to create the database tables for federation:

$ keystone-manage db_sync --extension federation

Openstack Configuration

Federation must use the v3 API in Keystone. Get the Openstack RC file from the API Access tab of Access & Security, then source it to set up the shell API credentials. Then:

$ export OS_AUTH_URL=http://$HOSTNAME:5000/v3
$ export OS_IDENTITY_API_VERSION=3
$ export OS_USERNAME=admin

Test this by trying something like:

$ openstack project list

Now we have to set up the mapping between remote and local users. I’m going to add a new local group and map all remote users to that group. The mapping is defined with a blob of json and it’s currently very badly documented (although if you delve into the keystone unit tests you’ll see a bunch of examples). Start by making a file called add-mapping.json:

[
    {
        "local": [
            {
                "user": {
                    "name": "{0}",
                    "domain": {"name": "Default"}
                }
            },
            {
                "group": {
                    "id": "GROUP_ID"
                    }
            }
        ],
        "remote": [
            {
                "type": "REMOTE_USER"
            }
        ]
    }
]

Now we need to add this mapping using the openstack shell.

openstack group create krbusers
openstack role add --project demo --group krbusers member
openstack identity provider create kerb
group_id=`openstack group list|grep krbusers|awk '{print $2}'`
cat add-mapping.json|sed s^GROUP_ID^$group_id^ > /tmp/mapping.json
openstack mapping create --rules /tmp/mapping.json kerberos_mapping
openstack federation protocol create --identity-provider kerb --mapping kerberos_mapping kerberos
openstack identity provider set --remote-id KERB_ID kerb

(I’ve left out the command prompts so you can copy and paste this directly.)

What did we just do there?

In my investigations, the part above took me the longest to figure out due to the current poor state of the docs. But basically:

  • Create a group krbusers to which all federated users will map
  • Make sure the group is in the demo project
  • Create a new identity provider, and grab the ID of the group we just created (the API frustratingly needs the ID, not the name, hence the shell machinations)
  • Create the new mapping, then link it to a new “protocol” called kerberos which connects the mapping to the identity provider.
  • Finally, make sure the remote ID coming from Apache is linked to the identity provider. This makes sure that any requests from Apache are routed to the correct mapping. (Remember above in the Apache configuration that we set KERB_ID in the request environment? This is an arbitrary label but they need to match.)

After all this, we have a new group in Keystone called krbusers that will contain any user provided by Kerberos.
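The indirection between the Apache KERB_ID label, the identity provider and the mapping can be sketched like this. These are hypothetical data structures purely to show the shape of the lookup; none of this is real Keystone code.

```python
# The attribute name Keystone reads, from [kerberos] in keystone.conf.
remote_id_attribute = "KERB_ID"

# Apache's "SetEnv KERB_ID KERB_ID" means the request variable KERB_ID
# arrives with the literal value "KERB_ID"; that value was registered as
# the remote ID of the "kerb" identity provider.
identity_providers = {"KERB_ID": "kerb"}          # remote ID -> IdP name
protocol_mappings = {("kerb", "kerberos"): "kerberos_mapping"}

def find_mapping(request_env, protocol):
    remote_id = request_env[remote_id_attribute]  # value Apache injected
    idp = identity_providers[remote_id]           # route to the IdP
    return protocol_mappings[(idp, protocol)]     # pick its mapping

mapping_name = find_mapping({"KERB_ID": "KERB_ID"}, "kerberos")
print(mapping_name)
```

If the labels don’t line up at every step, the lookup fails and authentication goes nowhere, which is why the matching matters so much.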

Ok, we’re nearly there! Onwards to …

Horizon Configuration

Web SSO must be enabled in Horizon. Edit the config at /opt/stack/horizon/openstack_dashboard/local/local_settings.py and make sure the following settings are set at the bottom:

WEBSSO_ENABLED = True

WEBSSO_CHOICES = (
("credentials", _("Keystone Credentials")),
("kerberos", _("Kerberos")),
)

WEBSSO_INITIAL_CHOICE="kerberos"

COMPRESS_OFFLINE=True

OPENSTACK_KEYSTONE_DEFAULT_ROLE="Member"

OPENSTACK_HOST="$HOSTNAME"

OPENSTACK_API_VERSIONS = {
"identity": 3
}

OPENSTACK_KEYSTONE_URL="http://$HOSTNAME:5000/v3"

Make sure $HOSTNAME is actually the host name for your devstack instance.

Now, restart apache

$ sudo service apache2 restart

and you should be able to test that the federation part of Keystone is working by visiting this URL

http://$HOSTNAME:5000/v3/OS-FEDERATION/identity_providers/kerb/protocols/kerberos/auth

You’ll get a load of json back if it worked OK.

You can now test the websso part of Horizon by going here:

http://$HOSTNAME:5000/v3/auth/OS-FEDERATION/websso/kerberos?origin=http://$HOSTNAME/auth/websso/

You should get a browser dialog which asks for Kerberos credentials, and if you get through this OK you’ll see the sso_callback_template returned to the browser.

Trying it out!

If you don’t have any users in your Kerberos realm, it’s easy to add one:

$ kadmin -p <your admin principal>
kadmin: addprinc -randkey <NEW USER NAME>
kadmin: cpw -pw <NEW PASSWORD> <NEW USER NAME>

Now visit your Openstack dashboard and you should see something like this:

[Image: kerblogin]

Click “Connect” and log in and you should be all set.

New MAAS features in 1.7.0

MAAS 1.7.0 is close to its release date, which is set to coincide with Ubuntu 14.10’s release.

The development team has been hard at work and knocked out some amazing new features and improvements. Let me take you through some of them!

UI-based boot image imports

Previously, MAAS required admins to configure (well, hand-hack) a yaml file on each cluster controller that specified precisely which OSes, releases and architectures to import. This has all been replaced with a very smooth new UI that lets you simply click and go.

[Image: the new image import configuration page]

The different images available are driven by a “simplestreams” data feed maintained by Canonical. What you see here is a representation of what’s available and supported.

Any previously-imported images also show on this page, and you can see how much space they are taking up, and how many nodes got deployed using each image. All the imported images are automatically synced across the cluster controllers.

[Image: image-import]

Once a new selection is clicked, “Apply changes” kicks off the import. You can see that the progress is tracked right here.

(There’s a little more work left for us to do to track the percentage downloaded.)

Robustness and event logs

MAAS now monitors nodes as they are deploying and lets you know exactly what’s going on by showing you an event log that contains all the important events during the deployment cycle.

[Image: node-start-log]

You can see here that this node has been allocated to a user and started up.

Previously, MAAS would have said “okay, over to you, I don’t care any more” at this point, which was pretty useless when things started going wrong (and it’s not just hardware that goes wrong; preseeds often fail).

So now, the node’s status shows “Deploying” and you can see the new event log at the bottom of the node page that shows these actions starting to take place.

After a while, more events arrive and are logged:

[Image: node-start-log2]

And eventually it’s completely deployed and ready to use:

[Image: node-start-log3]

You’ll notice how quick this process is nowadays.  Awesome!

More network support

MAAS has nascent support for tracking networks/subnets and attached devices. Changes in this release add a couple of neat things: cluster interfaces automatically have their networks registered in the Networks tab (“master-eth0” in the image), and any node network interfaces known to be attached to any of these networks are automatically linked (see the “attached nodes” column). This means even less setup work for admins, and makes it easier for users to rely on networking constraints when allocating nodes over the API.

[Image: networks]

Power monitoring

MAAS now tracks whether power is applied to your nodes, right in the node listing. Black means off, green means on, and red means there was an error when trying to find out.

[Image: powermon]

Bugs squashed!

With well over 100 bugs squashed, this will be a well-received release.  I’ll post again when it’s out.

Enabling KVM via VNC access on the Intel NUC and other hurdles

While setting up my new NUCs to use with MAAS as a development deployment tool, I got very, very frustrated with the initial experience so I thought I’d write up some key things here so that others may benefit — especially if you are using MAAS.

First hurdle — when you hit Ctrl-P at the boot screen, it is likely not to work. This is because you need to disable num lock first.

Second hurdle — when you go and enable the AMT features it asks for a new password, but doesn’t tell you that it needs to contain upper case, lower case, numbers AND punctuation.

Third hurdle — if you want to use it headless like me, it’s a good idea to enable the VNC server.  You can do that with this script:

AMT_PASSWORD=<fill me in>
VNC_PASSWORD=<fill me in>
IP=N.N.N.N
wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h ${IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k RFBPassword=${VNC_PASSWORD} &&\
wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h ${IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k Is5900PortEnabled=true &&\
wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h ${IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k OptInPolicy=false &&\
wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h ${IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k SessionTimeout=0 &&\
wsman invoke -a RequestStateChange http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_KVMRedirectionSAP -h ${IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k RequestedState=2

(wsman comes from the wsmancli package)

But there is yet another gotcha!  The VNC_PASSWORD must be no more than 8 characters and still meet the same requirements as the AMT password.
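These password rules bit me enough times that a quick checker helps. This encodes the constraints as described above (upper case, lower case, a number and punctuation all required, and the VNC password additionally capped at 8 characters); it’s my reading of the rules, not an official Intel specification.

```python
import string

def valid_amt_password(pw, vnc=False):
    """Check a password against the AMT rules described in this post.

    With vnc=True, also enforce the 8-character VNC limit.
    """
    if vnc and len(pw) > 8:
        return False
    return (any(c.isupper() for c in pw)
            and any(c.islower() for c in pw)
            and any(c.isdigit() for c in pw)
            and any(c in string.punctuation for c in pw))

print(valid_amt_password("Passw0rd!"))            # fine as an AMT password
print(valid_amt_password("Passw0rd!", vnc=True))  # 9 chars: too long for VNC
print(valid_amt_password("Pass0rd!", vnc=True))   # 8 chars: fits
```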

Once this is all done you should be all set to use this very fast machine with MAAS.

Relapsed again

It seemed too good to be true after my last post, and it was. Within days of finishing the last course of Bactrim, I had relapsed and my symptoms were back, worse than ever. So bad, in fact, that I had a trip to hospital courtesy of an ambulance, which had to be called because I was in so much pain. Oh sweet, sweet morphine, you are a cruel mistress.

The Bactrim was only holding the Bartonella at bay, it seems. My LLMD has now put me on a month’s worth of Ciprofloxacin, after verifying that a sore tendon was not too damaged. Why do that? Well, Cipro screws up tendons and ligaments if you take it for too long, so I had to verify that things were OK to start with. I also have to take it easy and not exert myself too much in case I damage weakened tendons.

The one piece of good news is that a recent endoscopy showed no fungal infection from all the antibiotics I’ve been taking.  Unfortunately an echo test on my heart still shows a lot of fluid in the pericardial sac and I still have a huge amount of pain there which keeps me awake at night.

Because of all this, I am sad to be missing a work function in Austin this week, but it would have been foolish to travel with the tendon risk (moving my luggage would be a problem), my high levels of fatigue, and not to mention the pericardial fluid can become life-threatening at any time.

My Road Through Hell

 

[Image: Hell]

I’ve now been on treatment for Lyme disease for a little over twelve months.  Without a doubt, this has been the worst twelve months of my entire life.  It’s almost impossible to convey the range of pain that I have endured, the mental anguish, and the struggle to find the will to live.

Six months ago I was about at rock bottom. I was going through herxes from hell, suffering from heart complications including cardiac pauses (my heart would stop for several seconds at a time), and headaches that felt like someone was driving a pickaxe into my skull. Then there was the brain fog; the confusion and memory loss that left me feeling stupid and helpless in front of people who just didn’t understand how I could not remember simple things I had talked about with them only a few hours ago.

On top of that, I had extreme fatigue that left me unable to climb the stairs at home without stopping every few steps to get my strength back in my legs.  Many of my days have been spent as a quivering mess on the floor, unable to speak, move or do anything because I was in so much pain and close to passing out.

In short, I was pretty fucked and thought I was about to die at any time.

Then I discovered an antibiotic that was actually making a difference to my heart symptoms—it’s called Bactrim (or Trimethoprim/sulfamethoxazole to give it its full name).  I started taking it in late December and two weeks later I was heart symptom free!  The course of drugs then ran out (I had 4 weeks’ worth) and ten days later I had relapsed and was getting chest pains and palpitations again.  I started another month’s course and felt better again after a couple of weeks, so it was clear that this drug was doing something to help me with my Bartonella infection.

It struck me that I have been ill for so long that I hadn’t really noticed I was slowly getting better lately. At least I hope I am getting better: I’m now at a “wait and see” stage, having stopped the Bactrim for a second time, and hoping to hell that I don’t have another relapse. I’m probably about 50% better than I was a year ago; now I have to wait for the last remnants of the Lyme and Bartonella bacteria to be driven out of my system.