Friday, 17 October 2014

Yosemite Upgrade - Apple does it again

It bowls me over - every time. Ever since my first OS upgrade, from Tiger to Leopard, I've been amazed at how painless Apple makes upgrading an OS.

I know a little about IT, and upgrading an operating system is a big deal. There are many new features, bug fixes and updates to consider, and Apple always packs a punch when it comes to new features. Then you have the fact that the file system may have to be upgraded, along with the kernel.

It's not for nothing that a Windows update usually involves several reboots.

However, not only is Yosemite a single reboot (after a 15 minute install, which is none too shabby for a 5GB download!), it remembered EVERYTHING.

I mean - what?

It had my settings, my background, my dock icons, my login picture - it even remembered which apps were open when I kicked off the install, and where they were located on the screen. If it hadn't been for the fundamental change to the styling and the dock I might have thought it had done nothing.

And... it was a free update.

Ok - this stuff costs you a lot when you buy it, but the total cost of ownership and overall satisfaction is superb!

Tuesday, 14 October 2014

UX versus EX (Employee eXperience)

In the world where I have spent most of my career - Web, and later Digital - we spend a lot of time on what used to be called the Graphical User Interface (GUI), which later became just UI (when it was really more HTML than graphics), and has now become User eXperience (UX). This reflects the fact that the interface shapes the experience - the feeling, if you will - that users have when using the system.

Retailers in particular fight for the right balance of a cool, easy-to-use, on-brand experience that lets people shop more easily - or, in the case of my current employer (unnamed), where I spend much of my time, that actually inspires them to take on projects, and therefore spend more money with us.

This is all great - the rub I have is that we, along with many other companies, really miss a trick with the Employee Experience. Our store colleagues are generally more engaged with the brand than the guys in the office. This is not unusual, and it's the same just about everywhere I've worked. The store colleagues are at the coal face, and are the real face of the brand in front of customers, so, like the website, we spend quite a bit of time on that. But what of the internal systems?

Let me give you the example that has irked me this morning - I have to log on 3 times in all to book a holiday. I needed to log on twice just to check my email. Why? Because we have provisioned systems based on their functional capability rather than the experience of using them. We don't do this for customers, because we want loyalty and buy-in, but we ignore it when provisioning internal systems... why, again? We're the same humans, we just happen to be on the payroll, and you can't "buy" loyalty. You can only buy time.

First I logged onto the machine I'm sitting at - which is a perfectly reasonable hot-desk equipped with Windows 7 and all the usual Office and related tools.

But then I wanted to book a holiday - so I connect to the intranet. Our intranet has been moved to Office 365 along with our email - a perfectly reasonable thing to do - but we haven't linked up authentication very well, so I'm required to log in again. I navigate to the HR area and find the link to book a holiday, which takes me to one of a seemingly never-ending list of third parties with SaaS solutions we use to provide services. Unfortunately none of these services use any form of single sign-on, so to book holidays I have to log on AGAIN. Worse, this third one doesn't use my network username, it uses my employee ID, which is on my pass, but that's in my pocket... you get the point.

The second rub is that the solutions we put in place are functionally rich but interactively very poor. The general web design is a lot, LOT less than we would demand from our own website - or indeed than most companies would demand - but because the Employee eXperience is not a factor in choosing a solution, no one cares. I'm sure the price is good, and I'm sure that the bulleted list of features is comprehensive, but still - they're terrible to use. I don't mind naming and shaming... Capita Travel's website is pretty terrible, and has forms which a) don't fit in the width of a reasonable browser window (say 1280 across) and b) don't provide scroll bars, so you have to resort to drag-selecting text to force the frame to scroll. Or we could take a look at Northgate, who provide much of our HR-related systems. Again, poor testing in browsers means it's all too easy to have content which doesn't fit in the frame. Apparently for some of these systems (not necessarily the two I've mentioned) Google Chrome is not supported.

I'm going to say that again in case you didn't realise how fundamentally stupid that is. 

Google Chrome is not supported!

Stick that in your Non-Functional Requirements for a customer facing website and see where it gets you!

If employees are given good quality systems, which make them more efficient and less tetchy, then I really feel that engagement in the office would be much improved. It really doesn't cost much, but it could yield massive benefits in employee satisfaction.

And don't even get me started on our timesheet system...


Securing a node.js app

If you're a relative newbie to Node.js you may find this interesting - it details some quick and easy things you can do to help safeguard the security of your Node.js app.

http://blog.risingstack.com/node-js-security-tips/

Thanks to Gergely Nemeth of RisingStack for this

Monday, 29 September 2014

Net Neutrality should be preserved

If we allow erosion of the basic principles of peer agreements at the behest of companies who should be serving us, we will ultimately be worse off for it.

Applying artificial limits to services of a certain type, or to those provided by certain companies, does not improve the service for the end consumer. In effect it's like rolling blackouts - they're an emergency measure to keep hospitals running when there's insufficient power on the grid - and they should *not* be used to control the market in the flow of information that we are paying for.

There are already checks and balances in place to stop abusers of the internet (constant maxed-out downloading of illegal videos, for instance), and these abusers are a tiny fraction of Internet users, so there is no case for imposing QoS-type filtering on the rest of us, who just want to use Netflix or the other streaming services already being targeted in the US.

I just signed a letter to Ed Vaizey, the Minister for Culture, Communications and Creative Industries, here:

https://you.38degrees.org.uk/petitions/net-neutrality-protection

If you feel as I do maybe you could do the same.

Thursday, 18 September 2014

Forcing Bootcamp installer to build a USB bootable drive

I discovered that because my Mac has a built-in CD drive, Boot Camp Assistant does NOT give you the option to create a bootable USB drive. This is a pain - who wants to burn one-time DVDs for installation when we have USB drives kicking around?

After banging my head against the wall trying to dd a converted ISO I thought maybe it's possible to trick the bootcamp installer...

...and it is...

Here's a link which explains what you have to do. Either do it all using sudo and a text editor, or tweak permissions and use the editor that comes as part of Xcode. If you're on Mavericks you'll also need to run codesign, but all the instructions you need are in here.

https://discussions.apple.com/thread/5479879
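
For reference, the Mavericks re-sign step boils down to a single command once you've edited the plist. This is a sketch from memory, with the standard app path assumed - do follow the linked thread for the exact plist keys to edit first:

#! /bin/bash
# after editing Boot Camp Assistant's Info.plist per the linked thread,
# re-sign the app with an ad-hoc signature (-s -) so OS X will still run it
sudo codesign -fs - "/Applications/Utilities/Boot Camp Assistant.app"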

Enjoy!

EDIT: It turns out that my Mac can't boot from USB anyway... you need the DVD after all that! There may be more hacking you can do, but for now you can only use the USB key you made on a Mac without a SuperDrive.

Wednesday, 17 September 2014

Upgrading the primary HDD in a MacBook Pro

After 3 years of excellent service I've come to the realisation that the 256GB drive in my MacBook Pro is just not big enough any more, so after a little to-ing and fro-ing on crucial.com I acquired a new 1TB drive to act as a straight swap.

But here's the rub - how do I get all my data transferred over with the least amount of pain possible?

This got me thinking: OSX is basically UNIX, right? I have an iMac with a bunch of ports on it, and the laptop will boot into what Apple call Target Disk Mode, where the laptop essentially becomes an external hard drive for the iMac.

I thought I'd record how I did this in case I need to do it again, and indeed, in case someone else wants to know how to do this.

I would strongly recommend backing up your main drive first, but since you can always put the old drive back if anything goes badly wrong, I wouldn't stress about it too much.

Step 1: Connectivity

I have my laptop connected to my iMac using a Thunderbolt cable. I have my new hard drive connected to a USB3-to-eSATA converter, plugged into the USB3 port on my iMac.

There are multiple ways of connecting things, but the laptop must be connected by FireWire or Thunderbolt to the iMac (or other laptop, I suppose). The only adapter I could find for eSATA was USB3, so that's what I've used.

When you connect the new hard drive to the USB3 port you'll get a dialog inviting you to Initialize the disk, and this will bring up Disk Utility.


You can see from this that I have selected the 1.02 TB ASMT 2105 Media, which is connected via USB. You can also see in the list a 251 GB APPLE SSD, which is the hard drive in my laptop. Bingo! We're connected!

Step 2: Preparing the target disk

We're going to use the Restore functionality of Disk Utility to copy the data over in a minute, but before I can do that I need to prepare the partitions on the new disk. I actually have a Bootcamp partition, so I'm going to do this twice. The old disk has a 60GB BOOTCAMP partition for Windows (just so I can play Elite Dangerous when it comes out without waiting for the OSX version) and the rest is Mac OS. I'm going to partition the new drive as 900GB for OSX and 100GB for Bootcamp.

Using Disk Utility select the new drive on the left, and set up your partitions. I have 2, one for Mac and one for Bootcamp, so my setup looks like this when I'm done:


Hit Apply and give it a few seconds to go, and you're done. On the left you should have two partitions named whatever you named them underneath the heading for the new drive.

Step 3: Copying the data

This is super easy, this bit! Select the TARGET partition on the left and select the Restore tab. This should populate the Destination box with the target partition. Now drag the SOURCE partition from the left into the Source box. You should now have something in each of those boxes. Finally, hit Restore and confirm. You'll need to put your password in for this one. In my case the estimated time was about 45 minutes - that's SSD to SSD, with the disks mounted on Thunderbolt and USB3, so I'm thinking that's about as good as it gets for a 200GB partition.

The neat thing about doing it this way is that it uses Apple's ASR tool under the hood, which does block copies, maintains volume information, and skips blank space. It also performs a verification step once the copy is complete. As it happens I got an odd "invalid argument" error during verification - not because there was a problem per se; it may simply be that the target size was not an exact multiple of the block size the tool was using for the copy. It would be nice if this were ironed out in future releases, but the restore itself completed without error.

I should say that there are other ways of doing this, using tools like dd, which can make a byte-for-byte copy of the drive. That works just fine, but it requires some knowledge of command line tools and special device file names, and while I am completely happy using such tools, you can't beat a good GUI. :-)
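
Incidentally, if you do fancy the command line, the same block copy can be driven through asr directly. A minimal sketch, with volume names assumed - check man asr before trying anything like this:

#! /bin/bash
# block-restore the old volume onto the new partition, erasing the target first
sudo asr restore --source "/Volumes/Macintosh HD" \
  --target "/Volumes/New 1TB" --erase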

One final point worth mentioning is that I lost my Recovery volume, which is installed by default on a Mac disk. This is not the end of the world, as a restore image can be downloaded for free from Apple and booted from a USB pen or other device, but you may want to reinstall it if this bothers you. I haven't tried that, so can't give any advice on that particular aspect.

Step 4: Testing the new drive

Select and "Unmount" the source and target drives in Disk Utility, power off the laptop, and disconnect the laptop and the new disk. Now refer to a guide such as this: https://www.ifixit.com/Guide/MacBook+Pro+17-Inch+Unibody+Hard+Drive+Replacement/3401 for replacing the drive. ifixit.com have guides for many different types of mac, so I'm sure you can find the right instructions!

While I was in there I upgraded the RAM to 16GB - there are guides for that on ifixit.com too, but it's pretty obvious how to do it.

Then the moment of truth - powering on the Mac!

I pressed the power button, the CD whirred a bit - and then nothing...

So, I connected the power. This time it booted up, though it did take a while - be patient. You've just had the battery unplugged, so that will have reset the power management and so on. When the machine came up it had reset the clock, but that didn't last long once it connected to the internet.

I am finishing this post on the laptop, so that tells me all is well. One point I noticed is that I tried to verify the drive in Disk Utility and it had errors, so I shall investigate that further, but as far as I can tell the machine is just like the one I had about an hour ago, except now with a new, faster drive.

Good luck!


Monday, 8 September 2014

Rock music is NOT dead, Mr. Simmons

Gene Simmons (remember the guy from KISS?) has said that rock music is dead due to file sharing... I'm (just) old enough to remember reading that sort of crap about the taping of vinyl (they're like big, black CDs, love!). There was OUTRAGE when TDK released a cassette that allowed a whole album to be stored on one side, I kid you not.

Nothing's changed, only the tech.

There is no evidence that piracy harms the music or film industry - in fact, there's evidence it may actually help, as the legal sales made by people hearing and then following new bands outweigh the losses from those who only ever copy illegally. Loss of earnings is only loss of earnings if the guy taking a copy was ever going to buy it, which in most cases is not true. Some people just collect this stuff like stamps or old coins - there's not enough time left in their lives to listen to it all.

That said - please don't think I'm condoning it - I absolutely am not, which is why I have both Spotify and iTunes Match accounts, and Netflix. I still buy the odd CD I really like. There is a moral grey area about stuff which is not available to buy (bootlegs / US imports / etc) which I'll not comment on...

Thanks - I've finished now. You may go about your day. :-)

Sunday, 31 August 2014

Node.JS Async thinking...

I've been working on a Node.js project for the last few months. It uses all the usual suspects, including EJS and the Sequelize (http://sequelizejs.com) ORM, amongst other really useful modules.

One of the main issues you face when coming from a language like Java, as I did, is that most calls you make in Node are asynchronous. There is a single event-loop thread; the I/O happens in the background, and you provide callback handlers to be invoked when each call completes.

This is an easy enough concept to get - it means you get code like:

function doMyThing(finished) {
  // do some stuff, then signal completion
  finished();
}
function finished() {
  // called when doMyThing() is all done
}

But this gets compounded, and more and more complicated when, for instance, doing a lot of database calls that depend on each other. Or if you have several sets of processing that need to be done in order. You end up chaining endless callbacks, which makes the code incredibly hard to read.

There is no inherent thread management in Node (that I've discovered), so what do you do if you need control over the fork() and join() calls (to quote C / POSIX terminology)?

The answer, for me, is the async module, found here: https://github.com/caolan/async

There are many people using this, and it's fab! It's so fab that I'm not just using it when I need asynchronous control - I've actually replaced all my for() loops with async.each too. Thus:

Consider this simple loop to work on the results of a DB call:

// MyModel stands in for whichever Sequelize model you're querying
db.MyModel.findAll({where: {criteria: someCriteria}}).success(function(results) {
  // results will be provided from Sequelize as an array of objects populated from the db
  // for-loop method:
  for (var i = 0; i < results.length; i++) {
    // do something with results[i];
  }
  // async.each method:
  async.each(results, function(result, callback) {
    // do something with result
    // now tell the async library we're done with this result
    callback();
  }, function() {
    // this optional function will be called when all the results have been processed
  })
});

Forgive me if this doesn't quite have the right number of brackets and so on - I've not typed it in to check the syntax is 100%.

Note the difference - the for loop method iterates over the array, processing each element. The async.each method asynchronously calls the provided function for each element in the array, potentially calling a final method when all entries have returned. Note that the body of the handler function must contain a call to the provided callback function to tell the async library when this entry has been processed.

There are variations within the library to handle processing these in series (async.series) and a multitude of other useful things, so it has really become, for me, an essential module for programming in Node.

Thursday, 14 August 2014

D in your results? Chill out!

Just seen someone on the news really upset about working really hard and only getting a D. It's a bell curve - get used to it! A D in this year's results still makes you smarter than your parents on average, and smarter than their parents. People are getting three IQ points more intelligent each decade, so don't get disheartened by it.


We have to keep moving the goalposts to keep us getting smarter. The problem is that society tells everyone they can get an A if they work hard, and it's simply not true. For the really smart people to get an A that means anything, people like me have to get a D. And I did. That's just maths. Was I bummed? Yeah, but I adjusted my expectations and really focused on what I was good at and enjoyed. Am I a world-class pianist or software engineer? Nope! Am I OK with that? Yep (mostly), but I still work hard at both and am pretty damn happy where I've ended up.


Conversely, I DID get an A at GCSE English, but I can't write for shit, which I really wish I could do. That's just how it goes.


I hope all you results chasers get what you want, but more than that I hope that whatever happens you can look back in 10 or 20 years on the road you are setting out on and be happy where you ended up. 

Monday, 11 August 2014

3 useful tunnelling scripts for mac - SSH, VNC and AFP

I just thought I'd share these three scripts. I have previously blogged about setting up a tunnel, but some people have been asking for these scripts specifically. The first sets up a permanent tunnel and changes the network Location to one called Proxied - you'll need to set this up yourself; see my earlier post for details.

#! /bin/bash

handleSigint()
{
  echo SIGINT detected - reverting Location...
  scselect "Automatic"
  exit 0;
}

trap handleSigint SIGINT

scselect "Proxied"

sleep 5;

while [ 1 ]; do
  ssh -v -N -D8080 -o ServerAliveInterval=3 user@mydomain.com
  echo ssh exited... relaunching...
  sleep 5

done

Note that you'll need to change user@mydomain.com.

This script improves slightly on the original, as it catches CTRL-C and sets the location back to "Automatic".

Here's the next script - fires up screen sharing / VNC on a machine on my home network:

#! /bin/bash

echo Connecting...
ssh -f -v -N -L5900:192.168.0.11:5900 \
  -o ServerAliveInterval=3 user@mydomain.com
sleep 2; # Allow connection setup time

open vnc://localhost

Note that 192.168.0.11 should be the machine on your network you want to connect to - note the use of the private 192.168 network here. This is the IP address my iMac at home has been assigned. Don't forget to change user@mydomain.com again.

Finally, here's a new one - this allows you to 'mount' drives from a machine on your home network. It's very similar to the VNC script, but binds the AFP port, not the VNC one:

#! /bin/bash

echo Connecting...
ssh -f -v -N -L15548:192.168.0.11:548 \
  -o ServerAliveInterval=3 user@mydomain.com
sleep 2; # Allow connection setup time

open afp://localhost:15548/

I've used a non-standard port for this one to avoid conflicting with any local services you may have on the standard AFP port.

You can share any service in exactly this same way - as long as you know the port number.
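
For example, here's what a hypothetical script for reaching a router's web admin page (port 80) from outside might look like - the address and ports are just illustrative:

#! /bin/bash

echo Connecting...
# bind local port 8080 to port 80 on the router, via the trusted endpoint
ssh -f -v -N -L8080:192.168.0.1:80 \
  -o ServerAliveInterval=3 user@mydomain.com
sleep 2; # Allow connection setup time

open http://localhost:8080/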

Enjoy!

Thursday, 7 August 2014

Theatre Ingestre present "Fiddler on the Roof"



Next week will see me playing accordion for Fiddler on the Roof at Theatre Ingestre. With an excellent cast lineup directed by Tim Downes, under the baton of Calum Robarts, it should be a good one!

Ingestre Stables, Stafford - opens 13th August.

Theatre1 is launched!



I am proud to announce the launch of Theatre1 - a new theatre company targeting excellence, openness and professionalism for young theatre. They are auditioning for Songs for a New World. You can sign up to audition and download the audition pack at http://theatre1.co.uk, or follow them on Twitter @t1stafford or at Facebook.com/T1Stafford.

Monday, 30 June 2014

MYTS - West Side Story



MYTS will soon be opening West Side Story at the Gatehouse Theatre in Stafford, for those who are fans of the show. I'll be playing keys for this, and it will be the biggest show MYTS has ever done. It boasts an orchestra (that's right - not just a band) of 18 players, and a sensational cast. For me it's been a slight change of emphasis, back to actually playing the piano rather than programming. :-) If you have a free night and are in the area, go along - MYTS never disappoint.


RPI as secure web proxy

I, like most other people in the modern age, spend a lot of time on the Internet. This time is spent, probably in order of usage, at work, at home, and at other locations like coffee shops and other public WiFi. Like most people my own network is quite secure, but what about everywhere else?

The work WiFi is internal to the office, but how secure is that? To get on the WiFi you just need the password - there could be all kinds of devices connected. And who's to say that those devices don't have malware on them? Clearly I would like my connections to websites and so on to be protected.

Think about staying in a hotel, using the hotel guest WiFi. I'm now sharing a connection with strangers in other rooms, with God knows what intent.

The answer for me was tunneling.

This is the idea - you open a secure connection to a service running on a network you trust. In my case that's my home broadband, but it could be a server in AWS or something similar. It's easier if it's Linux-based, but essentially you can do it on anything. I have a Raspberry PI on the network at home, so that's what I went with.

You then use the ssh protocol to "tunnel" all traffic over a secured connection, via your trusted endpoint, out to the internet. The ssh client supports a proxy mode called SOCKS, which allows all network connections to be made by the proxy. So when I hit facebook.com in my browser, what's actually happening is that the browser asks the designated SOCKS proxy to establish the connection and route traffic back. I've used the ssh command to set up that SOCKS proxy via my home network, so now all traffic comes from the internet to my home network, and is then encrypted and tunnelled to my machine.

This works for all HTTP and HTTPS traffic, is very easy to set up, and I do it all the time now. I even have a script which sets it up on my mac for me.

The different pieces you need to configure are the PI itself, and the machine you're using. Let's start with the PI:

My home modem allows me to forward traffic from the internet to a specific host, so I have configured this to hit my Raspberry PI, and I bought a domain name for the job. I'm not telling you here what it is - sorry about that, but you know, this is a public blog... 

In the interests of not reinventing the wheel ensure your PI is good to go as an sshd server using the guide here. Note the section on generating keys - this is a really good idea as it negates the need to enter a password when setting up the tunnel.

Once you have it working you *should* be able to ssh to your PI from outside your network, if you've configured your modem correctly. If you have purchased and configured a domain for this, or used one of the dynamic IP services then this will work a treat.

If you can't get this bit working don't carry on, as this next part relies on the fact you can actually make a connection.
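
A quick sanity check from outside your network (tethered to your phone, say) is a one-liner like this - if it prints, the plumbing works:

ssh pi@yourdomainhere.com 'echo tunnel endpoint OK'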

In order to create the SOCKS proxy tunnel you enter a command similar to this:

ssh -v -N -D8080 -o ServerAliveInterval=3 pi@yourdomainhere.com

Note the yourdomainhere.com - if you've not set up a domain you'll need the internet IP address of your modem here.

In the call here, -v prints verbose information so you can see what's creating connections on the tunnel - leave it out for a quieter life. The -N stops ssh's default behaviour of executing a remote command (typically a shell), and the -D8080 is the magic which creates the tunnel - more on this in a second. The -o ServerAliveInterval=3 is a further optional parameter which makes the client send a null packet to the server every 3 seconds to keep the connection alive. Many ssh daemons kick off connections with no activity after some time, so this just stops that happening.

Now - more on that -D8080. This sets up a dynamic proxy on port 8080: the remote host makes new outbound connections as needed to service the requests from our local machine. SSH also allows specific static routes, where a given port on the client is forwarded to a given port on the server, but we're not using that here.

I actually wrapped the above line into a script as shown here:

#! /bin/bash

scselect "Proxied"
sleep 5;

while [ 1 ]; do
  ssh -v -N -D8080 -o ServerAliveInterval=3 pi@yourdomainhere.com
  echo ssh exited... relaunching...
  sleep 5

done

This script uses the scselect command to automatically switch the Location on my Mac to one called Proxied, which I have set up as shown in the screenshot below. Note that on a Windows machine I don't know how you'd do this in a system-wide way, but on a Mac this setting is honoured by all browsers in one hit.

The script also reconnects if the connection drops, after a 5s delay.

You can see that I have set up a SOCKS proxy on localhost, on port 8080, which matches the port in our -D parameter to ssh. If you need to use a different port that's fine - just make sure the port you put in your proxy settings match the port in your -D line.
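
Incidentally, if you'd rather script the proxy settings than click through System Preferences, the networksetup command can flip them for you. The network service name here is an assumption - use whatever yours is called:

# point the "Wi-Fi" service at the local SOCKS proxy and switch it on
networksetup -setsocksfirewallproxy "Wi-Fi" localhost 8080
networksetup -setsocksfirewallproxystate "Wi-Fi" on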

Again, if you're doing this in your browser directly (in Windows, say) you need to find the SOCKS setting and change it in this way, and it should work just the same.

I have this script in a bin folder I can access from a terminal, so I just run ssh-tunnel. It takes over that terminal tab, which I like because I can see what's going on, and to exit it I just kill the tab or CTRL-C the script. Easy.

If you're going to do this, I would strongly recommend that you also take a look at your sshd options on the PI and remove password authentication altogether. I would also strongly recommend that you install fail2ban using the guide here. Fail2ban essentially monitors your access log and automatically IP-blocks failed login attempts. You'll likely never have any legitimate failed logins yourself, so any that appear mean someone is trying to get into your system.
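
For what it's worth, turning off password logins comes down to a couple of lines in /etc/ssh/sshd_config - a sketch, assuming your key-based login already works (test it first, or you'll be digging out the keyboard and HDMI cable):

# /etc/ssh/sshd_config - allow key-based logins only
PasswordAuthentication no
ChallengeResponseAuthentication no

# then restart sshd to pick up the change
sudo service ssh restart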

I would also do some googling on securing your PI and either set your modem to only forward port 22, or else bolt your PI down to prevent unauthorised access.

And finally....

Once you have an sshd server on the internet you can access any of the machines on your internal network. For instance, check out this bit of script, which gives me a VNC connection to the iMac INSIDE my home network. It uses the -L parameter to create a specific tunnel (rather than a dynamic one) from port 5900 locally to port 5900 on 192.168.0.11. Now, what's this? A 192.168 address, internal to my home network? That's right - the address is resolved in the context of the remote network. You can see the familiar pi@yourdomainhere.com used to actually make the remote connection.

The last line is a Mac command to open a connection, but again, having made the connection you just need to open your VNC client and connect to localhost:5900 - like the dynamic proxy shown earlier you are making a LOCAL connection which is tunnelled for you.

Easy, huh! Now, go and be secure. :-)

#! /bin/bash

echo Connecting...
ssh -f -v -N -L5900:192.168.0.11:5900 -o ServerAliveInterval=3 pi@yourdomainhere.com
sleep 2; # Allow connection setup time

open vnc://localhost

Friday, 27 June 2014

Beware the Hero culture...

On paper at least, a modern software project (assuming you're in the 80% of companies using Agile software development practices) follows an Agile approach and should, in theory, have continuous integration, automated unit testing, atomic version control and release management all kinda sorted. If you're in the 20% it's possible you're still delivering great quality software, but I'm willing to bet that you have more of a Hero culture.

What do I mean by Hero culture? I'll tell you. You have too many people who get off on saving the day and staying up for three days straight fixing problems, rather than getting off on putting in place a rigorous process of software testing and release to ensure that go-live problems don't happen.

Too many organisations I have worked in have this problem - and what's worse is they don't recognise it.

I have been involved in two major projects recently where, in both cases, the bottles of bubbly were opened for the guys that stayed up all night, while the guys (like me, so no bitterness or anything) who were bleating on about the importance of repeatable unit testing, CI and so on were largely ignored, and by the end were practically held up as cranks. That's not to say I'm not respected by my colleagues - I am - but when it comes to delivery they have a way that works for them, and what's worse is they can't see how dangerous the Hero culture is.

Now, sometimes you need a few clever guys to get you through the gate, but just about every time I see extended periods of hacking in production by these very talented and expensive guys I can see how it could have been avoided. So why the hell does it keep happening?

In most cases this requirement comes about because the project has gone off the rails through poor project management, poor stakeholder management, or poor quality of deliverables. In all cases these can be trapped early, so there's no reason to be busting a gut doing 80-hour weeks when the application goes live.

To prevent that happening you absolutely must - I say again - MUST have a lean development environment, with a high degree of testing at each level, a high degree of automation in deployment, and a mechanism to track changes and issues with the application. This stuff isn't new so why are we (and many others) so bad at it?

It could be that because Sprint 0 slips we drop something essential so we can stay on track, but this is a fallacy of the worst order. If you don't get the foundations right you need to crop later stories, not elements of the process essential for successful delivery. That may mean revisiting the business case. If you ignore the essentials you blow the business case to hell when the flakey thing goes live anyway - and probably by more, because you have to release many more times with the inefficient release process you're left with, having not got the basics right (could be VCS issues, release management, poor automated testing, or a mix - you get the idea).

The development burndown should be flat, not peaky. A release should be a BAU operation, and the team should go home for the weekend afterwards without worrying about it. This is an ideal, I know, but you can get damned close. On the last Agile project I ran we missed our pub lunch on a Friday in about one in three sprints. THAT'S IT!! No overnighters, no 80-hour weeks, and the quality of what we were pushing out was far in excess of anything I've seen on the projects in the two years since (working for a non-Agile organisation with a serious Hero culture ethos).

I'm not having a go at the men and women that get the application out - the Heroes - I'm really not. They're some of the brightest minds I've worked with. It's the company itself that is rewarding the wrong people for the wrong things. If the board / senior management create a culture where Heroism is what's needed to get applications out then they should get called out on it, and something should be allowed to fail. This may result in the City asking what the bloody hell is going on, and things will change, but too many companies don't see this as the problem and just carry on regardless.

It's frustrating, not least because it's preventable, but also because it leaves people like me - people who can see a better way - looking for somewhere else to exercise these ideas. Somewhere they may actually get it!

Tuesday, 17 June 2014

A must-have gadget... not my normal type of post...

Every now and then a toy comes along that I just think fits the bill. It genuinely solves a problem (without inventing the market in the first place - thanks Apple for making me crave an iPad!). Examples might include Apple TV, or the Raspberry PI, or even just something mundane like a car stereo head unit that automatically sorts the EQ using test tones...

I took delivery today of another such toy.

Like just about all of you, I have built a collection of AV equipment in my living room based on quality. I have an Arcam bi-wired stereo and a Rotel CD player that SOUND good, a Samsung TV that LOOKS good, and an Apple TV which enables streaming of content from our main machine (and Netflix). The problem has always been that while these are all connected just fine, there's a combination of remotes, and honestly, everyone from the babysitter to my mother needs reminding how to use any of it.

Not any more - introducing the Logitech Harmony 350 - the baby of the range, but good enough for me, and a lot cheaper than the bigger remotes. It's a general purpose remote.



Now - I know what you're thinking - you've seen these before? Endlessly typing in codes from the back of some Japanese- or Taiwanese-translated pamphlet in the hope that just some of the functionality you need will work? Not any more - this thing has a USB interface and companion software: you give it the model numbers of your kit and it configures itself.

Better than that, you can check whether your kit is compatible before you buy one.

Finally, it has shortcut buttons for turning everything on and setting up the correct inputs and so on, all in one go.

I have put the four separate remotes in a drawer, and now we have just the one. I know it's silly, but this means you can have good quality separates and still a single remote. At around £40 the 350 isn't cheap as chips, but it works so well. It even controls the volume on the Arcam stereo!


Saturday, 14 June 2014

£50 iBeacon PoC - "Welcome Home, Roger"

So, this is a trivial use of iBeacon, but I have it sussed. Using a Raspberry PI, a £12 Bluetooth dongle and a ludicrously simple app for my iPhone, my phone now welcomes me when I get home.

I didn't invent anything here, but I found I had to combine information from a couple of different guides, so I'll link them as I go.

Firstly - the dongle I used was one of these.

Then you need to set up the software on the Pi to drive it, for which I would recommend this guide. I would say, though, that this guide fell short of actually getting the thing going - it was good for getting the software installed. My particular combination of dongle and UUID just didn't quite work, so I then used this guide, which is where I realised that I had some extraneous zeros on the end of the hciconfig command.
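
For reference, the recipe from those guides boils down to something like this. The UUID here is the AirLocate example the guides use (it's what the Radius app's Apple Locate profile matches), and the major, minor and measured-power bytes at the end are illustrative - substitute your own:

#! /bin/bash
# bring the dongle up and start a non-connectable LE advertisement
sudo hciconfig hci0 up
sudo hciconfig hci0 leadv 3
sudo hciconfig hci0 noscan
# flags + Apple manufacturer data + 16-byte UUID + major + minor + tx power
sudo hcitool -i hci0 cmd 0x08 0x0008 1E 02 01 1A 1A FF 4C 00 02 15 \
  E2 C5 6D B5 DF FB 48 D2 B0 60 D0 F5 A7 10 96 E0 00 00 00 00 C8 00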

One *really* useful thing is to have two ssh sessions going, and have hcidump running in one of the windows. It gives you output like this:

< HCI Command: LE Set Advertise Enable (0x08|0x000a) plen 1
> HCI Event: Command Complete (0x0e) plen 4
    LE Set Advertise Enable (0x08|0x000a) ncmd 1
    status 0x0c
    Error: Command Disallowed

You can see from my sample output here that it showed up a problem - when you issue the UUID and so on you should get something like:

< HCI Command: LE Set Advertising Data (0x08|0x0008) plen 32
> HCI Event: Command Complete (0x0e) plen 4
    LE Set Advertising Data (0x08|0x0008) ncmd 1
    status 0x00

The non-zero status is a bad thing!

The final part is to set up init.d scripts to automatically start the Pi broadcasting when you reboot the Pi, and this can also be found in the second guide - very handy.

I would advise doing a sudo apt-get update, and probably upgrading the Pi firmware to the latest version using rpi-update (note that this caused my Pi to boot into single-user mode, so have a keyboard and HDMI connection handy if you do it).

The last part was to write the app, and that's still in progress, but to test the connection I would recommend the free app by Radius Networks. There's another app referenced in the docs, but it's not free anymore.

If you want to dig Xcode out and start cutting code then that's pretty easy too, and when my little welcome app isn't so hacky I'll share the code.

Note that if you do use the UUID in the second guide above there is already a profile which will detect this in the Radius App, called Apple Locate, so it's the quickest way of checking your BLE is working properly.

Have fun!

Wednesday, 11 June 2014

Like a kid again - Elite: Dangerous is nearly here

When I was a nipper a game consumed my life like no other addiction had up to that point (though Everquest did rather take over my life about 15 years ago). My dad came home with a copy of Elite for the Acorn Electron we had at the time, got me started on it, and awakened a geek in me that never really went back in the closet.

Elite was originally created by David Braben and Ian Bell and, with some imagination, was bloody fantastic. I played with the lights off late into the night - first on the Electron, then the Spectrum and finally my PC (actually, I have been reliving my youth with Oolite - www.oolite.org - on my Mac...).

The basic premise of the first game was a MASSIVE galaxy you could explore, flying between systems trading, bounty hunting, trying not to get killed by pirates, or being a pirate.

Here's the thing though - the game was originally based on black and white vector graphics with no textures at all. Graphically it was good for its time, if a bit sparse, but the controls worked well, the game was reliable, and when played late at night at sleepovers it put my friends and me in an imaginary world, flying between systems, taking notes of stock prices for the next trade...

Super.

At the time my friends and I dreamed of what it would be like if you could do missions, team up with other players, or buy other ships. And now we are nearly there. Braben ran a Kickstarter project (https://www.kickstarter.com/projects/1461411552/elite-dangerous) which raised over £1.5m to build Elite: Dangerous. And my friends, it's everything my 12-year-old inner self is dying to get hold of.

The premise is the same but now it's truly massive (the whole Milky Way is mapped - I know!!) and is massively multiplayer too. You can have different ships, and take on missions. Group with friends and do joint missions together. 

Basically it's World of Warcraft meets Elite and I can't wait!! I've already warned my wife she will lose me for about 6 months when this game comes out, and for the first time EVER I shall be installing Bootcamp on my laptop so I can play the Windows version on release rather than wait for the Mac version (which will follow - thanks boys!)

If you want to see more head on over to elite.frontier.co.uk.

Friday, 6 June 2014

Nuclear safety - why do we get all bent out of shape about it?


You may have seen in the media that the Office for Nuclear Regulation is considering raising the safety limit for the degradation of the graphite bricks which protect the nuclear core of a power station. The proposal from EDF is to raise the limit from 6.2% to 8%.

What concerns me is that anti-nuclear lobbyists are already jumping on this as the government putting power generation ahead of public safety, but this is simply not the case. The ONR has a good track record of imposing sound safety measures based on actual science (you know - that thing that doesn't give a shit about public opinion, it just is), so I would urge anyone thinking of getting up in arms about this to at least wait for the science to come in.

The ONR has told EDF to commission independent scientific consultation, as it is believed that 6.2% was extremely conservative. If this is true then raising the limit is just common sense - and it's not putting anyone at risk.

The thing is - if we rush to condemn this change based on a misguided view of nuclear safety, we will shut down our power stations 10 years earlier than we need to, and that would be bad news for our power supply. We have a real generation problem looming - we are looking at rolling blackouts towards the end of this decade - and I love my internet connection way too much not to get vocal when I see that coming!

Also, a quick reminder: in terms of critical illness and deaths per megawatt, nuclear power is just about the safest form of power generation - safer even than solar (making the panels is a very toxic business, and people die in their manufacture).

And now I leave you to your weekend… Enjoy the sun.

Thursday, 5 June 2014

Parsing Roman Numerals

I had reason to write a Roman numeral parser. The spec was simple - pass in the number as a Roman numeral, which I take as a String (for instance, "MCMLXXXIV"), and return the decimal equivalent.

It took me a few minutes to work out a neat way of doing it, so I thought I'd share in case anyone else needs it. It doesn't check that the numerals are valid, and comes without warranty, blah, blah, but it works for all my tests. It's in Java, but would be easy to convert to any other language.

There's the class Parser which does the work, and the RomanNumeral enum which stores the values for each letter. There's not much error handling either.

Enjoy!

public class Parser {
    public static int parse(String romanNumerals) {
        int[] values = new int[romanNumerals.length()];
        for (int i = 0; i < romanNumerals.length(); i++) {
            values[i] = RomanNumeral.valueOf("" + romanNumerals.charAt(i)).value();
        }
        return parse(values);
    }
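    // Scan left to right: a numeral smaller than its successor (the C in CM,
    // for example) is withheld and subtracted from the next value added.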
    public static int parse(int... values) {
        int total = 0;
        int subtraction = 0;
        for (int idx = 0; idx < values.length - 1; idx++) {
            if (values[idx] < values[idx + 1]) {
                subtraction = values[idx];
            }
            else {
                total += (values[idx] - subtraction);
                subtraction = 0;
            }
        }
        total += (values[values.length - 1] - subtraction);
        return total;
    }
    private Parser() {}
}
public enum RomanNumeral {
    I (1),
    V (5),
    X (10),
    L (50),
    C (100),
    D (500),
    M (1000);
    private int value;
    private RomanNumeral(int value) {
        this.value = value;
    }
    public int value() {
        return value;
    }
}

Wednesday, 4 June 2014

When is a stereo jack not a stereo jack? Know your cables!

I'm helping a young lad I've watched grow up with his first MD post - I'm playing second keys - and we had a really interesting chat I thought worth sharing.

I got to band call, and he was complaining that his piano sounded really tinny and thin - and indeed it did. It sounded fine on headphones, so I started tracing the wiring to the amp.

Here's the funny thing - he had a stereo splitter cable - 1/4" stereo jack to 2x 1/4" mono jacks - left and right. He'd connected the left and right to his Presonus box and the other end to the amp.

Now I know what you're thinking - I bet that's not a stereo input on the amp. You'd be right - it's a balanced input.

This led to a conversation with the young lad...

So, in a balanced TRS jack (which looks just the same as a stereo jack) the tip and ring both carry the mono signal, but crucially they are out of phase with each other. This allows the receiving equipment to reject noise effectively, as the noise will be present, and equal, on both conductors. There are plenty of technical articles about differential balancing if you're interested.

The main point for this blog post is that the amp Tom connected his keyboard to expected the ring to be a phase inversion of the tip, not a right channel, and the processing that then took place effectively wiped the bass out of the mix entirely.

Have you ever wired up a car stereo system and got one of the speakers out of phase? I've done that, and it has the same effect: the relatively low-frequency bass sound waves cancel each other out. You can get the same problem in large studios that don't have bass traps - the sound waves bounce off a wall and mix with the direct sound waves at a different phase, reducing (or reinforcing) the bass sound.

So, he unplugged the Right output from his Presonus and all was well. Whether or not he remembers why that worked is another matter. :-)

Incidentally, I've started using balanced line outs on my keyboards and have found a significant reduction in noise and a slight boost in input signal, so I'd recommend it.

Using a Raspberry PI as DNS and DHCP tool

This morning I set up my RPI to run DNS for my network. This means I can resolve internal machines (the PI itself and the iMac downstairs) and make use of internal DNS caching, saving a trip out to Virgin for every DNS lookup.

It's actually trivially easy to do. If you're already using udhcpd you may want to switch, as you can do it all in one config file, but you don't have to.

Firstly apt-get the package:

sudo apt-get install dnsmasq

You then need to edit /etc/dnsmasq.conf. Look for the following settings and change them to suit your own network:

server

This should be the address of a good external DNS server - either your ISP's, or, as I've done, Google's:

server=8.8.8.8
server=8.8.4.4

domain

This should be the name of your internal network - mine happens to be softfox.net. This enables you to resolve addresses on your network, so in my case pi.softfox.net:

domain=softfox.net

dhcp-range

If you're going to use dnsmasq for DHCP you need to edit this, but you don't have to use it for DHCP at all. If you are using a different DHCP server (for instance, udhcpd) you'll need to update the DNS settings it hands out so that they point at your PI.

dhcp-range=192.168.0.100,192.168.0.199,12h

This takes the form of range start, range end, lease time. There are other options for multiple ranges on different networks, and all kinds of other things.

dhcp-host

I've not used this, but it's how you hand out a specific IP to a specific MAC address - could be useful. I actually have my fixed things on static IPs, so didn't need it.
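
For illustration, an entry would look something like this (MAC address, hostname and IP all made up):

dhcp-host=b8:27:eb:12:34:56,imac,192.168.0.20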

dhcp-option

Important one, this - dnsmasq assumes that it's running on the router, but in my case my router is a Virgin box on 192.168.0.1, so you have to set this up to point to that box:

dhcp-option=option:router,192.168.0.1

Again, there are a ton of things you can do here, but I have a fairly simple setup. There are examples of setting up WINS names and so on, but I have an all-Mac network, so don't worry about such things. You can also set up BOOTP and TFTP for network boots from this - again, I have no need of such things.

cache-size

This sets the number of DNS addresses dnsmasq will cache, and I would advise you to set it to its maximum. It's way quicker to get an address from your own cache than to go out to your ISP, so I've set mine to the maximum, 10000:

cache-size=10000
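
Once you've saved the config, restart the service and check that a lookup resolves locally. The Pi's address below is an assumption - use your own (dig lives in the dnsutils package if you don't have it):

sudo service dnsmasq restart
# query the Pi directly - substitute your Pi's actual address
dig pi.softfox.net @192.168.0.2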

Stephen Wood has done a really cool post about this, with more information and some scripts to help you profile the improvements, here - so thanks, Steve, for that, and for getting me up and running so quickly.

Thursday, 29 May 2014

RSync PI anyone?

I have a fairly slow and noisy 2TB external drive, and recently bought a 3TB drive as a backup drive. I'm using an iMac so Time Machine is handling the backup for me - all is good.

Here's the problem - my home studio has the constant background noise of the 2TB hard drive, so I've decided to move things around a little. I hadn't really noticed it before, but I've started recording at 48kHz, 24-bit, with a Rode NT1-A, and it just picks up this noise when you have sensible gain levels on the mic.

Further to that, a little speed test surprised me - it showed that my trusty Iomega RAID drive is only managing about 30Mb/s write speed. That's OK for recording, but the new drive (USB3 in a USB2 slot, as the Mac doesn't have USB3) is writing at over 100Mb/s. Hmm.

So it feels like I should do this the other way round. Have the relatively quiet, fast drive as my recording drive and the slow noisy one as backup. That's the easy part. How do I get the noisy machine out of my studio?

The answer is to put it on the network, connected to a hub upstairs (where I already have one for the PS3 and so on). That's easier said than done, given that it's formatted with HFS+, and reformatting to something a NAS will understand would almost certainly lose some information. I need to use HFS+, or something that supports everything HFS+ supports, such as the Linux ext4 format.

I researched ways of putting a non-NAS drive on the network, and there are solutions, but they all seem a bit of a compromise for the £0 I want to spend doing it.

The answer? 

I have moved my Raspberry PI upstairs (powered off the telly - currently - I'd best deal with that), and will connect the Iomega to it, formatted as ext4. I can then use rsync over ssh to pull the files I want to back up from the iMac (I could push instead, I suppose, but I'm also pulling some stuff from offsite servers, so pulling keeps all the rsync jobs in one crontab).

Perfect - I can't use Time Machine, but rsync is great.

Rsync

For those that have not used it, rsync is a command line utility found in just about every unix - OSX included - which syncs two file trees. These trees can be local or on a variety of network endpoints, including over ssh - perfect for my needs. When run with the -a (archive) flag it preserves file permissions, symlinks and so on.

The really great thing about rsync is it only copies files that it needs to, and in fact only copies the parts of files that have changed - brilliant for syncing large files. There's no version history, but I don't really care about that. I just need my files somewhere safe.
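
As a sketch, the sort of command I mean looks like this - the hostname and paths are made up, so adjust for your own machines:

# pull the recordings tree from the iMac onto the Pi's ext4 backup drive
rsync -az -e ssh roger@imac.local:/Volumes/Fast1TB/Audio/ /mnt/backup/audio/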

Crontab

The cron table is an age-old mechanism for unix machines to run the equivalent of Windows scheduled tasks. Each entry consists of an interval specification and the command to run. It's fairly straightforward.

It's true that data throughput speeds on a Pi aren't that great, as the network interface is on the same bus as the USB, but again, it doesn't really matter.

The final part of this is to run rsync from the crontab, using the nice command to lower its execution priority. I will set it to run overnight, so it doesn't try to read files while I'm recording.
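
The crontab entry on the Pi ends up looking something like this (edit it with crontab -e; the time, paths and hostname are all assumptions):

# run nightly at 02:30 at low priority, logging output for the morning
30 2 * * * nice -n 19 rsync -az -e ssh roger@imac.local:/Volumes/Fast1TB/Audio/ /mnt/backup/audio/ >> /home/pi/backup.log 2>&1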

I'm still copying files to my new fast drive (and will be for hours yet - about 15 hours to copy 2TB it seems). Once that's done I need to move the Iomega drive up next to the Pi, and implement the rsync - then I can leave it to it. 

This means that little £30 PI is running a webserver, an sshd endpoint for secure tunnelling into my network from outside, and DHCP for my network (because the Virgin SuperHub is crap at it), and now it's in charge of my backups too. Genius!

Friday, 16 May 2014

Motu 828x Thunderbolt / USB audio interface - a review



So I have taken possession of my new Motu 828x. I wanted to upgrade my trusty Edirol UA-101 to something which can be properly plumbed into my home studio, as I take the UA on the road all the time.

I'm not a professional reviewer, but this is a new model of audio interface and one of the few taking advantage of Apple's new Thunderbolt technology. Thunderbolt is taking over as the de facto interface for high-end use on Apple kit. It allows six devices to daisy-chain, with a full duplex 10Gbps connection to each, which outstrips FireWire 800 (800Mbps) and USB3 (5Gbps). This means I hope to have future-proofed my studio for a while.

True the pro studios are using ethernet these days, but this is a hobby for me, so I won't be going that route any time soon.

So, what do you get with one of these things? Well, as well as two combi inputs on the front for easy access when using guitars and so on, you get 8 balanced TRS ins and outs, ADAT, SPDIF and MIDI. I've probably forgotten something, but the main thing you might think is missing is a bank of XLR mic inputs - it has only the two combi inputs. That's perfect for me, though, as I'm going to use my Focusrite Octopre with it, so I have that covered. It means they've spent the money on internal quality, and things like an effects unit, rather than mic preamps.

In the box you get the unit itself, with ears for attaching it to a rack, the power cable of course, a USB cable (a bit of a shame - you have to get the Thunderbolt cable yourself) and the drivers. You also get AudioDesk software, which I'll review separately if it's any good. I use Logic, so I'm not sure what I'll use that for.

And here it is, installed above the Octopre in my desk rack. For the nostalgic, that's a JV-1080 underneath the Octopre. I've wired it to take 4 of the inputs from my Octopre, as I never use more than 4 at a time anyway, with the other four from my two modules - the JV and an ageing TG300 which, to be fair, I don't use often, but it's a quick way of getting Sibelius to sound decent.

My first barrier was that the drivers were shipped on CD, but fortunately I have an external Blu-Ray player, so I could get them installed.

As seems common for sound devices, even on a Mac, a restart seems necessary (I wonder if it actually is...). After a reboot, and after checking the settings in the dialog which auto-opened, I fired up Logic to see what we've got...

So, first impressions of the sound quality are as good as you'd expect. Certainly the clarity through my little Yamaha MSPs is excellent, not least because I'm able to use the XLR inputs and not the jacks, so the tiny bit of mid-range distortion, which I'd thought was the monitors, has actually gone. The bass is ever so slightly more reinforced too. Lovely!

Using the Audio MIDI Setup tool I was able to see which inputs in Logic correspond to which inputs on the device. It's as follows:

1/2 the combis on the front
3/4-9/10 the TRS jack inputs on the back
11/12 reverb from the inbuilt effects processor, so you can track this on a separate track - neat
13/14 the FX return (did I mention this thing has an insert on the back too?)
15/16 SPDIF (I have my DAT connected to that)
17/18-23/24 ADAT channel A
25/26-31/32 ADAT channel B

That's one hell of a set of interfaces. From the blurb it looks like you can run the whole thing at 192kHz (assuming you have the processor for it, which I don't), but there's a caveat about the SPDIF maxing out at 96kHz if you run 192kHz on the ADAT. If you're going to be using high sample rates on the ADAT, I suggest you read the tech specs.

So, the next thing I did was rename the I/O Labels in Logic and save this as my default template.

I noticed that the sample rate was set to 44.1kHz, and I prefer to work at 48kHz or above. When I tried to change this in the MOTU Audio Setup it changed straight back, and the clock light on the front panel flashed a few times. I thought it might be because I have a DAT connected, which will be at whatever frequency I recorded on it, but turning that off didn't help. One nice thing this device has over my UA is that it can change frequency on the fly - the UA actually presented a different device for each frequency, which made switching between them a pain as you had to power cycle the unit. This one doesn't have that constraint, so why couldn't I change the frequency?

Well, it turns out Logic is doing this. I'd not seen it before because, as I said, my other device took the setting from the hardware, but in Logic's Audio options there's a setting for the sample rate. It looks to be per-project, but I can change my default template to handle it, I suppose. As it happens, this project was recorded at 44.1 - if I open one recorded at 48kHz I assume it will switch. Funnily enough, going through looking for a project recorded at 48kHz, I can't find one - which means something funny was going on with my last sound card. It looks like even though the hardware was set to 48kHz, Logic was sample-rate converting down to 44.1 in software or something. How had I not spotted that before?

Sorry about the picture quality - I'm writing this on my iPad and using the camera rather than taking screenshots, which would be way more professional, but I wanted to record how I got this running, and this is the easiest way of doing it.

I have a recording day coming up with a barbershop quartet and I'm keen to see what the latency is like on this new kit, so I'll post anything interesting. Meanwhile, please feel free to drop me any questions you might have about the device. I got mine for a little over £600 from GAK (with some free headphones, actually) but I think the retail price is closer to £799. It's definitely a step up in capability for me, and there's lots of potential in this little unit.

Wednesday, 7 May 2014

Devout or fundamentalist?

An article on Radio 4 this morning got me thinking and I thought I'd share. 

It's interesting when two words which appear very similar in meaning get used to refer to different things. Take, for instance, the phrase coined after the London 7/7 bombings by a passer-by commenting on emergency measures. She said "we should be prepared to give up some of our liberties for freedom". What does that even mean?

What she meant was civil liberties within our borders and freedom from oppression from without. 

The article this morning made me think of this phrase again. The Muslim being interviewed took offence at the fact that a devout Catholic is called just that - devout - which is seen as a good thing, while a devout Muslim is labelled a 'fundamentalist', which is seen as a bad thing.

I think I agree that these labels are inconsistent, but where my thoughts differ is that I think both are bad. The closer you get to the 'extreme' end of a religious spectrum, the more dangerous you are, and I don't think it matters much which religion. Compare the Taliban with Christians in the Bible Belt: both groups take an extremely fundamentalist view of the world and force that world view on those around them.

The Old Testament and the Qur'an both contain some pretty gruesome teachings (some of the same stories, actually), and moderates leave out the bits which modern society deems inappropriate or just silly.

Of course, the old stories are handy if you're racist and looking to back that up, or if you think women shouldn't drive or own property. Again, society hasn't quite deemed these inappropriate or silly just yet, but I'm optimistic.

For now I'll remain a devout atheist, dedicated to the rational and logical analysis of our world and our behaviours within it.


Wednesday, 30 April 2014

Social Equality vs. Fairness

I had a really interesting conversation over lunch today, which extended over Facebook into a wider discussion with other friends on the merits of equality vs. fairness.

This was sparked by the news that Lloyds TSB are to offer bank accounts which comply with Shariah law and, as part of that, pay no interest - but, interestingly, charge no overdraft interest either. This led me to kick off a discussion about how this was unfair on non-Muslims. In general, as an atheist, I see many things in society that are deemed acceptable because they have a religious basis, which look ridiculous if you actually treat all religions with the same disdain. Why, for instance, do religious bodies get tax benefits, and why are CoE ministers paid by the taxpayer to attend the beds of the terminally ill? This is surely not fair on the rest of us?

Now it turns out these accounts are actually available to anyone, so in theory anyone could move a credit card balance to an agreed overdraft facility at Lloyds and have it interest free. So, despite the pandering to the Islamic community to win more business (too cynical?), it's actually not unfair, in that everyone is treated equally in this case - but let's stick with the equality-vs-fairness theme a while longer.

It got me thinking a little about the nature of fairness and equality in society, and I wondered if maybe fairness and equality are mutually exclusive. Let me try to explain my reasoning.

I suggest that fairness cannot be measured quantitatively, and as such is a purely relative and subjective measure. Equality, however, CAN be measured. Let's look at an example to try to illustrate my point. I pay a higher rate of income tax because my income falls in the higher-rate taxpayers' bracket (like nearly every other white-collar IT consultant, I'd say). Is this fair? Not on me it isn't! A percentage-based system already ensures the more you earn the more you pay, but on top of that I am arbitrarily put on a higher percentage. That doesn't sound fair at all. However, society has deemed that a certain set of social services must be paid for, and that only works with a graduated income tax system. If I agree with paying for schools, hospitals, care for the elderly, etc., that's just the way it is. We all have equal access to these services - the rules are largely applied equally to every member of our society - but it's hardly fair: I'm paying more for the same set of services.

Let's try to make this sound mathematical, to see if rules of equality and fairness can be made simple - let us talk about applying rule A to bodies X and Y. If rule A is applied to both, irrespective of any properties of X and Y, this suggests both bodies have been treated equally. Rule A is applied blindly, without any reference to the bodies to which it is applied - a bit like access to the NHS. The NHS is free at the point of use, and all of us can rock up at A&E if we need to. Easy. If, however, the properties of X or Y are considered in the decision as to whether or not to apply rule A, this would suggest they are not treated equally, in an attempt to treat them fairly. So you get rules which target certain individuals, which are fair but by definition not promoting equality.

Let us make this a bit more real - let us say that body X is a man, and body Y is a woman. Now let us say that childcare benefit is provided only to body Y on the basis that she is a woman - this may seem fair (and indeed is largely how our society is still geared up) but it's hardly equal.

So society decides on some abstract, empirical, historical, political and perhaps religious rules defining the properties which distinguish different types of bodies, and then these properties are factored into the rules, making any potential for an "equal" society surely impossible. Given that we also cannot be fair to everyone, it feels like in our attempt to be "fair" we are actually perpetuating social inequality in the truest sense of the word.

Would it be better to only apply rules which can be applied equally to everyone, and discard the rest? Most of our laws are like this - religious people who feel compelled to wear certain clothing still have to wear crash helmets on motorbikes, for instance. Equally, rules around trade and industry do not take account of Jewish dogma regarding the Sabbath - there are many examples like this. Those people may choose not to do certain things on certain days in the week, but there's no law (nor should there ever be a law) that says I can't buy stuff and work on a Saturday.

One problem with this whole post takes me back to my first example: because anyone can apply for an Islamic account with Lloyds TSB, the rule is applied irrespective of any religion "property" the applicant may have, and so it is an equal rule - but in this case it also sounds fair. So it's possible this is a circular argument, as here it is both equal and fair, which may make the whole thing moot. But I think the end result is that it is not possible to treat everyone equally, because we are not equal. Some people need more social support than others, some people need to pay more tax than others, and we need to treat these people unequally in order to be fair.

The only thing society then needs to work out is what makes a valid set of properties on which to base the differentiating rules - I would argue that we should restrict this to empirically and scientifically grounded properties only, otherwise we'll end up chairing arguments between religions, psychics, homeopaths and astrologers, and that can only end up in a world of shit!


Tuesday, 22 April 2014

SQL vs. NoSQL - some initial findings

Let me first say this post is not a scientifically controlled report - it is nothing more than some initial findings I thought worth sharing on SQL vs. NoSQL - in this case MongoDB. I know way more about SQL than I do about NoSQL, but I am embarking on a project which will involve large amounts of real-time data manipulation and processing for retail, and I've been having a play with MongoDB. There were some surprises:

1. I believed, based on things I had read, that a carefully structured relational database with an optimised schema would be faster at answering the questions it was designed for than any NoSQL solution. I'm not sure this is true in the vast majority of cases. I actually wonder whether the edge cases where it may be true make it worth using SQL at all, unless you can really, REALLY show that it's a good idea.

2. I had not really thought too much about the boilerplate that goes with SQL vs. NoSQL, but compare these two bits of code:

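Something along these lines - a sketch from memory rather than the exact code, with DataGenerator and Person standing in for my test-data helpers:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SqlInsertTest {
    public static void main(String[] args) throws Exception {
        // DataGenerator/Person are hypothetical stand-ins for my test-data helpers
        DataGenerator gen = new DataGenerator();
        try (Connection conn =
                 DriverManager.getConnection("jdbc:derby:peopledb;create=true")) {
            conn.createStatement().execute(
                "CREATE TABLE people (firstname VARCHAR(50), "
                + "lastname VARCHAR(50), email VARCHAR(100))");
            // The boilerplate bit: map each object field onto a column by hand
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO people (firstname, lastname, email) VALUES (?, ?, ?)")) {
                for (Person p : gen.generatePeople(10000)) {
                    ps.setString(1, p.getFirstName());
                    ps.setString(2, p.getLastName());
                    ps.setString(3, p.getEmail());
                    ps.executeUpdate();
                }
            }
        }
    }
}
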
and:

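Again a sketch rather than the exact code, this time with the 2.x MongoDB Java driver (same hypothetical DataGenerator):

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;

public class MongoInsertTest {
    public static void main(String[] args) throws Exception {
        DataGenerator gen = new DataGenerator(); // same hypothetical helper as above
        DBCollection people = new MongoClient("localhost")
            .getDB("test").getCollection("people");
        // No schema, no mapping - just chuck the object straight in
        for (Person p : gen.generatePeople(10000)) {
            people.insert(new BasicDBObject("firstName", p.getFirstName())
                .append("lastName", p.getLastName())
                .append("email", p.getEmail()));
        }
    }
}
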
These both do the same thing - use a DataGenerator to generate a bunch of test data; in this case a load of first names, last names and emails. In the case of the SQL test (no ORM, to be fair) there's way more code, because you have to somehow map the object domain onto the data domain. Not so in the second, where you just chuck the object into the db.

This leads me to:

3. There is no "schema" per se. There are just buckets. This feels like a bad thing - strong typing is good, right? Well, adding a column to a database that's been running for a while and may have millions of records is a big deal. In a NoSQL solution you just start adding the new kind of object. Old objects simply won't have the new field (so one assumes you need some null checking, and maybe to be careful how you rebuild your Java objects, akin to deserialisation perhaps?), but the db is ultimately flexible.
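
For example (a sketch again - the phone field here is hypothetical), documents written before the field existed just come back without it:

import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

public class MissingFieldTest {
    public static void main(String[] args) {
        DBCollection people = new MongoClient("localhost")
            .getDB("test").getCollection("people");
        DBCursor cursor = people.find();
        while (cursor.hasNext()) {
            DBObject doc = cursor.next();
            // Documents inserted before the (hypothetical) "phone" field
            // was introduced simply don't have it, so guard for null
            Object phone = doc.get("phone");
            System.out.println(doc.get("firstName") + ": "
                + (phone != null ? phone : "no phone recorded"));
        }
    }
}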

4. Last one for now - check out the search code below. Both versions do the same thing - search for all names that start with a capital "A":

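The JDBC version, sketched against the same hypothetical people table as before:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SqlSearchTest {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:derby:peopledb");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT firstname, lastname, email FROM people WHERE firstname LIKE ?")) {
            ps.setString(1, "A%");
            try (ResultSet rs = ps.executeQuery()) {
                // More hand-mapping from the data domain back to the object domain
                while (rs.next()) {
                    System.out.println(rs.getString("firstname") + " "
                        + rs.getString("lastname") + " <" + rs.getString("email") + ">");
                }
            }
        }
    }
}
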
and now the MongoDB version:

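A sketch with the 2.x driver again - the cursor calls chain straight off the find:

import java.util.regex.Pattern;

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

public class MongoSearchTest {
    public static void main(String[] args) {
        DBCollection people = new MongoClient("localhost")
            .getDB("test").getCollection("people");
        // A Java regex becomes a $regex query, and find/sort/limit chain together
        DBCursor cursor = people
            .find(new BasicDBObject("firstName", Pattern.compile("^A")))
            .sort(new BasicDBObject("lastName", 1))
            .limit(20);
        while (cursor.hasNext()) {
            DBObject doc = cursor.next();
            System.out.println(doc.get("firstName") + " " + doc.get("lastName"));
        }
    }
}
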
See the function chaining in the MongoDB solution? How elegant is that? Rather like Java 8 streams, or the kind of sequence processing you see in Clojure, it's a lovely way to process the results. Incidentally, it's a load quicker in MongoDB than it is in Derby for any number of records. I'd like to say that's because it's a LIKE-type query, which is just about the most horrible thing you can do in an RDBMS, but actually EVERY query is quicker in MongoDB. Counting the records, searching for a specific record - they're all just quicker in my MongoDB version.

I'm sure there will be people thinking I've got my RDBMS tables set up badly or whatever, and while there may be optimisations I could make, I've done nothing to either Derby or Mongo - this is just out of the box.

Interesting stuff - download MongoDB from www.mongodb.org and have a play.

Thursday, 17 April 2014

My first bit of Clojure... Hello Pi.

In trying to learn Clojure I have been knocking up various silly little programs to understand the language. For a Java head there were a few things that were quite a paradigm shift. I'm an old C hack so the move (back) to functional wasn't so hard, but never having done Lisp I struggled with a few things at first.

Prefix notation

Put simply, the operator comes first and the operands afterwards. This looks weird at first (compare (2 + 3) with (+ 2 3), for instance), but actually, in Clojure every "form" consists of an operator and its operands in brackets: (str "abc" "def"), (conj set-a set-b), or (+ 2 3) - make sense now? It does make long mathematical expressions odd to look at, as you can see in the code fragment which calculates Pi below.

Looping

Clojure does have loops, but idiomatically good Clojure doesn't really use them in the way you might think. My little Pi generator below makes use of loop and recur, which is quite standard, but it's not really a for loop. In fact, in Clojure there isn't really a for loop at all - only a for-each style, for processing sets or sequences.

Lazy Sequences

Not something I've used outside of database or file access in Java, this is a way of creating a sequence of potentially infinite length. You define how the sequence is generated, but nothing is actually produced until you take elements from it, with the (take) function, or in some other way force evaluation of members of the sequence. Very neat indeed!
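
For Java heads, the closest analogue I can think of is a Java 8 stream - not Clojure, obviously, but this rough sketch shows the same laziness:

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LazyDemo {
    public static void main(String[] args) {
        // Conceptually infinite - nothing is generated yet
        Stream<Integer> evens = Stream.iterate(0, n -> n + 2);

        // Only when we "take" from it are any elements actually produced,
        // much like Clojure's (take 5 evens)
        List<Integer> firstFive = evens.limit(5).collect(Collectors.toList());
        System.out.println(firstFive); // [0, 2, 4, 6, 8]
    }
}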

The Syntax

It does make sense when you spend a while working with it, but I'm still a beginner, and sometimes find myself writing something like:
(printf myVar " is a bit like " + myOtherVar)
Of course, I get some odd output, as the + is evaluated and its .toString representation printed. + in Clojure (like the other operators) is actually a function, so you get an odd-looking function reference - much like when you .toString an object in Java.

You do get used to the brackets, and code generally breaks down into much smaller fragments, which is no bad thing, but you do end up with a more fragmented programming style. It really is like going back to my first language in some ways - Logo! Remember that? Control a turtle to draw stuff on paper, and write simple functions which you then nest up into more complicated functions. Super fun!

My little Pi generator

I’m sure there is a ton of stuff wrong with this as I’m really just learning the language, so I welcome comment and guidance, but here it is anyway.
(defn calc-pi-method1
  ([] (calc-pi-method1 100))
  ([max-recurs]
   (loop [r 1 pi 3.0]
     (if (> r max-recurs)
       pi
       ;; each term of the series is 4 / (2r * (2r+1) * (2r+2)),
       ;; added for odd r and subtracted for even r
       (let [div   (/ 4 (* (* r 2) (+ (* r 2.0) 1) (+ (* r 2.0) 2)))
             newpi (if (even? r) (- pi div) (+ pi div))]
         ;(println "r=" r " pi=" pi)
         (recur (inc r) newpi))))))
; main
(println (calc-pi-method1) " compared to a Math.PI value of: " Math/PI)
Note that the larger the max-recurs you pass in, the closer to Pi you get, but because the series converges fairly slowly you never quite reach Math.PI. It's only meant to help me learn the language, though, not to be the definitive Pi generator. :-)

Laters.

Sunday, 6 April 2014

SFX for How To Succeed

As well as playing the piano pad I’ve also programmed and am triggering the sound effects. This is usually a good idea where SFX appear at very specific points in the music. Now, I take an almost anal pride in this and have carefully researched the appropriate sounds, often recording or creating them myself.

So, why this post?

My telephone ringing sound effect - not the most complicated thing ever - is a perfect 1950s bell-phone ring.

But this is an American show. The ringing cadence is completely different. Also, the props are Trimphones, not bell phones. Gah! So that's me redoing them tomorrow. Any job worth doing… No one would notice, but I would know!

Thursday, 3 April 2014

Big Data Analytics and Security

At the #RSASummit earlier this week there were three keynotes which, along with my current thinking around microservices for retail consumer data and right-action-right-time analytics, presented an interesting orthogonal use case I'd not thought of.

The idea of Big Data analytics in general is to make use of a sea of data (or a lake, if you prefer) to perform predictive analytics and generate insight, using clever heuristic or other algorithms to tell you things you couldn't have found out with a million monkeys and infinite time. Because of my current work with retail, this has for me meant interacting with customers in the right way, at the right time, on the right channel.

However, the RSA boys have started preaching an alternative to the "prepare to repel boarders" perimeter-focused security model I'm certainly used to. I'm no security expert, so this may not be news to people in that space, but I found the alternative view, and where it leads, really interesting.

Essentially the security community is moving away from perimeter-defence-focused security - prevention, in other words - towards analytics - detection. This is taking some time to work its way through the boardrooms and across the golf courses of senior executives, but it's a really important step.

Let us assume, for a start, that security breaches WILL happen. It's inevitable. What traditionally happens then is a massive effort to work out what happened, how, and how to stop it happening again - if the breach was even detected in the first place, and this is an important point. It is in most hackers' interests not to let you know they were there at all, so you leave the door open.

Introduce an analytical model instead - one which anticipates that breaches will happen, but automatically flags them up when they do, based on unusual activity. This means the actual security event can be shut down far quicker than normal, and reports generated to help close those back doors.
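
To make that concrete, here's a toy sketch of the idea - entirely my own illustration, nothing presented at the summit: keep a rolling baseline of some per-user metric and flag anything that strays well outside it.

import java.util.ArrayDeque;
import java.util.Deque;

public class AnomalyFlagger {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;
    private final double threshold; // standard deviations that count as "unusual"

    public AnomalyFlagger(int windowSize, double threshold) {
        this.windowSize = windowSize;
        this.threshold = threshold;
    }

    /** Returns true if this observation looks unusual against the rolling baseline. */
    public boolean observe(double value) {
        boolean unusual = false;
        if (window.size() == windowSize) {
            double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(0);
            double variance = window.stream()
                    .mapToDouble(v -> (v - mean) * (v - mean)).average().orElse(0);
            double stdDev = Math.sqrt(variance);
            unusual = stdDev > 0 && Math.abs(value - mean) > threshold * stdDev;
            window.removeFirst(); // slide the window along
        }
        window.addLast(value);
        return unusual;
    }

    public static void main(String[] args) {
        AnomalyFlagger flagger = new AnomalyFlagger(24, 3.0);
        // Hourly upload volume in MB for one user - the last entry is a sudden bulk transfer
        double[] hourlyUploadMb = {10, 12, 9, 11, 10, 13, 9, 10, 11, 12, 10, 9,
                                   11, 10, 12, 9, 10, 11, 13, 10, 9, 12, 11, 10,
                                   500};
        for (double mb : hourlyUploadMb) {
            if (flagger.observe(mb)) {
                System.out.println("ALERT: unusual upload volume: " + mb + "MB");
            }
        }
    }
}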

Combine this with the quite inspiring take on employee freedom and you have a quite different model to the one I’ve seen in any large company I’ve been in.

In order for an analytical model to work you need the data. Currently the heavy-handed perimeter blocking employed by most IT departments means employees find workarounds - Dropbox, SSH tunnelling, external VPNs - whatever they need to get their job done. The problem is that none of these can be monitored by the organisation. Employees just need the tools to do their job, so introduce SSO on the web proxy, generally allow employees to use what they want, and now you can analyse usage on your network much more effectively.

This may mean you let employees use Facebook, but I put it to you that if you stop them doing it on your machine they'll do it on their own machine, or on their own 4G dongle, and then you have no control or, worse, no view of that behaviour - so if someone DOES use malware to take over a corporate PC there's a lot less chance of you seeing it.