Man vs Horse

Today I did something a little different: instead of going to the local park to run 5km at Parkrun, I set off to the New Forest to take part in a little event that Helen Bowyer had put together. The plan was to run 15km across the New Forest, broken up into three 5km legs between pubs, in a race against Helen on her horse Muttley. There were seven runners from the ETS team (me, James, Luke, Graham, Joe, Dominic and Peter), plus Rob on his road bike.

The runners and Helen would be taking the same route, with the runners getting a 10-minute head start on each leg. Rob had a route that was about twice as long and set off at the same time as Helen.

Leg 1

From The Rock at Canada Common to The Lamb at Nomansland. We set off across Canada Common from the back gate of The Rock, and reasonably early on James, Luke and I hit the front. Luckily Luke had run the route for this leg before, so we didn’t have to do any looking at the map and made our way to Dealze Wood easily enough. We could probably have cut the corner a bit at the end and shaved some more time off. We had about a 5-minute lead on Rob and another 3 minutes over Helen (though she did have to detour a little to point Peter in the right direction).


View Leg 1 in a larger map

Leg 2

From The Lamb to The Royal Oak at Fritham. This was the shortest leg, but after a short climb through Bramshaw Wood it was across the open plain. The clear view meant that Helen could see us, and that, helped by the soft ground, meant Muttley could go faster; he caught us up with about 800m to go. Rob arrived at pretty much the same time as well.


View Leg 2 in a larger map

Leg 3

From The Royal Oak to The High Corner Inn. James, Luke and I hit the front again setting off, but the fact that I’ve not been doing much more than 5Ks recently really started to bite. I managed to stick with them both for the first half, until we crossed the stream, then we started to spread out. The spread ended up big enough that I lost sight of Luke, and James was long gone. I had a small nav failure at the bottom of the last climb up to the High Corner Inn (I think it was just my subconscious not wanting to climb), as I overshot by about 400m and had to turn round and come back. Rob was first back this time.


View Leg 3 in a larger map

After we’d finished, we all went back to The Royal Oak for some lunch. It was a really good day out; I’m really up for having another go next year, and maybe even having a look at the full marathon version at some point.

Even More MQTT enabled TVs

Kevin Modelling the headset

A project at work came up recently to do with using one of the Emotiv headsets to help out a former Italian IBMer who was suffering from locked-in syndrome. The project was being led by Kevin Brown, who was looking for ways to use the headset to drive things like sending email and browsing the web, but he was also looking for a way to interact with some other tech around the house. The TV was the first on the list.

Continuing on from my previous work with controlling TVs and video walls with MQTT, I said I would have a crack at this. My earlier solution was limited to LG TVs with serial ports, and this needed to work with any make, so a different approach was needed. It also needed to run on Windows, so it was also a chance to play with C# and .Net.

To be TV-agnostic, it was decided to use a USB IR remote from a company called RedRat. They make a number of solutions, but their RedRat III was perfect for what was needed.

RedRat IR transmitter & receiver

The RedRat API comes with bindings for C++ and .Net on Windows (and a LIRC plugin for Linux). The RedRat III is not just an IR transmitter, it is also a receiver, which means it can “learn” from existing remote controls, so it can be used with any TV.

Kevin is using the Emotiv headset to drive Dasher as the input device. Dasher is a sort of keyboard replacement that allows the user to build up words using, at a minimum, a single input, e.g. a single push button. As well as building words, other actions can be added to the selector; Kevin added actions for browsing and also to control the TV. These actions publish MQTT messages to a topic with payloads like “volUp”, “chanDown” or “power”.
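
One nice side effect of using MQTT here is that anything able to publish to the topic can drive the TV, which makes testing easy without the headset or Dasher. A minimal sketch in Python using the paho-mqtt package (the broker address and topic name here are placeholders, not the project's real settings):

# minimal sketch: publish the same command payloads Dasher sends
# (assumes the paho-mqtt package; broker/topic names are placeholders)
import paho.mqtt.client as mqtt

client = mqtt.Client("test-sender")
client.connect("broker.example.com", 1883)

# each payload names a trained IR command
for payload in ["volUp", "chanDown", "power"]:
    client.publish("tv/commands", payload)

client.disconnect()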

So now we had the inputs, it was time to get round to writing the code to turn those messages into IR signals. There are two MQTT .NET libraries listed on the MQTT.org software page. I grabbed the first off the list, MqttDotNet, and got things working pretty quickly.

The following few lines set up a connection and subscribe to a topic.

// build the broker URI from the saved settings and create a client
String connectionString = "tcp://" + 
  Properties.Settings.Default.host.Trim() + ":" +
  Properties.Settings.Default.port;
IMqtt client = MqttClientFactory.CreateClient(connectionString, "mqtt2ir");
try
{
	// connect with a clean session, register the callbacks,
	// then subscribe to the configured command topic
	client.Connect(true);
	client.PublishArrived += new PublishArrivedDelegate(onMessage);
	client.ConnectionLost += new ConnectionDelegate(connectionLost);
	client.Subscribe(Properties.Settings.Default.topic.Trim(), 
	  QoS.BestEfforts);
}
catch (Exception e)
{
	// report the failure rather than silently carrying on
	Console.WriteLine("MQTT connection failed: " + e.Message);
}

Where the onMessage callback looks like this:

bool onMessage(object sender, PublishArrivedArgs msg)
{
	// the payload is the name of a trained command, e.g. "volUp"
	String command = msg.Payload.ToString().Trim();
	// look up the IR signal recorded for that command
	IRPacket packet = loadSignal(command);
	if (packet != null) 
	{
		// blast the signal out through the RedRat III
		redrat.OutputModulatedSignal(packet);
	}
	return true;
}

MQTT2IR Settings Window

And that is pretty much the meat of the whole application; the rest was just some code to initialise the RedRat and to turn it all into a System Tray application, with a window for entering the broker details and training the commands.

Unfortunately, just a few days before we were due to deliver this project, we learned that the intended recipient had picked up a respiratory infection and had passed away. I would like to extend my thoughts to their family, and I hope we can find somebody else who may find this work useful in the future.

Natural Language Processing Course


Over the first few months of this year I have been taking part in a mass online learning course in Natural Language Processing (NLP) run by Stanford University. They publicised a group of eight courses at the end of last year, and I didn't hesitate to sign up to the Natural Language Processing course, knowing it would fit very well with things I'm working on in my professional role, where I'm doing more and more with text analytics and continuing my work in speech-to-text. There were others I could easily have signed up for too, things like security or machine learning; more or less all of them are relevant to something I'm doing. However, given the time commitment required, I decided to fully commit to one course, and the NLP one was to be it.

I passed the course with a grade of 85%, well above the required 70% pass mark. However, the effort and time required to get there was way more than I was expecting, and quite a lot more than the lecturers (Chris Manning and Dan Jurafsky) had said it would take. From memory, it was an 8-week course with 10 hours a week of required effort to complete the work. As it went on, the amount of time required went up significantly, so rather than the 80 hours total, I think I spent more like 1½ times that, at over 120 hours!

There were four of us at work (that I know of) who embarked on the course but, due to the time commitment I've mentioned above, only myself and Dale finished. By the way, Dale has written an excellent post on the structure and content of the course, so I'd suggest reading his blog for more details on that stuff; there's little point in me re-posting it as he's written such a good summary.

In terms of the participants, the course seems to have been quite a success for Stanford University; this is the first time they have run courses in this way, it seems. The lecturers gave us some statistics at a couple of strategic points throughout the course: there were around 40,000 people registering an interest, of which around 5,000 were watching the lecture material, and around 2,000 completed the course, having taken part in the homework assignments.

I'm glad I committed as much as I did. If I were one of the 5,000 just watching the lectures and not doing the homework material, I don't think I would have got as much out of it; but the added time required to complete the homework was significant, so perhaps there's a trade-off here? It's certainly the first time I've committed this much of my own personal time (it took over the lives of myself and Dale for quite a few weeks), as I was too busy at work to spend many business hours on the course, so it was all done in evenings and weekends. That's certainly one piece of feedback I gave at the end of the course: Stanford could make the course timing more flexible, but also allow more time for the course to be completed.

My experience of the way the assignments were marked was a little different to the way Dale has described in his post. I was already very familiar with the concepts of test, development and held-out sets (three different sets of data used when training NLP systems), so I wasn't surprised to see that the modules in the course didn't necessarily have an exact answer to them; or, more precisely, that the code you wrote to perfectly analyse some data on your local system might not get full marks, because it was marked against a different data set. This may seem unfair, but it is common practice in all NLP system training that I know of.
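
As a rough illustration of the principle (a minimal Python sketch, not anything from the course's actual grading system), a system tuned to do perfectly on your local development set can still lose marks on a held-out set it has never seen:

# minimal sketch of dev vs held-out scoring (not the course's code);
# each dataset is a list of (text, expected_label) pairs
def accuracy(system, dataset):
    correct = sum(1 for text, expected in dataset if system(text) == expected)
    return 100.0 * correct / len(dataset)

def grade(system, dev_set, heldout_set):
    print "dev score:      %.1f%%" % accuracy(system, dev_set)
    # the mark you get is based on data you never saw
    print "held-out score: %.1f%%" % accuracy(system, heldout_set)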

All in all, an excellent course that I'm glad I did. From what I hear of the other courses, they're not as deeply involved as the NLP course, so I may well give another one a go in the future; but for now I need to get a little of my life back and have a well-earned rest from education.

Mad thermostat plan

Something I’ve really wanted to have a go at for a long time is hacking together a smarter heating system. The long process of moving house prevented any progress until now, but I think a few things fell into place today to get the project off the ground. And so a slightly mad thermostat plan was hatched…

The first part of the puzzle is a side effect of getting a solar water panel: to make the most of the solar panel, we should only be using the boiler to top up the hot water at the end of the day. (Obviously that’s just theoretical at the moment, because it’s pretty much been raining non-stop since we got the solar panel!) Unfortunately, the current central heating controller will only turn on the heating if the hot water is on at the same time, which is no help at all, so we really need a new controller to make the most of our zero-carbon supply of hot water. There’s another, purely aesthetic, reason to want a new heating controller: the kitchen upgrade got under way this week, and the old controller has seen better days.

The current kitchen destruction has a bigger part to play, though: now is an ideal opportunity to hide cables behind the new cupboards. For a while that didn’t actually seem like it would be all that much help, given where the old thermostat was (hidden behind a door in the living room). I was looking at various programmable thermostats, but the existing wiring from the thermostat restricted the options somewhat. The programmable thermostat we had in the old house seemed to work quite well with the existing wiring and controller… as long as the battery was fresh; otherwise it got confused about the temperature. Obviously not ideal for a thermostat, so I was hoping to avoid batteries this time!

Then, while being distracted by the wonky light switches yet again, inspiration struck…

The house hasn’t been constructed with the greatest care in the world, but those switches just could not have been original. The only thing that makes sense is that they were another botched DIY job, and it seemed highly unlikely that anyone would have dropped another cable run down the wall to do it. My hunch, based on the fact that there’s a water cylinder directly above those switches, was that there’s a horizontal cable run between the two. I checked, and… eureka! So now it’s a simple job to put both switches back on the same box, leaving an empty recessed box with a now-bare kitchen wall behind it; perfect for running a new thermostat cable through the back of the box and round to the boiler! (Well, I was pretty excited by this plan at the time.)

The thermostat to finish off this puzzle is a Heatmiser combined programmable thermostat and hot water timer. My theory is that I need the PRT/HW-N thermostat to go in the living room and a PRC powered relay card in place of the old central heating controller. I’m almost certain that the wiring will work with the existing system anyway but, if anyone has any experience/tips/gotchas, please let me know! That programmable thermostat should give me an RS485 interface which, if all goes well, won’t be too difficult to connect to my nanode: either with a bit of soldering, or with one of these IO shields if I’m feeling lazy! The thing I like about this arrangement is that it should be possible to achieve plenty of automation if all goes well but, if there are any technical hitches, there’s a decent off-the-shelf controller to fall back on.

Update: a quick update, since I’m doing some head-scratching over whether the existing wiring from the central heating timer to the junction box in the airing cupboard will allow the heating to run independently of the hot water. If it does, the new thermostat is in place ready to go…

If it doesn’t, the new thermostat will just be a decorative feature while I figure out where I can sneak a new cable upstairs without disturbing the new kitchen! I don’t want to break the heating until I’m sure everything will work, so I’m working off a photo for now…

I’d love to hear from anyone who can decipher that lovely nest of wires! Here’s my theory so far:

The black cable is the valve, and the other two cables that enter with it at the bottom are the pump and the cylinder stat. It looks to me like the grey cable should be the one that turns the hot water off, and it seems to be connected to the cylinder stat and to a red wire from one of the cables above, which I’m hoping is from the timer. That just seems too easy for this house, though, and I’m a bit puzzled by what the connections on the orange wire actually are. Luckily it’s all neatly connected and labelled, so I can just check whether the orange wire is connected to the cylinder stat and pump… bother. I guess I’m going to have to wait until Jo’s not looking so I can investigate more thoroughly!


Smile!

This is my mood (as identified from my facial expressions) over time while watching Never Mind the Buzzcocks.

The green areas are times where I looked happy.

This shows my mood while playing XBox Live. Badly.

The red areas are times where I looked cross.

I smile more while watching comedies than when getting shot in the head. Shocker, eh?

A couple of years ago, I played with the idea of capturing my TV viewing habits and making some visualisations from them. This is a sort of return to that idea in a way.

A webcam lives on top of our TV, mainly for Skype calls. I was thinking that, when watching TV, we’re often more or less looking at the webcam. What could it capture?

What about keeping track of how much I smile while watching a comedy, as a way of measuring which comedies I find funnier?

This suggests that, overall, I might’ve found Mock the Week funnier. But this shows my facial expressions while watching Mock the Week.

It seems that, unlike with Buzzcocks, I really enjoyed the beginning, then perhaps got a bit less enthusiastic as it went on.

What about The Daily Show with Jon Stewart?

I think the two neutral bits are breaks for adverts.

Or classifying facial expressions by mood and looking for the dominant mood while watching something more serious on TV?

This shows my facial expressions while catching a bit of Newsnight.

On the whole, my expression remained reasonably neutral whilst watching the news, but you can see where I visibly reacted to a few of the news items.

Or looking to see how I react to playing different games on the XBox?

This shows my facial expressions while playing Modern Warfare 3 last night.

Mostly “sad”, as I kept getting shot in the head. With occasional moments where something made me smile or laugh, presumably when something went well.

Compare that with what I looked like while playing Blur (a car racing game).

It seems that I looked a little more aggressive while driving than running around getting shot. For last night, at any rate.

Not just about watching TV

I’m using face recognition to tell my expressions apart from those of other people in the room. This means there is also a bunch of stuff I could look into around how my expressions change based on who else is in the room, and their expressions.

For example, looking at how much of the time I spend smiling when I’m the only one in the room, compared with when one or both of my kids are in the room.

To be fair, this isn’t a scientific comparison. There are lots of factors here – for example, when the girls are in the room, I’ll probably be doing a different activity (such as playing a game with them or reading a story) to what I would be doing when by myself (typically doing some work on my laptop, or reading). This could be showing how much I smile based on which activity I’m doing. But I thought it was a cute result, anyway.

Limitations

This isn’t sophisticated stuff.

The webcam is an old, cheap one that only has a maximum resolution of 640×480, and I’m sat at the other end of the room to it. I can’t capture fine facial detail here.

I’m not doing anything complicated with video feeds. I’m just sampling by taking photos at regular intervals. You could reasonably argue that the funniest joke in the world isn’t going to get me to sustain a broad smile for over a minute, so there is a lot being missed here.

And my y-axis is a little suspect. I’m using the percentage confidence that the classifier had in identifying the mood, on the assumption that the more confident the classifier was, the stronger or more pronounced my facial expression probably was.

Regardless of all of this, I think the idea is kind of interesting.

How does it work?

The media server under the TV runs Ubuntu, so I had a lot of options. My language-of-choice for quick hacks is Python, so I used pygame to capture stills from the webcam.

For the complicated facial stuff, I’m using web services from face.com.

They have a REST API that you upload a photo to, getting back a blob of JSON with information about the faces detected in the photo. This includes a guess at the gender, a description of the mood from the facial expression, whether the face is smiling, and even an estimated age (often not complimentary!).
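
The relevant part of the JSON is shaped roughly like this (a trimmed, hand-written illustration based on the fields my script reads below, with made-up values):

# trimmed illustration of the response shape (values made up)
response = {
    "photos": [{
        "tags": [{
            "uids": [{"uid": "dalelane@dale.lane"}],
            "recognizable": True,
            "attributes": {
                "smiling": {"value": "true",  "confidence": 98},
                "mood":    {"value": "happy", "confidence": 74},
                "age_est": {"value": "35",    "confidence": 53},
                "gender":  {"value": "male",  "confidence": 91}
            }
        }]
    }]
}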

I used a Python client library from github to build the requests, so getting this working took no time at all.

There is also a face recognition REST API: you can train the system to recognise certain faces. I didn’t write any code to do this, as I don’t need to do it again, so I did it using the API sandbox on the face.com website. I gave it a dozen or so photos with my face in, which seemed to be more than enough for the system to be able to tell me apart from someone else in the room.

My monitoring code puts what it measures about me in one log, and what it measures about anyone else in a second “guest log”.

This is the result of one evening’s playing, so I’ve not really finished with this. I think there is more to do with it, but for what it’s worth, this is what I’ve come up with so far.

The script

####################################################
#  IMPORTS
####################################################

# imports for capturing a frame from the webcam
import pygame.camera
import pygame.image

# import for detecting faces in the photo
import face_client

# import for storing data
from pysqlite2 import dbapi2 as sqlite

# miscellaneous imports
from time import strftime, localtime, sleep
import os
import sys

####################################################
# CONSTANTS
####################################################

DB_FILE_PATH="/home/dale/dev/audiencemonitor/data/log.db"
FACE_COM_APIKEY="MY_API_KEY_HERE"
FACE_COM_APISECRET="MY_API_SECRET_HERE"
DALELANE_FACETAG="dalelane@dale.lane"
POLL_FREQUENCY_SECONDS=3

class AudienceMonitor():

    #
    # prepare the database where we store the results
    #
    def initialiseDB(self):
        self.connection = sqlite.connect(DB_FILE_PATH, detect_types=sqlite.PARSE_DECLTYPES|sqlite.PARSE_COLNAMES)
        cursor = self.connection.cursor()

        cursor.execute('SELECT name FROM sqlite_master WHERE type="table" AND NAME="facelog" ORDER BY name')
        if not cursor.fetchone():
            cursor.execute('CREATE TABLE facelog(ts timestamp unique default current_timestamp, isSmiling boolean, smilingConfidence int, mood text, moodConfidence int)')

        cursor.execute('SELECT name FROM sqlite_master WHERE type="table" AND NAME="guestlog" ORDER BY name')
        if not cursor.fetchone():
            cursor.execute('CREATE TABLE guestlog(ts timestamp unique default current_timestamp, isSmiling boolean, smilingConfidence int, mood text, moodConfidence int, agemin int, ageminConfidence int, agemax int, agemaxConfidence int, ageest int, ageestConfidence int, gender text, genderConfidence int)')

        self.connection.commit()

    #
    # initialise the camera
    #
    def prepareCamera(self):
        # prepare the webcam
        pygame.camera.init()
        self.camera = pygame.camera.Camera(pygame.camera.list_cameras()[0], (900, 675))
        self.camera.start()

    #
    # take a single frame and store in the path provided
    #
    def captureFrame(self, filepath):
        # save the picture
        image = self.camera.get_image()
        pygame.image.save(image, filepath)

    #
    # gets a string representing the current time to the nearest second
    #
    def getTimestampString(self):
        return strftime("%Y%m%d%H%M%S", localtime())

    #
    # get attribute from face detection response
    #
    def getFaceDetectionAttributeValue(self, face, attribute):
        value = None
        if attribute in face['attributes']:
            value = face['attributes'][attribute]['value']
        return value

    #
    # get confidence from face detection response
    #
    def getFaceDetectionAttributeConfidence(self, face, attribute):
        confidence = None
        if attribute in face['attributes']:
            confidence = face['attributes'][attribute]['confidence']
        return confidence

    #
    # detects faces in the photo at the specified path, and returns info
    #
    def faceDetection(self, photopath):
        client = face_client.FaceClient(FACE_COM_APIKEY, FACE_COM_APISECRET)
        response = client.faces_recognize(DALELANE_FACETAG, file_name=photopath)
        faces = response['photos'][0]['tags']
        for face in faces:
            userid = ""
            faceuseridinfo = face['uids']
            if len(faceuseridinfo) > 0:
                userid = faceuseridinfo[0]['uid']
            if userid == DALELANE_FACETAG:
                smiling = self.getFaceDetectionAttributeValue(face, "smiling")
                smilingConfidence = self.getFaceDetectionAttributeConfidence(face, "smiling")
                mood = self.getFaceDetectionAttributeValue(face, "mood")
                moodConfidence = self.getFaceDetectionAttributeConfidence(face, "mood")
                self.storeResults(smiling, smilingConfidence, mood, moodConfidence)
            else:
                smiling = self.getFaceDetectionAttributeValue(face, "smiling")
                smilingConfidence = self.getFaceDetectionAttributeConfidence(face, "smiling")
                mood = self.getFaceDetectionAttributeValue(face, "mood")
                moodConfidence = self.getFaceDetectionAttributeConfidence(face, "mood")
                agemin = self.getFaceDetectionAttributeValue(face, "age_min")
                ageminConfidence = self.getFaceDetectionAttributeConfidence(face, "age_min")
                agemax = self.getFaceDetectionAttributeValue(face, "age_max")
                agemaxConfidence = self.getFaceDetectionAttributeConfidence(face, "age_max")
                ageest = self.getFaceDetectionAttributeValue(face, "age_est")
                ageestConfidence = self.getFaceDetectionAttributeConfidence(face, "age_est")
                gender = self.getFaceDetectionAttributeValue(face, "gender")
                genderConfidence = self.getFaceDetectionAttributeConfidence(face, "gender")
                # if the face wasn't recognisable, it might've been me after all, so ignore it
                if "tid" in face and face['recognizable'] == True:
                    self.storeGuestResults(smiling, smilingConfidence, mood, moodConfidence, agemin, ageminConfidence, agemax, agemaxConfidence, ageest, ageestConfidence, gender, genderConfidence)
                    print face['tid']

    #
    # stores face results in the DB
    #
    def storeGuestResults(self, smiling, smilingConfidence, mood, moodConfidence, agemin, ageminConfidence, agemax, agemaxConfidence, ageest, ageestConfidence, gender, genderConfidence):
        cursor = self.connection.cursor()
        cursor.execute('INSERT INTO guestlog(isSmiling, smilingConfidence, mood, moodConfidence, agemin, ageminConfidence, agemax, agemaxConfidence, ageest, ageestConfidence, gender, genderConfidence) values(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)',
                        (smiling, smilingConfidence, mood, moodConfidence, agemin, ageminConfidence, agemax, agemaxConfidence, ageest, ageestConfidence, gender, genderConfidence))
        self.connection.commit()

    #
    # stores face results in the DB
    #
    def storeResults(self, smiling, smilingConfidence, mood, moodConfidence):
        cursor = self.connection.cursor()
        cursor.execute('INSERT INTO facelog(isSmiling, smilingConfidence, mood, moodConfidence) values(?, ?, ?, ?)',
                        (smiling, smilingConfidence, mood, moodConfidence))
        self.connection.commit()

monitor = AudienceMonitor()
monitor.initialiseDB()
monitor.prepareCamera()
while True:
    photopath = "data/photo" + monitor.getTimestampString() + ".bmp"
    monitor.captureFrame(photopath)
    try:
        faceresults = monitor.faceDetection(photopath)
    except:
        print "Unexpected error:", sys.exc_info()[0]
    os.remove(photopath)
    sleep(POLL_FREQUENCY_SECONDS)
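
For completeness, here is a rough sketch of the sort of thing used to draw the mood charts above from the facelog table (this wasn't part of the monitoring script, and assumes matplotlib; the mood-to-colour mapping is just my choice for the charts):

# rough sketch: plot logged mood confidence over time from the facelog
# table (assumes matplotlib; mood-to-colour mapping is arbitrary)
import matplotlib.pyplot as plt
from pysqlite2 import dbapi2 as sqlite

DB_FILE_PATH = "/home/dale/dev/audiencemonitor/data/log.db"
MOOD_COLOURS = { "happy" : "green", "angry" : "red", "sad" : "blue",
                 "surprised" : "orange", "neutral" : "grey" }

connection = sqlite.connect(DB_FILE_PATH)
cursor = connection.cursor()
cursor.execute('SELECT ts, mood, moodConfidence FROM facelog ORDER BY ts')

# one bar per sample: height is the classifier's confidence in the
# mood it identified, colour shows which mood that was
for idx, (ts, mood, confidence) in enumerate(cursor.fetchall()):
    plt.bar(idx, confidence, color=MOOD_COLOURS.get(mood, "black"))

plt.xlabel("sample")
plt.ylabel("mood confidence (%)")
plt.show()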

Why Doctor Who Confidential mattered

Behind-the-scenes documentaries, like Doctor Who Confidential, matter. They matter because they show viewers, in particular children still deciding what to do with their lives, that it takes more to produce a high-class TV programme than just a few actors who become famous. They show what other creative and technical jobs there are in television.

A couple of weekends ago, we went to the Doctor Who Official Convention (#dwcuk) in Cardiff. While one of the three main panels featured the three stars, Matt Smith, Karen Gillan and Arthur Darvill (along with executive producers Steven Moffat and Caroline Skinner), most of the other scheduled events focused on how Doctor Who is made.

Danny Hargreaves making it snow

At the very start of the day, we went to see Danny Hargreaves blow things up (well, talk about the special effects on Doctor Who). In his Q&A session (after making it snow indoors), the first question asked was “How did you get into special effects work?” and, between questions like how he blew up the Torchwood Hub and how he makes the Doctor’s hands and head fiery during a regeneration, a later question was “When did you realise you wanted to work in special effects?”. Attendees were interested not just in the fictional stories and characters but in how the programme is made, and in interesting careers they might not otherwise have come across.


Throughout the day, I heard audience members ask how to become costume and prosthetics designers and how to become script writers. Danny described how his team designs and creates the effects, assesses the risks of blowing things up, and who they work with to make it all happen. He also explained how he came to be a trainee in the nascent world of special effects before studying Mechanical Engineering, so that he could build the devices needed for Doctor Who (and the other shows he’s worked on, like Coronation Street). Directors of photography, set designers, executive producers, writers, and directors went on to talk about what their own jobs entail day-to-day and how it all comes together to make an episode of Doctor Who.

These discussions continued the story that used to be told after each new episode of Doctor Who by Doctor Who Confidential on BBC3. Doctor Who Confidential started in 2005 with the return of Doctor Who. As well as giving some interesting perspectives on the making of that night’s episode, it featured interviews with, and ‘day-in-the-life’ documentaries about, the actors (including the less glamorous side of shivering in tents and quilted coats between takes), the casting directors, the producers, the writers, the choreographers, the costume designers, the special effects supervisors, the monster designers, the prosthetics experts, the directors, the assistant directors, and many, many others. It also held competitions for children to write a mini-episode and then see the process of making it, which would’ve been an amazing experience!

Yes, it took a slightly odd turn in the last series when it turned a bit Top Gear by showing Karen Gillan having a driving lesson and Arthur Darvill swimming with sharks; possibly a misguided attempt to increase its popularity before it got canned anyway to cut costs.

I think it’s a real shame to lose Doctor Who Confidential and its insights into the skill, hard work, and opportunities in TV and film production.


Cool photo of Danny in the snow by Tony Whitmore.


Reflecting on our total home energy usage

The graph of our total gas usage per year doesn’t decrease quite so impressively as our electricity graph, which I blogged about halving over five years. Because the numbers were getting ridiculously big and difficult to compare at a glance, I’ve re-created the electricity graph here in terms of our average daily electricity usage instead of our annual usage (click the graph to see a larger version):

Graph of daily electricity usage per year.

 

If you compare it with the average daily gas usage graph below, you can see (just from the scales of the y-axes) that we use much more gas than electricity (except in 2007, which was an anomalous year: we didn’t have a gas fire during the winter, so we used an electric halogen heater instead):

 

Graph of daily gas usage per year.

Our gas usage has come down overall since 2005 (from 11,280 kWh in 2005 to 8,660 kWh in 2011; or from 31 kWh per day to 24 kWh per day on average), but not so dramatically as our electricity usage has. Between 2005 and 2011, we reduced our electricity usage by about a half and our gas usage by about a quarter.

Gas, in our house, is used only for heating rooms and water. So if I were to chart the average outside temperatures for each year, they’d probably track our gas usage reasonably closely. In 2005 (when we used an average of 31 kWh per day), we still had our old back boiler (with a lovely 1970s gas fire attached), which our central heating installer reckoned was about 50% efficient. In 2006 (26 kWh per day), we replaced it with a new condensing boiler (apparently 95% efficient) but didn’t replace the gas fire until mid-2007 (the dodgy year that doesn’t really count). In 2006, we also had the living room (our most heated room) extended, so it had a much better insulated outside wall, door, and window. These changes could explain the pattern of reducing gas usage year by year up till then.

Old boiler being removed

In 2009, January saw sub-zero temperatures, and it snowed in November and December. I think that must be why our usage for the whole year shot back up to 31 kWh per day, despite the new boiler. In 2010 (21 kWh per day), it was again very cold and snowy in January; I think the slight dip in gas usage that year, compared with both 2008 (25 kWh per day) and 2011 (24 kWh per day), was down to a problem with the gas fire, which meant we used the electric halogen heater again during the coldest month. In 2011, it snowed in January but was fairly mild for the rest of the year.

I think 2008, 2010, and 2011 probably represent ‘typical’ years of heating our house with its new boiler and gas fire. Like I concluded about reducing our electricity usage, I think our gas usage went down mostly through better insulation and a more efficient boiler. We did also reduce the default temperature of our heating thermostat to about 17 degrees C (instead of 20 degrees C) a couple of years ago (we increase it when we need to, but it stays low if we don’t), which I think has made some difference, though it’s hard to tell when our heating usage is so closely tied to the outside temperature. Also, we don’t currently have any way of separating out our water heating from our central heating, or our gas fire from the boiler.

Of course, what really matters overall is the total amount of energy we use (that is, the gas and electricity numbers combined). So I’ve made a graph of that too. Now we’re talking numbers like 48 kWh per day in 2005 to 33 kWh per day in 2011.

 

Graph of total daily energy usage per year.

Overall, that means we reduced our total energy usage by about one-third over seven years.


Thanks again to @andysc for helping create the graphs from meter readings taken on irregular dates.
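
In case anyone is wondering what's involved: my understanding of the trick (a sketch of the idea, not Andy's actual code) is to linearly interpolate the cumulative meter readings to the year boundaries, then divide the difference by the number of days:

# sketch of deriving average daily usage from meter readings taken on
# irregular dates (my guess at the approach, not the actual code used)
from datetime import date

def reading_at(readings, when):
    # readings: list of (date, cumulative_kwh) pairs, sorted by date;
    # linearly interpolate between the readings either side of 'when'
    for (d1, r1), (d2, r2) in zip(readings, readings[1:]):
        if d1 <= when <= d2:
            fraction = (when - d1).days / float((d2 - d1).days)
            return r1 + fraction * (r2 - r1)
    raise ValueError("date outside the range of readings")

def daily_average(readings, year):
    start, end = date(year, 1, 1), date(year + 1, 1, 1)
    used = reading_at(readings, end) - reading_at(readings, start)
    return used / (end - start).days

So daily_average(gas_readings, 2011) should come out at around the 24 kWh per day figure above.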


Failing to Invent

We IBM employees are encouraged, indeed incented, to be innovative and to invent.  This is particularly pertinent for people like myself working on the leading edge of the latest technologies.  I work in IBM Emerging Technologies, which is all about taking the latest available technology to our customers.  We do this in a number of different ways, but that's a blog post in itself.  Innovation is often confused with, or used interchangeably with, invention, but they are different: invention, for IBM, means patents, patenting and the patent process.  That is, if I come up with something inventive, I'm very much encouraged to protect that idea using patents, and there are processes and help available to allow me to do that.


This comic strip really sums up what can often happen when you investigate protecting one of your ideas with a patent.  It struck me recently, while out to dinner with friends, that there's nothing wrong with failing to invent, as the cartoon above says Leibniz did.  It's the innovation that's important here; it was just unlucky for Leibniz that he wasn't seen to be inventing.  It can be quite difficult to think of something sufficiently new that it is patent-worthy, and this often happens to me and those I work with when trying to protect our own ideas.

The example I was drawing upon on this occasion was an idea I was discussing at work with some colleagues about a certain usage of your mobile phone [I'm being intentionally vague here].  After thinking it all through, we came to the realisation that, while the idea was good and the solution innovative, all the technology involved was already known, available, and assembled in the way we were proposing, just used somewhere completely different.

So, failing to invent is no bad thing.  We tried and, on this particular occasion, decided we could innovate but not invent.  Next time things could be the other way around but, by these definitions, we shouldn't be afraid to innovate at the price of invention anyway.