A Conversational Internet of Things – ThingMonk talk

Earlier this year, Tom Coates wrote a blog post about his session at this year’s O’Reilly Foo Camp. Over tea with colleagues, we talked about some of the ideas from the post and how some of our research work might be interesting when applied to them.

One thing led to another and I found myself talking about it at ThingMonk this year. What follows is a slightly expanded version of my talk.



Humanising Things

We have a tradition of putting human faces on things, whether it’s literally seeing faces on the Things in our everyday lives, such as the drunk octopus spoiling for a fight, or possibly the most scary drain pipe ever.

Equally, we have a tendency to put a human persona onto things. The advent of Twitter brought an onslaught of Things coming online. It seemingly isn’t possible for me to do a talk without at least a fleeting mention of Andy Stanford-Clark’s twittering ferries, which provide regular updates on where each ferry is.


One of the earliest Things on Twitter was Tower Bridge. Tom Armitage, who was working near to the bridge at the time, wrote some code that grabbed the schedule for the bridge opening and closing times, and created the account to relay that information.
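
Tom has written about the account elsewhere; purely to illustrate the general shape of such a bot, here is a minimal sketch. The fetch_lift_times() and post_update() functions are hypothetical placeholders of my own, not Tom’s actual code.

    # A sketch of the general shape of a bridge-lift bot.
    # fetch_lift_times() and post_update() are hypothetical stand-ins.
    import time
    from datetime import datetime

    def fetch_lift_times():
        # Hypothetical: would parse the published bridge-lift schedule.
        return [(datetime(2014, 12, 3, 10, 30), "the MV Dixie Queen")]

    def post_update(text):
        # Hypothetical stand-in for a call to the Twitter API.
        print(text)

    for when, vessel in fetch_lift_times():
        while datetime.now() < when:
            time.sleep(30)
        post_update("I am opening for %s, which is passing upstream." % vessel)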


One key difference between the ferries and the bridge is that the ferries just relay information, a timestamp and a position, whereas the bridge speaks to us in the first person. This small difference immediately begins to bring a more human side to the account.
But ultimately, they are simple accounts that relay their state to whoever is following them.

This sort of thing seems to have caught on particularly with the various space agencies. We no longer appear able to send a robot to Mars, or land a probe on a comet, without an accompanying Twitter account bringing character to the events.


There’s always a sense of excitement when these inanimate objects start to have a conversation with one another. The conversations between the Philae lander and its orbiter were particularly touching as they waved goodbye to one another. Imagine: the lander, which was launched into space years before Twitter existed, chose to use its last few milliamps of power to send a final goodbye.


But of course as soon as you peek behind the curtain, you see someone running TweetDeck, logged in and typing away. I was watching the live stream as the ESA team nervously waited to hear from Philae. And I noticed the guy in the foreground, not focused on the instrumentation as his colleagues were, but rather concentrating on his phone. Was he the man behind the curtain, preparing Philae’s first tweet from the surface? Probably not, but for the purposes of this talk, let’s pretend he was.


The idea of giving Things a human personality isn’t new. There is a wealth of rigorous scientific research in this area.

One esteemed academic, Douglas Adams, tells us about the work done by the Sirius Cybernetics Corporation, which invented a concept called Genuine People Personalities (“GPP”) to imbue its products with intelligence and emotion.

He writes:

Thus not only do doors open and close, but they thank their users for using them, or sigh with the satisfaction of a job well done. Other examples of Sirius Cybernetics Corporation’s record with sentient technology include an armada of neurotic elevators, hyperactive ships’ computers and perhaps most famously of all, Marvin the Paranoid Android. Marvin is a prototype for the GPP feature, and his depression and “terrible pain in all the diodes down his left side” are due to unresolved flaws in his programming.

In a related field, we have the Talkie Toaster created by Crapola, Inc and seen aboard Red Dwarf. The novelty kitchen appliance was, on top of being defective, only designed to provide light conversation at breakfast time, and as such it was totally single-minded and tried to steer every conversation to the subject of toast.


Seam[less|ful]ness

In this era of the Internet of Things, we talk about a future where our homes and workplaces are full of connected devices, sharing their data, making decisions, collaborating to make our lives ‘better’.

Whilst there are people who celebrate this invisible ubiquity and utility of computing, the reality is going to be much more messy.

Mark Weiser, Chief Scientist at Xerox PARC, coined the term “ubiquitous computing” in 1988.

Ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by lots of people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives.

Discussion of ubiquitous computing has often celebrated the idea of seamless experiences between the various devices occupying our lives. But in reality, Mark Weiser advocated the opposite: that seamlessness was an undesirable and self-defeating attribute of such a system.

He preferred a vision of “Seamfulness, with beautiful seams”.


The desire to present a single view of the system, with no joins, is an unrealistic aspiration in the face of the cold realities of wifi connectivity, battery life, system reliability and whether the Cloud is currently turned on.

Presenting a user with a completely monolithic system gives them no opportunity to connect with and begin to understand the constituent parts. That is not to say this information is needed by all users all of the time. But there is clearly utility for some users some of the time.

When you come home from work and the house is cold, what went wrong? Did the thermostat in the living room break and decide it was the right temperature already? Did the message from the working thermostat fail to get to the boiler? Is the boiler broken? Did you forget to cancel the entry in your calendar saying you’d be late home that day?

Without some appreciation of the moving parts in a system, how can a user feel any ownership or empowerment when something goes wrong with it? Or worse yet, how can they feel anything other than intimidated by a monolithic system that simply says, “I’m sorry Dave, I’m afraid I can’t do that”?

Tom Armitage wrote up his talk from Web Directions South and published it earlier this week, just as I was writing this talk. He covers a lot of what I’m talking about here so much more eloquently than I am – go read it. One piece his post pointed me at that I hadn’t seen was TechCrunch’s recent review of August’s Smart Lock.


Tom picked out some choice quotes from the review which I’ll share here:

“…much of the utility of the lock was negated by the fact that I have roommates and not all of them were willing or able to download the app to test it out with me […] My dream of using Auto-Unlock was stymied basically because my roommates are luddites.”

“Every now and then it didn’t recognize my phone as I approached the door.”

“There was also one late night when a stranger opened the door and walked into the house when August should have auto-locked the door.”

This is the reason for having beautiful seams: seams help you understand the edges of a device’s sphere of interaction, but should not be so big as to trip you up. Many similar issues exist with IP-connected light bulbs. When I need to remember which app to launch on my phone depending on which room I’m walking into, and which bulbs happen to be in there, the seams have gotten too big.

In a recent blog post, Tom Coates wrote about the idea of a chatroom for the house – go read it.

Much like a conference might have a chatroom, so might a home. And it might be a space that you could duck into as you pleased to see what was going on. By turning the responses into human language you could make the actions of the objects less inscrutable and difficult to understand.


This echoes back to the world of Twitter accounts for Things. But rather than them being one-sided conversations presenting raw data in a more consumable form, or Wizard-of-Oz style man-behind-the-curtain accounts, a chatroom is a space where the conversation can flow both ways; both between the owner and their devices, but also between the devices themselves.

What might it take to turn such a chatroom into a reality?


Getting Things Talking

Getting Things connected is no easy task.

We’re still in the early days of the protocol wars.

Whilst I have to declare allegiance to the now international OASIS standard MQTT, I’m certainly not someone who thinks one protocol will rule them all. It pains me whenever I see people make those sorts of claims. But that’s a talk for a different day.

Whatever your protocol of choice, there is an emerging core set that seems to be the most commonly talked about. Each with its strengths and weaknesses. Each with its backers and detractors.


What (mostly) everyone agrees on is the need for more than just efficient protocols for the Things to communicate by. A protocol is like a telephone line. It’s great that you and I have agreed on the same standards so when I dial this number, you answer. But what do we say to each other once we’re connected? A common protocol does not mean I understand what you’re trying to say to me.
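
To make that concrete, here are two hypothetical devices happily sharing a broker and a protocol while saying things a subscriber cannot reconcile. This sketch uses the real paho-mqtt client; the broker address, topics and payloads are invented.

    # Both devices speak MQTT; neither says what its payload *means*.
    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.connect("broker.example.com", 1883)

    # Two temperature readings: one a bare Celsius string, the other
    # JSON in Kelvin. The protocol delivers both; meaning isn't shared.
    client.publish("house/livingroom/thermometer", "21.5")
    client.publish("house/kitchen/sensor1", '{"t": 294.65, "u": "K"}')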

And thus began the IoT meta-model war.

There is certainly a lot of interesting work being done in this area.

For example, there is HyperCat, which has come out of a Technology Strategy Board funded demonstrator project involving a consortium of companies over the last year or so.


HyperCat is an open, lightweight JSON-based hypermedia catalogue format for exposing collections of URIs. Each HyperCat catalogue may expose any number of URIs, each with any number of RDF-like triple statements about it. HyperCat is simple to work with and allows developers to publish linked-data descriptions of resources.
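
For a flavour of the format, here is a sketch of a tiny catalogue based on my reading of the early draft specification; treat the exact rel URNs as illustrative rather than definitive.

    # A sketch of a minimal HyperCat catalogue, printed as JSON.
    # The urn:X-tsbiot rels follow the early draft specification.
    import json

    catalogue = {
        "item-metadata": [
            {"rel": "urn:X-tsbiot:rels:isContentType",
             "val": "application/vnd.tsbiot.catalogue+json"},
            {"rel": "urn:X-tsbiot:rels:hasDescription:en",
             "val": "Sensors in my house"},
        ],
        "items": [
            {"href": "http://example.com/sensors/t1",
             "i-object-metadata": [
                 {"rel": "urn:X-tsbiot:rels:hasDescription:en",
                  "val": "Living room thermometer"},
             ]},
        ],
    }
    print(json.dumps(catalogue, indent=2))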

URIs are great. The web is made of them and they are well understood. At least, they are well understood by machines. What we’re lacking is the human view of this world. How can this well-formed, neatly indented JSON be meaningful or helpful to the user who is trying to understand what is happening?

This is by no means a criticism of HyperCat, or any of the other efforts to create models of the IoT. They are simply trying to solve a different set of problems to the ones I’m talking about today.


Talking to Computers

We live in an age where talking to computers is becoming less the preserve of science fiction.

Siri, OK Google and Cortana all exist as ways to interact with the devices in your pocket. My four-year-old son walks up to me when I have my phone out and says, “OK Google, show me a picture of the Octonauts”, and takes over my phone without even having to touch it. To him, as to me, voice control is still a novelty. But I wonder what his six-month-old sister will find to be the intuitive way of interacting with devices in a few years’ time.

Natural Language Processing, NLP, is one of the big challenges in Computer Science. Correctly identifying the words being spoken is relatively well solved. But understanding what those words mean, and what intent they try to convey, is still a hard thing to do.

Answering the question “Which bat is your favourite?” without any context is hard. Are we talking to a sportsman with their proud collection of cricket bats? A zookeeper with their colony of winged animals? Or perhaps a comic book fan being asked to choose between George Clooney and Val Kilmer?

Context is also key when you want to hold a conversation. The English language is riddled with ambiguity. Our brains are constantly filling in gaps, making theories and assertions over what the other person is saying. The spoken word also presents its own challenges over the written word.


“Hu was the premier of China until 2012”

When said aloud, you don’t know if I’ve asked you a question or stated a fact. When written down, it is much clearer.


In their emerging technology report for 2014, Gartner put the Internet of Things at the Peak of Inflated Expectations. But if you look closely at the curve, up at the peak, right next to IoT, is NLP Question Answering. If this were a different talk, I’d tell you all about how IBM Watson is solving those challenges. But this isn’t that talk.


A Conversational Internet of Things

To sidestep a lot of the challenges of NLP, one area of research we’re involved with is Controlled Natural Language and, in particular, Controlled English.

CE is designed to be readable by a native English speaker whilst representing information in a structured and unambiguous form. It is structured by following a simple but fully defined syntax, which may be parsed by a computer system.

It is unambiguous by using only words that are defined as part of a conceptual model.

CE serves as a language that is understandable by both humans and computer systems, which allows them to communicate.

For example:

there is a thermometer named t1 that is located in the room r1

A simple sentence that establishes the fact that a thermometer exists in a given room.

the thermometer t1 can measure the environment variable temperature

Each agent in the system builds its own model of the world that can be used to define concepts such as thermometer, temperature, room and so on. As the model is itself defined in CE, the agents build their models through conversing in CE.

there is a radiator valve v1 that is located in the room r1
the radiator valve v1 can control the environment variable temperature

It is also able to use reasoning to determine new facts.

the room r1 has the environment variable temperature that can be measured and that can be controlled
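
The CE engine is far richer than this, but a toy sketch of my own shows the shape of that reasoning step: facts held as triples, plus one rule that derives the room-level facts from the device-level ones.

    # A toy illustration of the reasoning step -- not the CE engine.
    facts = {
        ("t1", "is-a", "thermometer"),
        ("t1", "located-in", "r1"),
        ("t1", "can-measure", "temperature"),
        ("v1", "is-a", "radiator valve"),
        ("v1", "located-in", "r1"),
        ("v1", "can-control", "temperature"),
    }

    # Rule: if a device located in room R can measure or control an
    # environment variable V, then V can be measured/controlled in R.
    derived = set()
    for thing, rel, room in facts:
        if rel == "located-in":
            for thing2, verb, var in facts:
                if thing2 == thing and verb in ("can-measure", "can-control"):
                    derived.add((room, verb, var))

    print(sorted(derived))
    # [('r1', 'can-control', 'temperature'), ('r1', 'can-measure', 'temperature')]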

As part of some research work with Cardiff University, we’ve been looking at how CE can be extended to a conversational style of interaction.

These range from exchanging facts between devices – the tell:

the environment variable temperature in room r1 has value "21"

to being able to ask questions – the ask-tell:

for which D1 is it true that
      ( the device D1 is located in room V1 ) and
      ( the device D1 can measure the environment variable temperature ) and
      ( the value V1 == "r1")

to expanding on and explaining why certain facts are believed to be true:

the room r1 has the environment variable temperature that can be measured and that can be controlled
    because
the thermometer named t1 is located in the room r1 and can measure the environment variable temperature
    and
the radiator valve v1 is located in the room r1 and can control the environment variable temperature

The fact that the devices communicate in CE means the user can passively observe the interactions. But whilst CE is human-readable, it isn’t necessarily human-writeable. So some of the research is also looking at how to bridge from NL to CE using a confirm interaction:

NL: The thermometer in the living room has moved to the dining room
CE: the thermometer t1 is located in the room r2
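
A heavily simplified sketch of that bridge, assuming a pattern-based front end; the pattern and name mappings below are invented for illustration, and the research approach is considerably more sophisticated.

    # A toy NL-to-CE bridge. The pattern and mappings are invented.
    import re

    ROOMS = {"living room": "r1", "dining room": "r2"}
    DEVICES = {"thermometer": "t1"}

    def nl_to_ce(utterance):
        m = re.match(r"The (\w+) in the ([\w ]+) has moved to the ([\w ]+)",
                     utterance)
        if not m:
            return None
        device, _, destination = m.groups()
        return "the %s %s is located in the room %s" % (
            device, DEVICES[device], ROOMS[destination])

    candidate = nl_to_ce(
        "The thermometer in the living room has moved to the dining room")
    # The candidate CE sentence is shown back to the user to confirm
    # before it is asserted as a fact.
    print("CE:", candidate)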

Whilst the current research work is focused on scenarios for civic agencies – for example, managing information exchange in a policing context – I’m interested in applying this work to the IoT domain.

With these pieces, you can begin to see how you could have an interaction like this:

    User: I will be late home tonight.
    House: the house will have a state of occupied at 1900
    User: confirmed
    House: the room r1 has a temperature with minimum allowable value 20 after time 1900
           the roomba, vc1, has a clean cycle scheduled for time 1800

Of course, this is still quite dry and formal. It would be much more human, more engaging, if the devices came with their own Genuine People Personalities. Or at least, the appearance of one.

    User: I will be late home tonight.
    House: Sorry to hear that, shall I tell everyone to expect you back by 7?
    User: yes please    
    Thermometer: I'll make sure it's warm when you get home
    Roomba: *grumble*

I always picture the Roomba as being a morose, reticent creature who really hates its own existence. We have one in the kitchen next to our lab at work, set to clean at 2am. If we leave the door to the lab open, it heads in and, without fail, maroons itself on a set of bar stools we have with a sloped base. Some might call that a fault in its programming, much like Marvin, but I like to think it’s just trying to find a way to end it all.

This is all some way from having a fully interactive chat room for your devices. But the building blocks are there and I’ll be exploring them some more.


A lunchtime run

An event that has ignited competitive passions at Hursley for a number of years is the annual Quad-Department Games (previously known as the Tri-Department Games). Each year, the Barbarians, Hatters, Mavericks and Titans compete in a series of events with a rolling aggregated scoreboard. It is not just about outdoor sports, although the running, football and touch rugby are major parts of the calendar… the departments can also demonstrate their prowess in a cake bake, in a quiz, or at table football. It’s a lot of fun 🙂

Yesterday’s event was a running race around Hursley Park. On a brilliant, sunny and clear November day, a total of 57 runners completed a 5km course. There are a couple of sets of photos on Flickr, but here are some highlights…

Runners gathering

Quick start

Through the trees

Beneath the autumn trees

Congratulations to all involved, congratulations to the Mavericks for the overall team win, and to Dave Currie for his organisation (and for bringing along MiniMe support!)

Extreme Blue covered by BBC News

A really quick follow-up to the write-up of the Extreme Blue presentations to note that the BBC News website also has a nice report on this year’s programme at Hursley:

Every year, IBM runs a summer internship programme for the most talented young software designers and business students.

Participants are divided into groups, each of which works on a pet project. At the end of their 12 week design period, their prototypes are presented.

The UK leg of Extreme Blue takes place at IBM’s Hursley lab near Winchester. BBC News went along to see what they had dreamt up.

Read more at Cars and Cursors go Smart at IBM’s Extreme Blue.

Minihacks and Open Technologies

It’s not all about process, software development, and quadricopters… 🙂

Guruplug

This week we’ve had what could be described as a “mini Hackday”, instigated by an idea from Andy Stanford-Clark and organised by Hursley newcomer Vaibhavi Joshi. The idea was to spend a few hours exploring the world of plug computers (in this case, a model called a Guruplug); to brainstorm some ideas around utility computers; and to generally see what we could do with this kind of a form factor.

Some great ideas emerged, and quite a few of us were severely tempted to order our new shiny gadgets on the spot… by the end of the morning the Really Small Message Broker was built and running on the Guruplug and some exciting MQTT-related thoughts were flying around. A nice break from the norm for all of us!

Inspired by some of the “social technologies for internal communications” discussions I’d had with Abi Signorelli at Social Media Week London the previous week – in particular, the ease of capturing a brief audio snippet on any particular topic – I thought I’d ask Vaibhavi what she thought – here’s a quick interview:

Straight after the hacking, it was time to move on to the Open Technologies event that was being run to promote Linux, Firefox and Symphony. I’m a user and a big fan of all of these tools, so it was nice to see a local Hursley event as part of IBM’s global awareness month, dedicated to bringing those within the internal community up to speed on what people were using. The best part? Free stickers 🙂

Open Technologies

OggCamp (2009)

Two weekends ago I was in Wolverhampton, along with the rest of the Ubuntu-UK Podcast team and the Linux Outlaws podcast team, to run a new one-day open source community unconference called OggCamp.

A few people have asked “why Wolverhampton?”. Which is a fair question considering that four of the organisers live in Hampshire, one in the South-West, one in Liverpool, and one in Bonn.

Well, Wolverhampton is the location of the annual LugRadio Live open source community conference. The organisers of LugRadio Live (the LugRadio podcast presenters) are, or were, based in Wolverhampton. While there are many things you could say about Wolverhampton, one thing that always impressed me was that, to attend LugRadio Live, people flew to Wolverhampton from all over the UK, from all over Europe, all over the States, and even from Hong Kong and Australia at times (see my blog post about past LRLs for more).

Last year, after four hugely popular LugRadio Live events, including one in San Francisco sponsored by Google, the team announced that the fortnightly LugRadio podcast was going to end, and so the fifth LugRadio Live (in July 2008) would be the last ever LugRadio Live.

And then, under pressure from Popular Demand, they agreed to do another last ever LugRadio Live – in October 2009. This last ever LugRadio Live, though, would only be one day, the Saturday, like their first ever LugRadio Live. Which left a whole Sunday to fill. Which is where OggCamp comes in.

The Connaught Hotel Welcomes OggCamp

When we decided to organise OggCamp, we had no idea how it would go down. We figured that, between the two podcasts (Ubuntu-UK Podcast and Linux Outlaws), we’d had enough positive feedback that we could get at least 50 people along. Because it would be the day after LRL, there was a chance that enough LRL attendees would stick around for the day on Sunday and come to OggCamp too. To make extra sure of that, we decided to hold OggCamp in the official LRL hotel (so that the geeks could just roll out of bed and into OggCamp), and to make the event free to attend.

In the end, about 130 people came to OggCamp. Which was brilliant!

The sight of people queuing up three flights of stairs to come in at 10.30 on the Sunday morning left us briefly gob-smacked.

We kicked off at about 11am with a quick introduction from all the presenters in which we explained how there was no pre-arranged schedule and that to sign up for a talk you just had to write it on a sticky note (large notes for full-hour talks; half-sized notes for half-hour talks) and stick it in a slot in the grid on the wall.

First up was Andy Stanford-Clark, who did a brand new talk, specially written for OggCamp (and completed the night before, while the rest of us were at the LRL karaoke party), about the geekier details of his Twittering House (the stuff the BBC didn’t get!). By midday, the schedule was getting pretty full (something of a relief!) and the planned topics included web services, how to prove identity on the Net, how to encourage young people to use Open Source Software, politics and geeks (from ORG), translating Playstation 2 games, and how to explain programming to non-programmers!

At 3pm, everyone gathered in the main room to watch a live joint recording of the Ubuntu-UK Podcast and Linux Outlaws. This started with a live raffle draw (surely a first in open source events?) for some very cool prizes donated by our sponsors, including a couple of Viglen MPC-Ls, some Ubuntu laptop bags and hoodies, an O’Reilly book, and an Arduino Mega. After the raffle, we did two segments: one about producing media using Open Source Software, and one about whether or not the Open Source community spreads itself too thin by creating so many different distributions. The segments included a lot of audience interaction, and also real-time twittering from the audience on to the TwitterFall screen behind us on-stage.

The live show was something we had been nervous about, partly because six is a lot of people to have talking at once, and partly because the two podcasts (UUPC and LO) are quite different in style, so we had no idea how well we would integrate. The two podcasts released their own versions of the live show during the following week and, if you’re keen, you can compare and contrast the two: UUPC (family friendly) and LO (includes the naughty words). I don’t think either podcast did much editing of content, which drew this comment from a UUPC listener.

So, all in all, I think we can say that OggCamp was a success. 🙂

It was certainly a lot of fun – if exhausting!

We also sold enough raffle tickets and OggCamp limited edition souvenir mugs to financially break even on the whole event, which was good from our point of view. And there has been a load of positive feedback from the attendees, including questions about whether we’ll do it again next year. Although we’ve tried not to commit to anything, by now I think we can safely say that there is likely to be another OggCamp next year.

(For more photos, see the OggCamp group on Flickr.)

September Equinox

Chinese Calendar

Hursley is a culturally as well as a technically diverse place, so we’ve got some great opportunities to learn from each other. This lunchtime I popped along to one of the events organised by the lab’s Chinese Connect team, which was all about Understanding the Chinese Calendar (the title of the post refers to a significant date this week in that calendar, September 23rd).

Previous talks in the Chinese Culture series, which is organised by Hursley’s Jenny He, have covered subjects such as the evolution of the Chinese languages, how to understand Chinese names, and Chinese music and instruments. I’m embarrassed to say that this is the first of the talks I’ve been to, despite working here for some time… I really should take more advantage of the range of activities and opportunities that Hursley has to offer!

Today’s talk was delivered by Darren Beard, who was particularly interested in the astronomical background to the Chinese calendar (having published a paper on the same topic several years ago). Darren covered the scientific background of this lunisolar calendar, and the changes that have taken place to it historically over the ~3500 years it has been around – particularly interesting to me, since I’m a historian by background. It’s a complicated system which takes account of 19-year lunar cycles, requires things like leap months, and has a set of rules which specify how it works… but it is certainly more comprehensible once you understand those aspects. It was interesting to realise just how much my own perceptions of time are based on the calendar system I’ve grown up with!
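
The 19-year cycle is the Metonic cycle, and the arithmetic behind it is pleasingly simple: 19 solar years line up almost exactly with 235 lunar months, which is why seven leap months get inserted per cycle.

    # The arithmetic of the Metonic cycle behind the 19-year rule.
    solar_year = 365.2422      # days in a mean tropical year
    lunar_month = 29.5306      # days in a mean synodic month

    print(19 * solar_year)     # ~6939.60 days
    print(235 * lunar_month)   # ~6939.69 days -- almost identical
    print(235 - 19 * 12)       # 7 leap months per 19-year cycle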

Linux Users descend on the House

[thanks to the brilliant Laura Cowen, producer of the Ubuntu UK Podcast and uber UX god at Hursley, for writing up this event – it’s a shame I wasn’t able to make it!]

As is usually the case when I’m attending a HantsLUG (Hampshire Linux User Group) meeting, it was a lovely sunny day on Saturday. It’s as if it knows that I’m going to be spending the day inside, geeking in front of a laptop screen. This meeting, however, we put the sun to good use, first of all showing off Hursley Park at its best, and then lunching out on the decking at the Clubhouse.

When I was a more frequent attendee of the HantsLUG bring-a-box meetings (where I installed my first Debian distro, and later my first Ubuntu), I’d often thought how cool it would be to host a meeting at IBM Hursley. But I never got as far as investigating the security and wifi hassles I’d have to overcome. Fortunately, Anton Piatek was a little braver and sent some emails to nearly the right people (who helpfully forwarded them on to really the right people), and suggested his plan to Adam Trickett, Chair of HantsLUG. Adam says he nearly bit Anton’s hand off and so it happened.

HantsLUG is one of the biggest LUGs in the UK and is our local Linux user group but has surprisingly never really (in the 7 years I’ve known them) had a huge amount of interaction with IBM Hursley. For a long time, though, there has been a good pool of Linux skills and interest in the Lab, and over the last couple of years the number of people around the Lab voluntarily using Linux as their desktop OS has risen (as has the number of Ubuntu lanyards to be seen as you walk the corridors of Hursley).

Image courtesy of fluffydragon

So what makes Hursley a good place for a LUG meeting? Well, for a start, it’s just a really nice place to be – and Hursley House as well as the Park are very impressive to show off to visitors 🙂

On Saturday, we were mostly in the Auditorium (where Spitfires were built during WWII), then when we led everyone down to the Clubhouse for lunch, we took the usual site-tour scenic route via the Sunken Garden and fish pond. Although Hursley is out in the country, seemingly in the middle of nowhere, it’s actually on the bus route from Winchester, so we had an excellent turnout of about 30 people. IBM Hursley also has a lot of cool people who do cool things that we can tell people about (although one piece of feedback I heard from a LUG person was that they thought we didn’t talk enough about what IBM does!).

Although we had the House to ourselves, and everyone was free to stand around and chat in the Main Hall, most of the day revolved around talks in the Auditorium. It all kicked off at 11am with an introduction to IBM Hursley (and, of course, directions to the fire exits and toilets) from Anton. The inimitable Andy Stanford-Clark, fresh from a week of press interviews, enthused everyone till lunchtime with tales of mouse traps, MQTT, twittering houses, twittering ferries, water meters, and energy monitoring. I say ‘enthused’ but there must be a better term to describe the way the audience rushed the stage when Andy offered to sell Current Cost monitors at a discount…

After lunch, we had a collection of shorter talks on a range of topics:

  • I talked about InfoSlicer, the open source software that my Extreme Blue student team developed last Summer and IBM released under the GPL
  • Anton described the anatomy of Ubuntu packages (he’s the guy that provides Ubuntu users in IBM with the flawless packages we’ve come to rely on)
  • Tony Whitmore related his experiences of producing the popular Ubuntu UK Podcast – and pimped the upcoming OggCamp unconference
  • Adam Trickett, Chair of HantsLUG, gave out free books in return for promises of book reviews on the HantsLUG wiki

Then everyone just hung around chatting for ages.

It was a really enjoyable and relaxed day; kudos to Anton, Stephen, and John for organising it from the IBM end. Thanks also to the IBMers who came along and to the many HantsLUG members who turned up. I’d say it was a success and we should definitely do it again.


An unconference and a little bit of history

Yesterday lunchtime the auditorium in Hursley House became the venue of an internal “unconference” of sorts – a very relaxed session with a bunch of short, snappy 5 minute presentations by folks from around the lab who related their experiences from different tech conferences.

Dale Lane spoke about Hackdays and Barcamps; Alex Hutter talked about last weekend’s Barcamp in Brighton; Robin Fernandes talked about user groups and his involvement with PHP; Iain Gavin from Amazon Web Services told us about external views on IBM; and Andy Stanford-Clark was, well, Andy 🙂 I think he may have mentioned something about some service called Twitter, but I wasn’t really paying attention… 😉 Most of it was Ignite-style high-speed babble, and mostly without slides.

Unlunch, unlearn

It was all the brainchild of the brilliant Zoe Slattery, who also had some exciting announcements to share with us (more to come on these once I get clearance to post!). There were guest appearances of photographs by Alice, too.

Oh, and my contribution? I gave a potted, high-speed history of eightbar from the perspective of someone who jumped in to the Hursley world from the outside. Here’s a pictorial tour. You’ll note few mentions of virtual worlds – not because that’s not something eightbar does anymore, but rather to remind people of the breadth of our interests. Oh, and guess what, the blog has been around for nearly 4 years – just a week or so to go!

(dunno what happened with the bizarro blank slide #12, it’s not supposed to be there…)

IBM Demos at the TEDGlobal Conference

Posted on behalf of Bharat Bedi…

The TEDGlobal Conference was an amazing week of learning, taking inspiration from and connecting with 700 of the world’s thinkers and doers. The speakers at TED gave excellent talks on subjects ranging from how humans might have evolved from aquatic apes to jumping from the edge of space.

Bharat Interview

IBM’s Smarter Planet vision fits in well with TED’s approach of “ideas worth spreading”, and IBM sponsored the Innovation Lounge and the 25 TED Fellows at the conference.
The Fellows are an amazing group of world-changing innovators from around the world.

IBM created two demonstrations for TED, and I had the opportunity to lead the effort to put these demos together. The demos incorporated a number of technologies, including ZigBee, messaging, ambient devices, mobile phone based remote control and monitoring, SMS, RFID, web & AJAX, Current Cost and home automation!

The first of these used RFID technology to help facilitate interaction and conversations between the TED Fellows and the other attendees in the TED Innovation Lounge. Each Fellow was given an RFID tag that detected their presence in the lounge and displayed their profile on three large screens. At the same time, wireless ambient devices changed colour to highlight the presence of the Fellows.
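
Bharat doesn’t describe the plumbing here, but given the messaging technologies listed above, one plausible wiring – a guess on my part, sketched in today’s terms with the real paho-mqtt client and invented topics, tag IDs and profile text – would be:

    # A guess at the wiring: tag reads fan out to the profile screens
    # and ambient devices over MQTT. Topics and IDs are invented.
    import paho.mqtt.client as mqtt

    FELLOWS = {"04A1B2C3": "Profile text for one of the TED Fellows"}

    def on_message(client, userdata, msg):
        tag = msg.payload.decode()
        if tag in FELLOWS:
            client.publish("lounge/screens/profile", FELLOWS[tag])
            client.publish("lounge/ambient/colour", "orange")

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("broker.example.com", 1883)
    client.subscribe("lounge/rfid/reads")
    client.loop_forever()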

TED Lounge

The second demo was about being smarter about our energy consumption and home automation. This was a good example of the Smarter Planet principles – instrumented, interconnected, intelligent – in action. We set up a home lounge environment with appliances such as lamps and fans whose electricity consumption was being monitored. These appliances could be remotely controlled via SMS and a mobile phone application. The amount of energy being consumed by the appliances was conveyed in subtle ways, again using an ambient device which changed colour.

Huge thanks to Dave Conway-Jones, Andy Stanford-Clark and Andrew Nowell for all their help with creating the demos.