About Andy Stanford-Clark

IBM Master Inventor. Distinguished Engineer, Chief Technologist for Smarter Energy, IBM Global Business Services, Member of the IBM Academy of Technology.

Node-RED Flows that make flows

Executive Summary

When a mummy function node and a daddy function node love each other very much, the daddy node injects a payload into the mummy node through a wire between them, and a new baby message flow is made, which comes out of mummy’s output terminal. The baby’s name is Jason.

Where did this idea come from?

In Node-RED, if you export some nodes to the clipboard, you will see the flow and its connections use a JSON wire format.
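For illustration, here's a minimal hand-written sketch of that wire format — an array of node objects, each naming the nodes its outputs are wired to. The field values here are made up, not a real export:

```javascript
// A minimal sketch of the Node-RED wire format: an array of node objects.
// Values are illustrative; a real export carries more fields.
const flow = [
  { "id": "1", "type": "inject", "name": "tick",
    "x": 100, "y": 80, "wires": [["2"]] },  // output wired to node "2"
  { "id": "2", "type": "debug", "name": "",
    "x": 300, "y": 80, "wires": [] }
];
console.log(JSON.stringify(flow));
```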

For a few weeks I’ve been thinking about generating that JSON programmatically, in order to create flows which encode various input and output capabilities and functions, according to some input parameters which give the specific details of what the flow is to do.

I think an alignment of planets occurred, which led to this idea:

@stef_k_uk has been talking to me about deploying applications onto “slave” Raspberry Pis from a master Pi.

Dritan Kaleshi and Terence Song from Bristol University have been discussing using the Node-RED GUI as a configurator tool for a system we’re working on together, and also we’ve been working on representing HyperCat catalogues in Node-RED, as part of our Technology Strategy Board Internet of Things project, IoT-Bay.

@rocketengines and I were cooking up cool ideas for cross-compiling Node-RED flows into other languages, a while back, and

in a very productive ideas afternoon last week, @profechem and I stumbled upon an idea for carrying the parser for a data stream along with the data itself, as part of its meta-data description.

All of the above led me to think “wouldn’t it be cool if you could write a flow in Node-RED which produced as output another flow.” And for this week’s #ThinkFriday afternoon, I gave it a try.

First experiment

The first experiment was a flow which had an injector node feeding a JSON object of parameters into a flow which generated the JSON for an MQTT input node, a function node to process the incoming data in some way, and an MQTT output node. So it was the classic subscribe – process – publish pattern which occurs so often in our everyday lives (if you live the kind of life that I do!).
first Node-RED flow

And here’s the Node-RED flow which generates that:

flow-generator

So if you sent in
{
  "broker": "localhost",
  "input": "source_topic",
  "output": "destination_topic",
  "process": "msg.payload = \"* \"+msg.payload+\" *\";\nreturn msg;"
}

the resulting JSON would be the wire format for a three-node flow which has an MQTT input node subscribed to “source_topic” on the broker on “localhost”, a function node which applies a transformation to the data: in this case, wrapping it with an asterisk at each end, and finally an MQTT publish node sending it to “destination_topic” on “localhost”.
N.B. make sure you escape any double quotes in the “process” string, as shown above.
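The generating flow itself isn’t listed here, but the heart of it might look something like this function-node sketch. The node ids, coordinates and the broker field are simplified for illustration — a real MQTT node references a separate broker configuration node by id rather than carrying the host name inline:

```javascript
// Sketch of the flow-generator: build wire-format JSON for the
// subscribe - process - publish pattern from the input parameters.
// Simplified: real MQTT nodes reference a broker config node by id.
function generateFlow(p) {
  return JSON.stringify([
    { id: "1", type: "mqtt in",  broker: p.broker, topic: p.input,
      wires: [["2"]] },
    { id: "2", type: "function", name: "process", func: p.process,
      wires: [["3"]] },
    { id: "3", type: "mqtt out", broker: p.broker, topic: p.output,
      wires: [] }
  ]);
}

const json = generateFlow({
  broker: "localhost",
  input: "source_topic",
  output: "destination_topic",
  process: "msg.payload = \"* \"+msg.payload+\" *\";\nreturn msg;"
});
console.log(json);
```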

The JSON appears in the debug window. If you highlight it, right-click – Copy, then do Import from… Clipboard in Node-RED, Ctrl-V the JSON into the text box and click OK, you get the three-node flow described above, ready to park on your Node-RED worksheet and then Deploy.
And it works!!

So what?

So far so cool. But what can we do with it?

The next insight was that the configuration message (supplied by the injector) could come from somewhere else. An MQTT topic, for example. So now we have the ability for a data stream to be accompanied not only by meta-data describing what it is, but also have the code which parses it.
flow with added MQTT configurator

My thinking is that if you subscribe to a data topic, say:
andy/house/kitchen/temperature
there could be two additional topics, published “retained” so you get them when you first subscribe, and then any updates thereafter:

A metadata topic which describes, in human and/or machine readable form, what the data is about, for example:
andy/house/kitchen/temperature/meta with content
“temperature in degrees Celsius in the kitchen at Andy’s house”

And a parser topic which contains the code which enables the data to be parsed:
andy/house/kitchen/temperature/parser with content
msg.value = Math.round(msg.payload) + " C"; return msg;
(that’s probably a rubbish example of a useful parser, but it’s just for illustration!)
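To sketch the idea in JavaScript: the two companion topics would be published retained (with the npm mqtt package that would be client.publish(topic, payload, { retain: true })), and a subscriber that has received the parser can apply it to the data stream. Here it is applied via new Function, purely for illustration — in practice you would want to sandbox code received over the network:

```javascript
const base = 'andy/house/kitchen/temperature';

// The two retained companion topics and their contents:
const companions = {
  [base + '/meta']:
    "temperature in degrees Celsius in the kitchen at Andy's house",
  [base + '/parser']:
    'msg.value = Math.round(msg.payload) + " C"; return msg;'
};

// A subscriber that has fetched the parser applies it to incoming data
// (eval'd here for illustration only - sandbox untrusted code!):
const parse = new Function('msg', companions[base + '/parser']);
const result = parse({ payload: 21.7 });
console.log(result.value); // "22 C"
```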

If you’re storing your data in a HyperCat metadata catalogue (and you should think about doing so if you’re not – see @pilgrimbeart’s excellent HyperCat in 15 minutes presentation), then the catalogue entry for the data point could include the URI of the parser function along with the other meta-data.

And then…

Now things get really interesting… what if we could deploy that flow we’ve created to a node.js run-time and set it running, as if we’d created the flow by hand in the Node-RED GUI and clicked “Deploy”?
Well we can!

When you click Deploy, the Node-RED GUI does an HTTP POST to “/flows” in the node.js run-time that’s running the Node-RED server (red.js), and sends it the list of JSON objects which describe the flow that you’ve made.
So… if we hang an HTTP request node off the end of the flow which generates the JSON for our little flow, then it should look like a Deploy from a GUI.
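Sketched in JavaScript, the request that HTTP request node makes might look like this. The host name is made up; 1880 is Node-RED’s default port, and note that later Node-RED versions also expect a Node-RED-API-Version header, plus authentication if adminAuth is enabled:

```javascript
// Build the deploy request: POST the generated wire-format JSON to the
// /flows admin endpoint of the target Node-RED runtime.
function deployRequest(host, flowJson) {
  return {
    url: 'http://' + host + ':1880/flows',  // 1880 = default Node-RED port
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: flowJson
  };
}

const req = deployRequest('raspberrypi.local', '[]');
// Hand req to fetch / http.request, or let an HTTP request node do it:
//   fetch(req.url, { method: req.method, headers: req.headers, body: req.body });
console.log(req.url); // http://raspberrypi.local:1880/flows
```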

Et voilà!
flow that deploys to a remote Node-RED
Note that you have to be careful not to nuke your flow-generating-flow by posting to your own Node-RED run-time! I am posting the JSON to Node-RED on my nearby Raspberry Pi. When you publish a configuration message to the configuration topic of this flow, the appropriate set of nodes is created – input – process – output – and deployed to Node-RED on the Pi, which dutifully starts running the flow: subscribing to the specified topic, transforming the data according to the prescribed processing function, and publishing it to the specified output topic.

I have to say, I think this is all rather exciting!

@andysc

Footnote:

It’s worth mentioning that Node-RED generates unique IDs for nodes that look like “8cf45583.109bf8”. I’m not sure how it does that, so I went for a simple monotonically increasing number instead (1, 2 …). It seems to work fine, but there might be good reasons (which I’m sure @knolleary will tell me about) why I should do it the “proper” way.
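Both schemes are easy to sketch. The hex-style generator below is only a guess at the shape of Node-RED’s ids, not its actual algorithm:

```javascript
// The simple approach from the post: monotonically increasing ids.
let counter = 0;
function nextId() { return String(++counter); }

// A guess at something Node-RED-ish: random hex split by a dot.
function hexishId() {
  const hex = (n) => Math.floor(Math.random() * Math.pow(16, n))
                         .toString(16).padStart(n, '0');
  return hex(8) + '.' + hex(6);
}

console.log(nextId());   // "1"
console.log(hexishId()); // e.g. "8cf45583.109bf8"
```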

Sending, not taking, the biscuit

Teleportation comes one step closer as UK scientists collaborate. A team of astronomers have joined forces with IBM’s software engineers and taken tentative steps to transfer material across the Internet.

Being unable to take up the offer of a biscuit at a recent meeting of a sub-group of the Isle of Wight’s Vectis Astronomical Society, Dr Andy Stanford-Clark, IBM Master Inventor, who was at home at the time, accepted instead the offer of a Virtual Biscuit.

He then laid down a challenge to the team of astronomers, led by Dr Lucy Rogers (@drlucyrogers), to deliver the biscuit to him using IBM’s Smarter Planet messaging middleware technology, MQTT.

This challenge was enthusiastically accepted, and the next day, Andy took the biscuit by subscribing to the appropriate topic on an IBM message broker and downloading the confectionery – picture.

In an interesting twist, not unusual in the early stages of the invention of new technology, the image arrived intact, but subtly altered – it is horizontally flipped. This is reminiscent of some of the teething problems in the matter transportation technology described in Michael Crichton’s book Timeline.

Stanford-Clark said: “clearly there’s a very long way to go before we can transfer an ACTUAL biscuit across the Internet using MQTT, but this is an exciting first step, and a great motivation for further research.”

Long live the infocenter!

I’ve always been a bit scared of infocenters – even though, deep down, I know they’re “just HTML”, they never quite seem that way. JavaScript and to-the-pixel object placement is just getting too good these days. You could almost mistake it for a Java applet, or at least some kind of fancy AJAX application.

But no, it’s just a set of good old framesets, frames, HTML content, hyperlinks and images, bound together with some JavaScript egg-white and stirred vigorously for a few minutes to make the infocenters we know and (some, I hear) love.

However, to make it seem like it’s “alive”, there is a Java servlet lurking back at the server, generating parts of the Infocenter dynamically, including rendering the Table of Contents from a behind-the-scenes XML description, and running search and bookmarks and things like that.

What I became curious about, then, were two things:

  • Could we extract a sub-set of an infocenter and just display that, rather than having to wade through everything we were given? For example, I might only be interested in the administration section of a product, or might only need to know about one component of a toolkit of many components. Having a more navigable and less intimidating sub-set would greatly improve productivity.
  • Rather than having to install an Eclipse infocenter run time on a server to host a set of documentation, is there a way to run it on any plain old HTTPd (e.g. Apache)? I accept that search, bookmarks, and other dynamic features won’t work, but the real information – the useful stuff in the right-hand window, which we use to do our jobs with the products we’re trying to understand; and the all-important navigational Table of Contents structure in the left-hand window – would be available to us “anywhere” we can put an HTTPd.

With a ThinkFriday afternoon ahead of me, I thought I’d see what could be done. And the outcome (to save you having to read the rest of this!) is rather pleasing: Lotus Expeditor micro broker infocenter.

This is a subset of the Lotus Expeditor infocenter containing just the microbroker component, being served as static pages from an Apache web server.

First the information content. The challenge I set was to extract the sections of the Lotus Expeditor documentation which relate to the microbroker component. It has always been a bit of a struggle to find these sections hidden amongst all the other information, as it’s in rather non-obvious places, and somewhat spread around. This means creating a new navigation tree for the left-hand pane of the Infocenter. When you click on a link in the navigation tree, that particular topic of information is loaded into the right-hand window.

However, it quickly became apparent that just picking the microbroker references from the existing nav tree would yield an unsatisfactory result: the topics need to be arranged into a sensible structure so that someone looking for information on how to perform a particular task would be guided to the right information topic. Just picking leaf nodes from the Lotus Expeditor navigation tree would leave us with some oddly dangling information topics.

Fortunately Laura Cowen, a colleague in the Hursley User Technologies department for messaging products, does this for a living, and so was able to separate out the microbroker wheat from the rest of the Expeditor documentation and reorganise the topics into a structure that makes sense out of the context of the bigger Expeditor Toolkit – and also, to be honest, into a much more meaningful and sensible shape for microbroker users.

First we needed to recreate the XML which the infocenter runtime server uses to serve up the HTML of the navigation tree. Laura gave me a sample of the XML, which contains the title and URL topic link. From the HTML source of the full Expeditor navigation tree, using a few lines of Perl, I was able to re-create XML stanzas for the entries in the navigation tree. Laura then restructured these into the shape we wanted, throwing out the ones we didn’t want, and adding in extra non-leaf nodes in the tree to achieve the information architecture she wanted to create.

Wave a magic wand, and that XML file becomes a plug-in zip file that can be offered up to an infocenter run time, and the resulting HTML content viewed. After some iterative reviews with potential future users of the microbroker infocenter, we finalised a navigation tree that balanced usability with not having to create new information topics, apart from a few placeholders for non-leaf nodes in the new navigation tree.

So far so good – we had an infocenter for just the microbroker component of Expeditor, and it was nicely restructured into a useful information architecture.

Now for phase two of the cunning plan: can we host that on a plain-old HTTPd without the infocenter run time behind it? The information topics (the pages that appear in the right-hand window) are static already, and didn’t need to be rehosted – the existing server for the Lotus Expeditor product documentation does a perfectly good job of serving up those HTML pages. It’s the rest of the Infocenter, the multiple nested framesets which make up the Infocenter “app”, and the all-important navigation tree, which are dynamically served, from a set of Java Server Pages (JSPs).

A quick peek at the HTML source revealed that several JSPs were being used with different parameter sets to create different parts of the displayed HTML. These would have to be “flattened” to something that a regular web server could host. A few wgets against the infocenter server produced most of the static HTML we would need, but quite a few URLs needed changing to make them unique when converted to flat filenames. A bit of Perl and a bit of hand editing sorted that lot out.

Then it transpired there is a “basic” and an “advanced” mode which the back-end servlet uses to (presumably) support lesser browsers (like wget 😐 ). Having realised what was going on, a bit of tweaking of the wget parameters to make it pretend to be Firefox brought the “advanced” content through from the server.

Then we had to bulk get the images – there are lots of little icons for pages, twisties, and various bits of window dressing for the infocenter window structure. All of this was assembled into a directory structure and made visible to an Apache HTTPd.

Et voilà! It worked! Very cool! An infocenter for the microbroker running on a straight HTTPd. Flushed with success, we moved it over to MQTT.org (the friendly fan-zine web site for the MQ Telemetry Transport and related products like microbroker). Tried it there…

Didn’t work. Lots of broken links, empty windows and “error loading page” stuff. It seems the HTTPd on MQTT.org isn’t quite as forgiving as mine: files with a .jsp extension were being served back with the MIME type text/plain rather than text/html, which may not look like much, but makes all the difference. So a set of symlinks of .jsp files to .html files, and another quick wave of a Perl script over the HTML files, put everything right.
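For the record, an alternative to the symlink workaround would have been a one-line MIME mapping in the Apache configuration (assuming you can edit the configuration on the hosting server – the AddType directive comes from Apache’s mod_mime):

```apache
# Tell Apache HTTPd to serve the flattened .jsp files as HTML
AddType text/html .jsp
```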

So with an afternoon’s work, we were able to demonstrate, to our considerable satisfaction, that we could excise a sub-set of an infocenter from a larger book, restructure it into a new shape, and take the resulting infocenter content and flatten it to a set of HTML pages which can be served from a regular HTTP server.

Film Star for a day

On Friday, a film crew came to visit me at home for the day to film some of my home automation inventions, and talk to me about the process of innovation and how it sometimes leads to products and solutions for IBM.

The background to this is that our ad agency were hunting round the Corporation to find something “cool” to talk about in some advertising material on the web. They heard about “this guy in the UK who has electronic mousetraps”, and knew immediately this was what they were looking for.

A video conference and a few conference calls later, we’d scheduled a film shoot at my house on the Isle of Wight (a little island just off the coast of Southampton in the UK). I had been sent a “brief” about what they were planning, but I still didn’t know exactly what would be entailed. So the key thing for me with regard to preparation was to make sure all the bits of my home automation system were up and running.

I’ve been playing with home automation projects for a few years now, all based around the IBM “microbroker” and MQ Telemetry Transport (MQTT) protocol for publish/subscribe messaging, which is just perfect for hooking in wizzy new gadgets in a very short time to enable me to try out new ideas.

So three car-loads of people showed up on Friday morning – two had flown in from the US just for the occasion – I felt very honoured already! There was the obligatory “IBM Minder” who lurks in the background making sure I don’t say anything “illegal”, or at least “highly regrettable”, or “significantly off-message”; the producer, interviewer, camera-person, sound-person, stills photographer, “key grip” (you always have to have one of those when you’re making a film, even though everyone knows that nobody fully understands the exact details of the job role!), um, er, oh, and a make-up person to stop me being too shiny under the spotlights that were being assembled in my kitchen, and a few more people to make the number up to eleven.

The theme was that the interviewer drops in on this inventor chap at his very English olde-worlde home, on the ever so quaint Isle of Wight, and amongst oak beams, stone walls, thatched roof, miscellaneous dogs, and a modest herd of llamas, he would explain these wizzy gadgets he’s implemented and experimented with at home, which are generally spring boards to solutions that IBM sells to customers across a range of different industries. Experimenting with the concepts in the home environment gives me a chance to work out all the issues that make it difficult, and (very importantly) give me a demonstrable system to show to people.

So with lights blazing, cameras rolling, and fluffy boom mic being fluffy, I showed off my power monitoring system (live graph of how much power my house is consuming), X10 lighting system controlled using MQTT from my Java-enabled cellphone (how cool is THAT!), cellphone-activated Reindeer lights in the garden left over from Christmas, and my MQTT-enabled caller ID system that screen pops the name and a picture of the person who’s calling (if the system recognises the number) on the Kitchen Computer.

Then some of the crew went out scavenging for food (you didn’t think I was going to attempt to feed an entire film crew, did you!), and they came back with, well, what can I say – I think a picture is worthwhile at this point. I think “the contents of the local shop” would be the best description! We were still eating those sandwiches on Sunday!

lunch for how many?

After lunch we went out to see the llamas, which the film crew instantly decided were far more interesting than me. In fact, I suspect this might end up being a film about dogs and llamas, with a voice-over by Andy Stanford-Clark! I lost count of the number of stills the photographer took of the dogs, and the makeup person, not really having a huge task, spent most of the afternoon happily playing football with the Airedale on the lawn (“to keep her out of your way”…. yeah, right!). Holding a llama on a lead in one hand, I explained to the camera how my llama tracking system will work – “track the trek with MQTT”.

llamas, camera, action!

Then we went back in the house for more interviewing, and to demonstrate the system for which I’m most famous: the electronic mousetraps, which send a message to my cellphone when one of them catches a mouse, so I know to go and reset it and dispose of the “stiff” before it starts to decompose. This simple but effective system, which publishes a “mouse event” message over MQTT to a broker out on the internet, has been running in production for 5 years now, and so is the longest-running MQTT application!

kitchen / film studio

With a looming deadline of 5.30pm, the crew went into clean-up mode, packing tripods, cameras, lights, mics, dog toys (oops!), and packing them into the cars. By 5.25 they were standing in the kitchen in their coats, thanking me so much for allowing them into my home, and showing them such cool technology.

I’m so glad I took a few photos of the day, because after they’d gone, there was not a trace that they had ever been there – I am in awe of the courtesy and professionalism that they showed throughout the day.

So there you are… film star for a day!

Andy Stanford-Clark, Master Inventor, Pervasive Messaging Technologies, IBM Hursley, UK