Node-RED Flows that make flows

Executive Summary

When a mummy function node and a daddy function node love each other very much, the daddy node injects a payload into the mummy node through a wire between them, and a new baby message flow is made, which comes out of mummy’s output terminal. The baby’s name is Jason.

Where did this idea come from?

In Node-RED, if you export some nodes to the clipboard, you will see the flow and its connections use a JSON wire format.

For a few weeks I’ve been thinking about generating that JSON programmatically, in order to create flows which encode various input and output capabilities and functions, according to some input parameters which give the specific details of what the flow is to do.

I think there’s been an alignment of planets, which led to this idea:

@stef_k_uk has been talking to me about deploying applications onto “slave” Raspberry Pis from a master Pi.

Dritan Kaleshi and Terence Song from Bristol University have been discussing using the Node-RED GUI as a configurator tool for a system we’re working on together, and also we’ve been working on representing HyperCat catalogues in Node-RED, as part of our Technology Strategy Board Internet of Things project, IoT-Bay.

@rocketengines and I were cooking up cool ideas for cross-compiling Node-RED flows into other languages, a while back, and

in a very productive ideas afternoon last week, @profechem and I stumbled upon an idea for carrying the parser for a data stream along with the data itself, as part of its meta-data description.

All of the above led me to think “wouldn’t it be cool if you could write a flow in Node-RED which produced as output another flow.” And for this week’s #ThinkFriday afternoon, I gave it a try.

First experiment

The first experiment was a flow which had an injector node feeding a JSON object of parameters into a flow which generated the JSON for an MQTT input node, a function node to process the incoming data in some way, and an MQTT output node. So it was the classic subscribe – process – publish pattern which occurs so often in our everyday lives (if you live the kind of life that I do!).
first Node-RED flow

And here’s the Node-RED flow which generates that: flow-generator

So if you sent in
{
"broker": "localhost",
"input": "source_topic",
"output": "destination_topic",
"process": "msg.payload = \"* \"+msg.payload+\" *\";\nreturn msg;"
}

the resulting JSON would be the wire format for a three-node flow which has an MQTT input node subscribed to “source_topic” on the broker on “localhost”, a function node which applies a transformation to the data (in this case, wrapping it with an asterisk at each end), and finally an MQTT publish node sending it to “destination_topic” on “localhost”.
N.B. make sure you escape any double quotes in the “process” string, as shown above.
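
Here’s a minimal sketch of the kind of function node that can do the generation. Treat the exact wire-format properties as assumptions – node coordinates, the mqtt-broker config node and its fields all vary between Node-RED versions – it’s the shape that matters:

// Function node: turn the incoming parameter object into the
// JSON wire format for an mqtt-in -> function -> mqtt-out flow.
var p = msg.payload; // { broker, input, output, process }
var flow = [
    { "id": "1", "type": "mqtt-broker", "broker": p.broker, "port": "1883" },
    { "id": "2", "type": "mqtt in", "name": "", "topic": p.input,
      "broker": "1", "x": 100, "y": 100, "wires": [["3"]] },
    { "id": "3", "type": "function", "name": "process", "func": p.process,
      "outputs": 1, "x": 300, "y": 100, "wires": [["4"]] },
    { "id": "4", "type": "mqtt out", "name": "", "topic": p.output,
      "broker": "1", "x": 500, "y": 100, "wires": [] }
];
msg.payload = JSON.stringify(flow);
return msg;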

The JSON appears in the debug window. If you highlight it, right-click and Copy, then do Import from… Clipboard in Node-RED, Ctrl-V the JSON into the text box and click OK, you get the three-node flow described above, ready to park on your Node-RED worksheet and then Deploy.
And it works!!

So what?

So far so cool. But what can we do with it?

The next insight was that the configuration message (supplied by the injector) could come from somewhere else. An MQTT topic, for example. So now we have the ability for a data stream to be accompanied not only by meta-data describing what it is, but also by the code which parses it.
flow with added MQTT configurator

My thinking is that if you subscribe to a data topic, say:
andy/house/kitchen/temperature
there could be two additional topics, published “retained” so you get them when you first subscribe, and then any updates thereafter:

A metadata topic which describes, in human and/or machine readable form, what the data is about, for example:
andy/house/kitchen/temperature/meta with content
“temperature in degrees Celsius in the kitchen at Andy’s house”

And a parser topic which contains the code which enables the data to be parsed:
andy/house/kitchen/temperature/parser with content
msg.value = Math.round(msg.payload) + " C"; return msg;
(that’s probably a rubbish example of a useful parser, but it’s just for illustration!)
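
As a sketch of how the receiving end might use it (the assumptions are flagged in the comments – the post doesn’t prescribe how the parser code would be executed, and evaluating code from a topic raises obvious trust questions which I’m ignoring here):

// Function node on the consuming side (sketch).
// Assumption: a subscription to andy/house/kitchen/temperature/parser
// has already stashed the retained parser code in flow context.
var src = flow.get("parserSource");   // the parser code, as a string
var parse = new Function("msg", src); // build a function from it
return parse(msg);                    // apply it to each incoming data message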

If you’re storing your data in a HyperCat metadata catalogue (and you should think about doing so if you’re not – see @pilgrimbeart’s excellent HyperCat in 15 minutes presentation), then the catalogue entry for the data point could include the URI of the parser function along with the other meta-data.

And then…

Now things get really interesting… what if we could deploy that flow we’ve created to a node.js run-time and set it running, as if we’d created the flow by hand in the Node-RED GUI and clicked “Deploy”?
Well we can!

When you click Deploy, the Node-RED GUI does an HTTP POST to “/flows” in the node.js run-time that’s running the Node-RED server (red.js), and sends it the list of JSON objects which describe the flow that you’ve made.
So… if we hang an HTTP request node off the end of the flow which generates the JSON for our little flow, then it should look like a Deploy from a GUI.
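
In flow terms that’s just a function node setting up the request for an http request node configured to POST. A minimal sketch, assuming the remote Pi answers on the default port 1880 at a made-up address:

// Function node feeding an "http request" node (method: POST).
// msg.payload already holds the generated flow JSON.
msg.url = "http://raspberrypi.local:1880/flows"; // hypothetical address of the remote Pi
msg.headers = { "Content-Type": "application/json" };
return msg;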

Et voilà!
flow that deploys to a remote Node-RED
Note that you have to be careful not to nuke your flow-generating-flow by posting to your own Node-RED run-time! I am posting the JSON to Node-RED on my nearby Raspberry Pi. When you publish a configuration message to the configuration topic of this flow, the appropriate set of nodes is created – input – process – output – and then deployed to Node-RED on the Pi, which dutifully starts running the flow, subscribing to the specified topic, transforming the data according to the prescribed processing function, and publishing it to the specified output topic.

I have to say, I think this is all rather exciting!

@andysc

Footnote:

It’s worth mentioning that Node-RED generates unique IDs for nodes that look like “8cf45583.109bf8”. I’m not sure how it does that, so I went for a simple monotonically increasing number instead (1, 2 …). It seems to work fine, but there might be good reasons (which I’m sure @knolleary will tell me about) why I should do it the “proper” way.
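
For what it’s worth, something like this imitates the look of those IDs – on the assumption (the very one in question) that any unique string will do:

// Sketch: random hex ids in the "8cf45583.109bf8" style.
function makeId() {
    return Math.random().toString(16).slice(2, 10) + "." +
           Math.random().toString(16).slice(2, 8);
}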

MQTT powered video wall

Scaling things up a little from my first eightbar post.

This was one of those projects that just sort of “turned up”. About 3 weeks ago one of the managers for the ETS department in Hursley got a call from the team building the new IBM Forum in IBM South Bank. IBM Forums are locations where IBM can showcase technologies and solutions for customers. The team were looking for a way to control a video wall and a projector to make them show specific videos on request. The requests would come from pedestals known as “provokers”, each having a perspex dome holding a thought-provoking item. The initial suggestions had been incredibly expensive, and we were asked if we could come up with a solution.

Provoker

The provokers have access to power and an Ethernet connection. Taking all that into account, a few ideas came to mind, but the best seemed to be an Arduino board with Ethernet support and a button/sensor to trigger the video. There is a relatively new Arduino board available that has a built-in Ethernet shield, which seemed perfect for this project. Also, since a number of the items in the provokers would be related to IBM’s Smarter Planet initiative, it made sense to use MQTT as a messaging layer, as it has been used to implement a number of solutions in this space.

Nick O’Leary was enlisted to put together the hardware and also the sketch for the Arduino, as he had already written an MQTT client for Arduino in the past.

Each provoker will publish a message containing a payload of “play” to a topic like

provoker/{n}/action

Where ‘{n}’ is the unique number identifying which of the 6 provokers sent the message.

To provide some feedback to the guest that pressed the button, the LED has been made to pulse while one of the provoker-specific videos is playing. This is controlled by each provoker subscribing to the following topic

provoker/{n}/ack

Sending “play” to this topic causes the LED to pulse; sending “stop” turns the LED solid again.

The video wall will be driven by software called Scala InfoChannel, which has a scripting interface supporting (among other things) Python. So a short script to subscribe to the ‘action’ topics and to publish on the ‘ack’ topics got the videos changing on demand.
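
That script is Python under InfoChannel, but the pub/sub logic is small enough to sketch here in JavaScript (Node.js with the MQTT.js client); the broker address and the video-cueing call are assumptions:

// Sketch of the middle-man logic: watch the action topics,
// cue the video, and drive the button LEDs via the ack topics.
var mqtt = require("mqtt");
var client = mqtt.connect("mqtt://localhost:1883"); // the broker in the middle (address assumed)

client.on("connect", function () {
    client.subscribe("provoker/+/action"); // all six provokers
});

client.on("message", function (topic, message) {
    if (message.toString() !== "play") return;
    var n = topic.split("/")[1]; // which provoker fired
    console.log("cue video for provoker " + n); // stand-in for the InfoChannel call
    client.publish("provoker/" + n + "/ack", "play"); // start the LED pulsing
    // ...and when that video finishes, publish "stop" to turn the LED solid.
});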

And sat in the middle is an instance of the Really Small Message Broker to tie everything together.

Arduino in a box

This was also the perfect place to use some of my new “MQTT Inside” stickers.

First sticker deployed

This project only used one of the digital channels (for the button) and one of the analogue channels (for the LED) available on the Arduino – which leaves a lot of room for expansion for this type of device. I can see them being used for future projects.

Parts list

  1. Arduino Ethernet
  2. Blue LED Illuminated Button
  3. A single resistor to protect the LED
  4. 9V power supply
  5. Sparkfun Case

Virtual Forbidden City

I’ve been away for a couple of weeks so I’m very late in posting this!

On 28 and 29 April, IBM is going to be running an SOA tour using the virtual Forbidden City: Beyond Space and Time. Ian wrote about the Forbidden City launch last year.

According to the press release:

Attendees will be able to discuss SOA with IBM’s leading architects and strategists in an innovative setting, and learn first-hand how to shape the future of business communication. The virtual world tour provides a chance to:
  • See a real-life SOA case study in action
  • Hear how IBM solutions and products map to and enable specific SOA concepts and capabilities
  • Learn how to solve architectural challenges through SOA in a way that is non-disruptive to existing IT systems
  • Network with technical experts and peers

This is a good example of how we’re continuing to explore the use of virtual spaces for education and business. If you want to get involved, there’s really very little time to register (sorry! my bad!) – final day is tomorrow, April 17th.

Update – @ibmvfc reports via Twitter that registration is now open until Tuesday so if you’re interested, there are a few more days.

Virtual Forbidden City – Live. History in the making

The virtual Forbidden City project, Beyond Space and Time, has now gone live.
This has been a fascinating journey to follow, one that is as much part of the history of my involvement in virtual worlds as anything.
forbidden city
Way back in 2006, John Tolva and I bumped into one another again, having both worked together on Wimbledon and also having helped with another of the projects that came to Hursley for some extra development skills from Rob, Daz and many others.
John had hit the nebulous Second Life on the same day as I had, for no reason that we could fathom. We then noticed one another’s blog posts.
So the famous virtual world serendipity that I have learned to trust kicked in very early.
John was exploring options for the project that rolled on from his previous one, Eternal Egypt. John specializes in running large innovative projects that use the web for more philanthropic reasons, as part of what is called corporate community relations.
So there we were in SL; I had my personal shiny new island, Hursley, and he and his team were looking at how they might represent the Forbidden City in the growing world of the virtual, non-game metaverse.
So I loaned the team the island, and a massively detailed Chinese build started to form in the sky over the next few weeks whilst they procured their own official island.
That island then became the venue, after the 2006 Innovation Jam, for our CEO Sam Palmisano to announce virtual worlds as one of the top five findings from this virtual Chinese palace, getting it all onto the cover of BusinessWeek.
So Second Life proved a testing ground, but development then moved platforms to a more controlled environment. The team chose Torque (yes, the very same platform we used for the initial CIO metaverse work, so you can see how this flow is going!). As this is a service focused on one subject, rather like a game, it does not need the full dynamic nature of SL. The scaling also needs to work in a different way: running on, and with, underlying IBM infrastructure, this becomes in part a reference account for being able to build and run such things. Being Torque, the client download is large(ish), as it contains most of the resources you need; there is no need to stream the Forbidden City all the time, as it’s not constantly changing in structure, though new content can be delivered.
John’s post on the launch is the best reference, I think (hence only a single picture here from me). You will also find it on IBM.com.
One other feature that I always have to mention is the ability not just to follow NPC tours, but to be a tour leader yourself. This means teachers and educators can guide a tour around, adding their own structure to the experience for a willing group of participants.
Enjoy Beyond Space and Time

Swarming code development visualizations and emotions

You may think sometimes that we are only bothered with avatars and islands. However, things like this, which change the way you get insight into the flow of a project, are equally fascinating.
This video is from Media Molecule, showing the development of Little Big Planet (the game that may well save the PS3 in my house at least). Using code_swarm, it shows people joining the project and what they are editing. I am a fan of organic flocking algorithms, and the complexity that forms from the simplest of rules should teach us all something. A flock is something you let go, and see what happens; control is not part of the agenda.

LittleBigBang : The Evolution Of LittleBigPlanet from Media Molecule on Vimeo.
This links nicely into the Extreme Blue project that had a very cool name but got renamed by someone to sound a lot less cool for release (but what do eightbars know about such things!). Group Persona Visualization is available on alphaWorks. The aim was to gather the state of the emotions and feelings of a group based on their social media activity. It was a short, intern-based project, with a lot of eightbar mentors :-). I think it turned out to be very cool.

And the winner of award for innovation in the enterprise is…

Me and IBM 🙂
Yes, last night at the end of the first day of the Virtual World Expo, I was asked up on stage after a very nice intro from Steve Prentice (whose speech featured the brilliant quote from Pirates of the Caribbean that the pirates’ code is not so much a rule as a guideline) to pick up the award for Virtual World Innovation in the Enterprise.
And the winner is
It was brilliant to be able to accept this, and whilst I may have started this all off, it has been very much a team effort, so this is really for all the eightbars and former eightbars out there.
I also have to say a huge thank you to the organizers of the expo and to the judges for deciding on this.

Christopher V. Sherman, Executive Director, Virtual Worlds Management / Virtual Worlds Expo
Joey Seiler, Editor, Virtual Worlds News and awards chairman
Christian Renaud, CEO, Technology Intelligence Group
Erica Driver, Co-Founder and Principal, ThinkBalm
Nic Mitham, Managing Director, K Zero
Steve Prentice, VP and Fellow, Gartner
Robert Bloomfield, Founder and Host, Metanomics

The other winners can be found here

So what now? Well, I guess just keep on pushing and see where we can all take this as an industry. I do of course have some ideas; I just need to find a way, or a place, to make them happen.

Virtual World Conference and Expo – LA here we come

Next week, September 3rd/4th, is the next major Virtual Worlds Expo and conference. I will be over there along with lots of my IBM colleagues to meet, greet, share, explain and talk all about various aspects of the growing virtual worlds business.
Hollywood beckons
So come find me at the IBM booth or just grab anyone with a striped leather jacket 🙂
We have Colin Parris, our VP of digital convergence, doing a keynote; I am on the technology visionaries panel in the futures stream, and Boas Betzler will be on the enterprise track on the panel The Future of Virtual Collaboration in the Enterprise.
Michael Rowe, of Dogear Nation fame, will be wandering the floors doing interviews for a podcast with all sorts of key people in the industry.
Looking at the speaker list and the sponsor list, this is really going to be a huge show. They get incrementally bigger, so it’s great to see the growth happening.
If you check the other keynotes, John Landau is opening the show, and Steve Parkis, an SVP from Disney Online, and Tim Kring, creator of Heroes, are speaking too.
The entire speaker list is something I get a kick out of reading: CEOs and Senior VPs, and a few of us with other titles peppering the list.
I know I am going to be torn between my speaking schedule, booth schedule, catching-up-with-the-metarati schedule and the fact I really want to see some of the high-end Hollywood sessions. Whilst we have driven this into the enterprise based on human interaction and meeting-style communication, the blend back into the entertainment and game space is clearly going to have a major impact.
Anyway, if you are there, see you there; safe journeys, everyone.

What did I/We learn this year at Wimbledon 08 in Second Life

Wimbledon is like a very, very long plane journey. We all tune into the event and our various roles and focus completely on them. Having added the extra extreme sport of standing in Second Life in the same place for 15 hours a day for 14 days (except middle Sunday), and having spent many of those hours talking to people in world and in real life about what we do, I thought I would share a perspective on it.

  1. I can explain why I have been saying “People stopped asking why? and started asking if?”. Our real-life clients and visitors were more fascinated this year than in the previous two years. In 2006 it was “ha, that’s funny”, in 2007 it was “why are you doing this?”, and in 2008 it has been “Oh! I didn’t realize there was so much to it, so can I do x?”
  2. We had fewer visitors in Second Life this year, but the ones that came stayed longer and asked more detailed questions about the various modes of working. We also still had more visitors than the physical hospitality tours. Business opportunities arose in virtual discussions too. The depth of conversation and the quality of interaction proved to be way more important than the volume. This is a change that many marketeers do not yet understand, but clearly need to. I realized that in many ways I had turned into a Social Media Strategy Consultant. We segued from the official RL Wimbledon tour showing the website and how it was reaching out to social media sites, allowing people to take feeds and widgets wherever they happened to be. Metaverses are on that continuum. The 3D wiki, mixed with social network, mixed with being a fan, mixed with behind-the-scenes blogging, all merge in a virtual world event.
  3. Identity versus expression through avatars came up a lot. Many people see my predator AV and assume I am hiding. “I wear a mask but I don’t hide behind it”. It was very useful to have Judge Hocho there too, in SL and in RL. Judge’s choice is a more real-world expression of his physical form, though interestingly he refuses to have photos of himself in RL. Those two ends of the spectrum allowed me to explain that visual representation is not the same as knowing who someone actually is. Personas are difficult, and many people are not confronted by that balance. Proving who someone is, from a trust and security perspective, is not based on what their username is or their avatar appearance.
  4. Shared web browsing worked really well. The embedded web browser, albeit read-only, worked very well. It is a pity it did not do Flash, as much of the widget content was Flash-based, as was the realtime scoring feed for the pub-sub elements. However, showing people the wimbledon.org site, either in world or on a RL tour, worked very well. Demonstrating in the RL room the SL version of the website at the build allowed me to show how I would say the same things to people in world as Andy and Elizabeth would have just said to the visiting customers on our tech tour. The power of the familiar worked. Also, in world we drove a few extra people to the site: more traffic. The official numbers will be published soon.
  5. The complexity of shared web browsing becomes more apparent when actually trying to do it. I spent a good few conversations in RL and SL showing people why shared web browsing is complicated. It is not obvious to many people until they see it or do it. The web is a single-user experience. There may be 8.5 million people hitting the same site, but your view is your view. Content gets personalized, you log in, etc. The LL implementation has an embedded client render a URL provided to it; after that it is your client creating the session. If the pages are not personalized in any way then things will remain in sync, and we will all see the same page. If you were able to just click and navigate, following links etc., soon the web would start to personalize to you. Each view might start to diverge: cookie trails of browsing, preferences, etc. Also, being able to see any page on the web would mean people’s browsers might be taken places they don’t want to be, NSFW sites etc. Judge built the monitor for browsing so that it had a menu of defined Wimbledon pages, so people got to know they had shared control. We then had the odd occasion when two people asked for different pages at the same time; they would then feel the shared problem of losing the page they wanted to see. The Facebook page also highlights this: it asks the user to log on. If the SL user used the break-out object Judge provided, they would be able to view the URL in their personal embedded browser, not on a prim. They could then log on to Facebook. The prim object would then show them they were logged on and show their page. This would cause concern, as they would ask if everyone could see their details. The answer in this case is no: each user’s embedded browser forms its own session with the website, just like any browser, so very quickly content gets out of sync. The alternative, though, a server-based proxy showing the same content to all, would mean that people’s details, once logged on, would then be shared. All these problems are solvable, but we need some new metaphors in web browsing to make it obvious what is happening, the same as when https started to be shown as a padlock on the browser. We will need standard iconography for shared pages, individual pages with a similar view, divergent pages, etc.
  6. Why have we not modelled all the players? Another common question. There are several answers. The whole Wimbledon SL build is still done effectively for free, with a lot of volunteer effort, and building hundreds of player AVs is complex and time-consuming. Down the line, when virtual worlds can represent things even more accurately, we will be able to completely reconstruct a match in intricate detail. We know where the ball is, where the player is, and what stroke has been played. All this information is mashed together with video in a DVD we (IBM – the Atlanta sports events team) provide to the players and coaches after a match. So we know we can take crowd noise to indicate an exciting rally and index video based on that. To take a virtual event to the next level it needs this detail. However, we then run into player image rights. Even the top video games do not feature all the player models in tennis, and Wimbledon does not feature as a brand in any of the tennis games either. The blur of copyright, players’ image rights, broadcast rights and sheer politeness (do you like your AV?) gets complicated. Who knows where we could be for things like the 2012 Olympics, with a virtual presentation of the event live? For now, though, we keep it simple. Though if we do it again I still want to have more data and more atmosphere.
  7. Any event or build needs people. The single biggest draw had to be being able to talk to people at the RL event, behind the scenes. Everyone was always amazed and interested. Are you really there? Wow, that must be great! It is of course great, but never for the reasons people assume. For me it is the amazing sense of doing something so well known and immediate, having the whole of IBM behind us helping, and people interested in our work. Pride does not pay the mortgage, and being away from home and family for so long is awful, but it’s worth it. I could have done SL Wimbledon from anywhere, but the truth of being there came through the build and the avatar. Nothing beats real life, and reporting on that in a virtual world, at a human level, is the important thing to remember.

Maybe see you all next year. Thank you for all the support and conversations. Hi to Sean Krams, our most regular visitor; always good to see you there, Sean. It meant a great deal to Judge and me.

Fred Perry, Second Life, Wimbledon video fun

I had an idea for a video presentation with something a bit different this year for the Second Life Wimbledon build in IBM 7. This is more of a rush of it, but it features CrazyTalk and voices from Cepstral, edited up with my newly purchased Premiere Elements. I like to get the ideas out there.
It’s a bit of an example of life imitating art, crossover augmentation. You will see what I mean. Also, I have been typing the same things all day to explain what we do, so text-to-voice seemed to make sense.
The Wimbledon website has gone very well this year too. I can’t say the numbers, as they are embargoed until post-event, but we like them 🙂

Wimbledon – the final week, get your widgets here

This year, more than ever before, the Wimbledon web experience is much more what us Web 2.0 geekanisters would like to see, letting people experience Wimbledon wherever they happen to be. And I am not just talking about the Second Life presence.
This widget is another prime example, able to be embedded and posted all over the place: Facebook, blogs, etc. It is also personalized to the user and the players they choose to follow. It’s been a quiet revolution for a website that got 266,311,332 page views over the event last year, but one that I am very happy to see.
So props to Stephen Hammer and the Atlanta sports event crew for putting this widget together. You have the next 7 days to enjoy its live features.