My picks of the Virtual Worlds 2008 New York show

Virtual Worlds 2008 has been a very intense experience: seeing the sheer amount of growth in the industry, seeing all my colleagues as deeply immersed as I am, and seeing some very cool new things emerge.
I have three picks of the show that stood out for me personally. As well as being virtual world technologies, they have two other things in common. One is that two are from Australia (an odd coincidence, but I like linkages like this); the other is that they are all middleware related, representing a flexible and more diverse approach to virtual worlds that lets us break some of the patterns we have already become used to.
Here are those picks.


The first pick is Mycosm. This company came out of stealth at the show. I know they gave demos to a few colleagues and there was a general buzz. As I investigate more I will share what they actually do. In the demos and conversations I saw a very flexible system that allows all sorts of objects and types to be treated in a uniform manner: 3D object import, painting with brushes that are themselves composite objects, metadata properties on objects such as friction, ragdoll physics, rich avatars, and very rich rendering on the front end overall. All this is very good, but under the covers it also appears to take a more modular approach to its functions. It can run peer to peer or centralized (or a mix of the two) because of the underlying services model, and it can work with web services, which for me means being able to link with existing business systems as well as the Web 2.0 world.
Being able to take advantage of a richer game front end but drive it with services starts to fit more with mirror worlds and with simulations. I am looking forward to delving deeper and seeing where this one can go.
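To make concrete what I mean by driving a front end with services, here is a rough sketch in Python. It is not Mycosm's actual API, and the endpoint path and field names are made up; it just illustrates a thin client pulling object updates from a service that could equally be a peer, a central server, or an enterprise system.

```python
# A minimal sketch of the idea only; none of this is Mycosm's actual API.
# The endpoint path and field names are hypothetical, purely to illustrate
# a rich front end driven by services rather than a monolithic client.
import json
import urllib.request

def fetch_scene_updates(endpoint):
    """Pull pending scene updates from a service endpoint.

    The same call could point at a central server, a peer, or an
    enterprise web service, which is the attraction of a services model.
    """
    with urllib.request.urlopen(endpoint + "/scene/updates") as resp:
        return json.loads(resp.read())

def apply_updates(scene, updates):
    """Apply object property changes (friction, position, etc.) to local state."""
    for update in updates:
        obj = scene.setdefault(update["object_id"], {})
        obj.update(update["properties"])

# Local demonstration with canned data; a real client would call
# fetch_scene_updates("http://central.example.com") or a peer's address.
scene_state = {}
apply_updates(scene_state, [{"object_id": "crate-1",
                             "properties": {"friction": 0.4, "position": [2, 0, 5]}}])
print(scene_state)
```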

The second pick is Vastpark. This solution is also middleware. Its distinguishing elements are based around the consumption of services to create virtual worlds. The most interesting of these is the use of a markup language to describe scenes. It is also interesting that the way it pulls objects from other repositories appears to be more Service Oriented Architecture (SOA) based. Delivering markup means that assets can be referenced elsewhere, such as on Turbosquid. Being able to place function or data elsewhere and have it pulled together in a service-compositional way opens up interoperability in the right fashion: not just pure import and export, but live. The Vastpark site also refers to existing and forming standards as being relevant to them: OpenSocial, metawss and IMML. These are all good signs to me.
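To illustrate the service-compositional idea, here is my own sketch, not Vastpark's IMML and not any real repository URL. A scene description that only references assets held elsewhere might look something like this, with the assets pulled in live when the scene is composed.

```python
# A rough sketch of service composition, not Vastpark's IMML.
# The scene description and repository URLs are hypothetical illustrations
# of referring to assets held elsewhere and pulling them in live.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AssetRef:
    name: str
    url: str   # e.g. a Turbosquid-style repository holding the mesh

# The markup-style description only points at assets; it does not embed them.
scene_description = [
    AssetRef("meeting_table", "https://assets.example.com/meshes/table.dae"),
    AssetRef("whiteboard", "https://assets.example.com/meshes/board.dae"),
]

def compose_scene(refs, fetch: Callable[[str], bytes]):
    """Resolve each reference at load time, so the world is composed from
    live services rather than a one-off import/export of the assets."""
    return {ref.name: fetch(ref.url) for ref in refs}

# Demo with a stand-in fetcher; a real one might wrap urllib or an SOA client.
# Swapping the fetcher changes where the data lives without changing the scene.
demo = compose_scene(scene_description, lambda url: ("<mesh from %s>" % url).encode())
print(list(demo))
```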

The third and final pick is the Multiverse proof of concept: a full 3D client and a 2.5D Flash client, both interfaces working against the same server. The ability to use different platforms, whether through choice or through need, yet still have people respond to one another and benefit from the virtual world, is very important. Seeing Multiverse show this blending will help people consider this path rather than a generic one-platform answer.
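The pattern being demonstrated looks roughly like the sketch below. The class and event names are my own invention rather than Multiverse's actual protocol, but it shows the same world events being consumed by clients of very different richness.

```python
# A small sketch of the pattern: one server-side event stream, served to
# clients of different richness. Names are hypothetical, not Multiverse's protocol.
from dataclasses import dataclass

@dataclass
class WorldEvent:
    kind: str          # e.g. "avatar_moved", "chat"
    payload: dict

class Full3DClient:
    def render(self, event):
        print(f"[3D] animate {event.kind} with full meshes: {event.payload}")

class Flash25DClient:
    def render(self, event):
        # A lighter client consumes the identical event; it just draws less.
        print(f"[2.5D] redraw sprite for {event.kind}: {event.payload}")

def broadcast(event, clients):
    """Each client interprets the same event its own way, so people on
    different platforms still see and respond to each other."""
    for client in clients:
        client.render(event)

broadcast(WorldEvent("avatar_moved", {"who": "roo", "to": (10, 4)}),
          [Full3DClient(), Flash25DClient()])
```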

I also have to mention the IBM demonstrations, but it would be wrong for me to pick those as favourites.

  • Craig demonstrating the Second Life instances running on our blade servers
  • The IQ team demonstrating our Metaverse integrated with our enterprise services
  • Peter showing the lenticular 3D monitor running a Second Life client talking to OpenSimulator
  • Michael showing Activeworlds running on a 3G-enabled mobile phone connected to other users on richer clients.

This is not to say there were not other significant things; the keynotes and panels surfaced a few, such as Christian from Millions Of Us mentioning on Roo's panel that MOU are doing the builds for Sony Home. The backchannel and post-conference party conversations gave another insight into the industry.
I hope that as we get more take-up at these events we also start to move towards representing them virtually. Lots of people were not able to attend, but because there are so many diverse platforms and we don't have interoperability yet, we tended to stick to Twitter as an information channel for people. Capturing atmosphere and buzz to mirror events is an interesting challenge, but I am guessing that the people attending and presenting at the show are best placed to solve it for ourselves and others.