Machine Learning Course

Enough time has passed since I undertook the Stanford University Natural Language Processing Course for me to forget just how much hard work it was and to be ready to start all over again.  This year I decided to have a go at the Coursera Machine Learning Course.

Unlike the 12-week NLP course last year, which estimated 10 hours a week and turned out to be more like 15-20 hours a week, this course was much more realistic in its estimate of 10 weeks at 8 hours a week.  I think I more or less hit the mark on that, spending about a day a week for the past 10 weeks studying machine learning - so around half the time required for the NLP course.

The course was written and presented by Andrew Ng, who seems to be rather prolific and something of an academic star in his fields of machine learning and artificial intelligence.  He is one of the co-founders of the Coursera site which, along with its main rival Udacity, has brought about the popular rise of Massive Open Online Courses.

The Machine Learning Course followed the same format as the NLP course from last year, which I can only assume is the standard Coursera format, at least for technical courses.  Each week there were one or two main topic areas to study, presented in a series of videos featuring Andrew talking through a set of slides on which he hand-writes notes for demonstration purposes, just as if you're sitting in a real lecture hall at university.  To check your understanding of the videos, each topic has a set of questions against which you're graded.  The second main component each week is a programming exercise, which for the Machine Learning Course must be completed in Octave - so yet another programming language to add to your list.  Achieving a mark of 80% or above across all the questions and programming exercises results in a course pass.  I appear to have done that with relative ease for this course.

The 18 topics covered were:

  • Introduction
  • Linear Regression with One Variable
  • Linear Algebra Review
  • Linear Regression with Multiple Variables
  • Octave Tutorial
  • Logistic Regression
  • Regularisation
  • Neural Networks Representation
  • Neural Networks Learning
  • Advice for Applying Machine Learning
  • Machine Learning System Design
  • Support Vector Machines
  • Clustering
  • Dimensionality Reduction
  • Anomaly Detection
  • Recommender Systems
  • Large Scale Machine Learning
  • Application Example Photo OCR

The course served as a good revision of some maths I haven't used in quite some time: lots of linear algebra, for which you need a pretty good understanding, and lots of calculus, which you don't really need to understand if all you care about is implementing the algorithms rather than working out how they're derived or proven.  Being quite maths based, the course leaned heavily on matrices and vectorisation rather than the loop structures most of us would reach for as a go-to framework for writing complex algorithms.  Again, this was some good revision, as I've not programmed in this fashion for quite some time.  You're definitely reminded of just how efficient you can make complex tasks on modern processors if you stand back from your algorithm for a bit and work out how best to utilise the hardware (via the appropriately optimised libraries) you have.
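
To make the contrast concrete, here's a minimal sketch of the idea in Python/NumPy (the course itself uses Octave, and this isn't course material): the same linear-regression cost computed first with explicit loops, then as a single vectorised matrix expression that hands the work to optimised linear-algebra routines.

```python
import numpy as np

def cost_loops(X, y, theta):
    """Squared-error cost computed one example at a time with explicit loops."""
    m = len(y)
    total = 0.0
    for i in range(m):
        prediction = 0.0
        for j in range(len(theta)):
            prediction += X[i, j] * theta[j]
        total += (prediction - y[i]) ** 2
    return total / (2 * m)

def cost_vectorised(X, y, theta):
    """The same cost as one matrix expression - the style the course encourages."""
    m = len(y)
    errors = X @ theta - y
    return (errors @ errors) / (2 * m)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.c_[np.ones(1000), rng.normal(size=(1000, 3))]  # bias column plus 3 features
    theta = np.array([1.0, 2.0, -0.5, 0.3])
    y = X @ theta + rng.normal(scale=0.1, size=1000)
    print(cost_loops(X, y, theta), cost_vectorised(X, y, theta))  # near-identical values
```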

The major thought behind the course seems to be to teach as many different algorithms as possible, and there really is a great range: starting off simply with linear algorithms and progressing right up to the current state-of-the-art Neural Networks and the ever fashionable map-reduce stuff.

I didn't find the course terribly difficult.  I'm no expert in any of the topics, but I've studied enough maths not to struggle with that side of things, and I don't struggle with programming either.  I didn't need to use the forums or any of the other social elements offered during the course, so I don't really have a feel for how others found it.  I can certainly imagine someone finding it a real struggle if they don't have a particularly deep background in either maths or programming.

There was, as far as I can think right now, one omission from the course (or maybe two, depending on how you count).  Most of the programming exercises were heavily frameworked for you in advance: you just have to fill in the gaps.  This is great for learning the various algorithms presented during the course, but it does leave a couple of areas at the end that you're not so confident with (aside from not really having a wide grasp of the Octave programming language).  The omission I'm thinking of is storing and bootstrapping the models you've trained.  All the exercises concentrate on training a model, storing it in memory and using it; when the program terminates, the model disappears with it.  It would have been great to have another module on the best ways to persist models between program runs, and how to continue training (bootstrap) a model that you have already persisted.  I'll feed that thought back to Andrew when the opportunity arises over the next couple of weeks.
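
Since the course never covered it, here's only a hedged sketch of the kind of thing I mean, in Python/NumPy rather than Octave, with all the names invented: train a simple linear-regression model with gradient descent, persist the parameters to a file, and on the next run pick them up again and continue training from where you left off.

```python
import os
import numpy as np

MODEL_FILE = "model_theta.npy"  # hypothetical location for the persisted parameters

def gradient_descent(X, y, theta, alpha=0.01, iterations=1000):
    """Run a fixed number of gradient-descent steps starting from the given theta."""
    m = len(y)
    for _ in range(iterations):
        gradient = X.T @ (X @ theta - y) / m
        theta = theta - alpha * gradient
    return theta

def load_or_init_theta(n_features):
    """Resume from persisted parameters if they exist, otherwise start from zeros."""
    if os.path.exists(MODEL_FILE):
        return np.load(MODEL_FILE)
    return np.zeros(n_features)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.c_[np.ones(500), rng.normal(size=(500, 2))]
    y = X @ np.array([4.0, -1.0, 2.5]) + rng.normal(scale=0.1, size=500)

    theta = load_or_init_theta(X.shape[1])  # picks up where a previous run left off
    theta = gradient_descent(X, y, theta)   # continue training (perhaps on new data)
    np.save(MODEL_FILE, theta)              # persist for the next run
    print(theta)
```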

The problem going forward won't so much be applying what has been offered here but working out what to apply it to.  The range of problems that can be tackled with these techniques is mind-blowing - just look at the rise of analytics we're seeing in all areas of business and technology.

Overall then, a really nice introduction into the world of machine learning.  Recommended!


W4A : Accessibility of the web

This is the last of four posts sharing some of the things I saw while at the International World Wide Web Conference for w4a.

Several presentations looked at how accessible the web is.

Web Accessibility Snapshot

In 2006, an audit was performed by Nomensa for the United Nations. They reviewed 100 popular websites for conformance to accessibility guidelines.

The results weren’t positive: 97% of sites didn’t meet WCAG level 1.

Obviously, conformance to guidelines doesn’t mean a site is accessible, but it’s an important factor: it isn’t sufficient, but it is necessary. Conformance can’t prove that a website is accessible, but there are some guidelines which, if not followed, we can be certain would break accessibility. So they are at least a useful starting point.

However, 2006 is a long time ago now, and the Internet has changed a lot since. One project, from colleagues of mine at IBM, is creating a more up to date picture of the state of the web. They analysed a thousand of the most popular websites (according to Alexa) as well as a random sampling of a thousand other sites.

(Interestingly, they found no statistically significant difference between conformance in the most popular websites and the randomly selected ones).

Their intention is to perform this regularly, creating a Web Accessibility Snapshot, with regular updates on the status of accessibility of the web. It looks like it could become a valuable source of information.

Assessing accessibility

There was a lot of discussion about how to assess accessibility.

One paper argued there is an over-reliance on automated tools and a lack of awareness of the negative effects of this. They demonstrated a manual review of websites, comparing results with output from six popular tools. Their results showed how few accessibility problems automated tools discover.

Accurately assessing a website against accessibility guidelines doesn’t necessarily mean that you can prove a site is accessible or easy to use.

Some research presented suggests guidelines only cover a little over half of problems encountered by users. Usability studies suggest some websites that don’t meet guidelines may be easier to use than websites that do, as users may have effective coping strategies for (technically) non-compliant sites. This suggests we need a better way of assessing accessibility.

A better approach might be to observe users interacting with a website and assess it based on their experiences. One tool presented, WebTactics, showed an automated approach to assessing accessibility by observing a user and identifying the behaviours they employ.

Another paper detailed how to add accessibility monitoring to a live website by adding additional JavaScript that captures and evaluates mouse clicks and button presses client-side before submitting them to a server for processing. Instead of requiring the user to perform predefined, and perhaps artificial, tasks, they hope to be able to discover tasks implicitly – that common tasks will emerge from the low-level actions that they collect.
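
The paper is about the instrumentation, but the server-side half of the idea is easy to illustrate. As a hedged sketch of the concept (my own illustration, not the authors’ algorithm), here is how low-level events submitted from many sessions could be mined for short action sequences that recur often enough to look like common tasks; the event and element names are invented.

```python
from collections import Counter

def common_sequences(sessions, length=3, top=5):
    """Count every consecutive run of `length` events and return the most frequent."""
    counts = Counter()
    for events in sessions:
        for i in range(len(events) - length + 1):
            counts[tuple(events[i:i + length])] += 1
    return counts.most_common(top)

# Each session is the ordered list of (event, target) pairs captured client-side.
sessions = [
    [("click", "#search"), ("keypress", "#search"), ("click", "#go"), ("click", "#result-1")],
    [("click", "#menu"), ("click", "#search"), ("keypress", "#search"), ("click", "#go")],
    [("click", "#search"), ("keypress", "#search"), ("click", "#go"), ("click", "#basket")],
]

print(common_sequences(sessions))
# The sequence search -> keypress -> go appears in all three sessions,
# so it emerges as a candidate "common task".
```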

Accessibility training

Given that most websites have some sort of accessibility problems, there was some talk about how this could be improved.

One project presented showed training that has been developed to raise awareness of how people with disabilities access the web, and the implications of the accessibility guidelines. It’s a practical course including hands-on assignments, and looks like it could be the sort of thing that could help web developers make a real difference.

Social Accessibility

Another project is using crowd-sourcing to improve web sites that already exist. Social Accessibility, another IBM project, enables volunteers to make web pages more accessible to the visually impaired.

It provides a mechanism for accessibility problems to be gathered directly from visually impaired users. Volunteers are then notified, and can respond using a tool that allows them to externally modify web pages to make them more accessible. It lets them publish metadata associated with the original web page. This can be applied to the web page for all visually impaired users who visit it in future using this tool, so that many users can benefit from the improvement.

cloud4all

Finally, a project called cloud4all is developing a roaming profile that stores your preferences in a way that multiple services can access. The focus is on accessibility – a user can store their accessibility needs in one place, and then interfaces can use this to adapt for them.


Dyslexia at W4A

This is the third of four posts sharing some of the things I saw while at the International World Wide Web Conference for w4a.

There were a few sessions presenting work done to improve understanding of how to better support people with dyslexia.

One interesting study investigated the effect of font size and line spacing on the readability of Wikipedia articles.

This was assessed in a variety of ways, some of which were based on the reader’s opinions, while others were based on measurements made of the reader during reading and of their understanding of the content after. The underlying question (can we make Wikipedia easier to read for dyslexics?) was compelling. It was also interesting to see this performed not on abstract passages of text, but in the context of using an actual website.

Accessibility isn’t just about the presentation but also the content itself. Another study looked at strategies for simplifying text that could make web pages more readable for dyslexic readers.

It compared the effectiveness of two strategies: the first provided synonyms on demand, giving the reader a way to request an alternative for any word; the second provided synonyms automatically, with complex words automatically substituted for simpler equivalents. Again, this was assessed in several ways, such as reading speed, the reader’s comprehension, the reader’s opinion of how easy it was, the effort it took (e.g. as interpreted from facial expressions), fixation duration measured using eye tracking, and so on.

On a more practical note, there were also tools presented that are being created to help support people with dyslexia.

Firefixia is a Firefox toolbar extension being created by colleagues of mine at IBM. It provides options for users to customise the web page they are looking at, offering modifications that have been demonstrated to make pages easier for dyslexic users to read.

Dyseggxia is an impressive looking iPad game that aims to support children with dyslexia through fun word games.


W4A : Future of screen readers

This is the second of four posts sharing some of the things I saw while at the International World Wide Web Conference for w4a.

Several of the projects that I saw showed glimpses of a possible future for screen readers.

I’ve written about screen readers before, and some of the challenges with using them.

Interactive SIGHT

One project interpreted pictures of charts or graphs and created a textual summary of the information shown in them.

I’m still amazed at this. It takes a picture of a graph, not the original raw data, and generates sensible summaries of what it shows.

For example, given this image:

It can generate:

This graphic is about United States. The graphic shows that United States at 35 thousand dollars is the third highest with respect to the dollar value of gross domestic product per capita 2001 among the countries listed. Luxembourg at 44.2 thousand dollars is the highest

or

The dollar value of gross domestic product per capita 2001 is 25 thousand dollars for Britain, which has the lowest dollar value of product per capita 2001. United States has 1.4 times more product per capita 2001 than Britain. The difference between the dollar value of gross domestic product per capita 2001 for United States and that for Britain is 10 thousand dollars.

The original version was able to process bar graphs, and was presented to W4A in 2010. What I saw was an extension that added support for line graphs.

Their focus is on the sort of graphics found in newspapers and magazines – informational, rather than scientific graphs. They want to be able to generate a high level summary, rather than a list of plot points that require the user to build a mental model in order to interpret.

For example:

The image shows a line graph. The line graph presents the number of Walmmart’s sales of leather jackets. The line graph shows a trend that changes. The changing trend consists of a rising trend from 1997 to 1999 followed by a falling trend through 2006. The first segment is the rising trend. The rising trend is steep. The rising trend has a starting value of 1890. The rising trend has an ending value of 36840. The second segment is the falling trend. The falling trend has a starting value of 36840. The falling trend has an ending value of 12606.

The image shows a line graph. The line graph presents the number of people who started smoking under the age of 18 in the US. The line graph shows a trend that changes. The changing trend consists of a rising trend from 1962 to 1966 followed by a falling trend through 1980. The first segment is the rising trend. The rising trend is steep. The second segment is the falling trend.

It’s able to interpret an image and recognise trends, recognise how noisy or smooth it is, recognise if the trend changes, and more. Impressive.
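
The image interpretation is the impressive part and I won’t pretend to sketch that, but the final summarisation step is easy to illustrate. Here’s a rough Python sketch (my own, not the authors’ system) that takes the kind of (year, value) points such a tool might have extracted from a line graph, splits them into rising and falling segments, and describes them; the figures come from the leather-jackets example quoted above, plus one made-up intermediate point.

```python
def describe_trend(points, subject):
    """Split a series into rising/falling segments and produce a short summary."""
    segments = []
    start = 0
    for i in range(1, len(points)):
        rising_now = points[i][1] >= points[i - 1][1]
        rising_before = points[i - 1][1] >= points[i - 2][1] if i > 1 else rising_now
        if rising_now != rising_before:
            segments.append((start, i - 1))  # the trend changed direction here
            start = i - 1
    segments.append((start, len(points) - 1))

    sentences = [f"The line graph presents {subject}."]
    for a, b in segments:
        direction = "rising" if points[b][1] >= points[a][1] else "falling"
        sentences.append(
            f"There is a {direction} trend from {points[a][0]} to {points[b][0]}, "
            f"starting at {points[a][1]} and ending at {points[b][1]}."
        )
    return " ".join(sentences)

points = [(1997, 1890), (1998, 20000), (1999, 36840), (2003, 25000), (2006, 12606)]
print(describe_trend(points, "the number of sales of leather jackets"))
```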

Interpreting data in tables

Another project demonstrated restructuring data tables in web pages to make them easier to explore with a screenreader.

They have an interesting approach of analysing an HTML table and reorganising it to make it more accessible, abstracting out complex sections into a series of menus.

For example, given a table such as this:

it can produce navigable menus such as this:

Even quite complex tables, with row and column spans, which would otherwise be quite difficult to interpret if read row-by-row by a screenreader, are made much more accessible.
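
I don’t know the details of their restructuring algorithm, so this is just a hedged sketch of the general idea in Python: take a simple header-plus-rows table and reshape it into a nested menu keyed first by row, then by column, so it can be explored level by level rather than read cell by cell. The table contents are illustrative (the GDP figures are the ones quoted earlier in this post; the populations are approximate).

```python
def table_to_menus(header, rows):
    """Return {row label: {column header: cell value}} for menu-style navigation."""
    menus = {}
    for row in rows:
        label, cells = row[0], row[1:]
        menus[label] = dict(zip(header[1:], cells))
    return menus

header = ["Country", "GDP per capita 2001", "Population"]
rows = [
    ["Luxembourg", "44.2 thousand dollars", "0.4 million"],
    ["United States", "35 thousand dollars", "285 million"],
    ["Britain", "25 thousand dollars", "59 million"],
]

menus = table_to_menus(header, rows)
print(list(menus))              # top-level menu: the row labels
print(menus["United States"])   # second level: column -> value
```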

Capti web player

Another technology I saw demonstrated was the Capti web player.

Tools such as instapaper and read it later have shown that we can take most web pages and extract the body text of the article on the page.

This capability should be ideal for visually impaired users, but the tools themselves are still quite difficult to use and integrate poorly with assistive technologies. Someone described them as obviously “designed by sighted people for sighted people”.

Capti combines this capability with an accessible media player making it easy to navigate through an article, move through a list of articles, and so on. To a sighted user like me, it looked like they’ve mashed together instapaper with an audiobook-type media player. I often listen to podcasts while I go running, and am a heavy user of pocket and Safari’s reading list. So this looks ideal for me.

Multiple simultaneous audio streams

Finally, one fascinating project looked at how to make it quicker to scan large amounts of content with a screenreader to find a specific piece of information. I’ve written before that relying on a screenreader (which creates a sequential audio representation of the information on the page, starting at the beginning and going through the contents) can be tremendously time-consuming, and that it results in visually impaired users taking considerably more time to find information on the web.

This project investigated whether this could be improved by using multiple simultaneous sound sources.

It sounds mad, but they’re starting from observations such as the cocktail party effect – that in a noisy room with several conversations going on, we’re able to pick out a specific conversation that we want to listen for. Or that a student not paying attention in a lecture will hear if a lecturer says something like “this will be on the exam”.

They’re looking at a variety of approaches, such as separating the channels directionally, so one audio stream will sound like it’s coming from the left, while another is in front. Or having different voices, such as different genders, for the different streams. It’s an intriguing idea, and I’d love to see if it could be useful.


Web technologies I saw at W4A

WWW2013

Last month I went to the International World Wide Web Conference for w4a. I saw a lot of cool web technologies and accessibility projects while I was there, so thought I would share links to some of the more interesting bits.

There are too many to put in a single post, so I’ll write a few posts to cover them all.

Subtitles

Subtitles and transcripts came up a few times. One study presented looked at online video, comparing single-line subtitle captions overlaid on the video with multi-line off-screen transcripts adjacent to it.

It examined which is more effective from a variety of perspectives, including readability, reader enjoyment, the effect on understanding and so on. In summary, it found that overlaid captions are generally better, although transcripts are better for content which is more technical.

Real-time transcription from a stenographer at W4A

We had subtitles for all the talks and presentations. Impressively, a separate screen projected a live transcription of the speaker. For deaf attendees, it allowed them to follow what the speaker was saying. For talks given in Portuguese, the English subtitles allowed non-Portuguese speakers like me to understand.

They did this by having live stenographers listening to an audio feed from the talks. This is apparently expensive, as stenography is a highly skilled job, and it needs to be scheduled in advance. It’s perhaps only practical for larger conferences.

Legion Scribe

This was the motivation for one of the more impressive projects that I saw presented: Legion Scribe, which crowd-sourced real-time captioning so that you wouldn’t need an expert stenographer.

Instead, a real-time audio stream is chopped up into short bits, and divided amongst a number of people using Mechanical Turk. Each worker has to type the short phrase fragment they are given. The fragments overlap, so captions that each worker types can be stitched back together to form captions for the whole original audio stream.
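
As a hedged sketch of just the stitching step (my own illustration - the real system also has to cope with timing, disagreement between workers and typing errors), overlapping fragments can be merged by looking for the longest word overlap between the caption so far and the start of the next fragment:

```python
def stitch(fragments):
    """Merge ordered, overlapping word-level fragments into a single caption."""
    words = []
    for fragment in fragments:
        new_words = fragment.split()
        overlap = 0
        # find the longest run of words shared by the end of the caption
        # so far and the start of the new fragment
        for k in range(min(len(words), len(new_words)), 0, -1):
            if words[-k:] == new_words[:k]:
                overlap = k
                break
        words.extend(new_words[overlap:])
    return " ".join(words)

fragments = [
    "thanks everyone for coming to",
    "for coming to this afternoon's session on",
    "afternoon's session on crowd sourced captioning",
]
print(stitch(fragments))
# -> thanks everyone for coming to this afternoon's session on crowd sourced captioning
```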

All of this is done quickly enough to make the captions appear more or less in real-time.

Seriously impressive.

And they’re getting reasonable levels of coverage and accuracy. The system has been designed so that workers don’t need to be experts in the domain that they’re transcribing, as they’re only asked to type in a few words at a time not whole passages. With enough people, it works. If they have at least seven workers, it’s approaching the coverage you can get with a professional stenographer.

Assuming that Mechanical Turk can provide a plentiful supply of workers, this would not only be cheaper than a stenographer, but would also let you start captioning at a moment’s notice, rather than needing to arrange for a stenographer in advance.

Map Reduce in the browser

Speaking of crowd-sourcing, the idea of splitting up a large computing task between a large number of volunteer computers isn’t new. SETI@home is perhaps the best known, while World Community Grid is a recent example from IBM.

But these need users to install custom client software to receive the task, perform it and submit the results.

One project showed how this could be done in web browsers. A large computing task is divided up into map reduce jobs, which are made available through a website. Each web browser that visits the website becomes a map reduce worker, running their task in the background using web workers. As long as the user remains on the site, their browser can continue to contribute to the overall task in the background, without the user having had to install custom client software.

It’s an elegant idea. Not all sites would be well suited to it, but there are plenty of web sites that I keep open all day (e.g. GMail, Remember The Milk, Google Calendar, etc.) so I think the idea has potential.

Migrating browser sessions

An interesting project I saw showed how the state of a browser app could be migrated from one browser to another - potentially a different browser running on a different machine, even on a different platform.

This is more than just the client-server session, which you could migrate by transferring cookies. They’re transferring the entire state of dynamic AJAX-y pages: what bits are open, enabled, and so on, for any arbitrary web app.

Essentially, they started by wanting to be able to serialize the contents of window, so that it could be transferred to another browser and restored from there.

That wouldn’t be enough. window doesn’t have access to local variables in functions, it wouldn’t have access to most event listeners such as those added with addEventListener, it wouldn’t have access to the contents of some HTML5 tags like canvas, it wouldn’t have access to events scheduled with setTimeout or setInterval, and so on.

Serializing window gets you the current state of the DOM which is a good start, but not sufficient to transfer the state for most web apps.

A prototype system called Imagen shows how this could be done. Looking at how they’ve implemented it, they’ve had to resort to using a proxy server which intercepts JavaScript going to the browser and instruments it with enough additional calls to let them access all of the stuff that wouldn’t normally be in scope. This is enough for them to be able to serialize the entire state of the page.

I can see a lot of uses for this, such as in testing, debugging or service scenarios, as well as just the convenience of being able to resume work in progress as you move between devices.

Inferring constraints on REST API query parameters

Many web services include constraints and dependencies for the query parameters. For example: “this option is always required”, “that parameter is optional”, or “you have to specify at least one of this or that”. For example, the twitter API docs explain how you have to specify a user_id or screen_name when requesting a user timeline.
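
To make the sort of rules they are trying to infer concrete, here’s a small hedged sketch of how such constraints could be represented and checked before sending a request. The constraint format is my own invention for illustration; only the user_id / screen_name rule comes from the twitter documentation mentioned above.

```python
def check_params(params, constraints):
    """Return a list of violated constraints for a dict of query parameters."""
    problems = []
    for name in constraints.get("required", []):
        if name not in params:
            problems.append(f"missing required parameter: {name}")
    for group in constraints.get("at_least_one_of", []):
        if not any(name in params for name in group):
            problems.append("need at least one of: " + ", ".join(group))
    return problems

# Inferred constraints for a user-timeline style request
user_timeline = {
    "required": [],
    "at_least_one_of": [["user_id", "screen_name"]],
}

print(check_params({"count": 20}, user_timeline))
# -> ['need at least one of: user_id, screen_name']
print(check_params({"screen_name": "example_user", "count": 20}, user_timeline))
# -> []
```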

One project I saw was an attempt to automatically infer these rules and dependencies through a combination of natural language processing to recognise them in API documentation, and automated source code analysis of sample code provided for web services. It combines these into an estimated model of the constraints in the REST APIs, which are then verified by submitting requests to the API.

They demonstrated it on APIs like twitter, flickr, last.fm, and amazon, and it was surprisingly effective.

duolingo

Finally, there was a keynote talk on Wednesday by the founder of duolingo.

reCAPTCHA, an earlier project from the same founder, is particularly interesting because it uses a task that people need to do anyway (verifying that they’re human) to crowd-source the completion of a task that needs to be done (digitising the text of old books that cannot be read by automated OCR).

Duolingo is similar. It takes a task that people need to do, which is to learn a new language, and uses that effort to translate texts into different languages.

It’s better explained by their demo video.

It’s been around for a little while, but I’d not come across it before. Since getting back from www, I’ve been trying it out. Even Grace has been using it to improve her French and seems to be getting on really well with it.

What else?

There were a lot of other cool projects and technologies that I saw, so I’ll follow this up with another post or two to share some more links.

