Online Information Conference 2010


Despite the recession, tube strikes, and snow, there was a fine collection of speakers, exhibitors, and delegates at a smaller than usual Online Information Conference and Exhibition this year.

Librarians seem to be getting heavily into Linked Data, while the corporate sector is still mainly concerned with business intelligence and search.

On day one I enjoyed the practical explanations of how Linked Data principles have been made to work at The Guardian, The Press Association, the Dutch Parliament, and the ALISS health project in Scotland.

Linked Data tags are a form of metadata that can be used to automatically generate content aggregations for web pages. This means that not only can you re-use your own content, increasing its lifespan, but you can also gather cheap content that is openly available online. This is very familiar territory to me, as we used to build products in the same way back in the 90s; the difference is that we didn't have much of an external web to link to back then. Using a linkable, interoperable format for your tags has many benefits, and whether your focus is primarily on content within or beyond a firewall, the arguments for using standards that have the potential to link to the wider world seem very compelling. I can't see any logical reason not to standardise the format your metadata is held in (technical and practical issues are another matter), although standardising the semantic content of the metadata is a far harder problem.
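As a rough illustration of the idea, a Linked Data tag can be thought of as a subject–predicate–object triple attached to a piece of content; a topic page can then be generated automatically by gathering everything that shares a tag URI. This is only a toy sketch (the content IDs, predicates, and DBpedia URIs below are invented for illustration), not any particular publisher's implementation:

```python
def aggregate_by_tag(triples, predicate, value):
    """Collect every content item carrying the given tag (predicate/value)."""
    return sorted({s for s, p, v in triples if p == predicate and v == value})

# Hypothetical tag assertions: (content_id, predicate, value) triples.
triples = [
    ("article-1", "about", "http://dbpedia.org/resource/Diabetes"),
    ("article-2", "about", "http://dbpedia.org/resource/Diabetes"),
    ("article-3", "about", "http://dbpedia.org/resource/Asthma"),
    ("article-2", "source", "internal"),
    ("article-3", "source", "external"),
]

# A topic page for Diabetes is just an automatic aggregation of
# everything tagged with the shared URI.
diabetes_page = aggregate_by_tag(
    triples, "about", "http://dbpedia.org/resource/Diabetes")
```

Because the tag values are URIs from a shared vocabulary rather than local keywords, the same query can in principle pull in anyone else's openly tagged content as well as your own.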

It was reassuring to hear that everyone else is struggling with the problems of who mints IDs and URIs, who settles arguments about what exactly the IDs refer to – especially across domains – and who resolves and manages mappings. Such issues are difficult enough to resolve within a firewall; out on the open Web they become vast. The W3C is starting to work on provenance standards (the parametadata or meta-metadata), a pet worry of mine, because I am certain we need to get that layer of semantic information into our tags as soon as possible if we are going to be able to advance the semantic web beyond crunching databases together.
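One small, concrete version of the provenance worry: a mapping between a local ID and an external URI is itself an assertion that someone made at some point, and consumers need that context to decide whether to trust it. A minimal sketch, with entirely hypothetical identifiers and field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mapping:
    """A cross-domain identifier mapping that carries its own provenance,
    so downstream consumers can judge whether to trust it."""
    local_id: str       # ID minted inside the firewall
    external_uri: str   # URI it is claimed to refer to
    asserted_by: str    # who made or approved the mapping
    asserted_on: str    # ISO date of the assertion

mappings = [
    Mapping("ORG:12345", "http://dbpedia.org/resource/Example_Topic",
            "metadata-team", "2010-11-30"),
]

def resolve(local_id, mappings):
    """Return (external_uri, asserted_by) pairs for a local identifier,
    keeping the provenance attached to each resolution."""
    return [(m.external_uri, m.asserted_by)
            for m in mappings if m.local_id == local_id]
```

Keeping the provenance attached to every mapping, rather than flattening it away, is exactly the layer of semantic information the W3C provenance work aims to standardise.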

In the meantime, Linked Data is working very well especially for mashups and information visualisations. I particularly liked the Dutch Parliament’s “Attaquograms” – a diagram showing how often MPs were interrupted in debates and how much they interrupted others, although it doesn’t appear to have changed their behaviour yet. I also enjoyed The Guardian’s “league tables” of MPs’ performance. When MPs protested that such analyses ignored qualitative issues, The Guardian pointed out that if MPs advocate such data crunching as a way to judge schools and hospitals, then it must be good enough to apply to MPs themselves.
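The data behind something like an Attaquogram is straightforward to derive once debate events are tagged consistently. A toy sketch (the MP names and event format are invented, not the Dutch Parliament's actual data model):

```python
from collections import Counter

# Hypothetical debate transcript events: (interrupter, interrupted).
interruptions = [
    ("MP_A", "MP_B"),
    ("MP_A", "MP_C"),
    ("MP_B", "MP_A"),
    ("MP_A", "MP_B"),
]

# Interruptions each MP made, and interruptions each MP suffered --
# the two axes a diagram like the Attaquogram would plot.
made = Counter(who for who, _ in interruptions)
received = Counter(whom for _, whom in interruptions)
```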

Andy Hyde from the ALISS project is working on ways to use Linked Data to help people manage their health, especially for patients with long term conditions such as diabetes. He stressed the importance of involving the users in any information project and addressing them on their terms, stating “The most collaborative tool we have is the cup of tea”.

My only concern about using Linked Data to pull in Wikipedia content is whether the audience will start recognising it. If every website that mentions a topic has the same Wikipedia content attached to it, won’t people get bored? Perhaps there are just so many people out there contributing, so many varieties of algorithmic aggregation, and so much content to read, that it will never happen!

There is a related Guardian technology blog post.

I will post summaries of days two and three shortly.

Augmented reality


I went to a British Computer Society talk on Augmented Reality a few weeks ago. The BCS audience is typically highly technical, but the talks themselves are always accessible and entertaining. People often wonder why I am interested in augmented reality, because they assume it has nothing to do with information, but to me it is all about information. I would love to be able to serve up archive content to someone’s mobile phone using location data – a clip of a scene from an episode of their favourite programme that was filmed in that location, or an old news report about an event that took place there. Managing vast content collections in a searchable form will be the backbone of many augmented reality tools and applications. If this isn’t an area that information scientists should be exploring, I don’t know what is!
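The location-based archive idea boils down to a proximity query: given the phone's position, find clips whose filming location is nearby. A minimal sketch using the standard haversine formula, with an invented archive index (the clip names and coordinates are illustrative only):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))  # mean Earth radius

# Hypothetical archive index: clips tagged with filming locations.
archive = [
    {"clip": "news-1987-03", "lat": 51.5007, "lon": -0.1246},  # Westminster
    {"clip": "drama-s02e04", "lat": 53.4808, "lon": -2.2426},  # Manchester
]

def clips_near(lat, lon, archive, radius_km=1.0):
    """Return clips filmed within radius_km of the given position."""
    return [c["clip"] for c in archive
            if haversine_km(lat, lon, c["lat"], c["lon"]) <= radius_km]
```

In a real system the linear scan would be replaced by a spatial index, but the information-science problem – tagging every clip with reliable location metadata in the first place – is the hard part.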

The speakers were Professor Anthony Steed, Head of Virtual Environments and Computer Graphics at UCL, and Lester Madden, founder and director of Augmented Planet.

They explained the difference between visual search, true augmented reality, and virtual reality. Visual search uses an image as a search term (as in Google Goggles) and returns results. Because this can be done via a camera, the image can be one in the searcher’s immediate environment, and the results can be returned as an overlay on the original image. True augmented reality is not just adding graphics to an unrelated camera feed, but is responsive to the real surroundings. Virtual reality is an entirely computer-generated environment.

3-D models of the world are being built, but keeping them up to date is proving a challenge, and crowdsourcing may be the only pragmatic option. Another technical challenge suggested was how to render the augmentation visually indistinguishable from “real” vision, which raises all sorts of interesting philosophical and ethical questions about how we handle the behaviour of people who become confused or cease to be able to tell the difference, either temporarily or permanently. At the moment, augmented reality is quite distinct from virtual reality, but eventually the two will presumably meet. However, nobody seems to think that is likely anytime soon.

In the meantime, there was a rather lovely video of an augmented reality audience, designed to help people who have difficulty speaking in public. Apparently, this is a particular problem for those people in the software industry who are not natural extroverts but find that their careers can only advance if they get out from behind the screen and start talking at conferences, trade shows, etc., where audiences can be quite hostile. University students are hopeless at pretending to be a hostile audience – they are too polite, apparently (this week’s events notwithstanding!) – and actors are too expensive. Avatars, however, can be programmed to look bored, chat on their mobiles, get up and walk out, etc., and real people tend to have similar emotional reactions to the behaviour of avatars as they do to other humans, making an augmented reality theatre a perfect place for practising speaking and building confidence.

Augmented reality is also finding practical applications in the construction industry, to create visualisations of buildings before they are constructed, in medicine to help surgeons, and for improved videoconferencing. There are also many ways that augmented reality can be used to sell things – show me information about the restaurant or shoe shop in this street. Amusingly, identifying unique buildings is quite easy, but for the branded chains disambiguation is proving a challenge – their outlets look the same in every town – which brings us back again to familiar information science territory.

There is also a BCS blog post about the event.