Digital Asset Management – DAM EU Conference

I enjoyed DAM EU – a Henry Stewart Event – last Friday and was particularly struck by the range of projects, the variety of approaches, and the diversity of industries represented. Sponsors included Capture, Kit Digital, and iBrams.

I will publish a series of posts over the next few days to cover the event, session by session. It has also been written up by Michael Wells.

The Art and Practice of Managing Digital Media

The conference was chaired and opened by David Lipsey, who has a distinguished career in Digital Asset Management (DAM). He introduced the day by talking about the need for “fungibility and liquidity” to make digital assets earn their keep. He pointed out that DAM is an emerging and maturing field. None of us thought of a career in DAM when we were at school; it is something that we have arrived at during our working lives. Now educational institutions are considering providing DAM courses to meet a growing need for DAM skills. DAM is also starting to become an enterprise-wide infrastructure layer, so “soft” skills, such as gaining cross-departmental buy-in to projects, are becoming increasingly important.

Getting More out of DAM: The Latest Technologies and What is Possible Now

It is always a pleasure to listen to the eminently sensible Theresa Regli of the Real Story Group. She observed that distribution channels are changing and there is now more focus not on gathering assets together, but on how you send out assets in the right format to customers – to all phone platforms, all PC and Mac platforms, etc. Mobile apps are very much in the limelight at the moment, but it is important to see them as just another channel. It is also worth remembering that you don’t need to build specialised apps for all devices, as cross-browser formats may well be adequate. The purpose of the Digital Asset Manager should be to get the right media to the right people in the right format.

To be successful, DAM projects need to handle change management and get people to work in different ways. Creative people want to work collaboratively and simultaneously, so DAM systems need to support this, rather than imposing rigid linear workflow processes.

When assessing DAM technologies, the differences are usually not in the functionality they offer, but in the way they provide that functionality. Too much emphasis is often put on technology to solve problems. Theresa said that she thought technology could solve about 20 per cent of business problems, and even that was probably a bit generous. The core problems are caused by metadata, collaboration, diligence, and governance.

Different people need different views of a DAM system at different times, so customisation and personalisation are useful – for example, providing limited functionality on an iPad app that enables workers who are away from their desks to access specific aspects of a system. However, a danger to avoid is creating multiple versions of the same asset. It is much better to have a master version and transcode out to each device as necessary.
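
As a rough illustration of the “single master, transcode per channel” pattern, here is a minimal sketch. The use of ffmpeg and the rendition profiles are my own illustrative assumptions, not a description of any particular DAM product:

```python
# Minimal sketch: keep one master file and generate device renditions on demand,
# rather than storing parallel copies of the same asset.
# Assumes ffmpeg is installed; the profiles and naming are purely illustrative.
import subprocess
from pathlib import Path

RENDITION_PROFILES = {
    "ipad": ["-vf", "scale=1024:-2", "-b:v", "2M"],
    "phone": ["-vf", "scale=640:-2", "-b:v", "800k"],
}

def transcode(master: Path, profile: str, out_dir: Path) -> Path:
    """Create a single device rendition from the master asset."""
    out = out_dir / f"{master.stem}_{profile}.mp4"
    cmd = ["ffmpeg", "-y", "-i", str(master), *RENDITION_PROFILES[profile], str(out)]
    subprocess.run(cmd, check=True)
    return out
```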

New developments in metadata include colour palette searching, which is popular with creatives more interested in moods than particular things. Another development is Avid’s “layered metadata”, which offers the ability to annotate directly on to video, as well as having general metadata for the whole asset. Much work has been done to improve video handling tools, so automated scene detection and facial recognition software are now becoming available in products.
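
Colour palette searching is easier to picture with a sketch of the indexing side: reduce each image to a handful of representative colours that can later be matched against a searcher’s chosen swatch. The use of Pillow and the approach below are my own assumptions, not a description of any vendor’s implementation:

```python
# Illustrative sketch of colour-palette indexing: quantise each image down to a
# few representative colours that a mood-based search could match against.
from PIL import Image

def representative_colours(path: str, count: int = 5) -> list[tuple[int, int, int]]:
    """Return `count` representative colours from the image's adaptive palette."""
    img = Image.open(path).convert("RGB")
    quantised = img.quantize(colors=count)          # adaptive palette reduction
    palette = quantised.getpalette()[: count * 3]   # flat [r, g, b, r, g, b, ...]
    return [tuple(palette[i:i + 3]) for i in range(0, len(palette), 3)]
```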

When buying a DAM system, it is worth remembering that the now dominant technologies are relatively new, and that what may be ideal at a workgroup level may not scale up to enterprise level.

How to be a Good DAM Customer

Sarah Saunders of Electric Lane spoke on an often overlooked, but vital, aspect of the DAM procurement process – how to be a good customer. She pointed out that you get to choose your DAM vendor, but they don’t get to choose you. Projects can be derailed by customers being unsure what they want, being poor project managers, or having unrealistic expectations of what they can get for their money. It is in no-one’s interests to drive the software vendor out of business.

For a project to be a success, it is important to have a project leader who is not part of any of the stakeholder departments and is empowered to make decisions and overrule any individual department if their focus on their own departmental needs threatens the entire project. Creative people and IT people tend to think very differently, to the extent that they can completely misunderstand each other. So, it is not enough just to get people to sign off a complex requirements document; they need to understand it on their own terms, otherwise they are likely to be disappointed. Building a system that the creatives are happy using, even though their requirements may seem bizarre to the IT department, is just as important as keeping documentation in good order, something the creatives might not appreciate.

One of the hardest aspects is working out what you really want from the system, and where you are prepared to compromise in order to get something that can actually be built for the budget. In terms of functionality, it is important to think beyond what people ask for, as that is likely to just be what they are using already, to what they would like if they knew it existed.

It is also important to see any system working in practice, as demos may not be enough to know that it will work with your data and business processes. A small delay in keywording speed can have a huge impact on productivity – for example, an extra two seconds per asset across a collection of 100,000 assets adds up to more than 50 hours of work.

An iterative build process is good if needs are complex. It can be expensive, but may still be cheaper than ending up with an underperforming system.

Some customers expect the vendor to do all the thinking for them, but the vendors do not know the business needs and processes of every single business. It is not an easy job to be a software vendor, and a difficult, disorganised customer will always end up having a hard time.

Panel Session: How We Did Our Procurement

Laura Caulkin from Net-a-Porter and Lisa Hayward from Shell described their recent DAM procurement projects.

Net-a-Porter took a very user-centric approach, running lots of requirements gathering workshops and writing user stories rather than a requirements spreadsheet. They spent 6-8 months planning what they wanted the new system to do, with the help of external expert consultants.

At Shell the build process of their new DAM system took about 18 months and involved the migration of 50,000 assets. They selected Vyre as a good cultural fit. Lisa stressed the importance of getting vendors to demo their products with your data and felt that detailed requirements documents and scoring systems were worthwhile. She argued for the “soft” value of a good cultural fit between the team and the vendor to be included in the scores, which the IT team did not agree with. She pointed out that the IT team have a very different perspective: they want the product that will be easiest to integrate for their short-term project, but it is you who are left to work with the system on a daily basis.

In terms of lessons learned, she felt that their initial requirements were not specific enough, and that this led to change requests and delays. This caused problems with senior managers, who had not expected the project to take so long. However, once it was built, it was well received. Lisa pointed out that change management is really just talking to people and explaining what is going on. They had great success with the adoption of the new system as it was such a vast improvement on the system that had been built for them 10 years previously.

The Organizational Digital Divide

Catching up on my reading, I found this post by Jonah Bossewitch: Pick a Corpus, Any Corpus and was particularly struck by his clear articulation of the growing information gulf between organizations and individuals.

I have since been thinking about the contrast between our localised knowledge organization systems and the semantic super-trawlers of the information oceans that are only affordable – let alone accessible – to the megawealthy. It is hard not to see this as a huge disempowerment of ordinary people, swamping the democratizing promise of the web as a connector of individuals. The theme has also cropped up in KIDMM discussions about the fragmentation of the information professions. The problem goes far beyond the familiar digital divide, beyond just keeping our personal data safe, to how we can render such meta-industrial scale technologies open for ordinary people to use. Perhaps we need public data mines to replace public libraries? It seems particularly bad timing that our public institutions – our libraries and universities – are under political and financial attack just at the point when we need them to be at the technological (and expensive) cutting edge.

We rely on scientists and experts to advise us on how to use, store and transport potentially hazardous but generally useful chemicals, radioactive substances, even weapons, and information professionals need to step up to the challenges of handling our new potentially hazardous data and data analysis tools and systems. I am reassured that there are smart people like Jonah rising to the call, but we all need to engage with the issues.

There’s no such thing as originality

Back in 1995 my brother wrote his MA dissertation on copyright: Of Cows and Calves: An Analysis of Copyright and Authorship (with Implications for Future Developments in Communications Media) or How I Learned To Stop Worrying and Love Home-Taping. It is interesting how relevant it remains today, especially in the light of the Hargreaves Report delivered to the government in May. Essentially, nothing much has changed in the intervening 16 years. My brother reflected on the predictions of the profound changes that digital technologies would make – and were already making – to the creative industries. Although details such as the excitement over ISDN lines and the absence of any mention of mobile technologies date his work, the core issues he covers – who owns an idea and who should get paid for it – remain remarkably current.

I’ve written a brief overview of the Hargreaves Report for Information Today, Europe. The two aspects of most interest to me are the proposals for a Digital Copyright Exchange and for handling of orphan works.

Ideas as objects

My brother argues that creative works are all part of an ongoing cultural dialogue that no one individual can really “own” and that copyright only made sense for the short period of time when technology reified ideas as artefacts that could be traded as commodities (like potatoes or coal). The business model of “content” as “physical item” started to fail with the invention of the printing press, as the process of copying ceased to be a creative act, so each individual copy was not a “new” work in its own right. Copyright law was developed to commodify the “idea” within a book, not the physical book, and was enforceable for only as long as access to the copying technologies – printing presses – could be limited. The digital age has made control of the copying process impossible, as the computer replaced the printing press, so one could exist on every desk, and now, thanks to mobile technology, in every pocket. He notes that in pre-literate societies, authorship of a myth or folk tale was not important, and I find it interesting that crowdsourcing (e.g. Wikipedia, citizen journalism) has in some ways returned us to the notion of a culturally held store of knowledge contributed and curated by volunteers, rather than by paid professionals.

Music is not the only art

Many of the discussions of free versus paid-for content seem to run to extremes, coloured by the popular music industry’s taste for excess. The music industry inflated commodity prices far beyond what consumers were willing to pay just as cheap copying technologies became widely available, making the pirates feel morally justified. It is hard to feel sympathy for people living a millionaire rockstar lifestyle. The inevitable increase in piracy was met not by lower prices, but by the industry issuing alarmist statements about home taping killing music. It didn’t! Music industry profits have risen steadily. The industry has simply turned its attention to charging more for merchandise and live events. The lesson to be learned is that people are willing to pay for experiences, services, and commodities that they perceive as being worth the price and better than the alternatives. Most music fans would rather pay for an easy, virus-free, reliable download service than deal with illegal download sites, just as back in the 1980s sticking a microphone in front of the radio to record the charts wasn’t as good as buying the vinyl. The effects on the reference publishing industry were very different, affecting many small businesses and people on far less than rockstar wages, but most displaced people found ways to transfer their skills to numerous new areas of work – obvious examples are content strategy or user experience design – that simply didn’t exist in the 1990s.

It has become a bit of a cliché that there aren’t any business models that work, or have been shown to work, in the new digital economy, but are things really so different? In order to have a business, somebody somewhere has to be persuaded to pay for something. Everything else is just a complication. If your free content is supported by advertising, it just means that someone needs to be persuaded to pay for the advertised product, instead of the free bit that appears on the way. Similarly, “freemium” is really just old-style free samples and loss leaders. You can’t have the “free” bit without paying for the “premium”. The two key questions for producers remain, as they always have been: how do you produce content that is so useful, entertaining, or attractive that people are willing to pay for it, and how do you deliver it in ways that make the buying process as easy as possible?

Hargreaves suggests a light touch towards enforcement of rights and anti-piracy, firmly supporting the view that if content and services are good enough, people will pay, and that education about why artists have a right to be paid for their work is as important as catching the pirates. Attempts to “lock” copies with Digital Rights Management (DRM) systems certainly don’t seem to have been very successful. They are expensive to implement, unpopular, and pirates always manage to hack them. Watermarking doesn’t attempt to prevent copying but does help prove origin if a breach is discovered. Piracy is less of a worry for business-to-business trade, as most legitimate businesses want to be sure they have the correct rights and licences for content they use, rather than face embarrassing and expensive lawsuits, and a simplified, secure Digital Copyright Exchange would presumably be in their interests.

Digital Copyright Exchange

Hargreaves proposes the Digital Copyright Exchange as a mechanism to make the buying and selling of rights far easier. At the moment, piracy can be a temptation because of the time and effort required to attempt to purchase rights. Collections agencies and the law form layers of bureaucracy that hamper start-ups from developing new products and simply confuse ordinary users. This represents real lost revenue to the content providers.

Metadata analyst and music fan Sam Kuper made an interesting proposal for setting fair prices – that artists should put a “reserve price” on their work, with an initial fee for purchasers. Once the “reserve price” has been reached, the revenue from any subsequent purchase is shared between the artist and the early purchasers. This would guarantee a level of income for artists, allow keen fans to get hold of new material quickly, and allow those less sure to wait to see if the price drops before purchase. Such a system sounds complex, but could work through some kind of centralised system, so that “returns” to early purchasers would be issued as credits to their accounts.
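
As I understand the proposal, it could be modelled roughly like this – the 50/50 split and equal sharing among early purchasers are my own illustrative assumptions, not part of Sam’s suggestion:

```python
# Rough sketch of the "reserve price" idea: early buyers pay the full fee until
# the artist's reserve is met; after that, each sale is split between the artist
# and the early purchasers as account credits. Split ratios are illustrative.
def settle_sale(fee: float, reserve: float, total_so_far: float,
                early_buyers: list[str]) -> dict[str, float]:
    """Return how one new sale's fee is distributed."""
    if total_so_far < reserve:
        return {"artist": fee}                 # still recouping the reserve
    artist_share = fee * 0.5
    per_buyer = (fee - artist_share) / len(early_buyers) if early_buyers else 0.0
    payout = {"artist": artist_share}
    payout.update({buyer: per_buyer for buyer in early_buyers})
    return payout

# For example, once the reserve is met, a £5 sale with four early buyers credits
# £2.50 to the artist and £0.625 to each early buyer's account.
```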

From an archive point of view, Hargreaves’s call to allow the digitisation and release of orphan works without endless detective work in trying to trace origins would be a huge boon.

So, there is much to think about in the Hargreaves report and some very sensible practical suggestions, but much detail to be worked out as well. I wonder if in another 16 years, my brother and I will have seen any real change or will we still be going through the same debate?

Review of The Accidental Taxonomist

Rather late to the party on this one, but I finally got around to reading The Accidental Taxonomist by Heather Hedden. I have to confess to bias as I was very pleased to see that my FUMSI article on folksonomies was mentioned in the recommended reading section. Writing in a clear, sensible and readable tone, Hedden gives a very thorough overview of practical taxonomy work. The book works as a textbook but reads pleasantly, and although I anticipate referring to it as a reference resource, I enjoyed reading it chapter by chapter.

I am very pleased to have so much practical and useful information in one place (for example lists of relevant standards, definitions of taxonomies, ontologies, and thesauri, the functions of taxonomies) as in my day-to-day work, I often have to explain the basics to people. I have been recommending the book to my team, especially those who are new to taxonomies, and they have appreciated its clarity and comprehensiveness as a “field guide”. It covered familiar ground, but for me much of that was my “tacit knowledge” that I had never fully articulated to myself, so I am sure that this “knowledge capture” from the mind of an experienced taxonomy practitioner will be very useful.

UK Archives Discovery Forum

I very much enjoyed the UKAD UK Archives Discovery Forum event at the National Archives. There were three tracks as well as plenary sessions, so I couldn’t attend everything.

Linked Data and archives

After an introduction from Oliver Morley, John Sheridan opened by talking about the National Archives and Linked Data. Although not as detailed as the talk he gave at the Online Information Conference last December, he still gave the rallying call for opening up data and spoke of a “new breed” of IT professionals who put the emphasis on the I rather than the T. He spoke about Henry Maudslay, who invented the screw-cutting lathe, which enabled standardisation of nuts and bolts. This basically enabled the industrial revolution to happen. Previously, all nuts and bolts were made individually as matching pairs, but because the process was manual, each pair was unique and not interchangeable. If you lost the bolt, you needed a new pair. This created huge amounts of management and cataloguing of individual pairs, especially if a machine had to be taken apart and re-assembled, and meant interoperability of machinery was almost impossible. Sheridan asserted that we are at that stage with data – all our data ought to fit together, but at the moment all the nuts and bolts have to be hand crafted. Linked Data is a way of standardising so that we can make our data interchangeable with other people’s. (I like the analogy because it makes clear the importance of interoperability, but obviously getting the nuts and bolts to fit is only a very small part of what makes a successful machine, let alone a whole factory or production line. Similarly Linked Data isn’t going to solve broad publishing or creative and design problems, but it makes those big problems easy to work on collaboratively.)

Richard Wallis from Talis spoke about Linked Data. He likes to joke that you haven’t been to a Linked Data presentation unless you’ve seen the Linked Open Data cloud diagram. My version is that you haven’t been to a Linked Data event unless at least one of the presenters was from Talis! Always an engaging speaker, he gave very helpful descriptions of the compartmentalisation of content and the distinctions between Linked Data, Open Data, and Linked Open Data. He likes to predict evangelically that the effects of linking data will be more profound for the way we do business than the changes brought about by the web itself. Chatting to him over tea, I learned that a year ago people were curious about Linked Data and just wanted to find out what it could do, but this year they are feeling a bit more comfortable with the concepts and are starting to ask how they can put them into practice. There certainly seemed to be a lot of enthusiasm in the archive sector, which is generally cash-strapped, but highly co-operative, with a lot of people passionate about their collections and their data and eager to reach as wide an audience as possible.
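
For anyone who hasn’t seen Linked Data in the flesh, here is a tiny sketch of what “making the nuts and bolts fit” looks like in practice – a couple of triples describing an archive item with a shared vocabulary, written in Python with rdflib. The URIs and item details are invented for illustration:

```python
# A tiny Linked Data sketch: describe an archive item with shared vocabularies
# (here Dublin Core terms) so it can interlink with other collections' data.
# All URIs and details below are invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS

ARCHIVE = Namespace("http://example.org/archive/")

g = Graph()
item = ARCHIVE["item/1234"]
g.add((item, DCTERMS.title, Literal("Letter concerning a screw-cutting lathe, 1810")))
g.add((item, DCTERMS.creator, ARCHIVE["person/maudslay"]))
g.add((item, DCTERMS.date, Literal("1810")))

print(g.serialize(format="turtle"))
```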

A Vision of Britain

Humphrey Southall introduced us to A Vision of Britain, which is a well-curated online gazetteer of Britain, with neat functions for providing alternative spellings of placenames, and ways of tackling the problems of boundaries, especially of administrative divisions, that move over time. I’m fascinated by maps, and they have built in some interesting historical map functionality too.

JISC and federated history archives

David Flanders from JISC talked about how JISC and its Resource Discovery Task Force can provide help and support to educational collections especially in federation and Linked Data projects. He called on archives managers to use hard times to skill up, so that when more money becomes available staff are full of knowledge, skills, and ideas and ready to act. He also pointed out how much can be done in the Linked Data arena with very little investment in technology.

I really enjoyed Mike Pidd’s talk about the JISC-funded Connected Histories Project. They have adopted a very pragmatic approach to bringing together various archives and superimposing a federated search system based on metadata rationalisation. Although all they are attempting in terms of search and browse functionality is a simple set of concept extractions to pick out people, places, and dates, they are having numerous quality control issues even with those. However, getting all the data into a single format is a good start. I was impressed that one of their data sets took 27 days to process and they still take delivery of data on drives through the post. They found this was much easier to manage than FTP or other electronic transfer, just because of the terabyte volumes involved (something that many people tend to forget when scaling up from little pilot projects to bulk processes). Mike cautioned against using RDF and MySQL as processing formats. They found that MySQL couldn’t handle the volumes and that RDF was too “verbose”. They chose a purely Lucene-based solution, which enabled them to bolt in new indexes rather than reprocess whole data sets when they wanted to make changes. They can still publish out to RDF.
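
To make concrete why even “simple” concept extraction throws up quality control issues, here is a toy sketch of date extraction in Python. This is emphatically not the Connected Histories pipeline, just an illustration of the approach – and even this trivial pattern misses abbreviated months, date ranges, regnal years, and so on:

```python
# Toy illustration of concept extraction for dates written as "12 March 1807".
# Real historical text has far messier variants, which is where the quality
# control problems come from.
import re

DATE_PATTERN = re.compile(
    r"\b(\d{1,2})\s+(January|February|March|April|May|June|July|"
    r"August|September|October|November|December)\s+(\d{4})\b"
)

def extract_dates(text: str) -> list[str]:
    """Pull out dates in 'day month year' form; many real variants are missed."""
    return [" ".join(m.groups()) for m in DATE_PATTERN.finditer(text)]

print(extract_dates("Tried at the Old Bailey on 12 March 1807 and 3 April 1807."))
# ['12 March 1807', '3 April 1807']
```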

Historypin

Nick Stanhope enchanted the audience with Historypin, an offering from wearewhatwedo.org. Historypin allows people to upload old photos, and soon also audio and video, and set them in Google Street View. Although Flickr has some similar functions, Historypin has volunteers who help to place the image in exactly the right place, and Google have been offering support and are working on image recognition techniques to help place photos precisely. This allows rich historical street views to be built up. What impressed me most, however, was that Nick made the distinction between subjective and objective metadata, his definition being that objective metadata can be corrected and subjective metadata cannot. So, he sees objective metadata as the time and the place that a photo was taken – if it is wrong someone might know better and be able to correct it – and subjective metadata as the stories, comments, and opinions that people have about the content, which others cannot correct – if you upload a story or a memory, no-one else can tell you that it is wrong. We could split hairs over this definition, but the point is apposite when it comes to provenance tracking. He also made the astute observation that people very often note the location that a photo is “of”, but it is far more unusual for them to note where it was taken “from”. However, where it was taken from is often more use for augmented reality and other applications that try to create virtual models or images of the world. Speaking to him afterwards, I asked about parametadata, provenance tracking, etc. and he said these are important issues they are striving to work through.
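
Nick’s objective/subjective distinction is easy to express as a data model. Here is a minimal sketch – the field names and structure are my own assumptions rather than anything Historypin has published: objective fields keep a single current value plus a log of corrections, while subjective fields only ever accumulate.

```python
# Sketch of the objective/subjective metadata distinction as a data model.
from dataclasses import dataclass, field

@dataclass
class PinnedPhoto:
    # Objective metadata: correctable, so keep one current value and a
    # history of who changed it.
    latitude: float
    longitude: float
    year_taken: int
    correction_log: list[str] = field(default_factory=list)

    # Subjective metadata: not correctable, so contributions only append.
    stories: list[str] = field(default_factory=list)

    def correct_location(self, lat: float, lon: float, who: str) -> None:
        self.correction_log.append(
            f"{who}: ({self.latitude}, {self.longitude}) -> ({lat}, {lon})")
        self.latitude, self.longitude = lat, lon

    def add_story(self, story: str) -> None:
        self.stories.append(story)   # never overwritten, never "corrected"
```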

Women’s history

Theresa Doherty from the Women’s Library ended the day with a call to stay enthusiastic and committed despite the recession. She pointed out that it is an achievement that archives are still running despite the cuts, that this shows how valued data and archives are in the national infrastructure and how important recording our history is, and that while archivists continue to value their collections, enjoy their visitors and users, and want their data to reach a wider audience, the sector will continue to progress. She described how federating the Genesis project within the Archives Hub had boosted use of their collections, but pointed out that funders of archives need to recognise that online usage of collections is just as valid as getting people to physically turn up. At the moment funding is typically allocated on visitor numbers through the doors, which puts too much emphasis on trying to drag people in off the street at the expense of trying to reach a potentially vast global audience online.

Serendipity and large video collections

I enjoyed this blog post: On Serendipity. Ironically, it was recommended to me, and I am now recommending it!

Serendipity is rarely of use to the asset manager, who wants to find exactly what they expect to find, but is a delight for the consumer or leisure searcher. People sometimes cite serendipity as being a reason to abandon classification, but in my experience classification often enhances serendipity, which can be lost in simple online search systems.

For example, when browsing an alphabetically ordered collection in print, such as an encyclopedia or dictionary, you just can’t help noticing the entries that sit next to the one you were looking for. This can lead you to all sorts of interesting connections – for example, looking up crescendo, I couldn’t help noticing that crepuscular means relating to twilight, and that there is a connection between crepe paper and the crepes you can eat (from the French for “wrinkly”), but crepinette has a different derivation (from the French for “caul”). What was really interesting was the fact that there was no connection, other than an accident of alphabetical order. I wasn’t interested in things crepuscular, or crepes and crepinettes, and I can’t imagine anyone deliberately modelling connections between all these things as “related concepts”.
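
The “dictionary neighbours” effect is simple enough to sketch in a few lines of Python – the word list and window size below are just illustrative:

```python
# Minimal sketch of alphabetical serendipity: look up one entry in an ordered
# collection and show whatever happens to sit either side of it.
import bisect

def neighbours(entries: list[str], term: str, window: int = 2) -> list[str]:
    """Return the looked-up term plus its alphabetical neighbours."""
    ordered = sorted(entries)
    i = bisect.bisect_left(ordered, term)
    return ordered[max(0, i - window): i + window + 1]

words = ["crepe", "crepinette", "crepuscular", "crescendo", "crescent", "cress"]
print(neighbours(words, "crescendo"))
# ['crepinette', 'crepuscular', 'crescendo', 'crescent', 'cress']
```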

Wikipedia’s “random article” function is an attempt to generate serendipity algorithmically. On other sites the “what people are reading/borrowing/watching now” functions use chronological order to throw out unsought items from a collection in the hope that they will be interesting. Twitter’s “trending topics” use a combination of chronological order and statistics on the assumption that what is popular just now is intrinsically interesting. These techniques look for “interestingness” out of what can be calculated, and it is easy to see how they work, but semantic web enthusiasts aim to open up to automated processing the kind of free-associative links that human brains are so good at generating.

Identifiers for asset management

I met Rob Wilson of RCWS consulting at the DAM London conference last summer, where he talked about digital identifiers. He has had a long career working on information architecture for a range of organisations and industry standards bodies, including the ultimately doomed e-GMS project, which was an attempt to unify government metadata standards for sharing and interoperability.

Identifiers are a hot topic at the moment, and so I was pleased Rob was willing to talk to me and some of my colleagues about some of the problems and attempts at solving them in the wider industry. Rob described the need for stable and persistent IDs in asset management, noting that this is not a new problem but one that has been worked on by various groups for many years. He distinguished between ID management and metadata management, pointing out that metadata may change but the ID of an asset can be kept stable. In just the same way that managing metadata as a distinct element of content is important, so IDs need to be managed as distinct from other metadata.

Rob is an independent consultant and has no commercial affiliation with the EIDR content asset identifier system, but suggested it is a robust model in many ways. The EIDR system embeds IDs within asset files, using a combination of steganography, strong encryption, and watermarking and “triangulation” of combined aspects to create a single ID. The idea is that the ID is so deeply ingrained and fragmented within the structure of the file that it cannot easily be removed and can be recovered from a small fraction of the file – e.g. if someone takes a clip from a video, the “parent” ID is discoverable from the clip. Rob thought the EIDR system was better than digital rights management (DRM) methods, which rely on trying to prevent distribution, and “locking” content after a certain amount of time or use, which gives an incentive to everyone who has such content to try to break the DRM system. If an individual can still access their illegally held content despite the ID, they have little incentive to try to remove the ID.
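
As a purely illustrative sketch of why an identifier that is repeated throughout a file can survive clipping, here is a trivial Python example. To be clear, this reproduces none of EIDR’s actual steganography, encryption, or watermarking – the point is only that redundancy lets a “parent” ID be recovered from a fragment:

```python
# Illustrative only: interleave an identifier throughout a payload so that any
# sufficiently large fragment still contains it. A real scheme hides and
# protects the ID; this sketch makes no attempt to do so.
ASSET_ID = b"ID:10.5240/EXAMPLE"   # hypothetical identifier

def embed(payload: bytes, chunk: int = 4096) -> bytes:
    """Place the ID marker ahead of every chunk of the payload."""
    out = bytearray()
    for i in range(0, len(payload), chunk):
        out += ASSET_ID + payload[i:i + chunk]
    return bytes(out)

def recover(fragment: bytes) -> bytes | None:
    """Find the ID in any fragment at least one chunk long."""
    pos = fragment.find(ASSET_ID)
    return fragment[pos:pos + len(ASSET_ID)] if pos >= 0 else None
```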

The system does not try to prevent theft from source, but helps to prove when copyright has been breached, because the ID – the ownership – remains within the file. It is intended as a tool to deter systematic copyright theft by large organisations, by making legal cases easy to win. Large organisations typically have the funds, or insurance, to cover large claims, unlike individuals, so it is unlikely to be cost-effective to pursue individuals.

The EIDR system also has an ID resolver that manages rights and supply authentication (as well as ownership) so that when content is accessed, the system checks that the appropriate rights and licences have been obtained before delivery.

Rob also outlined the elements of information architecture that need to be considered for unified organisational information management – establish standards, develop models, devise policies, select tools, maintain governance, etc. He emphasised that IDs are not a “silver bullet” to solve all issues, but that if a few problematic key use cases are known, they can be investigated to see if robust federated ID management architecture would help.

Online Information Conference – day three

Managing content in a mobile world

On Day 3, I decided to try the “Mobile content” track. The speakers were Alan Pelz-Sharpe from The Real Story Group, Russell Reeder from LibreDigital, and Dave Kellogg from MarkLogic.

Augmented reality is one of my pet topics, so I was pleased to hear the speakers confirm it is all about data and meaning. It is just one aspect of how consumers want more and more data and information presented to them without delay on smaller and simpler devices. However, this means a greater need for metadata and more work for usability specialists.

The whole way people interact with content is very different when they are on the move, so simply trying to re-render websites is not enough. Entire patterns of information needs and user behaviour have to be considered. “A mobile person is not the same as a mobile device…the person needs to be mobile, not necessarily the content.” For example, mobile workers often prefer to contact a deskbound researcher and get answers sent to them, not do the research themselves while on the move.

It is not enough just to worry about the technological aspects, or even just the information architecture aspects of versions of content for mobile users. A different editorial style needs to be used for small screens, so content needs to be edited to a very granular level for mobile – no long words!

Users don’t care about formats, so may get a very bad impression of your service if you allow them to access the wrong content. One customer was cited as complaining that they could watch YouTube videos easily on their phone (YouTube transcodes uploads so they are low-res and streamable), but another video caused huge problems and took ages (it turned out to be a download of an entire HD feature film).

The session made me feel quite nostalgic, as back in 1999 we spent much time pondering how we would adapt our content to mobile devices. Of course, then we were trying to present everything on tiny text-only screens – 140 characters was seen as a luxury. There is just no comparison with today’s multifunctional multicoloured shiny touch screen devices.

New World of Search – Closing Keynote

I think every conference should include a speaker who rises above day-to-day business concerns and looks at really big pictures. Stephen Arnold outlined the latest trends in search technologies, both open source and proprietary, and how people are now getting better at picking elements from different systems, perhaps combining open source with proprietary options and building modular, integrated systems to prevent lock-in. However, he also talked engagingly about digital identity, privacy, and the balance of power between individuals and large organisations in the digital world.

He reiterated the point that Google (and other search engines) are not free. “Free is not what it seems to the hopeful customer” but that we haven’t yet worked out the value of data and content – let alone information and knowledge – in the light of the digital revolution. Traditional business models do not work and old economic theories no longer apply: “19th century accounting rules don’t work in the 21st century knowledge economy”.

He noted that Facebook has managed to entice users and developers to give up their content, work, time, and intellectual property to a locked-in proprietary walled garden. People have done this willingly and apparently without prompting, enabling Facebook to achieve something that software and content vendors such as IBM and AOL have been trying unsuccessfully to do for decades.

There is no clear way of evaluating the service that Facebook provides against the value of the content that is supplied to it free by users. However, it is clear that it is of huge value in terms of marketing. It is possibly the biggest and most sophisticated marketing database ever built.

As well as content, people are willing to surrender personal information, apparently with minimal concerns for privacy and security. It is not clear what the implications of this are: “What is the true cost of getting people to give up their digital identities?” It is clear that the combined data held by Facebook, Google, Foursquare, mobile phone companies, credit card companies, travel card companies, etc. creates a hugely detailed “digital profile” of human lives. Stephen urged the audience to take seriously the potential for this to be used to cause harm to individuals or society – from cyberstalking to covert government surveillance – because technology is moving so fast that any negative social and political effects may already be irreversible.