Digital Asset Management – DAM EU Conference – Third Session


Sustaining your DAM

Sara Winmill from the V&A talked about the huge shifts in mindset that were needed to accompany their DAM work. They needed to stop thinking about storing pictures of things and start thinking about managing those digital images as the things. Their storage needs were vastly underestimated at first. Contrary to the myth, storage is not so cheap – the V&A needs some £330K annually for storage. They have been investigating innovative approaches to “backup bartering” – finding a similar organisation and storing a copy of each other’s data, so that backups exist offsite without the expense of using commercial storage companies.

Despite having a semantically enabled website, they have not been able to link their Library Catalogue’s MARC records with the images, and have three sets of identifiers that are not mapped.

One of their major DAM problems is trying to stop people storing multiple copies and refusing to delete anything. The core collection images need to be kept, but publicity and marketing material is now being stored in the system without any selection and disposal policies in place. The original system was designed without a delete button at all.

Can We Fix It? Yes We Can! Successfully Implementing a Multi-faceted DAM System at HiT Entertainment

It was a pleasure to hear of Tabitha Yorke’s successful DAM implementation at HiT as they built their first digital library. This was a relatively constrained collection and two full-time members of staff were able to catalogue it in a year. This provided the metadata they needed for a straightforward taxonomy-based search system that is simple and easy to use. This meant that self-research was supported, saving the team much time and increasing productivity hugely. They are now working to integrate the library with rights systems. They worked hard at getting users to test the metadata and made sure that they were cataloguing with terms the users wanted to search with, rather than those that occurred first to the cataloguers. They now have two digital librarians managing 150,000 assets.

Tabitha stayed on the stage and was joined in a panel session by David Bercovic, Digital Project Manager at Hachette UK, and Fearghal Kelly of Kit Digital. The afternoon ended with David Lipsey’s concluding remarks.

Digital Asset Management – DAM EU Conference – Second Session


Serco Artemis Digital – Realising the Value of Archives and Rehabilitating Prisoners

Bruce Hellman from Serco described the work they have been doing to employ prisoners as cataloguers and transcribers. The work varied from project to project, but included typing up handwritten archival documents that were not suitable for OCR capture techniques and adding metadata; it proved very popular with the prisoners.

Bruce argued that it gave them a chance to develop skills that would be useful in the workplace on their release, and allowed organisations to get work done more cheaply than by paying standard market rates.

How Metadata and Semantic Technologies will Revolutionise your Workflow

John O’Donovan of the Press Association gave an entertaining presentation about using semantic technologies to index (or re-index) content from a range of systems, including legacy systems and external feeds, and publish it to the web. He pointed out – with a series of amusing ambiguities and unintentional innuendos – that simple text search lacks context, and that newspaper headlines often contain jokes, ambiguous terms, and terms that quickly become obsolete. Metadata is therefore vital in assembling assets that are about the same topic.

He stressed the importance of keeping your metadata management separate from your content management, so that metadata can be changed without having to re-index assets. (An exception is rights and other non-subjective metadata that needs to be embedded in the asset for further tracking. This is not a major concern for the Press Association, as they do not track assets once they are published to the web. I was left wondering what would happen if you decided to repurpose your content and so needed a new set of metadata: how you link content and metadata, and how you manage the two within their separate stores.)
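
A minimal sketch of this separation might look like the following. This is a hypothetical illustration, not the Press Association’s actual design: metadata lives in its own store, keyed by a stable asset ID, so tags can be rewritten without touching or re-ingesting the content itself.

```python
# Hypothetical sketch: content and metadata in separate stores,
# joined only by a stable asset ID.

class ContentStore:
    """Write-once store for the assets themselves."""
    def __init__(self):
        self._assets = {}

    def put(self, asset_id, blob):
        self._assets[asset_id] = blob

    def get(self, asset_id):
        return self._assets[asset_id]


class MetadataStore:
    """Separate, freely mutable store of descriptive metadata."""
    def __init__(self):
        self._meta = {}

    def set_tags(self, asset_id, tags):
        self._meta[asset_id] = set(tags)

    def find(self, tag):
        return [aid for aid, tags in self._meta.items() if tag in tags]


content = ContentStore()
meta = MetadataStore()

content.put("pa-001", b"<article bytes>")
meta.set_tags("pa-001", {"olympics", "london"})

# Retagging touches only the metadata store; the stored asset is untouched.
meta.set_tags("pa-001", {"olympics", "london-2012"})
print(meta.find("london-2012"))  # → ['pa-001']
```

The point of the design is visible in the last two lines: changing the metadata is a cheap write to one store, with no re-indexing of the content store.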

The PA are using Mark Logic as the content repository and a BigOWLIM triplestore to handle the associated metadata. Content is fed into the content store, then out again to a suite of indexing technologies, including concept extraction and other text-processing systems, as well as facial recognition software, to create semantic metadata. Simple ontologies are used to model the content, mainly indexing people, places, and events – themes chosen as covering the most popular search terms entered by users of the website.
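
As a toy illustration of the triplestore idea (plain Python rather than BigOWLIM, with all names and identifiers invented for the example), semantic metadata can be held as subject–predicate–object triples and queried by pattern, which is what makes it easy to assemble all assets about the same person, place, or event:

```python
# Toy triple "store": subject-predicate-object tuples, queried by pattern.
# All URIs and names below are invented for illustration.

triples = {
    ("pa:photo-42", "depicts",     "person:AthleteX"),
    ("pa:photo-42", "takenAt",     "place:London"),
    ("pa:photo-42", "coversEvent", "event:Olympics2012"),
    ("pa:story-7",  "coversEvent", "event:Olympics2012"),
}

def match(subject=None, predicate=None, obj=None):
    """Return triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Assemble all assets about the same event, regardless of media type.
olympics_assets = [s for s, _, _ in match(predicate="coversEvent",
                                          obj="event:Olympics2012")]
print(sorted(olympics_assets))  # → ['pa:photo-42', 'pa:story-7']
```

A production triplestore adds persistence, SPARQL querying, and inference over the ontology, but the underlying data model is this simple.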

John argued that such gathering and indexing of assets in order to automatically create and publish collections of associated content was simpler and easier than ingesting diverse content and metadata into traditional search, content management, and online publishing systems.

DAM for Content Marketing, Curation, and Knowledge Organisation

Mark Davey of the DAM Foundation took us on an animated and musical tour of different perspectives on metadata, engagement, and social media, and of how different the “digital natives” – young people who have grown up with digital technologies – will be from previous generations. Kids of the future will be able to have an idea in the morning, go to an online site-building app and create their site, their brand, and their marketing strategy in the afternoon, and be engaging with their potential clients by the evening.

Mark pointed out that people have moved on from the initial narcissism of social media and self-publishing and now want compelling stories they can engage with. He pointed out that as semantic technologies advance, we are caught in a feedback loop with them – we are the ontology that is driving the machines – and so we should be aware and vigilant. As the technologies become more powerful and all pervasive, we may lose sight of how they are working to serve us, rather than how we are serving up information about ourselves to them.

Marketing will have to become more sophisticated. Amongst the many statistics he quoted, I noted that 84% of 25-34 year olds have left a favourite website because of ads. At the same time, our networks become more interconnected. In a “six degrees of separation” game, we discovered that three people in the audience had met the Dalai Lama, and we are linking to more and more people through social media sites every day.

The metaphor of information as water is a familiar one, especially in the knowledge management area, but Mark’s colleague Dave pointed out how appropriate it is when talking about a DAM/dam. The DAM system forms the reservoir of content.

(I couldn’t help comparing and contrasting the ever-changing semantic seas of information at the Press Association with the more manageable streams of content that flow within smaller organisations, and reflecting on how very different approaches are needed for such different contexts. The other day I saw the metaphor used again, in an interview with – apparently – one of the LulzSec hackers, who talked about their pirate boat and “copywrong” as an enemy of the seas.)

Black Holes and Revelations: DAM and a museum collection

As if to continue the water metaphor, the next speaker was Douglas McCarthy from the National Maritime Museum. He took the metaphor up a stage, however, to spaceships and black holes: the museum’s content assets lay hidden in a black hole of 100,000 uncatalogued image files.

Having catalogued and improved their DAM system, the Museum’s Picture Library is now showing a healthy profit. Many sales come from the “long tail” of images that no-one anticipated anyone would want. Rather than saturating the market, putting the images online has been stimulating demand, with customers calling for more collections to be made available.

Digital Asset Management – DAM EU Conference


I enjoyed DAM EU – a Henry Stewart Event – last Friday and was particularly struck by the range of projects, the variety of approaches, and the diversity of industries represented. Sponsors included Capture, Kit Digital, and iBrams.

I will publish a series of posts over the next few days to cover the event, session by session. It has also been written up by Michael Wells.

The Art and Practice of Managing Digital Media

The conference was chaired and opened by David Lipsey, who has a distinguished career in Digital Asset Management (DAM). He introduced the day by talking about the need for “fungibility and liquidity” to make digital assets earn their keep. He pointed out that DAM is an emerging and maturing field. None of us thought of a career in DAM when we were at school; it is something that we have arrived at during our working lives. Now educational institutions are considering providing DAM courses to meet a growing need for DAM skills. DAM is also starting to become an enterprise-wide infrastructure layer, so “soft” skills, such as gaining cross-departmental buy-in to projects, are becoming increasingly important.

Getting More out of DAM: The Latest Technologies and What is Possible Now

It is always a pleasure to listen to the eminently sensible Theresa Regli of the Real Story Group. She observed that distribution channels are changing: the focus is now less on gathering assets together and more on sending assets out in the right format to customers – to all phone platforms, all PC and Mac platforms, and so on. Mobile apps are very much in the limelight at the moment, but it is important to see them as just another channel. It is also worth remembering that you don’t need to build specialised apps for all devices, as cross-browser formats may well be adequate. The purpose of the Digital Asset Manager should be to get the right media to the right people in the right format.

To be successful, DAM projects need to handle change management and get people to work in different ways. Creative people want to work collaboratively and simultaneously, so DAM systems need to support this, rather than imposing rigid linear workflow processes.

When assessing DAM technologies, the differences are usually not in the functionality they offer, but in the way they provide that functionality. Too much emphasis is often put on technology to solve problems. Theresa said that she thought technology could solve about 20 per cent of business problems, and even that was probably a bit generous. The core problems are caused by metadata, collaboration, diligence, and governance.

Different people need different views of a DAM system at different times, so customisation and personalisation are useful – for example, providing limited functionality on an iPad app that enables workers who are away from their desks to access specific aspects of a system. However, a danger to avoid is creating multiple versions of the same asset. It is much better to have a master version and transcode out to each device as necessary.

New developments in metadata include colour-palette searching, which is popular with creatives who are more interested in moods than in particular things. Another development is AVID’s “layered metadata”, which offers the ability to annotate directly onto video, as well as holding general metadata for the whole asset. Much work has been done to improve video-handling tools, so automated scene detection and facial recognition software are now becoming available in products.

When buying a DAM system, it is worth remembering that the now dominant technologies are relatively new, and that what may be ideal at a workgroup level may not scale up to enterprise level.

How to be a Good DAM Customer

Sarah Saunders of Electric Lane spoke on an often overlooked, but vital, aspect of the DAM procurement process – how to be a good customer. She pointed out that you get to choose your DAM vendor, but they don’t get to choose you. Projects can be derailed by customers being unsure what they want, being poor project managers, or having unrealistic expectations of what they can get for their money. It is in no-one’s interests to drive the software vendor out of business.

For a project to be a success, it is important to have a project leader who is not part of any of the stakeholder departments and who is empowered to make decisions and overrule any individual department whose focus on its own needs threatens the entire project. Creative people and IT people tend to think very differently, to the extent that they can completely misunderstand each other. So it is not enough just to get people to sign off a complex requirements document; they need to understand it on their own terms, otherwise they are likely to be disappointed. Building a system that the creatives are happy using, even though their requirements may seem bizarre to the IT department, is just as important as keeping documentation in good order, something the creatives might not appreciate.

One of the hardest aspects is working out what you really want from the system, and where you are prepared to compromise in order to get something that can actually be built for the budget. In terms of functionality, it is important to think beyond what people ask for, as that is likely to just be what they are using already, to what they would like if they knew it existed.

It is also important to see any system working in practice, as demos may not be enough to know that it will work with your data and business processes. A small delay in keywording speed can have a huge impact on productivity.

An iterative build process is good if needs are complex. It can be expensive, but may still be cheaper than ending up with an underperforming system.

Some customers expect the vendor to do all the thinking for them, but the vendors do not know the business needs and processes of every single business. It is not an easy job to be a software vendor, and a difficult, disorganised customer will always end up having a hard time.

Panel session: how we did our procurement

Laura Caulkin from Net-a-Porter and Lisa Hayward from Shell described their recent DAM procurement projects.

Net-a-Porter took a very user-centric approach, running lots of requirements gathering workshops and writing user stories rather than a requirements spreadsheet. They spent 6-8 months planning what they wanted the new system to do, with the help of external expert consultants.

At Shell the build process of their new DAM system took about 18 months and involved the migration of 50,000 assets. They selected Vyre as a good cultural fit. Lisa stressed the importance of getting vendors to demo their products with your data, and felt that detailed requirements documents and scoring systems were worthwhile. She argued for the “soft” value of a good cultural fit between the team and the vendor to be included in the scores, which the IT team did not agree with. She pointed out that the IT team have a very different perspective: they want the product that will be easiest to integrate in the short term, but you are the one left working with the system on a daily basis.

In terms of lessons learned, she felt that their initial requirements were not specific enough, and that this led to change requests and delays. This caused problems with senior managers, who had not expected the project to take so long. However, once it was built, it was well received. Lisa pointed out that change management is really just talking to people and explaining what is going on. They had great success with the adoption of the new system, as it was such a vast improvement on the system that had been built for them 10 years previously.

For digital asset management, search is not enough


I was very flattered to be asked by Kate Simpson to write another article – For digital asset management, search is not enough – for the excellent resource FUMSI.

The article sums up some of the latest DAM trends and technologies with the intention of demystifying some of the services that software vendors are offering. It owes much to Theresa Regli’s excellent work as a “professional cynic”.

Content Identifiers for Digital Rights Persistence


This is another write-up from the Henry Stewart DAM London conference.

Identity and identification

Robin Wilson discussed the issue of content identifiers, which are vitally important for digital rights management yet tend to be overlooked. He argued that although people become engaged in debates about titles and the language used in labels and classification systems, they overlook the need to achieve consensus on basic identification.

(I was quite surprised, as I have always thought that people would argue passionately about what something should be called and how using the wrong terminology affects usability, but that they would settle on machine-readable IDs quite happily. Perhaps it is the neutrality of such codes that makes the politics intractable. If you have invested huge amounts of money in a database that demands certain codes, you will argue that everyone else should use those codes, to save you the costs of translation or of acquiring a compatible system, and there are no appeals to usability, or brokerage via editorial policy, that can be made. It simply becomes a matter of whoever shouts loudest getting to spend the least money in the short term.)

Robin argued that the only way to create an efficient digital marketplace is to have a trusted authority oversee a system of digital identifiers that are tightly bound within the digital asset, so they cannot easily be stripped out even when an asset is divided, split, shared, and copied. The authority needs to be trusted by consumers and creators/publishers in terms of political neutrality, stability, etc.

(I could understand how this system would make it easier for people who are willing to pay for content to see what rights they need to buy and who they should pay, but I couldn’t see how the system could help content owners identify plagiarism without an active search mechanism. Presumably a digital watermark would persist throughout copies of an asset, provided that it wasn’t being deliberately stripped, but if the user simply decided not to pay, I don’t see how the system would help identify rights breaches. In conversation, Robin mentioned Turnitin’s plagiarism management, which has become more lucrative than their original work on content analysis, but it requires an active process, instigated by the content owner, to search for unauthorised use of their content. This is fine for the major publishers of the world, who can afford to pay for such services, but is less appealing to individuals, whether professional freelances or amateur content creators, who would need a cheap and easy solution that alerts them to breaches of copyright without their having to spend time searching.)

The identifiers themselves need to be independent of any specific technology. At the moment, DAM systems are often proprietary and therefore identifiers and metadata cannot easily flow from one system to another. Some systems even strip away any metadata associated with a file on import and export.

Robin described five types of identifier currently being used or developed:

  • Uniform Resource Name (URN)
  • Handle System
  • Digital Object Identifier (DOI)
  • Persistent URL (PURL)
  • Archival Resource Key (ARK).

He outlined three essential qualities for identifiers – that they be unique, globally registered, and locally resolved.
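
The “globally registered, locally resolved” split can be illustrated with a toy resolver, loosely in the spirit of the Handle System and DOI (every prefix, URL, and asset below is invented for the example): a global registry maps a naming-authority prefix to that authority’s own resolver, which holds the current location of each asset.

```python
# Toy model of "unique, globally registered, locally resolved" identifiers.
# All prefixes, URLs, and assets are invented for illustration.

# Global registry: naming-authority prefix -> that authority's resolver.
GLOBAL_REGISTRY = {
    "10.5555": "https://resolver.example-publisher.org",
    "10.6666": "https://resolver.example-museum.org",
}

# Each authority's local store: suffix -> current location of the asset.
LOCAL_STORES = {
    "https://resolver.example-publisher.org": {
        "article-1": "https://cdn.example-publisher.org/a/article-1.pdf",
    },
    "https://resolver.example-museum.org": {
        "painting-9": "https://images.example-museum.org/painting-9.tiff",
    },
}

def resolve(identifier):
    """Split prefix/suffix, find the authority globally, resolve locally."""
    prefix, _, suffix = identifier.partition("/")
    resolver = GLOBAL_REGISTRY[prefix]     # global registration
    return LOCAL_STORES[resolver][suffix]  # local resolution

print(resolve("10.5555/article-1"))
```

The identifier itself never changes: if the asset moves, only the authority’s local store is updated, which is what makes the identifier persistent.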

So why don’t we share?

Robin argued that it is easier for DAM vendors to build “safe” systems that lock all content within an enterprise environment; only those with a public-service or archival remit tend to be collaborative and open. DAM vendors resist a federated approach online, preferring a one-to-one or directly intermediated transaction model. Federated identifier management services exist, but vendors and customers don’t trust them. The problem is mainly social, not technological.

One of the problems is agreeing to share the costs of services, such as infrastructure, registration and validation, governance and development of the system, administration, and outreach and marketing.

(Efforts to standardise may well benefit the big players more than the small players and so there is a strong argument for them bearing the initial costs and offering support for smaller players to join. Once enough people opt in, the system gains critical mass and it becomes both easier to join and costs of joining become less of an unquantifiable risk – you can benefit from the experiences of others. The semantic web is currently attempting to acquire this “critical mass”. As marketers realise the potential of semantic web technology to make money, no doubt we will see an upsurge in interest. Facebook’s “like” button may well be heralding the advent of the ad-driven semantic web, which will probably drive uptake far faster than the worthy efforts of academics to improve the world by sharing research data!)