Tag Archives: rights

There’s no such thing as originality


Back in 1995 my brother wrote his MA dissertation on copyright: Of Cows and Calves: An Analysis of Copyright and Authorship (with Implications for Future Developments in Communications Media) or How I Learned To Stop Worrying and Love Home-Taping. It is interesting how relevant it remains today, especially in the light of the Hargreaves Report delivered to the government in May. Essentially, nothing much has changed in the intervening 16 years. My brother reflected on the predictions of the profound changes that digital technologies would make – and were already making – to the creative industries. Although details such as the excitement over ISDN lines and the absence of any mention of mobile technologies date his work, the core issues he covers – who owns an idea and who should get paid for it – remain remarkably current.

I’ve written a brief overview of the Hargreaves Report for Information Today, Europe. The two aspects of most interest to me are the proposals for a Digital Copyright Exchange and for handling of orphan works.

Ideas as objects

My brother argues that creative works are all part of an ongoing cultural dialogue that no one individual can really “own”, and that copyright only made sense for the short period of time when technology reified ideas as artefacts that could be traded as commodities (like potatoes or coal). The business model of “content” as “physical item” started to fail with the invention of the printing press: once the process of copying ceased to be a creative act, each individual copy was no longer a “new” work in its own right. Copyright law was developed to commodify the “idea” within a book, not the physical book, and was enforceable only for as long as access to the copying technologies – printing presses – could be limited. The digital age has made control of the copying process impossible, as the computer replaced the printing press, so one could exist on every desk and now, thanks to mobile technology, in every pocket. He notes that in pre-literate societies, authorship of a myth or piece of folklore was not important, and I find it interesting that crowdsourcing (e.g. Wikipedia, citizen journalism) has in some ways returned us to the notion of a culturally held store of knowledge contributed and curated by volunteers, rather than by paid professionals.

Music is not the only art

Many of the discussions of free versus paid-for content run to extremes, and seem to be coloured by the popular music industry’s taste for excess. The music industry inflated commodity prices far beyond what consumers were willing to pay just as cheap copying technologies became widely available, making the pirates feel morally justified. It is hard to feel sympathy for people living a millionaire rockstar lifestyle. The inevitable increase in piracy was met not by lower prices, but by the industry issuing alarmist statements about home taping killing music. It didn’t! Music industry profits have risen steadily. The industry has simply turned its attention to charging more for merchandise and live events. The lesson to be learned is that people are willing to pay for experiences, services, and commodities that they perceive as being worth the price and better than the alternatives. Most music fans would rather pay for an easy, reliable, virus-free download service than deal with illegal download sites, just as back in the 1980s sticking a microphone in front of the radio to record the charts wasn’t as good as buying the vinyl. The effects on the reference publishing industry were very different, affecting many small businesses and people on far less than rockstar wages, but most displaced people found ways to transfer their skills to numerous new areas of work – obvious examples are content strategy and user experience design – that simply didn’t exist in the 1990s.

It has become a bit of a cliché that there aren’t any business models that work, or have been shown to work, in the new digital economy, but are things really so different? In order to have a business, somebody somewhere has to be persuaded to pay for something. Everything else is just a complication. If your free content is supported by advertising, it just means that someone needs to be persuaded to pay for the advertised product, instead of the free bit that appears on the way. Similarly, “freemium” is really just old-style free samples and loss leaders. You can’t have the “free” bit without paying for the “premium”. The two key questions for producers remain, as they have always been: how do you produce content that is so useful, entertaining, or attractive that people are willing to pay for it, and how do you deliver it in ways that make the buying process as easy as possible?

Hargreaves suggests a light touch towards enforcement of rights and anti-piracy, firmly supporting the view that if content and services are good enough, people will pay, and that education about why artists have a right to be paid for their work is as important as catching the pirates. Attempts to “lock” copies with Digital Rights Management systems certainly don’t seem to have been very successful: they are expensive to implement, unpopular, and pirates always manage to hack them. Watermarking doesn’t attempt to prevent copying, but does help prove origin if a breach is discovered. Piracy is less of a worry for business-to-business trade, as most legitimate businesses want to be sure they have the correct rights and licences for content they use, rather than face embarrassing and expensive lawsuits, and a simplified, secure Digital Copyright Exchange would presumably be in their interests.

Digital Copyright Exchange

Hargreaves proposes the Digital Copyright Exchange as a mechanism to make the buying and selling of rights far easier. At the moment, piracy can be a temptation because of the time and effort required to purchase rights. Collecting societies and the law form layers of bureaucracy that hamper start-ups from developing new products and simply confuse ordinary users. This represents real lost revenue to content providers.

Metadata analyst and music fan Sam Kuper put forward an interesting proposal for setting fair prices: that artists should put a “reserve price” on their work, with an initial fee for purchasers. Once the “reserve price” has been reached, the fee from any subsequent purchase is shared between the artist and the early purchasers. This would guarantee a level of income for artists, allow keen fans to get hold of new material quickly, and allow those less sure to wait and see if the price drops before purchasing. Such a system sounds complex, but could work through some kind of centralised exchange, with “returns” to early purchasers appearing as credits in their accounts.
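A back-of-envelope sketch may make the mechanics clearer. Everything beyond the description above is my own assumption: the function name, the 50/50 split of later fees between the artist and the early purchasers, and the equal division of the credit pool among early buyers.

```python
def settle_purchases(reserve_price, fee, num_purchases, artist_share=0.5):
    """Return (artist_income, credit_per_early_purchaser) for one work.

    Hypothetical model: until the reserve price is covered, every fee
    goes to the artist; after that, each fee is split between the artist
    and a pool returned as credits to the early purchasers.
    """
    purchases_to_reserve = -(-reserve_price // fee)  # ceiling division
    if num_purchases <= purchases_to_reserve:
        return num_purchases * fee, 0.0
    early = purchases_to_reserve
    late = num_purchases - early
    artist_income = early * fee + late * fee * artist_share
    pool = late * fee * (1 - artist_share)  # shared among early buyers
    return artist_income, pool / early
```

With a £100 reserve and a £10 fee, fifteen purchases would give the artist £125 and credit each of the first ten buyers £2.50, on these assumed parameters.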

From an archive point of view, Hargreaves’s call to allow the digitisation and release of orphan works without endless detective work in trying to trace origins would be a huge boon.

So, there is much to think about in the Hargreaves report, and some very sensible practical suggestions, but much detail to be worked out as well. I wonder whether, in another 16 years, my brother and I will have seen any real change, or whether we will still be having the same debate.

Identifiers for asset management


I met Rob Wilson of RCWS consulting at the DAM London conference last summer, where he talked about digital identifiers. He has had a long career working on information architecture for a range of organisations and industry standards bodies, including the ultimately doomed e-GMS project, which was an attempt to unify government metadata standards for sharing and interoperability.

Identifiers are a hot topic at the moment, so I was pleased Rob was willing to talk to me and some of my colleagues about some of the problems and the attempts to solve them in the wider industry. Rob described the need for stable, persistent IDs for asset management, noting that this is not a new problem but one that various groups have been working on for many years. He distinguished between ID management and metadata management, pointing out that metadata may change while the ID of an asset can be kept stable. In just the same way that managing metadata as a distinct element of content is important, IDs need to be managed as distinct from other metadata.
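A minimal sketch of that separation might look like this (the class and field names are my own invention, not from any standard): the ID is assigned once and frozen, while the metadata around it stays editable.

```python
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass(frozen=True)
class AssetId:
    # Assigned once at registration; frozen, so it can never be reassigned.
    value: str = field(default_factory=lambda: uuid4().hex)

@dataclass
class Asset:
    id: AssetId        # stable handle that other systems can keep citing
    metadata: dict     # titles, rights, keywords: all free to change

asset = Asset(id=AssetId(), metadata={"title": "Working title"})
asset.metadata["title"] = "Final title"   # the metadata evolves...
# ...while asset.id.value stays exactly as it was when first assigned.
```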

Rob is an independent consultant and has no commercial affiliation with the EIDR content asset identifier system, but suggested it is in many ways a robust model. The EIDR system embeds IDs within asset files, using a combination of steganography, strong encryption, and watermarking, with “triangulation” of these combined aspects to create a single ID. The idea is that the ID is so deeply ingrained and fragmented within the structure of the file that it cannot easily be removed, and can be recovered from a small fraction of the file – e.g. if someone takes a clip from a video, the “parent” ID is discoverable from the clip. Rob thought the EIDR system was better than digital rights management (DRM) methods, which rely on trying to prevent distribution and on “locking” content after a certain amount of time or use, giving everyone who holds such content an incentive to try to break the DRM system. If an individual can still access their illegally held content despite the ID, they have little incentive to try to remove the ID.

The system does not try to prevent theft from source, but helps to prove when copyright has been breached, because the ID – the ownership – remains within the file. It is intended as a tool to deter systematic copyright theft by large organisations, by making legal cases easy to win. Large organisations typically have the funds, or insurance, to cover large claims, unlike individuals, so it is unlikely to be cost-effective to pursue individuals.

The EIDR system also has an ID resolver that manages rights and supply authentication (as well as ownership) so that when content is accessed, the system checks that the appropriate rights and licences have been obtained before delivery.
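That delivery gate could be sketched as follows; the table layouts, names, and licence fields are all invented for illustration, since the post doesn’t describe EIDR’s actual API.

```python
# Invented lookup tables standing in for the resolver's rights database.
LICENCES = {
    ("asset-42", "customer-7"): {"territory": "UK", "expires": "2026-01-01"},
}
LOCATIONS = {"asset-42": "https://cdn.example.org/asset-42"}

def deliver(asset_id: str, customer_id: str) -> str:
    """Release a content location only if a licence is on record."""
    if (asset_id, customer_id) not in LICENCES:
        raise PermissionError(f"{customer_id} holds no licence for {asset_id}")
    return LOCATIONS[asset_id]
```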

Rob also outlined the elements of information architecture that need to be considered for unified organisational information management – establish standards, develop models, devise policies, select tools, maintain governance, etc. He emphasised that IDs are not a “silver bullet” to solve all issues, but that if a few problematic key use cases are known, they can be investigated to see if robust federated ID management architecture would help.

Content Identifiers for Digital Rights Persistence


This is another write-up from the Henry Stewart DAM London conference.

Identity and identification

Robin Wilson discussed the issue of content identifiers, which are vitally important for digital rights management yet tend to be overlooked. He argued that although people become engaged in debates about titles and the language used in labels and classification systems, they overlook the need to achieve consensus on basic identification.

(I was quite surprised, as I have always thought that people would argue passionately about what something should be called and how using the wrong terminology affects usability, but that they would settle on machine-readable IDs quite happily. Perhaps it is the neutrality of such codes that makes the politics intractable. If you have invested huge amounts of money in a database that demands certain codes, you will argue that those codes should be used by everyone else, to save you the costs of translation or of acquiring a compatible system, and there are no appeals to usability, or brokerage via editorial policy, that can be made. It simply becomes a matter of whoever shouts the loudest getting to spend the least money in the short term.)

Robin argued that the only way to create an efficient digital marketplace is to have a trusted authority oversee a system of digital identifiers that are tightly bound within the digital asset, so they cannot easily be stripped out even when an asset is divided, split, shared, and copied. The authority needs to be trusted by consumers and creators/publishers in terms of political neutrality, stability, etc.

(I could understand how this system would make it easier for people who are willing to pay for content to see what rights they need to buy and who they should pay, but I couldn’t see how the system could help content owners identify plagiarism without an active search mechanism. Presumably a digital watermark would persist throughout copies of an asset, provided that it wasn’t being deliberately stripped, but if the user simply decided not to pay, I don’t see how the system would help identify rights breaches. Robin mentioned in conversation Turnitin’s plagiarism-detection service, which has become more lucrative than their original work on content analysis, but it requires an active process instigated by the content owner to search for unauthorised use of their content. This is fine for the major publishers of the world, who can afford to pay for such services, but is less appealing to individuals, whether professional freelancers or amateur content creators, who would need a cheap and easy solution that would alert them to breaches of copyright without their having to spend time searching.)

The identifiers themselves need to be independent of any specific technology. At the moment, DAM systems are often proprietary and therefore identifiers and metadata cannot easily flow from one system to another. Some systems even strip away any metadata associated with a file on import and export.

Robin described five types of identifier currently being used or developed:

  • Uniform Resource Name (URN)
  • Handle System
  • Digital Object Identifier (DOI)
  • Persistent URL (PURL)
  • Archival Resource Key (ARK).

He outlined three essential qualities for identifiers – that they be unique, globally registered, and locally resolved.
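Those three qualities can be sketched in a few lines, loosely modelled on how Handle/DOI-style resolution works; the prefixes, registry entries, and URLs below are all invented for illustration.

```python
# Global step: a shared registry maps each ID prefix to the authority
# responsible for resolving identifiers under that prefix.
GLOBAL_REGISTRY = {
    "10.5240": "resolver.example-film.org",   # hypothetical authority
    "10.1000": "resolver.example-docs.org",   # hypothetical authority
}

# Local step: each authority keeps its own table from full ID to location.
LOCAL_TABLE = {
    "10.5240/ABCD-1234": "https://assets.example.org/ABCD-1234",
}

def resolve(identifier: str) -> str:
    prefix = identifier.split("/", 1)[0]
    if prefix not in GLOBAL_REGISTRY:          # globally registered?
        raise KeyError(f"no authority for prefix {prefix}")
    # A real client would now query the registered authority over the
    # network; here its table is simply a local dict (local resolution).
    return LOCAL_TABLE[identifier]             # unique key, one location
```

The point of the split is that the global registry only has to know who answers for a prefix, while each authority stays free to manage its own assets locally.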

So why don’t we share?

Robin argued that it is easier for DAM vendors to build “safe” systems that lock all content within an enterprise environment; only those with a public service or archival remit tend to be collaborative and open. DAM vendors resist a federated approach online, preferring a one-to-one or directly intermediated transaction model. Federated identifier management services exist, but vendors and customers don’t trust them. The problem is mainly social, not technological.

One of the problems is agreeing to share the costs of services, such as infrastructure, registration and validation, governance and development of the system, administration, and outreach and marketing.

(Efforts to standardise may well benefit the big players more than the small players, so there is a strong argument for them bearing the initial costs and offering support for smaller players to join. Once enough people opt in, the system gains critical mass, and joining becomes both easier and less of an unquantifiable risk – you can benefit from the experiences of others. The semantic web is currently attempting to acquire this “critical mass”. As marketers realise the potential of semantic web technology to make money, no doubt we will see an upsurge in interest. Facebook’s “like” button may well be heralding the advent of the ad-driven semantic web, which will probably drive uptake far faster than the worthy efforts of academics to improve the world by sharing research data!)