Category Archives: semantic web

Classification and ontology in specific subjects – UDCC Seminar

Day two of the UDC Consortium international seminar opened with two subject-specific talks: Wolfram Sperber described a classification of mathematics, and Andrew Buxton showed, using the ChEBI ontology, how similar chemistry classifications and ontologies are. Buxton also described the different ways classifications and ontologies could be used to support each other, and noted the lack of good graphical tools and visualisations for representing ontologies.

Categories and relations: key elements of ontologies – Categorial Distinctions

Roberto Poli, University of Trento (Italy), talked about the complexities of part-whole relationships. There are simple wholes, composed of a sum of their parts, but some parts of wholes cannot simply be added together – for example, the social, psychological, and physical aspects of a person. He also discussed the difference between science as epistemological – dealing with what can be known – and ontological – dealing with what exists.

Towards a relation ontology for the Semantic Web

Dagobert Soergel made a bold claim that the only way for the Semantic Web to deliver its promise is if we adopt a relation ontology and map each dataset to the standard, to allow interoperability. He pointed out that you “do not get semantics from syntax alone”.

Relations in the notational hierarchy of the Dewey Decimal Classification

Rebecca Green from OCLC described the difficulties encountered when trying to automatically create ontologies from the Dewey Decimal Classification. These included semantic differences in the way subclasses had been defined, meaning that no single rule would handle them all appropriately.

Modelling concepts and structures in analytico-synthetic classifications

The eminent Ingetraut Dahlberg compared Aristotle's and Ranganathan's key facets, and the UDC and Colon Classification systems. She also presented a survey of academic subject areas analysed into facets.

Representing the structural elements of a freely faceted classification

Claudio Gnoli of the University of Pavia talked about freely faceted classifications, in comparison with systems such as UDC. He emphasised the urgency of publishing classifications online, but highlighted the limitations of SKOS and OWL for fully expressing faceted systems, even though faceted systems are extremely good tools for obtaining precise search results. Faceted systems are also excellent for combining information across disciplines, allowing you to combine aspects of one subject area with aspects of another – and interdisciplinarity is becoming increasingly important as an approach, as innovation often happens at the boundaries between disciplines.

He pointed out that a polyhierarchical approach can be modelled in SKOS as a way of representing facets, but that this approach is often overlooked. He also called for more work to be done on SKOS so that it can represent facets directly.
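The polyhierarchical approach can be sketched in miniature. In the hypothetical example below (made-up concepts and plain Python tuples standing in for SKOS triples), a compound concept is given one skos:broader link per facet it combines, so each facet can be recovered by walking the broader relations:

```python
# A sketch of polyhierarchy as a stand-in for facets: a compound concept
# carries one skos:broader link per facet it combines. Concepts here are
# hypothetical; triples are plain (subject, predicate, object) tuples.

SKOS_BROADER = "skos:broader"

triples = {
    ("ex:CopperMining", SKOS_BROADER, "ex:Copper"),   # material facet
    ("ex:CopperMining", SKOS_BROADER, "ex:Mining"),   # operation facet
    ("ex:Copper", SKOS_BROADER, "ex:Metals"),
    ("ex:Mining", SKOS_BROADER, "ex:Industries"),
}

def broader(concept, graph):
    """Direct broader concepts of `concept` – the facets it combines."""
    return {o for s, p, o in graph if s == concept and p == SKOS_BROADER}

def ancestors(concept, graph):
    """All transitive broader concepts, via breadth-first traversal."""
    seen, frontier = set(), {concept}
    while frontier:
        nxt = set()
        for c in frontier:
            for parent in broader(c, graph):
                if parent not in seen:
                    seen.add(parent)
                    nxt.add(parent)
        frontier = nxt
    return seen

print(sorted(broader("ex:CopperMining", triples)))
# ['ex:Copper', 'ex:Mining']
```

The limitation Gnoli raises is visible even here: nothing marks which broader link belongs to which facet – that has to be smuggled in through comments or naming conventions, which is why he calls for SKOS to represent facets directly.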

Facet analysis as a tool for modelling subject domains and terminologies

Vanda Broughton, University College London, offered the Bliss Classification as a useful tool for online subject classification, but called for help in how best to publish it for general use. Should it be released as a text document, database, or should work be done to convert it to an ontology – and if so, in what form?

She stressed how the logical approach of facet analysis and regular syntax makes it predictable and hence ideal for machine manipulation.

Analytico-synthetic approach for handling knowledge diversity in media content analysis

Devika P. Madalli, Indian Statistical Institute, DRTC (India), described the Living Knowledge project, which used an analytico-synthetic approach to bring together, around useful themes, diverse content from different sources expressed in varied ways. This supported a rich faceted search system.

Slides and audio recordings of all 21 talks can now be downloaded from the conference website.

Conference proceedings are published by Ergon Verlag and can now be
purchased/ordered online from http://seminar.udcc.org/2011/php/proceedings.php.

Classification meets the Web – UDCC Seminar 2011

This post is the fourth in a series about the UDC consortium international seminar in The Hague, 19-20 September, 2011.

Interoperability of knowledge organization systems with and through ontologies

Daniel Kless from the University of Melbourne pointed out that problems with ontologies arise when combining them, as errors in combination can have disastrous effects on subsequent reasoning. A well-defined modelling method is needed to minimise this. Standards such as OWL and RDF do not address the problems of methodology or terminology control.

Towards the integration of knowledge organization systems with the linked data cloud

Vincenzo Maltese of the University of Trento, Italy, explained how it is vital to make clear the semantics and purpose of any ontology when attempting to share Linked Data. Ontologies may differ in their scope, purpose, structure, terminology, language, coverage, formality, and conceptualization. He drew a distinction between descriptive ontologies and classification ontologies. It is very easy to convert a descriptive ontology to a classification ontology and the process can be automated, but extremely difficult to convert a classification ontology to a descriptive one and the process requires human intellectual and editorial effort.
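The asymmetry Maltese describes can be illustrated with a deliberately simplified sketch (hypothetical terms and relation names): a descriptive ontology carries many relation types, and a classification hierarchy can be derived mechanically by keeping only the hierarchical ones, whereas the reverse direction would require inventing relations that were never recorded.

```python
# Why descriptive -> classification is automatable: just project the
# ontology onto its hierarchical backbone. The reverse would mean deciding
# what the discarded relations should have been, an intellectual task.
# All terms and relation names below are hypothetical examples.

descriptive = [
    ("Oak", "is_a", "Tree"),
    ("Tree", "is_a", "Plant"),
    ("Oak", "has_part", "Acorn"),            # lost in the projection
    ("Oak", "grows_in", "TemperateForest"),  # lost in the projection
]

def to_classification(ontology, hierarchical=frozenset({"is_a"})):
    """Keep only hierarchical relations, yielding a classification."""
    return [(s, p, o) for s, p, o in ontology if p in hierarchical]

print(to_classification(descriptive))
# [('Oak', 'is_a', 'Tree'), ('Tree', 'is_a', 'Plant')]
```

Going the other way – reconstructing `has_part` and `grows_in` from the bare hierarchy – is exactly the kind of human editorial effort Maltese flags.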

Classification and reference vocabulary in linked environment data

Joachim Fock of the Federal Environment Agency (Germany) talked about how they transformed their keyword thesaurus to a Linked Data format.

Classifications and ontologies on their own terms – UDCC Seminar 2011

This post is the third in a series about the UDC consortium international seminar in The Hague, 19-20 September, 2011.

Approaches to providing context in knowledge representation structures

Barbara Kwasnik, Syracuse University (USA), talked about ways that context can be used as a disambiguation tool, and described different kinds of contexts: warrant, scientific, educational, cultural, etc. However, interdisciplinary approaches can be difficult: different domains easily end up with different ontological commitments, so you need mappings to know when, and which, parts have to work across domains. Ontologies will need updating as the world and world views shift and change, so we need ways of defining their scope, as well as their provenance and mappings. There are also difficulties in establishing the neutrality of ontologies.

Interaction between elementary structures in universes of knowledge

Richard P. Smiraglia, University of Wisconsin (USA), talked about how people want to turn the multidimensional world into a unidimensional top-down model. He pointed out that people tend to assume UDC is like Dewey, but it actually works far more like Ranganathan’s Colon Classification. He called for new theories of organizing knowledge in shifting contexts and theories about how to mediate between concepts and structures like UDC.

Demystifying ontology

Emad Khazraee, Drexel University (USA), talked about how ontological approaches are as old as literature itself, showing a picture of what I think was the ancient Sumerian king list. He talked about boundary objects and the overlap between different academic areas that are interested in knowledge organisation and learning. He also discussed the differences between ontology-as-categorial-analysis and ontology-as-technology.

The role of classification and ontology on the Web – UDCC Seminar 2011

This post is the second in a series about the UDC consortium international seminar in The Hague, 19-20 September, 2011.

Knowledge Organization Systems (KOSs) as hubs in the Web of Data

In a minor change of schedule, Thomas Baker from the DCMI talked about some of the practical issues with using Linked Data. Provenance can be recorded as additional information, but it is not standardised or an integral part of RDF – a growing concern that is receiving attention from the W3C. URI persistence and alignment remain concerns for data management and governance.
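One common workaround for RDF's lack of built-in provenance is to attach a fourth element to each statement naming its source (the "named graph" pattern). The sketch below uses hypothetical URIs and plain tuples to show the idea:

```python
# Provenance as "additional information": each triple carries a fourth
# element identifying the source that asserted it. URIs are hypothetical.

quads = [
    # (subject, predicate, object, source)
    ("ex:Paris", "ex:population", "2249975", "ex:census-2011"),
    ("ex:Paris", "ex:population", "2211000", "ex:oldAtlas"),
]

def statements_from(source, data):
    """All triples asserted by a given source."""
    return [(s, p, o) for s, p, o, g in data if g == source]

# Conflicting values can coexist because each is tied to its source.
print(statements_from("ex:census-2011", quads))
# [('ex:Paris', 'ex:population', '2249975')]
```

The point Baker makes stands: because this bookkeeping sits outside the core triple model, every publisher does it slightly differently, which is why standardisation matters.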

Aligning web vocabularies

Guus Schreiber also dealt with the problem of making sure we are all talking about the same thing when we try to align our vocabularies. He called for ontologists to be modest about what they can achieve and not to try to hide the problems that occur when you try to transfer an ontology from one domain to another. Errors typically occur due to failures to notice subtle differences between domains.

Vocabulary alignment is a complex business that requires a lot of intellectual effort and multiple techniques should be used to reinforce and support each other. It is much better to map small vocabularies to large ones that can then act as “pivots”.
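The pivot idea can be shown with a toy example (made-up terms, plain dictionaries): each small vocabulary is mapped once to the large vocabulary, and small-to-small alignments are then derived by composing through the pivot rather than mapping every pair directly.

```python
# Pivot-based alignment: map each small vocabulary to one large vocabulary,
# then compose the mappings. Vocabulary terms here are hypothetical.

vocab_a_to_pivot = {"motorcar": "pivot:Automobile", "lorry": "pivot:Truck"}
vocab_b_to_pivot = {"car": "pivot:Automobile", "truck": "pivot:Truck"}

def align_via_pivot(a_map, b_map):
    """Compose two vocabulary-to-pivot mappings into an A-to-B alignment."""
    pivot_to_b = {pivot: term for term, pivot in b_map.items()}
    return {a: pivot_to_b[p] for a, p in a_map.items() if p in pivot_to_b}

print(align_via_pivot(vocab_a_to_pivot, vocab_b_to_pivot))
# {'motorcar': 'car', 'lorry': 'truck'}
```

With n small vocabularies this needs n mappings to the pivot instead of n(n-1)/2 pairwise alignments, which is where the intellectual effort saving comes from.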

There is still no adequate methodology for evaluating alignments nor for mediating consensus between observers. Perhaps there should be a way of recording the strength of consensus and the presence of disagreements and alternative views.

Classification, Collaboration and the Web of Data

Dan Brickley described three types of graph – the hypertext graph of the Internet’s links between documents, the social graph of links between people, and the factual graph of links between data. Currently Linked Data is bringing together the hypertext and factual graphs, and another step would be to add in the social dimension.

He called for a focus on what the various tools can actually do, to be wary of over-evangelical ontologists, and to remember that subject classifications are strong and robust tools that are more appropriate for many types of work than ontologies.

He said that you could expect Linked Data to solve about a third of your information linking problems.

Classification and Ontology – UDCC Seminar 2011

I thoroughly enjoyed the third biennial International UDC Consortium seminar at the National Library of the Netherlands, The Hague, last Monday and Tuesday. The UDC conference website includes the full programme and slides and the proceedings have been published by Ergon Verlag.

This is the first of a series of posts covering the conference.

Aida Slavic, UDC editor-in-chief, opened the conference by pointing out that classification is supposed to be an ordered place, but classification systems and the study of them are difficult and complex. We still lack terminology to express and discuss our work clearly. There is now an obvious need to instruct computers to use and interpret classifications, and perhaps our work to make our classifications machine readable will also help us explain what we do to other humans.

On being the same as

Professor Patrick Hayes of the Florida Institute for Human and Machine Cognition delivered the keynote address, pointing out that something as simple as asserting that one thing is the same as another is actually incredibly difficult. One of the problems facing the development of the Semantic Web is that people assert that two things are the same when they are merely similar.

He explained that the formalisms and logic underpinning the Semantic Web are all slimmed down versions of modern 20th century logic based on a particular world view and set of assumptions. This works very well in theory, but once you start applying such logics to the real messy and complex world with real objects, processes, and ideas, the logics are put under increasing stress.

In logic, when two things are referred to as the same, this means they are two different names for the same thing, not that there are two things that are logically equivalent. So Paris the city of my dreams, Paris the administrative area, Paris throughout history, and Paris the capital of France are not necessarily all the same. This means that in logic we have to separate into different entities aspects of an idea that in ordinary language we think of as the same thing.

He described this as the problem of “logic versus Occam” (as in Occam’s razor). Logic drives us to create complexity, in that we have to precisely define every aspect of a concept as a different entity. In order for the Semantic Web to work, we need to be very clear about our definitions so that we don’t muddle up different aspects of a concept.

IASA Conference 2011: Turning archives into assets

Semantic enrichment

Guy Maréchal continued the Linked Data theme by talking in more detail about how flat data models can be semantically enriched. He pointed out that if you have good structured catalogue records, it takes very little effort to give concepts URIs and to export this data as sets of relationships. This turns your database into a graph, ready for semantic search and querying.
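The record-to-graph step Maréchal describes can be sketched with hypothetical records, fields, and a made-up URI scheme: each field of a flat record becomes a relationship whose object is a shared URI, so records that mention the same composer end up linked through the same node.

```python
# A minimal sketch of semantic enrichment: flat catalogue records become
# triples, with concepts (here, composers) minted as URIs so that records
# sharing a concept link to the same node. URI scheme and fields are
# hypothetical examples.

records = [
    {"id": "rec1", "title": "Symphony No. 5", "composer": "Beethoven"},
    {"id": "rec2", "title": "Fidelio", "composer": "Beethoven"},
]

def to_triples(record, base="http://example.org/"):
    subject = base + "work/" + record["id"]
    triples = [(subject, base + "prop/title", record["title"])]
    # Mint a URI for the composer so both records point at the same node.
    triples.append((subject, base + "prop/composer",
                    base + "agent/" + record["composer"]))
    return triples

graph = [t for r in records for t in to_triples(r)]
composers = {o for s, p, o in graph if p.endswith("composer")}
print(composers)  # both works link to the single Beethoven URI
```

This is the sense in which "very little effort" is needed: the mapping is mechanical once the catalogue records are well structured, and the resulting graph is ready for semantic search and querying.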

He argued that “going to semantics cannot be avoided” and that “born digital” works will increasingly be created with semantically modelled metadata.

From Mass Digitisation to Mass Content Enrichment

The next talk was a description of the SONUMA digitisation and metadata enhancement project. Sonuma and Memnon Archiving Services have been working on inventories and dictionaries to help them index audio visual assets. They have been converting speech to text, holding the text as XML files, and then associating sections of the XML with the appropriate point in the AV content, so that it can be searched.

They identify breaks between programmes by reading the burnt-in time stamps with OCR and looking for jumps in the numerical sequence; a jump in the numbers is assumed to be a break between programmes. This enables them to split long tapes into sections, which usually correspond to programmes.
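The jump-detection idea reduces to a few lines. This sketch (made-up numbers and threshold) treats the OCR'd timecodes as a sequence and flags any step larger than a threshold as a programme boundary:

```python
# Sketch of timecode jump detection: any step in the OCR'd number sequence
# larger than max_step is taken as a break between programmes.
# The numbers and threshold below are illustrative, not from the project.

def find_breaks(timecodes, max_step=2):
    """Indices at which the timecode sequence jumps (programme breaks)."""
    return [i for i in range(1, len(timecodes))
            if abs(timecodes[i] - timecodes[i - 1]) > max_step]

# Two programmes on one tape: the second restarts its timecode at 1.
ocr_numbers = [1, 2, 3, 4, 5, 1, 2, 3]
print(find_breaks(ocr_numbers))  # [5]
```

In practice the threshold would have to tolerate OCR misreads and dropped frames, which is presumably where most of the project's effort went.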

Social networking and Knowledge Management

Tom Adami described Knowledge Management projects at the United Nations Mission in Sudan (Best Practice Lessons Learnt: How the Exit Interview and Oral History Project at UNMIS is building a knowledge database). The UN in Africa faces problems of high staff turnover, remote locations, and difficulties in maintaining infrastructure. However, they have been using social networking to encourage people to share their knowledge and experience in a user-friendly way and so add to the official knowledge base.

Archive as a social media lab: Creative dissemination of digital sound and audiovisual collections

Budhaditya Chattopadhyay talked about a project to bring together archival practice, artistic practice, and social media. He also referred to the problems of preserving social media, which is essentially ephemeral but may be an integral part of an artwork.

IASA Conference 2011: Keynote speech on Linked Open Data

Kevin Bradley, IASA president, gave the welcome address to the 42nd IASA annual conference. He characterised the digital revolution as one that will continue reverberating for years. He reminded us that it is not always easy to sort the sense from the nonsense and that we are often surprised by what turns out to be valid and how easy it is not to see the wood for the trees – or perhaps to “lose the word to the bits”, or the “picture to the pixels”.

Keynote address – on Linked Open Data

The keynote speech was given by Ute Schwens, deputy director of the Deutsche Nationalbibliothek / German National Library (DNB). She opened with a lovely visualisation by the Opte Project of various routes through a portion of the Internet. It looked a bit like visualisations of neurons in a brain or stars and galaxies.

Ute’s talk was in support of publishing Linked Open Data. She outlined some of the concerns – lack of money, open access versus intellectual property rights, poor quality of data and assets themselves, and inadequate legal frameworks. She said that we shouldn’t be trying to select for digitisation, because everything in an archive has already been selected or it wouldn’t have been kept in the first place. She also highlighted the benefits of making digital versions of unique or fragile artefacts, in order to allow access without risk to the original. She talked about how there are many ways to digitise and that these produce different versions, so an archival master that is as close to the original as possible should always be preserved.

She used as an illustration original piano rolls. These can only be played on very specialised electrical pianos and users cannot practically be given access to them directly, but they were played by specialists and the music recorded, so users can be given access to that. The recordings are not the same as the piano rolls, but are a new and interesting product. It seems obvious that you would not destroy the original piano rolls simply because the music from them had been recorded and now exists in a digital version, so why should you destroy other forms of media such as film, simply because you have a digital version? The digital version in such cases is for access, not preservation.

One fear is that free access to information will diminish usage of an archive or library, but by opening up you can gain new users, especially by providing free access to catalogues and metadata (I like to think of these as “advertising” – shops make their catalogues freely available because they see them primarily as marketing tools).

Another fear is loss of control, but new scientific ideas often arise when diverse strands of thought are brought together and unexpected uses are made of existing data. The unusual and the unforeseen is often the source of the greatest innovation.

She pointed out that we have been refining searching and indexing rules for centuries, trying to make objects as findable as possible, so Linked Open Data is merely the next logical step. We can combine automatically generated information with data we already have to provide multiple access points. We need descriptions to put objects into context, but we don’t have to describe what objects look like in the ways that we used to for catalogues. Good metadata is metadata that is useful for users, not metadata merely for maintaining catalogues.

She ended by calling for more open access to data as ways to promote our collections and their value, adding that in uncertain times, our only security is our ability to change.

In the discussion afterwards, she said that Google needs our data and the best way to engage with – and even influence – Google is by gaining recognition as a valued supplier and making sure Google understands how much it needs us to provide it with good quality data.

The conference was hosted by Deutsche Nationalbibliothek / German National Library (DNB), Hessischer Rundfunk / Hessian Broadcasting (hr), and the Deutsches Rundfunkarchiv / German Public Broadcasting Archives (DRA). The sponsors were EMC2 (gold); Memnon Archiving Services, NOA audio solutions, and Arvato digital services (Bertelsmann) (silver); and Cedar audio, Cube-tec International, Front Porch Digital, and Syylex Digital Storage (bronze).

UK Archives Discovery Forum

I very much enjoyed the UKAD UK Archives Discovery Forum event at the National Archives. There were three tracks as well as plenary sessions, so I couldn’t attend everything.

Linked Data and archives

After an introduction from Oliver Morley, John Sheridan opened by talking about the National Archives and Linked Data. Although not as detailed as the talk he gave at the Online Information Conference last December, he still gave the rallying call for opening up data and spoke of a “new breed” of IT professionals who put the emphasis on the I rather than the T. He spoke about Henry Maudslay who invented the screw-cutting lathe, which enabled standardisation of nuts and bolts. This basically enabled the industrial revolution to happen. Previously, all nuts and bolts were made individually as matching pairs, but because the process was manual, each pair was unique and not interchangeable. If you lost the bolt, you needed a new pair. This created huge amounts of management and cataloguing of individual pairs, especially if a machine had to be taken apart and re-assembled, and meant interoperability of machinery was almost impossible. Sheridan asserted that we are at that stage with data – all our data ought to fit together but at the moment, all the nuts and bolts have to be hand crafted. Linked Data is a way of standardising so that we can make our data interchangeable with other people’s. (I like the analogy because it makes clear the importance of interoperability, but obviously getting the nuts and bolts to fit is only a very small part of what makes a successful machine, let alone a whole factory or production line. Similarly Linked Data isn’t going to solve broad publishing or creative and design problems, but it makes those big problems easy to work on collaboratively.)

Richard Wallis from Talis spoke about Linked Data. He likes to joke that you haven’t been to a Linked Data presentation unless you’ve seen the Linked Open Data cloud diagram. My version is that you haven’t been to a Linked Data event unless at least one of the presenters was from Talis! Always an engaging speaker, his descriptions of compartmentalisation of content and distinctions between Linked Data, Open Data, and Linked Open Data were very helpful. He likes to predict evangelically that the effects of linking data will be more profound to the way we do business than the changes brought about by the web itself. Chatting to him over tea, he has the impression that a year ago people were curious about Linked Data and just wanted to find out what it could do, but this year they are now feeling a bit more comfortable with the concepts and are starting to ask about how they can put them into practice. There certainly seemed to be a lot of enthusiasm in the archive sector, which is generally cash-strapped, but highly co-operative, with a lot of people passionate about their collections and their data and eager to reach as wide an audience as possible.

A Vision of Britain

Humphrey Southall introduced us to A Vision of Britain, which is a well-curated online gazetteer of Britain, with neat functions for providing alternative spellings of placenames, and ways of tackling the problems of boundaries, especially of administrative divisions, that move over time. I’m fascinated by maps, and they have built in some interesting historical map functionality too.

JISC and federated history archives

David Flanders from JISC talked about how JISC and its Resource Discovery Task Force can provide help and support to educational collections especially in federation and Linked Data projects. He called on archives managers to use hard times to skill up, so that when more money becomes available staff are full of knowledge, skills, and ideas and ready to act. He also pointed out how much can be done in the Linked Data arena with very little investment in technology.

I really enjoyed Mike Pidd’s talk about the JISC-funded Connected Histories Project. They have adopted a very pragmatic approach to bringing together various archives and superimposing a federated search system based on metadata rationalisation. Although all they are attempting in terms of search and browse functionality is a simple set of concept extractions to pick out people, places, and dates, they are having numerous quality control issues even with those. However, getting all the data into a single format is a good start. I was impressed that one of their data sets took 27 days to process and that they still take delivery of data on drives through the post. They found this much easier to manage than FTP or other electronic transfer, simply because of the terabyte volumes involved (something that many people tend to forget when scaling up from little pilot projects to bulk processes). Mike cautioned against using RDF and MySQL as processing formats: they found that MySQL couldn’t handle the volumes, and RDF was too “verbose”. They chose a pure Lucene solution, which enabled them to bolt in new indexes rather than reprocess whole data sets when they wanted to make changes. They can still publish out to RDF.

Historypin

Nick Stanhope enchanted the audience with Historypin, an offering from wearewhatwedo.org. Historypin allows people to upload old photos, and soon also audio and video, and set them in Google Street View. Although Flickr has some similar functions, Historypin has volunteers who help to place each image in exactly the right spot, and Google have been offering support and are working on image recognition techniques to help place photos precisely. This allows rich historical street views to be built up.

What impressed me most, however, was Nick’s distinction between subjective and objective metadata: objective metadata is metadata that can be corrected, and subjective metadata is metadata that cannot. So he sees the time and the place that a photo was taken as objective metadata – if it is wrong, someone might know better and be able to correct it – and the stories, comments, and opinions that people have about the content as subjective metadata, which others cannot correct: if you upload a story or a memory, no-one else can tell you that it is wrong. We could split hairs over this definition, but the point is apposite when it comes to provenance tracking.

He also made the astute observation that people very often note the location that a photo is “of”, but it is far more unusual for them to note where it was taken “from”. However, where it was taken from is often more use for augmented reality and other applications that try to create virtual models or images of the world. Speaking to him afterwards, I asked about parametadata, provenance tracking, etc., and he said these are important issues they are striving to work through.

Women’s history

Theresa Doherty from the Women’s Library ended the day with a call to stay enthusiastic and committed despite the recession. It is an achievement that archives are still running despite the cuts, and this shows how valued data and archives are in the national infrastructure and how important recording our history is. While archivists continue to value their collections, enjoy their visitors and users, and want their data to reach a wider audience, the sector will continue to progress. She described how federating the Genesis project within the Archives Hub had boosted use of their collections, but pointed out that funders of archives need to recognise that online usage of collections is just as valid as getting people to turn up physically. At the moment funding is typically allocated on visitor numbers through the doors, and this puts too much emphasis on trying to drag people in off the street at the expense of trying to reach a potentially vast global audience online.