ISKO UK | Google Ups its Stakes

ISKO UK’s KOnnect blog notes that Google, at least, is taking metadata seriously.

While chatting about Wolfram Alpha the other day, someone pointed out to me that specialist knowledge for a general audience is actually a very niche area, and that this is the source of the hype: you need to persuade your VC funders that you are revolutionary when you actually have a very tricky business model. Serious researchers will already be using specialised systems, and most people want to look up things like train times rather than the atomic weights of elements, so your market is people with an intermediate level of interest, such as students and journalists. Perhaps there are enough of them in the world to generate plenty of advertising revenue, but it seems like a tough call.

I hope the funders are happy with the old reference publishing model – lots of investment up front, in the hope not that the finished product will generate huge initial profits, but that it will have a long, steady life. Wolfram Alpha employed 150 people in essentially traditional content-creation roles, and it will be interesting to see how they get their money back. Google doesn’t have to pay for its own content or metadata creation!

BBC NEWS | Wolfram Alpha ‘as important as Google’

BBC NEWS | Technology | Web tool ‘as important as Google’. Here’s a new search tool that will – apparently – be “like interacting with an expert, it will understand what you’re talking about, do the computation, and then present you with the results”. Dr Wolfram says: “Wolfram Alpha is like plugging into a vast electronic brain…It computes answers – it doesn’t merely look them up in a big database.”

It is clearly a very sophisticated search engine – I imagine it has a bit of natural language processing with some “mashup” algorithms – and all such developments are very exciting. I am sure it will be very, very useful in relevant contexts and will have lots of very productive applications. It just seems to me to be ironic that the experts who devote themselves to promoting knowledge and understanding are so bad at picking words to describe in a sensible way what they have achieved. Is it marketing departments gone mad? Are they all misquoted by mischievous journalists? I hope if I spoke about this to Dr Wolfram he would understand what I’m talking about…

UPDATE: There’s a New Scientist preview of Wolfram Alpha, which explains a bit more about how it works. As far as I can work out, they have built a big database and are promoting its “authoritativeness” – so back to the “quality information has been mediated by experts” model.
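The distinction being marketed – computing answers rather than merely looking them up – can be illustrated with a toy sketch. This is entirely my own illustration and assumes nothing about Wolfram’s actual implementation: a curated fact base with a layer of computation on top, so the engine can answer questions whose answers it never stored.

```python
# Toy illustration (my own, not Wolfram's architecture): a small curated
# fact base plus computation layered on top. The engine doesn't store the
# molar mass of every compound; it stores atomic weights and computes.
ATOMIC_WEIGHT = {"H": 1.008, "C": 12.011, "O": 15.999}  # curated, expert-mediated data

def molar_mass(formula):
    """Compute a molar mass from a parsed formula like [("H", 2), ("O", 1)]."""
    return sum(ATOMIC_WEIGHT[element] * count for element, count in formula)

water = [("H", 2), ("O", 1)]
print(f"{molar_mass(water):.3f} g/mol")  # 18.015 g/mol
```

The point of the sketch is that the “authoritativeness” still lives in the curated data; the computation only combines it.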

ANOTHER UPDATE: First impressions from BBC technology: Wolfram Alpha first impressions.

See also Karen Blakeman’s Blog: Wolfram Alpha is out – hmmm…

A proposal to expand the design space of classification

In Beyond retrieval: A proposal to expand the design space of classification, Melanie Feinberg argues that classifications are not just about efficient retrieval, but about mapping a conceptual space as an active part of problem-solving or design.

A classification highlights connections, contrasts, and fuzzy boundaries, so it seems to me an obvious tool to aid analysis. Comparing different classifications can also illustrate different aspects of an idea or domain. I am very used to the principle of building classifications, seeing how things fit, looking at the things that don’t fit, throwing the classification away, and starting again. You always learn a lot about the topic you are working with in the process.

There seem to be a lot of classifications in Human-Computer Interaction that are used as checklists (things like the DECIDE framework), rather than retrieval tools. It strikes me that library classifications are the special case, rather than “checklist classifications”, which are very common. But then that is just a question of how you classify classifications.

Human-Machine Symbiosis for Data Interpretation

I went to the ISKO event on Thursday. The speaker, Dave Snowden of Cognitive Edge was very entertaining. He has already blogged about the lecture himself.

He pointed out that humans are great at pattern recognition (“intuition is compressed experience”) and great satisficers (while computers are great at optimising), and that humans never read or remember the same word in quite the same way (has anyone told Autonomy this?). I suppose this is the accretion of personal context and experience affecting your understanding of a word. I remember as a child forming very strong associations with the names of people I liked or disliked – if I disliked the person, I thought the name itself was horrible. This is clearly a dangerous process (and one I hope I have grown out of!), but presumably it is part of the way people end up with all sorts of irrational prejudices, and it also explains why “reclaiming” words like “queer” eventually works: if you keep imposing new contexts on a word, those contexts will come to dominate.

This matters for taxonomy work: it explains why people feel so strongly about how things should be named, and why they won’t all agree. It must also be connected to why language evolves (and how outdated taxonomies start to cause problems rather than solve them – like Wittgenstein’s gods becoming devils).

Snowden also talked about the importance of recognising the weak signal, and has developed a research method based on analysing narratives, using a “light touch” categorisation (to preserve fuzzy boundaries) and allowing people to categorise their own stories. He then plots the points collected from the stories to show the “cultural landscape”. If this is done repeatedly, the “landscapes” can be compared to see if anything is changing. He stressed that his methodology required the selection of the right level of detail in the narratives collected, disintermediation (letting people speak in their own words and categorise in their own way within the constraints), and distributed cognition.
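The mechanics of comparing “landscapes” across collection rounds can be sketched very roughly. Everything here is my own invention for illustration – the axes, the binning, and the numbers have nothing to do with Cognitive Edge’s actual tooling; the only idea taken from the talk is that self-placed story points form a density map, and successive maps can be compared for change.

```python
import numpy as np

# Hypothetical illustration: each story is self-placed by its teller as a
# point in a 2D space (the axes are my invention, not Snowden's), and the
# density of points over many stories forms the "cultural landscape".
rng = np.random.default_rng(0)
round_1 = rng.normal(loc=[0.3, 0.7], scale=0.1, size=(200, 2))  # first collection
round_2 = rng.normal(loc=[0.5, 0.5], scale=0.1, size=(200, 2))  # later collection

def landscape(points, bins=10):
    """Bin self-placed story points into a coarse density grid (the 'landscape')."""
    hist, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                bins=bins, range=[[0, 1], [0, 1]])
    return hist / len(points)

# Comparing successive landscapes shows whether anything is shifting.
shift = np.abs(landscape(round_1) - landscape(round_2)).sum()
print(f"total density shift between rounds: {shift:.2f}")
```

The coarse binning is deliberate: a “light touch” categorisation keeps the fuzzy boundaries that a fine-grained scheme would destroy.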

I particularly liked his point that when people self-index and self-title they tend to use words that don’t occur in the text, which is a serious problem for semantic analysis algorithms (although I would comment that third party human indexers/editors will use words not in the text too – “aboutness” is a big problem!). He was also very concerned that computer scientists are not taught to see computers as tools for supporting symbiosis with humans, but as black box systems that should operate autonomously. I completely agree – as is probably quite obvious from many of my previous blog posts – get the computers to do the heavy lifting to free up the humans to sort out the anomalies, make the intuitive leaps, and be creative.
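The self-indexing problem is easy to demonstrate. A minimal sketch (the story and tags are invented examples, not from the talk): measure what fraction of a teller’s own tags actually occur in the text – any purely text-matching algorithm can only ever see that fraction.

```python
import re

def tag_text_overlap(tags, text):
    """Fraction of self-assigned tags that literally appear in the text.

    A low overlap illustrates the 'aboutness' problem: indexers often
    describe a document with words the document never uses, which defeats
    purely text-matching analysis.
    """
    words = set(re.findall(r"[a-z']+", text.lower()))
    tags_lc = [t.lower() for t in tags]
    present = [t for t in tags_lc if t in words]
    return len(present) / len(tags_lc) if tags_lc else 0.0

story = "The printer jammed again and nobody could fix it before the deadline."
self_tags = ["frustration", "equipment", "deadline"]
print(f"{tag_text_overlap(self_tags, story):.2f}")  # only "deadline" is in the text
```

The same check applied to third-party indexing would, I suspect, show the same gap – which is the point about “aboutness”.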

UPDATE: Here’s an excellent post on this talk from Open Intelligence.

More on semantics

Here’s a straightforward mini-review of the state of semantic search from the Truevert search engine, dividing semantic search techniques into four groups, with no mention of digital essences or other mysticism.

Meaning – solved

Ever pondered what exactly something meant? Not sure if you’d missed some subtext subtlety or failed to grasp a nuance of technical term usage? Wonder no more. The merger of Autonomy and Interwoven will explain it all. According to Mike Lynch, founder and chief executive of Autonomy, in an interview published in Information World Review in March, the merger “will let Autonomy put its technology inside Interwoven products and make them capable of understanding meaning….meaning-based computing extracts the digital essence of information and understands the meaning of content and interactions.”

I really need to get my hands on the new product so it can explain to me what “the digital essence of information” means!

Communities of Practice

I found Communities of Practice (CoP) by Etienne Wenger to be one of those strange books that lots of people told me I must read – and it is relevant to taxonomy work (although this post digresses) – but when I did read it, it all seemed so totally obvious I could hardly believe it had taken until the 1980s to be formulated. Barbara Rogoff and Jean Lave also pioneered the thinking, but I feel sure the ideas must date back at least to medieval trade guilds. It is one of the odd features of academia that sometimes the obvious has simply not been noticed and it is the recognition of the obvious that is revolutionary.

The core ideas are that we don’t just learn about doing something or even how to do something, we learn to be a person that does those things, and this shapes our identities. So, I can get my editorial assistants to read Judith Butcher on copy editing to teach them about editing, I can give them practical exercises so they learn how to copy edit, but it is only after they have been given real copy editing work, amongst other copy editors, that they experience how copy editors behave, and so learn how to be copy editors. Learning is therefore a continuous lifelong process.

In the UK there has traditionally been a divide between learning about (academic) and learning how (vocational), with learning to be happening outside the educational system, in workplaces (e.g. via apprenticeships). Wenger emphasises the need to encourage learning to be, and of course it is vital, but politically it worries me that too much responsibility for this is currently falling on academia and not enough on employers (I’m probably misrepresenting Wenger here). As an employer I think I ought to invest in training new staff (and in ongoing staff development), mainly because I can train staff to be exactly the way they need to be in the specific employment context. There is no practical way that a national education system could be so specific, unless it only caters to a handful of big corporations, which don’t need the help or the additional social power. On the other hand, I really don’t want to have to teach new staff lots of learning about – grammar and spelling, for example – that can be taught perfectly well in the classroom.

I think a civilised society should be willing to pay collectively for some essentially uncommercialised public spaces (e.g. universities) where people can just think in order to get better at thinking. A vocational element is great (I have personally enjoyed and benefited from the vocational aspects of my course) but part of my motivation for returning to university was to have time to explore questions and experiment with ideas without limiting myself to only those that I could show in advance would bring in some cash.

How does all this relate to taxonomy work? A taxonomy may be needed within a single community of practice, in which case recognising the user group as a CoP may help make sense of the project and the terminology required. Conversely, a taxonomy may need to be a boundary object between CoPs, perhaps even linking numerous CoPs together. By recognising and identifying different CoPs in an organisation, a taxonomist can get a picture of the different dialects and practices that exist and need to be taken into account.

A new taxonomist also needs to learn to be a taxonomist, and the taxonomy communities of practice (both specific and theoretical) already out there play a vital role in this process.