Human-Machine Symbiosis for Data Interpretation

I went to the ISKO event on Thursday. The speaker, Dave Snowden of Cognitive Edge, was very entertaining. He has already blogged about the lecture himself.

He pointed out that humans are great at pattern recognition (“intuition is compressed experience”) and are great satisficers (computers are great at optimising), and that humans never read or remember the same word in quite the same way (has anyone told Autonomy this?). I suppose this is the accretion of personal context and experience affecting your own understanding of the word. I remember as a child forming very strong associations with names of people I liked or disliked – if I disliked the person, I thought the name itself was horrible. This is clearly a dangerous process (and one I hope I have grown out of!) but presumably it is part of the way people end up with all sorts of irrational prejudices, and it also explains why “reclaiming” words like “queer” eventually works. If you keep imposing new contexts on a word, those contexts will come to dominate.

This factors into taxonomy work, as it explains why people feel so strongly about how things should be named, and why they will never all agree. It must also be connected to why language evolves (and how outdated taxonomies start to cause rather than solve problems – like Wittgenstein’s gods becoming devils).

Snowden also talked about the importance of recognising the weak signal, and has developed a research method based on analysing narratives, using a “light touch” categorisation (to preserve fuzzy boundaries) and allowing people to categorise their own stories. He then plots the points collected from the stories to show the “cultural landscape”. If this is done repeatedly, the “landscapes” can be compared to see if anything is changing. He stressed that his methodology required the selection of the right level of detail in the narratives collected, disintermediation (letting people speak in their own words and categorise in their own way within the constraints), and distributed cognition.
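To make the “plotting the landscape” idea a little more concrete, here is a minimal Python sketch of my own (a toy illustration, not Cognitive Edge’s actual tooling or data): each story carries a pair of self-assigned scores on two interpretive axes, the scores are binned into a coarse grid, and two rounds of collection are compared to see which parts of the landscape have gained or lost stories.

    from collections import Counter

    def landscape(stories, bins=4):
        """Bin self-assigned (x, y) scores in [0, 1] into a coarse grid of cells."""
        grid = Counter()
        for x, y in stories:
            cell = (min(int(x * bins), bins - 1), min(int(y * bins), bins - 1))
            grid[cell] += 1
        return grid

    def drift(before, after):
        """Cells whose story counts changed between two rounds of collection."""
        cells = set(before) | set(after)
        return {c: after.get(c, 0) - before.get(c, 0)
                for c in cells if after.get(c, 0) != before.get(c, 0)}

    # Invented self-signification scores from two hypothetical collection rounds.
    round_one = [(0.1, 0.9), (0.2, 0.8), (0.7, 0.3)]
    round_two = [(0.1, 0.9), (0.8, 0.2), (0.75, 0.3), (0.9, 0.1)]
    print(drift(landscape(round_one), landscape(round_two)))

In practice the axes, the binning, and the comparison would all be far more carefully designed; the point is only that repeated, lightly structured collection gives you something you can legitimately compare over time.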

I particularly liked his point that when people self-index and self-title they tend to use words that don’t occur in the text, which is a serious problem for semantic analysis algorithms (although I would comment that third-party human indexers/editors use words not in the text too – “aboutness” is a big problem!). He was also very concerned that computer scientists are not taught to see computers as tools for supporting symbiosis with humans, but as black-box systems that should operate autonomously. I completely agree, as is probably quite obvious from many of my previous blog posts: get the computers to do the heavy lifting to free up the humans to sort out the anomalies, make the intuitive leaps, and be creative.
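As a rough illustration of why self-assigned titles are awkward for text-driven tools, the little sketch below (my own toy example, with made-up data) simply measures what fraction of a contributor’s own tags actually occur anywhere in the text; when that overlap is close to zero, any algorithm that relies on extracting terms from the document has nothing to latch onto.

    import re

    def vocabulary(text):
        """Lower-cased set of words in a text."""
        return set(re.findall(r"[a-z']+", text.lower()))

    def tag_overlap(text, tags):
        """Fraction of self-assigned tags that share at least one word with the text."""
        vocab = vocabulary(text)
        hits = [t for t in tags if vocabulary(t) & vocab]
        return len(hits) / len(tags) if tags else 0.0

    # Made-up example: the contributor's own labels never appear in the story itself.
    story = "The new intranet search kept returning the wrong policy documents."
    self_assigned = ["frustration", "findability", "trust in IT"]
    print(tag_overlap(story, self_assigned))  # 0.0 - nothing for term extraction to find

The same check, run over a real collection, would give a quick sense of how far “aboutness” has drifted away from the words on the page.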

UPDATE: Here’s an excellent post on this talk from Open Intelligence.

More on semantics

Here’s a straightforward mini-review of the state of semantic search from the Truevert search engine; it divides semantic search techniques into four groups, with no mention of digital essences or other mysticism.

Meaning – solved

Ever pondered what exactly something meant? Not sure if you’d missed some subtle subtext or failed to grasp a nuance of technical term usage? Wonder no more. The merger of Autonomy and Interwoven will explain it all. According to Mike Lynch, founder and chief executive of Autonomy, in an interview published in Information World Review in March, the merger “will let Autonomy put its technology inside Interwoven products and make them capable of understanding meaning… meaning-based computing extracts the digital essence of information and understands the meaning of content and interactions.”

I really need to get my hands on the new product so it can explain to me what “the digital essence of information” means!

Communities of Practice

I found Communities of Practice (CoP) by Etienne Wenger to be one of those strange books that lots of people told me I must read – and it is relevant to taxonomy work (although this post digresses) – but when I did read it, it all seemed so totally obvious I could hardly believe it had taken until the 1980s to be formulated. Barbara Rogoff and Jean Lave also pioneered the thinking, but I feel sure the ideas must date back at least to medieval trade guilds. It is one of the odd features of academia that sometimes the obvious has simply not been noticed and it is the recognition of the obvious that is revolutionary.

The core ideas are that we don’t just learn about doing something, or even how to do something; we learn to be a person who does those things, and this shapes our identities. So, I can get my editorial assistants to read Judith Butcher on copy editing to teach them about copy editing; I can give them practical exercises so they learn how to copy edit; but it is only after they have been given real copy editing work, amongst other copy editors, that they experience how copy editors behave, and so learn how to be copy editors. Learning is therefore a continuous lifelong process.

In the UK there has traditionally been a divide between learning about (academic) and learning how (vocational), with learning to be happening outside the educational system, in workplaces (e.g. via apprenticeships). Wenger emphasises the need to encourage learning to be, and of course it is vital, but politically it worries me that too much responsibility for this is currently falling on academia and not enough on employers (I’m probably misrepresenting Wenger here). As an employer I think I ought to invest in training new staff (and in ongoing staff development), mainly because I can train staff to be exactly the way they need to be in the specific employment context. There is no practical way that a national education system could be so specific, unless it only caters to a handful of big corporations, which don’t need the help or the additional social power. On the other hand, I really don’t want to have to teach new staff lots of learning about – grammar and spelling, for example – that can be taught perfectly well in the classroom.

I think a civilised society should be willing to pay collectively for some essentially uncommercialised public spaces (e.g. universities) where people can just think in order to get better at thinking. A vocational element is great (I have personally enjoyed and benefited from the vocational aspects of my course) but part of my motivation for returning to university was to have time to explore questions and experiment with ideas without limiting myself to only those that I could show in advance would bring in some cash.

How does all this relate to taxonomy work? A taxonomy may be needed within a single community of practice, in which case recognising the user group as a CoP may help make sense of the project and the terminology required. Conversely, a taxonomy may need to be a boundary object between CoPs, perhaps even linking numerous CoPs together. By recognising and identifying different CoPs in an organisation, a taxonomist can get a picture of the different dialects and practices that exist and need to be taken into account.
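For what it’s worth, here is a very loose sketch (my own toy structure, vaguely SKOS-flavoured, not a real implementation) of what a taxonomy acting as a boundary object might look like: each shared concept keeps a preferred label alongside the terms each community of practice actually uses, so the divergent dialects are recorded rather than flattened.

    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        """One shared taxonomy concept with per-CoP dialect labels (invented example)."""
        pref_label: str
        cop_labels: dict = field(default_factory=dict)  # CoP name -> local term

        def label_for(self, cop):
            """Return the dialect term a given community uses, else the preferred label."""
            return self.cop_labels.get(cop, self.pref_label)

    invoice = Concept(
        pref_label="purchase invoice",
        cop_labels={"finance": "AP invoice", "procurement": "supplier bill"},
    )
    print(invoice.label_for("finance"))    # AP invoice
    print(invoice.label_for("marketing"))  # purchase invoice (shared fallback)

The CoP names and labels here are invented purely for illustration; the useful part is being able to see, concept by concept, where the dialects differ.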

A new taxonomist also needs to learn to be a taxonomist, and the taxonomy communities of practice (both specific and theoretical) already out there play a vital role in this process.

People better than algorithms – official!

“Google Invests in Pixazza, An AdSense for Images” is another neat little crowdsourcing initiative. What interested me the most was this: “As James Everingham, Pixazza’s CTO, states, ‘No computer algorithm can identify a black pair of Jimmy Choo boots from the 2009 fall collection in the same way a person can. Rather than rely on image analysis algorithms, our platform enlists product experts to drive the process.’”

In other words, they are paying indexers/cataloguers. Not very much, it’s true, but it is still good to see someone in tech admitting that old-fashioned human beings still have their uses! Image recognition algorithms are getting better all the time, but we haven’t even really conquered text processing yet.