Is Google Wave a Twitter Killer? heralds the new kid soon to be on the block. Facebook seemed to fall into the “so over” category a while back, and now that Twitter has hit the mainstream, clearly it is next to go.
Google Wave: Five Things You Must Know says a bit more about what Wave will do.
The combination of easy transfer of time-invested work – such as carefully written documents – with instant communication should appeal to businesses, but will present some records management challenges, I’m sure. I can’t decide whether we need social media coalescence – just give me everything in one place – or clearer fragmentation – Wave for work, Facebook for family, and so on – to help with information overload.
UPDATE: Amplified clip on Wave.
Do tags work? is a detailed informal study of tagging on flickr, via Taxonomy Watch: Tags – Not so Useful.
I liked the way this Pasta&Vinegar post highlighted the different information sources used to generate different measures of technology adoption. It also reminded me of Dave Snowden‘s emphasis on the importance of detecting weak signals. At the “prophecy/fantasy” stage the important signals will inevitably be weak, and surrounded by a lot of noise. Spotting trends once they have happened is one thing, but the prediction game is quite different.
ISKO UK’s KOnnect blog notes that at least Google is taking metadata seriously.
When I was chatting about Wolfram Alpha the other day, it was pointed out to me that specialist knowledge for a general audience is actually a very niche area, and that this is the source of the hype: you need to persuade your VC funders you are revolutionary, when actually you have a very tricky business model. Serious researchers will already be using specialised systems, and most people want to look up things like train times rather than the atomic weights of elements. So your market is people like students and journalists, who have an intermediate level of interest. Perhaps there are enough of them in the world to generate plenty of advertising revenue, but it seems like a tough call.
I hope the funders are happy with the old reference publishing model – lots of investment up front, in the hope not that the finished product will generate huge initial profits, but that it will have a long, steady life. Wolfram Alpha employed 150 people in essentially traditional content creation roles, and it will be interesting to see how they get their money back. Google doesn’t have to pay for its own content or metadata creation!
BBC NEWS | Technology | Web tool ‘as important as Google’. Here’s a new search tool that will – apparently – be “like interacting with an expert, it will understand what you’re talking about, do the computation, and then present you with the results”. Dr Wolfram says: “Wolfram Alpha is like plugging into a vast electronic brain…It computes answers – it doesn’t merely look them up in a big database.”
It is clearly a very sophisticated search engine – I imagine it has a bit of natural language processing with some “mashup” algorithms – and all such developments are very exciting. I am sure it will be very, very useful in relevant contexts and will have lots of productive applications. It just seems ironic to me that experts who devote themselves to promoting knowledge and understanding are so bad at picking words to describe sensibly what they have achieved. Is it marketing departments gone mad? Are they all misquoted by mischievous journalists? I hope that if I spoke about this to Dr Wolfram he would understand what I’m talking about…
UPDATE: There’s a New Scientist preview of Wolfram Alpha, which explains a bit more about how it works. As far as I can work out, they have built a big database and are promoting its “authoritativeness” – so back to the “quality information has been mediated by experts” model.
ANOTHER UPDATE: First impressions from BBC technology: Wolfram Alpha first impressions.
Karen Blakeman’s Blog » Blog Archive » Wolfram Alpha is out – hmmm…
I’ve just discovered the Librarian of Fortune blog, and found these two useful posts on specialised search engines: Searching the deep web and Legal Research Engine.
In Beyond retrieval: A proposal to expand the design space of classification, Melanie Feinberg argues that classifications are not just about efficient retrieval, but about mapping a conceptual space as an active part of problem-solving or design.
A classification highlights connections, contrasts, and fuzzy boundaries, so it seems to me an obvious tool to help analysis. Comparing different classifications can also illustrate different aspects of an idea or domain. I am very used to the principle of building classifications, seeing how things fit, looking at the things that don’t fit, throwing the classification away, and starting again. You always learn a lot about the topic you are working with in the process.
There seem to be a lot of classifications in Human-Computer Interaction that are used as checklists (things like the DECIDE framework), rather than retrieval tools. It strikes me that library classifications are the special case, rather than “checklist classifications”, which are very common. But then that is just a question of how you classify classifications.