Live content matters

Put together on November 2, 2009 5:30 pm by Dimitris

A few months back I was talking to a guy who works at the Greek subscription channel Nova and he was telling me that the content most sought after by TV channels is none other than the coverage of live events. In Greece this means mostly football and basketball games and perhaps to a lesser extent other sports and music concerts.

[Image: Sports from childhood. Football (soccer) shown... – via Wikipedia]

And there’s good reason for that. As broadband use and torrent downloading increase, any type of content a provider offers, other than live events, can be found online. Films can be downloaded, news and information can be found in many more sources online, and even TV series produced explicitly for (and by) specific Greek TV channels can easily be found online if one knows where to look – as little as a day after they air. And that knowledge is becoming less and less obscure – in fact, most of it is quite well known.

It doesn’t matter whether it’s legal or not – and in most cases it isn’t. People’s morality (and even their desire for high quality) will most often be put aside in the face of a free, effortless choice without side effects.

So the providers are left with only the content that cannot be found elsewhere: live events. And now, as internet TV becomes the next battle zone for those providers, live events are the prime prize.

tags: analysis



Question search engine

Put together on October 29, 2009 12:15 pm by Dimitris

[Image: Questions, by Oberazzi via Flickr]

These days, if you think about it, it’s super easy to find the answer to pretty much any question you can think of – perhaps excluding very exotic and obscure issues. A simple search engine query is all it takes. If you allow for asking other individuals (who might be keepers of such exotic information), email and social media are also relatively simple tools for getting answers.

So if it’s not about having the answers, what is it about?

Often, it’s that we don’t know the questions themselves. Picasso said that “Computers are useless, they can only give you answers”. This holds true for the internet as well – and highlights our information overload in conjunction with (and in contrast to) our knowledge deficit.

Think, however, of a search engine that would accept keywords or entire passages as input and return interesting questions based on the content submitted. As artificial intelligence is not yet advanced enough (right?) to create such questions from scratch, that search engine could at least find relevant questions already posed by other people.

And why would that be useful?

For one thing, for educational purposes. Questions can lead a mind along a learning path it never knew existed. In particular, when it comes to lifelong learning or to educating oneself on a specific subject, it would be valuable to have the right questions as guidance.

And that’s the other thing. Questions identify what’s important. They pick out what’s worth thinking about and they allow the rest to be ignored. They generate focus and meaning.

And the best thing is that in our current situation the answers to those questions already exist. The web is teeming with FAQs, mailing lists, newsgroups and papers addressing particular and broad questions. So, questions can act as ‘lenses’ that allow us to change focus and evolve our knowledge from one piece of information to the next.

I wonder if it’d be possible to create a service which, when fed a paragraph from a news article or a blog post, would identify its main keywords (think the top 3 or 4 words in its tag cloud) and supply, right next to that text, some relevant questions. These questions could be found by means as simple as querying Google with the main keywords and keeping only the sentences that end with a question mark – just that. Each result would also be accompanied by a link to the text that follows the question – presumably the answer.
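To make that a bit more concrete, here’s a rough sketch in Python of what such a pipeline could look like. The keyword extraction is deliberately naive (word counts minus stopwords), and the search() callable is just a placeholder for whatever search backend one would plug in – I’m not assuming any particular real API here.

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
             "that", "for", "on", "with", "as", "this", "be", "are"}

def top_keywords(text, n=4):
    # The n most frequent non-stopwords - a crude stand-in for a tag cloud.
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(n)]

def find_questions(paragraph, search, n_keywords=4):
    # 'search' is a placeholder: any callable that takes a query string and
    # returns result dicts with 'snippet' and 'url' keys (e.g. a wrapper
    # around whichever search API is available).
    query = " ".join(top_keywords(paragraph, n_keywords))
    questions = []
    for result in search(query):
        snippet, url = result["snippet"], result["url"]
        # Keep only the sentences that end with a question mark.
        for sentence in re.split(r"(?<=[.!?])\s+", snippet):
            if sentence.endswith("?"):
                questions.append((sentence, url))
    return questions

Feeding it a paragraph plus a search wrapper would return (question, link) pairs ready to be shown next to the original text.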

Wouldn’t such a feature provide additional valuable content to existing text?

(By the way, such a service could also be integrated quite well with the ‘paragraph summary’ platform I described in this post.)

tags: idea, question



“I hate Mozilla”

Put together on October 26, 2009 3:56 pm by Dimitris

Well, perhaps ‘hate’ is a bit too strong a word, but just as I was taking a break from work today I bumped into this.

[Video: Raindrop UX Design and Demo, from Mozilla Messaging on Vimeo]

Mozilla Raindrop aims to become a ‘unified inbox’ for all your online activity. Now, that’s hardly a new idea, and way too many attempts have been made to address the problem of information overload from the constantly increasing number of sources these days.

But here’s the thing with this particular announcement. My instinctive reaction was ‘Ok, where do I download it?’ – and I never do that. After having played around with many applications that proclaim to manage this and achieve that, I just don’t bother any more. Why? Because I know that in the online world it’s very difficult to deliver – especially with startups offering early versions. I’ll usually wait for the early adopters to separate the wheat from the chaff. So, unless it’s a friend asking me to test his product or idea, it’s ‘uh-oh, not for me, thanks’.

But not today. Raindrop got me.

Why? Is it the excellent description on The Next Web? No, you read stuff like that all the time. Is it the list of features? No, they’re worthless without a good implementation behind them. Is it the video demo? Hardly impressive (though it seems on the right track). So, it must be that, at least unconsciously, I trust Mozilla and Mozilla Labs. From Firefox (ok, leaving its memory usage aside) and its being open source, to experimental projects like Ubiquity and Jetpack, Mozilla is (or is expected to be) there to come up with the right ideas and deliver – and I’m happy about this. And although I’m all for, say, Chrome and Opera when it comes to browsers, and Gmail when it comes to email, I have to admit that Mozilla has me in a stranglehold.

[Image: Mozilla Foundation logo, via Wikipedia]

And that’s why I hate it.

Oh, and also because Raindrop is only at version 0.1.

tags: product



Enabling content shortening

Put together on October 23, 2009 11:38 am by Dimitris

[Image: TALL & short, by CCiYn via Flickr]

In my previous post I wrote about how information can be broken down into its components. Following that, these components can be summarised and the summaries combined again to create a shorter, and perhaps easier to digest and spread, piece of information.

So, if this ‘analysis, summary and combination’ approach is a good idea, what would be the best way to implement it on existing bodies of information – e.g. blog posts, news articles and other passages on the web?

Most probably a number of methods can be combined. For example, the author of the text can do this at the time of writing to produce a shorter version of it. In some cases this is already happening in one way or another: WordPress blogs allow for filling in an ‘Excerpt’ field (‘optional hand-crafted summaries of your content that can be used in your theme’), some CNN articles carry a 3-5 bullet-point summary of their facts, and I bet there are other examples out there.

In addition, this process can be crowdsourced. Technically, this is as simple as enabling your site to receive comments – only instead of comments, users would leave summaries. And instead of writing them at, say, the end of a blog post, a reader could click an icon at the start of a paragraph to leave a summary of the paragraph that follows. The summary could then appear as a permanent addition to the post (after it has been moderated, of course).

In a sense, that’s an even better approach than letting authors do it themselves, since the ‘crowds’ introduce a selection criterion: only the most worthy texts will be given a more concise version. (Of course, there’s nothing stopping the actual author from providing a summary himself at a later time.)
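For what it’s worth, here’s a minimal sketch (in Python, with made-up names) of the data model such a feature might need behind the scenes: a submitted summary tied to a post and a paragraph, plus a moderation flag that controls what actually gets displayed.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ParagraphSummary:
    post_id: int
    paragraph_index: int      # which paragraph of the post it summarises
    author: str
    text: str
    approved: bool = False    # only shown publicly after moderation

class SummaryStore:
    def __init__(self):
        self.summaries: List[ParagraphSummary] = []

    def submit(self, summary: ParagraphSummary) -> None:
        # Called when a reader clicks the icon next to a paragraph
        # and submits a summary of it.
        self.summaries.append(summary)

    def approve(self, post_id: int, paragraph_index: int, author: str) -> None:
        # Moderation step: mark a submitted summary as approved.
        for s in self.summaries:
            if (s.post_id, s.paragraph_index, s.author) == (post_id, paragraph_index, author):
                s.approved = True

    def visible(self, post_id: int) -> Dict[int, str]:
        # Approved summaries to render next to the post, keyed by paragraph index.
        return {s.paragraph_index: s.text
                for s in self.summaries
                if s.post_id == post_id and s.approved}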

Such crowdsourcing is already happening through various web-annotation services (mostly for commenting or editing purposes, e.g. GooseGrade), or it can be achieved by adapting online translation services (e.g. Transifex). The list of similar services is quite long if one looks into it.

Such services, however, are not made specifically for shortening an existing passage and, in this way, promoting it and adding meaning to it, so perhaps a more specific service could be created to cover this particular niche (especially as the business of providing meaning becomes more relevant while Semantic Web technologies mature).

tags: analysis, idea, product



Information building blocks

Put together on October 20, 2009 11:48 am by Dimitris

[Image: My social Network on Flickr, Facebook, Twitter..., by luc legay via Flickr]

While code is compiling I thought I’d do some maintenance work here and (after upgrading to WordPress v2.8.4) I stumbled upon a draft post which I had started ages ago but never finished. It was inspired by an article by Jeff Jarvis, whom I deeply respect as a blogger, about how the hyperlink and the topic are becoming the most important new building blocks of news. (Ironically, I haven’t kept the particular link…) Considering information in general, the idea is of considerable relevance now that a lot of information spreads via a single sentence (its title) and an accompanying link – whether in Twitter, Facebook or FriendFeed news feeds.

Perhaps, then, information can be thought of as a bit like language and its components: you have a large piece of text, split into paragraphs, sentences, words and finally their roots. Similarly, a blog post or a news article – or even a podcast or a video, since they too can nowadays be ‘converted’ to text – can be broken down into its constituents, going from larger to smaller parts.

So, for example, in the same way that a word can be broken down into 2-3 parts that indicate different things (e.g. un-forget-able), and in the same way that a blog post has paragraphs with distinct information in them, maybe information in general can also be broken down into its constituents – the main building blocks it’s made up of: its topics and subtopics, the facts building up to an argument, a list of arguments towards a thesis, etc.

Take a closer look at the next article you read – can you break it down into its constituents? These components may be concentrated in the first sentence of each paragraph, taken verbatim (this works well on some news sites), or in a title/summary of the paragraph written explicitly to contain its core idea. Some blog posts in particular – especially if they are really short – may be ‘summarised’ in a single sentence or two.

I think the main idea of an article’s paragraph can be summarised, with minimal loss of content, in a single simple sentence (hence the invention of titles). So if you could collapse, say, a 5-paragraph post into 5 relatively simple sentences, that would be a huge step towards limiting information overload.
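As a toy illustration of the ‘first sentence taken verbatim’ idea, here’s a small Python sketch. It simply assumes paragraphs are separated by blank lines, so treat it as a starting point rather than a real summariser.

import re

def first_sentence(paragraph):
    # Take the first sentence of a paragraph verbatim as its 'summary'.
    match = re.match(r".+?[.!?](?=\s|$)", paragraph.strip(), re.DOTALL)
    return match.group(0) if match else paragraph.strip()

def collapse(post_text):
    # Collapse a post into one sentence per paragraph; paragraphs are
    # assumed to be separated by blank lines.
    paragraphs = [p for p in re.split(r"\n\s*\n", post_text) if p.strip()]
    return [first_sentence(p) for p in paragraphs]

Running collapse() on a 5-paragraph post would give back 5 sentences – crude, but it shows the shape of the thing.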

The question remains, however, of how to go from the expanded idea to its summary (e.g. from a paragraph to a sentence) – but let’s leave that for another post.

tags: analysis

