Put together on December 8, 2008 1:53 am by Dimitris
Apart from the recently launched howsocial.ru, I’ve also been involved in another project that launched last week and was announced at the previous Open Coffee Athens event. It’s greekstartups.com – an index of Greek startups.
The concept is very simple: greekstartups.com aims to collect all the basic information about the startups operating in Greece and become a reference point for Greek entrepreneurs, developers, graphic artists, marketers, business and PR people, but also for anyone from abroad who would like to learn about or become involved in the Greek startup community. Anyone – whether a member or simply a supporter or fan of a startup – can submit the requested details: a link, a logo, some screenshots, a short description, a few tags describing its current stage (alpha, beta, etc.) and field (e.g. cinema, sports), and of course who the startup’s members are. Members can also claim a startup and update its information as it evolves.
That’s all the project offers at the moment: very simple functionality serving the single purpose of indexing – no social networking, project management or collaboration between startups (others already do these things, and much better than us). If interested parties want to take it a step further, they can, using greekstartups.com as a starting point. And who knows? Maybe the website can become a focal point attracting foreign attention – possibly even funders – to what’s been going on in our neck of the woods.
Many thanks to all the people who were also involved in helping greekstartups.com see the light of day – it’s been a pleasure working with you: Nikos Anagnostou, Nektarios Sylligardakis, Dimitris Kalogeropoulos, Stavros Papadakis as well as everyone who’s already submitted information to it. Of course, the real work of filling it in and keeping it updated only starts now… and the project obviously depends on the community to achieve its goal.
For more information you can check the website’s About section – and why not submit your favourite project, if you’re feeling up to it?

tags: project
Put together on November 28, 2008 12:07 pm by Dimitris
The Athens Startup Weekend that I wrote about in my previous posts did not result, for my part, in just what I think is a great implementation of an idea (given the constraints). Prior to that weekend, caught up in everyday life (yes, we have at last officially found a house!), I didn’t have time to prepare pretty much anything for SUW – though obviously that’s the whole idea. An hour or so before the doors of the Microsoft Innovation Centre, where the event was to be hosted, opened, I found myself sitting at a cafe nearby trying to think of something that could combine 3 basic elements: a. be useful, b. be feasible in 2.5 days, c. require little specialised expertise (as you can’t really foretell what skills you will be teaming up with). I also couldn’t help adding that I wanted it to be relatively innovative – otherwise I’d have no drive to make it.
That’s how I came up with 3 ideas – the howsocial.ru one and these two:
TwitterTraffic is a simple project: ask Twitter users (or, if possible, any SMS user) who are stuck in traffic to send a tweet (or an SMS) to a service saying something like ‘the #traffic at Syndagma Sq is bad’. All this information would be collected in a single place where everyone could see what the traffic is like in a city – hopefully mashed up on a Google Maps website. A user could also tweet ‘#traffic at Syndagma Sq’ and receive as a reply what other people had recently reported. Some revenue could come from ads on the main website and location-based ads in the replies users receive. Furthermore, to encourage incoming tweets, points could be awarded for each one and later cashed in for sponsors’ items. The whole concept can also be given a green angle, as it can be assumed to alleviate congestion and pollution while attracting companies willing to improve their brand as sponsors. That was the idea I pitched at the Athens Startup Weekend, and I was pleasantly surprised when a couple of people approached me with their views on it – an approving nod of sorts to look into it more.
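To make the mechanics a bit more concrete, here is a minimal Python sketch of how such a service could tell reports from queries and answer the latter – nothing was actually built, so the message formats and names are my own assumptions:

    import re
    from collections import defaultdict
    from datetime import datetime

    # Assumed message formats (my own, nothing was specified above):
    #   report: "... #traffic at <location> is <status>"
    #   query:  "#traffic at <location>"
    REPORT = re.compile(r"#traffic at (?P<loc>.+?) is (?P<status>\w+)", re.IGNORECASE)
    QUERY = re.compile(r"#traffic at (?P<loc>.+)$", re.IGNORECASE)

    reports = defaultdict(list)  # location -> list of (timestamp, status)

    def handle_tweet(user, text):
        """Store a traffic report, or answer a query with the latest reports."""
        m = REPORT.search(text)
        if m:
            reports[m.group("loc").lower()].append(
                (datetime.utcnow(), m.group("status").lower()))
            return None  # a report could also earn the sender points (not modelled)
        m = QUERY.search(text)
        if m:
            recent = reports.get(m.group("loc").lower(), [])[-5:]
            if not recent:
                return "@%s no recent reports for %s" % (user, m.group("loc"))
            return "@%s latest: %s" % (user, ", ".join(s for _, s in recent))

    handle_tweet("maria", "the #traffic at Syndagma Sq is bad")
    print(handle_tweet("nikos", "#traffic at Syndagma Sq"))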
VirtualPointsExchange – the idea with the rather bad name – came about as a continuation of the previous one. Why should users bother sending contributions to a service? My answer was: to earn ‘service points’. But what use are ‘service points’ if they refer to a single, limited service like TwitterTraffic? Wouldn’t it be great to have a way of exchanging points between online services? You could then spend some ‘points’ or currency you have in, say, an online recycling project to buy TwitterTraffic points. And if you later got bored of TwitterTraffic, you’d just move on to the next thing – by buying points there. The main thing to get right is the exchange rate – which would probably emerge from what the people using both services think each service’s points should cost. A bit complicated, but it seems to fill a need.
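A purely hypothetical sketch of that exchange-rate mechanism: users of both services state what they think one point of each is worth (say, in euro cents), and the rate is the ratio of the crowd’s median valuations. The numbers and helper name are inventions for illustration only:

    from statistics import median

    def exchange_rate(valuations_a, valuations_b):
        """How many B-points one A-point buys, from crowd valuations."""
        return median(valuations_a) / median(valuations_b)

    # e.g. the crowd values a TwitterTraffic point at ~2 cents and a
    # recycling-project point at ~5 cents:
    rate = exchange_rate([2, 2, 3], [5, 4, 6])  # 2/5 = 0.4 B-points per A-point
    print(int(100 * rate))                      # 100 A-points buy 40 B-points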
(Photo by Polifemus)

tags: idea
Put together on November 26, 2008 3:11 pm by Dimitris
Meanwhile, back in the real world, lunch had come and gone (healthier than I expected) and sometime after that, in the midst of looking for information about how to include blogs in our process, tinkering with the crawlers and developing the back end, we decided it was time to pick a name. That way we could register the domain before the next day’s deadline, and work on the logo and website design could start.
After spending perhaps too much time on it (although I’m the first to say how important such a process is), and after discarding some very attractive options because we were unable to find a satisfactory Albanian registrar (.al is excellent for adjectives), we were already into the afternoon and had to make a decision to keep moving: we went for howsocial.ru! And with that in place we were in business – web design could start. At the same time, development of the blog crawler began, and a business plan was set up based on a template, which Alexandros G started to work on.
Towards the end of the day Vicky, who had been pulling the main weight of the web design, walked us through some decisions on it – how many pages we’d have, homepage real estate and content, colouring and fonts, etc. We also registered our brand with the main services out there to secure it, and started giving some more thought to monetisation. The main concept was a 3-tiered service:
1. Free: any user can visit the site, enter their usernames on the platforms we support, and get back an impact factor – no authentication required. That’s simply a percentile representing where they stand among all users in terms of impact – e.g. a very influential person could be in the top 5%.
2. Standard: we produce weekly or monthly reports with impact indices for lists of people we choose, broken down by topic or keyword. This is offered for a premium.
3. Extended: Clients can contact us to create customised reports on lists of people or topics.
Apart from the sporadic and mainly vanity checks of (1), packages (2) and (3) can be an essential tool for marketers, but also for startups and individual professionals. Parties using howsocial.ru can easily identify the primary channels for getting their message across, without wasting time or losing reliability trying to identify the appropriate ‘megaphones’ themselves. A key item in our approach is that the premium packages can be priced according to which section of the impact pyramid one is interested in: the top 5% is more valuable than the top 10% (or is it?).
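To make the free tier concrete: assuming the algorithm has already produced an impact score for every user we have crawled, the percentile shown to a visitor could be derived with something as simple as this Python sketch (the names and sample numbers are mine, not our actual code):

    from bisect import bisect_left

    def impact_percentile(user_score, all_scores):
        """Top-X% band a user falls into, given everyone's impact scores
        (the scores themselves come from the algorithm; this is just the
        percentile step)."""
        ranked = sorted(all_scores)
        below = bisect_left(ranked, user_score)
        return 100.0 * (len(ranked) - below) / len(ranked)

    # A user scoring 92 among these ten users is in the top 20%:
    print(impact_percentile(92, [10, 35, 47, 60, 75, 88, 92, 99, 15, 52]))

A real implementation would, of course, precompute and cache these percentiles rather than sorting on every request.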
With that figured out and the crawlers left to do their thing we stopped for Saturday at about 22.00.
The following morning, having reached a semi-finalised design, we had to start fleshing it out with content. Yes, this means those little (or, as it turned out, not so little) texts that explain to visitors everything your site is about – they can make the difference between returning to it and forgetting all about it: who we are, what we offer in terms of packages and, of course, what exactly our website and algorithm do. That last bit is important to share with users (at least to some extent), both to give an idea of what we mean by the social impact index and to prevent people from using howsocial.ru for every conceivable comparison between people online.
Unfortunately, we had to make do with only sporadic, remote support from one of the developers, who could not make it due to a review he had due on Monday – back in the real, non-SUW world, that is. Moreover, crawling had to start anew to accommodate the changes made since those programs had started running – one of the problems of trying to work fast enough to fit everything into 2 days. With these developments in mind we decided to also drop the blog crawling and focus on Twitter and FriendFeed (which carries some blogging information anyway) to provide at least a proof of concept of the cross-platform combination our service performs. So, during most of the morning, the algorithm we had come up with the previous day was integrated into the code, and after that George K and Vicky worked together to connect the front end with the back end.
I was not feeling very productive overall on the second day; seemingly all I could do was help people with some website bugs, jumping from one minor problem to the next. In the afternoon George T started creating the presentation due later that evening, while people from the GIVE fund went around the teams to hear what everyone was doing.
As we learned later from Andrew Hyde, the Athens Startup Weekend, apart from being the largest in Europe, was one of the few – if not the only one – where the prospect of an investment had been announced. The startup that most impressed the GIVE fund delegates would get the chance of a more serious collaboration with them. Such discussions were therefore of mutual gain: these potential funders offered advice to the startups while learning intimately how they work.
And just as we were filling in the last details of the business plan and George T was rehearsing the presentation, the crowd that would listen to the talks started to file in, and a bit past 8 in the evening the culmination of the weekend arrived. Everyone would present what they had done and the funders would choose. In order of appearance (excluding howsocial.ru), the startups were:
pettycards – a business process that allows micropayments using mobiles and scratch cards. I’m not sure I grasped the idea, but my business sense was tingling from the initial Friday pitch.
Digital Rights Protection – a site to collect patents from various legal systems worldwide and allow searching for infringements. The presentation seemed promising, even if a bit complicated to implement and use. I bet it would be greatly improved by aiming it at lay people too – not just lawyers and patent firms.
rentawife.gr – with which you can outsource household chores to others. Political correctness aside, it tackles a favourite concept of mine – matching a job with someone who wants to do it – and in a free-time-starved culture like the Greek one it definitely has potential.
blognudge – a widget on your blog through which people can request that you write something (suggesting a topic or attaching a donation). A simple idea, so useful that you kind of wonder how it hadn’t been done before. Plus, an excellent weekend project too.
mobcommerce.com – a website that allows e-commerce store owners to get a mobile version of their store automatically. It is based on existing e-store generators like shopify, which already have 20K stores online.
freecycling – a Facebook app to give stuff you don’t want to others instead of throwing it away.
mydoulapa – another FB app that organises your wardrobe or helps you donate clothing items to charity, while also giving fashion houses access to connect with their demographic.
beeshopper.net – a one-page website presenting an array of e-shops, collected and categorised by field, with an easy-to-navigate single-page interface as its main innovation.
uArt – a deviantArt clone which, however, allows artists to earn money from their work.
betcafe – an online gambling house (?)
To be honest, I wasn’t terribly excited by most of the talks – but maybe that’s just me. I thought the concept of rentawife.gr stood out and could offer some real relief to many people – although it’s not terribly innovative. Mobcommerce.com is sound business-wise and I’m sure it has monetisation potential. And pettycards can solve a real pain – especially in Greece – by addressing, to some extent, how backward-looking the telcos are.
The relevance of the latter startup to the Greek reality, as well as its solid foundation in familiar concepts, made it no surprise that it was voted the most promising one – many congratulations to everyone in the team! I sincerely hope they pass the fund’s evaluation and take off to make the mobile situation better in Greece – and why not abroad too!
And with that, and lots of thanking going back and forth between the organisers and the participants, 2.5 days of very successful work and starting up came to an end.

tags: idea, report
Put together on November 24, 2008 7:42 pm by Dimitris
When the Athens Startup Weekend was first announced I wasn’t really sure what to expect. It was a totally foreign idea to me, and I could never have imagined a group of strangers forming a team and managing to go from conception to prototype in 2.5 days. The event took place in a 3-storey building only recently put to use by Microsoft, which proved very handy for the job. I got there at 16.30 and met with George Tziralis, Efthimios Mpothos of askmarkets and George Kasselakis, hoping that we could all use the opportunity of the SUW to make something useful. By 17.15, when the event started, the ground-floor seats were pretty much all taken – a pretty impressive turnout.
After an intro by Patrick Malone, the Microsoft host, Alexandros Pagidas, the organiser, and Andrew Hyde, who conceived SUW, the idea pitches started. Hesitantly at first but quite confidently by the end, more than a dozen ideas were laid out in front of us on a slide. George K took on pitching the nugget of an idea I had suggested to them earlier: ‘a metric of how popular or social or influential one is, measuring the impact they have not just on one platform like Twitter (where such services do exist – albeit very simplistic ones) but across all major social networks: both individually and combined in a single index’. I pitched another one (hopefully to be written up in a future post) and George T yet another.
And once the pitches had ended, the chaotic and difficult-to-follow forming of teams started. After a bit of talking around, we eventually decided to go with the social impact thing. Apart from George K, George T and Efthimios, two more people joined us around the whiteboard where we had started scribbling some first notes about the idea: Vicky Kolovou of Netwire and Alexandros Georgiadis of wiredpot. The team seemed suited to the project and its scope: we had two data mining/analyst people, two developers, a web designer and a technical/project manager, respectively.
Our first task for the day, which was quickly drawing to an end, was basically to think through the idea, identify qualitatively what we would focus on, and draw up some specs or a paper prototype so that the next morning we would be ready to start working. Partly arbitrarily and partly due to some already-known technical characteristics of those services, we initially selected Twitter, FriendFeed, Facebook and blogs as the platforms for which we would calculate users’ combined social impact.
The rest of the evening, up until 9 or so, was spent scouring the web mainly for technical information on what could and could not be done using the APIs of these platforms, as well as what scripts the developer community had written that would be useful to us. Eventually we ended up with a list of a few parameters per platform that would be available for the algorithm that would calculate a user’s impact factor. Having settled on them, we called it a day and agreed to reconvene at 9 the next morning.
The following morning we got down to work right away. We ‘analysts’ started looking for previous efforts at quantifying the impact people have in their online communities, while the developers looked in detail into the APIs, starting with Twitter and FriendFeed, and began coding the crawlers around the variables we had agreed we wanted. We basically needed two elements for the project. Primarily, the social graph of users for each platform we were to include, along with its associated data. Secondarily, an algorithm to process the data on that graph. It was important to get crawling as soon as possible, so that when the algorithm was ready there would be enough data to work on.
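I am not reproducing the actual crawler code (or the Twitter and FriendFeed APIs as they stood then); the general shape, though, was a breadth-first walk of the social graph, roughly like this Python sketch, where fetch_profile is an assumed stand-in for the real API calls:

    from collections import deque

    def crawl(seed_users, fetch_profile, max_users=10000):
        """Breadth-first crawl of one platform's social graph.
        `fetch_profile(user)` stands in for the platform API and is
        assumed to return (followers, following, recent_activity)."""
        graph, seen = {}, set(seed_users)
        queue = deque(seed_users)
        while queue and len(graph) < max_users:
            user = queue.popleft()
            followers, following, activity = fetch_profile(user)
            graph[user] = {"followers": followers,
                           "following": following,
                           "activity": activity}
            for neighbour in followers + following:
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append(neighbour)
        return graph

In practice the platforms’ API restrictions, not the code, set the crawling pace – one reason it was important to start early.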
We were fortunate to confirm both our guesses. Indeed, we had two very competent coders in our team, who before noon had written a first version of a crawler for Twitter and FriendFeed and set them running. Moreover, from what we could find in a cursory search at least, most attempts at measuring the social impact of people online rather oversimplified things. We had to have – and it seemed straightforward to have – a more sophisticated go at the problem.
A major problem was that different online platforms handle people and their interactions differently, so to combine the impact a person has online you need a common method that can be applied to, or adapted for, most platforms out there. The following day I wrote up a detailed description of what we do to calculate someone’s social impact online, but the main concept George Tziralis and I ended up with on the first day can be summarised in the following principles (a toy sketch of how they might be combined follows the list):
1. For technical reasons (mainly computational power, time limitations and API restrictions) we need to process only a few, cheap variables. That excludes text processing (e.g. what a tweet says exactly) and going very far back in time.
2. Taking only a few parameters into account – just the most important ones – helps ensure, to some extent at least, that all platforms will have an equivalent of each. All platforms have the concept of friending in one way or another, for example.
3. It is very important to understand whether a certain user is surrounded, at the 1st degree, mostly by ‘black hole’ connections (sucking in information but not retransmitting it) or by ‘megaphone’ connections (retransmitting a large fraction of what comes their way, thus amplifying the message).
4. A measure of how effective one is at attracting attention is what comes back as an answer once they have said something. And although it’s computationally too expensive, and usually of dubious quality, to do a text analysis of responses or to cover the entire period after a post looking for them, you can still get some information by looking at, say, the last 20 responses (20 being a semi-arbitrary maximum) to an action and over how long a time they are spread out. For instance, when Robert Scoble tweets something he will probably get some replies, and they will be spread over a couple of minutes. When I say something, I may not get any replies, due to my smaller following, and if I do they are likely to be spread over a longer time – again just because my possible responders are fewer.
5. The same way of thinking can be applied to ‘likes’ or the votes of confidence people send to each other (diggs, stars in Twitter, etc).
6. Factoring everything in has to be weighted somehow, of course. For one thing, these parameters are not equally good at capturing impact, and for another, people are not equally active on all the platforms they have a profile on. Nor are all platforms equally influential overall (i.e. regardless of a particular user) – for example, FriendFeed has too few users to have the same overall effect as Facebook.
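Here is the promised toy Python sketch of how these principles might be combined. To be clear: the variable names, thresholds and weights below are illustrative inventions of this write-up, not the formula we actually tuned that day.

    def platform_impact(profile):
        """Toy per-platform score from a few cheap variables (principles 2-5).
        All weights and thresholds are illustrative, not our tuned ones."""
        # Principle 3: what fraction of 1st-degree connections are 'megaphones'
        # (here: connections retransmitting at least half of what they receive)?
        megaphones = sum(1 for c in profile["connections"]
                         if c["retransmit_ratio"] >= 0.5)
        megaphone_ratio = megaphones / max(len(profile["connections"]), 1)

        # Principle 4: replies to the last few posts (at most ~20, as above)
        # and how quickly they arrive - a shorter spread means a more
        # responsive audience.
        reply_rate = (len(profile["recent_replies"])
                      / max(profile["reply_spread_minutes"], 1))

        # Principle 5: votes of confidence (likes, stars, diggs).
        likes = profile["recent_likes"]

        return 0.5 * megaphone_ratio + 0.3 * reply_rate + 0.2 * likes

    def combined_impact(profiles, platform_weights):
        """Principle 6: weight per-platform scores by overall platform reach."""
        return sum(platform_weights[p] * platform_impact(prof)
                   for p, prof in profiles.items())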
It took us the better part of the day to sort out these details of the algorithm: the formula for the impact factor per service, the estimation of the weights, and how to combine everything into a single impact factor in a way that makes sense. Considerable back and forth took place between us analysts and the developers about what exactly was computationally feasible, and somewhere along the road we discovered that Facebook unfortunately had to be excluded, since its API did not allow us access to crucial information.
However, the first steps in developing our (yet unnamed) howsocial.ru had been taken…
(Photos by Andrew Hyde, Vicky Kolovou and Robert Scarth)

tags: idea, project, report
Put together on October 17, 2008 4:29 pm by Dimitris
Following up on my previous post, I think it would be brilliant if there were a meta-site to collate the information in the major real-estate sites in Greece and tap into their collective broad user base and its potential. Let me explain.
To begin with, such a service would include the few basic newspaper classifieds sites and those of the major real-estate agents. As it grows, further sites could be added, although that’s not necessary. The main concept is for the meta-service to scrape the major existing websites on a regular basis and process that information. Processing means a number of things. Firstly, it should be able to re-display all ads in a homogeneous way, regardless of their originating website; that way the visitor would do a single ‘master’ search and get back everything there is to be found from all major sources. Secondly, duplicates could be removed from this unified display – but more on that later. And thirdly, and most importantly, useful derivative information can be made available using data mining techniques.
But before I go into the data mining, let me expand a bit on the scraping. Now, by definition, classifieds are very concise pieces of text and as such can be decomposed into some basic components: neighbourhood, price, square metres of land, type or storey (if we’re talking about an apartment), extra features (parking availability, number of rooms, type of heating, etc.) and even the street (if stated). All this is possible because the vocabulary is particularly limited (perhaps 100-200 different words) or definable (by looking it up in a street directory). It’s important to note that despite the varying structure and display across the various sites, the vocabulary remains the same, and although some customisation will be needed to process each site, the essence of the information will be handled identically. (So if a classified has identical fields but comes from different sites, it can be flagged as a duplicate.) Moreover, thanks to this peculiarity of the vocabulary, one can also experiment with natural-language searches. A proof of concept that this approach works can probably be seen in Rento, which seems able to handle search queries in natural language. (Disclaimer: I do not know how well Rento does it – just that it does it.)
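A rough Python sketch of what the extraction step could look like for one classified – real ads would of course be in Greek, and the patterns, vocabulary and sample ad below are simplified placeholders of my own:

    import re

    # Simplified placeholder patterns; a real system would use Greek
    # vocabulary and a street directory for locations.
    PRICE = re.compile(r"(\d[\d.,]*)\s*(?:eur|euro)", re.IGNORECASE)
    SQM = re.compile(r"(\d+)\s*(?:sq\.?\s?m|m2)", re.IGNORECASE)
    FEATURES = ("parking", "garden", "fireplace", "autonomous heating")

    def parse_ad(text, known_neighbourhoods):
        ad = {"raw": text}
        m = PRICE.search(text)
        if m:
            ad["price"] = int(re.sub(r"[.,]", "", m.group(1)))
        m = SQM.search(text)
        if m:
            ad["sqm"] = int(m.group(1))
        lowered = text.lower()
        ad["neighbourhood"] = next((n for n in known_neighbourhoods
                                    if n.lower() in lowered), None)
        ad["features"] = [f for f in FEATURES if f in lowered]
        return ad

    print(parse_ad("Kolonaki apartment, 85 sq.m, parking, 250.000 euro",
                   ["Kolonaki", "Pagrati"]))

An ad parsed this way can also be fingerprinted – by hashing the extracted fields, say – which is what makes the cross-site duplicate flagging mentioned above cheap to do.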
Furthermore, many of these individual websites rely to some extent on ads and marketing campaigns to supplement their revenues, so depriving them of visitors (who would view the ads on the meta-service instead) would be a major faux pas. That would eventually lead to the scraping script being banned – if not to legal measures. So it is important to ensure that once a visitor has found a classified they like, they are transferred back to the original website and that particular piece of text – to view the complete information about it, namely the contact details (or just the form to enter the code they will retrieve via SMS). This visitor transfer could also form the basis of a revenue-sharing deal between the meta-service and the source sites. It is similar to the deal between YouTube and the copyright holders of videos on their site: instead of flatly taking down videos with ‘their’ content, YouTube gives them the option to share the revenue.
Back to the data mining features: there is a lot of value that can be derived from these data. To begin with, simply the number of items for sale or rent is a useful figure – especially per location. Another type of result comes from combining location (which will vary in detail from neighbourhood to exact street, depending on what’s available) with price: for example, the most expensive and cheapest locations and their average prices can be determined. Conversely, alerts can be issued when outliers from the average (i.e. possible bargains) surface. Furthermore, accurate price ranges can be calculated per square metre for each location. In addition, the meta-service can accumulate data over time (it should remember when an ad first appeared) and compare how prices and other features evolve across different (or neighbouring) locations.
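Building on the parse_ad sketch above, the per-location statistics and bargain alerts could start as simply as this – the 2-sigma threshold is an illustrative choice of mine, not a researched one:

    from collections import defaultdict
    from statistics import mean, stdev

    def price_stats(ads):
        """Per-location average price per square metre, plus alerts for
        ads well below their location's average (possible bargains)."""
        per_loc = defaultdict(list)
        for ad in ads:
            if ad.get("price") and ad.get("sqm") and ad.get("neighbourhood"):
                per_loc[ad["neighbourhood"]].append(ad["price"] / ad["sqm"])

        stats, bargains = {}, []
        for loc, prices in per_loc.items():
            avg = mean(prices)
            spread = stdev(prices) if len(prices) > 1 else 0.0
            stats[loc] = {"avg_per_sqm": avg, "listings": len(prices)}
            bargains += [(loc, p) for p in prices
                         if spread and p < avg - 2 * spread]
        return stats, bargains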
In this way the meta-service can provide real-time statistics and become an as-accurate-as-possible reference platform for the whole industry, in a relatively easy way. This processed information can form the basis of a premium package used to monetise the website. Such packages could be offered alongside free functionality that would, for example, have limited features or not go as far back in time as the premium one. An ad campaign could also run on the website, as well as an API offering access to the information – possibly for a charge if used commercially.
Now, scope-wise, an online real-estate business may be one of the few examples where going local instead of global is probably the better idea. Interest in a piece of land or property, whether for living in it or housing a business, usually comes from within the country – not from an individual or company with little knowledge of the culture and the geography. For example, large US sites like trulia and zillow – which, admittedly, have considerable home markets in their laps already – operate by focusing on the internal market. So one could say the scope of such a meta-service should be mainly Greece.
However, there are obviously exceptions to the rule, and the first that comes to mind is tourism – a field where Greece is strong. The Greek islands, but also the mainland, have long been an attraction for foreigners (especially Western Europeans), and a small fraction of them have settled in Greece. Indeed, a growing trend lately has been to transform rural Greece along the model of other Mediterranean countries (e.g. Spain and Italy): covering large areas with hotel complexes and villas that are either rented out for the summer or the whole year, or bought by foreigners (Russians, Germans, French, etc.). In the long term, therefore, a niche global – or at least European – market may also develop that this meta-service could tap into.
What do you think overall – is this a good idea for a start-up?
(photo by oldyankee)

tags: idea