Data marketplace… is it too big a thing to be tackled in a whole?

This is my second post on data marketplaces… unfortunately triggered by the bad news of Talis winding Kasabi down. There are a number of good posts discussing this and what it means for the Semantic Web and Linked Data efforts. I’d like to share my ideas here, focusing on the data marketplace side of the story.

In his blog post, Tim Hodson wrote:

So we were too early. We had a vision for easy data flow into and out of organisations, where everyone can find what they need in the form that they need it through the use of linked data and APIs, and where those data streams could be monetized and data layers could add value to your datasets

The previous quote aptly captures the essential aspects of data marketplaces. In its richest form, a data marketplace enables buying/selling access to quality data provided by different publishers.

Tim went on to say:

Other organisations besides Talis, sharing similar visions, have all had to change the way they present themselves as they realise that the market is simply not ready for something so new.

So I looked at a number of existing data marketplaces to see how they present themselves. It is hard to pin down what exactly counts as a data marketplace, but I am including the following mainly based on Paul Miller’s podcasts:

  • sells lists crawled from the Web as downloadable files.
  • Datafiniti: sells data crawled from the Web through a SQL-like interface.
  • Microsoft Azure Data Marketplace: sells data from a number of publishers via API access based on OData.
  • Infochimps: sells data from a number of publishers via a mix of downloads and API access.
  • sells only numeric data provided by a number of publishers. It focuses mainly on visualization but also provides API access.
  • Factual: collects data (mainly related to locations) and sells API access to the data.
  • Kasabi: sells API access to data from different publishers.

From the list above, Azure, Infochimps and Kasabi fit the more specific definition of a data marketplace, i.e. they provide API access to data provided by different publishers. These functionalities have their implications:

  1. Supporting different publishers calls for a managed hosted service (a place for any publisher to put its data).
  2. API access calls for cleansing and modeling any included data.

Selling simple access to collected data (e.g. downloadable crawled lists) doesn’t involve either of the two challenges above (or involves only a simpler version of them). Providing data hosting services (i.e. database-as-a-service) doesn’t necessarily involve data cleansing and modeling, as these only affect the owner of the data, who is mostly its only user. Both domains, collect-and-sell-data and database-as-a-service, seem to be doing fine and enjoying a good market. On the other hand, if we look at data marketplaces, it is clear that they don’t present themselves as pure data marketplaces (not anymore at least):

==> sells the platform as well, specialises in numbers and focuses on visualization.

Infochimps ==> calls itself “Big Data Platform for the Cloud”

Azure Data Marketplace ==> is still a pure marketplace but as part of the Microsoft Azure Cloud Platform.

All this makes me wonder: is a data marketplace too big a thing to be tackled now? Is the market not ready? Are the technology and tools not ready? Are marketplaces not selling themselves well? Should we give up on the idea of having a marketplace for data?

I am just having a hard time trying to understand…

P.S. All the best for the great Kasabi team… I learned a lot from you!

Kasabi directory matrix

Kasabi is a recent player in the data marketplace space. What distinguishes Kasabi from other marketplaces (and makes it closer to my heart) is that it is based on Linked Data. All the datasets in Kasabi are represented in RDF and provide Linked Data capabilities (with an additional set of standard and customised APIs for each dataset… more details).

A recent dataset on Kasabi is the directory of the datasets on Kasabi itself. Having worked on related things before, especially dcat, I decided to spend this weekend playing with this dataset (not the best plan for a weekend, you think, huh?!).

To make a long story short, I built a (currently-not-very-helpful) visualization of the distribution of classes across the datasets, which you can see here.

In detail:
I queried the SPARQL endpoint for a list of datasets and the classes used in each of them, along with their counts (the Python code I used is on github; however, you need to provide your own Kasabi key and subscribe to the API).
Using Protovis, I visualized the data in a matrix. Datasets are sorted alphabetically, while classes are sorted in descending order of the number of datasets they are used in. Clicking on a cell currently shows the count for the corresponding dataset/class pair.
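The query-and-pivot step could be sketched roughly as below. Note that the endpoint URL, the apikey parameter, and the graph-per-dataset layout in the query are my assumptions for illustration, not Kasabi’s actual API; the real code is in the github repo mentioned above.

```python
import json
from urllib import parse, request

# Hypothetical endpoint -- the URL and the apikey parameter are
# illustrative assumptions, not Kasabi's documented API.
ENDPOINT = "https://api.kasabi.com/dataset/kasabi/apis/sparql"

# Count instances per class per dataset; assumes one named graph
# per dataset, which is also an assumption about the store layout.
QUERY = """
SELECT ?dataset ?class (COUNT(?instance) AS ?count)
WHERE { GRAPH ?dataset { ?instance a ?class } }
GROUP BY ?dataset ?class
"""

def fetch_bindings(endpoint, query, apikey):
    """POST a SPARQL query and return the JSON result bindings."""
    body = parse.urlencode({"query": query, "apikey": apikey}).encode()
    req = request.Request(endpoint, data=body,
                          headers={"Accept": "application/sparql-results+json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]

def bindings_to_matrix(bindings):
    """Pivot result bindings into a nested dict: {dataset: {class: count}}."""
    matrix = {}
    for b in bindings:
        ds, cls = b["dataset"]["value"], b["class"]["value"]
        matrix.setdefault(ds, {})[cls] = int(b["count"]["value"])
    return matrix
```

The nested-dict shape maps directly onto the matrix visualization: one row per dataset, one column per class.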

Note: I filtered out common classes like rdfs:Class, owl:DatatypeProperty, etc., and I also didn’t include classes that appear in only one dataset.
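That filtering step could look something like this; a minimal sketch, assuming the counts have been pivoted into a nested dict of the form {dataset: {class: count}} (the excluded class set passed by the caller is illustrative):

```python
from collections import Counter

def filter_classes(matrix, exclude=frozenset(), min_datasets=2):
    """Drop excluded classes, and classes used in fewer than
    min_datasets datasets; matrix is {dataset: {class: count}}."""
    # How many datasets does each class appear in?
    freq = Counter(c for classes in matrix.values() for c in classes)
    return {ds: {c: n for c, n in classes.items()
                 if c not in exclude and freq[c] >= min_datasets}
            for ds, classes in matrix.items()}
```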

Quick observations:
Not surprisingly, skos:Concept and foaf:Person are the most used classes. In general, the matrix is sparse, as most of the datasets are “focused”. The Hampshire dataset, which contains various information about Hampshire, uses a large number of classes.

This is still of limited value, but I have my ambitious plan below 🙂
1. set the colour hue of each cell according to the corresponding count, i.e. the number of entities of the class in the dataset
2. group (and maybe colour) datasets based on their category
3. replace class URIs with CURIEs (using
4. when clicking on a cell, show the class structure in the corresponding dataset, i.e. which properties are used to describe instances of the class in that dataset (the problem here is that I need to subscribe to each dataset to query it). This can be a good example of the smooth transition in RDF from schema to instance data
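Plan item 1 could be prototyped on the Python side before wiring it into Protovis. This sketch maps an entity count onto a greyscale hex colour on a log scale, so a few very large datasets don’t wash out the rest; both the greyscale encoding and the log scale are my choices here, not anything Protovis mandates:

```python
import math

def count_to_colour(count, max_count):
    """Map an entity count to a greyscale hex colour:
    0 -> white, max_count -> black, on a log scale."""
    t = math.log1p(count) / math.log1p(max_count) if max_count else 0.0
    shade = round(255 * (1 - t))  # 255 = white, 0 = black
    return "#{0:02x}{0:02x}{0:02x}".format(shade)
```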